
September 11, 2022

How to set up your local database management with Docker

Written By

Alex Harvey


How to

Read time

5 minutes

How to: set up your local database management with Docker
Whilst working with a previous client, we noticed a pain point in their setup process for new developers. Amongst a long list of other things new developers needed to do, one task that seemed excessive was the database setup. The project used a Postgres database, and setting it up involved a long list of steps, including:
  • Postgres installation
  • Logging into Postgres
  • Creating Postgres users
  • Altering user permissions
  • Creating the Postgres databases
  • Pulling data from a remote environment
  • Populating the local database with the pulled data
We noticed this bottleneck immediately and within the first few days had implemented a cleaner and faster approach for setting up developers' local databases using Docker. In this blog post we’ll be outlining how you can do this to save yourself time and effort that’s better spent coding!
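For context, the manual approach boiled down to running SQL like the following by hand (the user and database names here are illustrative, not the client's actual ones):

```shell
# Write the setup SQL a new developer would previously have had to
# run manually against a locally installed Postgres server.
cat > setup.sql <<'EOF'
-- Create an application user, set its permissions, and create the database
CREATE USER app_user WITH PASSWORD 'password';
CREATE DATABASE app_db OWNER app_user;
GRANT ALL PRIVILEGES ON DATABASE app_db TO app_user;
EOF

# This would then be run against the local server, e.g.:
#   psql -U postgres -f setup.sql
cat setup.sql
```

And that's before pulling a dump from a remote environment and loading it in. Every one of these steps is a chance for a new starter's environment to drift from everyone else's.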

What is Docker?

Docker is a software platform that is built for developing, shipping and running applications. What Docker does beautifully is packaging software into isolated environments called containers, which include everything needed for that software to run.
Imagine this scenario. You write some code locally on your machine, get it all working and send it off to another person for testing. They try running the code on their machine but it doesn’t work. What’s the problem? It was working fine on your machine. Maybe they’re running a different OS, or their software isn’t up to date. This is a classic problem in software development - getting code to run in the same way in different environments.

What about virtual machines?

Sure, we could use a virtual machine to solve the above problem if we ensured that the code is developed and tested on the same virtual machine image, but this would be a cumbersome process. Not only do virtual machines have to quarantine off a portion of their host's resources (memory, storage space, processing power), but they also need to boot an entire operating system. Another problem is that our host machine can't communicate directly with the virtual machine; instead it communicates via a hypervisor (the application that allows you to run the virtual machine).
Docker’s approach is much simpler. Docker containers are fast, lightweight and portable, and they run directly on the host machine, meaning they can communicate with our system and can access and update any required data.

How does Docker work?

To understand how Docker works, we need to explain the difference between Docker images and Docker containers:
  • Docker image: a file that represents a package of software containing all the dependencies needed to run it correctly
  • Docker container: an isolated, executable package of software containing all the dependencies needed to run it correctly
There is a subtle difference here but essentially you can think of a Docker image as a template that tells us how to build a Docker container, and a container as an instance of that template that is actually run on our machine when we need to use the piece of software it packages.
To get started using Docker, you can download official Docker images from Docker Hub. You can also create custom Docker images using Dockerfiles, for which you can find more info here.
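As a taste of the latter, a custom image can be described in a Dockerfile as short as this (a sketch for local use only; baking a password into an image is not something you'd do for production):

```dockerfile
# Start from the official Postgres image and bake in a default
# password so local containers need no extra configuration.
FROM postgres
ENV POSTGRES_PASSWORD=password
```

Building this with docker build produces a new image, and each docker run of that image produces a fresh container: the template/instance relationship described above.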

Creating a Postgres database with Docker

Let’s take a look at how we can create a Postgres database using Docker. To get started, download Docker here and install it on your machine.
Now let’s open up the command line and create a new directory for our database:
mkdir example-db && cd example-db
Ensure Docker is running on your computer by opening the app you installed earlier. If you’re on Mac, it should appear as an icon in your status bar. After a few seconds it should tell you that it is running.
We can now create a Postgres database using a single command:
docker run --name example-db -e POSTGRES_PASSWORD=password -p 5432:5432 -d postgres
There’s quite a lot going on in this command, so let’s break it down:
  • docker run: This is the command used to run a container.
  • --name example-db: The --name flag lets us give our container a name, which in this case is example-db.
  • -e POSTGRES_PASSWORD=password: The -e flag lets us set environment variables for the container; in this case we set the Postgres password, as it is the only environment variable the Postgres image requires.
  • -p 5432:5432: The -p flag publishes a container port to a host system's port, so that we can access the running application on our host system. Note that the first port is the host port and the second is the container port.
  • -d: The -d flag means detach, which runs the container in the background, freeing up that terminal instance and allowing us to run other commands.
  • postgres: This is the Docker image that we are creating the container from; in this case we are using the official Postgres Docker image.
For more details on the usage of docker run, check out the Docker documentation.
Execute the above command and your database will be created and running! You can test the connection in your preferred database management tool; here we’re using TablePlus. Note that the default user is postgres.
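If you find yourself recreating databases often, the flags can be parameterised in a small wrapper script (a sketch; the variable names are our own, and the defaults match the command above):

```shell
# Build the docker run command from a few variables so the name,
# password and host port can be tweaked without memorising the flags.
NAME="${1:-example-db}"
PASSWORD="${2:-password}"
PORT="${3:-5432}"

CMD="docker run --name $NAME -e POSTGRES_PASSWORD=$PASSWORD -p $PORT:5432 -d postgres"
echo "$CMD"
# Uncomment the next line to actually create the container:
# $CMD
```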

Using a Configuration File

The previous command is an easy, clean solution if we want to get a database running quickly. But what if we don’t want to run such a verbose command every time we create a database? And what if we want to run multiple containers, add more configuration options and share these across a development team? We can do all of these things by creating a docker-compose.yml file, which we use to configure our application's services:
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: alex
      POSTGRES_PASSWORD: password
      POSTGRES_DB: example
    ports:
      - "5432:5432"
    volumes:
      - ./postgres:/var/lib/postgresql/data
What’s going on here? services is the only required attribute, which tells Docker which services we would like to run. Under this we have db, which is the name we have given our only service. This consists of the Docker image, which in this case is postgres, the environment variables we wish to pass to it, the port we want to publish from the container to the host, and finally the volume we want to bind mount from the host to the container.
Once we’ve created the above file, we can create and run all services via a single command:
docker-compose up
To stop the containers running, we simply run:
docker-compose down
The official Postgres Docker image will run any .sql scripts found in the /docker-entrypoint-initdb.d/ folder when the container first starts with an empty data directory, so if we have a local SQL dump that we want to use to populate the database, we can even add that into the docker-compose.yml file:
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: alex
      POSTGRES_PASSWORD: password
      POSTGRES_DB: example
    ports:
      - "5432:5432"
    volumes:
      - ./postgres/db.sql:/docker-entrypoint-initdb.d/db.sql
This gives new developers coming into your project a super quick way to get their local database set up and aligned with the databases other devs on your project are using. All they need to do is copy your staging/dev database and place it into the correct directory.
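If you don't have a dump to hand yet, a minimal db.sql is enough to see the mechanism working (the table and data here are purely illustrative):

```shell
# Create the directory and seed file that the compose file bind mounts
# into /docker-entrypoint-initdb.d/ inside the container.
mkdir -p postgres
cat > postgres/db.sql <<'EOF'
-- Runs automatically the first time the container starts
CREATE TABLE users (
    id   SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);
INSERT INTO users (name) VALUES ('alex');
EOF
cat postgres/db.sql
```

After a docker-compose up, the example database will contain the users table with its seed row.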
And that’s it! A super simple and configurable way to set up a Postgres database for your project - one that starts quickly, runs on any OS and can be shared by developers across the team!


