Don’t Make This Config Mistake in Docker!

How I broke (and fixed) my Postgres Docker container

Justin Masayda
3 min read · Mar 13, 2023

--

For a recent personal project, I was using PostgreSQL as my data persistence solution. I’m still learning Postgres, so while setting it up I poked around the configuration file, postgresql.conf, to get a better idea of what Postgres is capable of.

Well, curiosity killed the cat, as they say — in this case, curiosity killed the database container.

To understand what went wrong, let me explain my setup. I was using the default Postgres Docker image with a Docker volume bound to the data directory.

Setup

For the uninitiated, a Postgres server is simply a collection of a few processes which work together to manage a single database cluster. Each Postgres server instance manages a data directory (or “data area”), which contains its configuration and is where it actually stores data on the filesystem. I say “server instance” because many Postgres servers may run simultaneously on a single host, provided they have unique data directories and bind to unique ports.
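As a sketch of that, assuming the Postgres binaries (initdb, pg_ctl) are on your PATH; the directory names and ports here are arbitrary:

```shell
# Two independent clusters on one host: each gets its own
# data directory and its own port.
initdb -D /tmp/cluster_a
initdb -D /tmp/cluster_b

pg_ctl -D /tmp/cluster_a -o "-p 5432" -l /tmp/a.log start
pg_ctl -D /tmp/cluster_b -o "-p 5433" -l /tmp/b.log start
```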

Anyway, because Docker containers are ephemeral by design, any data saved inside them disappears when the container is removed. Docker handles persistent storage through Docker volumes, which are basically directories that exist outside of any container but can be mounted into them.
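For example, a setup like mine typically looks something like this (the volume name pgdata, container name pg, and password are my placeholders; the mount path is the official image’s default data directory):

```shell
# Create a named volume and mount it where the Postgres image
# keeps its data directory (PGDATA).
docker volume create pgdata
docker run -d --name pg \
  -e POSTGRES_PASSWORD=example \
  -v pgdata:/var/lib/postgresql/data \
  postgres

# The data outlives the container:
docker rm -f pg        # destroy the container...
docker volume ls       # ...the pgdata volume is still there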

The Problem

So, while poking around the config file, I noticed the line max_connections = 100, which sets the number of Postgres clients allowed to connect to the server simultaneously. I wanted to see what happens when you reach the limit, so I changed it to 2 connections and restarted the server. That made the configuration invalid, because unless configured otherwise, Postgres reserves three connections for superusers, and max_connections must be greater than that reserve. Consequently, the server terminated immediately whenever I attempted to start it. And because the server is the main process of the container, when that process ends, the container stops.
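The relevant settings in postgresql.conf look roughly like this (the reserve defaults to 3, and the server refuses to start unless max_connections exceeds it; comments mine):

```
max_connections = 2                  # my change: too low to start
#superuser_reserved_connections = 3  # default: 3 slots kept for superusers
```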

Okay, so I just had to edit the file again and set the limit higher. The problem was, I couldn’t access the config file. With the container unable to stay running, I couldn’t open a terminal in it to edit the configuration like I had before. But of course, the config still existed. It was in the data directory, which was actually a Docker volume, so I just had to check where the volume was located, right?
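For reference, when the container is healthy, that edit is a one-liner from the host (container name pg is my placeholder; sed is present in the Debian-based image):

```shell
# Open an interactive shell in the running container...
docker exec -it pg bash

# ...or edit in place without a shell:
docker exec pg sed -i \
  's/^max_connections = .*/max_connections = 100/' \
  /var/lib/postgresql/data/postgresql.conf
```

Neither works once the container exits immediately on start.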

Except I couldn’t find the volume anywhere on my file system. Docker said it was in /var/lib/docker/volumes/, but that directory did not exist. So, where was it?
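(“Docker said” meaning docker volume inspect, which prints a Mountpoint like this; the volume name is my placeholder:)

```shell
docker volume inspect pgdata
# [
#     {
#         ...
#         "Mountpoint": "/var/lib/docker/volumes/pgdata/_data",
#         ...
#     }
# ]
```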

Solution

What I didn’t know was that Docker doesn’t run natively on Mac and Windows; it runs inside a lightweight Linux virtual machine, and it is within this VM that volumes are located. To get a shell in it, I had to run:

docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh

This starts a throwaway Debian container that shares the VM’s PID namespace, then uses nsenter to enter the namespaces of PID 1, giving me a shell on the VM itself, which actually had the volumes in it. From there I could edit the file, reset it to a valid state, and restart the Postgres container.
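The fix itself is one line. A minimal sketch, run here against a stand-in copy of the file, since the real path (something like /var/lib/docker/volumes/&lt;name&gt;/_data/postgresql.conf inside the VM) depends on your volume name:

```shell
conf=postgresql.conf                      # stand-in for the file in the volume
printf 'max_connections = 2\n' > "$conf"  # simulate the broken setting

# Raise the limit back to the default (portable sed, no -i):
sed 's/^max_connections = .*/max_connections = 100/' "$conf" > "$conf.tmp" \
  && mv "$conf.tmp" "$conf"

grep max_connections "$conf"   # → max_connections = 100
```

After that, a plain docker start on the stopped container brings the server back up.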

Lesson

Well, I did find out what happens when you reach the max_connections limit. The server rather underwhelmingly replies, FATAL: sorry, too many clients already.
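A quick way to reproduce it, assuming a running local server with the default limit of 100 and a postgres superuser (exactly which connection fails first depends on the superuser reserve and your setup):

```shell
# Hold connections open until the limit is hit; each backgrounded
# psql keeps one backend occupied for a minute.
for i in $(seq 1 101); do
  psql -h localhost -U postgres -c 'SELECT pg_sleep(60);' &
done
# Somewhere around the limit, new connections fail with:
#   FATAL:  sorry, too many clients already
wait
```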

Bad config files are easy to create and will stop a process from being able to start, so it’s important to be able to edit and test them easily. I don’t think accessing volumes (on macOS or Windows) without a running container is well documented, so I hope this helps someone facing a similar problem.


Justin Masayda

Software engineer | Machine learning specialist | Learning audio programming | Jazz pianist | Electronic music producer