
I have a main service in my docker-compose file that uses the postgres image. Although I seem to connect to the database successfully, the data I write to it is not kept beyond the lifetime of the container (what I did is based on this tutorial).

Here's my docker-compose file:

main:
  build: .
  volumes:
    - .:/code
  links:
    - postgresdb
  command: python manage.py insert_into_database
  environment:
    - DEBUG=true


postgresdb:
  build: utils/sql/
  volumes_from:
    - postgresdbdata
  ports:
    - "5432"
  environment:
    - DEBUG=true


postgresdbdata:
  build: utils/sql/
  volumes:
    - /var/lib/postgresql
  command: true
  environment:
    - DEBUG=true

and here's the Dockerfile I'm using for the postgresdb and postgresdbdata services (which essentially creates the database and adds a user):

FROM postgres

ADD make-db.sh /docker-entrypoint-initdb.d/
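For context, a script dropped into /docker-entrypoint-initdb.d/ might look something like the sketch below. This is an assumption about what make-db.sh contains (the user and database names are placeholders, not taken from the question); the official postgres image runs any *.sh or *.sql files in that directory, but only the first time the data directory is initialized:

```shell
#!/bin/bash
set -e

# Hypothetical make-db.sh sketch: create an application user and database.
# The names "appuser", "apppassword", and "appdb" are placeholders.
psql --username postgres <<-EOSQL
    CREATE USER appuser WITH PASSWORD 'apppassword';
    CREATE DATABASE appdb OWNER appuser;
EOSQL
```

Note that because these init scripts run only against an empty data directory, they will not re-run when the container starts with an already-populated volume, which is one more reason the volume needs to persist.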

How can I get the data to persist after the main service has finished running, so that I can use it later (for example, when I call something like python manage.py retrieve_from_database)? Is /var/lib/postgresql even the right directory, and would boot2docker have access to it, given that it's apparently limited to /Users/?

Thank you!

  • Is Auto-Commit set to true, or are you committing your changes manually? Commented Apr 24, 2015 at 16:03
  • auto-commit on sqlalchemy (that's what I'm using)? I believe I'm committing the changes "manually", that is, by running python manage.py insert_into_database within the main service and letting that commit to Postgres (which worked before I started using Docker). Is this what you mean? Commented Apr 24, 2015 at 16:07
  • Yes. Knowing Postgres, a failure to commit seemed the likeliest explanation. I am out of my depth otherwise and shall withdraw. Commented Apr 24, 2015 at 16:11
  • Thanks for your answer, @Politank-Z! I think it's a problem with the way I set up Docker though, because I had no problems with Postgres before I tried it... Commented Apr 24, 2015 at 16:14

1 Answer


The problem is that Compose creates a new version of the postgresdbdata container each time it restarts, so the old container and its data are lost.

A secondary issue is that your data container shouldn't actually be running; data containers are really just a namespace for a volume that can be imported with --volumes-from, which still works with stopped containers.

For the time being the best solution is to take the postgresdbdata container out of the Compose config. Do something like:

$ docker run --name postgresdbdata postgresdb echo "Postgres data container"
Postgres data container

The echo command will run and the container will exit, but as long as you don't docker rm it, you will still be able to refer to it with --volumes-from, and your Compose application should work fine.
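With the data container created once by hand as above, the Compose file then keeps only the two running services. A minimal sketch, assuming the same service names as in the question (note: some Compose versions require a container: prefix in volumes_from to reference a container that is not defined as a service in the file):

```yaml
main:
  build: .
  volumes:
    - .:/code
  links:
    - postgresdb
  command: python manage.py insert_into_database

postgresdb:
  build: utils/sql/
  volumes_from:
    - postgresdbdata   # the stopped data container created with `docker run`
  ports:
    - "5432"
```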


3 Comments

I see... So what can I do, Adrian?
@miguel5 You caught me in between drafts - I've updated my answer.
it's not compose's fault: "When docker-compose up runs, if it finds any containers from previous runs, it copies the volumes from the old container to the new container. This process ensures that any data you’ve created in volumes isn’t lost." (docs.docker.com/compose/overview/#/…)
