4

I'm developing a server and its client simultaneously, and I'm designing them to run in Docker containers. I'm using Docker Compose to link them up, and it works just fine for production, but I can't figure out how to make it work with a development workflow in which I've got a shell running for each one.

My docker-compose-devel.yml:

server:
  image: node:0.10

client:
  image: node:0.10
  links:
    - server

I can do docker-compose up client or even docker-compose run client, but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively.

I want to be able to do docker-compose run server bash in one window and docker-compose run --no-deps client bash in another window. The problem with this is that no address for the server is added to /etc/hosts on the client because I'm using docker-compose run instead of up.

The only solution I can figure out is to use docker run and give up on Docker Compose for development. Is there a better way?

3 Answers

2

Here's a solution I came up with that's hackish; please let me know if you can do better.

docker-compose-devel.yml:

server:
  image: node:0.10
  command: sleep infinity

client:
  image: node:0.10
  links:
    - server

In window 1:

docker-compose --file docker-compose-devel.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-devel.yml ps -q server) bash

In window 2:

docker-compose --file docker-compose-devel.yml run client bash
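
A quick way to confirm the wiring (my own addition, not part of the original answer): from the client shell in window 2, the server should be reachable under the alias server from the links entry, which you can check without leaving the shell.

# run inside the client shell from window 2
cat /etc/hosts          # should contain an entry for "server"
getent hosts server     # resolves the alias to the server container's address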

1

I guess your main problem is restarting the application when the code changes.

Personally, I launch my applications in development containers using forever.

forever -w -o log/out.log -e log/err.log app.js

The -w option restarts the server when the code changes.

I use a .foreverignore file to exclude changes to some files:

**/.tmp/**
**/views/**
**/assets/**
**/log/**

If needed, I can also launch a shell in a running container:

docker exec -it my-container-name bash

This way, your two applications restart independently without you having to relaunch them yourself, and you still have the option of opening a shell to do whatever you want.
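
For completeness, here is a minimal sketch of how this could be wired into the development compose file. This is my own assumption rather than part of the answer: the ./src path, the app.js entry point and the image that provides forever (it is not in node:0.10 by default) are all illustrative.

server:
  build: .   # hypothetical Dockerfile based on node:0.10 that installs forever (e.g. npm install -g forever)
  working_dir: /src
  volumes:
    - ./src:/src
  command: forever -w -o log/out.log -e log/err.log app.js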


Edit: a new proposal, given that you need two interactive shells and not simply a way to relaunch the apps on code changes.

Since you have two distinct applications, you could keep a separate docker-compose configuration for each one.

The docker-compose.yml of the "server" app could contain this kind of information (I added different kinds of configuration for the example):

server:
  image: node:0.10
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./src:/src
db:
  image: postgres
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: dev

The docker-compose.yml of the "client" app could use external_links to connect to the server.

client:
  image: node:0.10
  external_links:
    - project_server_1:server  # Use "docker ps" to know the name of the server's container
  ports:
    - "80:80"
  volumes:
    - ./src:/src

Then, use docker-compose run --service-ports service-name bash to launch each configuration with an interactive shell.
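
For example, assuming the two files above live in each app's own directory and keep the service names shown (this is only an illustration of the commands, not part of the original answer):

# in the server app's directory (also starts the linked db)
docker-compose run --service-ports server bash

# in the client app's directory, after checking with "docker ps" that the
# running server container's name matches the external_links entry
docker-compose run --service-ports client bash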

Alternatively, the extra_hosts key may also do the trick by calling the server app through a port exposed on the host machine.
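
A sketch of that alternative (the IP address is a placeholder for your Docker host, and the server app would have to publish its port on that host):

client:
  image: node:0.10
  extra_hosts:
    - "server:192.168.99.100"  # hypothetical Docker host IP; substitute your own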

With this solution, each docker-compose.yml file could be committed in the repository of the related app.

5 Comments

Thank you. I like the suggestion of using Forever. I don't like the idea of running a shell independently because I want to watch stdout of my server.
Using docker-compose up (without the -d option), you will be able to watch stdout. I mentioned docker exec, but it is optional and only needed if you want to execute some command in the container for whatever reason.
Yes, I mentioned that in my question, but I want an interactive shell for both server and client.
Could you consider using a docker-compose.yml file for the server and another one for the client?
I don't understand how that would help. Would you like to update your answer with an explanation?
0

First thing to mention: for a development environment you want to use volumes in docker-compose to mount your app into the container when it starts (at runtime). Apologies if you're already doing this, but it isn't clear from your docker-compose.yml.
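
For reference, a minimal sketch of what that could look like in your dev compose file (the ./server and ./client host paths and the /src mount point are assumptions; adjust them to your layout):

server:
  image: node:0.10
  volumes:
    - ./server:/src

client:
  image: node:0.10
  links:
    - server
  volumes:
    - ./client:/src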

To answer your specific question: start your containers normally, then run docker-compose ps and you'll see the names of your containers, for example 'web_server' and 'web_client' (where 'web' is the directory of your docker-compose.yml file, or the name of the project).

Once you have the name of the container you want to connect to, you can run this command to get a bash shell inside the container that's running your server:

docker exec -it web_server bash
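
If you'd rather not hard-code the container name, the same lookup used in the first answer works here too (just an alternative way to run the same command):

docker exec -it $(docker-compose ps -q server) bash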

If you want to learn more about setting up a development environment for a reasonably complex app, check out this article on development with docker-compose.

1 Comment

Thanks; I don't want to run a shell independently because I want to watch stdout from my server.
