Working locally with Docker containers
Over the weekend I wrote a small tool that automatically updates the
/etc/hosts file with your running Docker containers. You can find the script here:
First, create a docker network. This allows the containers on it to connect to each other and resolve each other's hostnames automatically. This is most of the magic tbh.
docker network create evilcorp.internal
Now you can start new containers in the network:
docker run --network evilcorp.internal --rm -it nginx
docker run --network evilcorp.internal --name hello --rm -it nginx
After starting the above two containers you can then start docker-hosts-update,
and you should see a section like this in your /etc/hosts file:
# ! docker-hosts-update start !
# This section was automatically generated by docker-hosts-update
# Don't edit this part manually :)
172.20.0.2	hello.evilcorp.internal
172.20.0.3	friendly_golick.evilcorp.internal
# ! docker-hosts-update end !
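The entries in that section map straightforwardly onto what `docker network inspect` reports: each attached container appears under `Containers` with a `Name` and an `IPv4Address`. A minimal sketch of turning that JSON into hosts-file lines like the ones above (the function name and the trimmed-down JSON shape are my assumptions, not necessarily how docker-hosts-update does it):

```python
import json

def hosts_lines(inspect_json: str, domain: str = "evilcorp.internal") -> list[str]:
    """Build hosts-file lines from `docker network inspect <net>` output."""
    networks = json.loads(inspect_json)
    lines = []
    for net in networks:
        for container in net.get("Containers", {}).values():
            ip = container["IPv4Address"].split("/")[0]  # strip the CIDR suffix
            lines.append(f'{ip}\t{container["Name"]}.{domain}')
    return sorted(lines)

# A trimmed example with the two containers from above:
sample = json.dumps([{
    "Name": "evilcorp.internal",
    "Containers": {
        "abc123": {"Name": "hello", "IPv4Address": "172.20.0.2/16"},
        "def456": {"Name": "friendly_golick", "IPv4Address": "172.20.0.3/16"},
    },
}])
print("\n".join(hosts_lines(sample)))
```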
Because these containers are on the same bridged network, docker will first of all ensure that
their hostnames resolve to each other. E.g. the first container will be able to
connect to the "hello" container via either
hello.evilcorp.internal or just
hello.
What my script does is enable the host machine to resolve those hostnames too. This saves
you the trouble of looking up the IP manually (because hostname, duh), and it also removes
the need for approaches where you assign different ports to different containers and then
use some sort of service discovery tool and proxy to manage them. On both host and container,
http://hello.evilcorp.internal will resolve to the correct container.
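Keeping the hosts file in sync then boils down to replacing everything between the start and end markers (or appending a fresh section) whenever the set of running containers changes. A sketch of that splice, assuming the marker lines shown earlier; the helper name is mine:

```python
START = "# ! docker-hosts-update start !"
END = "# ! docker-hosts-update end !"

def splice(hosts_text: str, entries: list[str]) -> str:
    """Return hosts_text with the managed section replaced,
    or removed entirely when there are no entries."""
    lines = hosts_text.splitlines()
    # Drop any existing managed section.
    if START in lines and END in lines:
        start, end = lines.index(START), lines.index(END)
        del lines[start:end + 1]
    # Append a fresh section when there are containers to publish.
    if entries:
        lines += [START, *entries, END]
    return "\n".join(lines) + "\n"

base = "127.0.0.1\tlocalhost\n"
updated = splice(base, ["172.20.0.2\thello.evilcorp.internal"])
# Splicing again with no entries removes the section, which is
# exactly what should happen when the containers stop.
removed = splice(updated, [])
```

Rebuilding the whole section each time keeps the update idempotent, so the script can simply rerun it on every container start/stop event.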
Finally, when you stop the containers using
docker stop hello, you'll see the lines
automatically removed from the hosts file, as long as
docker-hosts-update is still running.