This series of tutorials deals with networking for swarm services. For networking with standalone containers, see Networking with standalone containers. If you need to learn more about Docker networking in general, see the overview.
This page includes the following tutorials. You can run each of them on Linux, Windows, or a Mac, but for the last one, you need a second Docker host running elsewhere.
- Use the default overlay network demonstrates how to use the default overlay network that Docker sets up for you automatically when you initialize or join a swarm. This network is not the best choice for production systems.
- Use user-defined overlay networks shows how to create and use your own custom overlay networks to connect services. This is recommended for services running in production.
- Use an overlay network for standalone containers shows how to communicate between standalone containers on different Docker daemons using an overlay network.
## Prerequisites
These tutorials require you to have at least a single-node swarm, which means that you have started Docker and run `docker swarm init` on the host. You can run the examples on a multi-node swarm as well.
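If you are not sure whether a host is already part of a swarm, you can check before initializing one. A minimal sketch, assuming a reasonably recent Docker Engine with Go-template support in `docker info` (the state flips from `inactive` to `active` once the swarm is initialized):

```console
$ docker info --format '{{.Swarm.LocalNodeState}}'
inactive

$ docker swarm init

$ docker info --format '{{.Swarm.LocalNodeState}}'
active
```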
## Use the default overlay network
In this example, you start an `alpine` service and examine the characteristics of the network from the point of view of the individual service containers.
This tutorial does not go into operating-system-specific details about how overlay networks are implemented, but focuses on how the overlay functions from the point of view of a service.
### Prerequisites
This tutorial requires three physical or virtual Docker hosts which can all communicate with one another. This tutorial assumes that the three hosts are running on the same network with no firewall involved.
These hosts will be referred to as `manager`, `worker-1`, and `worker-2`. The `manager` host will function as both a manager and a worker, which means it can both run service tasks and manage the swarm. `worker-1` and `worker-2` will function as workers only.
If you don't have three hosts handy, an easy solution is to set up three Ubuntu hosts on a cloud provider such as Amazon EC2, all on the same network with all communications allowed to all hosts on that network (using a mechanism such as EC2 security groups), and then to follow the installation instructions for Docker Engine - Community on Ubuntu.
### Walkthrough
#### Create the swarm
At the end of this procedure, all three Docker hosts will be joined to the swarm and will be connected together using an overlay network called `ingress`.
1. On `manager`, initialize the swarm. If the host only has one network interface, the `--advertise-addr` flag is optional.

   ```console
   $ docker swarm init --advertise-addr=<IP-ADDRESS-OF-MANAGER>
   ```

   Make a note of the text that is printed, as this contains the token that you will use to join `worker-1` and `worker-2` to the swarm. It is a good idea to store the token in a password manager. (If you lose it, you can print the join command again; see the sketch after this list.)
2. On `worker-1`, join the swarm. If the host only has one network interface, the `--advertise-addr` flag is optional.

   ```console
   $ docker swarm join --token <TOKEN> \
     --advertise-addr <IP-ADDRESS-OF-WORKER-1> \
     <IP-ADDRESS-OF-MANAGER>:2377
   ```
3. On `worker-2`, join the swarm. If the host only has one network interface, the `--advertise-addr` flag is optional.

   ```console
   $ docker swarm join --token <TOKEN> \
     --advertise-addr <IP-ADDRESS-OF-WORKER-2> \
     <IP-ADDRESS-OF-MANAGER>:2377
   ```
4. On `manager`, list all the nodes. This command can only be run from a manager.

   ```console
   $ docker node ls

   ID                            HOSTNAME            STATUS    AVAILABILITY   MANAGER STATUS
   d68ace5iraw6whp7llvgjpu48 *   ip-172-31-34-146    Ready     Active         Leader
   nvp5rwavvb8lhdggo8fcf7plg     ip-172-31-35-151    Ready     Active
   ouvx2l7qfcxisoyms8mtkgahw     ip-172-31-36-89     Ready     Active
   ```

   You can also use the `--filter` flag to filter by role:

   ```console
   $ docker node ls --filter role=manager

   ID                            HOSTNAME            STATUS    AVAILABILITY   MANAGER STATUS
   d68ace5iraw6whp7llvgjpu48 *   ip-172-31-34-146    Ready     Active         Leader

   $ docker node ls --filter role=worker

   ID                            HOSTNAME            STATUS    AVAILABILITY   MANAGER STATUS
   nvp5rwavvb8lhdggo8fcf7plg     ip-172-31-35-151    Ready     Active
   ouvx2l7qfcxisoyms8mtkgahw     ip-172-31-36-89     Ready     Active
   ```
5. List the Docker networks on `manager`, `worker-1`, and `worker-2` and notice that each of them now has an overlay network called `ingress` and a bridge network called `docker_gwbridge`. Only the listing for `manager` is shown here:

   ```console
   $ docker network ls

   NETWORK ID          NAME                DRIVER              SCOPE
   495c570066be        bridge              bridge              local
   961c6cae9945        docker_gwbridge     bridge              local
   ff35ceda3643        host                host                local
   trtnl4tqnc3n        ingress             overlay             swarm
   c8357deec9cb        none                null                local
   ```
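If you did not save the output of `docker swarm init`, you can ask a manager to print a join command again at any time, as mentioned in the first step above (shown here for a worker token; use `docker swarm join-token manager` for a manager token):

```console
$ docker swarm join-token worker
```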
The `docker_gwbridge` connects the `ingress` network to the Docker host's network interface so that traffic can flow to and from swarm managers and workers. If you create swarm services and do not specify a network, they are connected to the `ingress` network. It is recommended that you use separate overlay networks for each application or group of applications which will work together. In the next procedure, you will create two overlay networks and connect a service to each of them.
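If you are curious how `docker_gwbridge` is configured on a particular node, you can inspect it like any other network. A sketch; the Go template below just extracts the subnet, and the value shown is illustrative (the bridge's subnet is host-local and varies):

```console
$ docker network inspect docker_gwbridge \
    --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
172.18.0.0/16
```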
#### Create the services
1. On `manager`, create a new overlay network called `nginx-net`:

   ```console
   $ docker network create -d overlay nginx-net
   ```

   You don't need to create the overlay network on the other nodes, because it will be automatically created when one of those nodes starts running a service task which requires it.
2. On `manager`, create a 5-replica Nginx service connected to `nginx-net`. The service will publish port 80 to the outside world. All of the service task containers can communicate with each other without opening any ports.

   > [!NOTE]
   > Services can only be created on a manager.

   ```console
   $ docker service create \
     --name my-nginx \
     --publish target=80,published=80 \
     --replicas=5 \
     --network nginx-net \
     nginx
   ```

   The default publish mode of `ingress`, which is used when you do not specify a `mode` for the `--publish` flag, means that if you browse to port 80 on `manager`, `worker-1`, or `worker-2`, you will be connected to port 80 on one of the 5 service tasks, even if no tasks are currently running on the node you browse to. If you want to publish the port using `host` mode, you can add `mode=host` to the `--publish` flag. However, you should also use `--mode global` instead of `--replicas=5` in this case, since only one service task can bind a given port on a given node. A host-mode sketch appears after this list.
3. Run `docker service ls` to monitor the progress of service bring-up, which may take a few seconds.
4. Inspect the `nginx-net` network on `manager`, `worker-1`, and `worker-2`. Remember that you did not need to create it manually on `worker-1` and `worker-2` because Docker created it for you. The output will be long, but notice the `Containers` and `Peers` sections. `Containers` lists all service tasks (or standalone containers) connected to the overlay network from that host.
5. From `manager`, inspect the service using `docker service inspect my-nginx` and notice the information about the ports and endpoints used by the service.
6. Create a new network `nginx-net-2`, then update the service to use this network instead of `nginx-net`:

   ```console
   $ docker network create -d overlay nginx-net-2
   ```

   ```console
   $ docker service update \
     --network-add nginx-net-2 \
     --network-rm nginx-net \
     my-nginx
   ```
7. Run `docker service ls` to verify that the service has been updated and all tasks have been redeployed. Run `docker network inspect nginx-net` to verify that no containers are connected to it. Run the same command for `nginx-net-2` and notice that all the service task containers are connected to it.

   > [!NOTE]
   > Even though overlay networks are automatically created on swarm worker nodes as needed, they are not automatically removed.
8. Clean up the service and the networks. From `manager`, run the following commands. The manager will direct the workers to remove the networks automatically.

   ```console
   $ docker service rm my-nginx
   $ docker network rm nginx-net nginx-net-2
   ```
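For reference, the `host`-mode variant described in step 2 might look like the following. This is a sketch, not part of the walkthrough above; `my-nginx-host` is a hypothetical service name, and `--mode global` runs exactly one task per node so that each task can bind port 80 on its own host:

```console
$ docker service create \
  --name my-nginx-host \
  --publish mode=host,target=80,published=80 \
  --mode global \
  --network nginx-net \
  nginx
```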
## Use a user-defined overlay network
### Prerequisites
This tutorial assumes the swarm is already set up and you are on a manager.
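To verify that the daemon you are talking to is a swarm manager, you can query it directly. A minimal sketch (`.Swarm.ControlAvailable` in the `docker info` output reports `true` only on manager nodes):

```console
$ docker info --format '{{.Swarm.ControlAvailable}}'
true
```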
### Walkthrough
1. Create the user-defined overlay network.

   ```console
   $ docker network create -d overlay my-overlay
   ```
2. Start a service using the overlay network and publishing port 80 to port 8080 on the Docker host.

   ```console
   $ docker service create \
     --name my-nginx \
     --network my-overlay \
     --replicas 1 \
     --publish published=8080,target=80 \
     nginx:latest
   ```
3. Run `docker network inspect my-overlay` and verify that the `my-nginx` service task is connected to it, by looking at the `Containers` section. A quick reachability check is sketched after this list.
4. Remove the service and the network.

   ```console
   $ docker service rm my-nginx

   $ docker network rm my-overlay
   ```
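Before tearing things down in step 4, you can optionally confirm that the published port answers (a sketch; assumes `curl` is installed on the host):

```console
$ curl -I http://localhost:8080
```

You should see an `HTTP/1.1 200 OK` response with an nginx `Server` header; because the port is published in the default `ingress` mode, this works from any node in the swarm.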
## Use an overlay network for standalone containers
This example demonstrates DNS container discovery -- specifically, how to communicate between standalone containers on different Docker daemons using an overlay network. The steps are:

- On `host1`, initialize the node as a swarm (manager).
- On `host2`, join the node to the swarm (worker).
- On `host1`, create an attachable overlay network (`test-net`).
- On `host1`, run an interactive alpine container (`alpine1`) on `test-net`.
- On `host2`, run an interactive but detached alpine container (`alpine2`) on `test-net`.
- On `host1`, from within a session of `alpine1`, ping `alpine2`.
### Prerequisites
For this test, you need two different Docker hosts that can communicate with each other. Each host must have the following ports open to the other host:

- TCP port 2377 (cluster management communications)
- TCP and UDP port 7946 (communication among nodes)
- UDP port 4789 (overlay network traffic)
One easy way to set this up is to have two VMs (either local or on a cloud provider like AWS), each with Docker installed and running. If you're using AWS or a similar cloud computing platform, the easiest configuration is to use a security group that opens all incoming ports between the two hosts and the SSH port from your client's IP address.
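If your two hosts are local VMs rather than cloud instances, open the same ports in whatever firewall the hosts run. A sketch for Ubuntu hosts using `ufw` (an assumption; adapt to your firewall of choice), run on both hosts:

```console
$ sudo ufw allow 2377/tcp
$ sudo ufw allow 7946/tcp
$ sudo ufw allow 7946/udp
$ sudo ufw allow 4789/udp
```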
This example refers to the two nodes in our swarm as `host1` and `host2`. This example also uses Linux hosts, but the same commands work on Windows.
### Walkthrough
1. Set up the swarm.

   a. On `host1`, initialize a swarm (and if prompted, use `--advertise-addr` to specify the IP address for the interface that communicates with other hosts in the swarm, for instance, the private IP address on AWS):

      ```console
      $ docker swarm init
      Swarm initialized: current node (vz1mm9am11qcmo979tlrlox42) is now a manager.

      To add a worker to this swarm, run the following command:

          docker swarm join --token SWMTKN-1-5g90q48weqrtqryq4kj6ow0e8xm9wmv9o6vgqc5j320ymybd5c-8ex8j0bc40s6hgvy5ui5gl4gy 172.31.47.252:2377

      To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
      ```

   b. On `host2`, join the swarm as instructed above:

      ```console
      $ docker swarm join --token <your_token> <your_ip_address>:2377
      This node joined a swarm as a worker.
      ```

   If the node fails to join the swarm, the `docker swarm join` command times out. To resolve, run `docker swarm leave --force` on `host2`, verify your network and firewall settings, and try again.
2. On `host1`, create an attachable overlay network called `test-net`:

   ```console
   $ docker network create --driver=overlay --attachable test-net
   uqsof8phj3ak0rq9k86zta6ht
   ```

   Notice the returned NETWORK ID -- you will see it again when you connect to it from `host2`.
3. On `host1`, start an interactive (`-it`) container (`alpine1`) that connects to `test-net`:

   ```console
   $ docker run -it --name alpine1 --network test-net alpine
   / #
   ```
4. On `host2`, list the available networks -- notice that `test-net` does not yet exist:

   ```console
   $ docker network ls

   NETWORK ID          NAME                DRIVER              SCOPE
   ec299350b504        bridge              bridge              local
   66e77d0d0e9a        docker_gwbridge     bridge              local
   9f6ae26ccb82        host                host                local
   omvdxqrda80z        ingress             overlay             swarm
   b65c952a4b2b        none                null                local
   ```
5. On `host2`, start a detached (`-d`) and interactive (`-it`) container (`alpine2`) that connects to `test-net`:

   ```console
   $ docker run -dit --name alpine2 --network test-net alpine
   fb635f5ece59563e7b8b99556f816d24e6949a5f6a5b1fbd92ca244db17a4342
   ```

   > [!NOTE]
   > Automatic DNS container discovery only works with unique container names.
6. On `host2`, verify that `test-net` was created (and has the same NETWORK ID as `test-net` on `host1`):

   ```console
   $ docker network ls

   NETWORK ID          NAME                DRIVER              SCOPE
   ...
   uqsof8phj3ak        test-net            overlay             swarm
   ```
7. On `host1`, ping `alpine2` within the interactive terminal of `alpine1`:

   ```console
   / # ping -c 2 alpine2

   PING alpine2 (10.0.0.5): 56 data bytes
   64 bytes from 10.0.0.5: seq=0 ttl=64 time=0.600 ms
   64 bytes from 10.0.0.5: seq=1 ttl=64 time=0.555 ms

   --- alpine2 ping statistics ---
   2 packets transmitted, 2 packets received, 0% packet loss
   round-trip min/avg/max = 0.555/0.577/0.600 ms
   ```

   The two containers communicate over the overlay network connecting the two hosts. If you run another alpine container on `host2` that is not detached, you can ping `alpine1` from `host2` (and here we add the `--rm` option for automatic container cleanup):

   ```console
   $ docker run -it --rm --name alpine3 --network test-net alpine
   / # ping -c 2 alpine1
   / # exit
   ```
8. On `host1`, close the `alpine1` session (which also stops the container):

   ```console
   / # exit
   ```
9. Clean up your containers and networks:

   You must stop and remove the containers on each host independently because the Docker daemons operate independently and these are standalone containers. You only have to remove the network on `host1` because when you stop `alpine2` on `host2`, `test-net` disappears.

   a. On `host2`, stop `alpine2`, check that `test-net` was removed, then remove `alpine2`:

      ```console
      $ docker container stop alpine2
      $ docker network ls
      $ docker container rm alpine2
      ```

   b. On `host1`, remove `alpine1` and `test-net`:

      ```console
      $ docker container rm alpine1
      $ docker network rm test-net
      ```
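If you set up the swarm only for this exercise, you may also want to dissolve it afterwards. A sketch (leave from the worker first; `--force` is required on `host1` because it is the swarm's last manager):

```console
$ # On host2:
$ docker swarm leave

$ # On host1:
$ docker swarm leave --force
```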