October 31, 2018
This lab demonstrates the properties of Docker networking and how they map onto Linux networking. Two types of networking are covered: bridge networking and overlay networking. Examples show that bridge networking works with NAT (PAT) port mapping and that overlay networking works in Swarm mode.
docker network ls
The output shows the container networks that are created as part of a standard installation of Docker.
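On a clean installation, the output will look something like this (the network IDs will differ on your host):
NETWORK ID          NAME                DRIVER              SCOPE
1befe23acd58        bridge              bridge              local
726ead8f4e6b        host                host                local
ef4896538cc7        none                null                local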
docker network inspect <network>
The output shows network configuration details, which include name, ID, driver, IPAM driver, subnet info, connected containers, and more.
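For example, inspecting the default bridge network returns JSON along these lines (heavily trimmed here; the subnet is typically 172.17.0.0/16 but may differ on your host):
[
    {
        "Name": "bridge",
        "Driver": "bridge",
        "IPAM": {
            "Config": [ { "Subnet": "172.17.0.0/16" } ]
        },
        "Containers": {}
    }
]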
docker info
The output shows the bridge, host, macvlan, null, and overlay drivers.
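Look for the Network line under the Plugins section; it should read something like:
Plugins:
 Network: bridge host macvlan null overlay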
Every clean installation of Docker comes with a pre-built network called bridge. It’s important to note that the network and the driver are connected, but they are not the same. The bridge driver provides single-host networking based on a Linux bridge (a.k.a. a virtual switch).
sudo apt-get install bridge-utils
brctl show
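With no containers running yet, the output shows the docker0 bridge with nothing attached (the bridge id will differ on your host):
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242f17f89a6       no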
The bridge network is the default network for new containers. This means that unless you specify a different network, all new containers will be connected to the bridge network.
docker run -dt ubuntu sleep infinity
brctl show
Notice that the docker0 bridge now has an interface attached. This is one end of a veth pair that connects the docker0 bridge to the new container.
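The output should now look something like this, with a veth endpoint (the exact name will vary) in the interfaces column:
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242f17f89a6       no              veth3a080f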
docker network inspect bridge
The container's IP address is listed in the IPv4Address field under the Containers section of the output.
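If you want just the address, docker network inspect accepts a Go template. This one-liner (a convenience, not part of the original lab) prints the IPv4 address of each container attached to the bridge network:
docker network inspect bridge --format '{{range .Containers}}{{.IPv4Address}} {{end}}'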
ping -c5 <IPv4 Address>
It should work without packet loss.
docker exec -it yourcontainerid /bin/bash
apt-get update && apt-get install -y iputils-ping
ping -c5 www.github.com
This shows that the new container can ping the internet and therefore has a valid and working network configuration.
It is possible to use NAT (Network Address Translation) to reach a container from outside the host. In this test we use PAT (Port Address Translation), a form of NAT in which a host port is mapped to a container port.
docker run --name web1 -d -p 8080:80 nginx
curl 127.0.0.1:8080
You should see the HTML of the NGINX welcome page.
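Under the hood, Docker implements this published port with a DNAT rule in the Linux kernel's nat table. Assuming the default iptables backend, you can see the rule with:
sudo iptables -t nat -L DOCKER -n
The output should include a DNAT entry forwarding TCP port 8080 on the host to port 80 on the container's bridge IP.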
docker swarm init --advertise-addr $(hostname -i)
On the worker node, join the Swarm:
docker swarm join \
  --token <join token> \
  <manager node ip:port>
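The full join command, including the token, is printed when docker swarm init runs. If you no longer have it, you can print it again on the manager node:
docker swarm join-token worker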
docker node ls
docker network create -d overlay overnet
docker network ls
The new “overnet” network is shown on the last line of the output above. Notice how it is associated with the overlay driver and is scoped to the entire Swarm.
On the worker node, the overnet network will not appear yet, because Docker only extends overlay networks to a host when the host runs a task from a service attached to that network.
docker service create --name myservice \
--network overnet \
--replicas 2 \
ubuntu sleep infinity
docker service ps myservice
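With two replicas, one task should be scheduled on each node. The output will look roughly like this (task IDs and node names will differ):
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE
<task id>           myservice.1         ubuntu:latest       node1               Running             Running
<task id>           myservice.2         ubuntu:latest       node2               Running             Running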
docker network ls
On the worker node, the “overnet” network now appears in the output, because a myservice task is running there. Inspect the network to see the details.
docker network inspect overnet
Run this on both nodes and note the IPv4Address of the myservice container listed under Containers on each; you will need the worker node container's address for the next step.
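A convenience one-liner (not part of the original lab) that prints the name and address of each container attached to overnet:
docker network inspect overnet --format '{{range .Containers}}{{.Name}}: {{.IPv4Address}} {{end}}'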
docker exec -it yourcontainerid /bin/bash
apt-get update && apt-get install -y iputils-ping
ping -c5 <worker node container ip>
It should work without any packet loss, which indicates that both tasks from the myservice service are on the same overlay network spanning both nodes and that they can use this network to communicate.
docker exec -it yourcontainerid /bin/bash
ping -c5 myservice
The container can ping the myservice service by name. From the inspect output below, we can also learn the virtual IP (VIP) assigned to the service.
docker service inspect myservice
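The VIP appears under Endpoint.VirtualIPs in the inspect output. To extract just the address, you can use a Go template (a convenience, not part of the original lab):
docker service inspect --format '{{range .Endpoint.VirtualIPs}}{{.Addr}} {{end}}' myservice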
Finally, clean up by removing the service, killing the test containers, and leaving the Swarm:
docker service rm myservice
docker kill yourcontainerid1 yourcontainerid2
docker swarm leave --force
Written by Warren, who studies distributed systems at George Washington University. You might want to follow him on GitHub.