Thanks to modern transportation networks, whether highway, railway, or air, travel has become very convenient. When data moves freely through the container world, it likewise passes through many kinds of networks. At present, Windows containers support NAT, overlay, transparent, l2bridge, and l2tunnel networks. L2tunnel is used in Azure and is beyond the scope of this article. Next, let's look at the other network types.
Before introducing container networks, you need to understand the virtual switch types in Hyper-V. Hyper-V has three types of virtual switches: external, internal, and private. External and internal switches are the ones used in container networking. An external virtual switch is bound to a physical network adapter of the container host, while an internal virtual switch creates a virtual network adapter on the host itself.
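The two switch types used by container networking can also be created directly with PowerShell; the adapter and switch names below are illustrative, not required values:

```shell
# Create an external vSwitch bound to a physical NIC (the adapter name is host-specific)
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet0" -AllowManagementOS $true

# Create an internal vSwitch; Windows adds a matching virtual NIC on the host
New-VMSwitch -Name "InternalSwitch" -SwitchType Internal
```

In practice the container engine and networking drivers create these switches for you; the commands are shown only to make the two switch types concrete.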
To view the container networks on the current computer, run docker network ls.
C:\Users\greggu\vsrepos\posts $ docker network ls
NETWORK ID     NAME             DRIVER        SCOPE
a5f85bc334db   Default Switch   ics           local
d2eb7fb1de63   External         transparent   local
4ea88ac7d5be   nat              nat           local
76080eecc255   none             null          local
When the container engine runs for the first time, a network named nat is created by default. It uses an internal virtual switch together with a Windows system component named WinNAT. By default, containers running on Windows are connected to this network and automatically obtain an IP address from the 172.16.0.0/16 subnet. NAT networks also support port forwarding/mapping from the container host to containers.
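Port mapping on the default nat network can be sketched as follows; the image, container name, and ports are only examples:

```shell
# Map host port 8080 to container port 80 on the default nat network
docker run -d --name web -p 8080:80 mcr.microsoft.com/windows/servercore/iis

# Show the address the container received from the 172.16.0.0/16 subnet
docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" web
```

Traffic to port 8080 on the container host is then forwarded by WinNAT to port 80 inside the container.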
When starting a container, you can connect it to a network of type transparent by specifying the --network parameter. The container then connects to the physical network through the Hyper-V external switch and obtains an IP address from the DHCP server on the external network. You can also add the --ip parameter to assign a static IP, but note that the --ip6 parameter is not currently supported by Windows containers. The following is an example of specifying a network when starting a container.
docker run -it --rm --name demo02 --network External greggu/demo02:0.0.1 cmd
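Before a container can join a transparent network, one has to exist. A minimal sketch, assuming the host NIC supports it; the network name and static IP below are illustrative:

```shell
# Create a transparent network (binds an external vSwitch to a physical NIC)
docker network create -d transparent TransparentNet

# Attach a container and pin a static IPv4 address from the external subnet
docker run -it --rm --network TransparentNet --ip 192.168.1.50 greggu/demo02:0.0.1 cmd
```

If no --ip is given, the container asks the external network's DHCP server for an address instead.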
When the container engine runs in swarm cluster mode, containers can be connected to an overlay network. All containers attached to the same overlay network can communicate with each other, even when they run on different hosts. Overlay networks can also be used with Kubernetes through network plug-ins; the currently supported plug-ins are Flannel and OVN.
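A minimal swarm-mode sketch, with the advertise address, network name, and service name as placeholders:

```shell
# Initialize swarm mode on the first node (advertise address is an example)
docker swarm init --advertise-addr 10.0.0.10

# Create an attachable overlay network
docker network create -d overlay --attachable demo-overlay

# Run a service whose replicas can reach each other over the overlay
docker service create --name web --network demo-overlay --replicas 2 greggu/demo02:0.0.1
```

Additional nodes joined with docker swarm join can then run replicas that reach each other across hosts over the same overlay network.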
When a container is connected to an l2bridge network, it uses the same IP subnet as the container host. However, the IP address must be statically assigned from the container host's network. In this mode, all containers on the host share the container host's MAC address, because of the MAC address rewriting function.
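Creating an l2bridge network might look like the following; the subnet and gateway must match the container host's physical network, and the values here are only examples:

```shell
# Create an l2bridge network on the host's own subnet
docker network create -d l2bridge --subnet 192.168.1.0/24 --gateway 192.168.1.1 l2bridgeNet

# Statically assign a free address from that subnet to the container
docker run -it --rm --network l2bridgeNet --ip 192.168.1.100 greggu/demo02:0.0.1 cmd
```

Because of MAC rewriting, traffic from this container appears on the physical network with the host's MAC address but the container's own IP.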