Good news on the docker-misery front. I'm writing up the complete process of how I fixed the network problems, in a few steps. Disclaimer: before you follow these steps, keep in mind that I'm not sure whether I'm breaking any internal docker mechanics or opening the gates of hell with this. Whatever you do, back up your data!
A little recap of what has happened until now:
I'm running containers on a NAS whose OS only offers containers via docker. First-time docker user here. The network setup was a pain: containers couldn't reach the outside network (no update calls etc.). The goal is a reverse proxy with SSL, with further services behind that reverse proxy.
Let’s get it on
While losing my mind, I took another look at the network side: the docker bridges, the bridge interfaces they create, and the system routes. What I found there was confusing. The bridge, as well as the interface behind it, got an IP and a route in an unrelated link-local subnet (169.254.0.0/16, the range hosts fall back to when DHCP fails). Neither pingable nor connected in any way that would allow communication.
As this was my last hope, I just modified the interface and its routes.
For a start, create a new bridge network and note the subnet it was given.
# docker network create test
# docker network list
NETWORK ID     NAME     DRIVER    SCOPE
8d3a71442dbd   bridge   bridge    local
83ec6ae8033b   host     host      local
61ee2316d6d1   none     null      local
e28eb058d071   test     bridge    local
# docker network inspect test
--%< snip --
"Subnet": "172.20.0.0/16",
"Gateway": "172.20.0.1"
So far, so good. Spin up a container and ping google.
# docker run -it --rm --network test debian:buster
root@c4b624e044c6:/# ip addr show eth0
83: eth0@if84: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:14:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.20.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
root@c4b624e044c6:/# ping google.com
ping: google.com: Temporary failure in name resolution
root@c4b624e044c6:/# exit
The container is inside the correct subnet, and its IP looks fine. Ping, however, doesn't work. So let's look at the interface and the route on the host:
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.178.1   0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 br-e28eb058d071
--%< snip --
# ip addr show br-e28eb058d071
82: br-e28eb058d071: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:32:a0:a3:f5 brd ff:ff:ff:ff:ff:ff
    inet 169.254.251.27/16 brd 169.254.255.255 scope global br-e28eb058d071
       valid_lft forever preferred_lft forever
    inet6 fe80::42:32ff:fea0:a3f5/64 scope link
       valid_lft forever preferred_lft forever
Huh? Maybe I misunderstand networks in general, but I'm not really surprised there is no connection with this setup. The container subnet is 172.20.0.0/16, yet the bridge sits in 169.254.0.0/16; there is no other route or bridge bringing the two sides together. So I deleted the link-local route and added a new one matching the actual subnet. Additionally, the interface receives the .1 address, as the host acts as the gateway.
# ip route del 169.254.0.0/16 dev br-e28eb058d071
# ip route add 172.20.0.0/16 dev br-e28eb058d071
# ip addr add 172.20.0.1 dev br-e28eb058d071
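If you end up repeating this for other networks, the three commands can be derived instead of typed by hand: the subnet and gateway are in the `docker network inspect` output, and the bridge interface name is just `br-` plus the first 12 characters of the network ID. Here is a minimal, hypothetical sketch (not from the original fix) that only prints the commands as a dry run; the inspect JSON is embedded for illustration, on a real host you would feed it `docker network inspect <name>` instead:

```shell
#!/bin/sh
# Sketch: derive the route/addr fix for a docker bridge network.
# The inspect JSON is pasted in here for illustration; in practice,
# pipe in the real output of `docker network inspect test`.
cat > /tmp/inspect.json <<'EOF'
[{"Id": "e28eb058d071", "IPAM": {"Config": [{"Subnet": "172.20.0.0/16", "Gateway": "172.20.0.1"}]}}]
EOF

# Scrape the values out of the JSON (jq would be nicer, if the NAS has it).
SUBNET=$(sed -n 's/.*"Subnet": *"\([^"]*\)".*/\1/p' /tmp/inspect.json)
GATEWAY=$(sed -n 's/.*"Gateway": *"\([^"]*\)".*/\1/p' /tmp/inspect.json)
# Bridge interface = "br-" + first 12 chars of the network ID.
BRIDGE="br-$(sed -n 's/.*"Id": *"\([^"]*\)".*/\1/p' /tmp/inspect.json | cut -c1-12)"

# Dry run: print the commands instead of executing them; run as root to apply.
echo "ip route del 169.254.0.0/16 dev $BRIDGE"
echo "ip route add $SUBNET dev $BRIDGE"
echo "ip addr add $GATEWAY dev $BRIDGE"
```

Note this only prints; drop the `echo`s (and run as root) once you're happy with the output. The fix is not persistent across reboots either way, so something like this would have to run again after the NAS restarts.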
Starting a new container again, the outside network is fully reachable.
# docker run -it --rm --network test debian:buster
root@053b730b0023:/# ping google.de
PING google.de (188.8.131.52) 56(84) bytes of data.
64 bytes from fra16s13-in-f3.1e100.net (184.108.40.206): icmp_seq=1 ttl=118 time=24.4 ms
64 bytes from fra16s13-in-f3.1e100.net (220.127.116.11): icmp_seq=2 ttl=118 time=22.3 ms
64 bytes from fra16s13-in-f3.1e100.net (18.104.22.168): icmp_seq=3 ttl=118 time=23.5 ms
64 bytes from fra16s13-in-f3.1e100.net (22.214.171.124): icmp_seq=4 ttl=118 time=23.8 ms
^C
--- google.de ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 6ms
rtt min/avg/max/mdev = 22.269/23.486/24.375/0.783 ms
Whoohoo! Time to bring up Traefik and the other services! But not now – those steps will follow in the next post.
Last words on this case
The NAS OS is limited compared to a full-blown linux – well, no surprise there. But I'm still not sure whether this problem comes from the NAS OS itself or from its docker integration. Desperate for a solution, I reproduced the same scenario on a VPS: vanilla debian buster, docker installed from the docker repos, and off we went. The key difference is that there, the docker bridge gets the correct route and IP set. Create bridge, run container, ping the outside – no problem. So I'm still torn about where the problem really comes from. But then again: it now works the way I want it to. Case closed.