Mimicking an AWS VPC Setup Using Docker

Hello everyone! In this article we're going to play around with Docker networking. Our aim is to take a complete network setup originally built on AWS (a VPC) and reimplement it using Docker networks.
The setup originally looked as follows:

A very basic network setup consisting of 3 subnets:
A subnet for the frontend, which is public. A public subnet can be accessed from outside the network; in other words, it's exposed to the public internet.
A subnet for the backend, which can be private or public. If it exposes any external APIs, it may be best kept public.
A subnet for the database, declared private. No one outside this subnet is allowed to access the database. In AWS, however, other subnets in the same VPC can still reach it, so we'll aim to do the same thing.
We'll be adding more to this setup as we go along. Let's start!
For simplicity we'll have 3 Docker containers, each running an httpd server. Our first step is making sure that IP addresses and DNS hostnames get resolved between them.
Let's create the very first subnet for the frontend
docker network create frontend-nw --subnet 10.0.0.0/24
The first 3 bytes identify the network portion and the last byte is the host portion, so we can have up to 254 usable hosts on this network (256 addresses minus the network and broadcast addresses).
Let's spin up the frontend container. All containers have ping and traceroute installed.
docker run -d --network frontend-nw --name frontend --cap-add=NET_ADMIN httpd
Upon inspecting the frontend container, we see it got assigned the IP address 10.0.0.2 inside frontend-nw.
The --cap-add=NET_ADMIN flag grants the container the privileges it needs for network administration tasks, such as editing its own route table later on.
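The assigned address can also be read straight from docker inspect rather than scanning the full JSON output; a small sketch using a Go template (this assumes the container name frontend from above):

```shell
# Print the container's IP address on each network it is attached to.
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' frontend
```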
Now we create the backend network, set up the same way as the one above.
docker network create backend-nw --subnet 10.0.1.0/24
We chose a different subnet this time as seen in the command above.
Now let's create and add a backend container to this network.
docker run -d --network backend-nw --name backend --cap-add=NET_ADMIN httpd
It was assigned the IP address 10.0.1.2.
Now we have 2 networks and inside each one is a container.
However, these containers cannot talk to each other.
If we try and ping backend from frontend, whether by hostname or IP address, it will not resolve.
That's simply because they're not in the same network, and frontend doesn't know how to reach backend.
To resolve this we need some sort of gateway or router that connects the different networks together. On top of that, we'll need to add rules to the route table in each subnet directing traffic bound for the other network to that gateway.
We're going to create a gateway container and join it to both networks. If we were to set this up in AWS we would only add routing rules pointing at the different subnets, but in Docker it doesn't work quite the same way.
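Before adding anything, we can look at what a container's route table already contains (this assumes iproute2 is available in the image, which the later ip route add steps require anyway):

```shell
# Frontend only knows its own subnet and a default route via Docker's
# bridge gateway -- there is no entry for 10.0.1.0/24 yet.
docker exec frontend ip route
```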
Let's create the gateway container
docker run -d --name gateway httpd
Join the gateway to both networks.
docker network connect frontend-nw gateway
docker network connect backend-nw gateway
Disconnect it from the bridge network
docker network disconnect bridge gateway
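One assumption worth verifying: for the gateway to forward packets between the two networks, IP forwarding must be enabled inside its network namespace.

```shell
# 1 means forwarding is on; 0 means transit traffic would be dropped.
docker exec gateway cat /proc/sys/net/ipv4/ip_forward
```

If it prints 0, the gateway can be recreated with docker run --sysctl net.ipv4.ip_forward=1 to switch forwarding on at start.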
If we ping either the frontend or the backend from the gateway, we get a successful response.
However we still need to add routing rules to both frontend and backend to tell them how to route their packets because they still can't see each other.
In the frontend container, add a rule routing anything destined for 10.0.1.x to the gateway's address on the frontend network (it was assigned 10.0.0.3 when it joined):
ip route add 10.0.1.0/24 via 10.0.0.3
In the backend container we need to do the same, this time via the gateway's address on the backend network:
ip route add 10.0.0.0/24 via 10.0.1.3
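Since the containers are already running, both rules can be applied from the host with docker exec; a sketch, assuming the gateway addresses (10.0.0.3 and 10.0.1.3) seen above:

```shell
# Frontend: send backend-bound traffic through the gateway's frontend-nw address.
docker exec frontend ip route add 10.0.1.0/24 via 10.0.0.3
# Backend: send frontend-bound traffic through the gateway's backend-nw address.
docker exec backend ip route add 10.0.0.0/24 via 10.0.1.3
```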
By doing this, if we bash into frontend and traceroute to the backend service we see the following:
traceroute to 10.0.1.2 (10.0.1.2), 30 hops max, 60 byte packets
1 gateway.frontend-nw (10.0.0.3) 3.642 ms 3.187 ms 3.076 ms
2 10.0.1.2 (10.0.1.2) 2.979 ms 1.590 ms 1.321 ms
And from backend to frontend:
traceroute to 10.0.0.2 (10.0.0.2), 30 hops max, 60 byte packets
1 gateway.backend-nw (10.0.1.3) 1.813 ms 1.395 ms 1.195 ms
2 10.0.0.2 (10.0.0.2) 1.141 ms 0.872 ms 0.803 ms
Our current setup in docker looks as follows:

However, we can only route by IP address; we need a way to resolve hostnames as well.
One solution is to replace the gateway image with something like dnsmasq and configure the hostnames in its config.
We'll also update resolv.conf in every container to use the gateway as its nameserver:
# resolv.conf
nameserver $GATEWAY_IP # backend and frontend use different values, depending on their network
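As an aside, a similar effect can be had at container-creation time with Docker's --dns flag instead of hand-editing resolv.conf inside a running container. A sketch, reusing the backend's setup from earlier (an alternative workflow, not the article's original steps):

```shell
# Recreate the backend with the gateway's backend-nw address as its DNS server.
docker run -d --network backend-nw --name backend --cap-add=NET_ADMIN \
  --dns 10.0.1.3 httpd
```

Note that on user-defined networks Docker's embedded DNS (127.0.0.11) ends up in resolv.conf and forwards unknown names to the servers given via --dns, so queries still reach the gateway.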
We'll then edit the dnsmasq config (/etc/dnsmasq.conf) and add the container names along with their IP addresses in their respective networks:
no-resolv # don't read upstream servers from /etc/resolv.conf
server=8.8.8.8 # Google's public DNS resolvers
server=8.8.4.4
host-record=backend,10.0.1.2 # backend name and IP
host-record=frontend,10.0.0.2 # frontend name and IP
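There is no official dnsmasq image, so as one hedged sketch the new gateway could be an Alpine container that installs dnsmasq on startup and reads the config above, mounted from the host:

```shell
# Hypothetical dnsmasq gateway: config mounted from the current directory;
# dnsmasq reads /etc/dnsmasq.conf by default and stays in the foreground.
docker run -d --name gateway \
  -v "$PWD/dnsmasq.conf:/etc/dnsmasq.conf:ro" \
  alpine sh -c 'apk add --no-cache dnsmasq && dnsmasq --no-daemon'
```

The docker network connect/disconnect steps from before still apply to this container.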
After applying these changes, pinging backend from frontend and vice versa will resolve successfully!
Final step is to add the database private network.
Let's create the new network and containers.
docker network create database-nw --subnet 10.0.2.0/24 --internal
The --internal flag means the network is isolated from external networks, including the host's network interface. Containers connected to an internal network can communicate with each other but cannot access external networks, nor can they be reached from outside the Docker network.
Now the container.
docker run --cap-add=NET_ADMIN --network database-nw -d --name database httpd
If you try to ping an external host like google.com from this container it won't work, unlike the other containers. This network is completely isolated from the outside world, which is a good security measure to take.
Usually, if the database needed to perform any updates it would access the internet through a NAT gateway, but that is outside the scope of this article.
Now let's allow the backend to access the database. We'll do so by extending our existing gateway and adding more routing rules to the database and backend containers.
Let's start by connecting the gateway itself to the internal network:
docker network connect database-nw gateway
From the backend, route requests destined for 10.0.2.0/24 to the gateway:
ip route add 10.0.2.0/24 via 10.0.1.3
And in the database container, route traffic for the backend subnet back through the gateway's address on the database network:
ip route add 10.0.1.0/24 via 10.0.2.3
This way we can access our database from our backend subnet!
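With both rules in place, the path can be verified the same way as before (addresses assume the assignments above):

```shell
# From backend, the database should now answer, with the gateway as the first hop.
docker exec backend ping -c 1 10.0.2.2
docker exec backend traceroute 10.0.2.2
```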
To add DNS resolution for it as well, let's update our dnsmasq config with the database container's name:
no-resolv # don't read upstream servers from /etc/resolv.conf
server=8.8.8.8 # Google's public DNS resolvers
server=8.8.4.4
host-record=backend,10.0.1.2 # backend name and IP
host-record=frontend,10.0.0.2 # frontend name and IP
host-record=database,10.0.2.2 # database name and IP
And configure our database to use dnsmasq as a nameserver. Now we can use database as a hostname and everything will work perfectly!
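As a final end-to-end check, name resolution and routing can be exercised together, assuming the host-records configured in dnsmasq above:

```shell
# Resolved by the gateway's dnsmasq, then routed through the gateway itself.
docker exec backend ping -c 1 database
docker exec frontend ping -c 1 backend
```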
Our setup now looks like this, with a gateway/DNS resolver that sits between the 3 subnets acting as their router.

Summary
This article was quite a deep dive into networking in Docker. Recreating a VPC-style network setup with plain Docker networks is something really cool. I hope it was clear, and if anyone has questions, make sure to leave them in the comments below. Till the next one!




