Mastering Docker Networking: Communication, Isolation, and Security in Containerized Environments

In this blog, we embark on a journey into the intricacies of Docker networking, unraveling the nuances of communication, isolation, and security within containerized environments.

Why is networking needed for containers?

Networking allows Docker containers to communicate with other containers and with the host system. Without any network configuration, a container cannot even communicate with the host. By default, Docker attaches every new container to a bridge network so that it can reach the host system.

Let's take a simple example: on our host system we have installed Docker and run an application on it. One container (c1) hosts a basic login system that everyone accesses, while another container (c2) holds payment credentials or sensitive database details. c2 therefore needs to be secured and isolated from the other containers, with access limited to a few administrators. At the same time, while being secure and isolated, c2 must still be able to communicate with the other containers and the host system when required.

So we can see two scenarios in this example:

  1. Containers talk to each other

  2. Containers are isolated from each other

Now let's see how Docker networking solves both of these scenarios. We'll look at the options for letting containers talk to each other while keeping them isolated when needed.

A) Bridge Network

It is the default network in Docker. When the Docker daemon starts, it creates a virtual Ethernet bridge on the host called docker0; whenever a container is created, Docker connects it to this bridge through a virtual Ethernet (veth) pair, and it is over this bridge that containers interact with the host machine. Without this virtual Ethernet bridge, the IP subnet of the container would differ from the host's IP, and the two could not communicate with each other.
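
A quick way to see this on a Linux host (a minimal check, assuming the ip tool from the iproute2 package is available):

# the docker0 bridge created by the Docker daemon
ip addr show docker0
# each running container adds a veth interface attached to this bridge
ip link | grep veth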

But the default bridge network doesn't offer isolation: every container is attached to the same bridge and sits in the same subnet, so anyone who has access to the host IP can easily reach and interfere with the Docker containers. This approach is therefore not suitable for isolating and securing containers.

B) Host Networking

When using the host networking mode in Docker, containers directly share the network namespace with the host system, binding to the host's eth0 interface. This means that anyone with access to the host also has direct access to the containers. While this facilitates communication between containers and the host, it comes at the cost of isolation. The lack of network namespace separation creates a potential security vulnerability, as attackers gaining access to the host can easily compromise the containers due to the shared network path. Therefore, although host networking provides efficient communication, it falls short in terms of container isolation and can pose security risks, making it crucial to evaluate the specific use case and security requirements before opting for this mode.

So this approach does not match our scenario of communication between containers, isolation, and a secured environment for the containers.

Create a container and attach it to the host network:

docker run -d --name host-demo --network=host nginx:latest

Inspecting the container shows that it has no IP address of its own; it is bound directly to the host's network stack, so it shares the host's IP.
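
We can confirm this with docker inspect and a Go template; in host mode the container has no IP of its own, so the command prints an empty string:

docker inspect -f '{{.NetworkSettings.IPAddress}}' host-demo
# prints an empty line: the container shares the host's network stack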

C) Overlay Networking

An overlay network in Docker is a network that spans multiple Docker hosts, enabling communication between containers on different hosts. It is commonly used in orchestration platforms like Kubernetes and Docker Swarm for connecting containers across a cluster of machines.

While overlay networks offer powerful capabilities, they are not always preferred for simpler use cases due to the following reasons:

  • Complexity: Setting up and managing overlay networks can be complex, involving additional configuration and coordination across multiple hosts.

  • Overhead: Overlay networks introduce additional overhead in terms of encapsulation and network traffic routing, potentially impacting performance.

  • Resource consumption: Running overlay networks requires additional resources, both in terms of network bandwidth and computational power.

  • Ease of use: The configuration and management of overlay networks might be more challenging for users who are new to Docker networking. Simpler alternatives, like custom bridge networks, often provide a more straightforward solution.
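
For reference, creating an overlay network on a Docker Swarm manager looks roughly like this (a sketch with placeholder names my-overlay and web; swarm mode must be initialized first):

# initialize swarm mode on the manager node
docker swarm init
# create an overlay network spanning the swarm
docker network create --driver overlay my-overlay
# run a service attached to the overlay network
docker service create --name web --network my-overlay nginx:latest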

So what's the best approach?

The custom bridge network in Docker is often considered a versatile and effective networking solution. It provides a well-balanced approach by facilitating seamless communication between containers, ensuring isolation through separate network namespaces, and enhancing security by reducing the attack surface compared to host networking. This approach strikes a practical balance, making it suitable for a wide range of scenarios.

However, the choice of networking mode should always be based on the specific requirements and considerations of the given use case, as more complex deployments may benefit from other networking options such as overlay networks in orchestration environments.

In this practical scenario, we have established a network environment comprising three Docker containers: c1 representing a login application, c2 a logout application, and c3 a payment application. Containers c1 and c2 are interconnected within the default bridge network, attached to the host's docker0 bridge through veth pairs. This enables seamless communication between c1 and c2 using their IP addresses.

In contrast, container c3 operates in a custom bridge network, ensuring isolation from c1 and c2. Despite this isolation, c3 can communicate with the host system through its eth0 interface. To facilitate this communication, we've exposed specific ports on c3, allowing access from the host system. This setup combines effective communication between c1 and c2 within the default bridge network while maintaining isolation for c3, achieving a practical balance in networking for these distinct containers.
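
Exposing a port is done with the -p flag at container start. A hypothetical example that maps host port 8080 to nginx's port 80 (the names c3 and custom-net are placeholders for illustration; the walkthrough below omits the port mapping for simplicity):

docker network create custom-net
docker run -d --name c3 --network=custom-net -p 8080:80 nginx:latest
# the host can now reach c3 at http://localhost:8080, while containers
# on other bridge networks cannot reach it directly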

Now let's create the login app with a docker command, exec into the container, and install ping so that we can test communication with another container later.

docker run -d --name login nginx:latest
# exec into the container
docker exec -it login /bin/bash
# inside the container: update the package index, then install ping
apt-get update && apt-get install -y iputils-ping

To verify that ping is installed:

ping -V
#sample output
root@48f25576a2d1:/# ping -V
ping from iputils 20221126
libcap: yes, IDN: yes, NLS: no, error.h: yes, getrandom(): yes, __fpending(): yes

Now in the other tab let's run our other container.

docker run -d --name logout nginx:latest

We can see that our two containers, login and logout, are running:
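
docker ps
# sample output (abbreviated; IDs and timings will differ):
# CONTAINER ID   IMAGE          STATUS    NAMES
# ...            nginx:latest   Up ...    logout
# 48f25576a2d1   nginx:latest   Up ...    login

Let's check their IP addresses.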

docker inspect container_name

The inspect output shows the IP address assigned to each container.
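
To pull out just the IP address instead of reading the full JSON, a Go template can be passed with -f (optional; the address also appears under NetworkSettings in the plain docker inspect output):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' login
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' logout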

Hence the IP address for the login container is 172.17.0.2 and for the logout container 172.17.0.3, which indicates that both containers are indeed in the same subnet (172.17.0.0/16) created under the default bridge network. This allows them to communicate with each other using these IP addresses over the shared bridge.
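
The subnet itself comes from the default bridge network, which we can confirm directly (a quick check against the network rather than the containers):

docker network inspect -f '{{(index .IPAM.Config 0).Subnet}}' bridge
# 172.17.0.0/16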

Now let's communicate between the containers. From the login container, we will try to ping the logout container, whose IP is 172.17.0.3.
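
One way to run the ping (4 packets) without keeping a shell open inside the container (timings are illustrative):

docker exec -it login ping -c 4 172.17.0.3
# PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
# 64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.085 ms
# ...
# 4 packets transmitted, 4 received, 0% packet loss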

Great, we can communicate between the two containers.

Now let's create another container named payment and assign a custom bridge network to it. But before that, let's create the new custom network.

docker network create secure-network
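
docker network create uses the bridge driver by default, so secure-network is a new user-defined bridge. We can confirm it exists alongside the built-in networks:

docker network ls
# sample output (IDs will differ):
# NETWORK ID     NAME             DRIVER    SCOPE
# ...            bridge           bridge    local
# ...            host             host      local
# ...            none             null      local
# ...            secure-network   bridge    local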

Now let's assign this custom network to our new container.

docker run -d --name payment --network=secure-network nginx:latest

List the running Docker containers with docker ps; all three containers (login, logout, and payment) should now be up.

Let us inspect the payment container:

docker inspect payment

Here, for the payment container, the network is secure-network and the IP address is 172.19.0.2, while the IP address for the login container is 172.17.0.2 and for the logout container 172.17.0.3. We can see the payment container is in a different subnet (172.19.0.0/16).

Now let's ping the payment container from the login container.
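
Using the payment container's address (172.19.0.2) from the inspect output; the expected result is total packet loss:

docker exec -it login ping -c 4 172.19.0.2
# PING 172.19.0.2 (172.19.0.2) 56(84) bytes of data.
# --- 172.19.0.2 ping statistics ---
# 4 packets transmitted, 0 received, 100% packet loss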

Here we can see that the login container is not able to ping the payment container; 100% of the packets are lost.

We've successfully set up a Docker networking configuration where two containers (login and logout) are in the same default bridge network (172.17.0.0/16) and can communicate with each other seamlessly. Simultaneously, the third container (payment) is securely isolated in a different subnet (172.19.0.0/16) under a custom bridge network (secure-network). This ensures that the payment container is separated from the communication between the login and logout containers, providing a secure and isolated environment.