Chapter 10: Container and Microservices Networking
Docker networking, Kubernetes network model, service mesh, CNI, and container network security
The Kubernetes Cluster That Exposed Millions of Records
In 2022, a major e-commerce platform suffered a significant data breach. The root cause wasn't a sophisticated zero-day exploit or advanced malware; it was a Kubernetes NetworkPolicy misconfiguration.
A developer had deployed a new microservice that needed to access the payment processing service. To get it working quickly, they created a NetworkPolicy that allowed all ingress traffic to the payment namespace. This "temporary" configuration was never reviewed and remained in production for months.
When attackers compromised a low-privileged pod through a vulnerable dependency, they discovered unrestricted network access across the entire cluster. They pivoted from a marketing microservice to the payment system, extracting millions of credit card records.
The fix? A properly configured NetworkPolicy that would have taken 15 minutes to write correctly. The damage? Over $50 million in breach costs, regulatory fines, and reputation damage.
Container networking is deceptively simple on the surface but critically complex underneath. Understanding it is essential for securing modern applications.
Why Container Networking Matters
If cloud networking was a significant shift from traditional data centers, container networking takes that shift even further. Instead of thinking about servers and IP addresses, we're now dealing with ephemeral workloads that may exist for only seconds, with IP addresses that change constantly.
The containerized world is different from anything we've discussed so far in this book. But it still relies on the same fundamental networking concepts (IP addresses, routing, DNS, firewalls), just implemented in new ways with new abstractions.
The shift to containerized applications has fundamentally changed networking:
```
Traditional Application:
────────────────────────
[Server 1] ─── [Server 2] ─── [Server 3]
     │              │              │
   App A          App B          App C
     │              │              │
  Fixed IP       Fixed IP       Fixed IP

Containerized Application:
──────────────────────────
[Host 1] ─────────────── [Host 2] ─────────────── [Host 3]
    │                        │                        │
┌───┴───┐                ┌───┴───┐                ┌───┴───┐
│Pod Pod│                │Pod Pod│                │Pod Pod│
│Pod Pod│                │Pod    │                │Pod Pod│
│Pod    │                │Pod Pod│                │Pod    │
└───────┘                └───────┘                └───────┘
Dozens of containers   Containers come and     Dynamic IPs,
per host               go constantly           constantly changing
```
Key differences:
| Aspect | Traditional | Containerized |
|---|---|---|
| Scale | Tens of servers | Thousands of containers |
| Lifetime | Months/years | Minutes/hours |
| IP addresses | Static | Dynamic |
| Network config | Manual | Automated |
| Isolation | Physical/VLAN | Software-defined |
In this chapter, we'll start with Docker networking fundamentals, then move to Kubernetes networking (which builds on Docker concepts but adds its own complexity), and finally explore advanced topics like Network Policies and Service Mesh. By the end, you'll understand how container networks work and how to secure them.
Docker Networking Fundamentals
Before we can understand Kubernetes networking, we need to understand Docker networking. Docker introduced containerization to the mainstream, and its networking model forms the foundation for everything that follows.
When you run a container with Docker, you might not think about networking: you just expose a port and it works. But understanding what's happening underneath helps you troubleshoot problems and recognize security implications.
Container Isolation
Each Docker container gets:
- Its own network namespace
- Its own network stack (interfaces, routing table, iptables)
- Isolation from other containers (by default)
```bash
# Examine a container's network namespace
docker run --rm alpine ip addr
# Shows the container's isolated network interfaces

# Compare to the host
ip addr
# Different interfaces, different IPs
```
Docker Network Drivers
Docker provides several networking modes:
Bridge Network (Default)
```
Bridge Network Architecture:
────────────────────────────
┌─────────────────────────────────────────────┐
│                Host Machine                 │
│                                             │
│   ┌───────────────────────────────────┐     │
│   │         docker0 (bridge)          │     │
│   │           172.17.0.1              │     │
│   └─────────┬────────────┬────────────┘     │
│             │            │                  │
│      ┌──────┴───┐  ┌─────┴────┐             │
│      │Container │  │Container │             │
│      │    .2    │  │    .3    │             │
│      │  (veth)  │  │  (veth)  │             │
│      └──────────┘  └──────────┘             │
│                                             │
│   [eth0: Host's real network interface]     │
└──────────────────┬──────────────────────────┘
                   │
            External Network
```
- Containers connect to a virtual bridge (`docker0`)
- Each container gets a virtual ethernet (veth) pair
- NAT provides outbound internet access
- Port mapping exposes services externally
```bash
# Create a container on the default bridge
docker run -d --name web nginx

# Map container port 80 to host port 8080
docker run -d -p 8080:80 --name web2 nginx

# Inspect the network
docker network inspect bridge
```
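Under the hood, that `-p 8080:80` mapping is typically implemented as a DNAT rule. On a Linux host where Docker uses its default iptables backend, you can see it in the nat table (a sketch; exact output varies by Docker version):

```bash
# List the NAT rules Docker created for port mappings
sudo iptables -t nat -L DOCKER -n
# Expect a DNAT entry roughly like:
# DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:8080 to:172.17.0.3:80
```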
Host Network
Container shares hostβs network namespace directly:
```bash
docker run --network=host nginx
# Container uses the host's IP directly
# No network isolation
# No port mapping needed
```
Security implications: Container can see all host network traffic and bind to any port.
None Network
Container has no network connectivity:
```bash
docker run --network=none alpine
# Only a loopback interface
# Maximum isolation
```
Custom Bridge Networks
User-defined bridges with better features:
```bash
# Create a custom network
docker network create --driver bridge mynetwork

# Run containers on the custom network
docker run -d --name app1 --network mynetwork nginx
docker run -d --name app2 --network mynetwork alpine sleep 1000

# Containers can reach each other by name
docker exec app2 ping app1  # DNS resolution works!
```
Custom bridge advantages:
- Automatic DNS resolution between containers
- Better isolation (containers only see their network)
- Can connect/disconnect containers dynamically (see the example below)
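For instance, a running container can be attached to or detached from a user-defined network without a restart. A quick sketch reusing the `mynetwork` and `web` containers created above:

```bash
# Attach the running container to a second network
docker network connect mynetwork web

# It now has an interface and IP on both networks
docker inspect web --format '{{json .NetworkSettings.Networks}}'

# Detach it again
docker network disconnect mynetwork web
```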
Overlay Network
Spans multiple Docker hosts for swarm/cluster deployments:
```
Overlay Network:
────────────────
┌─────────────────┐              ┌─────────────────┐
│     Host 1      │              │     Host 2      │
│                 │              │                 │
│ ┌────┐  ┌────┐  │    VXLAN     │ ┌────┐  ┌────┐  │
│ │ C1 │  │ C2 │  │◄────────────►│ │ C3 │  │ C4 │  │
│ └──┬─┘  └──┬─┘  │    Tunnel    │ └──┬─┘  └──┬─┘  │
│    │       │    │              │    │       │    │
│  ┌─┴───────┴─┐  │              │  ┌─┴───────┴─┐  │
│  │ [overlay] │  │              │  │ [overlay] │  │
│  └─────┬─────┘  │              │  └─────┬─────┘  │
└────────┼────────┘              └────────┼────────┘
         │                                │
         └───────────────┬────────────────┘
                         │
                 Physical Network
```
- Uses VXLAN encapsulation
- Containers across hosts appear on same network
- Encrypted option available (see the example below)
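Docker's built-in overlay driver requires swarm mode. A minimal sketch of creating an overlay with the optional encryption of inter-host VXLAN traffic:

```bash
# Initialize swarm mode (required for overlay networks)
docker swarm init

# Create an overlay network with data-plane encryption enabled
docker network create --driver overlay --opt encrypted --attachable secure-overlay

# Services on this network communicate over encrypted tunnels between hosts
docker service create --name web --network secure-overlay nginx
```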
Security Note: Overlay networks create attack paths between hosts. A compromised container on one host can potentially reach containers on other hosts. Network policies are essential. VXLAN headers can also be spoofed if the underlay network is compromised.
Kubernetes Networking Model
Docker networking handles containers on a single host, but modern applications run across clusters of hosts with hundreds or thousands of containers. This is where Kubernetes comes in, and Kubernetes has its own networking model with different assumptions and requirements.
Understanding Kubernetes networking is essential because it's the dominant container orchestration platform. If you're working in a containerized environment, you're probably working with Kubernetes (or something similar, like Amazon ECS or Azure Container Instances, that uses comparable concepts).
Kubernetes networking is more complex than Docker's, with its own model and requirements.
The Four Kubernetes Networking Problems
Kubernetes addresses four distinct networking challenges:
- Container-to-container (within a pod): localhost
- Pod-to-pod: Flat network, every pod can reach every pod
- Pod-to-service: Stable endpoints via Services
- External-to-internal: Ingress controllers
Pod Networking
```
Kubernetes Pod Network Model:
─────────────────────────────
┌──────────────────────────────────────────────────────────────┐
│                      Kubernetes Cluster                      │
│                                                              │
│  Node 1 (10.0.1.10)           Node 2 (10.0.1.11)             │
│  ┌────────────────────┐       ┌────────────────────┐         │
│  │  ┌─────────────┐   │       │  ┌─────────────┐   │         │
│  │  │   Pod A     │   │       │  │   Pod C     │   │         │
│  │  │ 10.244.1.5  │◄──┼───────┼─►│ 10.244.2.3  │   │         │
│  │  └─────────────┘   │       │  └─────────────┘   │         │
│  │  ┌─────────────┐   │       │  ┌─────────────┐   │         │
│  │  │   Pod B     │   │       │  │   Pod D     │   │         │
│  │  │ 10.244.1.6  │   │       │  │ 10.244.2.4  │   │         │
│  │  └─────────────┘   │       │  └─────────────┘   │         │
│  │                    │       │                    │         │
│  │  Pod CIDR:         │       │  Pod CIDR:         │         │
│  │  10.244.1.0/24     │       │  10.244.2.0/24     │         │
│  └────────────────────┘       └────────────────────┘         │
│                                                              │
│         All pods can communicate directly (no NAT)           │
└──────────────────────────────────────────────────────────────┘
```
Kubernetes networking requirements:
- All pods can communicate without NAT
- All nodes can communicate with all pods without NAT
- The IP a pod sees for itself is the same IP others see
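You can observe these requirements directly on any running cluster. A quick sketch (pod names and IPs are placeholders, and the final command assumes the image includes `wget`):

```bash
# Each pod gets an IP from its node's pod CIDR
kubectl get pods -o wide

# The IP a pod sees for itself matches what others see
kubectl exec <pod-name> -- ip addr show eth0

# Pods on different nodes reach each other directly by pod IP, no NAT
kubectl exec <pod-name> -- wget -qO- http://<other-pod-ip>:8080
```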
Services
Services provide stable endpoints for dynamic pod sets:
```yaml
# Service definition
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: myapp          # Routes to pods with this label
  ports:
    - port: 80          # Service port
      targetPort: 8080  # Pod port
  type: ClusterIP       # Internal only
```
Service types:
| Type | Access | Use Case |
|---|---|---|
| ClusterIP | Internal only | Inter-service communication |
| NodePort | External via node IP:port | Development, simple exposure |
| LoadBalancer | Cloud load balancer | Production external services |
| ExternalName | DNS alias | External service integration |
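Once a Service is applied, you can verify which pods it actually routes to. A sketch using the `my-service` definition above:

```bash
kubectl apply -f service.yaml

# ClusterIP assigned from the service CIDR
kubectl get svc my-service

# Endpoints list the pod IPs currently matching the selector
kubectl get endpoints my-service

# An empty endpoints list means the selector matches no ready pods
kubectl describe svc my-service
```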
kube-proxy
kube-proxy implements Services using one of several modes:
```
kube-proxy Modes:
─────────────────
iptables mode (default):
- Creates iptables rules for each service
- Kernel-level packet processing
- Random backend selection

IPVS mode:
- Uses Linux IPVS (IP Virtual Server)
- Better performance at scale
- More load balancing options

eBPF (e.g., Cilium's kube-proxy replacement):
- Modern, efficient packet processing
- Bypasses iptables entirely
- Superior observability
```
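In iptables mode you can inspect the rules kube-proxy programs on a node. The chain names below are the ones kube-proxy actually uses; output will differ per cluster:

```bash
# Service virtual IPs are matched in the KUBE-SERVICES chain
sudo iptables -t nat -L KUBE-SERVICES -n | head

# Each service gets KUBE-SVC-* chains that DNAT traffic to pod IPs
sudo iptables -t nat -L -n | grep KUBE-SVC | head
```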
Container Network Interface (CNI)
You might be wondering: how does Kubernetes actually implement pod networking? The answer is that Kubernetes doesn't implement it directly; it delegates to plugins through a standard called CNI (Container Network Interface).
CNI is the glue between Kubernetes and the actual networking implementation. This pluggable architecture means you can choose different networking solutions depending on your needs, and your choice has significant security implications.
How CNI Works
```
CNI Plugin Flow:
────────────────
1. Kubelet creates the pod sandbox (pause container)
2. Kubelet calls the CNI plugin:
   └── /opt/cni/bin/<plugin> ADD <config>
3. CNI plugin:
   ├── Creates the network interface
   ├── Assigns an IP address
   ├── Configures routes
   └── Returns the result to kubelet
4. Pod containers start with networking ready
```
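CNI plugins are driven by JSON configuration files, conventionally under `/etc/cni/net.d/`. A minimal illustrative configuration for the reference `bridge` plugin (the network name and subnet here are examples):

```json
{
  "cniVersion": "1.0.0",
  "name": "example-pod-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```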
Popular CNI Plugins
| Plugin | Features | Best For |
|---|---|---|
| Calico | NetworkPolicy, BGP, eBPF | Production, security-focused |
| Cilium | eBPF, advanced observability | High performance, observability |
| Flannel | Simple overlay | Development, simple clusters |
| Weave | Mesh networking, encryption | Multi-cloud, encryption needed |
| AWS VPC CNI | Native AWS networking | EKS clusters |
Calico Architecture
```
Calico Architecture:
────────────────────
┌────────────────────────────────────────────────────┐
│                 Kubernetes Cluster                 │
│                                                    │
│  ┌──────────────────────────────────────────────┐  │
│  │ Felix (on each node)                         │  │
│  │ - Programs iptables/eBPF for network policy  │  │
│  │ - Manages routes                             │  │
│  └──────────────────────────────────────────────┘  │
│  ┌──────────────────────────────────────────────┐  │
│  │ BIRD (BGP daemon)                            │  │
│  │ - Distributes routes between nodes           │  │
│  │ - Enables pod-to-pod communication           │  │
│  └──────────────────────────────────────────────┘  │
│  ┌──────────────────────────────────────────────┐  │
│  │ Typha (optional)                             │  │
│  │ - Caches datastore state for scale           │  │
│  └──────────────────────────────────────────────┘  │
│  ┌──────────────────────────────────────────────┐  │
│  │ etcd / Kubernetes API                        │  │
│  │ - Stores network configuration               │  │
│  │ - Stores network policies                    │  │
│  └──────────────────────────────────────────────┘  │
└────────────────────────────────────────────────────┘
```
Network Policies
Now we arrive at the most critical security topic in container networking: Network Policies. These are Kubernetes' equivalent of firewall rules: they control which pods can talk to which other pods.
Understanding Network Policies is essential because of a dangerous default: Kubernetes allows all pods to communicate with all other pods out of the box. Without explicit Network Policies, any compromised container can reach any other container in the cluster. This is exactly how the breach in our opening story occurred.
Default Behavior
By default, Kubernetes allows all pod-to-pod traffic. This is the most common security misconfiguration in Kubernetes clusters.
```
Default (No NetworkPolicy):
───────────────────────────
All pods can communicate with all other pods:

[Frontend] ◄──────────► [Backend] ◄──────────► [Database]
     ▲                                              ▲
     └──────────────────────────────────────────────┘
         Any pod can reach any pod - dangerous!
```
Creating Network Policies
```yaml
# Deny all ingress to a namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}      # Applies to all pods
  policyTypes:
    - Ingress          # Block all incoming traffic
---
# Allow specific traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```
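Applying and verifying a policy is straightforward (resource names match the example above):

```bash
kubectl apply -f allow-frontend-to-backend.yaml

# List policies in the namespace
kubectl get networkpolicy -n production

# Inspect which pods are selected and what traffic is allowed
kubectl describe networkpolicy allow-frontend-to-backend -n production
```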
Network Policy Best Practices
```yaml
# Recommended baseline: deny all, then allow specific
---
# 1. Deny all ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-ingress
  namespace: myapp
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# 2. Deny all egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
  namespace: myapp
spec:
  podSelector: {}
  policyTypes:
    - Egress
---
# 3. Allow DNS (required for service discovery)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: myapp
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
---
# 4. Allow specific application traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: myapp
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
      ports:
        - port: 5432
```
**Common mistake:** NetworkPolicies only work if your CNI plugin supports them. Flannel alone doesn't enforce NetworkPolicies. Always verify your CNI supports network policies before relying on them for security. Test by creating a deny-all policy and confirming traffic is actually blocked.
Service Mesh
Network Policies give us basic firewall-like control over pod communication. But in complex microservice architectures, we often need more: encryption between services, fine-grained access control, observability into traffic patterns, and resilience features like retries and circuit breakers.
This is where service mesh comes in: an additional layer of infrastructure that handles service-to-service communication with advanced features built in.
What is a Service Mesh?
```
Without Service Mesh:
─────────────────────
[Microservice A] ──direct connection──► [Microservice B]
       │
       ├── Handles own TLS
       ├── Handles own retries
       ├── Handles own load balancing
       └── No visibility

With Service Mesh:
──────────────────
[Microservice A] ── [Sidecar Proxy] ───────► [Sidecar Proxy] ── [Microservice B]
                          │                        │
                          └──────────┬─────────────┘
                               Control Plane
                    ┌────────────────────────────┐
                    │ - mTLS everywhere          │
                    │ - Automatic retries        │
                    │ - Load balancing           │
                    │ - Observability            │
                    │ - Traffic policies         │
                    └────────────────────────────┘
```
Istio Architecture
```
Istio Components:
─────────────────
┌───────────────────────────────────────────────────┐
│                   Control Plane                   │
│                                                   │
│  ┌─────────────────────────────────────────────┐  │
│  │ istiod                                      │  │
│  │ - Pilot: Service discovery, config          │  │
│  │ - Citadel: Certificate management, mTLS     │  │
│  │ - Galley: Configuration validation          │  │
│  └─────────────────────────────────────────────┘  │
└───────────────────────────────────────────────────┘
┌───────────────────────────────────────────────────┐
│                    Data Plane                     │
│                                                   │
│  ┌──────────────────┐       ┌──────────────────┐  │
│  │      Pod A       │       │      Pod B       │  │
│  │ ┌──────────────┐ │       │ ┌──────────────┐ │  │
│  │ │ Application  │ │       │ │ Application  │ │  │
│  │ └──────┬───────┘ │       │ └──────┬───────┘ │  │
│  │ ┌──────┴───────┐ │ mTLS  │ ┌──────┴───────┐ │  │
│  │ │Envoy Sidecar │◄┼───────┼►│Envoy Sidecar │ │  │
│  │ └──────────────┘ │       │ └──────────────┘ │  │
│  └──────────────────┘       └──────────────────┘  │
└───────────────────────────────────────────────────┘
```
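Getting workloads into the mesh usually means installing Istio and enabling automatic sidecar injection. A minimal sketch (the profile and namespace are examples):

```bash
# Install Istio with the demo profile
istioctl install --set profile=demo -y

# Label a namespace so new pods get an Envoy sidecar injected
kubectl label namespace production istio-injection=enabled

# Restart workloads so existing pods are recreated with sidecars
kubectl rollout restart deployment -n production
```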
Service Mesh Security Features
| Feature | Description | Benefit |
|---|---|---|
| mTLS | Automatic mutual TLS between all services | Encrypted, authenticated communication |
| Authorization Policies | Fine-grained access control | Limit which services can communicate |
| JWT Validation | Token verification at mesh level | Consistent authentication |
| Rate Limiting | Traffic throttling | DoS protection |
```yaml
# Istio Authorization Policy example
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: backend-policy
  namespace: production
spec:
  selector:
    matchLabels:
      app: backend
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/production/sa/frontend"]
      to:
        - operation:
            methods: ["GET", "POST"]
            paths: ["/api/*"]
```
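The mTLS row in the table above can be enforced rather than merely enabled. A sketch of requiring strict mTLS for an entire namespace with a PeerAuthentication resource:

```yaml
# Reject plaintext connections to all sidecar-injected pods in production
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: strict-mtls
  namespace: production
spec:
  mtls:
    mode: STRICT
```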
Container Network Security
Security Threats in Container Networking
```
Container Network Attack Vectors:
─────────────────────────────────
1. Container Escape → Host Network Access
   [Container] ──escape──► [Host] ──► All network traffic

2. Lateral Movement Between Containers
   [Compromised Pod] ──no network policy──► [Database Pod]

3. Service Account Token Theft
   [Pod] ──► [Kubernetes API] ──► Cluster compromise

4. DNS Spoofing Within the Cluster
   [Malicious Pod] ──fake DNS──► [Victim Pod]

5. Network Sniffing (if privileged)
   [Privileged Container] ──tcpdump──► [All pod traffic]
```
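One inexpensive mitigation for vector 3 is to stop mounting the service account token into pods that never call the Kubernetes API:

```yaml
# Pods that don't need the Kubernetes API shouldn't carry credentials for it
apiVersion: v1
kind: Pod
metadata:
  name: no-api-access
spec:
  automountServiceAccountToken: false
  containers:
    - name: app
      image: myapp:latest
```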
Security Best Practices Checklist
```yaml
# Pod security best practices
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
    - name: app
      image: myapp:latest
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop:
            - ALL
  # No hostNetwork, hostPID, or hostIPC
```
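To audit an existing cluster against the hostNetwork item in the checklist below, a jsonpath query works (a sketch):

```bash
# List all pods running with hostNetwork enabled
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[?(@.spec.hostNetwork==true)]}{.metadata.namespace}{"\t"}{.metadata.name}{"\n"}{end}'
```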
Network Security Checklist
- NetworkPolicies enforced (deny-all baseline)
- CNI plugin supports NetworkPolicy
- Service mesh mTLS enabled (if using mesh)
- No privileged containers in production
- No hostNetwork unless absolutely required
- Pod-to-pod traffic encrypted
- Egress traffic controlled
- API server access restricted
- Service account tokens mounted only when needed
TRY IT YOURSELF
Set up a test cluster (minikube or kind) and verify NetworkPolicy enforcement:
```bash
# Install Calico on kind
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# Create a test namespace
kubectl create ns test

# Deploy test pods, exposing the web pod as a Service so its name resolves
kubectl run -n test web --image=nginx
kubectl expose pod web -n test --port=80
kubectl run -n test client --image=busybox -- sleep 3600

# Verify connectivity (should work)
kubectl exec -n test client -- wget -qO- http://web

# Apply a deny-all ingress policy
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: test
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF

# Verify connectivity is blocked
kubectl exec -n test client -- wget -qO- --timeout=3 http://web
# Should time out
```
Key Takeaways
- Docker networking provides multiple drivers (bridge, host, overlay) with different isolation and connectivity trade-offs
- Kubernetes networking requires all pods to communicate without NAT; CNI plugins implement this
- Services provide stable endpoints for dynamic pod sets; understand ClusterIP, NodePort, and LoadBalancer
- NetworkPolicies are your primary container firewall; default Kubernetes allows all traffic, and you must explicitly restrict it
- Service mesh adds mTLS, observability, and fine-grained policies at the cost of complexity
- Security requires layers: NetworkPolicies, pod security, service mesh policies, and proper configuration
Review Questions
1. What are the four networking problems Kubernetes solves?
2. Why is the default "allow all" pod networking policy a security risk?
3. How do NetworkPolicies differ from traditional firewalls?
4. What security benefits does a service mesh provide?
5. Why might a CNI plugin choice affect your security posture?
Further Reading
- Kubernetes Networking Documentation: kubernetes.io/docs/concepts/cluster-administration/networking/
- Calico Documentation: docs.projectcalico.org
- Cilium Documentation: docs.cilium.io
- Istio Documentation: istio.io/docs