Part I: Network Theory, Chapter 10

Container and Microservices Networking

Docker networking, Kubernetes network model, service mesh, CNI, and container network security

Chapter 10: Container and Microservices Networking

The Kubernetes Cluster That Exposed Millions of Records

In 2022, a major e-commerce platform suffered a significant data breach. The root cause wasn’t a sophisticated zero-day exploit or advanced malware: it was a Kubernetes NetworkPolicy misconfiguration.

A developer had deployed a new microservice that needed to access the payment processing service. To get it working quickly, they created a NetworkPolicy that allowed all ingress traffic to the payment namespace. This “temporary” configuration was never reviewed and remained in production for months.

When attackers compromised a low-privileged pod through a vulnerable dependency, they discovered unrestricted network access across the entire cluster. They pivoted from a marketing microservice to the payment system, extracting millions of credit card records.

The fix? A properly configured NetworkPolicy that would have taken 15 minutes to write correctly. The damage? Over $50 million in breach costs, regulatory fines, and reputation damage.

Container networking is deceptively simple on the surface but critically complex underneath. Understanding it is essential for securing modern applications.


Why Container Networking Matters

If cloud networking was a significant shift from traditional data centers, container networking takes that shift even further. Instead of thinking about servers and IP addresses, we’re now dealing with ephemeral workloads that may exist for only seconds, with IP addresses that change constantly.

The containerized world is different from anything we’ve discussed so far in this book. But it still relies on the same fundamental networking concepts (IP addresses, routing, DNS, firewalls), just implemented in new ways with new abstractions.

The shift to containerized applications has fundamentally changed networking:

Traditional Application:
───────────────────────

[Server 1] ─── [Server 2] ─── [Server 3]
     │              │              │
  App A          App B          App C
     │              │              │
Fixed IP       Fixed IP       Fixed IP

Containerized Application:
─────────────────────────

[Host 1] ─────────────── [Host 2] ─────────────── [Host 3]
    │                        │                        │
┌───┴───┐                ┌───┴───┐                ┌───┴───┐
│Pod Pod│                │Pod Pod│                │Pod Pod│
│Pod Pod│                │Pod    │                │Pod Pod│
│Pod    │                │Pod Pod│                │Pod    │
└───────┘                └───────┘                └───────┘

Dozens of containers    Containers come and go    Dynamic IPs
per host                constantly                constantly changing

Key differences:

Aspect          Traditional       Containerized
──────────────  ────────────────  ───────────────────────
Scale           Tens of servers   Thousands of containers
Lifetime        Months/years      Minutes/hours
IP addresses    Static            Dynamic
Network config  Manual            Automated
Isolation       Physical/VLAN     Software-defined

In this chapter, we’ll start with Docker networking fundamentals, then move to Kubernetes networking (which builds on Docker concepts but adds its own complexity), and finally explore advanced topics like Network Policies and Service Mesh. By the end, you’ll understand how container networks work and how to secure them.


Docker Networking Fundamentals

Before we can understand Kubernetes networking, we need to understand Docker networking. Docker introduced containerization to the mainstream, and its networking model forms the foundation for everything that follows.

When you run a container with Docker, you might not think about networking: you just expose a port and it works. But understanding what’s happening underneath helps you troubleshoot problems and recognize security implications.

Container Isolation

Each Docker container gets:

  • Its own network namespace
  • Its own network stack (interfaces, routing table, iptables)
  • Isolation from other containers (by default)
# Examine container network namespace
docker run --rm alpine ip addr
# Shows container's isolated network interfaces

# Compare to host
ip addr
# Different interfaces, different IPs

Docker Network Drivers

Docker provides several networking modes:

Bridge Network (Default)

Bridge Network Architecture:
───────────────────────────

       ┌───────────────────────────────────────────┐
       │              Host Machine                 │
       │                                           │
       │   ┌─────────────────────────────────┐     │
       │   │      docker0 (bridge)           │     │
       │   │        172.17.0.1               │     │
       │   └─────────┬───────────┬───────────┘     │
       │             │           │                 │
       │      ┌──────┴──┐  ┌─────┴───┐             │
       │      │Container│  │Container│             │
       │      │  .2     │  │  .3     │             │
       │      │ (veth)  │  │ (veth)  │             │
       │      └─────────┘  └─────────┘             │
       │                                           │
       │   [eth0: Host's real network interface]   │
       └───────────────────┬───────────────────────┘
                           │
                      External Network

  • Containers connect to a virtual bridge (docker0)
  • Each container gets a virtual ethernet (veth) pair
  • NAT provides outbound internet access
  • Port mapping exposes services externally
# Create container on default bridge
docker run -d --name web nginx

# Map port 80 to host port 8080
docker run -d -p 8080:80 --name web2 nginx

# Inspect network
docker network inspect bridge

Host Network

Container shares host’s network namespace directly:

docker run --network=host nginx
# Container uses host's IP directly
# No network isolation
# No port mapping needed

Security implications: Container can see all host network traffic and bind to any port.

None Network

Container has no network connectivity:

docker run --network=none alpine
# Only loopback interface
# Maximum isolation

Custom Bridge Networks

User-defined bridges with better features:

# Create custom network
docker network create --driver bridge mynetwork

# Run containers on custom network
docker run -d --name app1 --network mynetwork nginx
docker run -d --name app2 --network mynetwork alpine sleep 1000

# Containers can reach each other by name
docker exec app2 ping -c 3 app1  # DNS resolution works!

Custom bridge advantages:

  • Automatic DNS resolution between containers
  • Better isolation (containers only see their network)
  • Can connect/disconnect containers dynamically

Overlay Network

Spans multiple Docker hosts for swarm/cluster deployments:

Overlay Network:
───────────────

┌─────────────────┐         ┌─────────────────┐
│     Host 1      │         │     Host 2      │
│                 │         │                 │
│ ┌─────┐ ┌─────┐ │ VXLAN   │ ┌─────┐ ┌─────┐ │
│ │ C1  │ │ C2  │ │◄───────►│ │ C3  │ │ C4  │ │
│ └──┬──┘ └──┬──┘ │ Tunnel  │ └──┬──┘ └──┬──┘ │
│    │       │    │         │    │       │    │
│    └───┬───┘    │         │    └───┬───┘    │
│    [overlay]    │         │    [overlay]    │
│                 │         │                 │
└────────┬────────┘         └────────┬────────┘
         │                           │
         └─────────────┬─────────────┘
                       │
                Physical Network

  • Uses VXLAN encapsulation
  • Containers across hosts appear on the same network
  • Encrypted option available

Security Note: Overlay networks create attack paths between hosts. A compromised container on one host can potentially reach containers on other hosts. Network policies are essential. VXLAN headers can also be spoofed if the underlay network is compromised.


Kubernetes Networking Model

Docker networking handles containers on a single host, but modern applications run across clusters of hosts with hundreds or thousands of containers. This is where Kubernetes comes in, and Kubernetes has its own networking model with different assumptions and requirements.

Understanding Kubernetes networking is essential because it’s the dominant container orchestration platform. If you’re working in a containerized environment, you’re probably working with Kubernetes (or a platform such as Amazon ECS or Azure Container Instances that uses similar concepts).

Kubernetes networking is more complex than Docker, with its own networking model and requirements.

The Four Kubernetes Networking Problems

Kubernetes addresses four distinct networking challenges:

  1. Container-to-container (within a pod): localhost
  2. Pod-to-pod: Flat network, every pod can reach every pod
  3. Pod-to-service: Stable endpoints via Services
  4. External-to-internal: Ingress controllers

Pod Networking

Kubernetes Pod Network Model:
────────────────────────────

┌─────────────────────────────────────────────────────────────────┐
│                      Kubernetes Cluster                         │
│                                                                 │
│    Node 1 (10.0.1.10)              Node 2 (10.0.1.11)           │
│    ┌────────────────────┐          ┌────────────────────┐       │
│    │   ┌─────────────┐  │          │   ┌─────────────┐  │       │
│    │   │ Pod A       │  │          │   │ Pod C       │  │       │
│    │   │ 10.244.1.5  │  │◄────────►│   │ 10.244.2.3  │  │       │
│    │   └─────────────┘  │          │   └─────────────┘  │       │
│    │   ┌─────────────┐  │          │   ┌─────────────┐  │       │
│    │   │ Pod B       │  │          │   │ Pod D       │  │       │
│    │   │ 10.244.1.6  │  │          │   │ 10.244.2.4  │  │       │
│    │   └─────────────┘  │          │   └─────────────┘  │       │
│    │                    │          │                    │       │
│    │   Pod CIDR:        │          │   Pod CIDR:        │       │
│    │   10.244.1.0/24    │          │   10.244.2.0/24    │       │
│    └────────────────────┘          └────────────────────┘       │
│                                                                 │
│    All pods can communicate directly (no NAT)                   │
└─────────────────────────────────────────────────────────────────┘

Kubernetes networking requirements:

  • All pods can communicate without NAT
  • All nodes can communicate with all pods without NAT
  • The IP a pod sees for itself is the same IP others see

Services

Services provide stable endpoints for dynamic pod sets:

# Service definition
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: myapp        # Routes to pods with this label
  ports:
  - port: 80          # Service port
    targetPort: 8080  # Pod port
  type: ClusterIP     # Internal only

Service types:

Type          Access                      Use Case
────────────  ──────────────────────────  ────────────────────────────
ClusterIP     Internal only               Inter-service communication
NodePort      External via node IP:port   Development, simple exposure
LoadBalancer  Cloud load balancer         Production external services
ExternalName  DNS alias                   External service integration
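As a sketch of how the type changes exposure, here is a hypothetical NodePort variant of the Service above (the name and nodePort value are illustrative, not from the original example):

```yaml
# Hypothetical NodePort Service: same selector as before, now
# reachable from outside the cluster at <any-node-IP>:30080.
apiVersion: v1
kind: Service
metadata:
  name: my-service-external   # illustrative name
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080           # must fall in the NodePort range (default 30000-32767)
  type: NodePort
```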

kube-proxy

kube-proxy implements Services using one of several modes:

kube-proxy Modes:
────────────────

iptables mode (default):
- Creates iptables rules for each service
- Kernel-level packet processing
- Random backend selection

IPVS mode:
- Uses Linux IPVS (IP Virtual Server)
- Better performance at scale
- More load balancing options

eBPF replacement (e.g. Cilium):
- Modern, efficient packet processing
- Replaces kube-proxy, bypassing iptables entirely
- Superior observability
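The mode kube-proxy runs in is set through its configuration file. A minimal sketch using the kubeproxy.config.k8s.io API (verify the apiVersion and field names against your cluster version before applying; v1alpha1 is shown here):

```yaml
# Sketch: run kube-proxy in IPVS mode instead of the iptables default.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; IPVS also offers least-connection and others
```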

Container Network Interface (CNI)

You might be wondering: how does Kubernetes actually implement pod networking? The answer is that Kubernetes doesn’t implement it directly; it delegates to plugins through a standard called CNI (Container Network Interface).

CNI is the glue between Kubernetes and the actual networking implementation. This pluggable architecture means you can choose different networking solutions depending on your needs, and your choice has significant security implications.

How CNI Works

CNI Plugin Flow:
───────────────

1. Kubelet creates pod sandbox (pause container)
   
2. Kubelet calls CNI plugin:
   └── /opt/cni/bin/<plugin> ADD <config>
   
3. CNI plugin:
   └── Creates network interface
   └── Assigns IP address
   └── Configures routes
   └── Returns result to kubelet
   
4. Pod containers start with networking ready
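Each CNI plugin is driven by a JSON network configuration, conventionally stored under /etc/cni/net.d/ on the node. A minimal sketch for the reference bridge plugin (the network name, bridge name, and subnet are illustrative):

```json
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}
```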
Popular CNI plugins:

Plugin       Features                      Best For
───────────  ────────────────────────────  ───────────────────────────────
Calico       NetworkPolicy, BGP, eBPF      Production, security-focused
Cilium       eBPF, advanced observability  High performance, observability
Flannel      Simple overlay                Development, simple clusters
Weave        Mesh networking, encryption   Multi-cloud, encryption needed
AWS VPC CNI  Native AWS networking         EKS clusters

Calico Architecture

Calico Architecture:
───────────────────

┌─────────────────────────────────────────────────────────────────┐
│                     Kubernetes Cluster                          │
│                                                                 │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │                    Felix (on each node)                   │  │
│  │  - Programs iptables/eBPF for network policy              │  │
│  │  - Manages routes                                         │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │                    BIRD (BGP daemon)                      │  │
│  │  - Distributes routes between nodes                       │  │
│  │  - Enables pod-to-pod communication                       │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │                    Typha (optional)                       │  │
│  │  - Caches datastore for scale                             │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │                    etcd / Kubernetes API                  │  │
│  │  - Stores network configuration                           │  │
│  │  - Stores network policies                                │  │
│  └───────────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────────┘

Network Policies

Now we arrive at the most critical security topic in container networking: Network Policies. They are Kubernetes’ equivalent of firewall rules, controlling which pods can talk to which other pods.

Understanding Network Policies is essential because of a dangerous default: Kubernetes allows all pods to communicate with all other pods by default. Without explicit Network Policies, any compromised container can reach any other container in the cluster. This is exactly how the breach in our opening story occurred.

Default Behavior

By default, Kubernetes allows all pod-to-pod traffic. This is the most common security misconfiguration in Kubernetes clusters.

Default (No NetworkPolicy):
──────────────────────────

All pods can communicate with all other pods:

[Frontend] ◄──────────► [Backend] ◄──────────► [Database]
     │                       │                      │
     └───────────────────────┴──────────────────────┘

     Any pod can reach any pod - dangerous!

Creating Network Policies

# Deny all ingress to a namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}    # Applies to all pods
  policyTypes:
  - Ingress          # Block all incoming traffic
# Allow specific traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080

Network Policy Best Practices

# Recommended baseline: deny all, then allow specific
---
# 1. Deny all ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-ingress
  namespace: myapp
spec:
  podSelector: {}
  policyTypes:
  - Ingress

---
# 2. Deny all egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
  namespace: myapp
spec:
  podSelector: {}
  policyTypes:
  - Egress

---
# 3. Allow DNS (required for service discovery)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: myapp
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

---
# 4. Allow specific application traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: myapp
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - port: 5432

COMMON MISTAKE

NetworkPolicies only work if your CNI plugin supports them. Flannel alone doesn’t enforce NetworkPolicies. Always verify your CNI supports network policies before relying on them for security. Test by creating a deny-all policy and confirming traffic is actually blocked.


Service Mesh

Network Policies give us basic firewall-like control over pod communication. But in complex microservice architectures, we often need more: encryption between services, fine-grained access control, observability into traffic patterns, and resilience features like retries and circuit breakers.

This is where service mesh comes in: an additional layer of infrastructure that handles service-to-service communication with advanced features built in.

What is a Service Mesh?

Without Service Mesh:
────────────────────

[Microservice A] ──direct connection──► [Microservice B]
     │
     ├── Handles own TLS
     ├── Handles own retries
     ├── Handles own load balancing
     └── No visibility

With Service Mesh:
─────────────────

[Microservice A] ── [Sidecar Proxy] ◄─────► [Sidecar Proxy] ── [Microservice B]
                          │                        │
                          └────────────────────────┘
                                 Control Plane
                          ┌──────────────────────────┐
                          │ - mTLS everywhere        │
                          │ - Automatic retries      │
                          │ - Load balancing         │
                          │ - Observability          │
                          │ - Traffic policies       │
                          └──────────────────────────┘

Istio Architecture

Istio Components:
────────────────

┌─────────────────────────────────────────────────────────────────┐
│                       Control Plane                             │
│                                                                 │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │                        istiod                           │    │
│  │  - Pilot: Service discovery, config                     │    │
│  │  - Citadel: Certificate management, mTLS                │    │
│  │  - Galley: Configuration validation                     │    │
│  └─────────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│                        Data Plane                               │
│                                                                 │
│   ┌───────────────────────┐      ┌───────────────────────┐      │
│   │        Pod A          │      │        Pod B          │      │
│   │  ┌─────────────────┐  │      │  ┌─────────────────┐  │      │
│   │  │   Application   │  │      │  │   Application   │  │      │
│   │  └────────┬────────┘  │      │  └────────┬────────┘  │      │
│   │           │           │      │           │           │      │
│   │  ┌────────┴────────┐  │      │  ┌────────┴────────┐  │      │
│   │  │  Envoy Sidecar  │◄─┼──────┼─►│  Envoy Sidecar  │  │      │
│   │  └─────────────────┘  │      │  └─────────────────┘  │      │
│   └───────────────────────┘      └───────────────────────┘      │
└─────────────────────────────────────────────────────────────────┘

Service Mesh Security Features

Feature                 Description                                Benefit
──────────────────────  ─────────────────────────────────────────  ──────────────────────────────────────
mTLS                    Automatic mutual TLS between all services  Encrypted, authenticated communication
Authorization Policies  Fine-grained access control                Limit which services can communicate
JWT Validation          Token verification at mesh level           Consistent authentication
Rate Limiting           Traffic throttling                         DoS protection

# Istio Authorization Policy example
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: backend-policy
  namespace: production
spec:
  selector:
    matchLabels:
      app: backend
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/production/sa/frontend"]
    to:
    - operation:
        methods: ["GET", "POST"]
        paths: ["/api/*"]
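Service mesh mTLS is typically enforced with a separate resource. A sketch using Istio's PeerAuthentication (this is an assumption about your mesh setup; applying it in the root namespace, usually istio-system, makes it mesh-wide):

```yaml
# Sketch: require mutual TLS for all sidecar-injected workloads.
# STRICT mode rejects any plaintext traffic.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace => mesh-wide policy
spec:
  mtls:
    mode: STRICT
```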

Container Network Security

Security Threats in Container Networking

Container Network Attack Vectors:
────────────────────────────────

1. Container Escape → Host Network Access
   [Container] ──escape──► [Host] ──► All network traffic

2. Lateral Movement Between Containers
   [Compromised Pod] ──no network policy──► [Database Pod]

3. Service Account Token Theft
   [Pod] ──► [Kubernetes API] ──► Cluster compromise

4. DNS Spoofing within Cluster
   [Malicious Pod] ──fake DNS──► [Victim Pod]

5. Network Sniffing (if privileged)
   [Privileged Container] ──tcpdump──► [All pod traffic]

Security Best Practices Checklist

# Pod Security Best Practices
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: app
    image: myapp:latest
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
          - ALL
    # No hostNetwork, hostPID, hostIPC

Network Security Checklist

  • NetworkPolicies enforced (deny-all baseline)
  • CNI plugin supports NetworkPolicy
  • Service mesh mTLS enabled (if using mesh)
  • No privileged containers in production
  • No hostNetwork unless absolutely required
  • Pod-to-pod traffic encrypted
  • Egress traffic controlled
  • API server access restricted
  • Service account tokens mounted only when needed

TRY IT YOURSELF

Set up a test cluster (minikube or kind) and verify NetworkPolicy enforcement:

# Install Calico (on kind, create the cluster with its default CNI disabled first)
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# Create test namespace
kubectl create ns test

# Deploy test pods (a Service is needed because pod names are not DNS-resolvable)
kubectl run -n test web --image=nginx
kubectl expose -n test pod web --port=80
kubectl run -n test client --image=busybox -- sleep 3600

# Verify connectivity (should work)
kubectl exec -n test client -- wget -qO- http://web

# Apply deny-all policy
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: test
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF

# Verify connectivity blocked
kubectl exec -n test client -- wget -qO- --timeout=3 http://web
# Should timeout

Key Takeaways

  1. Docker networking provides multiple drivers (bridge, host, overlay) with different isolation and connectivity trade-offs

  2. Kubernetes networking requires all pods to communicate without NAT; CNI plugins implement this

  3. Services provide stable endpoints for dynamic pod sets; understand ClusterIP, NodePort, LoadBalancer

  4. NetworkPolicies are your primary container firewall; Kubernetes allows all traffic by default, and you must explicitly restrict it

  5. Service mesh adds mTLS, observability, and fine-grained policies at the cost of complexity

  6. Security requires layers: NetworkPolicies, pod security, service mesh policies, and proper configuration


Review Questions

  1. What are the four networking problems Kubernetes solves?

  2. Why is Kubernetes’ default “allow all” pod networking behavior a security risk?

  3. How do NetworkPolicies differ from traditional firewalls?

  4. What security benefits does a service mesh provide?

  5. Why might a CNI plugin choice affect your security posture?


Further Reading

  • Kubernetes Networking Documentation: kubernetes.io/docs/concepts/cluster-administration/networking/
  • Calico Documentation: docs.projectcalico.org
  • Cilium Documentation: docs.cilium.io
  • Istio Documentation: istio.io/docs