Kubernetes Network Policies Explained: Secure Your Cluster

A deep dive into Kubernetes Network Policies, how to restrict traffic, and best practices for securing your workloads.

Hari Prasad
June 12, 2024

Kubernetes Network Policies allow you to control traffic flow at the IP address or port level between pods, providing a crucial security layer in your cluster. In this comprehensive guide, we’ll explore how to implement zero-trust networking in Kubernetes.

Why Network Policies Matter

In a Kubernetes cluster without network policies, all pods can communicate with each other by default. This creates security risks:

  • Lateral Movement: Compromised pods can access other pods
  • Data Exfiltration: Malicious actors can extract sensitive data
  • Compliance Violations: Fails to meet zero-trust security requirements
  • Attack Surface: Exposes unnecessary services to potential threats

Network Policies implement microsegmentation, limiting blast radius and enforcing least-privilege access.

Prerequisites: CNI Plugin Support

Network Policies require a CNI plugin that supports them:

  • Calico - Most feature-rich, supports advanced policies
  • Cilium - eBPF-based, high performance
  • Weave Net - Simple setup, good for small clusters
  • Antrea - VMware's solution, feature-rich
  • Flannel - Basic CNI, does not support Network Policies

Check your CNI:

kubectl get pods -n kube-system | grep -E 'calico|cilium|weave|antrea'

# For AKS (shows the CNI plugin and the network policy engine in use)
az aks show --resource-group myResourceGroup --name myAKSCluster \
  --query "networkProfile.{plugin:networkPlugin, policy:networkPolicy}"

# For EKS
kubectl describe daemonset -n kube-system aws-node

# For GKE
gcloud container clusters describe my-cluster --format="value(networkPolicy)"

Getting Started: Basic Concepts

Network Policies control:

  1. Ingress Traffic: Incoming connections to pods
  2. Egress Traffic: Outgoing connections from pods

Key components (the annotated skeleton after this list shows how they fit together):

  • podSelector: Which pods the policy applies to
  • policyTypes: Ingress, Egress, or both
  • ingress/egress rules: Allow specific traffic
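
Here is a minimal annotated skeleton that combines all three components; every name and label in it is a placeholder, not something from a real cluster:

# networkpolicy-skeleton.yaml (illustrative placeholders throughout)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy
  namespace: my-namespace
spec:
  podSelector:              # which pods this policy applies to
    matchLabels:
      app: my-app
  policyTypes:              # which traffic directions it governs
  - Ingress
  - Egress
  ingress:                  # who may connect in, and on which ports
  - from:
    - podSelector:
        matchLabels:
          app: my-client
    ports:
    - protocol: TCP
      port: 80
  egress:                   # where the selected pods may connect out
  - to:
    - podSelector:
        matchLabels:
          app: my-dependency
    ports:
    - protocol: TCP
      port: 443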

Example 1: Deny All Ingress Traffic (Default Deny)

Start with a default-deny posture:

# deny-all-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: production
spec:
  podSelector: {}  # Empty selector = all pods in namespace
  policyTypes:
  - Ingress

Apply it:

kubectl apply -f deny-all-ingress.yaml

# Verify
kubectl get networkpolicy -n production
kubectl describe networkpolicy deny-all-ingress -n production
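
To confirm the deny is actually in effect, try reaching any Service in the namespace from a throwaway pod; the Service name web below is a placeholder for one of your own:

# Quick check from outside the namespace (replace "web" with a real Service)
kubectl run np-check --image=busybox -n default --rm -it --restart=Never -- \
  wget -qO- -T 5 http://web.production.svc.cluster.local
# The request should now time out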

Example 2: Allow Traffic from Specific Pods

Allow only frontend pods to access backend:

# allow-frontend-to-backend.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
      tier: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
          tier: web
    ports:
    - protocol: TCP
      port: 8080

Visualization:

┌─────────────┐
│  Frontend   │
│  (allowed)  │
└──────┬──────┘
       │ :8080
       ▼
┌─────────────┐
│   Backend   │
│   :8080     │
└─────────────┘
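
The policy only matches pods whose labels line up with the selectors, so verify them before debugging anything else:

# Pods protected by the policy (need app=backend,tier=api)
kubectl get pods -n production -l app=backend,tier=api --show-labels

# Pods allowed as sources (need app=frontend,tier=web)
kubectl get pods -n production -l app=frontend,tier=web --show-labels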

Example 3: Allow from Specific Namespace

Enable cross-namespace communication:

# allow-from-monitoring.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: monitoring
    ports:
    - protocol: TCP
      port: 9090  # Prometheus scrape port

Label your namespace:

kubectl label namespace monitoring name=monitoring
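
On Kubernetes 1.21+, every namespace automatically carries the immutable label kubernetes.io/metadata.name, so you can match on that instead of labelling namespaces by hand:

ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: monitoring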

Example 4: Egress Control - Database Access

Control outbound traffic to database:

# allow-egress-to-database.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-database
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres
    ports:
    - protocol: TCP
      port: 5432
  - to:  # Allow DNS
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53

Important: Always allow DNS unless you want to break everything!
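
In most clusters the CoreDNS/kube-dns pods carry the label k8s-app: kube-dns, and large DNS responses fall back to TCP, so a tighter DNS rule looks like this (adjust the label if your DNS pods use a different one):

egress:
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: kube-system
    podSelector:
      matchLabels:
        k8s-app: kube-dns   # common label for CoreDNS/kube-dns; verify in your cluster
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53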

Example 5: Internet Egress Control

Restrict internet access while allowing internal traffic:

# restrict-internet-egress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-internet-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Egress
  egress:
  # Allow traffic to other pods in this namespace
  - to:
    - podSelector: {}
  # Allow DNS
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
  # Allow specific external services
  - to:
    - ipBlock:
        cidr: 52.85.0.0/16  # AWS CloudFront
  - to:
    - ipBlock:
        cidr: 104.16.0.0/12  # Cloudflare
    ports:
    - protocol: TCP
      port: 443

Advanced: Complete Three-Tier Application

Secure a complete application stack:

# three-tier-network-policies.yaml
---
# Frontend Policy: Accept traffic from the ingress controller, talk to Backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress-nginx
    ports:
    - protocol: TCP
      port: 3000
  egress:
  # Talk to backend
  - to:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 8080
  # DNS
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53

---
# Backend Policy: Accept from Frontend, talk to Database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  # Talk to database
  - to:
    - podSelector:
        matchLabels:
          tier: database
    ports:
    - protocol: TCP
      port: 5432
  # DNS
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
  # External API (optional)
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32  # Block AWS metadata
        - 10.0.0.0/8          # Block internal
        - 172.16.0.0/12       # Block internal
        - 192.168.0.0/16      # Block internal
    ports:
    - protocol: TCP
      port: 443

---
# Database Policy: Accept only from Backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 5432
  egress:
  # DNS only
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53

Apply all policies:

kubectl apply -f three-tier-network-policies.yaml

# Verify
kubectl get networkpolicies -n production

Testing Network Policies

Use a debug pod to test connectivity:

# Create a test pod with labels that match your policies (treated here as a frontend pod)
kubectl run test-pod --image=nicolaka/netshoot --labels="tier=frontend" -it --rm -n production

# Inside the pod:
# Test DNS
nslookup kubernetes.default

# Test connectivity to backend
curl -v http://backend-service:8080

# Test blocked connection
curl -v http://database-service:5432
# Should timeout or be refused

# Exit
exit

Automated Testing Script

#!/bin/bash
# test-network-policies.sh

NAMESPACE="${1:-production}"   # namespace can be passed as the first argument
TIMEOUT=5

echo "Testing Network Policies in $NAMESPACE..."

# Test 1: Frontend to Backend (should succeed)
echo "Test 1: Frontend -> Backend"
kubectl run temp-frontend --image=busybox -n $NAMESPACE \
  --labels="tier=frontend" --rm -it --restart=Never -- \
  timeout $TIMEOUT wget -qO- http://backend-service:8080 && \
  echo "✅ PASS" || echo "❌ FAIL"

# Test 2: Frontend to Database (should be blocked)
# nc -z only tests TCP reachability; it needs a busybox build with the full netcat applet
echo "Test 2: Frontend -> Database (should fail)"
kubectl run temp-frontend-db --image=busybox -n $NAMESPACE \
  --labels="tier=frontend" --rm -it --restart=Never -- \
  nc -z -w $TIMEOUT database-service 5432 && \
  echo "❌ FAIL: Should be blocked!" || echo "✅ PASS: Correctly blocked"

# Test 3: Backend to Database (should succeed)
# TCP reachability check works here even though Postgres is not an HTTP service
echo "Test 3: Backend -> Database"
kubectl run temp-backend --image=busybox -n $NAMESPACE \
  --labels="tier=backend" --rm -it --restart=Never -- \
  nc -z -w $TIMEOUT database-service 5432 && \
  echo "✅ PASS" || echo "❌ FAIL"
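
Make the script executable and pass the target namespace as the first argument (it falls back to production when omitted):

chmod +x test-network-policies.sh
./test-network-policies.sh development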

Monitoring & Troubleshooting

View Policy Details

# List all policies
kubectl get networkpolicies --all-namespaces

# Describe specific policy
kubectl describe networkpolicy frontend-policy -n production

# Get YAML output
kubectl get networkpolicy backend-policy -n production -o yaml

Using Cilium for Enhanced Visibility

If using Cilium CNI:

# Install Cilium CLI
curl -L --remote-name-all \
  https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
tar xzvf cilium-linux-amd64.tar.gz
sudo mv cilium /usr/local/bin/

# Watch dropped traffic ('cilium monitor' runs inside an agent pod, not via the cilium CLI)
kubectl -n kube-system exec -it ds/cilium -- cilium monitor --type drop

# Check connectivity
cilium connectivity test

# Visualize flows with Hubble (requires Hubble enabled and the separate hubble CLI)
cilium hubble port-forward &
hubble observe --follow

Common Issues

Problem: Policies not taking effect

# Check CNI supports network policies
kubectl get pods -n kube-system -o wide

# Verify policy exists
kubectl get netpol -A

# Check pod labels
kubectl get pods --show-labels -n production

Problem: Can’t reach DNS

# Always add this egress rule
egress:
- to:
  - namespaceSelector:
      matchLabels:
        name: kube-system
  ports:
  - protocol: UDP
    port: 53

Best Practices

1. Start with Default Deny

# Apply to every namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
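
To roll this out cluster-wide, a small loop does the job; this sketch assumes the manifest above is saved as default-deny-all.yaml and deliberately skips kube-system:

# Apply the default-deny policy to every namespace except kube-system
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  if [ "$ns" != "kube-system" ]; then
    kubectl apply -f default-deny-all.yaml -n "$ns"
  fi
done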

2. Use Meaningful Labels

# Good labels
labels:
  app: myapp
  tier: backend
  env: production
  team: platform

# Bad labels
labels:
  name: pod1
  type: thing

3. Document Your Policies

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-gateway-policy
  namespace: production
  annotations:
    description: "Allows ingress from nginx-ingress to API gateway on port 8080"
    owner: "platform-team@example.com"
    last-reviewed: "2024-06-01"
spec:
  # ... policy rules

4. Use GitOps

Store policies in version control:

network-policies/
├── base/
│   ├── default-deny.yaml
│   └── kustomization.yaml
├── production/
│   ├── frontend-policy.yaml
│   ├── backend-policy.yaml
│   ├── database-policy.yaml
│   └── kustomization.yaml
└── development/
    ├── allow-all.yaml
    └── kustomization.yaml
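
Each environment directory then just lists its policies. A minimal production kustomization.yaml for the layout above might look like this:

# network-policies/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production
resources:
  - ../base
  - frontend-policy.yaml
  - backend-policy.yaml
  - database-policy.yaml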

5. Test in Non-Production First

# Apply the development overlay first
kubectl apply -k network-policies/development

# Test thoroughly
./test-network-policies.sh development

# Then promote to production
kubectl apply -k network-policies/production

Integration with Service Mesh

Combine Network Policies with service mesh for defense in depth:

# Istio + Network Policy Example
---
# Network Policy: L3/L4 security
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-l3-l4-policy
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend

---
# Istio: L7 security + mTLS
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: backend-l7-policy
spec:
  selector:
    matchLabels:
      app: backend
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/production/sa/frontend"]
    to:
    - operation:
        methods: ["GET", "POST"]
        paths: ["/api/*"]

Policy Management Tools

1. Inspektor Gadget

# Install the kubectl plugin via krew, then deploy Inspektor Gadget to the cluster
kubectl krew install gadget
kubectl gadget deploy

# Monitor network activity
kubectl gadget network-policy monitor

2. Network Policy Viewer

# Install as a kubectl plugin
kubectl krew install np-viewer

# Visualize policies
kubectl np-viewer -n production

3. Hubble UI and the Cilium Network Policy Editor

# Access Hubble UI for live flow visualization
kubectl port-forward -n kube-system service/hubble-ui 12000:80
# Visit http://localhost:12000

# Policies can also be drafted visually in Cilium's hosted editor:
# https://editor.networkpolicy.io

Security Checklist

✅ Default deny policy in every namespace
✅ Explicit allow rules for required traffic
✅ DNS egress allowed to kube-system
✅ Block cloud metadata endpoints (169.254.169.254)
✅ Restrict internet egress to specific CIDRs
✅ Use namespace selectors for cross-namespace traffic
✅ Label all pods and namespaces consistently
✅ Document policies with annotations
✅ Test policies in development before production
✅ Monitor policy violations with CNI tools
✅ Regular policy audits and reviews
✅ Version control all network policies

Conclusion

Kubernetes Network Policies are essential for implementing zero-trust networking and securing your cluster. By starting with a default-deny posture and explicitly allowing only necessary traffic, you significantly reduce your attack surface and limit the blast radius of security incidents.

Remember: Security is a journey, not a destination. Regularly review and update your network policies as your application evolves.

Questions about network policies? Drop a comment below!

Author

Hari Prasad

Seasoned DevOps Lead with 11+ years of expertise in cloud infrastructure, CI/CD automation, and infrastructure as code. Proven track record in designing scalable, secure systems on AWS using Terraform, Kubernetes, Jenkins, and Ansible. Strong leadership in mentoring teams and implementing cost-effective cloud solutions.
