I was building a local dev environment with two services behind a single ingress. The setup wasn't complicated — a kind cluster, two deployments, one ingress-nginx controller routing traffic based on path. Should be done in 20 minutes.
Two hours later, I had learned two things that are not prominently documented anywhere and which will break your setup if you miss either of them.
The Goal
Two HTTP services exposed through a single local domain via ingress-nginx:
- `localhost/api/users` → users service
- `localhost/api/orders` → orders service

(The demo services below use `hashicorp/http-echo`, so the backing pods listen on port 5678 behind a port-80 Service. The routing is the point here, not the ports.)
No cloud provider. No NodePort hacks. Just a clean ingress setup that mirrors what you'd run in production.
Step 1: The kind Cluster Config
kind runs Kubernetes inside Docker containers. To expose the ingress controller to localhost:80 and localhost:443, you need to map the container's ports to your host — and critically, label the node that will run the ingress controller.
# kind-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker
The ingress-ready=true label on the control-plane node is important — we'll use it shortly. Create the cluster:
kind create cluster --config kind-cluster.yaml --name dev
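Before moving on, it's worth confirming the label actually landed and that Docker published the ports. (The container name `dev-control-plane` follows from `--name dev`; adjust if you named the cluster differently.)

```shell
# The control-plane node should appear here; if the list is empty,
# the kubeadmConfigPatches block in kind-cluster.yaml didn't take.
kubectl get nodes -l ingress-ready=true

# The kind node container should show 80 and 443 mapped to the host.
docker port dev-control-plane
```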
Step 2: Deploy Two Services
Two minimal services using the hashicorp/http-echo image, which returns a fixed string so you can confirm routing works.
# services.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users
spec:
  replicas: 1
  selector:
    matchLabels:
      app: users
  template:
    metadata:
      labels:
        app: users
    spec:
      containers:
      - name: users
        image: hashicorp/http-echo
        args: ["-text=hello from users"]
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: users-service
spec:
  selector:
    app: users
  ports:
  - port: 80
    targetPort: 5678
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: hashicorp/http-echo
        args: ["-text=hello from orders"]
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: orders-service
spec:
  selector:
    app: orders
  ports:
  - port: 80
    targetPort: 5678
kubectl apply -f services.yaml
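Before involving ingress at all, you can sanity-check the Services from inside the cluster with a throwaway curl pod. (This assumes the default namespace; the DNS name is the standard `<service>.<namespace>.svc.cluster.local` form, and `curlimages/curl` uses `curl` as its entrypoint, so the arguments after `--` go straight to curl.)

```shell
# Hit the users Service by its cluster DNS name; http-echo should
# answer with the -text string from services.yaml.
kubectl run tmp-curl --rm -i --restart=Never --image=curlimages/curl -- \
  -s http://users-service.default.svc.cluster.local
```

If this works but `curl localhost` later doesn't, the problem is in the ingress layer, not your services.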
Gotcha #1: Use the kind-Specific Ingress Manifest
This is where most tutorials skip a critical detail.
The ingress-nginx project ships separate manifests for different environments:
| Environment | Manifest |
|---|---|
| Cloud (AWS, GCP, Azure) | deploy/static/provider/cloud/deploy.yaml |
| kind (local) | deploy/static/provider/kind/deploy.yaml |
| Bare metal | deploy/static/provider/baremetal/deploy.yaml |
If you apply the generic or cloud manifest to a kind cluster, the ingress controller will start — it'll even show as Running — but traffic from localhost:80 will never reach it. The pod is listening on the right port inside the container, but kind has no idea where to forward host traffic.
The kind-specific manifest is pre-configured with the hostPort bindings and control-plane tolerations that make port forwarding through kind actually work.
# Wrong — works in cloud, silently broken in kind
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
# Correct — use the kind provider
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
Gotcha #2: Patch the Node Selector
Here's the second one that got me. Even with the correct kind manifest applied, ingress still won't work.
The problem: the ingress-nginx controller pod will schedule on any available node — likely a worker — but the port mappings from extraPortMappings in your cluster config only exist on the control-plane node. Traffic hits localhost:80, kind forwards it to the control-plane container, but the ingress controller isn't running there.
The fix is to patch the ingress-nginx deployment to add a nodeSelector that forces the pod onto the control-plane node (the one labeled ingress-ready=true in your cluster config):
kubectl patch deployment ingress-nginx-controller \
  -n ingress-nginx \
  --type=json \
  -p='[
    {
      "op": "add",
      "path": "/spec/template/spec/nodeSelector",
      "value": {
        "ingress-ready": "true"
      }
    }
  ]'
Wait for it to be ready:
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
This is the step that's almost never mentioned. The kind ingress docs hint at it, but it's easy to miss. Without this patch, the controller is running on the wrong node and your curl localhost will time out forever.
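With the patch in place, you can confirm the scheduling directly instead of inferring it from timeouts:

```shell
# Show which node the controller pod actually landed on; it should be
# the control-plane node (dev-control-plane, given --name dev).
kubectl get pod -n ingress-nginx \
  -l app.kubernetes.io/component=controller \
  -o custom-columns=POD:.metadata.name,NODE:.spec.nodeName
```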
Step 3: Create the Ingress Resource
With the controller on the right node, wire up your two services:
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: local-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /api/users
        pathType: Prefix
        backend:
          service:
            name: users-service
            port:
              number: 80
      - path: /api/orders
        pathType: Prefix
        backend:
          service:
            name: orders-service
            port:
              number: 80
kubectl apply -f ingress.yaml
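A quick describe shows whether the controller picked up the rules and resolved both backends to real endpoints:

```shell
# Backends shown as "users-service:80 (<pod-ip>:5678)" mean the Service
# selectors matched a pod; an "endpoints not found" error means they didn't.
kubectl describe ingress local-ingress
```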
Verify It Works
curl localhost/api/users
# hello from users
curl localhost/api/orders
# hello from orders
If either of those hangs or returns a 404, run through this checklist:
# Is the controller pod on the right node?
kubectl get pod -n ingress-nginx -o wide
# SHOULD show it on the node with 'ingress-ready=true'
# Check which node has the label
kubectl get nodes --show-labels | grep ingress-ready
# Is the ingress class set?
kubectl get ingressclass
# Should show 'nginx'
# Check controller logs for routing errors
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller
Why This Isn't Obvious
The core issue is that kind is not just "Kubernetes on your machine" — it's Kubernetes running inside Docker containers, with an extra networking layer between your host and the cluster. Standard ingress-nginx manifests are built assuming a cloud load balancer or bare-metal host network that doesn't have this extra layer.
The kind-specific manifest handles the hostPort binding. The node selector patch ensures the pod that binds those ports is actually on the node where kind mapped them from your host.
Miss either of these and everything looks like it's running — the controller is up, the ingress object has an address, the services respond inside the cluster — but localhost just times out.
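One way to split that failure in half from the host, with HTTP out of the picture entirely, is a raw TCP probe using bash's `/dev/tcp` pseudo-device. Nothing here is kind-specific; it just answers "is anything bound to host port 80 at all?"

```shell
# If the connect succeeds, the host-to-container port mapping is working
# and the problem is inside the cluster; if it fails, nothing is bound
# to host port 80, so look at extraPortMappings and the node selector.
if timeout 2 bash -c 'exec 3<>/dev/tcp/localhost/80' 2>/dev/null; then
  echo "port 80 reachable from the host"
else
  echo "port 80 not reachable: check extraPortMappings and the node selector"
fi
```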
Full Setup Script
#!/usr/bin/env bash
set -euo pipefail

INGRESS_MANIFEST="https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml"

kind create cluster --config kind-cluster.yaml --name dev
kubectl apply -f services.yaml
kubectl apply -f "$INGRESS_MANIFEST"

kubectl patch deployment ingress-nginx-controller -n ingress-nginx --type=json -p='[
  {"op":"add","path":"/spec/template/spec/nodeSelector","value":{"ingress-ready":"true"}}
]'

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s

kubectl apply -f ingress.yaml
echo "Done. Test with: curl localhost/api/users"
The two things to remember: kind manifest, not cloud manifest — and patch the node selector after applying it.