
TLS Termination at the Ingress: What Most Kubernetes Teams Get Wrong

Where your TLS actually terminates in Kubernetes matters more than you think. Misconfigurations at the ingress level lead to silent security gaps and broken mTLS.

CertGuard Team · 7 min read

Your traffic is probably unencrypted right now

Not on the internet. Between your ingress controller and your pods. Most Kubernetes clusters running nginx-ingress or Traefik terminate TLS at the edge and then forward plain HTTP to backend services over the cluster network. And most teams are completely fine with that, because "it's internal traffic."

Until it isn't.

Shared clusters, multi-tenant environments, compliance audits that actually read your network diagrams. The moment someone asks "is traffic encrypted in transit?" you realize your answer depends on what "in transit" means. And that's a conversation nobody wants to have at 4 PM on a Friday with an auditor on the call.

Where does TLS actually end?

There are basically three patterns for handling TLS in Kubernetes, and each has tradeoffs that nobody talks about in the "getting started" docs.

Edge termination is what most teams run. The ingress controller handles TLS, strips it, sends HTTP to the service. Simple. Fast. Your pods don't need to know anything about certificates. But the traffic between ingress and pod? Cleartext.

# This is what most people have. And yes, the backend is HTTP.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # nginx just terminates and proxies as http
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls-secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 8080

Re-encryption (sometimes called "backend TLS") is where the ingress terminates the client TLS connection but then opens a new TLS connection to the backend pod. Two TLS handshakes. More CPU, more latency, more certificates to manage. But encrypted end-to-end.

# Re-encrypt to backend. Your pod needs its own cert now.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # skip verification if using self-signed backend certs
    # (not great, but common in practice)
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "off"
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls-secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 8443

Passthrough means the ingress doesn't touch TLS at all. It just forwards the raw TCP stream to the backend, and the pod handles everything. You lose the ability to do path-based routing (because the ingress can't read the HTTP headers without decrypting), though it can still route on the SNI hostname, and the TLS connection is truly end-to-end from client to pod.
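With nginx-ingress, passthrough is one annotation plus a controller flag. A sketch, reusing the names from the earlier examples; note the controller itself must be started with --enable-ssl-passthrough or the annotation is silently ignored:

```yaml
# Passthrough: nginx forwards the raw TLS stream to the pod.
# Requires the controller flag --enable-ssl-passthrough.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 8443
```

There's no spec.tls section because the certificate lives in the pod, and the path is effectively ignored: routing happens on the SNI hostname from the host rule.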

Most teams pick edge termination because it's easy. And honestly? For many workloads, it's fine. The problem is when teams pick it by default without realizing they picked it at all.

The proxy-ssl-verify trap

So you decided on re-encryption. Good. You set backend-protocol: "HTTPS" and your pod serves TLS on port 8443. But the ingress can't verify the backend certificate because it's signed by some internal CA that nginx doesn't trust.

What does everyone do?

# the "make it work" annotation
nginx.ingress.kubernetes.io/proxy-ssl-verify: "off"

Congratulations, you now have encrypted traffic that you don't actually verify. A man-in-the-middle inside your cluster could present any certificate and nginx would happily forward traffic to it. You went through the effort of re-encryption and then disabled the one thing that makes it meaningful.

The fix is to put your internal CA bundle in a secret and point nginx at it:

nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/internal-ca-cert"
nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "2"

But managing that CA, distributing it, rotating it: that's a whole thing. Which is why people just turn verification off. Circle of life.
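For reference, ingress-nginx expects the secret named in proxy-ssl-secret to carry the CA bundle under a ca.crt key. A sketch, with the secret name matching the annotation example above:

```yaml
# CA bundle secret for backend verification. ingress-nginx reads the
# ca.crt key; add tls.crt/tls.key only if the backend also expects a
# client certificate from the proxy.
apiVersion: v1
kind: Secret
metadata:
  name: internal-ca-cert
  namespace: default
type: Opaque
data:
  ca.crt: LS0tLS1CRUdJTi4uLg==   # placeholder: base64-encoded PEM bundle of your internal CA
```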

Service meshes don't magically solve this

Someone on the team will suggest Istio. Or Linkerd. "Just add a sidecar, it handles mTLS automatically." And they're not wrong, technically. Service meshes do handle pod-to-pod encryption transparently. Sidecar proxies intercept traffic before it leaves the pod and encrypt it with mutual TLS using automatically rotated certificates.

But.

You still need to handle the edge. The traffic from the internet to your ingress controller isn't covered by the mesh. And the traffic from the ingress controller to the first sidecar proxy? That depends on your setup. If your ingress controller isn't part of the mesh (and often it isn't, especially with nginx-ingress), there's a gap.

Istio's own documentation recommends using their ingress gateway instead of a standard Kubernetes Ingress for exactly this reason. But teams that already have nginx-ingress running don't want to rip it out. So they run both. Two ingress controllers, two sets of certificates, two places where TLS can break. Fun.

Linkerd is better here, honestly. It can inject its sidecar alongside nginx-ingress pods, which means the mesh covers the hop from ingress to backend. But you need to configure the ingress to skip its own TLS to the backend and let the sidecar handle it instead. It works, but the mental model gets confusing fast.
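That arrangement can be sketched in two pieces, assuming the stock ingress-nginx names from its Helm chart (both are illustrative fragments, not a drop-in manifest):

```yaml
# Patch fragment: inject the Linkerd proxy into the ingress controller
# pods so the controller itself is part of the mesh.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller   # assumed name from the stock Helm chart
  namespace: ingress-nginx
spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
---
# On each Ingress: speak plain HTTP to the backend and let the sidecar
# wrap that hop in mTLS, rather than re-encrypting inside nginx.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 8080
```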

Certificate secrets and who can read them

Here's something that doesn't get enough attention. When you create a TLS secret in Kubernetes for your ingress, that secret contains your private key. Any pod in the same namespace with a service account that has get permissions on secrets can read it.

# check who can read secrets in your namespace
kubectl auth can-i get secrets --as=system:serviceaccount:production:default -n production
# if this says "yes", you have a problem

In a lot of clusters, the default service account has way too many permissions. A compromised pod could extract your TLS private key, and now someone has your production certificate's private key. They can impersonate your domain.

cert-manager makes this worse in a way, because it creates secrets automatically and teams forget they exist. You set up a Certificate resource, cert-manager creates the secret, the ingress references it, and nobody ever audits who else in that namespace can read it.

RBAC scoping for secrets should be tight. Limit get on secrets to only the service accounts that genuinely need them. Use separate namespaces for different trust levels. Consider external secret stores like Vault if you're really serious about it.
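Scoping reads down to named secrets is straightforward with resourceNames. A sketch; the role, secret, and service account names are illustrative:

```yaml
# Role that can read only the one TLS secret the app actually needs.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-app-tls
  namespace: production
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["app-tls-secret"]
  verbs: ["get"]
---
# Bind it to the single service account that serves TLS, not to default.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-app-tls
  namespace: production
subjects:
- kind: ServiceAccount
  name: my-app
  namespace: production
roleRef:
  kind: Role
  name: read-app-tls
  apiGroup: rbac.authorization.k8s.io
```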

The annotation sprawl problem

Every ingress controller has its own annotation syntax. And TLS configuration lives almost entirely in annotations. Which means your security posture is defined by a bunch of string key-value pairs that have no schema validation, no type checking, and will silently be ignored if you typo them.

# spot the bug
nginx.ingress.kubernetes.io/ssl-redirect: "True"  # should be "true" (lowercase)
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-ciphers: "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256"
nginx.ingress.kubernetes.io/ssl-protocols: "TLSv1.3 TLSv1.2"

That "True" with a capital T? In some versions of nginx-ingress it works, in others it doesn't. No error, no warning, just HTTP traffic that should have been redirected to HTTPS slipping through. Your security scanner won't catch it because it tests the happy path.

And when you switch from nginx-ingress to Traefik, or to the Gateway API? All those annotations are meaningless. You're starting from scratch with a completely different configuration model. The Gateway API is supposed to fix this with proper typed resources for TLS configuration, but adoption is still early and not every feature maps cleanly.

What you should actually do

Audit your current setup. Run kubectl get ingress -A -o yaml and grep for backend-protocol. If you don't see it anywhere, all your backends are HTTP. Decide if that's acceptable for your threat model.
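The audit can be a one-liner against the cluster (a sketch; escaping the dots in the annotation key is the fiddly part of jsonpath):

```
# Print namespace/name and backend-protocol for every Ingress.
# A blank value after the colon means the backend hop is plain HTTP.
kubectl get ingress -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}: {.metadata.annotations.nginx\.ingress\.kubernetes\.io/backend-protocol}{"\n"}{end}'
```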

If you need encryption inside the cluster, a service mesh is probably less painful than managing backend certificates manually. Linkerd is lighter than Istio and covers the basics. But if you only have a few services, just giving each pod its own TLS cert via cert-manager is straightforward enough.
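If you go the cert-manager route, each backend gets a Certificate resource and mounts the resulting secret. A sketch; the issuer name is an assumption about your setup:

```yaml
# cert-manager issues and renews this; the pod mounts the backend-tls
# secret and serves it on 8443.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-app-backend
  namespace: production
spec:
  secretName: backend-tls
  dnsNames:
  - my-app.production.svc.cluster.local
  issuerRef:
    name: internal-ca-issuer   # assumed: a ClusterIssuer backed by your internal CA
    kind: ClusterIssuer
```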

Lock down secret access. Today. Run the kubectl auth can-i check above in every namespace that has TLS secrets. Fix the RBAC before something else does.

If you're starting fresh, look at the Gateway API instead of Ingress. The TLSRoute and HTTPRoute resources are better designed, and you won't be fighting annotation typos for the rest of your career.
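For comparison, the Gateway API equivalent of the edge-termination Ingress above is two typed resources instead of string annotations. A sketch; gatewayClassName depends on which controller you run:

```yaml
# TLS lives in a structured listener field, validated by the API server.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
  namespace: production
spec:
  gatewayClassName: nginx   # assumption: whatever class your controller registers
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    hostname: app.example.com
    tls:
      mode: Terminate
      certificateRefs:
      - name: app-tls-secret
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
  namespace: production
spec:
  parentRefs:
  - name: my-gateway
  hostnames:
  - app.example.com
  rules:
  - backendRefs:
    - name: my-app
      port: 8080
```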

And stop setting proxy-ssl-verify: "off". If you're going to re-encrypt, do it properly or don't bother. Half-measures in TLS are worse than plaintext, because at least with plaintext you know where you stand.