Your TLS secret lives in one namespace. Your ingress needs it in three.
This is the part of Kubernetes TLS that nobody talks about in the tutorials. You set up cert-manager, you get your first certificate issued, everything works beautifully in your default namespace. Then the team grows. Someone creates a staging namespace. Another team spins up a microservice in its own namespace. And suddenly you're staring at a problem that Kubernetes was never really designed to solve cleanly.
Secrets don't cross namespace boundaries. Full stop.
That's by design, and it makes sense from a security perspective. But it creates a real operational headache when you've got a wildcard cert for *.example.com and six namespaces that all need it.
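Concretely, the limitation bites because an Ingress can only reference a Secret in its own namespace — the `secretName` field has no namespace qualifier. A minimal sketch (names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  namespace: api-prod          # the Ingress lives here...
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: wildcard-tls # ...so this secret must exist here too
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080
```

Kubernetes resolves `secretName` in the Ingress's own namespace, full stop. Your wildcard cert sitting in cert-system may as well not exist as far as this Ingress is concerned.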
The copy-paste approach (and why it breaks at 3 AM)
The first thing most teams try is just copying the secret. kubectl get secret, pipe it through some YAML manipulation, apply it to the target namespace. I've seen shell scripts that do this sitting in cron jobs on someone's laptop. Seriously.
```shell
# the "it works on my machine" approach
kubectl get secret wildcard-tls -n cert-system -o yaml | \
  sed 's/namespace: cert-system/namespace: api-prod/' | \
  kubectl apply -f -
# congratulations, you now have a stale copy that won't auto-renew
```
This works until it doesn't. The cert renews in the source namespace but the copies are stale. Nobody notices for 90 days. Then three services go down simultaneously because they're all serving an expired cert that was technically renewed, just not where it needed to be.
I've seen this exact scenario play out at a fintech company. Their payment processing went down for 40 minutes because a copied wildcard cert expired in their payments namespace while the original in cert-manager's namespace was perfectly valid and freshly renewed.
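If you've inherited one of these copy jobs, it's worth checking whether the copies have already drifted. A quick audit, assuming kubectl and openssl are on your PATH (secret and namespace names are placeholders for your own):

```shell
# print the serial and expiry of the cert in each namespace;
# mismatched serials mean the copy has gone stale
for ns in cert-system api-prod api-staging; do
  echo "--- $ns"
  kubectl get secret wildcard-tls -n "$ns" -o jsonpath='{.data.tls\.crt}' \
    | base64 -d | openssl x509 -noout -serial -enddate
done
```

Run it once and you'll know immediately whether your 3 AM incident is already scheduled.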
Kubernetes Reflector: the duct tape that actually holds
The most pragmatic solution I've found is kubernetes-reflector. It watches for annotations on secrets and mirrors them across namespaces automatically. Not glamorous. Gets the job done.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: wildcard-tls
  namespace: cert-system
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "api-prod,api-staging,frontend"
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
type: kubernetes.io/tls
# cert-manager populates data automatically
```
When cert-manager renews the cert, reflector picks up the change and pushes it everywhere. The lag is usually under 30 seconds. For most workloads that's fine.
But there's a catch. Reflector needs RBAC access to read and write secrets across all those namespaces. So you're trading one security concern for another. Your secret management tool now has broad cluster permissions. Some security teams will not be okay with that, and honestly, they have a point.
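To put a shape on "broad": the permissions reflector needs boil down to cluster-wide write access on secrets. A sketch of the relevant RBAC rules (the real Helm chart's manifest differs in the details):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: reflector
rules:
  # mirror source objects into target namespaces
  - apiGroups: [""]
    resources: ["secrets", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

A controller holding that ClusterRole can read and overwrite every secret in the cluster, which is exactly what a security review will flag.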
The cert-manager native way: one Certificate per namespace
The "correct" approach according to cert-manager's documentation is to create a Certificate resource in every namespace that needs one. Each namespace gets its own cert-manager Certificate, its own Secret, its own renewal cycle.
```yaml
# api-prod/certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-tls
  namespace: api-prod
spec:
  secretName: api-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - api.example.com
    - "*.api.example.com"
```
Clean. Isolated. Each namespace owns its destiny.
It also means you're issuing far more certificates than you need. If you're using Let's Encrypt, you've got rate limits to think about: 50 new certificates per registered domain per week. Sounds like plenty until you're running 30 microservices across 4 environments and each one wants its own cert — that's 120 certificates against a 50-per-week cap before you've issued a single renewal.
What about trust-manager?
cert-manager's newer sibling project, trust-manager, handles distributing CA bundles across namespaces. People sometimes confuse this with secret replication. It's not. trust-manager distributes trust anchors, the CA certificates that your workloads need to verify connections. It doesn't replicate TLS key pairs.
Still useful though. If you're doing mTLS between services and every namespace needs your internal CA bundle, trust-manager is the right tool. Just don't expect it to solve the "I need my TLS secret in five namespaces" problem.
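For contrast, here's roughly what a trust-manager Bundle looks like — it fans a CA certificate out to namespaces as a ConfigMap, not as a TLS key pair (names here are illustrative):

```yaml
apiVersion: trust.cert-manager.io/v1alpha1
kind: Bundle
metadata:
  name: internal-ca
spec:
  sources:
    # CA cert read from trust-manager's own namespace
    - secret:
        name: internal-ca-root
        key: tls.crt
  target:
    # written into matching namespaces as a ConfigMap
    configMap:
      key: ca-bundle.crt
```

Note what's missing: no private key ever leaves the trust namespace, which is precisely why this tool can't stand in for secret replication.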
The GitOps angle nobody considers early enough
Here's where it gets interesting. If you're running ArgoCD or Flux, your secret replication strategy needs to play nice with your GitOps workflow. Reflector creates secrets that aren't in your git repo. ArgoCD sees them as "out of sync" and wants to delete them.
You end up adding resource exclusions:
```yaml
# argocd-cm ConfigMap
resource.exclusions: |
  - apiGroups:
      - ""
    kinds:
      - Secret
    clusters:
      - "*"
# too broad? yeah. but the alternative is listing every reflected secret
```
That's a pretty big hammer. You're telling ArgoCD to ignore all secrets, which defeats a chunk of the GitOps promise. Some teams scope it tighter with label selectors, but it's fiddly and breaks whenever someone forgets to label a reflected secret correctly.
The better pattern, if you can stomach the complexity, is to use Sealed Secrets or External Secrets Operator for your static secrets and let cert-manager handle TLS secrets entirely outside GitOps. Accept that TLS secrets are dynamic resources managed by a controller, not static config that belongs in git.
Multi-cluster makes everything worse
Single cluster, the solutions above work. Multiple clusters? Welcome to a new category of pain.
You can't just reflect secrets across clusters. You need something like External Secrets Operator pulling from a central vault, or you need cert-manager running independently in each cluster with its own ACME account. Both approaches have tradeoffs. Central vault means a single point of failure for all your TLS. Independent cert-manager means independent rate limit tracking and no shared state.
A pattern that works reasonably well for teams running 3-10 clusters: one "certificate authority" cluster that runs cert-manager and pushes issued certs to HashiCorp Vault (or AWS Secrets Manager, whatever). Each workload cluster runs External Secrets Operator and pulls what it needs. Renewal happens centrally. Distribution is eventually consistent.
```yaml
# ExternalSecret in workload cluster
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: wildcard-tls
  namespace: api-prod
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: wildcard-tls
    template:
      type: kubernetes.io/tls
  data:
    - secretKey: tls.crt
      remoteRef:
        key: pki/certs/wildcard-example-com
        property: certificate
    - secretKey: tls.key
      remoteRef:
        key: pki/certs/wildcard-example-com
        property: private_key
```
The 1h refresh interval is a deliberate choice. You want it frequent enough to pick up renewals quickly but not so aggressive that you're hammering Vault. For certificates with 90-day lifetimes, checking hourly is more than sufficient.
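One piece this pattern leaves implicit: cert-manager has no built-in "push to Vault" step, so the issuing cluster needs a small sync job. A hedged sketch of that glue, assuming a KV v2 mount at secret/ and the same path the ExternalSecret above reads (both are assumptions about your Vault layout):

```shell
# run from a CronJob in the CA cluster after each renewal
kubectl get secret wildcard-tls -n cert-system \
  -o jsonpath='{.data.tls\.crt}' | base64 -d > /tmp/tls.crt
kubectl get secret wildcard-tls -n cert-system \
  -o jsonpath='{.data.tls\.key}' | base64 -d > /tmp/tls.key

# push both halves to the path the workload clusters pull from
vault kv put secret/pki/certs/wildcard-example-com \
  certificate=@/tmp/tls.crt \
  private_key=@/tmp/tls.key
```

In practice you'd trigger this from a renewal hook or watch loop rather than a blind schedule, but the data flow is the same.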
Practical checklist before you pick an approach
Ask yourself these questions. Be honest about the answers.
How many namespaces actually need the same cert? If it's two, just create two Certificate resources and move on with your life. The reflector/vault/ESO complexity isn't worth it for two namespaces.
Are you hitting Let's Encrypt rate limits? If not, one Certificate per namespace is simpler and more secure. Each namespace is self-contained. No cross-namespace RBAC needed.
Do you run GitOps? Then think hard about how dynamic secrets interact with your sync process before you deploy anything.
Multiple clusters? Skip the simple solutions entirely. Go straight to External Secrets Operator with a central secret store. You'll end up there anyway; might as well save the migration pain.
And whatever you pick, set up monitoring on certificate expiry in every namespace, not just where cert-manager runs. Because the whole point of this exercise is making sure a valid cert exists where traffic actually terminates. Not where it gets issued.
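For that monitoring, probing what's actually served beats scraping cert-manager's own metrics, because a blackbox probe catches the stale-copy failure mode. A sketch of a Prometheus alert using the blackbox exporter's `probe_ssl_earliest_cert_expiry` metric (assumes you already probe each public endpoint; the threshold is illustrative):

```yaml
groups:
  - name: tls-expiry
    rules:
      - alert: ServedCertificateExpiringSoon
        # fires when the cert presented at the endpoint expires within 14 days
        expr: probe_ssl_earliest_cert_expiry - time() < 14 * 24 * 3600
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "TLS cert served at {{ $labels.instance }} expires in under 14 days"
```

Fourteen days is enough runway to fix a broken replication path by hand, which is the whole point.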