
Wildcard Certificates: The Hidden Costs Nobody Warns You About

Wildcard certs seem like a shortcut. They are — until they bite you. A practical look at when wildcards make sense and when they quietly become a liability.

CertGuard Team · 7 min read

One cert to rule them all. What could go wrong?

Wildcard certificates are one of those things that sound perfect on paper. You buy *.example.com, slap it on every subdomain, and never think about individual certs again. Clean. Simple. Done.

Except it never works out that way.

I've watched teams adopt wildcards with genuine enthusiasm, only to end up in worse shape than before — not because wildcards are inherently bad, but because people misunderstand what they're actually signing up for. The convenience is real. The tradeoffs are also real, and they tend to surface at 2 AM on a Saturday.

What a wildcard cert actually covers

Quick refresher, because this trips people up constantly: a wildcard certificate for *.example.com covers api.example.com, app.example.com, staging.example.com. One level deep. That's it.

It does NOT cover example.com itself (the bare domain). And it absolutely does not cover api.v2.example.com. Two levels deep? You need another cert. Or a SAN entry. Or a separate wildcard for *.v2.example.com. The number of engineers I've seen discover this in production is… uncomfortable.

Most CAs will let you add the bare domain as a Subject Alternative Name alongside the wildcard. But you have to ask for it. It's not automatic. And if your provisioning is automated through something like cert-manager, you need to explicitly configure that SAN or your naked domain gets browser warnings.
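Both rules are easy to verify locally. As a self-contained sketch, this generates a throwaway self-signed cert carrying the wildcard plus the bare-domain SAN, then asks openssl which hostnames it actually matches (the `-addext`, `-ext`, and `-checkhost` flags need OpenSSL 1.1.1 or newer):

```shell
# Throwaway self-signed cert: wildcard SAN plus the bare domain
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=*.example.com" \
  -addext "subjectAltName=DNS:*.example.com,DNS:example.com"

# What the cert claims to cover
openssl x509 -in cert.pem -noout -ext subjectAltName

# One level deep: matches
openssl x509 -in cert.pem -noout -checkhost api.example.com
# Two levels deep: does NOT match the wildcard
openssl x509 -in cert.pem -noout -checkhost api.v2.example.com
# Bare domain: only matches because we explicitly added the SAN
openssl x509 -in cert.pem -noout -checkhost example.com
```

Drop the `DNS:example.com` SAN from the `-addext` line and rerun the last check to watch the bare domain stop matching.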

The blast radius problem

Here's where it gets uncomfortable.

When you use one wildcard cert across 15 subdomains and that private key leaks — or even just gets compromised on one server — every single subdomain is exposed. Your API. Your admin panel. Your customer portal. That internal tool someone spun up three months ago and forgot about. All of them. One key, one revocation, one very bad day.

With individual certificates, a compromise on staging.example.com is contained. You revoke that one cert, rotate that one key, move on. With a wildcard, you're revoking everything and redeploying everywhere. And you're doing it under pressure, because your production services are also affected.

A client I worked with ran a wildcard across their entire SaaS platform. About 20 subdomains, mix of customer-facing and internal. When a dev accidentally committed the private key to a public repo (yes, really), the incident response wasn't "rotate one cert." It was "rotate the cert on every load balancer, every Kubernetes ingress, every CDN config, simultaneously, at 11 PM." That took four hours. Four hours of partial downtime across their whole product.

Validation headaches with Let's Encrypt

Let's Encrypt issues wildcard certs. Great. But there's a catch: they require DNS-01 validation. No HTTP-01 option for wildcards. This means your automation needs DNS API access — and not every DNS provider makes that easy.

If you're on Cloudflare or Route53, you're fine. The certbot plugins work, cert-manager has solid integrations, life is good. But if you're on some registrar's basic DNS with no API? You're doing manual TXT record updates every 60-90 days. "Automated" wildcard renewal becomes a calendar reminder and a prayer.

```shell
# certbot wildcard with Cloudflare DNS plugin
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  -d "*.example.com" \
  -d "example.com"

# the credentials file — keep this locked down
# dns_cloudflare_api_token = your-scoped-api-token
# chmod 600 this file, obviously
```

That API token scope matters, by the way. I've seen setups where the Cloudflare token had full account access because someone copied a tutorial without thinking. Your cert renewal automation should have the narrowest possible permissions — Zone:DNS:Edit for the specific zone, nothing else.

When wildcards actually make sense

I'm not saying never use them. There are legitimate cases.

Multi-tenant platforms where customers get subdomains — customer1.app.com, customer2.app.com — wildcards are practically the only sane option. You can't provision individual certs for hundreds of subdomains that spin up dynamically. Well, you can, with something like Caddy's on-demand TLS, but that introduces its own complexity.
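On that Caddy aside: on-demand TLS issues a certificate per hostname at handshake time instead of relying on a wildcard. A minimal Caddyfile sketch, assuming a hypothetical approval endpoint (the `ask` check is what stops arbitrary domains pointed at your server from triggering issuance):

```caddyfile
{
	on_demand_tls {
		# Caddy queries this endpoint before requesting a cert
		# for a new hostname (hypothetical internal service)
		ask http://localhost:5555/allowed
	}
}

https:// {
	tls {
		on_demand
	}
	respond "hello"
}
```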

Development and staging environments. If you've got *.staging.example.com with services popping up and disappearing constantly, a wildcard keeps things simple. The blast radius is lower here anyway — it's staging.

Internal infrastructure behind a VPN. When the only people hitting those subdomains are your own team through a secured network, the risk profile changes. A wildcard on *.internal.example.com is reasonable.

The cert-manager approach in Kubernetes

If you're running Kubernetes, cert-manager handles wildcards reasonably well, but the config catches people off guard.

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-example-com
  namespace: default
spec:
  secretName: wildcard-example-com-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - "*.example.com"
    - "example.com"   # don't forget this one
  # DNS solver is mandatory for wildcards
  # make sure your ClusterIssuer has the right DNS01 config
```

The thing people miss: cert-manager stores the wildcard cert as a Kubernetes secret, and secrets are namespace-scoped. Every other namespace that terminates TLS for a subdomain needs its own copy of that secret. Tools like kubernetes-replicator can sync secrets across namespaces, but that's another moving part in your cluster.
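If you do reach for a replicator, the sync is annotation-driven. A sketch of kubernetes-replicator's push model on the source secret (annotation name from that project; the target namespaces are illustrative, and since cert-manager rewrites this secret on renewal you'd typically have it stamp the annotation on via `secretTemplate` rather than editing the secret by hand):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: wildcard-example-com-tls
  namespace: default
  annotations:
    # kubernetes-replicator: push copies of this secret into these namespaces
    replicator.v1.mittwald.de/replicate-to: "ingress,staging,tools"
type: kubernetes.io/tls
```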

Private key distribution is the real problem

This is the part most guides skip entirely.

With individual certs per service, each service has its own private key. Clean separation. With a wildcard, you need that same private key on every server, every container, every load balancer that terminates TLS for any subdomain. That key is now copied across your infrastructure.

How are you distributing it? Baked into container images? (Please don't.) Mounted from a secrets manager? (Better.) Synced via some custom script? (Fragile.) Every copy is an attack surface. Every deployment pipeline that touches that key is a potential leak vector.
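For the "mounted from a secrets manager" option, the Kubernetes-native version looks like this: the key lives in one Secret object and is projected read-only into the pod at runtime, never baked into the image (names and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tls-terminator
spec:
  containers:
    - name: proxy
      image: nginx:1.25
      volumeMounts:
        - name: wildcard-tls
          mountPath: /etc/tls   # tls.crt and tls.key appear here
          readOnly: true
  volumes:
    - name: wildcard-tls
      secret:
        secretName: wildcard-example-com-tls
        defaultMode: 0400   # key readable only by the file owner
```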

Compare that to individual certs where the private key never leaves the server it was generated on. The security posture is just fundamentally different.

So what should you actually do?

Use wildcards where the convenience genuinely outweighs the risk. Multi-tenant subdomains, ephemeral environments, internal services. Treat the private key like a root password — limit who and what can access it.

For production services with distinct subdomains? Individual certs. Yes, it's more to manage. But cert-manager, Caddy, and even basic certbot with cron make this borderline trivial now. The "convenience" argument for wildcards made more sense in 2015 when cert provisioning was manual and expensive. With Let's Encrypt and modern tooling, provisioning 20 individual certs is barely more work than one wildcard.

And monitor your certs regardless. Wildcard or not, if you're not tracking expiration dates across your infrastructure, you're gambling. Set up alerts at 30 days, 14 days, and 7 days before expiry. If you're relying on "the automation will handle it" without monitoring that the automation actually ran... you've been lucky, not good.
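A cheap building block for that monitoring, assuming openssl on the box: `-checkend` exits non-zero if the cert expires within N seconds, which drops straight into a cron job or alerting script. The demo cert here is generated inline so the snippet runs standalone; point it at your real cert, or an `s_client` pipeline, in practice.

```shell
# Demo cert so the snippet is self-contained; use your real cert.pem in practice
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -keyout key.pem -out cert.pem -subj "/CN=*.example.com" 2>/dev/null

# Human-readable expiry date
openssl x509 -in cert.pem -noout -enddate

# Non-zero exit if the cert expires within 30 days (2592000 seconds)
if openssl x509 -in cert.pem -noout -checkend 2592000 >/dev/null; then
  echo "cert OK"
else
  echo "cert expires within 30 days, renew now"
fi
```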