One key to compromise them all
Someone on your team accidentally commits a private key to a public GitHub repo. It happens more than anyone wants to admit. If that key belongs to a single-domain cert for staging.example.com, you revoke it, rotate, move on. Bad day, but contained.
Now imagine that key was for *.example.com.
Every subdomain. Your API. Your admin panel. Your customer portal. The internal tooling you swore nobody external could reach. All of it, compromised with a single leaked file. And you won't know who's already using it to intercept traffic until it's too late.
The blast radius problem nobody talks about during procurement
When teams evaluate wildcard certificates, the conversation usually goes like this: "We have 40 subdomains and managing individual certs is painful, so let's just get a wildcard." Reasonable. But the security review, if there even is one, tends to focus on whether the CA is reputable and whether the price is right. Nobody asks what happens when the key gets out.
The blast radius of a wildcard key compromise is total.
An attacker with your wildcard private key can perform man-in-the-middle attacks against any subdomain, set up convincing phishing pages on subdomains your users trust, and decrypt captured TLS traffic if you're not using forward secrecy everywhere. Most organizations don't even have a complete inventory of their subdomains, which makes incident response after a wildcard compromise genuinely nightmarish. You're revoking a cert that's deployed across dozens of services, some of which were set up by people who left the company two years ago.
Real incidents that should scare you
A fintech company I consulted for had their wildcard cert deployed on 67 services across three cloud providers. When a developer's laptop got stolen (unencrypted disk, naturally), the key was sitting in a local nginx config. Revoking and rotating took 11 days. Eleven. Because nobody had a definitive list of where that cert was installed, and some services broke in spectacular ways when the old cert got revoked before the new one was deployed.
They didn't get breached, as far as they know. But for 11 days, anyone with that key could impersonate any of their services.
Compare that to an e-commerce platform that used individual certs per service. When a key leaked through a misconfigured Docker image, they identified the affected service in 20 minutes, rotated the cert in under an hour, and the blast radius was exactly one microservice. Nobody else was affected.
When wildcards actually make sense (yes, sometimes they do)
Wildcards aren't universally terrible. They're a tool, and like most tools, the problem is misuse.
They're reasonable when all subdomains run on the same infrastructure, the key is stored in a hardware security module or a secrets manager with strict access controls, you have automated rotation with short lifetimes (think 30 days max), and the number of places the key exists is small and well-documented.
Multi-tenant SaaS platforms where customers get subdomains like customer.app.example.com are a legitimate use case. But even there, you want the wildcard scoped to just *.app.example.com, not the root domain.
SAN certificates: the middle ground most teams ignore
Subject Alternative Name certificates let you put multiple specific domains on one cert. You get the operational convenience of fewer certificates without the "one key rules everything" risk of wildcards.
# Check SANs on an existing cert
openssl x509 -in cert.pem -noout -text | grep -A1 "Subject Alternative Name"
# What you'll see for a SAN cert
# DNS:api.example.com, DNS:app.example.com, DNS:admin.example.com
# vs a wildcard
# DNS:*.example.com
The tradeoff is obvious. With a SAN cert, you need to know your domains upfront and reissue when you add new ones. With a wildcard, any new subdomain just works. But that "just works" convenience is exactly the security problem. You can't accidentally cover a subdomain you forgot existed if it has to be explicitly listed.
Let's Encrypt makes SAN certs trivially easy with certbot:
# SAN cert with specific subdomains
certbot certonly --dns-cloudflare \
-d api.example.com \
-d app.example.com \
-d admin.example.com \
--cert-name example-services
# Compare to the wildcard approach
certbot certonly --dns-cloudflare \
-d "*.example.com" \
--cert-name example-wildcard
Same effort to set up. Very different security posture.
A practical segmentation strategy
The best approach for most organizations isn't "no wildcards ever" or "wildcards everywhere." It's segmentation based on trust boundaries.
Group your subdomains by security sensitivity. Your public marketing site and your internal admin panel should never share a certificate, wildcard or otherwise. Within each group, decide whether the operational overhead of individual certs is worth the security benefit.
# Example: tiered certificate strategy
#
# Tier 1 - Critical (individual certs, short-lived, HSM-stored keys)
# admin.example.com
# api.example.com
# payments.example.com
#
# Tier 2 - Standard (SAN cert, 90-day rotation)
# app.example.com
# docs.example.com
# status.example.com
#
# Tier 3 - Low risk (wildcard acceptable)
# *.dev.example.com (internal dev environments)
# *.staging.example.com (staging, no real data)
Your payments endpoint and your dev sandbox have no business sharing a private key. Sounds obvious when written down. But I've seen Fortune 500 companies do exactly this because "it's easier."
Key storage matters more than cert type
Honestly, a wildcard cert with the key in AWS Secrets Manager, rotated every 30 days, deployed via automation, is more secure than individual certs with keys sitting in plaintext on disk across 40 servers. The cert type matters less than how you handle the key.
If you're going to use wildcards, at minimum:
# Store keys in a secrets manager, not on disk
aws secretsmanager create-secret \
--name "prod/wildcard-key" \
--secret-string file://private.key \
--tags Key=rotation-days,Value=30
# Automate rotation so humans never touch the key
# Your CI/CD pipeline should pull the key at deploy time
# and never persist it to disk
Never copy the key manually between servers. That's how keys end up in Slack messages, email attachments, and random home directories on jump boxes. Automate the distribution or don't use a wildcard.
Monitoring wildcard abuse
Certificate Transparency logs are your friend here. CT won't show an attacker quietly using your existing compromised cert, but any new certificate issued for your domains, whether by an attacker who has gained the ability to pass domain validation or by a team you didn't know about, will show up in CT logs. That only helps if you're watching.
Set up alerts for any certificate issuance on your domain that you didn't initiate. Tools like SSLMate's Cert Spotter, Facebook's Certificate Transparency monitoring, or even a simple crt.sh query on a cron job can catch unauthorized issuance. The gap between compromise and detection is where the real damage happens. Shrink that gap.
# Quick and dirty CT log check
# Run this daily, compare output with known certs
curl -s "https://crt.sh/?q=%25.example.com&output=json" | \
jq '.[] | select(.not_before > "2026-01-01") | {id, common_name, issuer_name, not_before}'
Not production-grade monitoring, but it's better than nothing. And you'd be surprised how many teams have literally nothing.
The rotation question
Short-lived certificates reduce blast radius by limiting the window of exposure. If your wildcard cert is valid for 90 days and auto-rotates, a compromised key is useful for at most 90 days (assuming you don't detect it sooner). Compare that to the old days of 2-year certs where a leaked key was a gift that kept on giving.
Google has been pushing for 90-day maximum validity, and Apple already caps publicly trusted certs at 398 days for trust in Safari. The industry is moving toward shorter lifetimes whether you like it or not. For wildcards specifically, shorter is strictly better because it limits how long a compromised key remains valid.
If you can get wildcard rotation down to 30 days with full automation, the blast radius argument weakens significantly. Not disappears, but weakens. A 30-day window with a compromised wildcard is still worse than a 30-day window with a single-domain cert. But it's a lot better than a 365-day wildcard sitting on an unpatched server somewhere.
What to actually do on Monday morning
Audit where your wildcard certs are deployed. All of them. If you can't produce a complete list in under an hour, that's your first problem.
For each wildcard, ask: does every subdomain this covers actually need to share a key with every other subdomain? If the answer is no (and it usually is), start breaking them up. Move critical services to individual or SAN certs. Keep wildcards for genuinely low-risk, homogeneous environments.
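The audit itself can start from a host list: grab the cert each endpoint serves and flag anything wildcard-scoped. A rough sketch; the hostnames are placeholders for your real inventory:

```shell
#!/usr/bin/env bash
# Flag hosts that are serving a wildcard certificate.
# HOSTS is illustrative; feed it your actual subdomain inventory.
set -euo pipefail

HOSTS=("api.example.com" "admin.example.com" "dev.example.com")

# True if a PEM cert's subject or SANs contain a wildcard entry
is_wildcard() {
  openssl x509 -in "$1" -noout -subject -ext subjectAltName 2>/dev/null | grep -q '\*\.'
}

for host in "${HOSTS[@]}"; do
  pem=$(mktemp)
  if echo | openssl s_client -connect "$host:443" -servername "$host" 2>/dev/null \
      | openssl x509 > "$pem" 2>/dev/null; then
    if is_wildcard "$pem"; then
      echo "WILDCARD: $host"
    fi
  fi
  rm -f "$pem"
done
```

Every host this prints is a candidate for the tiering exercise above: does it really need to share a key with everything else under that wildcard?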
And please, for the love of uptime, document where every cert is installed. The next incident response will thank you.