The certificate that works on your machine and nowhere else
You generated a self-signed certificate. You added it to your macOS Keychain. Your browser stopped complaining. You committed the cert to the repo, spun up Docker Compose, and suddenly everything is broken again. Connection refused. Certificate not trusted. Your Node.js service throws UNABLE_TO_VERIFY_LEAF_SIGNATURE and you're back to square one.
Sound familiar?
The problem isn't your certificate. The problem is that every runtime, every OS, and every container image maintains its own trust store. And none of them talk to each other. You're not managing one trust relationship. You're managing five or six, minimum.
Trust stores are not universal (and nobody tells you this)
macOS uses the Keychain. Windows has its Certificate Store. Linux distros use /etc/ssl/certs/ or /etc/pki/tls/certs/ depending on whether you're on Debian or RHEL. Firefox ignores all of them and uses its own NSS database. Java has its own cacerts keystore. And Node.js ships its own bundled copy of Mozilla's CA list, consulting the OS store only if it was built or launched with the right flags.
That's the landscape. Four or five distinct trust stores on a single developer machine, before you even think about containers.
When you run mkcert -install, it handles the local machine well. It injects the root CA into the macOS Keychain, the Windows cert store, the NSS database for Firefox, and the Java keystore if it finds one. Genuinely solid tooling. But it can't reach inside your Docker containers, and that's where most dev environments actually run services.
Why Docker makes everything harder
A fresh node:20-alpine image has its own CA bundle. Your beautiful locally-trusted root CA? Not in there. The container has no idea it exists.
Most developers hit this wall and do one of two things. They set NODE_TLS_REJECT_UNAUTHORIZED=0 and move on with their lives, or they spend an hour Googling Dockerfile incantations. Both approaches have problems.
Disabling TLS verification is the classic "it works" trap. You stop testing the actual TLS behavior your production code relies on. You miss certificate chain issues. You train your muscle memory to ignore security warnings. And inevitably someone copies that env var into a staging deployment. I've seen it happen three times at different companies. Once it made it to production and stayed there for four months.
The Dockerfile approach works but gets messy fast. Here's what you actually need for Alpine-based images:
# Copy your CA root cert into the container
COPY ./.certs/rootCA.pem /usr/local/share/ca-certificates/dev-root-ca.crt
RUN apk add --no-cache ca-certificates && update-ca-certificates
For Debian/Ubuntu images the destination path is the same; what changes is the package step. update-ca-certificates is usually already present, though slim images may still need apt-get install -y ca-certificates:
COPY ./.certs/rootCA.pem /usr/local/share/ca-certificates/dev-root-ca.crt
RUN update-ca-certificates
Straightforward enough. But now multiply this across every service in your docker-compose.yml. Your API, your worker, your database sidecar, your reverse proxy. Each one needs the CA cert copied in and the trust store updated. Miss one and you get cryptic connection failures between services.
A pattern that actually scales
Stop copying certs into individual Dockerfiles. Mount the CA certificate as a volume and handle trust store updates at runtime.
# docker-compose.yml
services:
  api:
    build: ./api
    volumes:
      - ./.certs/rootCA.pem:/usr/local/share/ca-certificates/dev-root-ca.crt:ro
    environment:
      - NODE_EXTRA_CA_CERTS=/usr/local/share/ca-certificates/dev-root-ca.crt
  worker:
    build: ./worker
    volumes:
      - ./.certs/rootCA.pem:/usr/local/share/ca-certificates/dev-root-ca.crt:ro
    environment:
      - NODE_EXTRA_CA_CERTS=/usr/local/share/ca-certificates/dev-root-ca.crt
NODE_EXTRA_CA_CERTS is the secret weapon here. It tells Node.js to trust additional CAs without replacing the default bundle. No Dockerfile changes needed. It works in Node.js 7.3.0 and later. For Go services, you'd mount the cert to the system path and run update-ca-certificates in an entrypoint script instead.
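For that non-Node path, the entrypoint can stay tiny. A sketch, assuming the CA is mounted at the same path as in the compose file above and the image ships update-ca-certificates:

```shell
#!/bin/sh
# entrypoint.sh (sketch): refresh the system trust store from the mounted
# dev CA, then hand off to the real service command.
CA=/usr/local/share/ca-certificates/dev-root-ca.crt
if [ -f "$CA" ] && command -v update-ca-certificates >/dev/null 2>&1; then
  update-ca-certificates >/dev/null
fi
# Run whatever command docker-compose passed to this entrypoint
exec "$@"
```

Set it as the service's `entrypoint:` in docker-compose.yml and the trust store is rebuilt on every container start, so cert rotation needs no image rebuild.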
For Python, it's REQUESTS_CA_BUNDLE or SSL_CERT_FILE. For Java, you need to import into the JVM keystore at startup. Every runtime has its own mechanism and you need to know which one your stack uses.
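For Python the same compose-level pattern applies. A sketch (the CA path is illustrative) setting the two variables most Python HTTP stacks honor, plus a quick check that child processes inherit the override:

```shell
# requests reads REQUESTS_CA_BUNDLE; the stdlib ssl module reads SSL_CERT_FILE.
# Both point at the mounted dev CA (illustrative path).
export REQUESTS_CA_BUNDLE=/usr/local/share/ca-certificates/dev-root-ca.crt
export SSL_CERT_FILE=/usr/local/share/ca-certificates/dev-root-ca.crt
# Sanity check: a child interpreter sees the override
python3 -c 'import os; print(os.environ["SSL_CERT_FILE"])'
```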
The mkcert + docker workflow nobody documents properly
Here's the full setup that works across a team. Not theoretical, this ran in a 12-person engineering team for two years without issues.
# One-time setup (each developer runs this)
brew install mkcert # or use your OS package manager
mkcert -install
# The root CA already exists after -install; copy it from mkcert's CAROOT
mkdir -p ./.certs
cp "$(mkcert -CAROOT)/rootCA.pem" ./.certs/rootCA.pem
# Generate service certs
mkcert -cert-file ./.certs/local.pem -key-file ./.certs/local-key.pem "localhost" "*.local.dev" "api.local.dev" "127.0.0.1" "::1"
The .certs/ directory goes in .gitignore. Every developer generates their own. You could share certs across the team, but then you're distributing private keys through git, and that's the kind of shortcut that ends careers.
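A one-liner keeps that rule enforced, and it's idempotent, so it's safe to drop into a setup script:

```shell
# Add .certs/ to .gitignore exactly once (no-op if already present)
grep -qx '\.certs/' .gitignore 2>/dev/null || echo '.certs/' >> .gitignore
```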
Add a setup script to your project. Something like scripts/setup-certs.sh that checks if mkcert is installed, generates the certs if they don't exist, and tells the developer what to do if something's missing. Automate the boring parts.
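A minimal sketch of such a script, assuming mkcert; the hostnames here are examples, so match them to whatever your compose setup actually serves:

```shell
#!/bin/sh
# scripts/setup-certs.sh (sketch): idempotent local TLS setup
mkdir -p ./.certs
if ! command -v mkcert >/dev/null 2>&1; then
  echo "mkcert is not installed; install it first (e.g. 'brew install mkcert')" >&2
  exit 0
fi
# Install the root CA into local trust stores (no-op if already done)
mkcert -install
cp "$(mkcert -CAROOT)/rootCA.pem" ./.certs/rootCA.pem
# Only generate service certs if they don't exist yet
if [ ! -f ./.certs/local.pem ]; then
  mkcert -cert-file ./.certs/local.pem -key-file ./.certs/local-key.pem \
    "localhost" "127.0.0.1" "::1"
fi
echo "certs ready in ./.certs"
```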
When Node.js ignores your trust store
This one catches people. By default, Node.js doesn't use the OS trust store at all. It bundles Mozilla's CA list at compile time and consults the system store only when built or run with flags like --use-openssl-ca. So you can add your CA to the system trust store all day long; Node doesn't care.
NODE_EXTRA_CA_CERTS bypasses this entirely. It works regardless of how Node was compiled. Always use it for containerized Node.js services. Don't rely on system trust store updates alone.
There's a gotcha though. The env var only accepts a single file path. If you need multiple extra CAs, concatenate them into one PEM file. And it has to be set before the process starts. Setting it at runtime with process.env does nothing.
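To see the concatenation rule in action, here's a self-contained demo. It generates two throwaway CAs with openssl (stand-ins for your real mkcert root plus any extra CA) and merges them into the single file the variable accepts:

```shell
mkdir -p ./.certs
# Two throwaway self-signed CAs standing in for your real ones
for name in rootCA extraCA; do
  openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo-$name" \
    -keyout "./.certs/$name-key.pem" -out "./.certs/$name.pem" 2>/dev/null
done
# NODE_EXTRA_CA_CERTS takes ONE file, so merge them into a single PEM
cat ./.certs/rootCA.pem ./.certs/extraCA.pem > ./.certs/extra-cas.pem
export NODE_EXTRA_CA_CERTS="$PWD/.certs/extra-cas.pem"
```

PEM is plain text, so `cat` is the whole merge step; order within the file doesn't matter.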
Java is its own universe
JVM applications use cacerts, a binary keystore format, completely separate from whatever your OS trusts. If you're running a Java service alongside Node.js services in Docker Compose, you need two different trust store strategies.
# In your Java service entrypoint (Java 9+; on Java 8 the keystore is at $JAVA_HOME/jre/lib/security/cacerts)
keytool -importcert -noprompt -trustcacerts -alias dev-root-ca -file /certs/rootCA.pem -keystore $JAVA_HOME/lib/security/cacerts -storepass changeit
Yes, the default keystore password is literally "changeit". It has been since the late 90s. No, nobody changes it in development.
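In practice that keytool line belongs in the Java service's entrypoint, guarded so container restarts don't fail when the alias already exists. A sketch, with illustrative paths:

```shell
#!/bin/sh
# Java service entrypoint (sketch): import the mounted dev CA, then start the app
CA=/certs/rootCA.pem
KEYSTORE="$JAVA_HOME/lib/security/cacerts"  # $JAVA_HOME/jre/lib/security/cacerts on Java 8
if [ -f "$CA" ] && command -v keytool >/dev/null 2>&1; then
  # '|| true' because re-importing an existing alias exits nonzero
  keytool -importcert -noprompt -trustcacerts -alias dev-root-ca \
    -file "$CA" -keystore "$KEYSTORE" -storepass changeit || true
fi
exec "$@"
```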
Debugging trust failures without losing your mind
When something doesn't work, you need to know which trust store is being consulted. openssl s_client is your best friend:
# From inside the container (-servername makes SNI select the right cert)
openssl s_client -connect api.local.dev:443 -servername api.local.dev -CAfile /usr/local/share/ca-certificates/dev-root-ca.crt
# Check what CAs the system trusts
awk -v cmd='openssl x509 -noout -subject' '/BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crt | grep -i "dev-root"
If openssl s_client works but your app doesn't, the problem is runtime-specific. The app isn't using the system trust store. For Node, check NODE_EXTRA_CA_CERTS. For Java, check the keystore. For Python, check the requests bundle path.
If openssl s_client fails too, your CA cert isn't in the container's trust store. Check the volume mount, check the file permissions (must be readable), check that update-ca-certificates actually ran.
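Those checks condense into a script you can run inside any container; the CA path is illustrative:

```shell
#!/bin/sh
# check-ca.sh (sketch): diagnose why a container doesn't trust the dev CA
CA=/usr/local/share/ca-certificates/dev-root-ca.crt
[ -f "$CA" ] || echo "missing: check the volume mount"
[ -r "$CA" ] || echo "unreadable: check file permissions"
grep -q 'BEGIN CERTIFICATE' "$CA" 2>/dev/null || echo "not PEM: check the mount source"
[ -n "$NODE_EXTRA_CA_CERTS" ] || echo "NODE_EXTRA_CA_CERTS is unset (Node services only)"
```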
Stop fighting this manually
The real answer for teams larger than three people is to standardize. Pick one approach: mkcert for cert generation, NODE_EXTRA_CA_CERTS for Node services, volume mounts for distribution, entrypoint scripts for Java. Document it once. Put it in the project's README. Make the setup script idempotent so running it twice doesn't break anything.
And please, stop setting NODE_TLS_REJECT_UNAUTHORIZED=0. Your production code doesn't run with that flag. Your tests shouldn't either. The fifteen minutes you spend setting up proper trust stores saves you from the subtle, nasty bugs that only show up when TLS actually matters.
Which is always. TLS always matters.