The CI Pipeline That Trusts Nothing
Your integration tests pass locally. Every single one, green across the board. You push, CI picks it up, and
three minutes later half your test suite is red. The error? UNABLE_TO_VERIFY_LEAF_SIGNATURE or
certificate verify failed or some variant of "I don't trust this cert."
Sound familiar?
The root cause is almost always the same: your CI runner doesn't have the same certificate trust store as your laptop. You installed a root CA on your Mac six months ago, forgot about it, and now every environment that isn't your machine breaks. I've seen this tank entire release cycles. A fintech team once spent two days debugging what turned out to be a missing CA bundle in their Alpine-based Docker image.
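You can reproduce this class of failure locally without any CI involvement. A minimal sketch, assuming OpenSSL 1.1.1+ for -addext; the file names and port are throwaway choices for the demo:

```shell
# Mint a cert no trust store knows about, serve it, and hit it with curl.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 1 -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost"
openssl s_server -accept 8443 -cert /tmp/demo.crt -key /tmp/demo.key -www &
SERVER_PID=$!
sleep 1

# Same failure your CI sees: the chain doesn't anchor in any trusted root.
curl --silent https://localhost:8443/ || echo "verify failed, as expected"

# And the fix is always some form of this: tell the client about the CA.
curl --silent --cacert /tmp/demo.crt https://localhost:8443/ >/dev/null \
  && echo "trusted once the CA is known"

kill $SERVER_PID
```

The second curl succeeds because a self-signed cert acts as its own CA, which is exactly the situation your CI runner is missing.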
Stop Using NODE_TLS_REJECT_UNAUTHORIZED=0
The most common "fix" is the worst one. Setting NODE_TLS_REJECT_UNAUTHORIZED=0 in Node.js,
or verify=False in Python requests, or -k in curl. You've just turned off TLS
verification entirely. Congratulations, your tests pass. They also pass when someone intercepts the
connection. They pass when the certificate is expired. They pass when the hostname doesn't match.
The test isn't testing anything anymore.
And the worst part: these flags have a habit of leaking into production configs. I've audited codebases where
VERIFY_SSL=false was set in a .env.example file that got copied into production
deployments. Three years running. Nobody noticed because everything "worked."
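One cheap defense is a lint step that greps for these flags before anything else runs. A sketch; the flag list is an assumption to extend for your stack, and the scratch fixture exists only so the demo has something to catch. In a real pipeline, the echo becomes exit 1:

```shell
# Scratch fixture so the guard has something to catch in this demo.
mkdir -p /tmp/tls-lint-demo && cd /tmp/tls-lint-demo
echo "NODE_TLS_REJECT_UNAUTHORIZED=0" > ci.yml

# The guard itself: flag any verification-killing settings in the tree.
if grep -rnE "NODE_TLS_REJECT_UNAUTHORIZED=0|verify=False|VERIFY_SSL=false|rejectUnauthorized: *false" .; then
  echo "TLS verification is disabled somewhere above" >&2   # exit 1 in real CI
fi
```

It's crude, but it catches exactly the .env.example copy-paste failure mode described above.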
Generating Certs That Actually Work in CI
The proper approach takes maybe ten extra minutes to set up. You create a CA, generate certs signed by that CA, and inject the CA into your CI runner's trust store. Not glamorous, but it works correctly.
#!/bin/bash
# generate-test-certs.sh
# run this once, commit the certs to your repo (yes really, they're test certs)
# Create the CA key and cert
openssl genrsa -out test-ca.key 2048
openssl req -x509 -new -nodes -key test-ca.key \
  -sha256 -days 3650 \
  -out test-ca.crt \
  -subj "/CN=CI Test CA/O=TestOrg"
# Generate a server cert signed by our CA
openssl genrsa -out server.key 2048
openssl req -new -key server.key \
  -out server.csr \
  -subj "/CN=localhost"
# the SAN extension matters more than you'd think
cat > ext.cnf << EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage=digitalSignature, keyEncipherment
subjectAltName=@alt_names
[alt_names]
DNS.1 = localhost
DNS.2 = *.test.internal
IP.1 = 127.0.0.1
EOF
openssl x509 -req -in server.csr \
  -CA test-ca.crt -CAkey test-ca.key \
  -CAcreateserial -out server.crt \
  -days 3650 -sha256 \
  -extfile ext.cnf
Notice the SAN (Subject Alternative Name) section. Chrome stopped accepting certificates without SANs back in 2017. Plenty of other tools followed. If your cert only has a CN and no SAN, modern TLS libraries will reject it. OpenSSL won't warn you about this when generating the cert. It just silently creates something that half your stack won't trust.
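Because of that silence, it's worth a quick sanity check before wiring the certs into CI, run from the directory containing the generated files:

```shell
# Should print "server.crt: OK" -- the CA actually signed this cert.
openssl verify -CAfile test-ca.crt server.crt

# Should list the DNS and IP entries from ext.cnf. If this prints
# nothing, the -extfile step didn't take and modern clients will balk.
openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"
```

Two commands, and they catch the two most common generation mistakes: a cert signed by the wrong key and a cert with no SANs.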
Injecting the CA Into Different CI Environments
Every CI platform and every base image has its own way of handling trusted CAs. There is no universal standard, which is frankly annoying.
For Debian/Ubuntu-based runners (GitHub Actions, most GitLab runners):
# In your CI config
- name: Trust test CA
  run: |
    sudo cp test-ca.crt /usr/local/share/ca-certificates/test-ca.crt
    sudo update-ca-certificates
Alpine (common in Docker-based CI):
# minimal Alpine images need the ca-certificates package first
apk add --no-cache ca-certificates
cp test-ca.crt /usr/local/share/ca-certificates/
update-ca-certificates
# or, without the package, append directly to the bundle:
cat test-ca.crt >> /etc/ssl/certs/ca-certificates.crt
But here's where it gets messy. Some tools don't use the system trust store at all. Node.js, for instance,
bundles its own CA list compiled from Mozilla's root store. You need NODE_EXTRA_CA_CERTS:
export NODE_EXTRA_CA_CERTS=./test-ca.crt
Python's requests library uses certifi, which also bundles its own roots.
You either set REQUESTS_CA_BUNDLE or patch it at the session level. Java has its own
keystore, cacerts, and you need keytool to import into it. Go uses the system
store on Linux but has its own fallback paths.
Every. Language. Different.
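In practice the least painful pattern is a single CI step that points every runtime at the same file. The variable names below are the ones each tool actually reads; the path is an example. Note that REQUESTS_CA_BUNDLE and SSL_CERT_FILE replace the default bundle rather than extend it, so if your tests also hit public endpoints, point them at a file that includes the public roots too:

```shell
CA=/usr/local/share/ca-certificates/test-ca.crt

export NODE_EXTRA_CA_CERTS="$CA"   # Node.js: appended to its bundled Mozilla roots
export REQUESTS_CA_BUNDLE="$CA"    # Python requests: replaces certifi's bundle
export SSL_CERT_FILE="$CA"         # OpenSSL-linked tools (Ruby, Python ssl module)
export CURL_CA_BUNDLE="$CA"        # curl

# Java has no env var for this; import into the bundled keystore (JDK 9+):
# keytool -importcert -cacerts -alias test-ca -file "$CA" -noprompt -storepass changeit
```

Put this in a shared CI step or a sourced script so every job gets the same view of the trust store.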
A Docker Compose Setup That Doesn't Suck
Most integration test setups in CI involve spinning up services with Docker Compose. The pattern that works best is mounting certs as volumes and having each service pick them up.
# docker-compose.test.yml
services:
  api:
    build: .
    volumes:
      - ./certs/server.crt:/etc/ssl/server.crt:ro
      - ./certs/server.key:/etc/ssl/server.key:ro
    environment:
      - TLS_CERT_PATH=/etc/ssl/server.crt
      - TLS_KEY_PATH=/etc/ssl/server.key

  test-runner:
    build:
      context: .
      dockerfile: Dockerfile.test
    volumes:
      - ./certs/test-ca.crt:/usr/local/share/ca-certificates/test-ca.crt:ro
    environment:
      - NODE_EXTRA_CA_CERTS=/usr/local/share/ca-certificates/test-ca.crt
      - API_URL=https://api:443
    depends_on:
      - api
The :ro mount flag is a small thing but worth mentioning. Read-only mounts prevent your
tests from accidentally modifying the certs during a run. Sounds paranoid until it happens to you.
The "Should I Commit Test Certs to Git?" Debate
Short answer: yes, for test-only certs.
Longer answer: these are not production secrets. They're test fixtures, no different from a seed database or a mock API response. The CA key is the only sensitive part, and it only matters if the test CA ends up in a real trust store: a machine that trusts your test CA can be man-in-the-middled by anyone holding the committed key. Keep the CA confined to CI runners and test containers, and the threat model is essentially zero.
Generating certs on every CI run is technically more "secure" but adds complexity and flakiness. OpenSSL version differences between runners can produce subtly different certs. One team I worked with had their tests randomly fail because the CI runner got upgraded from Ubuntu 20.04 to 22.04, which shipped a newer OpenSSL that changed the default signature algorithm. Committed test certs avoid all of that.
Just put them in a certs/ or test/fixtures/tls/ directory and add a README
explaining what they are. Future you will appreciate it.
Cert Expiry in Test Fixtures
Set your test certs to expire in 10 years. Seriously. The default OpenSSL expiry is 30 days. If you
forget to override -days, your CI starts failing a month after you set this up and nobody
remembers why.
A logistics company I consulted for had exactly this problem, except it was worse. They generated certs with a 1-year validity, committed them, and then 14 months later (because the project was shelved and revived) every single integration test failed. The error messages pointed at network issues. It took someone two days to realize the test certs had expired.
# bad: default 30 days
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt

# good: 10 years, you'll rewrite this service three times before it expires
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 3650 -sha256
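A companion guard worth the few lines: have CI check fixture expiry explicitly, so the failure names the real problem instead of masquerading as a network error. A sketch, assuming the fixtures live in a certs/ directory; 2592000 seconds is 30 days:

```shell
# Fail loudly if any committed test cert expires within 30 days.
for cert in certs/*.crt; do
  [ -f "$cert" ] || continue            # no certs dir, nothing to check
  if ! openssl x509 -in "$cert" -checkend 2592000 -noout; then
    echo "$cert expires within 30 days -- regenerate the test fixtures" >&2
    exit 1
  fi
done
```

openssl x509 -checkend exits non-zero when the cert will expire within the given window, which is exactly the shape a CI step wants.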
Testing Certificate Validation Itself
Here's something most teams skip entirely. If your application does certificate validation (and it should), you need negative tests too. Tests that verify your app correctly rejects bad certs.
// expired-cert.test.ts
import { createServer } from 'https';
import { readFileSync } from 'fs';
import type { AddressInfo } from 'net';

describe('TLS validation', () => {
  it('rejects expired certificates', async () => {
    // server with an intentionally expired cert
    const server = createServer({
      cert: readFileSync('./test/fixtures/tls/expired-server.crt'),
      key: readFileSync('./test/fixtures/tls/expired-server.key'),
    });
    await new Promise<void>(resolve => server.listen(0, resolve));
    const { port } = server.address() as AddressInfo;

    // your HTTP client should throw here
    await expect(
      fetch(`https://localhost:${port}/health`)
    ).rejects.toThrow();

    server.close();
  });

  it('rejects wrong hostname', async () => {
    // cert is valid but issued for "other.example.com"
    const server = createServer({
      cert: readFileSync('./test/fixtures/tls/wrong-host.crt'),
      key: readFileSync('./test/fixtures/tls/wrong-host.key'),
    });
    await new Promise<void>(resolve => server.listen(0, resolve));
    const { port } = server.address() as AddressInfo;

    await expect(
      fetch(`https://localhost:${port}/health`)
    ).rejects.toThrow();

    server.close();
  });
});
Generate a few purpose-built bad certs: one expired, one with a wrong hostname, one signed by an untrusted CA. Commit them alongside your good test certs. These negative tests catch regressions where someone accidentally loosens TLS verification, which happens more often than you'd expect during "quick fixes."
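One way to mint those fixtures. The expired one is the awkward case, since a plain openssl x509 -req can't backdate a cert; the sketch below assumes the faketime utility is available for that step (and OpenSSL 1.1.1+ for -addext), skipping it gracefully otherwise:

```shell
# Expired: generate "in the past" so it's already dead today.
if command -v faketime >/dev/null; then
  faketime '2020-01-01' openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout expired-server.key -out expired-server.crt -days 1 \
    -subj "/CN=localhost" -addext "subjectAltName=DNS:localhost"
fi

# Wrong hostname: valid dates, but the SAN doesn't cover localhost.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout wrong-host.key -out wrong-host.crt -days 3650 \
  -subj "/CN=other.example.com" -addext "subjectAltName=DNS:other.example.com"

# Untrusted CA: perfectly valid cert, just never added to any trust store.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout untrusted-server.key -out untrusted-server.crt -days 3650 \
  -subj "/CN=localhost" -addext "subjectAltName=DNS:localhost"
```

The file names here match the fixtures referenced in the tests above; adjust paths to wherever your fixtures live.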
GitHub Actions Gotcha
One specific thing about GitHub Actions that trips people up: the services containers
start before your checkout step runs. So if your service needs a cert that lives in your repo, you
can't use the services block directly. You have to start the service manually after checkout.
# this won't work, certs aren't available yet
jobs:
  test:
    services:
      api:
        image: my-api:latest
        # can't mount repo files here

# do this instead
jobs:
  test:
    steps:
      - uses: actions/checkout@v4
      - name: Start API with test certs
        run: docker compose -f docker-compose.test.yml up -d
      - name: Run tests
        run: npm test
        env:
          NODE_EXTRA_CA_CERTS: ./certs/test-ca.crt
Small thing. Costs about an hour of debugging if you don't know it.
Keep Your Test TLS Close to Production TLS
The whole point of using proper self-signed certs in CI instead of disabling verification is to catch TLS issues before they hit production. So make your test setup mirror production as closely as possible. Use the same TLS version constraints. Use the same cipher suites. If production enforces TLS 1.3, your test server should too.
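One self-contained way to assert the floor, sketched with openssl's built-in test server; the port and file names are throwaway, and the same s_client probe works against your real test container:

```shell
# Start a TLS 1.3-only server with a throwaway cert...
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/floor.key \
  -out /tmp/floor.crt -days 1 -subj "/CN=localhost"
openssl s_server -accept 8444 -cert /tmp/floor.crt -key /tmp/floor.key \
  -tls1_3 -quiet &
PID=$!
sleep 1

# ...then confirm a TLS 1.2 handshake is refused. If this succeeds,
# the test TLS config has drifted below the production floor.
if openssl s_client -connect localhost:8444 -tls1_2 </dev/null >/dev/null 2>&1; then
  echo "server accepted TLS 1.2 -- config drift" >&2
else
  echo "TLS 1.2 correctly refused"
fi

kill $PID
```

Run the probe against the compose stack from earlier and it becomes a regression test for your TLS configuration itself.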
Otherwise you're just adding ceremony without actually catching bugs. And nobody needs more ceremony in their CI pipeline.