DevOps

Setting Up Trusted HTTPS in Local Development Without the Pain

A practical guide to generating and trusting self-signed certificates for local development, covering mkcert, OpenSSL, and Docker workflows.

CertGuard Team · 7 min read

The localhost HTTPS Problem

Most web applications eventually need HTTPS in development. Maybe the app uses secure cookies, or the frontend calls an API that enforces HSTS, or the OAuth provider flat-out refuses to redirect to an HTTP URL. Whatever the reason, the moment arrives: local dev needs TLS.

The lazy fix — clicking past browser warnings — works until it doesn't. Service workers won't register. Mixed content blocks silently break features. And developers build muscle memory around ignoring security warnings, which is exactly the wrong habit to cultivate.

Why Self-Signed Certificates Get a Bad Reputation

A self-signed certificate is technically identical to any other X.509 cert, minus one thing: no trusted CA vouches for it. Browsers and HTTP clients don't inherently distrust the cryptography — they distrust the identity claim. That distinction matters.

In production, this is a real problem. In development, identity verification against a public CA is meaningless anyway. The machine talking to localhost is... itself. So the actual challenge isn't security — it's getting the trust chain right so tools stop complaining.

The Fast Path: mkcert

mkcert is a zero-config tool that creates a local Certificate Authority, installs it into system and browser trust stores, and then issues certificates signed by that CA. The result: genuinely trusted HTTPS on localhost with two commands.

# Install (macOS)
brew install mkcert
mkcert -install

# Generate cert for local domains
mkcert localhost 127.0.0.1 ::1 myapp.local
# Creates ./localhost+3.pem and ./localhost+3-key.pem

That's it. No OpenSSL incantations, no manually importing root CAs, no per-browser configuration. The generated files work with Node.js, nginx, Caddy, or anything that accepts PEM files.

A Node.js example using these certs:

const https = require('https');
const fs = require('fs');

const server = https.createServer({
  key: fs.readFileSync('./localhost+3-key.pem'),
  cert: fs.readFileSync('./localhost+3.pem'),
}, (req, res) => {
  res.writeHead(200);
  res.end('Hello over TLS');
});

server.listen(3000, () => {
  console.log('https://localhost:3000');
});

Chrome, Firefox, curl — everything trusts it. No flags, no exceptions, no warnings.

When You Need OpenSSL Directly

Sometimes mkcert isn't an option: CI environments, restricted machines, or situations where installing a local CA isn't desirable. OpenSSL can generate a self-signed cert in one command, though trusting it requires manual steps.

# Generate a self-signed cert valid for 365 days
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 365 \
  -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,IP:127.0.0.1"

The subjectAltName extension is critical. Modern browsers ignore the Common Name (CN) field for hostname validation — they only check Subject Alternative Names. Skip the SAN and Chrome will reject the cert even after you manually trust it. This trips people up constantly.
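Before blaming the browser, it's worth confirming the SAN actually made it into the cert. A self-contained check (repeating the generation command from above so the snippet runs standalone):

```shell
# Repeat the generation step from above
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 365 \
  -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,IP:127.0.0.1"

# Print just the Subject Alternative Name extension
openssl x509 -in cert.pem -noout -ext subjectAltName
```

Missing or misspelled names show up here immediately, with no round-trip through the browser's certificate viewer.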

To trust this cert on macOS:

# Add to system keychain as trusted
sudo security add-trusted-cert -d -r trustRoot \
  -k /Library/Keychains/System.keychain cert.pem

On Linux, copy it to /usr/local/share/ca-certificates/ with a .crt extension (update-ca-certificates ignores other extensions) and run sudo update-ca-certificates. On Windows, import it into the Trusted Root Certification Authorities store via certmgr.msc.

Docker and Self-Signed Certs

Docker adds a layer of complexity because containers have their own trust stores. A cert trusted on the host means nothing inside a container unless explicitly added.

The cleanest approach: mount the certs and update the trust store at container startup.

# docker-compose.yml
services:
  app:
    build: .
    volumes:
      - ./certs:/certs:ro
    environment:
      - NODE_EXTRA_CA_CERTS=/certs/rootCA.pem

  nginx:
    image: nginx:alpine
    volumes:
      - ./certs/localhost.pem:/etc/nginx/ssl/cert.pem:ro
      - ./certs/localhost-key.pem:/etc/nginx/ssl/key.pem:ro
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - "443:443"

The NODE_EXTRA_CA_CERTS environment variable tells Node.js to trust additional CA certificates without modifying the system store. Handy for containers where you don't want to run update-ca-certificates on every startup.

For non-Node containers that make HTTPS calls to other local services:

# In Dockerfile
COPY certs/rootCA.pem /usr/local/share/ca-certificates/local-dev-ca.crt
RUN update-ca-certificates
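The snippet above assumes a Debian/Ubuntu base image, where update-ca-certificates ships by default. On Alpine it comes from the ca-certificates package, so a hypothetical Alpine Dockerfile would look like:

```dockerfile
FROM alpine:3.19
# update-ca-certificates is provided by this package on Alpine
RUN apk add --no-cache ca-certificates
COPY certs/rootCA.pem /usr/local/share/ca-certificates/local-dev-ca.crt
RUN update-ca-certificates
```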

Handling Multiple Local Domains

Microservices architectures often mean multiple local services that need to talk to each other over TLS. Running api.local, auth.local, and app.local simultaneously creates a mini PKI problem.

mkcert handles this elegantly — generate one cert covering all names:

mkcert api.local auth.local app.local "*.local"

Pair this with entries in /etc/hosts (or a tool like dnsmasq) and each service resolves to 127.0.0.1 with a valid cert. No wildcard DNS hacks required.
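The /etc/hosts side of that pairing is a single line (names taken from the mkcert command above):

```
# /etc/hosts
127.0.0.1 api.local auth.local app.local
```

Note that /etc/hosts doesn't support wildcards, which is exactly where dnsmasq earns its keep for the `*.local` case.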

For teams, commit the generated certificates to the repo (yes, really — these are dev-only certs with no production value) and document the one-time mkcert -install step in the README. New developers get working HTTPS after a single command.

Common Pitfalls

Expired dev certs. Self-signed certs expire too. Generate them with a long validity period (3-5 years) or add cert generation to your project's setup script. Nothing wastes time quite like debugging a failing integration test that turns out to be an expired localhost cert.
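A cheap guard against this: openssl's -checkend flag exits non-zero when a cert expires within the given number of seconds, so a setup script can regenerate proactively. A sketch using a throwaway cert so it runs standalone (file names are examples):

```shell
# Throwaway cert just so this snippet is self-contained
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout dev-key.pem -out dev-cert.pem -days 365 \
  -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost"

# Exits 1 (and says so) if the cert expires within the next 30 days
openssl x509 -in dev-cert.pem -noout -checkend $((30*24*3600))
# → Certificate will not expire
```

Wiring that check into the setup script's if condition turns a silent expiry into an automatic regeneration.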

Hardcoded rejectUnauthorized: false. This shows up in Node.js codebases as a "quick fix" for self-signed cert errors. The problem: it tends to migrate into production code. Use NODE_EXTRA_CA_CERTS instead. The app code stays identical between environments.

Forgetting to include SANs. Already mentioned, but worth repeating: Chrome 58+ and all modern browsers require the Subject Alternative Name extension. A cert carrying only a CN will be rejected by browsers while still working with older curl builds, which makes for a confusing debugging session.

Browser-specific trust stores. Firefox maintains its own certificate store, separate from the OS. On Linux, trusting a cert at the system level won't affect Firefox unless you also import it through about:preferences#privacy or set security.enterprise_roots.enabled to true in about:config. mkcert handles this automatically, which is another reason to prefer it.
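For managed machines there's a third option: Firefox reads an enterprise policies.json (placed in its distribution directory; exact path varies by platform) that tells it to trust the OS store. A minimal sketch:

```json
{
  "policies": {
    "Certificates": {
      "ImportEnterpriseRoots": true
    }
  }
}
```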

A Practical Setup Script

Wrapping cert generation into a project script removes friction for the whole team:

#!/bin/bash
# scripts/setup-certs.sh
set -euo pipefail

CERT_DIR="./certs"
DOMAINS="localhost 127.0.0.1 ::1 app.local api.local"

if [ ! -f "$CERT_DIR/localhost.pem" ]; then
  echo "Generating development certificates..."
  mkdir -p "$CERT_DIR"
  mkcert -cert-file "$CERT_DIR/localhost.pem" \
         -key-file "$CERT_DIR/localhost-key.pem" \
         $DOMAINS
  echo "Done. Certs written to $CERT_DIR/"
else
  echo "Certs already exist in $CERT_DIR/, skipping."
fi

Add this to package.json as a postinstall hook or call it from a Makefile. First-time setup takes seconds; subsequent runs skip the generation.
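The package.json hook might look like this (script name matches the path assumed above):

```json
{
  "scripts": {
    "postinstall": "bash scripts/setup-certs.sh"
  }
}
```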

When Self-Signed Isn't Enough

Certain scenarios genuinely need a publicly trusted certificate, even in development. Testing against third-party webhooks, mobile app development with certificate pinning, or validating ACME flows all require real certs. Tools like ngrok or Cloudflare Tunnel provide trusted HTTPS for local services without any cert management — useful as a complement to self-signed setups rather than a replacement.

For the 90% case, though, a properly configured self-signed setup eliminates an entire category of development friction. The ten minutes spent setting it up correctly save hours of debugging mysterious TLS errors, prevent bad security habits from forming, and keep local development environments closer to production — which is the whole point.