Security

TLS 1.3 Zero-RTT Is Fast. It Also Breaks Your Security Model.

Zero-RTT resumption in TLS 1.3 trades safety for speed. Most teams enable it without understanding the replay risks. Here is what actually goes wrong.

CertGuard Team · 7 min read

The Speed Tax Nobody Reads the Fine Print On

When TLS 1.3 shipped, the headline feature everyone latched onto was 0-RTT resumption. Zero round trips. Instant encrypted connections for returning clients. CDN providers marketed it aggressively. Cloudflare enabled it by default. And engineers everywhere thought: free performance. No tradeoffs.

Except there's a massive tradeoff, and it's buried in section 8 of RFC 8446 under language that reads like a legal disclaimer.

0-RTT data is replayable.

Wait, What Does "Replayable" Actually Mean Here?

In a normal TLS 1.3 handshake (1-RTT), the client and server exchange fresh key material before any application data flows. An attacker who captures the encrypted traffic can't replay it because the keys are unique to that session. Standard stuff.

0-RTT works differently. The client uses a pre-shared key from a previous session to encrypt application data and sends it alongside the ClientHello, before the server has responded at all. The server decrypts it using the cached session state. No round trip needed.

But here's the problem. That early data isn't protected by any value the server contributes to this specific connection. A network attacker who records the ClientHello and its 0-RTT payload can replay it verbatim to the server. The server will accept it, decrypt it, and process it. Again.

If your 0-RTT data was a GET request for a static page, who cares. If it was a POST that transfers money, creates an order, or modifies state, you just processed it twice.
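A toy sketch makes the asymmetry concrete. This is not the real TLS 1.3 key schedule (which uses HKDF with specific labels); it's a stand-in KDF showing the one property that matters: the early-data key is derived entirely from values the attacker already recorded, while the 1-RTT handshake key mixes in fresh server randomness.

```python
import hashlib
import os

def derive(label: bytes, *inputs: bytes) -> bytes:
    """Toy KDF standing in for TLS 1.3's HKDF-based key schedule."""
    h = hashlib.sha256(label)
    for part in inputs:
        h.update(part)
    return h.digest()

psk = b"resumption-psk-from-previous-session"
client_hello = b"ClientHello||psk_identity||client_random"

# 0-RTT: the early traffic key depends ONLY on bytes the attacker captured.
# Replaying the recorded ClientHello makes the server derive the same key,
# so the recorded early-data records decrypt and get processed again.
early_key_original = derive(b"early", psk, client_hello)
early_key_replayed = derive(b"early", psk, client_hello)
assert early_key_original == early_key_replayed

# 1-RTT: the handshake key also mixes a fresh server-chosen value,
# so keys from a replayed capture can never match a new connection's keys.
hs_key_conn1 = derive(b"handshake", psk, client_hello, os.urandom(32))
hs_key_conn2 = derive(b"handshake", psk, client_hello, os.urandom(32))
assert hs_key_conn1 != hs_key_conn2
```

The asserts are the whole point: identical inputs in, identical early key out, and nothing in the 0-RTT path forces an input the attacker can't replay.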

Why the Protocol Can't Fix This

This isn't a bug. It's a fundamental consequence of how 0-RTT achieves its speed. Without a server-contributed nonce in the key derivation, there's no way for the protocol itself to distinguish a legitimate first submission from a replayed copy. The TLS working group knew this. They shipped it anyway, with warnings.

Some implementations try to mitigate it with single-use session tickets. The idea: each ticket is valid for one resumption only. An attacker replays it, the server rejects the duplicate ticket.

Sounds clean. Falls apart at scale.

If you're running a single server, single-use tickets work. But behind a load balancer with 50 backends? You need a globally synchronized ticket store with sub-millisecond consistency. Miss the sync window by even a little, and two backends both accept the same ticket. Most distributed deployments just accept a time-based window instead, which means replays within that window still succeed.
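Here's a minimal model of why local anti-replay state fails behind a load balancer. The `Backend` class is hypothetical; each instance keeps its own "seen tickets" set, the way real backends do without a shared store.

```python
class Backend:
    """One server behind a load balancer, with only a LOCAL anti-replay set."""

    def __init__(self) -> None:
        self.seen: set[str] = set()

    def accept_early_data(self, ticket: str) -> bool:
        if ticket in self.seen:
            return False      # duplicate detected locally: reject the replay
        self.seen.add(ticket)
        return True

ticket = "session-ticket-abc123"
backend_a, backend_b = Backend(), Backend()

# The load balancer routes the original to one backend and the replay
# to another. Neither has seen the ticket before, so both accept it.
assert backend_a.accept_early_data(ticket) is True   # legitimate request
assert backend_b.accept_early_data(ticket) is True   # replay ALSO accepted

# Single-use tickets only hold up with one shared, strongly consistent store:
shared = Backend()
assert shared.accept_early_data(ticket) is True
assert shared.accept_early_data(ticket) is False     # replay rejected
```

The shared store in the last three lines is exactly the globally synchronized component that's expensive at 50 backends, which is why deployments fall back to time-windowed acceptance.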

Real World: How This Goes Wrong

A team I worked with ran an API behind Cloudflare with 0-RTT enabled. Their POST endpoint for webhook delivery wasn't idempotent. An attacker sitting on the network path between the client and Cloudflare's edge recorded TLS handshakes and replayed the 0-RTT portions. The webhooks fired twice. Sometimes three times if the attacker was persistent with multiple edge PoPs.

They only noticed because a downstream payment processor flagged duplicate transaction IDs. Not TLS errors. Not server logs. A billing anomaly.

The fix wasn't complicated. They made the endpoint idempotent with a unique request token. But the point is: TLS gave them a false sense of security. "We're on 1.3, everything is encrypted and safe" was the assumption. Nobody had read the 0-RTT section.
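The fix they shipped looks roughly like this sketch (names and the in-memory dict are illustrative; a real deployment would back the dedup table with a database or cache shared across workers):

```python
# token -> stored response; stands in for a persistent deduplication table
processed: dict[str, str] = {}

def handle_webhook(token: str, payload: dict) -> str:
    """Process a webhook at most once per client-supplied request token."""
    if token in processed:
        # Replay (0-RTT or otherwise): return the original response,
        # perform NO side effects a second time.
        return processed[token]
    result = f"charged:{payload['amount']}"   # the state-changing work, done once
    processed[token] = result
    return result

first  = handle_webhook("tok-42", {"amount": 100})
replay = handle_webhook("tok-42", {"amount": 100})
assert first == replay          # replay gets the same answer
assert len(processed) == 1      # but the charge happened exactly once
```

Note that this protects against every replay source, not just 0-RTT: client retries, proxy retransmits, and duplicate webhook deliveries all collapse into one processed request.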

Who Should Actually Use 0-RTT

The safe use cases are narrower than most people think:

  • Static content delivery. GET requests for cacheable resources. CDNs love this, and it's genuinely safe.
  • Idempotent APIs where replaying a request produces the same result. Reading data, checking status, fetching configs.
  • Anything where you've built replay protection at the application layer already. Nonces, request IDs, deduplication tables.

That's it. If your server processes state-changing requests in 0-RTT data without application-level replay protection, you have a vulnerability. Full stop.

Configuring Your Way Out

Most TLS libraries let you control 0-RTT behavior on both sides. In OpenSSL, the server calls SSL_CTX_set_max_early_data() to set the maximum bytes of early data it'll accept. Set it to 0 and you've disabled 0-RTT entirely.

// Disable 0-RTT on the server side (OpenSSL)
SSL_CTX_set_max_early_data(ctx, 0);

// Or if you want it, cap the size and handle replays yourself
SSL_CTX_set_max_early_data(ctx, 16384);
// then in your app: check SSL_get_early_data_status()
// and enforce idempotency for anything received as early data

Nginx has ssl_early_data on|off. When enabled, it sets the $ssl_early_data variable to "1" for requests that arrived as 0-RTT. You can use that to reject non-safe methods:

# nginx.conf
ssl_early_data on;

# Block state-changing methods in 0-RTT
if ($ssl_early_data = 1) {
    set $block_early "yes";
}
if ($request_method !~ ^(GET|HEAD|OPTIONS)$) {
    set $block_early "${block_early}-unsafe";
}
if ($block_early = "yes-unsafe") {
    return 425;  # Too Early
}

HTTP status 425 (Too Early) exists specifically for this. It tells the client: retry this request after the handshake completes. Browsers and well-behaved HTTP clients handle it automatically. The request goes through on the full 1-RTT handshake instead, adding maybe 10-30ms. Acceptable for a POST.

The CDN Complication

If you're behind a CDN, the 0-RTT decision happens at their edge, not your origin. Cloudflare accepts 0-RTT by default and forwards it to your origin as a normal request. They add a Cf-0rtt-Unique header so you can detect it, but you have to actually check for it.

Fastly and AWS CloudFront are more conservative. Fastly doesn't support 0-RTT at all as of writing. CloudFront strips early data for non-GET requests. Know what your CDN does here. Don't assume.

Testing for Replay Vulnerability

You can test this yourself with a packet capture and some scripting. Connect to your server, complete a handshake, note the session ticket. Then craft a new ClientHello reusing that ticket with 0-RTT data containing a POST request. Send it twice. If your server processes both, you're vulnerable.
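One practical way to do this is `openssl s_client`, which since OpenSSL 1.1.1 can save a session with `-sess_out`, resume it with `-sess_in`, and send a file as early data with `-early_data`. A sketch of the two commands, built as argv lists you could hand to `subprocess.run` (the host and file names are placeholders; run this only against staging):

```python
import shlex

HOST = "staging.example.com"   # placeholder staging host -- never production

# Step 1: full TLS 1.3 handshake, saving the session ticket to disk.
save_session = shlex.split(
    f"openssl s_client -connect {HOST}:443 -tls1_3 -sess_out session.pem"
)

# Step 2: resume with the saved ticket and send a POST as 0-RTT early data.
# post_request.txt holds the raw HTTP request bytes. Run this twice:
# if the server processes the request both times, you're vulnerable.
replay = shlex.split(
    f"openssl s_client -connect {HOST}:443 -tls1_3 -sess_in session.pem "
    "-early_data post_request.txt"
)

assert save_session[0] == replay[0] == "openssl"
assert "-early_data" in replay and "-sess_in" in replay
```

Watch the second run's output for "Early data was accepted" and check your application logs for the duplicate side effect; acceptance at the TLS layer plus a second state change is the vulnerability.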

There are also tools like tlsfuzzer that have specific 0-RTT replay test cases. Run them against staging, not production. Obviously.

The Bigger Lesson

TLS 1.3 is a better protocol than 1.2 in almost every way. But 0-RTT is the one place where the protocol designers explicitly chose speed over safety and put the burden on application developers to fill the gap. Most application developers don't know about the gap.

If you're doing certificate monitoring (and you should be), add 0-RTT configuration to your audit checklist. It's not a certificate issue per se, but it sits right next to your TLS config and it's the kind of thing that silently goes wrong for months before anyone notices.

Check your servers. Check your CDN settings. Make your state-changing endpoints idempotent regardless, because replay protection is good hygiene even outside the 0-RTT context. And read section 8 of the RFC. It's only a few pages, and it might save you from a very confusing incident at 3 AM.