Assertions are not the problem; your verification process is

By Pekka Enberg — Nov 21st, 2025

There's been a lot of discussion about error handling since the Cloudflare incident this week, which took down significant parts of the internet. The incident was a cascading failure: an upgrade to a database system changed the format of a "feature file", which caused a proxy engine using that file to crash because it did not expect the change. However, much of the focus was on the Option::unwrap() call in their proxy engine written in Rust, which many saw as the culprit.

For those not familiar with Rust, the Option type represents a value that is either present or absent. The unwrap() method either returns the value or panics, crashing the program. In the Cloudflare case, the feature file grew larger than the proxy had preallocated memory for, but the code expected the preallocation to always succeed. The call to unwrap() therefore stopped the proxy, with catastrophic consequences, as the rest of the system was unable to recover.
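A minimal sketch of both behaviors (illustrative values, not Cloudflare's code):

```rust
fn main() {
    let present: Option<u32> = Some(42);
    assert_eq!(present.unwrap(), 42); // Some: unwrap() returns the value

    let absent: Option<u32> = None;
    // None: unwrap() panics with "called `Option::unwrap()` on a `None`
    // value", which here stops the whole program.
    absent.unwrap();
}
```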

In the many online discussions, I saw the following arguments repeated:

- Option::unwrap() should never be used in production code.
- The programmers should simply have handled the error.
- Crashing the program on an error is always wrong.

However, all of these arguments are wrong.

The Option::unwrap() method is indeed problematic, because people do use it to skip error handling, and many Rust examples encourage exactly that. The method is also poorly named: nothing in the name signals that it will stop the program when there is no value. (Some people suggested renaming it to unwrap_or_panic(), and I'd agree with that.) However, unwrap() does not exist to skip error handling. It is an assertion in disguise, there to prevent the program from entering an unknown state.
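To make the hidden assertion explicit, here is roughly what a call to unwrap() boils down to (a hand-written equivalent, not the standard library's actual source):

```rust
// Roughly what `value.unwrap()` does: assert that a value is present,
// and refuse to continue (panic) if that invariant does not hold.
fn unwrap_by_hand<T>(value: Option<T>) -> T {
    match value {
        Some(inner) => inner,
        None => panic!("invariant violated: expected Some, found None"),
    }
}
```

Seen this way, the question is not whether to use unwrap(), but whether the invariant it asserts is actually guaranteed to hold.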

But surely the programmers should have just handled the error? Absolutely. The invariant they introduced (whether on purpose or by accident) was clearly wrong: there was no way for them to guarantee that data from an external system would always be correct. However, arguing that programmers should handle every error is pointless, because we, as an industry, cannot produce defect-free programs, except in rare cases where there's a strong focus on verification and formal methods. Furthermore, the paradox of error handling is that the more error handling you have, the more likely it is that the error handling code itself has defects.

To build robust software systems, crashing the program on error is actually an excellent strategy in many cases. Of course, this does not mean you crash the program on every error. Your system needs to handle errors, especially when processing data from external sources. But you need to think the failure modes through, verify the state of your system, and stop execution the moment an invariant is violated. You need to do this because the alternatives are much worse.
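Here is a sketch of that distinction, loosely modeled on the incident (parse_feature_count, preallocate_features, and the MAX_FEATURES limit are hypothetical names, not Cloudflare's code): errors in external data are expected and propagated, while a violated internal invariant stops execution.

```rust
// A limit the rest of the system has been sized and verified for
// (the value is made up for illustration).
const MAX_FEATURES: usize = 200;

// External data is untrusted: both parse failures and out-of-range
// values are expected errors, handled and reported to the caller.
fn parse_feature_count(input: &str) -> Result<usize, String> {
    let count: usize = input
        .trim()
        .parse()
        .map_err(|e| format!("bad feature count: {e}"))?;
    if count > MAX_FEATURES {
        return Err(format!("feature count {count} exceeds limit {MAX_FEATURES}"));
    }
    Ok(count)
}

fn preallocate_features(input: &str) -> Result<Vec<u64>, String> {
    let count = parse_feature_count(input)?;
    let features = Vec::with_capacity(count);
    // Internal invariant: the capacity we asked for was granted. If this
    // ever fails, the program is in a state we never reasoned about, and
    // stopping beats continuing blindly.
    assert!(features.capacity() >= count, "preallocation invariant violated");
    Ok(features)
}
```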

To understand why, consider the model of how defects evolve into failures outlined by Andreas Zeller in "Why Programs Fail" (2009):

A defect is a programming error that can arise in various ways: logic errors, incorrect assumptions about system behavior, typos, misunderstood requirements, or race conditions. When defective code executes, it corrupts the system's state in ways the programmer never intended or anticipated. Without proper safeguards like assertions or invariant checks, these corrupted states can persist undetected, silently spreading their effects.

Not all corrupted states cause immediate harm. Some remain benign, never affecting the system's observable behavior. Others, however, act like an infection propagating through data structures, function calls, and system components, gradually corrupting more of the system's state. When this cascade of corruption finally breaches the boundary between internal state and external behavior, we witness a failure: the program crashes, produces incorrect output, or behaves erratically. These failures are what users experience as a "bug."
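A tiny illustration of the infection model (the Account type is hypothetical): the defect corrupts state silently, and only an invariant check turns the infection into an immediate, diagnosable stop instead of a delayed, mysterious failure.

```rust
struct Account {
    balance: i64, // invariant: never negative
}

impl Account {
    fn withdraw(&mut self, amount: i64) {
        // Defect: no check that amount <= balance.
        self.balance -= amount;
        // Without this check, a negative balance persists silently and
        // infects every later computation that reads it. With it, the
        // infection is detected the moment the invariant breaks.
        assert!(self.balance >= 0, "balance went negative: {}", self.balance);
    }
}

fn main() {
    let mut account = Account { balance: 100 };
    account.withdraw(50);  // fine
    account.withdraw(200); // defect executes; the assertion stops the spread
}
```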

The assertions in your code are your last line of defense, as Joran Greef from TigerBeetle keeps saying. They're there to prevent your systems from spiraling into a bad state with unexpected behavior, which could lead to catastrophic consequences. Your programs don't become more robust with fewer assertions (or by disabling them in production). They become robust via systematic error handling, strong invariants, and comprehensive verification. For example, fuzz testing, deterministic simulation testing, and formal methods are all verification strategies that can help you.
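As one concrete example, here is a minimal hand-rolled fuzz loop (in practice you'd reach for a real tool like cargo-fuzz or proptest; parse_percentage and its missing range check are made up for illustration). The program's own assertions act as the test oracle: the loop panics as soon as random input exposes the defect, which is the fuzzer doing its job before production traffic does.

```rust
// Hypothetical function under test. Defect: it parses external input
// but never rejects values above 100, relying on the assertion instead.
fn parse_percentage(input: &str) -> Option<u8> {
    let value: u8 = input.trim().parse().ok()?;
    // Invariant: a percentage never exceeds 100.
    assert!(value <= 100, "percentage out of range: {value}");
    Some(value)
}

fn main() {
    // xorshift64: a tiny deterministic pseudo-random generator, good
    // enough to sketch the idea without external crates.
    let mut seed: u64 = 0x9E37_79B9_7F4A_7C15;
    for _ in 0..100_000 {
        seed ^= seed << 13;
        seed ^= seed >> 7;
        seed ^= seed << 17;
        let input = (seed % 300).to_string(); // some inputs exceed 100
        // This run panics quickly: the fuzz loop finds the missing
        // range check long before any user would.
        let _ = parse_percentage(&input);
    }
}
```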

Of course, building robust software is hard, continuous work with no shortcuts. But it's the only way to write software that fails safely when, not if, our assumptions prove wrong. In other words, the assertions in your code crashing in production are not the problem; your verification process is.