Cryspen Series
On the Promises of 'High-Assurance' Cryptography (Feb 5)
We Found Bugs in Cryspen's Verified Code (Feb 12)
Even More Bugs in Cryspen's libcrux: ML-DSA (Feb 17)
Paper: Verification Theatre · Free talk (Feb 25): High Assurance Cryptography and the Ethics of Disclosure

Over the past two weeks, we have published eight vulnerabilities across Cryspen’s cryptographic libraries, including three bugs in formally verified code that Cryspen’s own documentation claims is correct.

We now report two additional findings in libcrux’s ML-DSA implementation—the post-quantum digital signature algorithm standardized as FIPS 204. Both are FIPS 204 specification violations.

Finding 1: ML-DSA Verifier Norm Check Is Dead Code

Location: libcrux-ml-dsa/src/ml_dsa_generic.rs, line 404.

The ML-DSA verifier checks whether the signer response vector $\mathbf{z}$ has an infinity norm below a certain bound. This is a critical security check: it ensures that $\mathbf{z}$ does not leak information about the secret key. FIPS 204, Algorithm 8 (ML-DSA.Verify_internal), step 13 specifies:

$$\text{return } \lVert\mathbf{z}\rVert_\infty < \gamma_1 - \beta \text{ and } \tilde{c} = \tilde{c}'$$

And Algorithm 7 (ML-DSA.Sign_internal), step 23 specifies:

$$\text{if } \lVert\mathbf{z}\rVert_\infty \geq \gamma_1 - \beta \text{ or } \lVert r_0\rVert_\infty \geq \gamma_2 - \beta \text{, then } (\mathbf{z}, \mathbf{h}) \leftarrow \bot$$

Both use $\gamma_1 - \beta$. The libcrux verifier uses the wrong bound:

// Line 402-405 (verifier, BUG):
if vector_infinity_norm_exceeds::<SIMDUnit>(
    &deserialized_signer_response,
    (2 << GAMMA1_EXPONENT) - BETA,
//   ^^^^^^^^^^^^^^^^^^^^^^^^ = 2·γ₁ - β
) {
    return Err(VerificationError::SignerResponseExceedsBoundError);
}

The expression 2 << GAMMA1_EXPONENT computes $2 \cdot 2^{\gamma_1\text{exp}} = 2\gamma_1$, not $\gamma_1$. Compare with the signer, which correctly uses 1 << GAMMA1_EXPONENT:

// Line 284 (signer, correct):
if vector_infinity_norm_exceeds::<SIMDUnit>(&mask, (1 << GAMMA1_EXPONENT) - BETA) {
//                                                  ^^^^^^^^^^^^^^^^^^^^^^^^ = γ₁ - β

The signer checks $\gamma_1 - \beta$. The verifier checks $2\gamma_1 - \beta$. One constant was doubled.
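The arithmetic of the doubled constant can be checked directly. A minimal sketch using the ML-DSA-44 constants from FIPS 204 (GAMMA1_EXPONENT = 17, BETA = 78); the names mirror libcrux's constants, but the program is standalone:

```rust
// ML-DSA-44 constants per FIPS 204 (illustrative, not imported from libcrux).
const GAMMA1_EXPONENT: i32 = 17;
const BETA: i32 = 78;

fn main() {
    let signer_bound = (1i32 << GAMMA1_EXPONENT) - BETA; // γ₁ − β (correct)
    let verifier_bound = (2i32 << GAMMA1_EXPONENT) - BETA; // 2γ₁ − β (buggy)
    assert_eq!(signer_bound, 130_994);
    assert_eq!(verifier_bound, 262_066);
    // The buggy bound exceeds the correct one by exactly γ₁.
    assert_eq!(verifier_bound - signer_bound, 1 << GAMMA1_EXPONENT);
}
```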

The Check Can Never Trigger

The signer response $\mathbf{z}$ is deserialized from a $\gamma_1$-encoded representation: each coefficient is encoded as a value in $[-\gamma_1, \gamma_1]$. This means that for any deserialized $\mathbf{z}$, $\lVert\mathbf{z}\rVert_\infty \leq \gamma_1$. The buggy bound is $2\gamma_1 - \beta$. Since $\beta > 0$:

$$\gamma_1 < 2\gamma_1 - \beta$$

The check vector_infinity_norm_exceeds(&deserialized_signer_response, (2 << GAMMA1_EXPONENT) - BETA) can never return true. The verifier’s norm check is dead code. For each ML-DSA parameter set:

| Parameter set | $\gamma_1$ | $\beta$ | Max $\lVert\mathbf{z}\rVert_\infty$ | Buggy bound ($2\gamma_1 - \beta$) | Correct bound ($\gamma_1 - \beta$) |
|---|---|---|---|---|---|
| ML-DSA-44 | $2^{17} = 131072$ | 78 | 131072 | 262066 | 130994 |
| ML-DSA-65 | $2^{19} = 524288$ | 196 | 524288 | 1048380 | 524092 |
| ML-DSA-87 | $2^{19} = 524288$ | 120 | 524288 | 1048456 | 524168 |

In every case, the maximum possible infinity norm ($\gamma_1$) is strictly less than the buggy bound ($2\gamma_1 - \beta$), so the check passes vacuously. The verifier unconditionally accepts any deserialized signer response, regardless of whether it satisfies the FIPS 204 requirement $\lVert\mathbf{z}\rVert_\infty < \gamma_1 - \beta$.
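The dead-code argument holds across all three parameter sets. The following arithmetic sketch (FIPS 204 constants, not the libcrux code) checks both halves of the claim: no decodable $\mathbf{z}$ can reach the buggy bound, while decodable values at or above the correct bound do exist:

```rust
fn main() {
    // (parameter set, γ₁ exponent, β) per FIPS 204.
    let params = [
        ("ML-DSA-44", 17u32, 78i64),
        ("ML-DSA-65", 19u32, 196i64),
        ("ML-DSA-87", 19u32, 120i64),
    ];
    for (name, gamma1_exponent, beta) in params {
        let gamma1 = 1i64 << gamma1_exponent; // max decodable |coefficient|
        let buggy_bound = (2i64 << gamma1_exponent) - beta; // 2γ₁ − β
        let correct_bound = gamma1 - beta; // γ₁ − β
        // The buggy rejection branch is unreachable...
        assert!(gamma1 < buggy_bound, "{name}: buggy check could fire");
        // ...but decodable values the correct check would reject exist.
        assert!(gamma1 >= correct_bound, "{name}: no rejectable values");
    }
}
```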

Concrete Impact

A conforming signer will never produce a signature with $\lVert\mathbf{z}\rVert_\infty \geq \gamma_1 - \beta$—the signing algorithm rejects such candidates and retries. But a malicious signer could produce signatures with $\gamma_1 - \beta \leq \lVert\mathbf{z}\rVert_\infty \leq \gamma_1$ that pass libcrux’s verifier but would be correctly rejected by any FIPS 204-conforming implementation.

This creates an interoperability hazard. A system using libcrux for verification would accept signatures that every other conforming implementation rejects.
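The divergence is easy to model. The sketch below replaces libcrux's `vector_infinity_norm_exceeds` and its SIMD machinery with a plain-slice norm test (reject when any $|c| \geq$ bound, matching the strict inequality in FIPS 204); the constants are for ML-DSA-44, and the function name is ours:

```rust
// Modeled norm check: true when ∥coeffs∥∞ ≥ bound.
fn infinity_norm_exceeds(coeffs: &[i32], bound: i32) -> bool {
    coeffs.iter().any(|&c| c.abs() >= bound)
}

fn main() {
    const GAMMA1_EXPONENT: i32 = 17; // ML-DSA-44
    const BETA: i32 = 78;
    // A malicious signer response with ∥z∥∞ in [γ₁ − β, γ₁]:
    let z = [0, 131_000, -5, 7];
    // The buggy bound (2γ₁ − β) never rejects it...
    assert!(!infinity_norm_exceeds(&z, (2 << GAMMA1_EXPONENT) - BETA));
    // ...while the FIPS 204 bound (γ₁ − β) does.
    assert!(infinity_norm_exceeds(&z, (1 << GAMMA1_EXPONENT) - BETA));
}
```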

Provenance

Introduced in commit 326c837a33 on January 7, 2025, by Jonas Schneider-Bensch, with the message “Further macro monomorphization of parameter sets.” The signer’s correct bound was already present in the codebase. The verifier’s bound was written incorrectly in the same refactoring pass.

Finding 2: Hint Deserialization Missing Final Bound Check

Location: libcrux-ml-dsa/src/encoding/signature.rs, lines 91–120.

ML-DSA signatures contain a hint vector $\mathbf{h}$ that encodes which coefficients of the high-order bits need adjustment. The hint is serialized as a list of indices followed by cumulative counts, with a maximum of $\omega$ total true hints across all polynomials. FIPS 204, Algorithm 21 (HintBitUnpack), step 4 specifies, for every row $i$ from 0 to $k - 1$:

$$\text{if } y[\omega + i] < \text{Index} \text{ or } y[\omega + i] > \omega \text{ then return } \bot$$

This checks the current cumulative count $y[\omega + i]$ against $\omega$ on every iteration. It ensures that no row’s cumulative count exceeds the allowed maximum.
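A direct transcription of the spec's check makes the per-iteration behavior concrete. This is a sketch of step 4 only, assuming `y` is the serialized hint ($\omega$ index bytes followed by $k$ count bytes) and using `None` to model $\bot$; function and variable names are ours, not libcrux's:

```rust
// FIPS 204 Algorithm 21, step 4: validate cumulative counts per row.
fn check_hint_counts(y: &[u8], omega: usize, k: usize) -> Option<()> {
    let mut index = 0usize;
    for i in 0..k {
        let count = y[omega + i] as usize;
        // Spec: if y[ω + i] < Index or y[ω + i] > ω, return ⊥.
        if count < index || count > omega {
            return None;
        }
        index = count;
    }
    Some(())
}

fn main() {
    // ML-DSA-44: ω = 80, k = 4. An overflowing count in the LAST row is
    // still rejected, because the check runs on every iteration.
    let mut y = vec![0u8; 80 + 4];
    y[80 + 3] = 81; // last cumulative count > ω
    assert!(check_hint_counts(&y, 80, 4).is_none());
}
```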

The libcrux implementation checks the wrong variable:

for i in 0..rows_in_a {
    let current_true_hints_seen = hint_serialized[max_ones_in_hint + i] as usize;

    if (current_true_hints_seen < previous_true_hints_seen)
        || (previous_true_hints_seen > max_ones_in_hint)
    //      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    //      Checks PREVIOUS iteration's count, not current.
    //      FIPS 204 checks: y[ω + i] > ω (i.e., current > max)
    {
        malformed_hint = true;
        break;
    }

    for j in previous_true_hints_seen..current_true_hints_seen {
        // ... process hint indices ...
        set_hint(out_hint, i, hint_serialized[j] as usize);
    }

    // ...
    previous_true_hints_seen = current_true_hints_seen;
}

The check uses previous_true_hints_seen > max_ones_in_hint instead of current_true_hints_seen > max_ones_in_hint. It validates the count from the previous iteration rather than the current one. For the last row (i = rows_in_a - 1), the current count is read, the hint indices are processed and applied via set_hint, and previous_true_hints_seen is updated—but the overflow check will not fire until the next iteration, which never comes.
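The off-by-one-iteration effect can be isolated in a small model of the bounds test, parameterized on whether it examines the previous or the current cumulative count (invented names, not the libcrux code):

```rust
// Returns true when the count sequence would be flagged as malformed.
fn counts_rejected(counts: &[u8], omega: usize, check_current: bool) -> bool {
    let mut previous = 0usize;
    for &c in counts {
        let current = c as usize;
        let checked = if check_current { current } else { previous };
        if current < previous || checked > omega {
            return true; // malformed_hint
        }
        previous = current;
    }
    false
}

fn main() {
    // ML-DSA-44: ω = 80, k = 4; only the last row's count overflows ω.
    let counts = [10u8, 20, 30, 81];
    // Checking the previous count, as libcrux does, never fires...
    assert!(!counts_rejected(&counts, 80, false));
    // ...checking the current count, as FIPS 204 requires, rejects it.
    assert!(counts_rejected(&counts, 80, true));
}
```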

The subsequent padding check (line 122) is supposed to verify that all remaining bytes in the hint section are zero:

for j in previous_true_hints_seen..max_ones_in_hint {
    if hint_serialized[j] != 0 {
        malformed_hint = true;
        break;
    }
}

But if previous_true_hints_seen equals or exceeds max_ones_in_hint after the last row, this range is empty and the loop body never executes. The padding check is silently skipped.
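A minimal demonstration of the empty-range skip, with values standing in for an ML-DSA-44 signature whose final count has overflowed past $\omega$:

```rust
fn main() {
    let max_ones_in_hint = 80usize; // ω for ML-DSA-44
    let previous_true_hints_seen = 84usize; // overflowed past ω
    let hint_serialized = [0xFFu8; 84]; // every "padding" byte is nonzero
    let mut malformed_hint = false;
    for j in previous_true_hints_seen..max_ones_in_hint {
        if hint_serialized[j] != 0 {
            malformed_hint = true;
            break;
        }
    }
    // 84..80 is an empty range: the loop body never ran, so the
    // nonzero padding was never inspected.
    assert!(!malformed_hint);
}
```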

Concrete Impact

A crafted signature can set the last row’s cumulative count to a value greater than $\omega$ (max_ones_in_hint). The inner loop will then iterate over bytes in the trailer region of the hint encoding, interpreting arbitrary byte values as hint indices and setting corresponding positions via set_hint. For each parameter set:

| Parameter set | $\omega$ (max_ones_in_hint) | Rows ($k$) | Trailer bytes |
|---|---|---|---|
| ML-DSA-44 | 80 | 4 | hint_serialized[80..84] |
| ML-DSA-65 | 55 | 6 | hint_serialized[55..61] |
| ML-DSA-87 | 75 | 8 | hint_serialized[75..83] |

The trailer bytes are the cumulative count bytes for each row. An attacker controls these values as part of the signature. By setting the last row’s count to max_ones_in_hint + rows_in_a (or any value up to 255), the inner loop reads the trailer bytes as hint indices, setting arbitrary hint positions in the last polynomial. This produces a hint vector that a FIPS 204-conforming implementation would reject.
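The shape of the malicious encoding for ML-DSA-44 ($\omega = 80$, $k = 4$) can be sketched as follows: zero counts for the first three rows and a final count of $\omega + k$ make the inner hint loop run over `0..84`, which includes the attacker-controlled trailer region. Names are illustrative, not libcrux's:

```rust
fn main() {
    let omega = 80usize;
    let k = 4usize;
    let mut hint_serialized = vec![0u8; omega + k];
    hint_serialized[omega + k - 1] = (omega + k) as u8; // last count = 84 > ω
    let previous_true_hints_seen = 0usize; // rows 0..2 contributed nothing
    let current_true_hints_seen = hint_serialized[omega + k - 1] as usize;
    // Positions the inner loop reads that fall in the trailer region:
    let trailer_reads: Vec<usize> = (previous_true_hints_seen..current_true_hints_seen)
        .filter(|&j| j >= omega)
        .collect();
    assert_eq!(trailer_reads, vec![80, 81, 82, 83]);
}
```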

As with the verifier norm bound, this creates a divergence between libcrux and conforming implementations. A signature accepted by libcrux may be rejected elsewhere, or worse, the corrupted hint vector may cause verification to produce a different result than intended.

Provenance

The hint deserialization logic has been touched by multiple authors across several commits. The current loop structure was established by Jonas Schneider-Bensch in commit 83ffae727d on January 2, 2025, and further modified by Franziskus Kiefer (Cryspen’s CEO) in commit 404115bc08 on January 13, 2025. The early-return logic was later replaced with flag-and-break by Jonas Schneider-Bensch in commit d979b001f9 on November 17, 2025, as a workaround for a bug in Eurydice, with the comment: “We would like to use early returns below, but doing so triggers a bug in Eurydice.”

The workaround for Eurydice’s inability to handle early returns inside loops restructured the control flow in a way that obscured the missing final bound check. The original early-return code may or may not have had the same bug, but the flag-and-break rewrite made it harder to spot.

The Pattern Continues

These two findings bring the total to ten vulnerabilities across Cryspen’s cryptographic libraries in two weeks. The pattern we identified in our original post and paper remains consistent: bugs cluster in code that is unverified or lax-verified, while the marketing claims make no such distinction.

Both findings are in ML-DSA’s core verification path—the code that determines whether a digital signature is valid. Both are straightforward deviations from FIPS 204 that a careful comparison against the specification would catch. And both reside in modules that are admitted by default in the build system, meaning that even if someone wrote formal specifications for them, the proofs would not be checked.

The ML-DSA Makefile makes the verification boundary explicit. Twenty-seven modules are in VERIFIED_MODULES: SIMD arithmetic, NTT, encoding primitives, and type definitions. Everything else—including the top-level signing and verification logic, signature encoding, and key generation—is in ADMIT_MODULES. The verified modules are the leaves. The roots are unverified.

Cryspen’s response to our previous findings was to claim that “no bugs have been found in the verified code.” We showed that this was false. But even setting that aside, the framing itself is misleading. When the signing algorithm, the verification algorithm, and the signature encoding are all unverified, saying “no bugs in the verified code” is like saying “no bugs in the parts we checked”—while the parts you did not check contain specification violations that affect every signature operation.

Formal verification is supposed to provide assurance that goes beyond testing. But assurance requires that the verification boundary be communicated honestly. When a library is marketed as “formally verified” and “high assurance,” users reasonably expect that the core cryptographic operations—signing, verification, key generation—are within that boundary. In libcrux’s ML-DSA, they are not.
