Effective immediately, Symbolic Software is recommending that all clients design new cryptographic systems as post-quantum native. This post explains the reasoning behind that recommendation, subjects the evidence to the scrutiny it deserves, and raises questions about the epistemic standards the applied cryptography community should be holding itself to as the post-quantum transition accelerates.
The recommendation is, on balance, probably correct. But “probably correct” is not the standard this profession is supposed to work to. The gap between the two deserves honest examination.
The Google Quantum AI Result
On March 30, 2026, a team led by Google Quantum AI, with co-authors from UC Berkeley, the Ethereum Foundation, and Stanford, published a whitepaper presenting updated quantum resource estimates for solving the elliptic curve discrete logarithm problem on the secp256k1 curve (ECDLP-256), the curve underpinning the vast majority of cryptocurrency wallet signatures and a small share of TLS deployments. The headline claim is a nearly 20-fold reduction in physical qubit requirements compared to Litinski’s 2023 estimates, which were themselves already a substantial improvement over earlier work.
Two compiled circuits are presented, representing different points on the space-time tradeoff curve:
- Circuit A (low-qubit variant): $\leq 1{,}200$ logical qubits and $\leq 90$ million Toffoli gates.
- Circuit B (low-gate variant): $\leq 1{,}450$ logical qubits and $\leq 70$ million Toffoli gates.
Under assumptions about superconducting qubit hardware (physical error rates at or below $10^{-3}$, planar degree-4 connectivity), these circuits are projected to execute on fewer than 500,000 physical qubits in a matter of minutes. The resource cost curve for ECDLP-256 is now tracking the same downward trajectory that RSA-2048 followed over the preceding decade.
The result is significant regardless of one’s position on post-quantum timelines. It narrows the gap between theoretical quantum advantage and engineering feasibility in a way that demands serious attention.
A Novel Disclosure Model
The paper introduces something genuinely new to the quantum cryptanalysis literature: rather than publishing the full circuit constructions, the authors provide a zero-knowledge proof—specifically, a Groth16 SNARK executed within the SP1 zkVM—attesting that they possess a quantum kickmix circuit of the claimed size that correctly computes secp256k1 elliptic curve point addition on 9,024 Fiat-Shamir-sampled pseudo-random inputs. (A kickmix circuit, as defined in the paper, is composed of classical reversible logic gates, measurement-based uncomputation, and diagonal phasing gates for phase correction—efficiently simulable classically, but structured for direct quantum execution.)
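To make the attested check concrete, here is a minimal Python sketch of the Fiat-Shamir input sampling as the paper describes it (SHAKE256 seeded with the circuit’s own bytes, 9,024 inputs). The serialization format, domain separation, and field-element encoding are assumptions for illustration; only the overall structure is taken from the paper.

```python
import hashlib

# Order of the secp256k1 base field (a public curve constant).
P = 2**256 - 2**32 - 977

def sample_test_inputs(circuit_bytes: bytes, n: int = 9024) -> list[int]:
    """Fiat-Shamir sampling: challenges come from a SHAKE256 stream
    keyed by the circuit's own serialization, so the prover cannot
    choose the inputs its circuit is tested on."""
    stream = hashlib.shake_256(circuit_bytes).digest(32 * n)
    return [int.from_bytes(stream[32 * i: 32 * (i + 1)], "big") % P
            for i in range(n)]

inputs = sample_test_inputs(b"hypothetical circuit serialization")
assert len(inputs) == 9024            # one challenge per attested test
assert all(0 <= x < P for x in inputs)
```

The same seed always yields the same challenge set, which is what lets a verifier recompute the inputs without any interaction with the prover.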
The stated rationale is responsible disclosure. Publishing optimized quantum circuits for breaking deployed elliptic curve cryptography would hand a blueprint to any future adversary with access to a sufficiently large quantum computer. The ZK approach attempts to let the community verify the plausibility of the resource estimate without providing the means to execute the attack.
This is an intellectually interesting approach and, in principle, a reasonable application of zero-knowledge techniques to a disclosure problem that has no good precedent. The authors deserve credit for thinking carefully about the responsible communication of quantum cryptanalytic capability, particularly in a domain—cryptocurrency—where public confidence is itself a security-relevant property.
What the Proof Covers—and What It Doesn’t
The ZK proof must be evaluated precisely for what it attests to and what it leaves open.
What is proven: The authors possess a quantum kickmix circuit that correctly computes secp256k1 point addition, and that circuit fits within the claimed resource counts. This is verified against 9,024 pseudo-random inputs derived via the Fiat-Shamir heuristic (SHAKE256 seeded with the circuit’s own bytes). The paper provides an explicit soundness bound: if the circuit produced incorrect outputs on more than 1% of inputs, the probability of passing all 9,024 tests is at most $(1-0.01)^{9024} \approx 2^{-130}$, yielding 128 bits of cryptographic security for the claim that the circuit is approximately correct on at least 99% of inputs.
What is not proven:
The full Shor compilation is not attested. The ZK proof covers the elliptic curve point addition subroutine, which the authors identify as the computational bottleneck. The reduction from point-addition cost to full end-to-end Shor execution cost relies on published windowed arithmetic techniques (double-and-add chains, interleaved modular multiplication). These techniques are well-established in the literature and the reduction is reasonable—but it remains an inferential step. The proof does not verify the complete quantum circuit that would actually break a key. An error in the broader compilation—the modular inversion, the quantum Fourier transform, the classical pre- and post-processing—would not be caught by this proof.
The soundness guarantee covers approximate correctness, not exact correctness. The paper’s soundness analysis establishes that the circuit is correct on at least 99% of inputs with $2^{-130}$ failure probability—a strong bound. But Shor’s algorithm is tolerant of this: a superposition with 1% incorrect values causes the algorithm to fail at most 1% of the time. What is less explicitly analyzed is the end-to-end relationship between “the point addition subroutine is 99%-correct on random inputs” and “the full Shor circuit, composed of 28 windowed point additions interleaved with modular arithmetic and a quantum Fourier transform, produces correct discrete logarithms.” The subroutine correctness bound is rigorous; the compositional argument connecting it to the top-level ECDLP claim rests on standard but unstated assumptions about how errors in the subroutine propagate through the larger algorithm.
The physical qubit projection rests on hardware assumptions that do not yet hold. The 500,000 physical qubit figure assumes error rates at or below $10^{-3}$ sustained across planar degree-4 connectivity at scale. Current superconducting qubit hardware has demonstrated error rates in this range on small systems, but maintaining these rates at the 500,000-qubit scale involves engineering challenges—crosstalk, wiring density, cooling—that have not been demonstrated. The projection is plausible given current trajectories, but it is a projection, not a measurement.
The circuits themselves are not available for independent analysis. This is by design—the entire point of the ZK approach is to withhold the circuits. But it means the community cannot independently verify the circuit architecture, check for optimization opportunities that might further reduce costs, or identify errors in the compilation strategy. The claim is auditable only to the extent that the ZK proof permits.
None of these observations makes the paper wrong. Taken together, they make the paper a strong signal that is not yet conclusive evidence. This distinction matters.
The Risk Calculus Behind the Recommendation
Given the above, why is Symbolic Software recommending post-quantum native design?
The answer is the asymmetry of consequences.
Consider the two error modes. If the recommendation is premature—if the quantum timeline turns out to be significantly longer than the resource estimates suggest—the cost to clients is engineering overhead: larger key sizes, larger signatures, increased bandwidth, more complex protocol negotiations, and the implementation risks inherent in deploying newer cryptographic primitives. These costs are real and non-trivial, but they are recoverable. A system designed with ML-KEM and ML-DSA that turns out not to have needed them for another fifteen years is a system that over-invested in security. There are worse failure modes.
If the recommendation is late—if the estimates are roughly correct and classical elliptic curve cryptography becomes vulnerable within the operational lifetime of systems being designed today—the consequences are categorically different. Private keys are extractable. Signatures are forgeable. Store-now-decrypt-later attacks, in which encrypted traffic captured today is decrypted by a future quantum adversary, become retroactively devastating. These consequences are not recoverable. Data that has been exfiltrated cannot be un-exfiltrated.
This asymmetry is not new. It has been the standard argument for post-quantum preparedness for years. What the Google result changes is the urgency of the timeline, not the structure of the argument. The resource estimates have now improved to the point where “within the operational lifetime of systems being designed today” is no longer a worst-case assumption but a median-case projection.
The Harder Question: Epistemic Standards Under Pressure
The risk calculus is straightforward. The harder question is about the epistemic standards the community is applying as it navigates the post-quantum transition.
Applied cryptography has historically operated under a norm of open, reproducible, and fully auditable evidence. Protocols are published. Proofs are checked. Implementations are reviewed. When a vulnerability is claimed, the expectation is that the claim can be independently verified—not as a matter of trust, but as a matter of scientific practice. This norm exists for good reason: the history of the field is littered with examples of informal reasoning, indirect signals, and trusted-but-wrong claims leading to catastrophic outcomes. Snake oil gets sold on the basis of plausible-sounding arguments. Bad standards get adopted because the evidence supporting them was not subjected to sufficient scrutiny.
The Google team’s ZK disclosure model represents a deliberate departure from this norm. The departure is well-motivated—there is a legitimate argument that publishing optimized quantum attack circuits would cause more harm than benefit. But it creates a new epistemic regime in which the community is asked to make critical infrastructure decisions based on claims that are, by design, not fully auditable.
Several aspects of this regime deserve scrutiny:
The precedent effect. If “trust the ZK proof, don’t ask to see the circuits” becomes the accepted standard for quantum cryptanalysis disclosure, the community will be building security policy on a foundation where the underlying evidence is structurally unverifiable beyond what the prover chooses to reveal. This is an uncomfortable position regardless of the credibility of the prover—and Google Quantum AI is a very credible prover. The question is not whether this team can be trusted, but whether this disclosure model should become normative. Norms are not evaluated against the best-case actor; they are evaluated against the full range of actors who will invoke them.
The gap between verified and verifiable. A ZK proof provides cryptographic assurance that a specific computation was performed correctly. It does not provide the community with the ability to understand what was computed, to identify potential improvements, or to catch errors in the broader reasoning that connects the proven subroutine to the top-level claim. In traditional disclosure, a published circuit can be independently optimized, refuted, or extended. Under ZK disclosure, the community’s role is reduced from auditor to verifier of a predefined claim. These are different epistemic positions, and the latter is weaker.
The relationship between conservatism and rigor. The natural response to all of the above is: “Just be conservative. Assume the attack works and design against it. That’s Kerckhoffs’ principle applied to threat modeling.” This response is correct as far as it goes, and it is the basis of the recommendation being made here. But there is a meaningful difference between conservatism as a design principle—where uncertainty is resolved in favor of caution—and conservatism as an epistemic shortcut—where uncertainty is invoked to avoid the hard work of evaluating evidence rigorously. The former strengthens the field. The latter, if left unchecked, erodes the standards that make the field’s work meaningful.
The applied cryptography community has spent decades building a culture in which claims are expected to be substantiated, assumptions are expected to be explicit, and “it seems plausible” is not considered sufficient grounds for action. The post-quantum transition is now testing whether that culture holds under the pressure of urgency.
What Should Happen Next
Several concrete steps would strengthen the evidentiary basis for post-quantum migration decisions:
Independent reproduction of the resource estimates. The Google result is currently a single data point. A well-attested single data point, but a single data point nonetheless. Independent teams should attempt to reproduce or improve on the ECDLP-256 resource estimates using the publicly available literature on quantum arithmetic and Shor compilation. This is not an expression of doubt in the Google result—it is standard scientific practice. A global cryptographic migration should not be calibrated against a claim that only one team can verify.
End-to-end soundness analysis of the ZK disclosure model. The paper provides a clear soundness bound for the subroutine ($2^{-130}$ failure probability for 99%-correctness). What remains implicit is the compositional argument: how subroutine-level approximate correctness propagates through the full Shor circuit (28 windowed point additions, modular arithmetic, QFT) to yield a correct discrete logarithm. The authors also note a striking irony: the Groth16 SNARK itself relies on pairing-friendly elliptic curves vulnerable to the very quantum attacks the paper analyzes, which bounds the proof’s validity to the era before a cryptographically relevant quantum computer (CRQC) exists. A formal end-to-end analysis connecting the ZK-attested subroutine properties to the top-level ECDLP claim would strengthen the disclosure model considerably.
Continued hardware benchmarking against the projected requirements. The 500,000 physical qubit projection is credible given current trajectories, but it depends on engineering parameters that should be tracked publicly. As superconducting qubit systems scale, the community should maintain a clear, openly accessible mapping between demonstrated hardware capabilities and the requirements of the published quantum circuits. This mapping would provide a more rigorous basis for timeline estimates than trend extrapolation alone.
An honest conversation about epistemic norms under uncertainty. The applied cryptography community needs to explicitly discuss how it will handle situations where the evidence for a threat is strong but not fully transparent. The current post-quantum transition is the first instance of this problem at scale, but it will not be the last. Establishing clear norms now—about what constitutes sufficient evidence for different classes of decisions, about the obligations of provers who withhold details, and about the role of independent verification—will serve the field well beyond the current transition.
The Recommendation
Symbolic Software is advising all clients to design new cryptographic systems as post-quantum native. This means ML-KEM for key encapsulation, ML-DSA for digital signatures, and hybrid constructions where backward compatibility with classical systems is required during transition. The recommendation applies to systems currently in the design phase; guidance on migrating existing systems is available on a per-engagement basis.
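For illustration, a minimal sketch of the hybrid-construction principle: derive the session key from both a classical and a post-quantum shared secret, so that compromise of either component KEM alone does not expose the key. The byte strings below are placeholders (ML-KEM and X25519 are not in the Python standard library), and the combiner shown is illustrative; production systems should use a standardized hybrid construction rather than an ad hoc KDF.

```python
import hashlib

def combine_shared_secrets(ss_classical: bytes, ss_pq: bytes,
                           ct_classical: bytes, ct_pq: bytes) -> bytes:
    """Concatenation-style hybrid combiner: the derived key remains
    secure if EITHER component KEM is secure. Both ciphertexts are
    bound into the KDF input, with length prefixes to rule out
    ambiguous encodings."""
    h = hashlib.sha3_256()
    for part in (ss_classical, ss_pq, ct_classical, ct_pq):
        h.update(len(part).to_bytes(4, "big"))
        h.update(part)
    return h.digest()

# Placeholder values stand in for real X25519 / ML-KEM outputs.
key = combine_shared_secrets(b"classical ss", b"pq ss",
                             b"classical ct", b"pq ct")
assert len(key) == 32
```

The design choice that matters is that the classical secret never travels alone: an adversary recording traffic today must break both components, now or later, to recover the key.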
This recommendation is made with full awareness that the evidence base, while strong, is not conclusive in the way the profession traditionally demands. The risk asymmetry justifies action. The evidentiary gaps justify continued scrutiny. Both of these things can be true simultaneously.
The post-quantum transition should be pursued with urgency. It should not be pursued with credulity. The applied cryptography community’s greatest asset is its insistence on rigor, and that insistence should not be the first casualty of the transition it is meant to protect.