International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

05 November 2025

Elizabeth Crites, Alistair Stewart
ePrint Report
We disprove a range of conjectures for Reed-Solomon codes underpinning the security and efficiency of many modern proof systems, including SNARKs based on FRI (Ben-Sasson-Bentov-Horesh-Riabzev, ICALP’18), DEEP-FRI (Ben-Sasson-Goldberg-Kopparty-Saraf, ITCS’20), STIR (Arnon-Chiesa-Fenzi-Yogev, CRYPTO’24), and WHIR (Arnon-Chiesa-Fenzi-Yogev, preprint). Concretely, we prove that the following conjectures are false:

1. The correlated agreement up-to-capacity conjecture of Ben-Sasson-Carmon-Ishai-Kopparty-Saraf (J. ACM’23),
2. The mutual correlated agreement up-to-capacity conjecture of WHIR,
3. The list-decodability up-to-capacity conjecture of DEEP-FRI, which follows from existing results in the literature.

We then propose minimal modifications to these conjectures up to the list-decoding capacity bound.

Our second main contribution is a proof that correlated agreement with small enough error probability implies list decoding of Reed-Solomon codes. Thus, any future results on our correlated agreement conjectures with small enough error probability would imply similar results in classical list decoding. A reduction from proximity gaps to list-decodability was heretofore a natural open problem.
Paco Poilbout, Thomas Roche, Laurent Imbert
ePrint Report
Post-quantum key encapsulation mechanisms based on the re-encryption framework of Fujisaki and Okamoto have proved very sensitive to Plaintext Checking Oracle (PCO) attacks. The first theoretical works on PCO attacks were rapidly followed by practical attacks on real implementations, notably on NIST-standardized ML-KEM. The actual realization of a PCO relies on side-channel leakages that are inherently noisy; even more so if the implementation embeds side-channel countermeasures. In this paper we tackle the often overlooked complications caused by highly noisy PCOs. We demonstrate that the impact of wrong oracle answers can be very efficiently reduced with the use of the so-called Sequential Probability Ratio Test (SPRT). This test can be seen as an elegant and natural early-abort strategy on top of the commonly used approaches based on majority voting or the likelihood ratio test. As far as we know, this is the first use of SPRT in the context of side-channel attacks. We show that it can reduce the attack complexity by a factor of up to 3 compared to the traditional approaches. By establishing new comparisons with recently published noisy PCO attacks, we emphasize that SPRT should be considered the new baseline for all future works in this line of research.
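The early-abort idea behind the SPRT can be sketched in a few lines. In this hedged sketch, a noisy binary oracle is queried repeatedly; the agreement probabilities and error rates below are illustrative placeholders, not values from the paper:

```python
import math

def sprt(samples, p_correct=0.9, p_wrong=0.5, alpha=0.01, beta=0.01):
    """Sequential probability ratio test for a noisy binary oracle.

    Decides between H1 (true answer is 1; each sample equals 1 with
    probability p_correct) and H0 (true answer is 0; each sample equals
    1 with probability p_wrong), stopping as soon as the accumulated
    evidence is conclusive. Parameter values are illustrative only.
    """
    # Wald's decision thresholds for target error rates alpha and beta.
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0  # running log-likelihood ratio
    for n, s in enumerate(samples, start=1):
        if s == 1:
            llr += math.log(p_correct / p_wrong)
        else:
            llr += math.log((1 - p_correct) / (1 - p_wrong))
        if llr >= upper:
            return 1, n   # accept H1 after only n samples
        if llr <= lower:
            return 0, n   # accept H0 after only n samples
    return None, len(samples)  # undecided: more samples needed
```

Unlike fixed-size majority voting, the test typically terminates after far fewer oracle queries when the evidence is one-sided, which is the source of the complexity reduction described above.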
Mengce Zheng, Yansong Feng, Abderrahmane Nitaj, Yanbin Pan
ePrint Report
We investigate cryptanalytic attacks on non-linear polynomial congruential generators (PCGs), a class of number-theoretic pseudorandom number generators. A PCG operates by iterating an algebraic map $F(x) \bmod{p}$ on a secret initial seed $v_0$ using fixed parameters, and outputs a truncated portion of each state (for example, the most significant bits). We propose new and improved lattice-based attacks that exploit systems of modular polynomial equations derived from PCGs.

Specifically, we analyze three common non-linear PCGs: the Quadratic Congruential Generator (QCG), the Power Generator, and the Pollard Generator. We establish asymptotic bounds for predicting these PCGs, assuming the adversary has access to an infinitely long output sequence. To derive these bounds, we develop new symbolic techniques that build on the automated Coppersmith's method framework recently developed by Feng et al. (Crypto '25). Our approach is more flexible than previous methods and is particularly well-suited for deriving symbolic bounds. Applying our techniques, we obtain the best-known analytical results for asymptotic attacks on these PCGs:

- We present, for the first time, asymptotic attack bounds on QCGs with partially known coefficients.
- We extend and improve the asymptotic attack of Herrmann and May (Asiacrypt '09) on Power Generators.
- We improve the asymptotic attack of Bauer et al. (PKC '12) on Pollard Generators and confirm their conjecture.

We validate our theoretical findings with numerical experiments that demonstrate the practicality and efficacy of our attacks.
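As a toy illustration of the PCG model analyzed above, the sketch below iterates a quadratic map F(x) = ax² + bx + c mod p on a secret seed and reveals only the most significant bits of each state. All parameters are small illustrative values, not a cryptographic instantiation and not taken from the paper:

```python
def qcg_outputs(p, a, b, c, seed, n, truncate_bits):
    """Return n truncated outputs of a quadratic congruential generator.

    Each state v is updated as v <- a*v^2 + b*v + c (mod p); the
    attacker observes only the state with its low `truncate_bits`
    bits dropped, mirroring the truncated-PCG model in the abstract.
    """
    outs = []
    v = seed
    for _ in range(n):
        v = (a * v * v + b * v + c) % p
        outs.append(v >> truncate_bits)  # only the MSBs are output
    return outs
```

The lattice attacks in the paper recover the hidden low-order bits (and unknown coefficients) from such truncated sequences; this sketch only reproduces the generator side of the setting.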
Andrei Alexei, Marios Omar Choudary, Vlad-Florin Dragoi
ePrint Report
In this article, we provide the first side-channel attack on the Berlekamp-Massey (BM) algorithm, which is the decoder used in the decryption process of the Classic McEliece KEM. We conduct a chosen-plaintext key recovery attack that exploits the power consumption of the BM algorithm, which is highly dependent on the secret Goppa support elements. We exploit the relation between plaintexts of small Hamming weight, secret elements in the Goppa support, and power traces using an efficient Template Attack. Our method completely recovers the secret Goppa support for the first parameter set of the Classic McEliece KEM using a single attack trace per secret coefficient. The entire support can be recovered in less than 7 seconds on a standard computer. Our experiments are performed using the ChipWhisperer-Lite board platform with the ARM Cortex-M4 microcontroller.
Preshtha Garg, Sanjam Garg, Guru-Vamsi Policharla, Bhaskar Roberts
ePrint Report
Anonymous credentials allow users to authenticate themselves in an anonymous and unlinkable fashion. By the end of 2026, EU member states will be required to issue digital identity wallets to their residents that enable authentication in this manner. In decentralized settings, we desire schemes with additional properties: schemes that allow multiple authorities to issue credentials, hide the identities of the issuers, and allow verifiers to dynamically choose their policies.

We present the first construction of issuer-hiding anonymous credentials with constant-sized showing, threshold issuance, and no requirement of interactive setup. Silent (non-interactive) setup is crucial as the various issuers may be slow-moving, independent organizations that are unwilling to coordinate in a distributed key generation protocol beforehand. Our construction also supports dynamic verifier policies. This is useful if different verifiers disagree about which issuers they trust or what threshold they accept.

At the heart of our scheme, we construct threshold structure-preserving signatures with silent setup and prove security in the generic group model. We also provide a NIZK for anonymous showing that is more efficient than a standard application of Groth-Sahai proofs. Finally, we provide an implementation of our scheme in Rust, along with concrete efficiency metrics.
Justin Thaler
ePrint Report
SNARKs work by having a prover commit to a witness and then prove that the committed witness is valid. The prover’s work is dominated by two tasks: (i) committing to data and (ii) proving that the committed data is well-formed. The central thesis of this survey is that fast SNARKs minimize both costs by using the sum-check protocol.

But not all uses of sum-check are equally effective. The fastest SNARKs invoke sum-check in highly sophisticated ways, exploiting repeated structure in computation to aggressively minimize commitment costs and prover work. I survey the key ideas that enable this: batch evaluation arguments, read/write memory checking, virtual polynomials, sparse sum-checks, and small-value preservation. These techniques unlock the full potential of the sum-check protocol as a foundation for fast SNARK proving.
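As a minimal illustration of the protocol at the heart of this survey, the sketch below runs sum-check for the claim that a multilinear polynomial (given by its evaluations on the Boolean hypercube) sums to a stated value. It uses an honest prover and is purely pedagogical; it implements none of the optimizations the survey covers:

```python
import random

P = 2**61 - 1  # an illustrative prime field, not mandated by any system

def sumcheck(evals):
    """Sum-check for the multilinear extension of `evals` on {0,1}^n.

    `evals` lists the polynomial's values on the hypercube in
    lexicographic order (first variable = most significant index bit).
    Returns True iff the verifier accepts the honest prover's messages.
    """
    n = len(evals).bit_length() - 1
    claim = sum(evals) % P          # the claimed total sum
    cur = list(evals)
    for _ in range(n):
        half = len(cur) // 2
        # Prover sends the degree-1 round polynomial g via g(0), g(1):
        g0 = sum(cur[:half]) % P    # partial sum with this variable = 0
        g1 = sum(cur[half:]) % P    # partial sum with this variable = 1
        # Verifier checks consistency with the running claim.
        if (g0 + g1) % P != claim:
            return False
        r = random.randrange(P)     # verifier's random challenge
        # New claim is g(r); g is linear, so g(r) = g0 + r*(g1 - g0).
        claim = (g0 + r * (g1 - g0)) % P
        # Both parties fold the table, fixing this variable to r.
        cur = [(cur[i] + r * (cur[half + i] - cur[i])) % P
               for i in range(half)]
    # Final check: one evaluation of the multilinear extension.
    return claim == cur[0]
```

The techniques the survey names (batching, virtual polynomials, sparse sum-checks, small-value preservation) are all refinements of this basic loop aimed at shrinking the prover's per-round work and the data it must commit to.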
Antoine Bak, Augustin Bariant, Aurélien Boeuf, Pierre Briaud, Morten Øygarden, Atharva Phanse
ePrint Report
The security of many arithmetization-oriented (AO) hash functions depends on the hardness of constrained-input constrained-output (CICO) problems. These problems have received significant attention from the cryptographic community in recent years, with notable advances in Gröbner basis and resultant-based attacks, yet progress has mainly been limited to CICO problems restricted to a single output. In this work, we build on the "FreeLunch method" of Bariant et al. (Crypto 2024) that constructs Gröbner bases "for free" in this particular case, and extend it to CICO problems with multiple outputs. More precisely, we consider tools for solving weighted polynomial systems, and show how to apply them in the AO setting. This results in new polynomial modelings, more efficient methods for computing the initial Gröbner basis under certain assumptions, and improved complexity estimates for the change-of-ordering step, derived from tighter upper bounds on the ideal degree. We apply our framework to Poseidon, Neptune and XHash8, where our assumptions are experimentally verified, and theory matches practice. For Griffin and ArionHash our assumptions are not verified, leaving us with improved, yet loose, upper bounds on the ideal degree. While our results do not threaten the security of any full-round hash function, they provide new insights into the security of these primitives under more general CICO problems.
Georg Fuchsbauer, Pranav Garimidi, Guru-Vamsi Policharla, Max Resnick, Ertem Nusret Tas
ePrint Report
Cryptographic commitments allow a party to commit to a value such that it is computationally infeasible to later open that commitment to a different value. Although they are ubiquitous, standard commitment schemes allow the committer to outsource both the generation of the commitment and the openings to a third party. This is benign for most use cases; however, when commitments serve as cryptographic attestations of work such as relaying blocks or storing data, participants can outsource the task and still claim credit, undermining the intended economic properties of the protocol. This work initiates the study of non-delegatable commitments, a new primitive where forming a commitment requires possession of a private key, and delegating the commitment process necessarily leaks that key. We formally define the primitive and provide a generic construction that is secure in the random oracle model given a polynomial commitment scheme. Additionally, we show how this primitive can be applied to solve a variety of mechanism design problems.
Bishwajit Chakraborty, Chandranan Dhar
ePrint Report
The sponge construction underpins many modern symmetric primitives, enabling efficient hashing and authenticated encryption. While full-state absorption is known to be secure in keyed sponges, the security of full-state squeezing has remained unclear. Recently, Lefevre and Marhuenda-Beltr\'an introduced \(\textsf{MacaKey}\), claiming provable security even when both phases operate over the full state. In this work, we revisit this claim and show that \(\textsf{MacaKey}\) is insecure. A simple four-query distinguishing attack violates its claimed bound, exploiting the exposure of the full internal state and the resulting loss of secrecy in the capacity portion during squeezing. We then propose two simple yet effective fixes that restore security with negligible overhead. The first, \textsf{pMacaKey}, introduces an additional permutation between the absorption and squeezing phases to re-randomize the internal state. The second, \textsf{KeyMacaKey}, achieves a similar effect by incorporating a keyed finalization step without requiring an extra permutation call. We formally prove the security of \textsf{pMacaKey} in the random permutation model and conjecture that \textsf{KeyMacaKey} achieves comparable bounds. Both variants retain the full-state efficiency of \textsf{MacaKey} while ensuring strong, provable security guarantees.
Behzad Abdolmaleki, Matteo Campanelli, Quang Dao, Hamidreza Khoshakhlagh
ePrint Report
With proof-carrying data (PCD), nodes in a distributed computation can certify its correct execution obtaining proofs with low-verification overhead (relative to the complexity of the computation). As PCD systems—and their special case, incrementally verifiable computation (IVC)—see rapid adoption in practice, understanding their robustness against malleability attacks becomes crucial. In particular, it remains unexplored whether recursive proof systems satisfy simulation extractability (SIM-EXT)—a property ensuring non-malleability and composability. This work provides the first systematic study of simulation extractability for PCD. We begin by observing that the standard SIM-EXT notion for non-recursive zkSNARKs does not directly extend to PCD/IVC settings. To address this, we propose a new, tailored definition of SIM-EXT for proof-carrying data that accounts for their idiosyncratic features. Using this framework, we prove two general results: (1) that a simulation-extractable SNARK implies a simulation-extractable PCD when used recursively, and (2) that even lighter PCD constructions—built from a (not necessarily succinct) argument of knowledge (NARK) combined with a split-accumulation scheme—achieve SIM-EXT of PCD by requiring SIM-EXT only from the underlying NARK. Our results show that many modern PCD systems are already simulation-extractable by design.
Caroline Fontaine, Marc Renard, Renaud Sirdey, Oana Stan
ePrint Report
FuncCPA is a recent security notion in which the CPA game is extended by a functional re-encryption oracle in order to model setups in which a server performing FHE computations is allowed to interactively delegate part of the computation back to the client. In this paper, we study funcCPA-style variants of several CCA security notions, including CCA1 and the more recent vCCA security. Contrary to the CPA case, where a strict separation holds between CPA and funcCPA, we show that these new variants are equivalent to their respective originating CCA security notions. Interestingly, funcCPA-style security notions also model setups where, rather than delegating part of the encrypted-domain computation all the way back to the client, the server has the ability to perform this delegation towards an honest or semi-honest client proxy it hosts, such as a secure enclave. We then provide a number of blueprints for achieving both FHE-like capabilities and advanced CCA security properties, which may then meaningfully be implemented by leveraging the combination of a partially homomorphic scheme and a semi-honest non-colluding enclave hosted within the server performing the encrypted-domain calculations itself.
Charles Bouillaguet, Thibauld Feneuil, Jules Maire, Matthieu Rivain, Julia Sauvage, Damien Vergnaud
ePrint Report
We revisit multivariate commitments based on the hardness of solving systems of multivariate quadratic (MQ) equations over finite fields. We analyze a simple construction where a message µ is committed as c = (µ + F(r), G(r)), with F and G random quadratic maps. We prove that the scheme is computationally hiding assuming the intractability of the MQ problem. Its binding property reduces to solving random bilinear systems. We prove that this problem is NP-complete and study the performance of existing algebraic and hybrid attacks. We show that this commitment is well-suited for integration with zero-knowledge proofs. Using the Threshold-Computation-in-the-Head framework, we construct efficient zero-knowledge arguments of knowledge for the opening and arguments for relations on committed values. We apply this to construct an efficient blind signature scheme à la Fischlin, and we demonstrate that our techniques yield a fully multivariate construction of signatures with efficient protocols, enabling practical post-quantum anonymous credentials.
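The committing step c = (µ + F(r), G(r)) described above is easy to sketch. In this toy illustration, F and G are random quadratic maps over a small prime field; the field size and dimensions are placeholder values far below anything secure, and the representation of the maps is an assumption for readability:

```python
import random

Q = 31  # toy field size; real parameters would be far larger

def random_quadratic_map(n_vars, m_out, rng):
    """Sample a random quadratic map F: F_Q^n -> F_Q^m, given as m
    quadratic forms with random quadratic and linear coefficients."""
    return [
        {
            "quad": [[rng.randrange(Q) for _ in range(n_vars)]
                     for _ in range(n_vars)],
            "lin": [rng.randrange(Q) for _ in range(n_vars)],
        }
        for _ in range(m_out)
    ]

def eval_map(F, r):
    """Evaluate each quadratic form of F at the vector r, mod Q."""
    out = []
    for f in F:
        acc = sum(f["lin"][i] * r[i] for i in range(len(r)))
        for i in range(len(r)):
            for j in range(len(r)):
                acc += f["quad"][i][j] * r[i] * r[j]
        out.append(acc % Q)
    return out

def commit(F, G, mu, r):
    """Commit to message vector mu with randomness r:
    c = (mu + F(r), G(r)), componentwise mod Q."""
    fr = eval_map(F, r)
    return ([(m + x) % Q for m, x in zip(mu, fr)], eval_map(G, r))

def open_check(F, G, c, mu, r):
    """Verify an opening (mu, r) against commitment c."""
    return commit(F, G, mu, r) == c
```

Hiding rests on F(r) masking µ while G(r) pins down r only through a hard MQ instance; breaking binding amounts to finding two openings, which reduces to the random bilinear systems studied in the paper.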
Showkot Hossain, Wenyi Tang, Changhao Chenli, Haijian Sun, WenZhan Song, Seokki Lee, Mic Bowman, Taeho Jung
ePrint Report
Healthcare data sharing is fundamental for advancing medical research and enhancing patient care, yet it faces significant challenges in privacy, data ownership, and interoperability due to fragmented data silos across institutions and strict regulations (e.g., GDPR, HIPAA). To bridge these gaps, we propose MtDB, a novel decentralized database architecture addressing secure data sharing in multi-tenant database ecosystems. MtDB employs blockchain for metadata coordination and sharing, IPFS for distributed data addressing, a universal SQL query interface for data access, and Intel SGX for integrity-protected query execution with enforced access control. We provide an open-source implementation demonstrating MtDB’s capabilities for secure, patient-centric healthcare data sharing while preserving ownership and enforcing policies. Experimental results show MtDB achieves 35 milliseconds query latency for indexed queries over 400M multi-tenant medical records while maintaining cryptographic security guarantees, with only 1.2–1.3× performance overhead compared to non-secure baselines.
Thomas Haines, Jarrod Rose
ePrint Report
Electronic voting systems claiming to provide verifiability are seeing increased adoption. Previous work on analyzing these systems has focused on vulnerabilities arising in the specification and implementation of the core protocol and primitives; once the system has been analyzed for these vulnerabilities and appropriate fixes deployed, one might have hoped that the systems would provide the claimed security.

In this paper, we discuss two categories of vulnerabilities which still seem prevalent in otherwise carefully designed, implemented, and audited systems. We present ten examples of vulnerabilities or weaknesses in these categories drawn from the SwissPost and Belenios systems. Our discussion covers why vulnerabilities in these categories may be escaping detection and what can be done about it; all the solutions we considered are unsatisfactory, and our aim is to highlight this area as an important open problem.
Rex Fernando, Guru-Vamsi Policharla, Andrei Tonkikh, Zhuolun Xiang
ePrint Report
MEV (Maximal Extractable Value) remains one of the most corrosive forces in blockchain systems, enabling frontrunning, sandwiching, and other manipulations that directly exploit users. The core culprit is the transparent mempool: validators see transactions before they are ordered. Encrypted mempools are a promising solution: they hide transaction contents until after ordering.

We present the first integration of encrypted mempools with a high-performance BFT protocol. Our system uses a cryptographic scheme based on recent work on batched threshold encryption, and improves on the cryptographic state of the art in this line of work. The system ensures confidentiality of transactions during ordering while sustaining performance on par with leading BFT designs. Specifically, the proposal-to-execution latency of our system yields only a 27 ms overhead (14%) compared to the baseline. The result is a practical consensus layer that simultaneously defends against MEV and delivers the throughput and latency needed for real deployments. This work closes the gap between cryptographic defenses and production-ready consensus, showing that robust MEV protection and high performance can, in fact, coexist.

03 November 2025

Sean Bowe, Ian Miers
ePrint Report
Anonymous payment protocols based on Zerocash (IEEE S&P 2014) have seen widespread deployment in decentralized cryptocurrencies, as have derivative protocols for private smart contracts. Despite their strong privacy properties, these protocols have a fundamental scaling limitation in that they require every consensus participant to maintain a perpetually growing set of nullifiers---unlinkable revocation tokens used to detect double-spending---which must be stored, queried and updated by all validating nodes. This set grows linearly in the number of historic transactions and cannot be discarded without the undesirable effect of destroying unspent funds.

In this short note, we introduce a new technique that enables continual, permanent pruning of nullifiers by validators, without imposing significant computation, bandwidth or latency overhead for users, and without compromising privacy. Our main contribution is a general model we call oblivious synchronization whereby users ask untrusted remote services (which ingest and process the public ledger) to create succinct proofs that coins are unspent and otherwise valid. Crucially, these services are fully oblivious to their clients' transaction details and cannot link their clients to any transactions that ultimately appear on the public ledger. Moreover, these services only keep ephemeral state per client and users can freely switch between services without incurring redundant computational effort.
Eden Florentz-Konopnicki, Ron D. Rothblum
ePrint Report
Zero-knowledge proofs allow one to encode a computation so that it can be verified without revealing any additional information beyond its correctness. In this work we focus on proofs that are statistically sound, meaning that even an unbounded prover cannot make the verifier accept a false statement, except with negligible probability, and computationally zero-knowledge. The seminal result of Goldreich, Micali and Wigderson (CRYPTO 1986) shows that, assuming the existence of a one-way function, such zero-knowledge proofs exist for all languages in NP.

Some of the early protocols, such as that of GMW, have a large polynomial overhead in communication compared to the original NP witness. A line of works has shown that in many cases this communication overhead can be avoided. Most recently, Athamnah et al. (TCC 2024) constructed zero-knowledge proofs for all bounded-depth NP relations, where the communication complexity exceeds that of the original NP witness by only an additive term. The main caveat of their result is that the protocol makes non-black-box use of the one-way function.

In this work we show that such succinct zero-knowledge proofs exist for the same class of NP relations, where the protocol makes only black-box use of a one-way function. Our protocol achieves a negligible soundness error, in contrast to recent works which can achieve, at best, an inverse polynomial error.
Sven Bauer, Fabrizio De Santis
ePrint Report
Embedded devices commonly rely on digital signatures to ensure both integrity and authentication. For example, digital signatures are typically verified during the boot process or firmware updates to ensure the integrity of a system. They are also used to ensure the authenticity of a communication party in secure protocols. Fault injection can be used to tamper with a device in order to cause malfunctioning during cryptographic computations. For example, fault injections can be used to disturb digital signing operations. With the right type of fault, an attacker can compute private keys from faulted signatures. However, fault injections can also be used during verification to get maliciously crafted digital signatures accepted, with catastrophic consequences for the security of an embedded device. In this paper, we introduce new non-obvious fault injection attacks on the verification routines of the Dilithium and Falcon signature schemes, which allow an attacker to get signatures for arbitrary messages accepted by fault injection. We demonstrate the feasibility of our attacks by simulations using an ARM Cortex-M4 and the pqm4 library as a target of evaluation and pinpoint vulnerable instructions. Finally, we propose and discuss possible countermeasures against these attacks.
Ruben Niederhagen, Hoang Nguyen Hien Pham
ePrint Report
This work improves upon the instruction set extension proposed in the paper "Towards ML-KEM and ML-DSA on OpenTitan", in short OTBNTW, for OpenTitan’s big number coprocessor OTBN. OTBNTW introduces a dedicated vector instruction for prime-field Montgomery multiplication, with a high multi-cycle latency and a relatively low utilization of the underlying integer multiplication unit. The design targets post-quantum cryptographic schemes ML-KEM and ML-DSA, which rely on 12-bit and 23-bit prime field arithmetic, respectively. We improve the efficiency of the Montgomery multiplication by fully exploiting existing integer multiplication resources and move modular multiplication from hardware back to software by providing more powerful and versatile integer-multiplication vector instructions. This enables us not only to reduce the overall computational overhead through lazy reduction in software but also to improve performance in other functions beyond finite-field arithmetic. We provide two variants of our instruction set extension, each offering different trade-offs between resource usage and performance. For ML-KEM and ML-DSA, we achieve a speedup of up to 17% in cycle count, with an ASIC area increase of up to 6% and an FPGA resource usage increase of up to 4% more LUT, 20% more CARRY4, 1% more FF, and the same number of DSP compared to OTBNTW. Overall, we significantly reduce the ASIC time-area product, if the designs are clocked at their individual maximum frequency, and at least match that of OTBNTW, if the designs are clocked at the same frequency.
Beatrice Biasioli, Chiara Marcolla, Nadir Murru, Matilda Urani
ePrint Report
The Brakerski-Gentry-Vaikuntanathan (BGV) scheme is one of the most significant fully homomorphic encryption (FHE) schemes. It belongs to a class of FHE schemes whose security is based on the presumed intractability of the Learning with Errors (LWE) problem and its ring variant (RLWE). Such schemes deal with a quantity, called noise, which increases each time a homomorphic operation is performed. Specifically, in order for the scheme to work properly, it is essential that the noise remains below a certain threshold throughout the process. For BGV, this threshold strictly depends on the ciphertext modulus, which is one of the initial parameters whose selection heavily affects both the efficiency and security of the scheme. For an optimal parameter choice, it is crucial to accurately estimate the noise growth, particularly that arising from multiplication, which is the most complex operation. In this work, we propose a novel average-case approach that precisely models noise evolution and guides the selection of initial parameters, improving efficiency while ensuring security. The key innovation of our method lies in accounting for the dependencies among ciphertext errors generated with the same key, and in providing general guidelines for accurate parameter selection that are library-independent.