IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
04 August 2025
Divesh Aggarwal, Pranjal Dutta, Saswata Mukherjee, Satyajeet Nagargoje, Maciej Obremski
Randomness is a fundamental requirement in cryptographic systems, enabling secure encryption, commitments, and zero-knowledge proofs. However, real-world randomness sources often suffer from weaknesses that adversaries can exploit, leading to significant security vulnerabilities. While deterministic randomness extraction from a single min-entropy source is impossible, two-source extractors provide a robust solution by generating nearly uniform randomness from two independent weak sources. Moreover, cryptographic systems must also be resilient to leakage and tampering attacks, necessitating the development of non-malleable two-source extractors.
In this work, we construct a two-source non-malleable extractor in the Common Reference String (CRS) model, where a random low-degree polynomial is sampled once and made accessible to the independent random sources, the distinguisher, and the tamperer. Our extractor requires only linear min-entropy in both sources and does not rely on strong computational assumptions, in contrast to prior constructions requiring assumptions such as the sub-exponential hardness of the Decisional Diffie-Hellman (DDH) problem. Notably, our construction builds upon and relies on the recent breakthrough proof of the polynomial Freiman-Ruzsa conjecture. A connection between the Freiman-Ruzsa conjecture and two-source extractors was considered in prior work [ZBS11], [AGMR24], but those constructions did not achieve non-malleability.
Our results advance the state of non-malleable cryptographic primitives, with applications in secure storage, leakage-resilient cryptography, and privacy amplification. By eliminating the need for strong computational hardness assumptions, our techniques provide a more foundational and widely applicable method for randomness extraction.
We also show that the requirements on the CRS for our application are so mild that the CRS can be sampled via two-party computation even when one of the parties is malicious (a setting in which establishing unbiased coins is impossible).
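For context, one common way to state the guarantee of a two-source non-malleable extractor in the CRS model (given here as an informal sketch, not the paper's exact definition) is:

$$\big(\mathsf{nmExt}(\mathsf{crs}, X, Y),\ \mathsf{nmExt}(\mathsf{crs}, f(X), g(Y))\big) \approx_{\varepsilon} \big(U_m,\ \mathsf{nmExt}(\mathsf{crs}, f(X), g(Y))\big),$$

for all independent sources $X$ and $Y$ with sufficient (here, linear) min-entropy and all tampering functions $f, g$ that may depend on the CRS, where at least one of $f, g$ has no fixed points.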
Sebastian Angel, Sofía Celi, Elizabeth Margolin, Pratyush Mishra, Martin Sander, Jess Woods
We introduce Coral, a system for proving in zero-knowledge that a committed byte stream corresponds to a structured object in accordance with a Context Free Grammar. Once a prover establishes the validity of the parsed object with Coral, they can selectively prove facts about the object, such as fields in Web API responses or in JSON Web Tokens, to third parties or blockchains. Coral reduces the problem of correct parsing to a few simple checks over a left-child right-sibling tree and introduces a novel segmented memory abstraction that unifies and extends prior constructions for RAM in zkSNARKs. Our implementation of Coral runs on a standard laptop and non-interactively proves the parsing of real Web responses (JSON) and files (TOML and C) in seconds. The resulting proofs are small and cheap to verify.
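The left-child right-sibling representation mentioned above is a standard way to encode an arbitrary parse tree with exactly two links per node, regardless of fan-out. The sketch below is a plain-Python illustration of that encoding; the field and function names are hypothetical and it is not taken from Coral's implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LCRSNode:
    """Left-child right-sibling encoding: each node stores at most two links,
    however many children it has in the original parse tree."""
    symbol: str                                  # grammar symbol or terminal
    left_child: Optional["LCRSNode"] = None      # first child in the parse tree
    right_sibling: Optional["LCRSNode"] = None   # next sibling in the parse tree

def from_children(symbol: str, children: List[LCRSNode]) -> LCRSNode:
    """Build an LCRS node from an ordinary (symbol, children) parse-tree node."""
    node = LCRSNode(symbol)
    prev = None
    for child in children:
        if prev is None:
            node.left_child = child       # first child hangs off left_child
        else:
            prev.right_sibling = child    # later children chain via right_sibling
        prev = child
    return node

# Example: S -> A B C becomes S with left_child A, then A -> B -> C via siblings.
tree = from_children("S", [LCRSNode("A"), LCRSNode("B"), LCRSNode("C")])
assert tree.left_child.symbol == "A"
assert tree.left_child.right_sibling.symbol == "B"
```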
Jan Bormet, Arka Rai Choudhuri, Sebastian Faust, Sanjam Garg, Hussien Othman, Guru-Vamsi Policharla, Ziyan Qu, Mingyuan Wang
Threshold encrypted mempools protect the privacy of transactions up until the point their inclusion on chain is confirmed. They are a promising approach to protection against front-running attacks on decentralized blockchains.
Recent works have introduced two key properties that an encryption scheme must satisfy in order to scale to large-scale decentralized blockchains such as Ethereum: Silent Setup [Garg-Kolonelos-Policharla-Wang, CRYPTO'24] demands that a threshold encryption scheme does not require any interaction during the setup phase and relies only on the existence of a Public Key Infrastructure. Batched Decryption [Choudhuri-Garg-Piet-Policharla, USENIX'24] demands that an entire block containing $B$ encrypted transactions can be decrypted using communication that is independent of (or sublinear in) $B$, without compromising the privacy of transactions that have not yet been confirmed.
While existing constructions achieve either Silent Setup or Batched Decryption independently, a truly decentralized and scalable encrypted mempool requires both properties to be satisfied simultaneously. In this work, we present the first ``Batched Threshold Encryption scheme with Silent Setup'' built using bilinear pairings. We provide formal definitions for the primitive, and prove security in the Generic Group Model. We provide several optimizations and implement our scheme to evaluate its performance. Our experiments demonstrate its efficiency for deployment in blockchain systems.
Nick Aquina, Simon Rommel, Idelfonso Tafur Monroy
Cascader has been introduced as a new key exchange protocol based on an iterative multiplicative recurrence. This short note presents a practical shared-key recovery attack on the Cascader key exchange protocol. It also shows that Cascader, used as a hash function, is not collision resistant, presents a new upper bound on the output space of Cascader, and shows that a Cascader-based KDF is not secure against an Adaptive Chosen Public Inputs Attack (CPM).
Joshua Limbrey, Andrew Mendelsohn
In Submission 2025/1391 to the IACR Cryptology ePrint Archive, the Inverse Discrete Logarithm Problem (IDLP) is introduced and used to build a key exchange protocol and a KEM. The author claims both classical and post-quantum security for IDLP and therefore for the proposed protocols. It is the purpose of this note to give an efficient quantum algorithm for IDLP, based on the algorithm of Shor. We give an implementation of our algorithm, replacing the use of Shor's algorithm with an oracle.
Juliane Krämer, Patrick Struck, Maximiliane Weishäupl
In this note we analyze the various binding properties of combiners for KEMs. We show that several properties follow easily for the most general combiner—assuming a collision-resistant hash function—while more performance-oriented versions require the respective property from one or both of the underlying KEMs. Other properties are not obtained as directly and require either more properties of one underlying KEM or the respective property from both KEMs.
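As a point of reference, the "most general" style of KEM combiner referred to above is commonly instantiated by hashing both shared secrets together with both ciphertexts. The sketch below shows that pattern in Python; the interface, input ordering, and hash choice are illustrative assumptions and not the note's definition.

```python
import hashlib

def combine_kem_keys(k1: bytes, c1: bytes, k2: bytes, c2: bytes) -> bytes:
    """Hash-based KEM combiner: derive one shared key from two KEM instances.
    Binding-style properties of the combined KEM then hinge on the collision
    resistance of the hash (cheaper variants instead lean on the underlying KEMs)."""
    h = hashlib.sha3_256()
    for part in (k1, c1, k2, c2):
        # Length-prefix each input so distinct tuples cannot collide via concatenation.
        h.update(len(part).to_bytes(8, "big"))
        h.update(part)
    return h.digest()
```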
Seyoung Yoon, Gyeongju Song, Kyungbae Jang, Sangmin Cha, Hwajeong Seo
As quantum computing technology rapidly advances, threats to existing symmetric-key and public-key cryptosystems are becoming increasingly real. In this study, we implement a SHA-1 quantum circuit that operates efficiently in a quantum computing environment. We optimize the quantum circuit, focusing on minimizing total circuit depth, a key performance indicator of quantum algorithms. The SHA-1 quantum circuit implementation used 985 qubits, resulting in a measured circuit depth of 9,026. Furthermore, by integrating this optimized circuit with the Grover algorithm, we establish the foundation for an efficient quantum attack on the SHA-1 algorithm. This research is significant not only because it presents a resource-efficient SHA-1 quantum implementation but also because it enables accelerated attacks in a quantum computing environment.
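To put these figures in perspective, a rough and purely illustrative estimate of a Grover-based preimage search against SHA-1's 160-bit output, using the reported per-evaluation depth and ignoring the diffusion operator and oracle uncomputation, is:

$$\text{iterations} \approx \frac{\pi}{4}\,\sqrt{2^{160}} = \frac{\pi}{4}\cdot 2^{80}, \qquad \text{total depth} \approx 9{,}026 \cdot \frac{\pi}{4}\cdot 2^{80} \approx 2^{92.8}.$$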
Dan Boneh, Joachim Neu, Valeria Nikolaenko, Aditi Partap
Data availability sampling (DAS) is an important technique to horizontally scale consensus protocols without compromising on the number of adversarial nodes that can be tolerated. DAS is on the technical roadmap of major blockchains such as Ethereum. A major challenge for DAS schemes that has not been formally studied in the literature is how incomplete shares can be repaired. The need for repairing data shares motivates key aspects of Ethereum's DAS-based sharding vision, called "Danksharding".
In this work, we make two contributions. First, we provide a new definitional framework that formalizes the notion of repair, along with the security guarantees that a DAS scheme must provide. Second, we propose a new DAS scheme designed with efficient repair in mind, based on locally-correctable multiplicity codes. To facilitate using these codes, we introduce a new multivariate polynomial commitment scheme that (i) supports efficient openings of partial derivatives of a committed polynomial, (ii) supports fast batch opening proof generation at many points, and (iii) has an algorithm to recompute (repair) opening proofs at a point from only a few other proofs. The proposed scheme improves upon the state-of-the-art Ethereum Fulu DAS scheme, slated for deployment in late 2025/early 2026, in storage overhead, repair bandwidth and coordination, while only slightly increasing dispersal cost and sampling bandwidth. Our techniques readily carry over to data availability schemes based on verifiable information dispersal (VID).
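For background (stated schematically, not in the paper's notation): in a multiplicity code, the codeword symbol at a point $\mathbf{a} \in \mathbb{F}^m$ for a degree-bounded polynomial $f \in \mathbb{F}[x_1,\dots,x_m]$ is the vector of all Hasse partial derivatives of order less than the multiplicity parameter $s$,

$$\big(f^{(\mathbf{i})}(\mathbf{a})\big)_{|\mathbf{i}| < s},$$

and local correction repairs a symbol by interpolating the restrictions of $f$ to a few lines through $\mathbf{a}$. This is why a commitment scheme with efficient openings of partial derivatives, as introduced above, is the natural companion primitive.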
Matteo Campanelli, Dario Fiore, Mahak Pancholi
Incrementally Verifiable Computation (IVC) allows one to prove the correctness of a computation of potentially unbounded length in an incremental way, while a computationally weak client can efficiently check its correctness in time sublinear in the computation's length. IVC is particularly useful in several real-world applications such as scalable blockchains, distributed computation, and verifiable machine learning. Yet, most existing IVC schemes are only provably secure for constant-depth computations. Arguing their security for computations of polynomial depth relies on heuristic assumptions, raising both theoretical and practical concerns.
In this work, we delve into the security foundations of incremental proof systems, addressing two main questions. First, we revisit the security analysis, in the unbounded-depth regime, of the canonical construction of IVC based on the recursive composition of SNARKs. We extend this analysis to include SNARKs that are straightline extractable in the algebraic group model (AGM) and some additional oracle model. As a consequence of our result, we obtain novel instantiations of IVC for unbounded-depth computations based on AGM-based SNARKs, such as Groth16 or Marlin, to name a few—an important class of SNARKs not captured by similar analyses in prior work [Chiesa et al. TCC 2024].
Second, we consider incremental proof systems for arbitrary-depth computations in which full-blown extractability is not necessary. We study under what conditions they can be instantiated from the recursive composition of "plain" building blocks (SNARKs, folding, accumulation schemes), that is, without requiring special straightline extractability. We introduce incremental functional commitments (incremental FCs), a primitive that allows one to commit to a large piece of data $D$ and later prove an evaluation $f(D)$. The key aspect is that both the committing and proving functionalities operate incrementally, processing $D$ in a streaming, piece-by-piece manner. Also, as in standard FCs, their security property is a form of evaluation binding, a notion that is weaker than knowledge soundness (it states that it is hard to produce two valid proofs for the same commitment and two distinct outputs). Our second main result is a construction of incremental FCs based on the recursive composition of SNARKs, together with its security analysis, which shows that arbitrarily deep compositions of primitives with non-straightline extractors do not suffer from inherent security limitations.
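Schematically (paraphrasing the parenthetical above, not quoting the paper's formal definition), evaluation binding requires that no efficient adversary can output a commitment $C$, a function $f$, values $y \neq y'$, and proofs $\pi, \pi'$ such that

$$\mathsf{Verify}(C, f, y, \pi) = 1 \quad \text{and} \quad \mathsf{Verify}(C, f, y', \pi') = 1,$$

except with negligible probability.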
03 August 2025
Yalan Wang, Liqun Chen, Yangguang Tian, Long Meng, Christopher J.P. Newton
The World Wide Web Consortium (W3C) has established standards for decentralized identities (DIDs) and verifiable credentials (VCs). A DID serves as a unique identifier for an entity, while a VC validates specific attributes associated with the DID holder. To prove ownership of credentials, users generate verifiable presentations (VPs). To enhance privacy, the W3C standards advocate for randomizable signatures in VC creation and zero-knowledge proofs for VP generation. However, these standards face a significant limitation: they cannot effectively verify cross-domain credentials while maintaining anonymity. In this paper, we present Anonymous Verifiable Presentations with Extended Usability (AVPEU), a novel framework that addresses this limitation through the introduction of a notary system. At the technical core of AVPEU lies our proposed randomizable message-hiding signature scheme. We provide both a generic construction of AVPEU and specific implementations based on Boneh-Boyen-Shacham (BBS), Camenisch-Lysyanskaya (CL), and Pointcheval-Sanders (PS) signatures. Our experimental results demonstrate the feasibility of these schemes.
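As an illustration of the randomizability such constructions build on, a Pointcheval-Sanders signature $\sigma = (\sigma_1, \sigma_2) = (h, h^{x + y m})$ on a message $m$ can be re-randomized by any holder into an unlinkable signature on the same message (shown schematically; AVPEU's message-hiding variant adds further structure):

$$\sigma' = (\sigma_1^{\,r}, \sigma_2^{\,r}) \quad \text{for random } r \in \mathbb{Z}_p^{*}, \qquad \text{which still satisfies } e(\sigma_1', \tilde{X}\tilde{Y}^{m}) = e(\sigma_2', \tilde{g}).$$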
Yalan Wang, Bryan Kumara, Harsh Kasyap, Liqun Chen, Sumanta Sarkar, Christopher J.P. Newton, Carsten Maple, Ugur Ilker Atmaca
All-but-one Vector Commitments (AVCs) allow a committed vector to be verified by randomly opening all but one of the committed values. Typically, AVCs are instantiated using Goldwasser-Goldreich-Micali (GGM) trees. Generating these trees accounts for a significant part of the computational cost of AVCs due to the large number of hash function calls. Recently, correlated GGM (cGGM) trees were proposed to halve the number of hash calls, and Batched AVCs (BAVCs) using one large GGM tree were integrated into FAEST to form the FAEST version 2 signature scheme, which improves efficiency and reduces the signature size. However, further optimizations of BAVC schemes remain possible.
Inspired by the large-GGM-based BAVC and the cGGM tree, this paper proposes BACON, a BAVC-with-aborts scheme that leverages a large cGGM tree. BACON executes multiple instances of AVC in a single batch and uses an abort mechanism to probabilistically reduce the commitment size. We prove that BACON is secure in the ideal cipher model and the random oracle model. We also discuss a possible application of BACON, namely FAEST version 2. Furthermore, because the number of hash calls in a large cGGM tree is half of that in a large GGM tree, our BACON is, in theory, more efficient than the state-of-the-art BAVC scheme.
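For context, a plain (uncorrelated) GGM tree expands a single root seed into many leaf seeds with one length-doubling PRG call per internal node, and an all-but-one opening reveals the sibling seeds along the path to the hidden leaf. The Python sketch below illustrates this expansion; the hash-based PRG and interfaces are illustrative, and this is the plain GGM variant, not the cGGM or BACON construction.

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    """Length-doubling PRG built from a hash (illustrative choice)."""
    left = hashlib.sha256(seed + b"\x00").digest()
    right = hashlib.sha256(seed + b"\x01").digest()
    return left, right

def ggm_leaves(root: bytes, depth: int) -> list[bytes]:
    """Expand the root seed into 2**depth leaf seeds."""
    level = [root]
    for _ in range(depth):
        nxt = []
        for seed in level:
            l, r = prg(seed)
            nxt += [l, r]
        level = nxt
    return level

def all_but_one_opening(root: bytes, depth: int, hidden: int) -> list[bytes]:
    """Reveal the sibling seed at each level of the path to leaf `hidden`,
    letting a verifier recompute every leaf seed except the hidden one."""
    opening, seed = [], root
    path = [(hidden >> (depth - 1 - lvl)) & 1 for lvl in range(depth)]
    for bit in path:
        l, r = prg(seed)
        opening.append(r if bit == 0 else l)  # sibling of the node on the path
        seed = l if bit == 0 else r           # continue down the path
    return opening
```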
Mirza Ahad Baig, Christoph Ullrich Günther, Krzysztof Pietrzak
The blocks in the Bitcoin blockchain record the amount of work W that went into creating them through proofs of work. When honest parties control a majority of the work, consensus is achieved by picking the chain with the highest recorded weight. Resources other than work have been considered to secure such longest-chain blockchains. In Chia, blocks record the amount of disk-space S (via a proof of space) and sequential computational steps V (through a VDF).
In this paper, we ask what weight functions Γ(S,V,W) (that assign a weight to a block as a function of the recorded space, speed, and work) are secure in the sense that whenever the weight of the resources controlled by honest parties is larger than the weight of adversarial parties, the blockchain is secure against private double-spending attacks.
We completely classify such functions in an idealized “continuous” model: Γ(S,V,W) is secure against private double-spending attacks if and only if it is homogeneous of degree one in the timed resources V and W, i.e., αΓ(S,V,W)=Γ(S,α V, α W). This includes the Bitcoin rule Γ(S,V,W)=W and the Chia rule Γ(S,V,W) = S · V. In a more realistic model where blocks are created at discrete time-points, one additionally needs some mild assumptions on the dependency on S (basically, the weight should not grow too much if S is slightly increased, say linear as in Chia).
Our classification is more general and allows various instantiations of the same resource. It provides a powerful tool for designing new longest-chain blockchains. E.g., consider combining different PoWs to counter centralization, say the Bitcoin PoW W_1 and a memory-hard PoW W_2. Previous work suggested using W_1+W_2 as the weight. Our results show that, e.g., √(W_1)·√(W_2) or min{W_1,W_2} are also secure, and we argue that in practice these are much better choices.
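As a quick sanity check against the classification above, both suggested alternatives are homogeneous of degree one in the timed resources:

$$\sqrt{\alpha W_1}\cdot\sqrt{\alpha W_2} = \alpha\,\sqrt{W_1}\cdot\sqrt{W_2}, \qquad \min\{\alpha W_1, \alpha W_2\} = \alpha\,\min\{W_1, W_2\},$$

whereas a rule such as W_1 · W_2 scales with α² and is therefore excluded.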
Sofiane Azogagh, Zelma Aubin Birba, Sébastien Gambs, Marc-Olivier Killijian
While the use of homomorphic encryption (HE) for encrypted inference has received considerable attention, its application for the training of machine learning (ML) models remains comparatively underexplored, primarily due to the high computational overhead traditionally associated with fully homomorphic encryption (FHE).
In this work, we address this challenge by leveraging the inherent connection between inference and training in the context of Extremely Randomized Trees (ERT), thereby enabling efficient training directly over encrypted data.
More precisely, we instantiate this approach by training ERTs within the TFHE framework.
Our implementation demonstrates that it is possible to train ERTs on encrypted datasets with a runtime significantly lower than current state-of-the-art methods for training Random Forests in the encrypted domain while achieving comparable predictive accuracy.
This result highlights a promising direction for practical privacy-preserving machine learning using FHE.
Our second main contribution consists in leveraging the properties of ERTs to create the first ML model that enables private unlearning. This approach makes the unlearning process indistinguishable from training, thus allowing clients to conceal the true nature of the operations being conducted on the model.
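The connection between training and inference exploited above comes from how splits are chosen in (fully randomized) Extremely Randomized Trees: the split feature and threshold of each node are drawn at random rather than optimized against the labels, so the data-dependent work reduces to the same comparisons performed during inference. Below is a plaintext sketch of that split rule; it is illustrative only and is not the encrypted TFHE protocol.

```python
import random

def random_split(X, y, features, lo, hi):
    """Fully randomized ERT split: pick the feature and threshold at random,
    then only *evaluate* the split on the data. No threshold search is
    performed, which keeps the data-dependent work close to inference."""
    feature = random.choice(features)
    threshold = random.uniform(lo[feature], hi[feature])
    left = [(x, label) for x, label in zip(X, y) if x[feature] <= threshold]
    right = [(x, label) for x, label in zip(X, y) if x[feature] > threshold]
    return feature, threshold, left, right
```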
Vincenzo Botta, Simone Bottoni, Matteo Campanelli, Emanuele Ragnoli, Alberto Trombetta
Verifiable Databases (VDBs) enable clients to delegate storage to a provider without having to trust it: given a claimed response to a query, they can check its correctness by holding a short digest of the database and a small related certificate (a proof). The foundational role of databases and the increasing trend of storage delegation make this an important primitive.
Existing VDB approaches face fundamental tradeoffs (on which we improve in this work). A line of work on VDB designs has leveraged general purpose proof systems (SNARKs). The resulting constructions are expressive—they support a large class of queries—but require cumbersome intermediate representations, often rely on heuristic assumptions and are overall very complex. Other prior approaches adopted cleverly combined specialized authenticated data structures (e.g., set accumulators). These designs tend to be simple, elegant and rely on well founded cryptographic assumptions; however they have limited expressivity and some undesirable efficiency features (e.g., scaling quadratically in the number of columns).
We present $\mathsf{qedb}$, a novel construction for verifiable databases that addresses these limitations. $\mathsf{qedb}$ supports more expressive queries than previous approaches based on specialized data structures. At the same time it preserves their simplicity and improves on several of their limitations: it removes the quadratic dependency on the number of columns present in state-of-the-art constructions; its proof sizes are completely independent of the database size (without requiring any circuit representation).
One of our primary contributions is a foundational framework that cleanly separates VDB logic from cryptographic instantiations. At its essence, it resembles other common information-theoretic frameworks, such as Polynomial Interactive Oracle Proofs (PIOPs). At the same time, it diverges from existing approaches by being slightly specialized for the database setting. We demonstrate how to instantiate our framework using modern pairing-based linear-map vector commitments and set accumulators. More generally, we show that our building blocks can be derived from extractable homomorphic polynomial commitments. Being modular, our approach permits alternative instantiations, such as with lattice-based polynomial commitments enabling post-quantum security.
We implemented $\mathsf{qedb}$ in Rust and experimentally showed that it efficiently scales to datasets with millions of rows while maintaining competitive proving and verification times. This evidence indicates that our approach provides a foundation for practical, secure, and expressive verifiable database systems.
Florian Krieger, Florian Hirner, Ahmet Can Mert, Sujoy Sinha Roy
Fully Homomorphic Encryption (FHE) and Post-Quantum Cryptography (PQC) involve polynomial multiplications, which are a common performance bottleneck. To resolve this bottleneck, polynomial multiplications are often accelerated in hardware using the Number-Theoretic Transform (NTT) or the Fast Fourier Transform (FFT). The NTT operates over modular rings, while the FFT operates over the complex numbers. NTT and FFT are widely deployed in applications with diverse parameter sets, leading to long design times for hardware accelerators. Existing hardware generation tools have limited functionality since they do not support generic on-the-fly twiddle factor generation or different memory-related optimizations. This paper improves the hardware design process and presents a generic and flexible tool to generate FFT and NTT architectures. In contrast to prior work, we combine on-the-fly twiddle factor generation with stall-free memory accesses. Moreover, we enhance hardware design flexibility through memory-optimized or routing-optimized design strategies. While our memory-optimized strategy minimizes the twiddle factors stored in ROM, our routing-optimized strategy allows significantly higher clock frequencies on FPGAs. These optimization strategies allow effective customization of NTT/FFT architectures, spanning from low-end PQC to high-end FHE accelerators. Compared to existing works, we achieve up to 15.9x lower latency and up to 7.4x better area-time product (ATP) for FFT applications such as the Falcon signature scheme. Compared to other NTT tools, we decrease latency by up to 1.8x and 2x for PQC and FHE parameter sets, respectively.
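For readers less familiar with the transform being accelerated, the following short reference NTT (iterative, radix-2, with twiddle factors generated on the fly from a single primitive root) shows the operation that such hardware implements. It is a textbook software sketch, not the paper's architecture.

```python
def ntt(a, q, root):
    """In-place iterative radix-2 NTT over Z_q.
    `root` is a primitive n-th root of unity mod q, with n = len(a) a power of two."""
    n = len(a)
    # Bit-reversal permutation.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # Butterfly stages; twiddles derived on the fly from `root`.
    length = 2
    while length <= n:
        w_len = pow(root, n // length, q)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % q
                a[k], a[k + length // 2] = (u + v) % q, (u - v) % q
                w = w * w_len % q
        length <<= 1
    return a

# Example: n = 4, q = 17, and 4 is a primitive 4th root of unity mod 17 (4^2 = -1 mod 17).
print(ntt([1, 2, 3, 4], 17, 4))  # -> [10, 7, 15, 6]
```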
Yifan Song, Xiaxi Ye
In this work, we study the communication complexity of MPC achieving perfect security with optimal resilience ($t < n/3$).
On the positive side, we construct a perfectly secure MPC protocol for SIMD circuits with a communication complexity of $O(|C|)$ elements assuming preprocessing data of size $O(|C|)$, where the preprocessing data consists of packed Beaver triples over bivariate polynomials. Furthermore, we show that packed Beaver triples over bivariate polynomials can be prepared at an amortized cost of $O(1)$ elements plus $O(1)$ three-party Beaver triples per secret.
On the negative side, we establish a communication lower bound proving that preparing packed Beaver triples over bivariate polynomials requires at least $\Omega(n)$ elements of communication per secret. This lower bound is derived by first proving a communication lower bound for verifying the correctness of packed Beaver triples with perfect security with abort, and then efficiently reducing the task of verifying packed Beaver triples to preparing packed Beaver triples over bivariate polynomials. To match this bound, we give a concrete construction for packed Beaver triples over bivariate polynomials with $O(n)$ elements per secret, demonstrating the tightness of our lower bound.
Our proof technique also extends to show that for the task of computing the inner-product of two length-$|C|$ vectors, any MPC protocol that achieves perfect security with abort requires either $\Omega(|C|\cdot n)$ elements of communication or $\Omega(|C|)$ elements of preprocessing data.
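For background on the role of Beaver triples, recall the classical (single, unpacked) Beaver relation that the packed bivariate triples in this work generalize: given sharings of $x$ and $y$ and a preprocessed triple $(a, b, c)$ with $c = ab$, the parties open $d = x - a$ and $e = y - b$ and locally compute

$$xy = c + d\,b + e\,a + d\,e,$$

so each multiplication costs only the communication needed to open $d$ and $e$.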