International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

- via email
- via RSS feed

12 October 2025

Lorenzo Grassi, Dmitry Khovratovich, Katharina Koschatko, Christian Rechberger, Markus Schofnegger, Verena Schröppel
ePrint Report
We present Poseidon2b, a version of Poseidon2 defined over binary extension fields. It is specifically designed to inherit many of the circuit-friendly properties of its prime field version, and to be used together with binary extension field proving systems such as Binius. Benchmarks demonstrate its merits in proof size, proving time, and especially verification time.

We also revisit recent attacks on Poseidon and Poseidon2 and discuss their applicability in the binary extension field setting, in addition to analyzing attack vectors that were not applicable in the prime field setting. In particular, we place special focus on algebraic cryptanalysis and subspace trails, techniques which resulted in attacks on initial versions of Poseidon defined over binary extension fields.
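For readers unfamiliar with the setting, here is a minimal sketch of arithmetic in a binary extension field, the algebraic structure Poseidon2b is defined over. The field GF(2^8) and the AES reduction polynomial are chosen purely for illustration and are not Poseidon2b's actual parameters.

    IRRED = 0x11B  # AES polynomial x^8+x^4+x^3+x+1; illustrative choice only

    def gf2n_mul(a: int, b: int, irred: int = IRRED, n: int = 8) -> int:
        """Multiply a, b in GF(2^n): carry-less product, reduced on the fly."""
        acc = 0
        while b:
            if b & 1:
                acc ^= a           # addition in GF(2^n) is XOR
            b >>= 1
            a <<= 1
            if (a >> n) & 1:       # degree reached n: XOR away the modulus
                a ^= irred
        return acc

    assert gf2n_mul(0x57, 0x83) == 0xC1  # FIPS-197 worked example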
Deokhwa Hong, Yongwoo Lee
ePrint Report
FHEW-like homomorphic encryption (HE) schemes, introduced by Ducas and Micciancio (Eurocrypt 2015), represent the most efficient family of HE schemes in terms of both latency and key size. However, their bootstrapping noise is highly sensitive to parameter selection, leaving only a sparse set of feasible parameters. Because bootstrapping noise directly affects security and performance, existing approaches tend to choose parameters that drive noise excessively low, resulting in large key sizes and high latency. In this paper, we propose a new bootstrapping modification that permits an almost continuous spectrum of parameter choices. To the best of our knowledge, this is the first practical HE scheme for which the evaluation failure probability is precisely determined without requiring any information about the message distribution. We further show that, under our method, the parameter-optimization task reduces to a generalized knapsack problem solvable in polynomial time. As a result, the traditionally cumbersome process of selecting parameters for FHEW-like schemes becomes tractable. Experimental results show that our method improves bootstrapping runtime by approximately 17% and reduces key size by about 45%.
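The abstract does not spell out the reduction, and the paper's generalized knapsack variant admits a polynomial-time algorithm; purely as a generic illustration of the problem flavor, here is the textbook (pseudo-polynomial) 0/1 knapsack dynamic program, with hypothetical names.

    def knapsack(weights, values, capacity):
        """Textbook 0/1 knapsack: max value with total weight <= capacity.
        The paper's generalized variant and objective differ."""
        best = [0] * (capacity + 1)
        for w, v in zip(weights, values):
            for c in range(capacity, w - 1, -1):  # reverse scan: each item used once
                best[c] = max(best[c], best[c - w] + v)
        return best[capacity]

    # e.g. trading a key-size "weight" budget against a performance "value"
    print(knapsack([3, 4, 5], [4, 5, 6], capacity=8))  # -> 10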
Rutchathon Chairattana-Apirom, Stefano Tessaro, Nirvan Tyagi
ePrint Report
Privacy-preserving advertisement attribution allows websites selling goods to learn statistics on which advertisement campaigns can be attributed to converting sales. Existing proposals rely on users to locally store advertisement history on their browser and report attribution measurements to an aggregation service (instantiated with multiparty computation over non-colluding servers). The service computes and reveals the aggregate statistic. The service hides individual user contributions, but it does not guarantee integrity against misbehaving users that may submit fraudulent measurements.

Our work proposes a new cryptographic primitive, "secret share attestation", in which secret shares input into a multiparty computation protocol are accompanied by an attestation of integrity by a third party: advertisers include signature attestations when serving ads that are later included in contributed measurements. We propose two constructions based on the standards-track BBS signatures and efficient signatures over equivalence classes, respectively. We implement and evaluate our protocols in the context of the advertising application to demonstrate their practicality.
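To fix intuition about the data flow, here is a toy sketch (not the paper's protocol): a measurement is split into additive shares for the non-colluding servers and accompanied by an advertiser attestation. The paper's constructions use BBS signatures or signatures over equivalence classes so the attestation can be verified without revealing the measurement; the HMAC below is only a self-contained stand-in, and all names are hypothetical.

    import hmac, hashlib, secrets

    Q = 2**61 - 1  # hypothetical share modulus

    def share(measurement: int):
        """Additive 2-out-of-2 secret sharing of a measurement."""
        s1 = secrets.randbelow(Q)
        return s1, (measurement - s1) % Q

    def attest(advertiser_key: bytes, measurement: int) -> bytes:
        """Stand-in attestation; the paper uses BBS / equivalence-class signatures."""
        return hmac.new(advertiser_key, measurement.to_bytes(8, "big"),
                        hashlib.sha256).digest()

    key = secrets.token_bytes(32)
    m = 1                      # one attributed conversion
    s1, s2 = share(m)          # one share per non-colluding server
    tag = attest(key, m)
    assert (s1 + s2) % Q == m  # shares recombine to the measurement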

11 October 2025

Jung Hee Cheon, Daehyun Jang
ePrint Report
Verifiable Homomorphic Encryption (VHE) is a cryptographic technique that integrates Homomorphic Encryption (HE) with Verifiable Computation (VC). It serves as a crucial technology for ensuring both privacy and integrity in outsourced computation, where a client sends input ciphertexts $\mathsf{ct}$ and a function $f$ to a server and verifies the correctness of the evaluation upon receiving the result $f(\mathsf{ct})$ from the server. Chatel et al. (CCS'24) introduced two VHE schemes: Replication Encoding (REP) and Polynomial Encoding (PE). A similar approach to REP was used by Albrecht et al. (EUROCRYPT'24) to develop a Verifiable Oblivious PRF scheme (vADDG). A key approach in these schemes is to embed specific secret information within ciphertexts and use it to verify homomorphic evaluations. This paper presents efficient forgery attacks against the verifiability guarantees of these VHE schemes. We introduce two attack strategies. The first targets REP and vADDG, extracting secret information in encrypted form from input ciphertexts and leveraging it to forge output ciphertexts without being detected by the verification algorithm. The second targets PE, exploiting its secret embedding structure to forge output ciphertexts that remain valid on input values for verification, yet violate the verifiability property. Our forgery attack on vADDG demonstrates that the proposed 80-bit security parameters provide at most 10 bits of concrete security. Our attacks on REP and PE succeed with probability 1 and linear time complexity when fully homomorphic encryption is used.
Prabhanjan Ananth, John Bostanci, Aditya Gulati, Yao-Ting Lin
ePrint Report
The gluing theorem for random unitaries [Schuster, Haferkamp, Huang, QIP 2025] has found numerous applications, including designing low-depth random unitaries [Schuster, Haferkamp, Huang, QIP 2025], random unitaries in $\mathsf{QAC0}$ [Foxman, Parham, Vasconcelos, Yuen'25], and generically shortening the key length of pseudorandom unitaries [Ananth, Bostanci, Gulati, Lin EUROCRYPT'25]. We present an alternate method of combining Haar random unitaries, building on the gluing lemma of [Schuster, Haferkamp, Huang, QIP 2025], that is secure against adversaries with inverse query access to the joined unitary. As a consequence, we show for the first time that strong pseudorandom unitaries can generically have their length extended, and can be constructed using only $O(n^{1/c})$ bits of randomness, for any constant $c$, if any family of strong pseudorandom unitaries exists.
Frank Denis
ePrint Report
Format-preserving encryption (FPE) enables encryption while maintaining syntactic properties such as character sets. The current NIST standard FF1 uses multi-round Feistel networks that sacrifice performance for flexibility, while FF3-1 was withdrawn in 2025 following successful cryptanalytic attacks. FAST, proposed as a faster alternative, has not been widely implemented due to its complexity, leaving limited practical alternatives.

We present HCTR2-FP and HCTR3-FP, format-preserving adaptations of the HCTR2 and HCTR3 wide-block tweakable ciphers.

These variants preserve the single-pass Hash-Encrypt-Hash structure while operating on arbitrary radix domains through base-radix encoding and modular arithmetic. The constructions are simple to implement and analyze, and benchmarks demonstrate significant speedup over FF1.
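As a rough illustration of the base-radix idea (a hypothetical sketch, not the HCTR2-FP specification): a fixed-length string over a radix-$r$ alphabet is mapped to an integer, processed, and reduced back into the same format via modular arithmetic.

    def encode(digits, radix):
        """Interpret a most-significant-first digit string as an integer."""
        n = 0
        for d in digits:
            n = n * radix + d
        return n

    def decode(n, radix, length):
        """Inverse of encode, padded back to the original fixed length."""
        out = []
        for _ in range(length):
            n, d = divmod(n, radix)
            out.append(d)
        return out[::-1]

    pan = [3, 1, 4, 1, 5, 9]                     # a 6-digit decimal field
    masked = (encode(pan, 10) + 271828) % 10**6  # stand-in for the cipher layer
    print(decode(masked, 10, 6))                 # still 6 decimal digits
    assert decode(encode(pan, 10), 10, 6) == pan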
Gregory D. Kahanamoku-Meyer, Seyoon Ragavan, Katherine Van Kirk
ePrint Report
"Pebble games," an abstraction from classical reversible computing, have found use in the design of quantum circuits for inherently sequential tasks. Gidney showed that allowing Hadamard basis measurements during pebble games can dramatically improve costs---an extension termed "spooky pebble games" because the measurements leave temporary phase errors called ghosts. In this work, we define and study parallel spooky pebble games. Previous work by Blocki, Holman, and Lee (TCC 2022) and Gidney studied the benefits offered by either parallelism or spookiness individually; here we show that these resources can yield impressive gains when used together. First, we show by construction that a line graph of length $\ell$ can be pebbled in depth $2\ell$ (which is exactly optimal) using space $\leq 2.47\log \ell$. Then, to explore pebbling schemes using even less space, we use a highly optimized $A^*$ search implemented in Julia to find the lowest-depth parallel spooky pebbling possible for a range of concrete line graph lengths $\ell$ given a constant number of pebbles $s$.

We show that these techniques can be applied to Regev's factoring algorithm (Journal of the ACM 2025) to significantly reduce the cost of its arithmetic. For example, we find that 4096-bit integers $N$ can be factored in multiplication depth 193, which outperforms the 680 required by previous variants of Regev's algorithm and the 444 reported by Ekerå and Gärtner for Shor's algorithm (IACR Communications in Cryptology 2025). While space-optimized implementations of Shor's algorithm likely remain the best candidates for the first quantum factorization of large integers, our results show that Regev's algorithm may have practical importance in the future, especially given the possibility of further optimization. Finally, we believe our pebbling techniques will find applications in quantum cryptanalysis beyond integer factorization.
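To make the search problem concrete, the sketch below brute-forces a minimum-depth parallel reversible pebbling of a short line graph by breadth-first search. It omits spooky (measurement) moves and the paper's $A^*$ optimizations, so it is only a classical baseline for toy sizes.

    from collections import deque
    from itertools import combinations

    def min_parallel_depth(L, S):
        """Min number of parallel steps to pebble vertex L of a line graph
        using at most S pebbles (classical reversible rules, no ghosts)."""
        start, goal = 0, 1 << (L - 1)
        seen, frontier = {start}, deque([(start, 0)])
        while frontier:
            state, depth = frontier.popleft()
            if state & goal:
                return depth
            # a pebble may be placed on / removed from i iff i-1 is pebbled
            legal = [i for i in range(L) if i == 0 or (state >> (i - 1)) & 1]
            for k in range(1, len(legal) + 1):
                for moves in combinations(legal, k):  # simultaneous toggles
                    nxt = state
                    for i in moves:
                        nxt ^= 1 << i
                    if bin(nxt).count("1") <= S and nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, depth + 1))

    print(min_parallel_depth(8, 4))  # tiny instance; grows quickly with L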
Michael Klooß, Russell W. F. Lai, Michael Reichle
ePrint Report
Blind signatures are an important tool for privacy-preserving applications with a long history dating back to Chaum's seminal work in Crypto'82. In this work, we focus on the Fiat-Shamir paradigm, i.e., blind signatures based on $\Sigma$-protocols compiled via Fiat-Shamir, in the random oracle model. We resolve the following open problems:

- We give the first lattice-based blind signature that is concurrently secure based on the Fiat-Shamir paradigm.
- We give the first pairing-free blind signature that is concurrently secure under the discrete logarithm assumption (without the algebraic group model).

On a technical level, our work is inspired by the recent proofs-of-inequality technique (Klooß and Reichle, Crypto'25). This technique relies on statistical puncturing of the verification key. We explore the technique in the computational regime and develop new proof and design techniques to tackle the challenges encountered along the way.
Sönke Jendral, Elena Dubrova, Qian Guo, Thomas Johansson
ePrint Report
Recognising the need for PQC signature schemes with different size and performance trade-offs than the ML-DSA and SLH-DSA standards, in 2023 NIST launched a competition for additional signature algorithms. Among the current candidates in this competition is CROSS, a code-based scheme derived from the syndrome-decoding problem and suitable for memory-constrained devices. This paper presents a fault attack on CROSS that recovers the secret key by flipping one or more bits in the scheme’s public parity-check matrix. Unlike previous PQC fault attacks that typically rely on precisely controlled fault injections, which is often an unrealistic assumption, our approach exploits bit flips with unknown position and value, resembling the Rowhammer fault model. The attack builds upon the correction-based methodology introduced for Dilithium (Euro S&P’22; CHES’24) and exploits structural properties of CROSS to substantially relax attacker requirements. We demonstrate the attack on an ARM Cortex-M4 processor using voltage fault injection. We further show that prior work on partial key exposure attacks (CRYPTO'22) can be extended to CROSS under non-trivial erasure rates, reducing the attack complexity. The attack remains effective in the presence of memory-integrity protection mechanisms such as error-correcting codes. Finally, we propose countermeasures for hardening CROSS implementations against physical attacks.
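As a small illustration of the assumed fault model only (not the attack itself): flips land at positions and with values unknown to the attacker, as in the Rowhammer setting.

    import secrets

    def flip_random_bits(H, nflips=1):
        """Return a copy of binary matrix H with nflips random bit flips."""
        H = [row[:] for row in H]
        for _ in range(nflips):
            r = secrets.randbelow(len(H))
            c = secrets.randbelow(len(H[0]))
            H[r][c] ^= 1  # position and new value unknown to the attacker
        return H

    faulty = flip_random_bits([[0, 1, 1], [1, 0, 1]])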
Sonia Belaïd, Gaëtan Cassiers
ePrint Report
Cryptographic implementations are inherently vulnerable to side-channel attacks, which exploit physical leakages such as power consumption. Masking has become the most widely adopted countermeasure to mitigate these threats, as it randomizes intermediate values and makes the leakage less exploitable. Yet a central challenge remains: how to rigorously assess the concrete security level of masked implementations. To tackle this issue, the random probing model has emerged as a powerful abstraction. It formalizes leakage as random probes in the circuit; importantly, security in the noisy leakage model, which closely reflects the behavior of real embedded devices, reduces to security in the random probing model. Hence, proving security in the random probing model provides sound guarantees of practical resistance against side-channel attacks.

However, the current state of the art on random probing compilers and verifiers suffers from a clear limitation: scalable approaches yield prohibitively large and inefficient circuits, while tighter methods do not scale to practical circuit sizes. In this work, we bridge this gap by introducing a new methodology that directly estimates the random probing security of large circuits through Monte Carlo sampling, combined with a pruning strategy that drastically reduces the sampling space. We implement our approach in a new tool, PERSEUS, which supports both gate and wire leakage models. Our experiments demonstrate that PERSEUS can efficiently evaluate masked implementations of AES-128 with $n=8$ shares, achieving security levels beyond 32 bits, thereby significantly advancing the state of the art in practical verification of side-channel countermeasures.
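Conceptually, the estimation loop looks as follows: a toy sketch, where `reveals_secret` is a hypothetical oracle deciding whether a leaked wire set breaks the masking; in practice that check is the expensive part PERSEUS attacks with pruning.

    import random

    def estimate_failure(num_wires, p, reveals_secret, trials=100_000):
        """Monte Carlo estimate of Pr[leaked wire set breaks the masking]."""
        fails = 0
        for _ in range(trials):
            leaked = [w for w in range(num_wires) if random.random() < p]
            if reveals_secret(leaked):
                fails += 1
        return fails / trials

    # Toy oracle: a 2-share masking breaks iff both share wires leak.
    print(estimate_failure(2, 0.1, lambda leaked: len(leaked) == 2))  # ~ 0.01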
Valerie Gilchrist, Laurane Marco, Christophe Petit, Gang Tang
ePrint Report
The Linear Code Equivalence (LCE) problem and the Matrix Code Equivalence (MCE) problem are two examples of code-based hard problems that have gained attention as candidates for use in post-quantum cryptography. They are straightforward to implement, can be viewed as group actions, and offer a good trade-off between compactness and performance in the realm of post-quantum group actions. With the community gaining confidence in the security of these problems, new variants of these problems have been introduced to achieve particular functionalities in advanced protocols or efficiency improvements. A natural question is then whether the problem variants are as secure as the original ones.

In this work, we consider three problem variants of LCE or MCE. We first consider a variant based on LCE and reduce it to the original LCE assumption. This increases confidence in the security of a blind signature scheme proposed by Duong, Khuc, Qiao, Susilo and Zhang that relied on it. Second, we analyse an MCE variant, MIMCE, proposed in the context of another blind signature scheme by Kuchta, LeGrow and Persichetti, and show that the proposed parameters are not sufficient to reach the claimed bit security. Finally, we consider a multi-sample version of MIMCE, which we solve in polynomial time.
Gaëtan Cassiers
ePrint Report
Hardware Private Circuits (HPC) provide a compositional framework for designing masked hardware circuits, ensuring security against side-channel attacks in the robust probing model with glitches and transitions. There is, however, a gap between the simplified circuit models and real-world implementations. In particular, some parts of real circuits are ignored in the simplified models. Further, designing implementations such that they have a simple static correspondence to the theory can be challenging (e.g., around the requirement of splitting the computation into gadgets), or even infeasible (e.g., when some hardware elements like memory are used for different purposes during execution). In this work, we close this gap by introducing a new model for hardware circuits that includes the control and randomness distribution logic. The model also allows some computation on the shares to be performed outside the delimited gadgets. We introduce a new open-source compositional masking verification tool, MATCHI. Thanks to the formalization of a composition framework for the new circuit model, we prove that any circuit successfully verified by MATCHI is secure in the threshold probing model with glitches and transitions.
Noah Greene, Britta Hale
ePrint Report
The development of quantum computing technology poses a serious and credible threat to modern public-key cryptography, which is a pillar of secure communications. Post-quantum cryptographic (PQC) algorithms can protect against this threat, but key establishment protocols supporting PQC algorithms pose non-trivial overhead costs. Prior proposals have pointed to the use of strategic traditional/PQC protocol combinations as a means of balancing performance and security, namely as an amortization of PQC overhead. This work provides the first benchmarking of this method within the context of the Messaging Layer Security (MLS) protocol. Comparative metrics include group size, CPU cycles, bytes, and runtime. The results show substantial overhead savings across the board when compared to simple use of a post-quantum cipher suite, and a marginal increase over traditional cipher suite performance when amortized. At small group sizes such as 1-to-1 channels, the method performs comparably to MLS with a traditional cipher suite. This work shows the viability of deploying PQC solutions in constrained settings and achieving PQC security with efficiency.
Prabhanjan Ananth, Amit Behera, Zikuan Huang, Fuyuki Kitagawa, Takashi Yamakawa
ePrint Report
Quantum copy-protection is a functionality-preserving compiler that transforms a classical program into an unclonable quantum program. This primitive has emerged as a foundational topic in quantum cryptography, with significant recent developments. However, characterizing the functionalities that can be copy-protected is still an active and ongoing research direction.

Assuming the existence of indistinguishability obfuscation and learning with errors, we show the existence of copy-protection for a variety of classes of functionalities, including puncturable cryptographic functionalities and subclasses of evasive functionalities. This strictly improves upon prior works, which were either based on heuristic assumptions [Ananth and Behera CRYPTO'24] or on the classical oracle model [Coladangelo and Gunn STOC'24]. Moreover, our constructions satisfy a new and much stronger security definition compared to the ones studied in prior works. To design copy-protection, we follow the blueprint of constructing copy-protection from unclonable puncturable obfuscation (UPO) [Ananth and Behera CRYPTO'24] and present a new construction of UPO by leveraging recently introduced techniques from [Kitagawa and Yamakawa TCC'25].
Thomas Debris-Alazard, Philippe Gaborit, Romaric Neveu, Olivier Ruatta
ePrint Report
Introduced in 2003 and 2005, the schemes of Alekhnovich and Regev were the first public-key encryption schemes whose security is based solely on the average-case hardness of decoding random linear codes and of LWE, respectively, without other security assumptions. Such security guarantees made them very popular, being at the origin of the now-standardized HQC and Kyber.

We present an adaptation of the Alekhnovich and Regev encryption schemes whose security is based solely on the hardness of a slight variation of MinRank, the so-called stationary-MinRank problem. We reach this strong security guarantee by showing that stationary-MinRank benefits from a search-to-decision reduction. Our scheme therefore brings a partial answer to the long-standing open question of building an encryption scheme whose security relies solely on the hardness of MinRank.

Finally, we show, after a thorough security analysis, that our scheme is practical and competitive with other encryption schemes admitting such strong security guarantees. Our scheme is slightly less efficient than FrodoKEM, but much more efficient than the original schemes of Alekhnovich and Regev, with possibilities for improvement by adding more structure, in the same way as HQC and Kyber.
Alain Couvreur, Thomas Debris-Alazard, Philippe Gaborit, Adrien Vinçotte
ePrint Report
We present Miranda, the first family of full-domain-hash signatures based on matrix codes. This signature scheme follows the paradigm of Gentry, Peikert and Vaikuntanathan (GPV), which gives strong security guarantees. Our trapdoor is very simple and generic: though we propose it with matrix codes, it can be instantiated in many other ways, since it only involves a subcode of a decodable code (or lattice) in a unique-decoding regime of parameters. Although Miranda's signing algorithm relies on a decoding task with exactly one solution, there are many possible signatures for a given message, and we ensure that signatures do not leak information on their underlying trapdoor by means of a very simple procedure involving the drawing of a small number of uniform bits. In particular, Miranda does not use a rejection sampling procedure, which makes its implementation very simple, contrary to other GPV-like signature schemes such as Falcon or even Wave.

We instantiate Miranda with the famous family of Gabidulin codes, represented as spaces of matrices, and thoroughly study its security (in the EUF-CMA security model). For 128 bits of classical security, the signature sizes are as low as 90 bytes and the public key sizes are on the order of 2.6 megabytes.
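Schematically, the code-based GPV full-domain-hash flow that Miranda follows works as below (generic paradigm, not Miranda's exact algorithm; $\mathbf{H}$ denotes a public parity-check matrix, $r$ a fresh salt, and $w$ a weight bound): to sign $m$, use the trapdoor to find a low-weight $\mathbf{e}$ with $\mathbf{H}\mathbf{e}^{\top} = \mathcal{H}(m, r)$ and output $(\mathbf{e}, r)$; to verify, accept iff $\mathrm{wt}(\mathbf{e}) \leq w$ and $\mathbf{H}\mathbf{e}^{\top} = \mathcal{H}(m, r)$.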
George Lu, Jad Silbak, Daniel Wichs
ePrint Report
We study error-detection and error-correction codes for computationally bounded adversarial channels. We consider seeded codes where the polynomial-time encoding and decoding procedures share a public random seed, but are otherwise deterministic. An adversarial channel gets this seed and can perform arbitrary polynomial-time computation to adaptively select both the message to be encoded and a bounded number of errors to be added to the resulting codeword. The goal is to detect or correct such errors with overwhelming probability, while achieving better trade-offs between rate and error tolerance than those possible for computationally unbounded channels. For large alphabets, prior work (ITCS '25) achieves essentially optimal parameters under minimal cryptographic assumptions. However, for the binary alphabet, prior works (TCC '20, EUROCRYPT '25) either only achieved a weaker notion of selective security under the learning with errors (LWE) assumption, or relied on non-standard cryptographic assumptions to get the full notion of adaptive security. In this work, we construct binary codes that achieve the full notion of adaptive security assuming trapdoor hashing, which can in turn be instantiated under a variety of standard cryptographic assumptions such as LWE, or Decisional Diffie-Hellman (DDH), or Quadratic Residuosity (QR), or Decisional Composite Residuosity (DCR). For error detection, our codes get essentially optimal rate $R \approx 1$ and relative error tolerance $p \approx \frac{1}{2}$. For error correction, they can uniquely correct $p < 1/4$ fraction of errors with a rate $R$ matching that of the best known list-decodable codes for this error tolerance.

As a central technical tool of potentially independent interest, we construct multi-input correlation-intractable hashing for "shifted output relations" under the standard cryptographic assumptions above.
Hossein Hafezi, Gaspard Anthoine, Matteo Campanelli, Dario Fiore
ePrint Report
Lookup arguments have become a central tool in proof systems, powering a range of practical applications. They enable the efficient enforcement of non-native operations, such as bit decomposition, range checks, comparisons, and floating-point arithmetic. They underpin zk-VMs by modelling instruction tables, provide set membership proofs in stateful computations, and strengthen extractors by ensuring witnesses belong to small domains. Despite these broad uses, existing lookup constructions vary widely in assumptions, efficiency, and composability. In this work, we systematize the design of lookup arguments and the cryptographic primitives they rely on. We introduce a unified and modular framework that covers standard, projective, indexed, vector, and decomposable lookups. We classify existing protocols by proof technique—multiset equality, Logup-based, accumulators, and subvector extraction (matrix–vector)—as well as by composition style. We survey and evaluate existing protocols along dimensions such as prover cost, dependence on table size, and compatibility with recursive proofs. From this analysis, we distill lessons and guidelines for choosing lookup constructions in practice and highlight the benefits and limitations of emerging directions in literature, such as preprocessing and decomposability.
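As one representative identity from the Logup family surveyed here (Haböck's logarithmic-derivative lemma, stated as background rather than taken from the paper): over a sufficiently large field, witness values $(w_i)_i$ all belong to a table $(t_j)_j$ if and only if there exist multiplicities $m_j$ such that $\sum_i \frac{1}{X + w_i} = \sum_j \frac{m_j}{X + t_j}$ holds as an identity of rational functions, which the prover establishes by evaluating both sides at a random challenge point.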
Sana Boussam
ePrint Report
Non-profiled attacks are a class of attacks in which an attacker aims to retrieve secret information from a device with no prior knowledge of the leakage model. In practice, Differential Power Analysis (DPA), Correlation Power Analysis (CPA) and Linear Regression based Attack (LRA), the most common non-profiled attacks, still require a priori assumptions about the leakage model. The development of a generic attack making no assumptions about the leakage model therefore remains an open issue and has been investigated by the side-channel community for over 10 years. Among state-of-the-art non-profiled attacks, Whitnall et al. showed that LRA corresponds to a generic attack when all predictors are considered, i.e., LRA captures the dependencies between the physical traces and the bits of the secret information together with their interactions. In practice, however, LRA cannot be carried out with all predictors, as it is subject to multiple limitations: the multicollinearity problem inherent to linear regression, and the use of inappropriate distinguishers, which lose their discriminating ability when targeting injective functions. In this paper, we aim to solve this issue and provide a significant improvement in generic attack research. First, we show that using the Walsh-Hadamard basis prevents the multicollinearity problem from which LRA suffers and thus allows generic LRA, i.e., LRA in which all predictors can be considered, without incurring a loss of precision in the estimated coefficients. From this observation, we demonstrate the limitations of the classical (Euclidean distance based) distinguishers in LRA and propose novel alternatives that are more suitable. To motivate the choice of these new distinguishers, we formally demonstrate their soundness against linear and non-linear operations. These choices result in the first generic non-profiled attack that considers all predictors. Finally, we experimentally assess and validate all our theoretical results using simulations and publicly available datasets, covering a wide range of use-cases.
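A sketch of the key ingredient (a generic fast Walsh-Hadamard transform, not the paper's code): the Walsh-Hadamard basis is orthogonal, which is why regressing traces on all predictors at once avoids the multicollinearity that breaks full-basis LRA in the standard basis.

    def fwht(vals):
        """Fast Walsh-Hadamard transform; len(vals) must be a power of two."""
        a = list(vals)
        h = 1
        while h < len(a):
            for i in range(0, len(a), h * 2):
                for j in range(i, i + h):
                    a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
            h *= 2
        return a

    # Orthogonality check: transforming twice returns N times the original.
    v = [4, 2, 7, 1]
    assert [x / len(v) for x in fwht(fwht(v))] == v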
Yevhen Hrubiian, Illia Melnyk, Volodymyr Dubinin, Oleksandr Kurbatov, Serhii Volynets, Roman Perebynos, Yevhenii Serdiukov
ePrint Report
Most modern end-to-end (E2E) private applications face scalability issues that limit their functionality, broader adoption, and overall user experience. Some of them organize private groups as a set of peer-to-peer chats, which leads to overall quadratic communication complexity in the group size and linear encryption time in the number of members. Others apply more scalable group key establishment constructions (such as ART), but do not support verifiable and concurrent updates. In this paper, we introduce the 0-ART protocol, which addresses these issues through (1) verifiable group operations; (2) a causal tree construction allowing for multiple concurrent updates and efficient member removal; and (3) anonymous credentials, providing privacy within the group while keeping operations authentic. We implemented the 0-ART framework and applied it to a decentralized collaborative work application. According to our benchmark, the most complex operation (the provable $\mathsf{AddMember}$ operation in a group of size $2^{20}$) takes 1.57 seconds on the user's device with a proof size of $24$ Kb.