International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

12 June 2025

Roberto La Scala, Sharwan K. Tiwari
ePrint Report
The multistep solving strategy is a divide-and-conquer approach: when a multivariate polynomial system is computationally infeasible to solve directly, one variable is assigned each element of the base finite field in turn, and the procedure is applied recursively to the resulting simplified systems. In a previous work by the same authors (among others), this approach proved effective in the algebraic cryptanalysis of the Trivium cipher.

In this paper, we present a new implementation of the corresponding algorithm based on a Depth-First Search strategy, along with a novel complexity analysis leveraging tree structures. We further introduce the notion of an ``oracle function'' as a general predictive tool for deciding whether the evaluation of a new variable is necessary to simplify the current polynomial system. This notion allows us to unify all previously proposed variants of the multistep strategy, including the classical hybrid approach, by appropriately selecting the oracle function.
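For intuition, the following minimal sketch (ours, not the authors' implementation; the polynomial representation and the toy oracle rule are illustrative assumptions) shows a guess-and-simplify DFS over GF(2), with an oracle function deciding whether to evaluate a further variable or hand the system to a direct solver:

    from itertools import product

    # Toy representation: a GF(2) polynomial is a frozenset of monomials,
    # each monomial a frozenset of variable indices (the empty monomial is 1).

    def substitute(poly, var, value):
        """Assign var := value in poly; XOR semantics cancel duplicate monomials."""
        out = frozenset()
        for mono in poly:
            if var in mono and value == 0:
                continue                                   # monomial vanishes
            out = out ^ {mono - {var}}
        return out

    def oracle(system):
        """Toy oracle: keep guessing while any nonlinear monomial survives.
        The paper's oracle functions are predictive and far more refined."""
        return any(len(mono) >= 2 for p in system for mono in p)

    def evaluate(poly, assignment):
        return sum(all(assignment[v] for v in mono) for mono in poly) % 2

    def direct_solve(system, variables, fixed):
        # Stand-in for the "easy" solver (e.g. linearization / Groebner bases):
        # plain exhaustive search over the remaining variables.
        for values in product((0, 1), repeat=len(variables)):
            assignment = dict(fixed + tuple(zip(variables, values)))
            if all(evaluate(p, assignment) == 0 for p in system):
                return assignment
        return None

    def solve(system, variables, fixed=()):
        system = [p for p in system if p]                  # drop zero polynomials
        if any(p == frozenset({frozenset()}) for p in system):
            return None                                    # 1 = 0: dead branch
        if not oracle(system):
            return direct_solve(system, variables, fixed)
        var, rest = variables[0], variables[1:]
        for value in (0, 1):                               # DFS over the base field
            reduced = [substitute(p, var, value) for p in system]
            sol = solve(reduced, rest, fixed + ((var, value),))
            if sol is not None:
                return sol
        return None

    # Toy system: x0*x1 + x2 = 0,  x0 + x1 + 1 = 0,  x1*x2 + x0 = 0
    f = lambda *ms: frozenset(frozenset(m) for m in ms)
    system = [f({0, 1}, {2}), f({0}, {1}, set()), f({1, 2}, {0})]
    print(solve(system, [0, 1, 2]))                        # -> {0: 0, 1: 1, 2: 0}

Swapping direct_solve for a dedicated solver and the toy rule for a predictive oracle recovers the general scheme; an oracle that always guesses down to a fixed number of variables corresponds to the classical hybrid approach mentioned above.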

Finally, we apply the multistep solving strategy to the cryptanalysis of the low-latency block cipher Aradi, recently introduced by the NSA. We present the first full-round algebraic attack, raising concerns about the cipher’s actual security with respect to its key length.
Shutong Jin, Shiyu Shen, Hao Yang, Donglong Chen, Wangchen Dai, Ray C. C. Cheung
ePrint Report
Privacy-preserving neural network inference using Fully Homomorphic Encryption (FHE) faces significant challenges in efficiently evaluating non-polynomial functions, such as activation functions, which are critical for introducing non-linearity in neural networks. Full-Domain Functional Bootstrap (FDFB) algorithms provide a promising solution by enabling the evaluation of arbitrary functions while simultaneously refreshing ciphertexts to manage noise accumulation. Despite their theoretical advantages, the practicality of FDFB algorithms has been limited by excessive computational overhead, often exceeding 1000 ms per ciphertext, which restricts their scalability for large neural networks.

To overcome the computational bottlenecks of FDFB, we have re-engineered the algorithms for massively parallel execution on GPUs. Our primary contribution is a hierarchical parallelization strategy that exploits concurrency at the thread, stream, and device levels. A key optimization involves the use of CUDA streams to create a data pipeline that effectively mitigates the overhead of memory transfers between the host and device. This optimized architecture achieves a significant speedup of up to 524$\times$ compared to CPU-based implementations. Our implementation maintains full precision for evaluating various activation functions, confirming its viability for large-scale, privacy-preserving machine learning tasks and paving the way for practical FHE-based deep learning.
Clémence Chevignard, Guilhem Mureau
ePrint Report
The module-Lattice Isomorphism Problem (module-LIP) was introduced by Ducas et al. and used within the signature scheme and NIST candidate HAWK. Recently, it was pointed out that over certain number fields $F$, the problem can be reduced to enumerating solutions of $x^2 + y^2 = q$ (where $q \in \mathcal{O}_F$ is given and $x,y \in \mathcal{O}_F$ are the unknowns). Moreover, one can always reduce to a similar equation that has only few solutions. This key insight led to a heuristic polynomial-time algorithm for solving module-LIP on those specific instances. Yet this result does not threaten HAWK, for which the problem can instead be reduced to enumerating solutions of $x^2 + y^2 + z^2 + t^2 = q$ (where $q \in \mathcal{O}_F$ is given and $x,y,z,t \in \mathcal{O}_F$ are the unknowns). We show that, in all likelihood, solving this equation requires enumerating a set too large for the approach to be feasible, rendering a straightforward adaptation of the previous method for solving module-LIP irrelevant.
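A toy computation over $\mathbb{Z}$ (the paper works over rings of integers $\mathcal{O}_F$, but the counting phenomenon is already visible here; the parameters are purely illustrative) contrasts how few solutions $x^2 + y^2 = q$ has with how quickly the solution set of $x^2 + y^2 + z^2 + t^2 = q$ grows:

    from math import isqrt

    def two_square_solutions(q):
        """All (x, y) in Z^2 with x^2 + y^2 = q."""
        r = isqrt(q)
        return [(x, y) for x in range(-r, r + 1) for y in range(-r, r + 1)
                if x * x + y * y == q]

    def four_square_count(q):
        """Number of (x, y, z, t) in Z^4 with x^2 + y^2 + z^2 + t^2 = q."""
        r, count = isqrt(q), 0
        for x in range(-r, r + 1):
            for y in range(-r, r + 1):
                for z in range(-r, r + 1):
                    rem = q - x * x - y * y - z * z
                    if rem < 0:
                        continue
                    t = isqrt(rem)
                    if t * t == rem:
                        count += 1 if t == 0 else 2        # t and -t
        return count

    for q in (5, 97, 1009):                                # primes, q = 1 mod 4
        print(q, len(two_square_solutions(q)), four_square_count(q))
        # -> 8 two-square solutions each time, but 48 / 784 / 8080
        #    four-square solutions (8*(q+1) for an odd prime, by Jacobi).

The two-square count stays constant while the four-square count grows linearly in $q$; over $\mathcal{O}_F$, an analogous gap is what separates the feasible two-square enumeration from the four-square case relevant to HAWK.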
Tobias Guggemos, Farzin Renan
ePrint Report
Identity-based signature ($\textsf{IBS}$) schemes eliminate the need for certificate management, reducing both signature size and verification time. However, the challenge of updating stolen identity-based signature keys (or revoking and re-issuing them) has received limited attention. Existing generic solutions, such as managing revocation lists or periodically renewing user keys, are inefficient and introduce significant networking overhead in dynamic environments. In this work, we address this gap by introducing a symmetric element that enables key updates in $\textsf{IBS}$ schemes via a single multicast message. The network overhead of our solution scales logarithmically with the number of system users, while computation and memory overhead are constant. Furthermore, we generalize our method by proposing a framework to transform any $\textsf{IBS}$ scheme into a key-updatable signature scheme ($\textsf{KUSS}$), and we define the token security (unforgeability), forward security, and post-compromise security requirements for such transformations. We demonstrate the versatility of our framework by providing five instantiations of $\textsf{KUSS}$ based on Schnorr-type, pairing-based, and isogeny-based $\textsf{IBS}$. Finally, we analyze the security of these instantiations.
Rutchathon Chairattana-Apirom, Stefano Tessaro
ePrint Report
BBS/BBS+ signatures are the most promising solution for instantiating practical and lightweight anonymous credentials. They underlie standardization efforts by the W3C and the IRTF. Due to their potential for large-scale deployment, it is paramount to understand their concrete security, but a number of questions have been left open by prior works. To this end, the security proofs by Au et al. (SCN '06), Camenisch et al. (TRUST '16), and Tessaro and Zhu (EUROCRYPT '23) show reductions from $q$-SDH in groups of prime order $p$, where $q$ is the number of issued signatures. However, these prior works left open the possibility that BBS/BBS+ is "even more secure" than what such proofs can guarantee. Indeed, while the $q$-SDH assumption is subject to an attack that uses $O(\sqrt{p/q})$ group exponentiations for several choices of $q$ (Cheon, EUROCRYPT '06), no attack with a similar complexity appears to affect either BBS+ or "deterministic" BBS, for which the best known attacks amount to recovering the secret key by breaking the discrete logarithm problem. The assumption that this attack is the best possible also seemingly justifies the choice of parameters in practice.

Our result shows that this expectation is false. We present new attacks against BBS+ and deterministic BBS which, after seeing $q$ signatures, recover the secret key with the same complexity as solving the $\Theta(q)$-Discrete Logarithm problem, which in turn costs $O(\sqrt{p/q})$ group operations for many choices of $q$. Further, we extend the attack to a reduction showing that the security of BBS+ and deterministic BBS implies the $\Theta(q)$-SDH assumption.
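To see what this means concretely, a back-of-the-envelope computation (our illustration, assuming a BLS12-381-sized group order of roughly $2^{255}$; the exact constants in the paper differ):

    # log2 cost of key recovery after q signatures, if the attack costs on the
    # order of sqrt(p / q) group operations (Cheon-style complexity).
    p_bits = 255                           # log2 of the group order, BLS12-381-sized
    for q_bits in (0, 16, 32, 48, 64):
        cost_bits = (p_bits - q_bits) / 2  # log2( sqrt(p / q) )
        print(f"q = 2^{q_bits:<2} signatures -> about 2^{cost_bits} operations")

So after $2^{64}$ issued signatures, the key recovery cost drops from the generic $2^{127.5}$ to about $2^{95.5}$, which is why these attacks undermine the expectation that only a full discrete logarithm computation threatens the key.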
Pratap Singh, Joshua Gancher, Bryan Parno
ePrint Report
Cryptographic security protocols, such as TLS or WireGuard, form the foundation of a secure Internet; hence, a long line of research has shown how to formally verify their high-level designs. Unfortunately, these formal guarantees have not yet reached real-world implementations of these protocols, which still rely on testing and ad-hoc manual audits for security and correctness. This gap may be explained, in part, by the substantial performance and/or development overhead imposed by prior efforts to verify implementations.

To make verified implementations of security protocols more practical to deploy, we present OwlC, the first fully automated, security-preserving compiler for verified, high-performance implementations of security protocols. From a high-level protocol specification proven computationally secure in the Owl language, OwlC emits an efficient, interoperable, side-channel-resistant Rust library that is automatically formally verified to be correct.

We produce verified libraries for all previously written Owl protocols, and we also evaluate OwlC on two new verified case studies: WireGuard and Hybrid Public-Key Encryption (HPKE). Our verified implementations interoperate with existing implementations, and their performance matches unverified industrial baselines on end-to-end benchmarks.
Aws Albarghouthi
ePrint Report
Quantum computing is often introduced through the lens of linear algebra with notation that is inherited from quantum mechanics. In this paper, we take an operational view of quantum computing that is easy to demonstrate programmatically. The hope is that this viewpoint will (1) demystify quantum computing and make it more accessible to a wider audience, particularly computer science students and software engineers, and (2) possibly serve as the basis of a formal foundation for automatically reasoning about quantum programs. We treat the state of a quantum computer as a set and present the operations of a quantum computer—quantum gates and measurements—using familiar functional set transformations (think map, filter, fold, etc.). By the end of the paper, we will have implemented a simple quantum circuit simulator that can be used to simulate small quantum circuits. The code is available at https://github.com/qqq-wisc/qwla.
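To make the idea concrete, here is a minimal sketch of such a simulator (our toy, not the authors' code from the linked repository): the state is a set of (basis string, amplitude) pairs stored as a dict, gates are map-style transformations of that set, and measurement statistics come from a filter followed by a fold:

    from math import sqrt

    def apply_x(state, wire):
        """NOT gate: map each basis string to the same string with bit `wire` flipped."""
        return {b[:wire] + ("1" if b[wire] == "0" else "0") + b[wire + 1:]: a
                for b, a in state.items()}

    def apply_h(state, wire):
        """Hadamard: map each basis string to two successors, then fold amplitudes."""
        out = {}
        for b, a in state.items():
            for bit, sign in (("0", 1), ("1", -1 if b[wire] == "1" else 1)):
                nb = b[:wire] + bit + b[wire + 1:]
                out[nb] = out.get(nb, 0) + sign * a / sqrt(2)
        return {b: a for b, a in out.items() if abs(a) > 1e-12}   # filter out zeros

    def prob(state, wire, bit):
        """Measurement statistics: filter matching strings, fold squared amplitudes."""
        return sum(abs(a) ** 2 for b, a in state.items() if b[wire] == bit)

    state = {"00": 1.0}                 # |00>
    state = apply_h(state, 0)           # (|00> + |10>) / sqrt(2)
    state = apply_x(state, 1)           # (|01> + |11>) / sqrt(2)
    print(state, prob(state, 1, "1"))   # amplitudes, and Pr[wire 1 reads 1] = 1.0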
Shuichi Katsumata, Guilhem Niot, Ida Tucker, Thom Wiggers
ePrint Report
The Signal protocol relies on a handshake protocol, formerly X3DH and now PQXDH, to set up secure conversations. One of its privacy properties, of value to Signal, is deniability, allowing users to deny participation in communications. Prior analyses of deniability for these protocols, including post-quantum variants, use models highly tailored to the individual protocols and generally make ad-hoc adaptations to ``standard'' AKE definitions, obscuring the concrete deniability guarantees and complicating comparisons across protocols. Building on Hashimoto et al.'s abstraction for Signal handshake protocols (USENIX '25), we address this gap by presenting a unified framework for analyzing their deniability. We analyze Signal's classically secure X3DH and harvest-now-decrypt-later-secure PQXDH, and show that PQXDH is deniable against harvest-now-judge-later attacks, where a quantum judge retrospectively assesses the participation of classical users. We further analyze post-quantum alternatives like RingXKEM, whose deniability relies on ring signatures (RS). By introducing a novel metric inspired by differential privacy, we provide relaxed, pragmatic guarantees for deniability. We also use this metric to define deniability for RS, a relaxation of anonymity, allowing us to build an efficient RS from NIST-standardized Falcon (and MAYO), which is not anonymous, but is provably deniable.
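For intuition, one way such a metric can be shaped (our hedged guess at the general form, by analogy with differential privacy; the paper's actual definition may differ) is to require, for every judge $\mathcal{J}$, that $\Pr[\mathcal{J}(\mathsf{trans}_{\mathsf{real}}) = 1] \le e^{\varepsilon} \cdot \Pr[\mathcal{J}(\mathsf{trans}_{\mathsf{sim}}) = 1] + \delta$, where $\mathsf{trans}_{\mathsf{real}}$ is a genuine transcript involving the accused user and $\mathsf{trans}_{\mathsf{sim}}$ is produced without them. Setting $\varepsilon = \delta = 0$ recovers standard simulation-based deniability, while small positive values capture relaxed, pragmatic guarantees of the kind described above.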
Nilanjan Datta, Jean Paul Degabriele, Avijit Dutta, Vukašin Karadžić, Hrithik Nandi
ePrint Report
A rugged pseudorandom permutation (RPRP) is a security notion for variable-length tweakable ciphers that is strictly weaker than the traditional notion of a strong pseudorandom permutation. Being a weaker security notion, it admits more efficient constructions. Yet the notion is strong enough that any such construction lends itself to a number of practical applications: it can be used to construct onion encryption, misuse-resistant AEAD, and AEAD secure under the release of unverified plaintext. Two recent works have introduced the notion, explored some of its applications, and studied a number of constructions that meet it. However, no constructions are known to achieve this notion with beyond-birthday-bound security. Current cloud applications process amounts of data that go well beyond the traditional $2^{32}$ barrier, and $2^{64}$ is becoming the new target. As such, the need for encryption with beyond-birthday-bound security has become a very practical concern. In this work, we present the first constructions for variable-length tweakable ciphers that satisfy RPRP security beyond the birthday bound. From these constructions, we readily obtain efficient AEAD schemes that are optimally secure against nonce misuse and the release of unverified plaintext.
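The $2^{32}$/$2^{64}$ figures trace back to the standard birthday-bound estimate (our illustration, assuming 128-bit blocks; not a computation from the paper): a birthday-bound mode can be distinguished with advantage roughly $q^2/2^n$ after $q$ processed blocks.

    n = 128                              # block size in bits
    for q_bits in (32, 48, 64):
        adv_bits = 2 * q_bits - n        # log2 of the advantage bound q^2 / 2^n
        print(f"q = 2^{q_bits} blocks -> distinguishing advantage about 2^{adv_bits}")

At $q = 2^{64}$ the bound becomes vacuous, which is exactly the regime that beyond-birthday-bound constructions such as those in this work target.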
Kang Hoon Lee, Ji Won Yoon
ePrint Report
We present a novel homomorphic trace evaluation algorithm $\mathsf{RevHomTrace}$, which mitigates the phase amplification problem inherent in the definition of the field trace. $\mathsf{RevHomTrace}$ overcomes phase amplification with only negligible computational overhead, thereby improving the usability of homomorphic field trace evaluation. Moreover, our tweak improves the noise propagation of $\mathsf{HomTrace}$, reducing the traditional $O(N^{3})$ variance bound of previous works to $O(N \log N)$.

Experimental results obtained by integrating $\mathsf{RevHomTrace}$ into state-of-the-art homomorphic encryption algorithms further demonstrate its usefulness. Specifically, $\mathsf{RevHomTrace}$ improves the noise accumulation of (high-precision) circuit bootstrapping and achieves up to a $1.40\times$ speedup by replacing the costly high-precision trace evaluation. Building on the idea behind $\mathsf{RevHomTrace}$, we also present a low-latency, high-precision LWE-to-GLWE packing algorithm, $\mathtt{MS}$-$\mathtt{PackLWEs}$, and show that it significantly reduces the packing error without severely degrading performance.

10 June 2025

Rahul Ilango, Alex Lombardi
ePrint Report
We study several problems in the intersection of cryptography and complexity theory based on the following high-level thesis. 1) Obfuscation can serve as a general-purpose worst-case to average-case reduction, reducing the existence of various forms of cryptography to corresponding worst-case assumptions. 2) We can therefore hope to overcome barriers in cryptography and average-case complexity by (i) making worst-case hardness assumptions beyond $\mathsf{P}\neq \mathsf{NP}$, and (ii) leveraging worst-case hardness reductions, either proved by traditional complexity-theoretic methods or facilitated further by cryptography.

Concretely, our results include:

- Optimal hardness. Assuming sub-exponential indistinguishability obfuscation, we give fine-grained worst-case to average-case reductions for circuit-SAT. In particular, if finding an $\mathsf{NP}$-witness requires nearly brute-force time in the worst case, then the same is true for some efficiently sampleable distribution. In fact, we show that under these assumptions, there exist families of one-way functions with optimal time-probability security tradeoffs.

Under an additional, stronger assumption -- the optimal non-deterministic hardness of refuting circuit-SAT -- we construct additional cryptographic primitives such as PRGs and public-key encryption that have such optimal time-advantage security tradeoffs.

- Direct Product Hardness. Again assuming $i\mathcal O$ and optimal non-deterministic hardness of SAT refutation, we show that the ``(search) $k$-fold SAT problem'' -- the computational task of finding satisfying assignments to $k$ circuit-SAT instances simultaneously -- has (optimal) hardness roughly $(T/2^n)^k$ for time $T$ algorithms. In fact, we build ``optimally secure one-way product functions'' (Holmgren-Lombardi, FOCS '18), demonstrating that optimal direct product theorems hold for some choice of one-way function family.

- Single-Input Correlation Intractability. Assuming either $i\mathcal O$ or $\mathsf{LWE}$, we show a worst-case to average-case reduction for strong forms of single-input correlation intractability. That is, powerful forms of correlation-intractable hash functions exist provided that a collection of \emph{worst-case} ``correlation-finding'' problems are hard.

- Non-interactive Proof of Quantumness. Assuming sub-exponential $i\mathcal O$ and OWFs, we give a non-interactive proof of quantumness based on the worst-case hardness of the white-box Simon problem. In particular, this proof of quantumness result does not explicitly assume quantum advantage for an average-case task.

To help prove our first two results, we show along the way how to improve the Goldwasser-Sipser ``set lower bound'' protocol to have communication complexity quadratically smaller in the multiplicative approximation error $\epsilon$.
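For intuition, here is a toy experiment with the classical Goldwasser-Sipser set lower bound idea (a sketch of the standard technique, not of the paper's improved version; all parameter choices are illustrative): to certify that a set $S \subseteq \{0,1\}^n$ is large, one checks whether a random pairwise-independent hash $h : \{0,1\}^n \to \{0,1\}^m$ sends some element of $S$ to $0^m$, which succeeds noticeably more often when $|S| \gg 2^m$ than when $|S| \ll 2^m$.

    import random

    def random_affine_hash(n, m):
        """Pairwise-independent h(x) = Ax + b over GF(2)."""
        A = [[random.getrandbits(1) for _ in range(n)] for _ in range(m)]
        b = [random.getrandbits(1) for _ in range(m)]
        return lambda x: tuple((sum(A[i][j] & x[j] for j in range(n)) + b[i]) % 2
                               for i in range(m))

    def zero_hit_rate(S, n, m, trials=300):
        """Fraction of random hashes mapping some element of S to 0^m."""
        hits = 0
        for _ in range(trials):
            h = random_affine_hash(n, m)
            hits += any(h(x) == (0,) * m for x in S)
        return hits / trials

    n, m = 12, 6
    big = [[random.getrandbits(1) for _ in range(n)] for _ in range(2 ** 8)]
    small = big[: 2 ** 4]
    print("big set:  ", zero_hit_rate(big, n, m))    # |S| = 4 * 2^m  -> close to 1
    print("small set:", zero_hit_rate(small, n, m))  # |S| = 2^m / 4  -> roughly 0.2,
                                                     # at most |S| / 2^m = 1/4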
Hao Chung, Elisaweta Masserova, Elaine Shi, Sri AravindaKrishnan Thyagarajan
ePrint Report
Atomic swaps enable asset exchanges across blockchains without relying on trusted intermediaries, and are a key component of decentralized finance (DeFi) ecosystems. Recently, Chung, Masserova, Shi, and Thyagarajan introduced Rapidash (Financial Cryptography 2025), an atomic swap protocol that remains incentive compatible under user-miner collusion, by ensuring that the honest strategy forms a coalition-resistant Nash equilibrium. However, their model assumes a closed system where players act solely based on internal protocol incentives. In practice, participants may be influenced by external incentives such as off-chain rewards or adversarial bribes, which can undermine such equilibrium guarantees.

In this work, we introduce a new game-theoretic notion, bounded maximin fairness, which ensures that honest participants remain protected against rational adversaries with arbitrary but bounded external incentives. We construct an atomic swap protocol that satisfies this notion, while preserving the equilibrium properties of prior work in the absence of external influence.

As we show, our protocol is easy to implement and can be instantiated even in Bitcoin’s limited scripting language.
Thibauld Feneuil, Matthieu Rivain
ePrint Report
Zero-knowledge proofs (ZKPs) are a fundamental building block in cryptography, enabling powerful privacy-preserving and verifiable computations. In the post-quantum era, hash-based ZKPs have emerged as a promising direction due to their conjectured resistance to quantum attacks, along with their simplicity and efficiency.

In this work, we introduce SmallWood, a hash-based polynomial commitment scheme (PCS) and zero-knowledge argument system optimized for relatively small instances. Building on the recent degree-enforcing commitment scheme (DECS) from the Threshold-Computation-in-the-Head (TCitH) framework, we refine its formalization and combine it with techniques from Brakedown. This results in a new hash-based PCS that is particularly efficient for polynomials of relatively small degree (typically up to $2^{16}$), outperforming existing approaches in this range.

Leveraging this new PCS, we design a hash-based zero-knowledge argument system that outperforms the state of the art in terms of proof size for witness sizes ranging from $2^6$ to $2^{16}$. Additionally, we present exact zero-knowledge arguments for lattice-based problems using SmallWood, demonstrating highly competitive performance: our scheme yields proof sizes under 25 KB across a wide range of lattice parameters, including Kyber and Dilithium instances.
Sebastian Faller, Julia Hesse
ePrint Report
An oblivious pseudorandom function (OPRF) is a cryptographic tool that enables fast and secure authentication and key derivation from passwords. In the past few years, the adoption of OPRFs has flourished and today they are at the core of the PIN-protected backup methods of WhatsApp and Signal, and of privacy-enhancing browser technologies. All vendors deploy the so-called 2Hash-Diffie-Hellman (2HashDH) OPRF, which relies on discrete-logarithm-type assumptions that are standard yet known to be prone to quantum attacks.
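As background for what is being combined, a toy sketch of the 2HashDH blind-evaluation flow (our illustration with insecure toy parameters, a naive hash-to-group map, and no wire encodings; real deployments use a prime-order elliptic-curve group):

    import hashlib, secrets

    p, q = 1019, 509                       # toy safe prime p = 2q + 1 (NOT secure)

    def H1(x: bytes) -> int:
        """Hash to the order-q subgroup via cofactor exponentiation (toy)."""
        h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
        return pow(h, (p - 1) // q, p)

    def H2(x: bytes, y: int) -> bytes:
        return hashlib.sha256(x + y.to_bytes(2, "big")).digest()

    k = secrets.randbelow(q - 1) + 1                  # server's OPRF key

    x = b"correct horse battery staple"               # client's private input
    r = secrets.randbelow(q - 1) + 1                  # client's blinding scalar
    blinded = pow(H1(x), r, p)                        # client -> server: H1(x)^r
    evaluated = pow(blinded, k, p)                    # server -> client: H1(x)^(r k)
    unblinded = pow(evaluated, pow(r, -1, q), p)      # client removes r: H1(x)^k
    output = H2(x, unblinded)

    # The blinded flow agrees with direct evaluation under the same key, yet
    # the server never learns x and the client never learns k.
    assert output == H2(x, pow(H1(x), k, p))
    print(output.hex())

A best-of-both combiner must merge such a classical evaluation with a post-quantum one on the same input; the impossibility result below shows this cannot be done in a black-box way, motivating the non-black-box construction.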

Recent advancements in cryptographic research (e.g., Dodgson et al., Eurocrypt 2025) have brought up post-quantum OPRFs that are fast enough to deploy in settings such as WhatsApp or Signal. Yet none of these constructions are based on standard assumptions.

In this work, we investigate combiners for OPRFs, namely a ``best-of-both'' combination of a classical and a post-quantum OPRF that is secure as long as one of the two is. First, we give formal evidence that so-called black-box combiners do not exist, indicating that combining OPRFs is subtle and bears similarities to other powerful yet hard-to-combine cryptographic primitives such as oblivious transfer (OT).

We then give a (non-black-box) combiner for OPRFs and show that it can be instantiated with 2HashDH and the currently most efficient post-quantum OPRFs based on Legendre symbols. In particular, the reliance on the less standard Legendre-based hardness assumption does not harm the security of 2HashDH. This gives vendors a viable path to lift the security of their OPRF deployments to a post-quantum level.
Giulia Gaggero, Elisa Gorla, Daniel Cabarcas
ePrint Report
In this note, we provide proven estimates for the complexity of the SupportMinors Modeling, mostly confirming the heuristic complexity estimates contained in the original article.
Amin Setayesh, Cheran Mahalingam, Emily Chen, Sujaya Maiyya
ePrint Report
We present Treebeard, the first scalable and fault-tolerant Oblivious RAM (ORAM) based datastore designed to protect applications from access pattern attacks. Current ORAM systems face challenges in practical adoption due to their limited ability to handle concurrent workloads, scale effectively, and ensure fault tolerance. We address all three limitations in Treebeard by utilizing a multi-layer architecture that scales horizontally, handling thousands of requests in parallel, while replicating the data to prevent data loss upon failures. Experimental evaluation demonstrates Treebeard's ability to scale linearly, achieving a throughput of 160K ops/sec with 16 machines; this behavior is similar to that of the enclave-based state of the art, Snoopy. Being fault-tolerant, Treebeard recovers from failures with close to zero downtime and achieves 13.8x the throughput of QuORAM, the latest fault-tolerant ORAM system, even without scaling.
Zhengyuan Su, Qi Pang, Simon Beyzerov, Wenting Zheng
ePrint Report
Secure two-party computation (2PC) is a cryptographic technique that enables two mutually distrusting parties to jointly evaluate a function over their private inputs. We consider a 2PC primitive called confidential lookup table (LUT) evaluation, which is useful in privacy-preserving ML inference and data analytics. In this setting, a server holds a confidential LUT and evaluates it over an input secret-shared between a client and the server, producing a secret-shared output. Existing approaches for 2PC LUT evaluation suffer from high asymptotic complexity and practical inefficiency, with some designs lacking confidentiality guarantees for the LUT. Recognizing that many applications involving confidential LUT evaluation require processing multiple inputs with the same LUT, we propose FABLE, a system designed to efficiently evaluate a LUT on a large batch of queries simultaneously. Compared to the state-of-the-art confidential LUT evaluation methods, FABLE achieves up to 28.46-101.47$\times$ speedup in LAN environments and up to 50.10-392.93$\times$ speedup in WAN environments.
Katharina Boudgoust, Oleksandra Lapiha
ePrint Report
In this work, we propose a novel systematic approach for obtaining leftover hash lemmas (LHLs) over cyclotomic rings. Such LHLs form a fundamental tool in lattice-based cryptography, both in theoretical reductions and in the design of cryptographic primitives. The scattered set of prior works makes it difficult to navigate the landscape and requires substantial effort to understand the mathematical constraints under which an LHL holds over cyclotomic rings. This is especially painful if one's given setting does not fit exactly into prior studies.

We argue that all prior approaches boil down to two different proof strategies, resulting in two main theorems. From there on, we are able to recover all previous flavours of seemingly independent LHLs as corollaries. Moreover, we showcase the power of our interpretation by providing new statements, covering mathematical settings not considered before. Our work further proves LHLs in the presence of leakage for both approaches and provides novel bounds for wide families of leakage functions.
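As a reference point for what an LHL asserts, a toy exact computation in the simplest ring-free setting (our illustration over $\mathbb{Z}_q$ with binary inputs; the paper's setting is cyclotomic rings): the pair $(a, \langle a, x\rangle)$, for uniform $a \in \mathbb{Z}_q^n$ and uniform $x \in \{0,1\}^n$, approaches uniform at the rate the classical LHL predicts.

    from itertools import product

    def lhl_distance(n, q):
        """Exact statistical distance between (a, <a,x> mod q) and (a, uniform)."""
        xs = list(product((0, 1), repeat=n))
        total = 0.0
        for a in product(range(q), repeat=n):
            counts = [0] * q
            for x in xs:
                counts[sum(ai * xi for ai, xi in zip(a, x)) % q] += 1
            total += 0.5 * sum(abs(c / len(xs) - 1 / q) for c in counts)
        return total / q ** n

    q = 3
    for n in (3, 5, 7):
        bound = 0.5 * (q / 2 ** n) ** 0.5    # classical LHL bound: sqrt(q / 2^n) / 2
        print(f"n = {n}: distance = {lhl_distance(n, q):.4f} <= bound {bound:.4f}")

The inner-product family $h_a(x) = \langle a, x\rangle \bmod q$ is universal for prime $q$, so the classical LHL applies; the measured distance stays below the bound and shrinks as the input entropy $n$ grows.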
Maiara F. Bollauf, Roberto Parisella, Janno Siim
ePrint Report
A reduction showing that the hardness of the discrete logarithm ($\mathsf{DL}$) assumption implies the hardness of the computational Diffie-Hellman ($\mathsf{CDH}$) assumption in groups of order $p$, where $p - 1$ is smooth, was first presented by den Boer [Crypto '88]. We consider groups of prime order $p$, where $p - 1$ is somewhat smooth (say, every prime $q$ that divides $p - 1$ is less than $2^{100}$). Several practically relevant groups satisfy this condition.

1. We present a concretely efficient version of the reduction for such groups. In particular, among practically relevant groups, we obtain the most efficient and tightest reduction in the literature for BLS12-381, showing that $\mathsf{DL}$ = $\mathsf{CDH}$.

2. By generalizing the reduction, we show that in these groups the $n$-Power $\mathsf{DL}$ ($n$-$\mathsf{PDL}$) assumption implies the $n$-Diffie-Hellman Exponent ($n$-$\mathsf{DHE}$) assumption, where $n$ is polynomial in the security parameter.

On the negative side, we show there is no generic reduction demonstrating that $n$-$\mathsf{PDL}$ implies the $n$-Generalized Diffie-Hellman Exponent ($n$-$\mathsf{GDHE}$) assumption. This is in stark contrast with the algebraic group model, where this implication holds.
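The role of smoothness can be seen in miniature with a Pohlig-Hellman-style discrete log computation (our toy sketch of the classical idea such reductions build on, not the paper's reduction itself, which works "in the exponent" via a CDH oracle; parameters are tiny, and naive brute force stands in for BSGS/Pollard in each subgroup):

    from math import prod

    def dlog_prime_power(g, h, p, q, e):
        """Solve g^x = h mod p, where g has order q^e, digit by digit in base q."""
        x = 0
        gamma = pow(g, q ** (e - 1), p)                  # element of order q
        for i in range(e):
            target = pow(h * pow(g, -x, p) % p, q ** (e - 1 - i), p)
            digit = next(d for d in range(q) if pow(gamma, d, p) == target)
            x += digit * q ** i                          # O(q) work per digit
        return x

    def crt(residues, moduli):
        M = prod(moduli)
        return sum(r * (M // m) * pow(M // m, -1, m)
                   for r, m in zip(residues, moduli)) % M

    def pohlig_hellman(g, h, p, factors):
        """DL in Z_p^* from DLs in each prime-power subgroup of order dividing p - 1."""
        residues, moduli = [], []
        for q, e in factors:
            cof = (p - 1) // q ** e
            residues.append(dlog_prime_power(pow(g, cof, p), pow(h, cof, p), p, q, e))
            moduli.append(q ** e)
        return crt(residues, moduli)

    p = 8191                                  # prime with smooth p - 1 = 2*3^2*5*7*13
    factors = [(2, 1), (3, 2), (5, 1), (7, 1), (13, 1)]
    g = next(g for g in range(2, p)           # find a generator of Z_p^*
             if all(pow(g, (p - 1) // q, p) != 1 for q, _ in factors))
    x = 4242
    print(pohlig_hellman(g, pow(g, x, p), p, factors))   # -> 4242

Each subgroup step costs $O(q)$ here (or $O(\sqrt{q})$ with baby-step giant-step) for a prime factor $q$ of $p - 1$, so a bound like $q < 2^{100}$ on every prime factor is what keeps the per-factor work within reach.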
Delia-Iustina Grigoriță
ePrint Report
As Internet of Things (IoT) systems become increasingly complex, verifying communication protocols is essential to guarantee their security and correctness. This paper gives a brief introduction to the theoretical concepts behind using formal techniques, namely LTL and rewriting logic within the Maude system, to verify the security and correct behavior of such protocols. While rewriting logic describes how events unfold over time, LTL expresses the properties to be checked. This theoretical perspective is intended to inform future applications and research.