International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

27 April 2025

Krishnendu Chatterjee, Seth Gilbert, Stefan Schmid, Jakub Svoboda, Michelle Yeo
ePrint Report
Liquid democracy is a transitive vote delegation mechanism over voting graphs. It enables each voter to delegate their vote(s) to another better-informed voter, with the goal of collectively making a better decision. The question of whether liquid democracy outperforms direct voting has been previously studied in the context of local delegation mechanisms (where voters can only delegate to someone in their neighbourhood) and binary decision problems. It has previously been shown that it is impossible for local delegation mechanisms to outperform direct voting in general graphs. This raises the question: for which classes of graphs do local delegation mechanisms yield good results?

In this work, we analyse (1) properties of specific graphs and (2) properties of local delegation mechanisms on these graphs, determining where local delegation actually outperforms direct voting. We show that a critical graph property enabling liquid democracy is that the voting outcome of local delegation mechanisms preserves a sufficient amount of variance, thereby avoiding situations where delegation falls behind direct voting. These insights allow us to prove our main results, namely that there exist local delegation mechanisms that perform no worse and in fact quantitatively better than direct voting in natural graph topologies like complete, random $d$-regular, and bounded degree graphs, lending a more nuanced perspective to previous impossibility results.
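To make the comparison concrete, the following is a small simulation sketch (our illustration, not the paper's model: the competence distribution, graph parameters, and the simple delegate-to-the-most-competent-neighbour rule are all assumptions made for this example) contrasting direct majority voting with a local delegation mechanism on a random $d$-regular graph, using the networkx library.

import random
import networkx as nx

def trial(n=101, d=8, delegate=False):
    g = nx.random_regular_graph(d, n)
    comp = {v: random.uniform(0.5, 0.8) for v in g}     # voter competences (toy)
    target = {}
    if delegate:
        # local rule: delegate to your most competent neighbour,
        # but only if that neighbour is strictly more competent than you
        for v in g:
            best = max(g[v], key=comp.get)
            if comp[best] > comp[v]:
                target[v] = best
    # resolve delegation chains (they terminate: competence strictly increases)
    weight = {v: 0 for v in g}
    for v in g:
        u = v
        while u in target:
            u = target[u]
        weight[u] += 1
    # weighted majority vote: voter u votes correctly with probability comp[u]
    correct = sum(w for u, w in weight.items() if w and random.random() < comp[u])
    return correct > n / 2

random.seed(0)
T = 2000
direct = sum(trial(delegate=False) for _ in range(T)) / T
local = sum(trial(delegate=True) for _ in range(T)) / T
print(f"direct voting correct in   {direct:.3f} of trials")
print(f"local delegation correct in {local:.3f} of trials")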
Zhuang Shan, Leyou Zhang, Fuchun Guo, Yong Yu
ePrint Report
We were deeply impressed by the paper by Ateniese et al., published at CRYPTO 2019, in which they presented a black-box construction of matchmaking encryption (ME) based on functional encryption. In our work, we propose an ME scheme based on standard assumptions in the standard model, proven secure under the learning with errors (LWE) assumption. Our ME scheme is achieved through a novel framework of bilateral-policy attribute-based encryption (BP-ABE) and a new intermediate primitive termed a perturbed pseudorandom generator (PPRG), which provides the authentication functionality in place of non-interactive zero-knowledge proofs.

In the scheme presented in this paper, the user's "public key" is generated using Hamming correlation robustness and user attributes. Note that the "public key" is not actually public. To preserve the privacy of the two parties involved in matchmaking encryption, our BP-ABE scheme does not use the "public key" directly to encrypt the plaintext. Instead, the message sender selects matching attributes and uses Hamming correlation robustness together with a homomorphic pseudorandom function (HPRF) to generate temporary public keys that hide the public key and the user attributes.

When these temporary public keys satisfy the access policy, the receiver can decrypt the data using their private key. Regarding the authentication function of matchmaking encryption, this paper proposes a non-interactive private set intersection (PSI) scheme based on HPRF and PPRG. The message sender encrypts their "public key" using the proposed PSI scheme as part of the ciphertext. The receiver also encrypts their "public key" using the proposed PSI scheme and matches the attributes, thereby completing the message authentication. We consider our approach to be a significant departure from existing constructions, despite its simplicity.
Vasyl Ustimenko, Tymoteusz Chojecki
ePrint Report
Let us assume that one of two trusted parties (the administrator) manages an information system (IS) and the other (the user) intends to use the resources of this IS during a certain time interval. They therefore need to establish a secure user access password to the resources of this IS via a selected authenticated key exchange protocol. To do so, they must communicate over an insecure channel and secretly construct a cryptographically strong session key that can serve for the establishment of secure passwords, in the form of tuples over a certain alphabet, during the given time interval. Nowadays the selected protocol has to be post-quantum secure. We propose an implementation of this scheme in terms of Symbolic Computations. The key exchange protocol is one of the key exchange algorithms of Noncommutative Cryptography, with the platform of multivariate transformations of the affine space over a selected finite commutative ring. The session key is a multivariate map on the affine space. Platforms and multivariate maps are constructed in terms of Algebraic Graph Theory.
Stephan Krenn, Thomas Lorünser, Sebastian Ramacher, Federico Valbusa
ePrint Report
As quantum computing matures, its impact on traditional cryptographic protocols becomes increasingly critical, especially for data-at-rest scenarios where large data sets remain encrypted for extended periods of time. This paper addresses the pressing need to transition away from pre-quantum algorithms by presenting an agile cryptosystem that securely and efficiently supports post-quantum Key Encapsulation Mechanisms (KEMs). The proposed solution is based on combining a CCA-secure KEM with a robust Authenticated Encryption scheme, allowing only the dynamic component - the symmetric key encapsulation - to be updated when migrating to new cryptographic algorithms. This approach eliminates the need to re-encrypt potentially massive data payloads, resulting in significant savings in computational overhead and bandwidth. We formalize the concept of cryptoagility through an agile-CCA security model, which requires that neither the original ciphertext nor any updated version reveals meaningful information to an attacker. A game-based proof shows that the overall construction remains agile-CCA secure if the underlying KEM and AE are individually CCA secure under a random oracle assumption. The result is a future-proof scheme that eases the transition to post-quantum standards, enabling enterprises and cloud storage providers to protect large amounts of data with minimal disruption while proactively mitigating emerging quantum threats.
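As a rough illustration of the pattern described above (our sketch, not the paper's construction: the ToyKem class is a placeholder interface, and AES-GCM key wrapping is an assumed choice), the data-encryption key is wrapped by a KEM and only that small wrapped component is replaced when migrating to a new KEM, while the bulk AE ciphertext stays untouched. Requires the Python cryptography package.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class ToyKem:
    """Stand-in KEM interface (keygen/encaps/decaps). Toy only; a real
    deployment would plug in a post-quantum KEM such as ML-KEM."""
    def keygen(self):
        sk = os.urandom(32)
        return sk, sk                                    # pk == sk, purely illustrative
    def encaps(self, pk):
        ss = os.urandom(32)
        return bytes(a ^ b for a, b in zip(ss, pk)), ss
    def decaps(self, sk, ct):
        return bytes(a ^ b for a, b in zip(ct, sk))

def encrypt_at_rest(kem, pk, data):
    dek = AESGCM.generate_key(bit_length=256)                 # data-encryption key
    n_payload, n_wrap = os.urandom(12), os.urandom(12)
    payload_ct = AESGCM(dek).encrypt(n_payload, data, None)   # static, potentially huge
    kem_ct, ss = kem.encaps(pk)
    wrapped_dek = AESGCM(ss).encrypt(n_wrap, dek, None)       # dynamic, small
    return {"kem_ct": kem_ct, "n_wrap": n_wrap, "wrapped_dek": wrapped_dek,
            "n_payload": n_payload, "payload_ct": payload_ct}

def migrate(obj, old_kem, old_sk, new_kem, new_pk):
    """Re-wrap only the DEK under the new KEM; the payload ciphertext is reused as-is."""
    dek = AESGCM(old_kem.decaps(old_sk, obj["kem_ct"])).decrypt(
        obj["n_wrap"], obj["wrapped_dek"], None)
    kem_ct, ss = new_kem.encaps(new_pk)
    n_wrap = os.urandom(12)
    obj.update(kem_ct=kem_ct, n_wrap=n_wrap,
               wrapped_dek=AESGCM(ss).encrypt(n_wrap, dek, None))
    return obj

pk_old, sk_old = ToyKem().keygen()
obj = encrypt_at_rest(ToyKem(), pk_old, b"large archived dataset ...")
pk_new, sk_new = ToyKem().keygen()                            # replacement KEM key pair
obj = migrate(obj, ToyKem(), sk_old, ToyKem(), pk_new)
dek = AESGCM(ToyKem().decaps(sk_new, obj["kem_ct"])).decrypt(obj["n_wrap"], obj["wrapped_dek"], None)
print(AESGCM(dek).decrypt(obj["n_payload"], obj["payload_ct"], None))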
Weiqing Deng, Jianing Zhang, Haoyang Wang
ePrint Report
Differential meet-in-the-middle (MITM) cryptanalysis, introduced by Boura et al. at CRYPTO 2023, has been demonstrated to be an effective technique for analyzing the security of block ciphers. In this paper, we introduce an improved parallel partitioning technique and incorporate it into a new framework with a flexible key recovery strategy. This framework is applicable to both SPN and Feistel ciphers. We apply the new framework to SIMON and Piccolo-128 for demonstration, and we also develop an MILP-based tool for searching for full attacks on Piccolo-128. For SIMON48/96, we propose the best known attack, reaching 26 rounds compared to 25 rounds previously. For other versions of SIMON, we extend the best previous differential attacks by one or two rounds. In the case of Piccolo-128, while no existing differential attacks show promising results, our differential MITM attack reaches 15 rounds, matching the number of rounds of the best impossible differential attack.
Xin Wang, Xiao Sui, Sisi Duan
ePrint Report
Sharding is a generic approach to enhance the scalability of distributed systems. In recent years, many efforts have been made to scale the consensus mechanism of blockchains through sharding. A crucial research question is how to achieve the sweet spot of having a relatively small shard size (to achieve decent performance) while retaining an overwhelming probability of correctness (so the system is safe and live). Many recent works follow a two-layer design that uses some coordinating shards to monitor the correctness of other shards (CCS 2022, NDSS 2024, INFOCOM 2023). All of them incur expensive communication between the shards, significantly degrading performance.

We present Otter, a scalable partially synchronous sharding-based Byzantine fault-tolerant atomic broadcast (ABC) protocol. We use coordinating shards in a completely new way. In particular, we randomly sample coordinating shards to directly participate in the consensus protocol. Such a random sampling mechanism makes it possible to analyze the correctness of the ABC protocol using a probabilistic model. In this way, we can significantly lower the shard size (informally, from over 1,200 in previous work to around 100) without lowering the probability of correctness. We also present a new notion called abortable fork detection (AFD) that might be of independent interest. Our evaluation results on Amazon EC2 using up to 1,000 replicas show that Otter achieves up to 4.38x the throughput of the state-of-the-art protocol.
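For intuition on why shard size matters for the probability of correctness, here is a back-of-the-envelope sketch (our illustration with assumed numbers, not the paper's analysis): the hypergeometric tail gives the probability that a uniformly sampled shard exceeds the usual one-third Byzantine threshold, which is the basic tension the sampling analysis above has to resolve.

from math import comb

def p_unsafe_shard(N, f, s):
    """Pr[a uniformly sampled shard of size s contains > s/3 Byzantine replicas],
    with f Byzantine replicas among N in total (hypergeometric tail)."""
    t = s // 3
    return sum(comb(f, k) * comb(N - f, s - k) for k in range(t + 1, s + 1)) / comb(N, s)

N, f = 10_000, 2_000                       # assumed: 20% Byzantine replicas overall
for s in (100, 400, 1200):
    print(f"shard size {s:5d}: Pr[unsafe shard] ~ {p_unsafe_shard(N, f, s):.2e}")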
Toshihiro Suzuki, Hiroki Furue, Takuma Ito, Shuhei Nakamura, Shigenori Uchiyama
ePrint Report
Multivariate public key cryptography (MPKC) is considered a promising candidate for post-quantum cryptography, with its security relying on the hardness of solving systems of multivariate quadratic equations.

Among MPKC schemes, the unbalanced oil and vinegar (UOV) and its variants have been actively studied. Pébereau and Luyten showed that the Kipnis–Shamir attack and the singular point attack can be described within the same framework using the Jacobian matrix.

In this study, we demonstrate that the rectangular MinRank attack can also be described within this framework. Furthermore, by leveraging this framework, we extend the feasible target ranks of the rectangular MinRank attack and use this extended attack to analyze the security of UOV and its variants. In conclusion, we confirm that the currently proposed parameters for UOV, MAYO, QR-UOV, and SNOVA are resistant to this attack.
Alexandru Cojocaru, Minki Hhan, Qipeng Liu, Takashi Yamakawa, Aaram Yun
ePrint Report
In this work, we derive the first lifting theorems for establishing security in the quantum random permutation and ideal cipher models. These theorems relate the success probability of an arbitrary quantum adversary to that of a classical algorithm making only a small number of classical queries.

By applying these lifting theorems, we improve previous results and obtain new quantum query complexity bounds and post-quantum security results. Notably, we derive tight bounds for the quantum hardness of the double-sided zero search game and establish the post-quantum security for the preimage resistance, one-wayness, and multi-collision resistance of constant-round sponge, as well as the collision resistance of the Davies-Meyer construction.
Anubhav Baweja, Pratyush Mishra, Tushar Mopuri, Matan Shtepel
ePrint Report
Recent work on IOP-based succinct arguments has focused on developing IOPs that improve prover efficiency by relying on linear-time encodable codes. We present two new schemes for improving the efficiency of such succinct arguments:

$\quad \bullet$ $\mathsf{FICS}$, an IOP of proximity for multilinear polynomial evaluation that, like the prior work Blaze [EUROCRYPT 2025], achieves linear prover time, but additionally reduces the verifier oracle query complexity to $O(\lambda \log \log n + \log n)$ for codewords of length $n$.

$\quad \bullet$ $\mathsf{FACS}$, an accumulation scheme for NP that achieves linear prover time and $O(\lambda)$ oracle queries per step of the accumulation.

Both schemes support a large class of linear-time encodable codes, including systematic LDPC codes and tensor codes of linear-time encodable codes.

We obtain our results by extending and formalizing the framework of Interactive Oracle Reductions (IORs) introduced by Ben-Sasson et al. [TCC 2019]. In particular, we develop new IORs for "codeswitching" tensor codes (Ron-Zewi and Rothblum [JACM 2024]), and also develop a new notion of knowledge soundness for IORs that allows us to easily compose IORs and to prove the security of our schemes in the non-interactive setting, even if the underlying codes are not known to be decodable in polynomial time.
Max Duparc
ePrint Report
We study the structure of theta structures on products of elliptic curves, detailing their construction through the symmetries induced by $4$-torsion points. In particular, we show how these symmetries allow the computation of theta structures projectively, thus avoiding the use of modular inversions. Furthermore, we explore the self-similarity of the matrix representation of theta structures, arising from the action of the canonical $2$-torsion point in the Kummer line. Combined with the sparsity of certain $4$-torsion points, this structure leads to new formulae for computing gluing $(2,2)$ isogenies that require significantly fewer precomputations and arithmetic operations. These new equations also naturally support the evaluation of points on the quadratic twist at negligible additional cost, without requiring operations in a field extension.
Katharina Boudgoust, Anamaria Costache
ePrint Report
Threshold encryption schemes provide a common tool to secure a public-key encryption scheme against single-point-of-failure attacks. Despite the success of lattices in building fully-homomorphic and presumably quantum-resistant encryption schemes, the task of thresholdizing those schemes remains challenging. The major bottleneck in the standard approach is the use of statistical noise flooding, leading to a significant efficiency loss and the need for stronger hardness assumptions. Recent works have replaced the heavy statistical noise flooding by a lighter one using the Rényi divergence. The new Rényi noise flooding both improves the efficiency and allows the use of weaker hardness assumptions. However, arguing semantic security of lattice-based threshold schemes in the presence of Rényi noise flooding has proved challenging. Chowdhury et al. (IACR ePrint'22) argued in the fully-homomorphic case that the Rényi divergence directly applies to semantic security by making use of an existing framework called public sampleability.

In this work, we argue that their public sampleability framework was neither sufficient nor correctly used. To address both issues, we strengthen the framework and thoroughly apply it to prove semantic security of generic lattice-based threshold encryption constructions. We distinguish between the plain public-key and the fully-homomorphic settings, as different security notions are achieved. As a byproduct, this shows that the proof detour via one-way security made by Boudgoust and Scholl (Asiacrypt'23) was superfluous, now leading to tighter proofs in the standard model.
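As a rough numeric illustration of the flooding trade-off mentioned above (our toy calculation, not the schemes analysed in the paper): if a partial decryption share carries a secret-dependent error e and the party publishes e plus uniform noise from [-B, B], the statistical distance to a share computed with e = 0 is |e|/(2B+1), so a purely statistical argument forces B to grow exponentially in the security parameter, whereas the Rényi-divergence arguments discussed above allow much lighter flooding noise.

def flooding_bound_statistical(e_bound, sec_lambda):
    """Smallest B such that |e| / (2B + 1) <= 2^-lambda for all |e| <= e_bound."""
    return (e_bound << sec_lambda) // 2          # = ceil((e_bound * 2^lambda - 1) / 2)

# toy error bound 2^10; statistical flooding then needs B of size about 2^(lambda + 9)
for lam in (40, 80, 128):
    B = flooding_bound_statistical(e_bound=2**10, sec_lambda=lam)
    print(f"lambda = {lam:3d}: statistical flooding needs B ~ 2^{B.bit_length() - 1}")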
Vicent Esteve Voltes
ePrint Report
Delegation of quantum computation in a trustful way is one of the most fundamental challenges toward the realization of future quantum cloud computing. While considerable progress has been made, no known protocol provides a purely classical client with universal delegated quantum computation while simultaneously ensuring blindness (input privacy), verifiability (soundness), and robustness against quantum noise—a feat that must be achieved under stringent cryptographic assumptions and with low overhead.

In this work, I introduce UVCQC, a new delegation framework that, for the first time, realizes a fully composable protocol for securely delegating quantum computations to an untrusted quantum server from a classical client. My scheme employs trap-based quantum authentication, post-quantum cryptographic commitments, and zero-knowledge proofs to provide full guarantees: the client remains purely classical; the server learns nothing about the computation; and any attempt to deviate from the specified circuit is detected with high probability.

I rigorously prove completeness, soundness, and perfect blindness of the protocol and demonstrate its universal composability against unbounded quantum adversaries. Furthermore, I propose a thermodynamically inspired verification mechanism based on energy dissipation and entropy change, enabling physically testable verification independent of cryptographic assumptions.

Beyond its core architecture, UVCQC is deeply intertwined with multidisciplinary frameworks: it admits a game-theoretic formulation where honesty is a Nash equilibrium, an information-theoretic treatment grounded in Holevo bounds, a categorical model via compact closed structures, and novel cryptographic enhancements based on isogeny-based primitives and topological invariants.

This research offers a scalable and unified solution to the blind and verifiable delegation problem, pushing forward the theoretical and practical frontiers of secure quantum computation—and opening a tangible path toward trustable quantum cloud services for classical users.

24 April 2025

Hemin Rahimi, Amir Moradi
ePrint Report
Safeguarding cryptographic implementations against the increasing threat of Side-Channel Analysis (SCA) attacks is essential. Masking, a countermeasure that randomizes intermediate values, is a cornerstone of such defenses. In particular, SCA-secure implementations of AES, the most widely used encryption standard, can employ Boolean masking as well as multiplicative masking due to its underlying Galois field operations. However, multiplicative masking is susceptible to vulnerabilities, including the zero-value problem, which was identified right after the introduction of multiplicative masking. At CHES 2018, De Meyer et al. proposed a hardware-based approach to manage these challenges and implemented multiplicative masking for AES, incorporating a Kronecker delta function and randomness optimization. In this work, we evaluate their design using the PROLEAD evaluation tool under the glitch- and transition-extended probing model. Our findings reveal a critical vulnerability in their first- and second-order implementations of the Kronecker delta function, stemming from the employed randomness optimization. This leakage compromises the security of their masked AES Sbox. After pinpointing the source of this leakage, we propose an alternative randomness optimization for the first-order design to address the issue, and demonstrate its effectiveness through rigorous evaluation with PROLEAD.
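For background on the zero-value problem mentioned above, here is a tiny GF(2^8) demonstration (a generic textbook illustration, not the evaluated design): multiplicatively masking a sensitive byte x as x·r with a random non-zero mask r still reveals whether x = 0, because the masked value is zero exactly when x is, regardless of the mask.

import random

AES_POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1, the AES field polynomial

def gf_mul(a, b):
    """Multiplication in GF(2^8) modulo the AES polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= AES_POLY
        b >>= 1
    return r & 0xFF

random.seed(0)
for x in (0x00, 0x3A):
    masked = [gf_mul(x, random.randrange(1, 256)) for _ in range(5)]
    print(f"x = {x:#04x}: masked samples {[format(m, '#04x') for m in masked]}")
# x = 0 always yields the masked value 0x00 -- the "zero-value" leak.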
Alex B. Grilo, Álvaro Yángüez
ePrint Report
While one-way functions (OWFs) serve as the minimal assumption for computational cryptography in the classical setting, in quantum cryptography we have even weaker cryptographic assumptions, such as pseudorandom states and EFI pairs, among others. Moreover, the minimal assumption for computational quantum cryptography remains an open question. Recently, it has been shown that pseudoentanglement is necessary for the existence of quantum cryptography (Goulão and Elkouss 2024), but no cryptographic construction has been built from it.

In this work, we study the cryptographic usefulness of quantum pseudoresources: a pair of families of quantum states that exhibit a gap in their resource content yet remain computationally indistinguishable. We show that quantum pseudoresources imply a variant of EFI pairs, which we call EPFI pairs, and that these are equivalent to quantum commitments and thus EFI pairs. Our results suggest that, just as randomness is fundamental to classical cryptography, quantum resources may play a similarly crucial role in the quantum setting.

Finally, we focus on the specific case of entanglement, analyzing different definitions of pseudoentanglement and their implications for constructing EPFI pairs. Moreover, we propose a new cryptographic functionality that is intrinsically dependent on entanglement as a resource.
Gorjan Alagic, Joseph Carolan, Christian Majenz, Saliha Tokat
ePrint Report
The sponge is a cryptographic construction that turns a public permutation into a hash function. When instantiated with the Keccak permutation, the sponge forms the NIST SHA-3 standard. SHA-3 is a core component of most post-quantum public-key cryptography schemes slated for worldwide adoption.

While one can consider many security properties for the sponge, the ultimate one is \emph{indifferentiability from a random oracle}, or simply \emph{indifferentiability}. The sponge was proved indifferentiable against classical adversaries by Bertoni et al. in 2008. Despite significant efforts in the years since, little is known about sponge security against quantum adversaries, even for simple properties like preimage or collision resistance beyond a single round. This is primarily due to the lack of a satisfactory quantum analog of the lazy sampling technique for permutations.

In this work, we develop a specialized technique that overcomes this barrier in the case of the sponge. We prove that the sponge is in fact indifferentiable from a random oracle against quantum adversaries. Our result establishes that the domain extension technique behind SHA-3 is secure in the post-quantum setting. Our indifferentiability bound for the sponge is a loose $O(\mathsf{poly}(q) 2^{-\min(r, c)/4})$, but we also give bounds on preimage and collision resistance that are tighter.
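For readers unfamiliar with the construction, here is a minimal sponge sketch (our illustration: the rate, capacity, padding, and the Feistel-based toy permutation are placeholders, not Keccak or SHA-3 parameters) showing the absorb/squeeze structure whose indifferentiability is analysed above; the bound quoted depends on exactly the rate $r$ and capacity $c$ split of the state.

import hashlib

R_BYTES, C_BYTES = 8, 16                 # toy rate r = 64 bits, capacity c = 128 bits
STATE_BYTES = R_BYTES + C_BYTES

def toy_permutation(state):
    """Toy public permutation on the (r+c)-byte state: a 3-round Feistel network
    with SHAKE-128 as round function (a placeholder, NOT Keccak-f)."""
    half = STATE_BYTES // 2
    L, R = state[:half], state[half:]
    for rnd in range(3):
        F = hashlib.shake_128(bytes([rnd]) + R).digest(half)
        L, R = R, bytes(a ^ b for a, b in zip(L, F))
    return L + R

def sponge_hash(msg, out_len=32):
    # simplified padding to a multiple of the rate (not SHA-3's pad10*1)
    msg = msg + b"\x80" + b"\x00" * ((-len(msg) - 2) % R_BYTES) + b"\x01"
    state = bytes(STATE_BYTES)
    # absorb: XOR each rate-sized block into the outer part of the state, then permute
    for i in range(0, len(msg), R_BYTES):
        block = msg[i:i + R_BYTES]
        outer = bytes(a ^ b for a, b in zip(state[:R_BYTES], block))
        state = toy_permutation(outer + state[R_BYTES:])
    # squeeze: read the outer part, permuting between output blocks
    out = b""
    while len(out) < out_len:
        out += state[:R_BYTES]
        state = toy_permutation(state)
    return out[:out_len]

print(sponge_hash(b"hello sponge").hex())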
Gennaro Avitabile, Vincenzo Botta, Dario Fiore
ePrint Report
Traceable ring signatures enhance ring signatures by adding an accountability layer. Specifically, if a party signs two different messages within the protocol, their identity is revealed. Another desirable feature is $\textit{extendability}$. In particular, $\textit{extendable threshold}$ ring signatures (ETRS) allow already finalized signatures to be $\textit{non-interactively}$ updated by enlarging the ring or the set of signers.

Combining traceability and extendability in a single scheme is unexplored and would offer a new tool for privacy-preserving voting schemes in scenarios where the voters are not known in advance. In this paper, we show how to reconcile both properties by introducing and constructing a new cryptographic primitive called Tetris. Notably, our Tetris construction simultaneously achieves strong anonymity and linear-size signatures, which is the main technical challenge in existing techniques. To solve this challenge, we develop a new approach to traceability that leads to several conceptual and technical contributions. Among those, we introduce and construct, based on Groth-Sahai proofs, $\textit{extendable}$ shuffle arguments that can be $\textit{non-interactively}$ updated by several provers.
Jaeseon Kim, Jeongeun Park, Hyewon Sung
ePrint Report
Private information retrieval (PIR) enables a client to retrieve data from a server while preserving the confidentiality of the client's query. When PIR is instantiated with fully homomorphic encryption (FHE), the protocol becomes non-interactive, requiring only a query-answer exchange, and it achieves asymptotically optimal communication and computation complexity. Although several FHE-based PIR protocols have been practically implemented with the desired properties, there has been little detailed comparison among them. As a result, it remains unclear which protocol is most efficient in practice with respect to various aspects such as performance and scalability.

In this paper, we revisit existing protocols by categorizing them into two different structures in order to analyze the advantages and disadvantages of each class in detail, with a focus on practical implementations. Furthermore, we introduce and compare various homomorphic algorithms that can be utilized for query optimization, discussing the strengths and limitations of each. Finally, we identify the most efficient protocol in terms of computational cost and memory usage as a function of database size. Additionally, we address common misconceptions that may lead to inefficient choices in real-world deployment scenarios and offer the best solutions. Consequently, our analysis and experimental results demonstrate that the less-explored design achieves a 90% reduction in communication cost and an 8× decrease in computational overhead compared to the other one, challenging the common misconception.
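To make the query-answer structure concrete, the following is a toy sketch of the standard selection-vector pattern behind FHE-based PIR (our illustration only: it uses a tiny symmetric LWE-style additively homomorphic scheme with toy parameters, not a real FHE library, and it is not one of the protocols compared in the paper). The client encrypts a one-hot query vector, the server homomorphically computes the inner product with its database, and the client decrypts a single record. The communication here is linear in the database size; the surveyed protocols use packing and query compression to do much better, which is part of the optimization space compared above.

import random

N_LWE, Q, T = 64, 2**32, 2**8      # toy dimension, ciphertext and plaintext moduli
DELTA = Q // T

def keygen():
    return [random.randrange(Q) for _ in range(N_LWE)]

def enc(sk, m):
    a = [random.randrange(Q) for _ in range(N_LWE)]
    e = random.randrange(-4, 5)
    b = (sum(x * y for x, y in zip(a, sk)) + e + DELTA * m) % Q
    return a, b

def dec(sk, ct):
    a, b = ct
    noisy = (b - sum(x * y for x, y in zip(a, sk))) % Q
    return round(noisy / DELTA) % T

def ct_scale(ct, k):                      # multiplication by a plaintext scalar
    a, b = ct
    return [x * k % Q for x in a], b * k % Q

def ct_add(c1, c2):                       # homomorphic addition
    a1, b1 = c1; a2, b2 = c2
    return [(x + y) % Q for x, y in zip(a1, a2)], (b1 + b2) % Q

# client: encrypt a one-hot selection vector for index idx
random.seed(0)
db = [random.randrange(T) for _ in range(16)]     # server's database of byte records
idx = 11
sk = keygen()
query = [enc(sk, 1 if i == idx else 0) for i in range(len(db))]

# server: encrypted inner product <query, db>, without learning idx
answer = ct_scale(query[0], db[0])
for i in range(1, len(db)):
    answer = ct_add(answer, ct_scale(query[i], db[i]))

# client: decrypt the single retrieved record
assert dec(sk, answer) == db[idx]
print("retrieved record:", dec(sk, answer), "expected:", db[idx])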
Ole Hylland Spjeldnæs
ePrint Report
SNAIL (Succinct, Non-interactive, Alon-compressed, Instant argument for Layered circuits) turns any depth-\(d\) arithmetic circuit into a non-interactive argument whose prover runs within a factor of \(1 + c(d,k,n)\) of plain circuit execution, where \(c(d,k,n) = \frac{3\,(k+n+1)}{k\,d + n + 1}\). For the representative choice \(k = n = 4\) and \(24 \le d \le 32\) this means only 21–28% overhead.

Core idea: A constant-round zerocheck based on a difference-driven Alon decomposition compresses all \(d\) layers into one multivariate identity. Fiat–Shamir in the random-oracle model yields a transparent argument whose hot loop uses only field additions and multiplications (no MSMs, FFTs, or Merkle trees).

Cost summary: with \(k = n = 4\) and \(24 \le d \le 32\) \[ T_{\text{prov}}(S,d)=\Bigl(1+\frac{6.75}{d}\Bigr)S,\qquad |\pi|=(d^{2}+2d)(4d+1),\qquad \varepsilon_{\text{sound}}=\frac{4nd}{|\mathbb F|}. \]
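As a quick sanity check of the figures quoted above, one can re-evaluate the abstract's own expressions for \(k = n = 4\) over the stated depth range (this only re-computes the formulas given here; it is not taken from the paper).

# Re-evaluating the overhead expressions quoted above for k = n = 4 and 24 <= d <= 32.
def c(d, k=4, n=4):
    return 3 * (k + n + 1) / (k * d + n + 1)

for d in (24, 28, 32):
    print(f"d = {d}: c(d,4,4) = {c(d):.3f}, simplified 6.75/d = {6.75 / d:.3f}")
# both stay roughly in the 0.20-0.28 range, consistent with the quoted 21-28% overhead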

A 2.6-billion-gate trace at \(d = 32\) is proven in \(\approx 300\;\text{ms}\) on an RTX-4090 and verified on a smartphone in \(< 1\;\text{ms}\).

Width scalability: Proof size depends only on depth, so widening to billions of parallel gates leaves the verifier unchanged—ideal for ultra-wide workloads such as live-audited VM traces.

Security: The six-round interactive core is statistically sound; the Fiat–Shamir variant is computationally sound in the random-oracle model and needs no trusted setup.

Implications: SNAIL shows that a prover can run near real-time on commodity GPUs while emitting proofs small enough for mobile light clients, enabling applications such as live-streamed provable gaming, pay-per-cycle cloud nodes, and always-on verifiable CPUs.

23 April 2025

Award
We are proud to announce the winners of the 2025 IACR Test-of-Time Award for Eurocrypt. The IACR Test-of-Time Award honors papers published at the three IACR flagship conferences 15 years ago which have had a lasting impact on the field.

The Test-of-Time award for Eurocrypt 2010 is awarded to the following paper:

On Ideal Lattices and Learning with Errors over Rings, by Vadim Lyubashevsky, Chris Peikert and Oded Regev, for introducing the Ring-LWE problem, laying the theoretical and practical foundation for efficient lattice-based cryptography.

For more information, see https://www.iacr.org/testoftime.

Congratulations to the winners!
Atsuki Momose, Kailun Qin, Ao Sakurai, Mona Vij
ePrint Report
Confidential Computing-as-a-Service has gained significant attention in recent years, driven by rapid advances in Trusted Execution Environment (TEE) technology. Among various architectures, confidential serverless computing has emerged as a promising model. A common approach to designing confidential serverless computing involves decoupling the client workload from the initial enclave image and dynamically provisioning the workload at runtime. This enables both offloading the costly enclave bootstrapping and maintaining a fixed reference measurement for attestation. This decoupling necessitates nested attestation, where the client’s workload is attested via a custom attestation module embedded in a platform-attested enclave established at boot time. The challenge in designing nested attestation, however, is to distinguish fresh enclaves from the used ones to prevent enclave reuse. Specifically, a previously used enclave may be compromised and its attestation module tampered with. If the enclave is reused for another workload, it could bypass runtime attestation, allowing unverified code to execute.

In this paper, we present a novel approach to securing nested attestation for confidential serverless computing on Intel Software Guard Extensions (Intel SGX). Unlike prior works, our approach does not rely on intra-enclave isolation techniques to sandbox the client’s workload. Instead, we leverage the Key Separation and Sharing (KSS) feature of Intel SGX to track and prevent enclave reuse based on its immutable IDs. We develop a prototype system for confidential serverless computing for Python workloads, incorporating our nested attestation scheme, and present an empirical evaluation of its performance.

We believe our scheme unveils a unique yet previously overlooked role of KSS---post-compromise identification, with broader implications beyond serverless computing. As a concrete example, we demonstrate how KSS can be leveraged to implement an enclave-binding TPM on Intel SGX, which is of independent interest.