International Association for Cryptologic Research

IACR News

If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

11 July 2025

Ye Xu, Takashi Nishide
ePrint Report
Homomorphic signature (HS) schemes enable an untrusted server to run some computation over data signed under the same key and derive a short signature for authenticating the computation result. Fiore et al. (Asiacrypt’16) introduced novel lattice-based multi-key homomorphic signatures (MKHS) to support the evaluation of signatures under multiple, different keys, where anyone can verify the resultant signature using the corresponding public verification keys. However, a limitation of their scheme is that even if only one signing key is leaked, a malicious server can forge a signature on a fake computation result involving the inputs of uncorrupted signers. To address this issue, we propose a new scheme built upon the work of Fiore et al., aiming to achieve a stronger security guarantee against signing-key leakage, which we call forward unforgeability. Our MKHS scheme is based on the short integer solution (SIS) problem, as is that of Fiore et al., and remains forward-unforgeable even if an adversary obtains all the signing keys. Furthermore, we propose a variant that introduces a helper entity to amortize the overhead of signature verification.
Intak Hwang, Shinwon Lee, Seonhong Min, Yongsoo Song
ePrint Report
Fully Homomorphic Encryption over the Torus (TFHE) enables efficient evaluation of arbitrary lookup tables (LUT) over encrypted data, allowing complex functions to be computed without decryption. However, in TFHE, only lookup tables with a negacyclic structure can be homomorphically evaluated, which limits the range of functions that can be supported. To overcome this limitation and enable the evaluation of arbitrary functions, the notion of full-domain functional bootstrapping (FDFB) was introduced. However, existing FDFB methods require at least two consecutive bootstrapping operations to evaluate a single function, resulting in significant latency and overhead.

In this work, we present a novel FDFB scheme that supports arbitrary lookup tables by decomposing them into multiple small negacyclic LUTs and one compact full-domain LUT. This structure allows most computations to be handled by fast negacyclic bootstrapping, significantly reducing the computational cost. To address the need for maintaining distinct evaluation keys for each LUT length, we apply Extended Bootstrapping (PKC 2021), which enables all operations to run within a fixed ring dimension. Combined with Extended Bootstrapping, our method nearly halves the bootstrapping cost compared to prior FDFB approaches while maintaining a constant key size, negligible parameter overhead, and strong scalability.

Finally, we implement our algorithm using the TFHE-go library and evaluate its performance across various settings. Our method achieves up to a 3.41× speedup over previous FDFB schemes without increasing key size, and retains up to a 1.91× advantage even when Extended Bootstrapping is applied to both.
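The negacyclic restriction discussed above has a simple algebraic source: TFHE's test polynomial lives in $\mathbb{Z}[X]/(X^N+1)$, where $X^N = -1$, so rotating the lookup table past the end flips signs and forces $f(k+N) = -f(k)$. The toy sketch below (plain Python, no encryption, unrelated to the TFHE-go API) illustrates just this wrap-around effect.

```python
# Toy model (no encryption) of why vanilla TFHE bootstrapping only
# evaluates negacyclic LUTs: the test polynomial lives in
# Z[X]/(X^N + 1), where X^N = -1, so rotation past the end flips signs.

def mul_by_x_pow(poly, k):
    """Multiply a coefficient list by X^k modulo X^N + 1."""
    n = len(poly)
    out = list(poly)
    for _ in range(k % (2 * n)):
        out = [-out[-1]] + out[:-1]  # wrap-around picks up a minus sign
    return out

def toy_lut(table, k):
    """Constant term of X^{-k} * T, i.e. what blind rotation extracts."""
    n = len(table)
    return mul_by_x_pow(table, (2 * n - k) % (2 * n))[0]

N = 4
f = [10, 11, 12, 13]  # f(0), ..., f(N-1) packed into the test polynomial

# Inputs 0..N-1 come out as programmed; inputs N..2N-1 are forced to -f(k-N).
values = [toy_lut(f, k) for k in range(2 * N)]
```

Here `values` is `[10, 11, 12, 13, -10, -11, -12, -13]`: the second half of the input domain is pinned to the negated first half, which is exactly the constraint that full-domain functional bootstrapping removes.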
Dan Boneh, Evan Laufer, Ertem Nusret Tas
ePrint Report
Suppose Alice holds a secret key $\mathsf{sk}$ in a public key encryption scheme. For a given set of ciphertexts, Alice wants to create a short pre-decryption key that lets anyone decrypt this exact set of ciphertexts and nothing else. This problem is called batch decryption. When the secret key $\mathsf{sk}$ is shared among a number of decryption parties the problem is called batch threshold decryption. This question comes up in the context of an encrypted mempool where the goal is to publish a short pre-decryption key that can be used to decrypt all ciphertexts in a block. Prior work constructed batch threshold decryption with some limitations.

In this work, we construct three new batch decryption and batch threshold decryption schemes. We first observe that a key-policy ABE (KP-ABE) scheme directly gives a batch decryption scheme. However, the best KP-ABE schemes, which happen to be lattice-based, lead to relatively long public keys and ciphertexts. We then use very different techniques to construct a new lattice-based batch decryption scheme with shorter parameters. Our construction employs a recent preimage sampler due to Waters, Wee, and Wu. Finally, for completeness, we show that a trilinear map leads to a highly efficient threshold batch decryption scheme.
Weikeng Chen
ePrint Report
This paper aims to be a systematization of knowledge on how to instantiate BitVM with succinct on-chain cost from attribute-based laconic function evaluation (AB-LFE), homomorphic message authentication codes (HMAC), or privacy-free garbled circuits (GC) with suitable properties, specifically with:

- AB-LFE with unbounded depth and with bounded depth, which implies reusable privacy-free garbled circuits

- HMAC with unbounded depth, which implies succinct privacy-free garbled circuits

- privacy-free garbled circuits and their succinct garbling as in BitGC

They vary in complexity, concrete overhead, succinctness, reusability, and security mechanisms against a malicious garbler. This paper is a literature review, as instantiating BitVM with them is straightforward.
Tamer Mour, Alon Rosen, Ron Rothblum
ePrint Report
Probabilistically checkable proofs (PCPs) allow encoding a computation so that it can be quickly verified by only reading a few symbols. Inspired by tree codes (Schulman, STOC'93), we propose tree PCPs; these are PCPs that evolve as the computation progresses so that a proof for time $t$ is obtained by appending a short string to the end of the proof for time $t-1$. At any given time, the tree PCP can be locally queried to verify the entire computation so far.

We construct tree PCPs for non-deterministic space-$s$ computation, where at time step $t$, the proof only grows by an additional $\mathrm{poly}(s,\log t)$ bits, and the number of queries made by the verifier to the overall proof is $\mathrm{poly}(s) \cdot t^\epsilon$, for an arbitrary constant $\epsilon > 0$.

Tree PCPs are well-suited to proving correctness of ongoing computation that unfolds over time. They may be thought of as an information-theoretic analog of the cryptographic notion of incrementally verifiable computation (Valiant, TCC'08). In the random oracle model, tree PCPs can be compiled to realize a variant of incrementally verifiable computation where the prover is allowed a small number of queries to a large evolving state. This yields the first construction of (a natural variant of) IVC in the random oracle model.
Suvadeep Hajra, Debdeep Mukhopadhyay
ePrint Report
Deep learning (DL)-based side-channel analysis (SCA) has emerged as a powerful approach for extracting secret information from cryptographic devices. However, its performance often deteriorates when targeting implementations protected by masking and desynchronization-based countermeasures, or when analyzing long side-channel traces. In earlier work, we proposed EstraNet, a Transformer Network (TN)-based model designed to address these challenges by capturing long-distance dependencies and incorporating shift-invariant attention mechanisms.

In this work, we perform an in-depth analysis of the internal behavior of EstraNet and propose methods to further enhance its effectiveness. First, we introduce DL-ProVe (Deep Learning Leakage Propagation Vector Visualization), a novel technique for visualizing how leakage from secret shares in a masked implementation propagates and recombines into the unmasked secret through the layers of a DL model trained for SCA. We then apply DL-ProVe to EstraNet, providing the first detailed explanation of how leakage is accumulated and combined within such an architecture.

Our analysis reveals a critical limitation in EstraNet’s multi-head GaussiP attention mechanism when applied to long traces. Based on these insights, we identify a new architectural hyperparameter that enables fine-grained control over the initialization of the attention heads. Our experimental results demonstrate that tuning this hyperparameter significantly improves EstraNet’s performance on long traces (with up to 250K features), allowing it to reach a guessing entropy of 1 using only 3 attack traces, while the original EstraNet model fails to do so even with 5K traces.

These findings not only deepen our understanding of EstraNet’s internal workings but also introduce a robust methodology for interpreting, diagnosing, and improving DL models for SCA.
Elena Dubrova, Sönke Jendral, Yanning Ji, Ruize Wang
ePrint Report
This paper introduces the weighted sum correlation analysis method, a profiled higher-order side-channel attack that quantifies the significance of time-domain samples based on a chosen leakage model. We also demonstrate the utility of the Hilbert transform in side-channel analysis, showing how its phase-shifting property can be exploited to construct an effective fused score that combines multiple correlation coefficients into a single metric. We validate the presented method on the challenging case of the AES-CCM accelerator in a commercial Bluetooth chip, leveraging RF signals captured via a software-defined radio as a side channel. Compared to the correlation analysis methods presented at RWC'25 and CHES'25, the weighted sum approach achieves at least a threefold reduction in the number of traces required for key recovery. Remarkably, it also outperforms deep learning-based analysis.
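The weighted-sum idea can be illustrated on synthetic data: compute a per-sample Pearson correlation for each key guess, then combine the samples using weights derived from a leakage model. The sketch below is a generic toy (a 4-bit S-box, simulated Hamming-weight leakage, and hand-picked weights standing in for a profiling phase); it is not the authors' profiled attack and omits their Hilbert-transform fusion.

```python
import numpy as np

# Toy weighted-sum correlation attack on synthetic traces.
# Leakage model: Hamming weight of SBOX[p ^ k] appears at two time
# samples with different strengths; all parameters here are illustrative.

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
HW = [bin(v).count("1") for v in range(16)]

rng = np.random.default_rng(1)
n_traces, n_samples, true_key = 2000, 8, 0x9
pts = rng.integers(0, 16, n_traces)

traces = rng.normal(0.0, 1.0, (n_traces, n_samples))
leak = np.array([HW[SBOX[p ^ true_key]] for p in pts], dtype=float)
traces[:, 2] += 1.0 * leak          # strongly leaking sample
traces[:, 5] += 0.4 * leak          # weakly leaking sample

weights = np.zeros(n_samples)
weights[2], weights[5] = 1.0, 0.4   # assumed known from a profiling phase

def corr(u, v):
    u = u - u.mean()
    v = v - v.mean()
    return float(u @ v / np.sqrt((u @ u) * (v @ v)))

scores = []
for k in range(16):
    hyp = np.array([HW[SBOX[p ^ k]] for p in pts], dtype=float)
    rho = np.array([corr(hyp, traces[:, t]) for t in range(n_samples)])
    scores.append(float(weights @ np.abs(rho)))  # weighted sum of |rho|

best_key = int(np.argmax(scores))
```

The weighted sum concentrates the evidence from all leaking samples into a single score per key guess, so the correct key stands out with fewer traces than any single-sample correlation would need.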
Debasmita Chakraborty, Soumya Sahoo, Phuong Hoa Nguyen, Santanu Sarkar
ePrint Report
Differential meet-in-the-middle (MITM) cryptanalysis, recently introduced by Boura et al., has emerged as a powerful and versatile technique for assessing the security of modern block cipher designs. Since its introduction, this method has been effectively applied to a variety of block ciphers, including different variants of SKINNY, CRAFT, and AES. However, identifying such attacks manually–especially on bit-oriented ciphers with large block sizes–can be a complex and error-prone process, which underscores the growing importance of automated solutions in this domain. In this work, we present, for the first time to the best of our knowledge, a novel and efficient automated tool for constructing optimized differential MITM attacks on bit-oriented block ciphers, with a particular focus on AndRX designs. Our framework begins with an efficient constraint programming (CP) model to search for single-key optimal differential trails in AndRX ciphers. Building on this, we propose a unified bitwise CP model to automatically construct optimized differential MITM attacks within the same design framework. Furthermore, we incorporate two dedicated optimization strategies–namely, the equivalent subkey technique and the selective key guessing technique–both of which are tailored to the structural properties of AndRX ciphers and significantly enhance key recovery efficiency. Additionally, we apply two further optimization techniques–parallel partitioning and reducing data with imposed conditions–to strengthen the differential MITM attack on AndRX ciphers. To demonstrate the practical effectiveness of our tool, we apply it to all versions of SIMON and Simeck, two widely studied representatives of the AndRX family, and report improved cryptanalytic results. Specifically, we present differential MITM attacks on SIMON-32-64, SIMON-48-96, SIMON-64-128, and SIMON-96-144, covering 23, 25, 32, and 38 rounds, respectively.
All of these results represent improvements in the number of attacked rounds compared to the best known differential attacks, classical meet-in-the-middle (MITM), and Demirci-Selçuk MITM (DS-MITM) attacks on the corresponding versions of SIMON. For instance, we present a 37-round differential MITM attack on SIMON-96-144, which extends the best known differential, classical MITM, and DS-MITM attacks by 1, 17, and 18 rounds, respectively. In the case of Simeck, we report a 29-round differential MITM attack on Simeck-48-96, improving the previous best differential attack by one round. These results demonstrate the strength and versatility of our automated tool. Importantly, our automated method for constructing differential MITM attacks operates at the bit level and is generic in nature, making it applicable to a broad class of bit-oriented block ciphers beyond the AndRX family.
Wu Qianmei, Sayandeep Saha, Wei Cheng, Fan Zhang, Shivam Bhasin
ePrint Report
Statistical Ineffective Fault Attack (SIFA) presents a critical threat to cryptographic implementations by circumventing conventional detection-based countermeasures that are effective against traditional fault attacks. SIFA operates via two mechanisms: SIFA-1 exploits the dependency of fault effectiveness on target values, while SIFA-2 leverages conditional propagation of faulted values based on sensitive intermediates. Recent studies suggest that masking, primarily a side-channel protection, also exhibits promising resistance to SIFA-1; prime masking is one example. In this paper, we systematically evaluate the resilience against SIFA of Inner Product Masking (IPM), which has been established in prior works as a powerful side-channel-resistant alternative to Boolean masking. Specifically, with regard to SIFA-1, our theoretical analysis demonstrates that Inner Product (IP) encoding provides stronger SIFA-1 resistance than both Boolean and prime masking under generic multi-bit fault models using various fault types. More interestingly, we theoretically establish an equivalence between side-channel and SIFA-1 security for IP encoding, indicating that an optimal IP encoding exists that simultaneously provides the highest side-channel resistance and maximizes the complexity of effective SIFA-1 attacks. For SIFA-2, our analysis reveals that IPM’s protection remains fundamentally bounded by the computational field size, consistent with previous results in this regard, e.g., for prime-field masking. However, some vulnerabilities to persistent faults are anticipated for the most recently proposed IPM multiplication gadget. Given its compatibility with existing ciphers and demonstrated superior resistance against SIFA-1, IPM emerges as a more promising fault-resistant encoding technique compared to prime masking.
Antoine Gansel, Juliane Krämer, Tim Schumacher, Patrick Struck, Maximilian Tippmann, Thomas Walther
ePrint Report
Authentication is a crucial requirement for the security of Quantum Key Distribution (QKD). Yet, the integration of suitable methods in QKD systems tends to receive little attention from the research community. As a result, Wegman-Carter message authentication established itself as the go-to solution, leading to serious inefficiencies and additional trust assumptions and making it hard to recover from denial-of-service attacks. Another method is to use the lattice-based signature scheme Dilithium, as proposed by Wang et al. (npj Quantum Information, 2021). This method avoids the drawbacks of Wegman-Carter but, unfortunately, introduces new disadvantages. In this work, we implement and test several authentication methods on an actual QKD system. We compare and analyze three authentication variants: Wegman-Carter, Dilithium, and the established message authentication code Chaskey, which we propose as a new method for authentication in QKD that consumes fewer quantum keys. We focus on key consumption, runtimes, and practicality in a field test of the QKD system. Lastly, we take a broader look at authentication for QKD in the context of denial-of-service attacks and propose a solution that combines several authentication methods to achieve their individual advantages while simultaneously avoiding several drawbacks.
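For readers unfamiliar with the trade-off, Wegman-Carter authentication combines an information-theoretically secure universal hash with key material that must be refreshed for every tag, which is the key-consumption cost referred to above. Below is a minimal textbook-style sketch (polynomial hashing over a prime field masked by a one-time key); it is illustrative only and not the construction used in the authors' QKD system.

```python
# Minimal Wegman–Carter-style MAC sketch (textbook construction, not code
# from any real QKD stack): an almost-universal polynomial hash over a
# prime field, masked with a fresh one-time key per message.

P = 2**61 - 1  # prime field modulus

def poly_hash(blocks, r):
    """Horner evaluation of the message polynomial at r, mod P."""
    acc = 0
    for b in blocks:
        acc = (acc * r + b) % P
    return acc

def wc_tag(blocks, r, otp):
    """r may be reused across messages; otp must be fresh per message."""
    return (poly_hash(blocks, r) + otp) % P

msg = [314, 159, 265]
r, otp = 123456789, 987654321       # illustrative key material
tag = wc_tag(msg, r, otp)

# A tampered message produces a different tag except with tiny probability:
forged_tag = wc_tag([314, 159, 266], r, otp)
```

Because every tag consumes a fresh `otp`, the scheme burns (quantum-derived) key material at a fixed rate per authenticated message, which is exactly what motivates exploring alternatives such as Chaskey or Dilithium.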
Karim Baghery, Navid Ghaedi Bardeh, Shahram Khazaei, Mahdi Rahimi
ePrint Report
In ASIACRYPT 2011, Backes, Kate, and Patra (BKP) introduced two computationally secure round-optimal (2-round) Verifiable Secret Sharing (VSS) schemes in the honest-majority setting, one based on non-homomorphic commitments and the other on homomorphic ones. Their scheme based on non-homomorphic commitments has $O(n^2)$ computational complexity and necessitates $O(n^2\lambda)$ public and private communication for the dealer, where $n$ denotes the number of parties and $\lambda$ is the security parameter. They showed that these costs are $n$ times higher compared to their round-optimal VSS scheme employing homomorphic commitments and posed a research question regarding the inevitability of this gap. In this paper, we fill this gap by introducing a new variant of the recently proposed unified framework $\mathbf{\Pi}$ by Baghery at PKC 2025, designed to enable the construction of more efficient round-optimal VSS schemes in the honest-majority setting. Compared to the original framework, our variant reduces the required rounds by one while maintaining compatibility with any commitments and achieving comparable efficiency. Leveraging this new general construction, we develop several round-optimal VSS schemes that surpass state-of-the-art alternatives. Particularly noteworthy is the new round-optimal VSS scheme based on non-homomorphic commitments, which improves the BKP scheme by a factor of $n$ across all efficiency metrics. Compared to their schemes based on homomorphic commitments, our schemes demonstrate significantly expedited verification and reconstruction. Implementation results further validate the practicality of these new VSS schemes. For example, for $(n, t)=(256, 127)$, where $t$ represents the threshold, compared to the hash-based BKP VSS scheme, our proposed scheme showcases speed-ups exceeding $120,000\times$ (and $50\times$) for the dealer (and parties, respectively), while also requiring $365\times$ (and $512\times$) less communication.
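As background for the schemes compared above: VSS in this setting builds on Shamir secret sharing, adding commitments so parties can verify the consistency of their shares. The sketch below shows only the unverified Shamir core over a toy prime field; the commitment layer (hash-based or homomorphic, as in the schemes discussed) is deliberately omitted.

```python
import random

# Shamir secret sharing over a toy prime field (the core underlying VSS;
# the verification/commitment layer is omitted). Parameters illustrative.

P = 2**31 - 1  # toy prime; real schemes use cryptographically sized fields

def share(secret, n, t, rng):
    """Split secret into n shares; any t+1 of them reconstruct it."""
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            y = (y * x + c) % P
        return y
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

rng = random.Random(2025)
shares = share(42, n=7, t=3, rng=rng)
recovered = reconstruct(shares[:4])   # any t+1 = 4 shares suffice
```

A VSS scheme wraps this core so that a malicious dealer cannot hand out inconsistent shares; the efficiency comparisons above are precisely about how cheaply that verification layer can be realized.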
Minjoo Sim, Gyeongju Song, Minwoo Lee, Seyoung Yoon, Anubhab Baksi, Hwajeong Seo
ePrint Report
This paper reports on the implementation and performance evaluation of Korean Post-Quantum Cryptography standards within existing TLS/X.509 infrastructure. We integrated HAETAE, AIMer, SMAUG-T, and NTRU+—the four KpqC standard algorithms—into the OpenSSL ecosystem via a modified liboqs framework. Then, we measured static overhead (certificate size) and dynamic overhead (TLS handshake latency) under both computation-bound (localhost) and network-bound (LAN) settings. Our results indicate that, focusing on the Korean standards, KpqC certificates are 11.5–48 times larger than the classical ECC baseline. In performance, the tested KpqC KEMs increase handshake latency by over 750% in computation-bound tests (localhost) and by up to 35% in network-bound tests (LAN). To our knowledge, this study constitutes the first practical evaluation of KpqC standards in real-world TLS environments, providing concrete performance data to guide post-quantum migration strategies.
Manideep Thotakura
ePrint Report
Pairing functions uniquely encode pairs of natural numbers into single values, a fundamental operation in mathematics and computer science. This paper presents an alternative approach inspired by geometric visualization—viewing pairs as arrangements of square blocks with missing tiles. Our method achieves packing efficiency comparable to the classical Cantor pairing function and matches the time complexity of both Cantor and Szudzik functions. Encoding is performed in constant time using simple arithmetic operations, while decoding requires square root computations, resulting in efficient inversion. By combining algebraic rigor with intuitive geometric insight, this approach offers a practical and accessible alternative for applications involving data encoding, spatial structures, and combinatorial problems.
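For context, the two classical baselines mentioned in the abstract can each be stated in a few lines. The sketch below implements the standard Cantor and Szudzik pairing functions with their inverses; it does not reproduce the paper's new geometric construction.

```python
from math import isqrt

# Classical pairing bijections N x N -> N, with inverses.
# These are the baselines the abstract compares against.

def cantor_pair(x, y):
    """Encode along diagonals: z = (x+y)(x+y+1)/2 + y."""
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(z):
    w = (isqrt(8 * z + 1) - 1) // 2   # diagonal index x + y
    y = z - w * (w + 1) // 2
    return w - y, y

def szudzik_pair(x, y):
    """Encode along square shells: grows like max(x, y)^2."""
    return x * x + x + y if x >= y else y * y + x

def szudzik_unpair(z):
    s = isqrt(z)
    r = z - s * s
    return (r, s) if r < s else (s, r - s)
```

Both encode in constant time with simple arithmetic and decode with a single integer square root, the same cost profile the abstract claims for the geometric approach.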
Steven Galbraith, Valerie Gilchrist, Damien Robert
ePrint Report
Given two elliptic curves over $\mathbb{F}_q$, computing an isogeny mapping one to the other is conjectured to be classically and quantumly hard. This problem plays an important role in the security of elliptic curve cryptography. In 2024, Galbraith applied recently developed techniques for isogenies to improve the state-of-the-art for this problem.

In this work, we focus on computing ascending isogenies. We give a simplified framework for computing self-pairings, and show how they can be used to improve upon the approach from Galbraith to recover these ascending isogenies and eliminate a heuristic assumption from his work. We show that this new approach gives an improvement to the overall isogeny recovery when the curves have a small crater (super-polynomial in size). We also study how these self-pairings affect the security of the (PEARL)SCALLOP group action, gaining an improvement over the state-of-the-art for some very particular parameter choices. The current SCALLOP parameters remain unaffected.
Orr Dunkelman, Eran Lambooij, Gaëtan Leurent
ePrint Report
In this note we study the proposed cipher Synergy and describe a full-round differential with probability $2^{-21.29}$. The claims have been experimentally verified.
Evangelos Karatsiolis, Franziskus Kiefer, Juliane Krämer, Mirjam Loiero, Christian Tobias, Maximiliane Weishäupl
ePrint Report
With the advancing standardization of post-quantum cryptographic schemes, the need for preparing the IT security infrastructure for integrating post-quantum schemes increases. The focus of this work is a specific part of the IT security infrastructure, namely public key infrastructures. For public certification authorities, it is crucial to guarantee the quality of public keys certified by them. To this end, linting is deployed, which describes the process of analyzing the content of a certificate with respect to predefined rules, the so-called lints. In this work, we initiate the study of lints for post-quantum cryptography. As a starting point, we choose lattice-based schemes and analyze the public keys of the NIST standards ML-KEM and ML-DSA. We base our analyses on the NIST FIPS standards and IETF documents. We formally describe the identified lints and classify them with respect to the property of the public key that the lint checks. We implement the lints for a common X.509 certificate linter and provide an open-source tool.
Sven Argo, Marloes Venema, Adrian Ackermann, Tim Güneysu
ePrint Report
Attribute-based encryption (ABE) is a versatile primitive that has been considered in many applications to enforce access control cryptographically. To actually benefit from ABE in practice, we require implementations of schemes that satisfy all the properties that are needed. Many theoretical advancements have been made to attain such properties, ultimately resulting in powerful abstractions such as pair encodings. To build an ABE scheme, we use a compiler (in the theoretical sense), which transforms a provably secure pair encoding scheme into a provably secure ABE scheme. Although several such compilers have been introduced, they all abstract away many details that are relevant for engineers, which can hinder the implementation of schemes in practice. To address this problem, we propose pracy, a tool that automatically implements an ABE scheme from an input pair encoding scheme. To achieve this, we first note that we need to overcome a general issue in any automation efforts – including automated optimization and security analysis – in the field of pairing-based cryptography. In particular, there exist no parsers that properly model the interaction between the predicate and the pair encodings. Therefore, we devise a new formal model and type system, which capture this interaction in a way that is compatible with automated implementation efforts. To illustrate the feasibility of our model and system, we construct pracy as a (practical) compiler in Python that can implement ABE schemes in multiple target programming languages such as Python and C/C++. With pracy, we not only make the implementation of ABE schemes from pair encodings more accessible to practitioners, but also realize the potential that pair encodings have to simplify implementation efforts.
Thomas Pornin
ePrint Report
In this short note, we describe some further improvements to the key pair generation process for the Falcon and Hawk lattice-based signature schemes, and for the BAT key encapsulation scheme, in a fully constant-time way and without any use of floating-point operations. Our new code is slightly faster than our previous implementation, and, more importantly for small embedded systems, uses less RAM space.
Pantelimon Stanica, Ranit Dutta, Bimal Mandal
ePrint Report
This paper introduces truncated inner $c$-differential cryptanalysis, a novel technique that for the first time enables the practical application of $c$-differential uniformity to block ciphers. While Ellingsen et al. (IEEE Trans. Inf. Theory, 2020) established the notion of $c$-differential uniformity using $(F(x\oplus a), cF(x))$, a key challenge remained: multiplication by $c$ disrupts the structural properties essential for block cipher analysis, particularly key addition.

We resolve this challenge by developing an inner $c$-differential approach where multiplication by $c$ affects the input: $(F(cx\oplus a), F(x))$. We prove that the inner $c$-differential uniformity of a function $F$ equals the outer $c$-differential uniformity of $F^{-1}$, establishing a fundamental duality. This modification preserves the cipher structure while enabling practical cryptanalytic applications.

Our main contribution is a comprehensive multi-faceted statistical-computational framework, implementing truncated $c$-differential analysis against the full 9-round Kuznyechik cipher (the inner $c$-differentials are immune to the key whitening at the backend). Through extensive computational analysis involving millions of differential pairs, we demonstrate statistically significant non-randomness across all tested round counts. For the full 9-round cipher, we identify multiple configurations triggering critical security alerts, with bias ratios reaching $1.7\times$ and corrected p-values as low as $1.85 \times 10^{-3}$, suggesting insufficient security margin against this new attack vector. This represents the first practical distinguisher against the full 9-round Kuznyechik.
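The inner/outer duality stated above is easy to sanity-check by brute force on a small example. The sketch below does so over $GF(2^4)$, with the 4-bit PRESENT S-box standing in as an arbitrary permutation; both the field size and the S-box are illustrative choices, not taken from the paper.

```python
# Brute-force check of the inner/outer c-differential duality over GF(2^4),
# using the 4-bit PRESENT S-box as a convenient permutation (illustrative).

def gf16_mul(a, b):
    """Multiplication in GF(2^4) modulo x^4 + x + 1."""
    r = 0
    for _ in range(4):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= 0x13  # reduce by x^4 + x + 1
    return r

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
SBOX_INV = [SBOX.index(v) for v in range(16)]

def outer_cdu(F, c):
    """max over (a, b) of #{x : F(x ^ a) ^ (c * F(x)) = b}."""
    return max(sum(1 for x in range(16) if F[x ^ a] ^ gf16_mul(c, F[x]) == b)
               for a in range(16) for b in range(16))

def inner_cdu(F, c):
    """max over (a, b) of #{x : F((c * x) ^ a) ^ F(x) = b}."""
    return max(sum(1 for x in range(16) if F[gf16_mul(c, x) ^ a] ^ F[x] == b)
               for a in range(16) for b in range(16))

# Duality: inner c-differential uniformity of F equals the outer
# c-differential uniformity of F^{-1}, checked here for every c != 0, 1.
checks = [inner_cdu(SBOX, c) == outer_cdu(SBOX_INV, c) for c in range(2, 16)]
```

The substitution $v = cx \oplus a$ maps each inner counting set for $F$ bijectively onto an outer counting set for $F^{-1}$ with the roles of input and output difference swapped, which is why every entry of `checks` comes out true.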
Peter Gutmann, Stephan Neuhaus
ePrint Report
This paper presents implementations that match and, where possible, exceed current quantum factorisation records using a VIC-20 8-bit home computer from 1981, an abacus, and a dog. We hope that this work will inspire future efforts to match any further quantum factorisation records, should they arise.
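In the same spirit, a few lines of Python comfortably reproduce the arithmetic behind well-known quantum factorisation demonstrations such as $15 = 3 \times 5$ and $21 = 3 \times 7$. The sketch below is written in the spirit of the note, not taken from it.

```python
def factor(n):
    """Classical trial division: no qubits, abacus, or dog required."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime (or 1)
```

For instance, `factor(15)` returns `(3, 5)`, matching the canonical quantum demonstration at a somewhat lower hardware cost.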