International Association for Cryptologic Research


IACR News

If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.

Here you can see all recent updates to the IACR webpage. These updates are also available:

via email
via RSS feed

23 May 2025

Tim Beyne, Michiel Verbauwhede
ePrint Report
Integral and ultrametric integral cryptanalysis are generalized to finite rings of prime characteristic $p$ that are isomorphic to a product of fields. This extends, for instance, the complete state of the art in integral cryptanalysis from $\mathbf{F}_2^n$ to $\mathbf{F}_q^n$, for all prime powers $q$. A compact representation of transition matrices, based on convex polyhedra, is introduced to ensure that the proposed methods are computationally efficient even for large $p$. Automated tools are developed and applied to a few generic and several concrete primitives. The analysis shows that previous degree estimates for Feistel-GMiMC, HadesMiMC, AES-Prime, small-pSquare and mid-pSquare are overly optimistic. Furthermore, except for AES-Prime, these primitives do not meet their design criteria unless their number of rounds is substantially increased.
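As a toy illustration of the zero-sum property that integral cryptanalysis exploits over prime fields (our own sketch, not the paper's transition-matrix machinery): any univariate polynomial over $\mathbf{F}_p$ of degree below $p-1$ sums to zero over the whole field, so a primitive whose algebraic degree stays below $p-1$ admits an integral (zero-sum) distinguisher.

```python
# Toy degree-based integral property over F_p: for any polynomial f
# with deg(f) < p - 1, the sum of f(x) over all x in F_p vanishes,
# since sum_x x^d = 0 mod p whenever 0 <= d < p - 1.
p = 101

def f(x):
    # arbitrary "round function" of degree 7 < p - 1
    return (3 * x**7 + 5 * x**3 + 42) % p

assert sum(f(x) for x in range(p)) % p == 0

# In contrast, x^(p-1) sums to -1 mod p (Fermat's little theorem),
# so a function reaching maximal degree escapes this distinguisher.
assert sum(pow(x, p - 1, p) for x in range(p)) % p == p - 1
```

This is why degree estimates matter: if the rounds of a cipher keep the degree below the maximum, sums over structured input sets leak a detectable bias.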
Gabriel Dettling, Martin Hirt, Chen-Da Liu-Zhang
ePrint Report
A multi-valued broadcast protocol allows a sender $P_s$ to broadcast an $\ell$-bit message $m$ to $n$ recipients. For all relevant models, multi-valued broadcast protocols with asymptotically optimal communication complexity $\mathcal{O}(\ell n)+\mathrm{Poly}(n)$ have been published.

Despite their very low communication complexity, these protocols perform poorly in modern networks. Even if the network allows all $n$ parties to send messages at the same time, the execution time of the protocols is proportional to $\ell n$ (instead of $\ell$). Even if the network allows all bilateral channels to be used at the same time, the execution time is still proportional to $\ell$ (instead of $\ell/n$). We ask the natural question of whether multi-valued broadcast protocols exist which take time proportional to $\ell$ if parties can simultaneously send messages, and even time proportional to $\ell/n$ if the bilateral channels can be used simultaneously. We provide a complete characterization of multi-valued broadcast with a two-fold answer:

On the negative side, we prove that for $t
On the positive side, we prove that for $t < (1-\epsilon)n$ (for any fixed $\epsilon$), multi-valued broadcast in time proportional to $\ell$ (when parties can send messages simultaneously), respectively proportional to $\ell/n$ (if bilateral channels can be used simultaneously) is possible. We provide such protocols both with cryptographic security as well as with statistical security.
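A back-of-the-envelope accounting (our own toy model, not the paper's protocols; it ignores faults and agreement entirely) of the three time regimes $\ell n$, $\ell$, and $\ell/n$ discussed above:

```python
# Toy accounting of dissemination time for an l-bit message among n
# parties. Time unit: one bit per channel per step; Byzantine faults
# and the agreement part of broadcast are ignored.

def time_sequential(l, n):
    # sender pushes the full l-bit message to each recipient in turn
    return l * n

def time_parallel_send(l, n):
    # sender transmits to all n recipients simultaneously
    return l

def time_striped(l, n):
    # sender stripes the message into n chunks of ceil(l/n) bits, sends
    # chunk i to party i (all channels in parallel), then all parties
    # exchange their chunks pairwise, again in parallel
    chunk = -(-l // n)
    return chunk + chunk  # dispersal + all-to-all exchange

l, n = 1_000_000, 100
assert time_sequential(l, n) == 100_000_000   # proportional to l*n
assert time_parallel_send(l, n) == 1_000_000  # proportional to l
assert time_striped(l, n) == 20_000           # proportional to l/n
```

The hard part, which the paper addresses, is achieving the striped regime while still guaranteeing agreement despite corrupted parties.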
Henri Dohmen, Robin Hundt, Nora Khayata, Thomas Schneider
ePrint Report
Secure Multi-Party Computation (MPC) allows multiple parties to perform privacy-preserving computation on their secret data. MPC protocols based on secret sharing have high throughput which makes them well-suited for batch processing, where multiple instances are evaluated in parallel. So far, practical implementations of secret sharing-based MPC protocols mainly focus on runtime and communication efficiency, so the memory overhead of protocol implementations is often overlooked. Established techniques to reduce the memory overhead for constant-round garbled circuit protocols cannot be directly applied to secret sharing-based protocols because they would increase the round complexity. Additionally, state-of-the-art implementations of secret sharing-based MPC protocols are implemented in C/C++ and may exhibit memory unsafety and memory leaks, which could lead to undefined behavior.

In this paper, we present SEEC: SEEC Executes Enormous Circuits, a framework for secret sharing-based MPC with a novel approach to address memory efficiency and safety without compromising on runtime and communication efficiency. We realize SEEC in Rust, a language known for memory-safety at close-to-native speed. To reduce the memory footprint, we develop an in-memory representation for sub-circuits. Thus, we never inline sub-circuit calls during circuit evaluation, a common issue that blows up memory usage in MPC implementations. We compare SEEC with the state-of-the-art secret sharing-based MPC frameworks ABY (NDSS'15), MP-SPDZ (CCS'20), and MOTION (TOPS'22) w.r.t. runtime, memory, and communication efficiency. Our results show that our reliable and memory-safe implementation has competitive or even better performance.
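The memory-saving idea of keeping sub-circuits as shared bodies that are re-executed per call site, rather than inlined, can be sketched as follows (a minimal Python toy of our own; SEEC's actual Rust representation is more elaborate):

```python
# Minimal sketch (not SEEC's data structures): a sub-circuit body is
# stored once and evaluated per call site, so memory stays proportional
# to the circuit *description*, not to the fully unrolled circuit.

class SubCircuit:
    def __init__(self, n_inputs, gates):
        # gates: list of (op, i, j), each producing one new wire;
        # wires 0..n_inputs-1 are the inputs, the last wire is the output
        self.n_inputs, self.gates = n_inputs, gates

    def eval(self, inputs):
        wires = list(inputs)
        for op, i, j in self.gates:
            wires.append(wires[i] ^ wires[j] if op == "xor" else wires[i] & wires[j])
        return wires[-1]

# One shared body, many call sites: the body is never duplicated.
xor_and = SubCircuit(3, [("xor", 0, 1), ("and", 3, 2)])  # (a ^ b) & c
calls = [(1, 0, 1), (1, 1, 1), (0, 1, 1)]                # 3 call sites
outputs = [xor_and.eval(args) for args in calls]
assert outputs == [1, 0, 1]
```

Inlining would copy the gate list once per call site; storing a reference keeps the footprint constant in the number of calls, which is the effect the paper exploits at scale.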
Josh Benaloh, Michael Naehrig, Olivier Pereira
ePrint Report
Dispute resolution has been a significant challenge in verifiable election protocols since such protocols were first proposed more than forty years ago. This work explores the problem from a new perspective and offers strong dispute resolution for in-person voting by depending on observers.

It proposes a simple definition of dispute resolution as a property of a voting protocol---a definition that is independent of any other security goal. It also presents the DROP protocol, a verifiable, in-person voting protocol that runs in the presence of observers who will always reach a correct conclusion in the case of a dispute without ever being able to compromise privacy or facilitate coercion.
Daniël M. H. van Gent, Ludo N. Pulles
ePrint Report
The search rank-2 module Lattice Isomorphism Problem (smLIP), over a cyclotomic ring of degree a power of two, can be reduced to an instance of the Lattice Isomorphism Problem (LIP) of at most half the rank if an adversary knows a nontrivial automorphism of the underlying integer lattice. Knowledge of such a nontrivial automorphism speeds up the key recovery attack on HAWK at least quadratically, which would halve the number of security bits.

Luo et al. (ASIACRYPT 2024) recently found an automorphism that breaks omSVP, the initial underlying hardness assumption of HAWK. The HAWK team amended the definition of omSVP to include this so-called symplectic automorphism in their submission to the second round of NIST's standardization of additional signatures. This work provides confidence in the soundness of this updated definition, assuming smLIP is hard, since plausibly no further trivial automorphisms remain that allow winning the omSVP game easily.

Although this work does not affect the security of HAWK, it opens up a new attack avenue involving the automorphism group that may be theoretically interesting on its own.
Qiangqiang Liu, Qian Huang, Frank Fan, Haishan Wu, Xueyan Tang
ePrint Report
Meme tokens represent a distinctive asset class within the cryptocurrency ecosystem, characterized by high community engagement, significant market volatility, and heightened vulnerability to market manipulation. This paper introduces an innovative approach to assessing liquidity risk in meme token markets using entity-linked address identification techniques. We propose a multi-dimensional method integrating fund flow analysis, behavioral similarity, and anomalous transaction detection to identify related addresses. We develop a comprehensive set of liquidity risk indicators tailored for meme tokens, covering token distribution, trading activity, and liquidity metrics. Empirical analysis of tokens like BabyBonk, NMT, and BonkFork validates our approach, revealing significant disparities between apparent and actual liquidity in meme token markets. The findings of this study provide significant empirical evidence for market participants and regulatory authorities, laying a theoretical foundation for building a more transparent and robust meme token ecosystem.
Jincheol Ha, Seongha Hwang, Jooyoung Lee, Seungmin Park, Mincheol Son
ePrint Report
Conventional hash functions are often inefficient in zero-knowledge proof settings, leading to the design of several ZK-friendly hash functions. On the other hand, lookup arguments have recently been incorporated into zero-knowledge protocols, allowing for more efficient handling of ``ZK-unfriendly'' operations and hence enabling ZK-friendly hash functions based on lookup tables.

In this paper, we propose a new ZK-friendly hash function, dubbed $\mathsf{Polocolo}$, that employs an S-box constructed using power residues. Our approach reduces the number of gates required for table lookups, in particular when combined with Plonk, allowing one to use such nonlinear layers over multiple rounds. We also propose a new MDS matrix for the linear layer of $\mathsf{Polocolo}$. In this way, $\mathsf{Polocolo}$ requires fewer Plonk gates compared to the state-of-the-art ZK-friendly hash functions. For example, when $t = 8$, where $t$ denotes the size of the underlying permutation in blocks of $\mathbb F_p$, $\mathsf{Polocolo}$ requires $21\%$ fewer Plonk gates than Anemoi, currently the most efficient ZK-friendly hash function. For $t = 3$, $\mathsf{Polocolo}$ requires $24\%$ fewer Plonk gates than Reinforced Concrete, one of the recent lookup-based ZK-friendly hash functions.
Dengguo Feng, Lin Jiao, Yonglin Hao, Qunxiong Zheng, Wenling Wu, Wenfeng Qi, Lei Zhang, Liting Zhang, Siwei Sun, Tian Tian
ePrint Report
In this paper, we introduce SCMAC, a general framework that transforms large-memory stream ciphers into AEAD schemes. It represents an intermediate design paradigm between Encrypt-then-MAC and dedicated single-pass AEAD, partially integrating encryption and authentication mechanisms while mitigating the risk of state leakage associated with immediate absorption and squeezing. Consequently, this approach harmonizes high performance with enhanced security. Additionally, we propose LOL2.0, an enhanced version of the blockwise stream cipher design framework LOL. This new framework improves security through modifications to the FSM update and output functions, and increases flexibility in constructing LFSR components. Based on SCMAC and LOL2.0, we present two AEAD ciphers, LOL2.0-Mini and LOL2.0-Double, which support both stream cipher and AEAD modes. These ciphers are tailored to Beyond 5G/6G environments, offering 256-bit key length and resistance to known cryptanalysis methods, including differential, linear, and integral attacks. They also provide 128-bit security against forgery attacks in the nonce-respecting setting. Due to their compatibility with AES-NI and SIMD instructions, LOL2.0-Mini and LOL2.0-Double achieve software performance of 90 Gbps and 144 Gbps in stream cipher mode, respectively. In AEAD mode, they perform at 59 Gbps and 110 Gbps, significantly faster than their predecessor's Encrypt-then-MAC versions.
Koji Nuida
ePrint Report
Card-based cryptography is a research area for realizing cryptographic functionality, such as secure multiparty computation and zero-knowledge proofs, by using a deck of physical cards and/or other non-electrical tools. Motivated by zero-knowledge proofs for solutions in pencil puzzles, there is a direction of recent studies on card-based protocols to verify connectivity of a set of cells or edges on lattice-shaped boards. In this paper, we generalize the problem to counting connected components of subsets on any graph, and propose a card-based protocol for the problem.
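In the clear, the functionality targeted by the protocol looks as follows (a plain union-find sketch of our own; the card-based protocol of course computes this without revealing the subset):

```python
# Count connected components of a subset S of vertices in a graph,
# where two members of S are connected if joined by a path inside S.

def count_components(edges, subset):
    parent = {v: v for v in subset}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        if u in parent and v in parent:    # edge lies inside the subset
            parent[find(u)] = find(v)
    return len({find(v) for v in subset})

# 5-cycle: the subset {0, 1, 3} splits into components {0, 1} and {3}.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
assert count_components(edges, {0, 1, 3}) == 2
```

On a lattice-shaped board the vertices are cells and the edges are adjacencies, recovering the connectivity-verification setting of the earlier pencil-puzzle protocols as a special case.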
Marco Baldi, Michele Battagliola, Rahmi El Mechri, Paolo Santini, Riccardo Schiavoni, Davide De Zuane
ePrint Report
The ongoing search for secure post-quantum cryptographic primitives has led to the development of numerous novel digital signature schemes. In this paper we introduce $\mathsf{SPECK}$, a new signature protocol based on the similarities between the Permuted Kernel Problem ($\mathsf{PKP}$) and the Permutation Code Equivalence Problem ($\mathsf{PEP}$). At its core, $\mathsf{SPECK}$ is built on the permutation version of LESS, but introduces a key modification to the commitment step. Indeed, instead of committing to an entire permuted code, the prover commits to a random relaxed $\mathsf{PKP}$ instance (that we call $\mathsf{PECK}$, Permutation Equivalence of Codes and Kernel) by randomly choosing a codeword from a random permutation of the initial code. In this sense, the secret key is used as a trapdoor to solve the committed $\mathsf{PECK}$ instance. The new approach allows for faster verification that does not involve Gaussian elimination, while maintaining roughly the same signature size as LESS. We present the $\mathsf{SPECK}$ protocol in detail and provide a deep analysis of the security of the newly introduced assumptions.
Lizhen Zhang, Shang Gao, Bin Xiao
ePrint Report
This work introduces $\mathsf{HyperWolf}$, a Hypercube-Wise Optimized polynomial commitment scheme based on Lattices over ring Fields. The scheme achieves succinctness with $O(\log N)$ proof size and verifier time, along with linear prover cost. It supports both univariate and multilinear polynomials under a unified framework. Inspired by the two-dimensional tensor structure employed in \cite{golovnev2021brakedown} to achieve sublinear efficiency, we generalize the idea to a $k$-dimensional tensor (hypercube) structure and design a $k$-round recursive proof protocol, where each round performs a dimensionality reduction, which results in an overall efficiency of $O(kN^{1/k})$. By setting $k = \log N$, our scheme achieves near-optimal asymptotic performance.

$\mathsf{HyperWolf}$ is fully transparent and relies only on the standard lattice assumption over rings. In terms of concrete efficiency, for polynomials with $N = 2^{25}$ coefficients, our scheme yields proof sizes that are $8\times$ smaller than the lattice-based scheme of \cite{cini2024polynomial} (Crypto24), and over $200\times$ smaller than $\mathsf{SLAP}$ \cite{albrecht2024slap} (Eurocrypt24). Compared to $\mathsf{Greyhound}$ \cite{nguyen2024greyhound} (Crypto24), our proof size is of similar magnitude, while achieving significantly lower verifier time.
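The $k$-round dimensionality-reduction idea can be illustrated on plain polynomial evaluation (our own toy in Python over $\mathbb{Z}_q$, not the lattice protocol): viewing $N = m^k$ coefficients as a $k$-dimensional tensor, each round collapses one dimension, touching only slices of size $N^{1/k}$ at a time.

```python
# Toy k-round folding: evaluate p(X) = sum c_i X^i at X = x over Z_q
# by writing the index i in base m, so X^i = prod_j (x^(m^j))^(digit_j).
# Each round collapses the least-significant remaining dimension.

def fold_evaluate(coeffs, x, m, k, q):
    assert len(coeffs) == m ** k
    layer, point = [c % q for c in coeffs], x % q
    for _ in range(k):
        # combine groups of m adjacent entries with powers of the
        # current evaluation point, shrinking the layer by a factor m
        powers = [pow(point, d, q) for d in range(m)]
        layer = [sum(layer[m * i + d] * powers[d] for d in range(m)) % q
                 for i in range(len(layer) // m)]
        point = pow(point, m, q)
    return layer[0]

m, k, q, x = 4, 3, 97, 5
coeffs = [(7 * i + 3) % q for i in range(m ** k)]
direct = sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
assert fold_evaluate(coeffs, x, m, k, q) == direct
```

With $k = \log N$ (i.e., $m = 2$) the folding becomes the familiar halving recursion, matching the scheme's near-optimal asymptotic regime.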
Kostas Kryptos Chalkias, Deepak Maram, Arnab Roy, Joy Wang, Aayush Yadav
ePrint Report
Transaction details and participant identities on the blockchain are often publicly exposed. In this work, we posit that blockchain's transparency should not come at the cost of privacy. To that end, we introduce zero-knowledge authenticators (zkAt), a new cryptographic primitive for privacy-preserving authentication on public blockchains. zkAt utilizes zero-knowledge proofs to enable users to authenticate transactions, while keeping the underlying authentication policies private.

Prior solutions for such policy-private authentication required the use of threshold signatures, which can only hide the threshold access structure itself. In comparison, zkAt provides privacy for arbitrarily complex authentication policies, and offers a richer interface even within the threshold access structure by, for instance, allowing for the combination of signatures under distinct signature schemes.

In order to construct zkAt, we design a compiler that transforms the popular Groth16 non-interactive zero knowledge (NIZK) proof system into a NIZK with equivocable verification keys, a property that we define in this work. Then, for any zkAt constructed using proof systems with this new property, we show that all public information must be independent of the policy, thereby achieving policy-privacy.

Next, we give an extension of zkAt, called zkAt+ wherein, assuming a trusted authority, policies can be updated obliviously in the sense that a third-party learns no new information when a policy is updated by the policy issuer. We also give a theoretical construction for zkAt+ using recursive NIZKs, and explore the integration of zkAt into modern blockchains. Finally, to evaluate their feasibility, we implement both our schemes for a specific threshold access structure. Our findings show that zkAt achieves comparable performance to traditional threshold signatures, while also attaining privacy for significantly more complex policies with very little overhead.
Zheng Xu, Kaizhan Lin, Chang-An Zhao, Yi Ouyang
ePrint Report
In this paper, we propose SQIsign2D$^2$, a novel digital signature scheme within the SQIsign2D family. Unlike other SQIsign2D variants, SQIsign2D$^2$ employs the prime $p=CD-1$ as the field characteristic, where $D=2^{e_2}$, $C=3^{e_3}$ and $C\approx D\approx \sqrt{p}$. By leveraging accessible $C$-isogenies, SQIsign2D$^2$ significantly reduces the degree requirements for two-dimensional isogeny computations, thereby lowering the overall computational overhead compared to other SQIsign2D variants.
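To see what such field characteristics look like, here is a toy-sized search (our own sketch; the actual SQIsign2D$^2$ parameters are far larger) for primes $p = 2^{e_2} \cdot 3^{e_3} - 1$ with $3^{e_3} \approx 2^{e_2}$, i.e. $C \approx D \approx \sqrt{p}$:

```python
# Toy search for balanced primes p = C*D - 1 with D = 2^e2, C = 3^e3.
import math

def is_prime(n):
    # deterministic Miller-Rabin, valid for n < 3.3e24
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for b in bases:
        if n % b == 0:
            return n == b
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

found = []
for e2 in range(2, 40):
    e3 = round(e2 / math.log2(3))  # makes 3^e3 closest to 2^e2
    p = 2**e2 * 3**e3 - 1
    if is_prime(p):
        found.append((e2, e3, p))

# small balanced examples: 2^3 * 3^2 - 1 = 71 and 2^5 * 3^3 - 1 = 863
assert (3, 2, 71) in found and (5, 3, 863) in found
```

Choosing $C \approx D \approx \sqrt{p}$ makes both accessible $2^{e_2}$- and $3^{e_3}$-isogenies available, which is what lets the scheme shrink the degrees needed in the two-dimensional isogeny computations.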

We also provide a proof-of-concept implementation of SQIsign2D$^2$, and give an efficiency comparison between SQIsign2D$^2$ and other SQIsign2D variants. In particular, the experimental results demonstrate that the key generation and signing phases of SQIsign2D$^2$ are more than twice as fast as those of SQIsign2D-East at the NIST-I security level. Additionally, verification in SQIsign2D$^2$ is marginally more efficient.
Marcel Keller
ePrint Report
Rep3 denotes the implementation of semi-honest three-party computation with an honest majority in MP-SPDZ (CCS'20). It uses replicated secret sharing with one message per multiplication and party, as proposed by Araki et al. (CCS'16). This approach is rivaled by Astra (CCSW'19) and Trio (PETS'25), which use function-dependent preprocessing. Such preprocessing is more involved than, e.g., Beaver triples, which can be used as a commodity.

In this work, we present a full implementation of Astra and Trio in MP-SPDZ, and we evaluate the costs of the different approaches. We show the equivalence of the schemes, which implies that a protocol in any of the schemes can be translated to one in another with the same overall communication cost. We also present an improvement to two important building blocks for privacy-preserving computation, namely secure comparison and probabilistic truncation used in fixed-point arithmetic. To evaluate our implementation, we have benchmarked machine learning training and inference in all three schemes, improving on Keller and Sun (ICML'22) by over 30%. Our implementation also highlights the large storage requirements of function-dependent preprocessing as it runs the two phases separately. To the best of our knowledge, this is the first implementation to do so.
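Probabilistic truncation, one of the building blocks mentioned above, can be illustrated in the clear (a plain-Python sketch of the idea, not the secret-shared protocol): drop $f$ fractional bits and round up with probability proportional to the dropped remainder, which makes the truncation exactly unbiased.

```python
# Toy probabilistic truncation for fixed-point arithmetic, in the clear:
# E[trunc(x)] = x / 2^f exactly, because the rounding coin compensates
# for the dropped remainder.
from fractions import Fraction

def prob_trunc(x, f, coin):
    # coin: uniform integer in [0, 2^f); round up iff coin < remainder
    return (x >> f) + (1 if coin < (x & ((1 << f) - 1)) else 0)

x, f = 1234567, 8
# exhaust all 2^f coins to check unbiasedness exactly
expected = Fraction(sum(prob_trunc(x, f, c) for c in range(1 << f)), 1 << f)
assert expected == Fraction(x, 1 << f)
```

In MPC this matters because unbiased truncation can be done without the expensive bit-decomposition that exact (deterministic) truncation would require.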
Alex Evans, Guillermo Angeris
ePrint Report
In this paper, we show two simple variations of a data availability scheme which enable it to act as a multilinear polynomial commitment scheme over the data in a block. The first variation enables commitments over all of the block's data with zero prover overhead: the data availability construction simply serves both purposes. The second variation allows commitments over subsets of data with nonzero but still concretely small proving costs, since most work is already done during data encoding. This works especially well for blockchains with a high degree of data parallelism, as data-parallel computation is particularly amenable to efficient GKR proofs. Since, in GKR, opening the polynomial commitment contributes significantly to prover costs, our construction enables the prover to reuse work already done by the data availability scheme, reducing—or wholly removing—work associated with the polynomial commitment scheme.
Tamir Hemo, Kevin Jue, Eugene Rabinovich, Gyumin Roh, Ron D. Rothblum
ePrint Report
Modern SNARK constructions, almost ubiquitously, rely on a polynomial commitment scheme (PCS) --- a method by which a prover can commit to a large polynomial $P$ and later provide evaluation proofs of the form "$P(x)=y$" to the verifier.

In the context of zkVMs (i.e., proof-systems for general-purpose RAM computations), the common design is to represent the computation trace as a sequence of tables, one per CPU instruction, and commit to these tables, or even their individual columns, as separate polynomials. Committing separately to these polynomials has a large overhead in verification costs, especially in hash-based systems.

In this work we drastically reduce this cost via a new construction which we call the jagged PCS. This PCS enables the prover to commit to the entire computation trace as a single polynomial, but then allows for the verifier to emulate access to the individual table or column polynomials, so that the arithmetization can proceed in the usual manner. The jagged PCS may be thought of as a sparse PCS for a very particular form of sparsity - namely, a "jagged" matrix in which each column has a different height.
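The jagged layout can be pictured with a small sketch (our own illustration, not the paper's construction): concatenate columns of different heights into one flat vector, and let prefix sums translate (column, row) coordinates into positions in the single committed vector.

```python
# Toy "jagged matrix" layout: columns of different heights flattened
# into one vector; prefix sums recover per-column offsets, so access
# to individual columns can be emulated over a single commitment.
import itertools

heights = [5, 2, 7, 3]            # one column per CPU-instruction table
columns = [[100 * c + r for r in range(h)] for c, h in enumerate(heights)]

flat = list(itertools.chain.from_iterable(columns))   # single vector
starts = list(itertools.accumulate([0] + heights))    # prefix sums

def read(col, row):
    assert row < heights[col]
    return flat[starts[col] + row]

assert read(2, 6) == 206
assert len(flat) == sum(heights)  # total trace area, no padding waste
```

The point of the paper's construction is that this translation can be carried out by the verifier over a committed polynomial, without padding every column to the tallest height.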

Our construction of the jagged PCS is highly performant in practice. In contrast to existing sparse PCS constructions for general sparse polynomials, the jagged PCS does not require the prover to commit to any additional oracles and the prover cost is dominated by 5 finite field multiplications per element in the trace area. Furthermore, we implement the verifier as an arithmetic circuit that depends only on the total trace area - thereby significantly reducing the problem of "combinatorial explosion" often encountered in zkVM recursion.
Jon Stephens, Shankara Pailoor, Isil Dillig
ePrint Report
Circuit languages like Circom and Gnark have become essential tools for programmable zero-knowledge cryptography, allowing developers to build privacy-preserving applications. These domain-specific languages (DSLs) encode both the computation to be verified (as a witness generator) and the corresponding arithmetic circuits, from which the prover and verifier can be automatically generated. However, for these programs to be correct, the witness generator and the arithmetic circuit need to be mutually consistent in a certain technical sense, and inconsistencies can result in security vulnerabilities. This paper formalizes the consistency requirement for circuit DSLs and proposes the first automated technique for verifying it. We evaluate the method on hundreds of real-world circuits, demonstrating its utility for both automated verification and uncovering errors that existing tools are unable to detect.
Tim Beyne, Addie Neyt
ePrint Report
SPEEDY is a family of lightweight block ciphers designed by Leander et al. Several differential attacks have been reported on the SPEEDY variants. However, nearly all of these attacks are based on differential characteristics with probabilities that differ from their reported values. These discrepancies arise from incorrect calculations of the (key-averaged) probability, particularly in consecutive steps within one round without intermediate key addition. In this paper, we revisit all reported differential characteristics and accurately calculate their key-averaged probabilities using quasidifferential trails. We extend this to also estimate the fixed-key probability. Our analysis reveals several characteristics with zero or significantly altered probability, invalidating several proposed attacks. We further implement a search algorithm and find a 5.5-round differential distinguisher that can be used to mount a full-round key recovery attack with a data complexity of $2^{183}$ and a time complexity of $2^{185}$. The memory complexity varies: in the chosen-plaintext setting, it is $2^{156}$, whereas in the chosen-ciphertext setting, it is $2^{36}$.
Bishwajit Chakraborty, Abishanka Saha
ePrint Report
Liskov, Rivest, and Wagner, in their seminal work, formulated tweakable blockciphers and proposed two blockcipher-based design paradigms, LRW1 and LRW2, where the basic design strategy is to xor the masked tweak to the input and output of a blockcipher. The 2-round cascaded LRW2 and 4-round cascaded LRW1 have been proven to be secure up to $O(2^{3n/4})$ queries, but $n$-bit optimal security still remains elusive for these designs. In the same paper, Liskov et al. also posed the open challenge of embedding the tweak directly in the blockcipher, and to address this, Goldenberg et al. introduced the tweakable Luby-Rackoff (LR) constructions. They showed that if the internal primitives are random functions, then for tweaks with $t$ blocks, the construction needs $t + 6$ rounds to be optimally $n$-bit CPA secure and $2t + 8$ rounds to be optimally $n$-bit CCA secure, where respectively $t$ and $2t$ rounds are required to process the tweaks. Since blockciphers can be designed much more efficiently than pseudorandom functions, the internal primitives of LR ciphers are instantiated as blockciphers in many practical applications; this, however, introduces a birthday-bound security loss, which is undesirable in settings such as lightweight cryptography.

This paper addresses the following two key questions affirmatively: (1) Can Goldenberg et al.'s results be extended to LR constructions with random permutations as internal primitives without compromising the optimal $n$-bit security? (2) Can the number of rounds required for handling long tweaks be reduced?

We formally define TLR-compatible functions for processing the tweak, which, when composed with 4 rounds and 5 rounds of the LR construction with random permutations as internal primitives, give us $n$-bit CPA and CCA secure tweakable permutations, respectively. For the security analysis, we prove a general Mirror Theory result for three permutations. We also propose instantiating TLR-compatible functions with one-round LR, where a permutation (resp., two AXU hash functions) is used to mask single-block tweaks (resp., variable-length tweaks), thus proposing the $n$-bit CPA (resp., CCA) secure tweakable permutation candidates $\mathsf{TLRP5}$ and $\mathsf{TLRP5+}$ (resp., $\mathsf{TLRP7}$ and $\mathsf{TLRP7+}$), using $5$ (resp., $7$) LR rounds, a significant reduction from the tweak-length-dependent results of Goldenberg et al. As a corollary, we also show $n$-bit CPA (resp., CCA) security of the $5$-round (resp., $7$-round) permutation-based LR construction, a notable improvement over the existing $2n/3$-bit security proved by Guo et al.
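A plain-Python sketch of the underlying object (a generic Luby-Rackoff/Feistel construction with permutation round functions; the paper's TLRP candidates additionally process tweaks, which this toy omits):

```python
# Toy r-round Luby-Rackoff construction on 2n-bit blocks, with an
# independent random permutation of the n-bit half in each round.
import random

def lr_cipher(perms, left, right):
    for p in perms:                      # round: (L, R) -> (R, L ^ p(R))
        left, right = right, left ^ p[right]
    return left, right

def lr_inverse(perms, left, right):
    for p in reversed(perms):            # undo one round at a time
        left, right = right ^ p[left], left
    return left, right

n, rounds = 8, 5
rng = random.Random(0)
perms = []
for _ in range(rounds):
    table = list(range(1 << n))          # an explicit random permutation
    rng.shuffle(table)
    perms.append(table)

# the construction is itself a permutation: every input round-trips
for block in [(0, 0), (17, 200), (255, 1)]:
    assert lr_inverse(perms, *lr_cipher(perms, *block)) == block
```

The security question the paper settles is how many such rounds are needed, with permutations rather than functions inside, to reach full $n$-bit security.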
Yi-Fu Lai, Jonas Meers, Julian Nowakowski
ePrint Report
Schnorr and (EC)DSA signatures famously become completely insecure once a few bits of the random nonce are revealed to an attacker. In this work, we investigate whether post-quantum signature schemes allow for similar attacks. Specifically, we consider three Fiat-Shamir based signature schemes: Dilithium (lattices), LESS (codes) and CSI-FiSh (isogenies). Analogous to the attacks on Schnorr and (EC)DSA, we assume knowledge of some bits of the commitment randomness used in the underlying ID protocol. Such attacks belong to the broader class of Hidden Number Problems. In the case of LESS, we give an efficient attack that requires knowledge of a single bit of the randomness in roughly \(20000\) signatures to completely recover the secret key. Our attack is based on the observation that knowledge of a single bit in the randomness is enough to distinguish secret key entries from random ones. Since for proposed parameters there are at most \(548\) candidates for each entry, we can enumerate all candidates and use the distinguisher to recover the secret key one entry at a time. For Dilithium we are able to recover the whole secret key using at most 1792 signatures when either 3 or 4 least significant bits of the randomness are known (depending on the parameter set). Here the attack is based on the simple observation that knowledge of the least significant bits of the randomness immediately yields a linear relation in the secret key. For CSI-FiSh, on the other hand, we obtain only an inefficient attack that requires exponentially many signatures. However, we prove that this attack is essentially optimal. Thus, our results show that CSI-FiSh comes equipped with an inherent leakage resilience. The argument easily extends to a wider class of signature schemes, but notably does not apply to the predecessor SeaSign.