IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
01 December 2025
Trey Li
We introduce a novel class of equations defined over Euclidean domains. These abstract equations establish a unified framework for deriving new, concrete computational problems useful for cryptography. We prove that solving a single such equation is NP-hard. For systems of these equations, we further prove NP-hardness, average-case hardness, random self-reducibility, search-to-decision reducibility, and trapdoorizability. Based on the hardness of solving these systems, we construct various cryptographic primitives. Our results are proved in an abstract, domain-agnostic manner and hold for a wide range of Euclidean domains. This generality allows the framework to accommodate rich mathematical structures, providing both theoretical depth and flexibility for diverse cryptographic applications.
Hugo Beeloo-Sauerbier Couvée, Antonia Wachter-Zeh, Violetta Weger
The Rank Decoding Problem (R-DP) has gained a lot of attention due to the competitive KEM proposals ROLLO and RQC, as well as the more recent signature scheme RYDE, the latter being a second-round candidate in the ongoing NIST post-quantum standardization process. While previous attacks on the R-DP are based on combinatorial methods, the seminal work of [Bardet et al., 2020] has shown the potential of attacks that use algebraic modelings, breaking the proposed parameters of ROLLO and RQC. These algebraic attacks model the R-DP as a large system of equations. For most parameter ranges, this system is underdetermined; hence, the algebraic attack first needs to perform several guessing steps to obtain a reduced instance for which the system of equations is overdetermined. These steps, in essence, guess a supersupport of the unknown error support, making this attack a hybrid approach between combinatorial and algebraic solvers.
In this paper, we present a novel type of guessing step based on searching a subsupport of the error support. While supersupport guessing only reduces the length and dimension of the code, subsupport guessing instead reduces the length and the rank weight of the sought-after error vector. This introduces an additional method for instance reduction compatible with supersupport guessing. Both types of guessing step can be performed sequentially in hybrid attacks, and their numbers can be optimized to outperform current hybrid attacks. We provide experimentally supported comparisons of the attack complexities with and without the novel guessing technique. We measure the impact of our new hybrid attack on the RYDE parameters; for the NIST security category 5 parameters, we decrease the hybrid MaxMinors attack complexity from 301 bits to 272 bits, outperforming all other known rank decoders and tightening the margin above the 256-bit threshold.
For the other security levels, we decrease the complexities to be on par with the best performing combinatorial decoders.
Davide Li Calsi, Dominique Schröder, Julian Thomas
A message authentication code (MAC) ensures authenticity and integrity in symmetric-key settings. The Carter–Wegman–Shoup (CWS) paradigm establishes that MACs for arbitrary-length messages can be built in a black-box way using a single call to a pseudorandom function (PRF) on a random input. More than a decade ago, Dodis, Kiltz, Pietrzak, and Wichs left open whether weak pseudorandom functions (wPRFs) would suffice in this construction.
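The CWS structure, in which the tag is the pair $(r, F(k_1, r) + h(k_2, m))$ for a PRF $F$ and a universal hash $h$, can be sketched as follows. This is a toy illustration with hypothetical parameters, using HMAC-SHA256 as a PRF stand-in and a polynomial-evaluation hash over a small prime; it is not the paper's construction or a production MAC.

```python
import hmac, hashlib, secrets

P = 2**127 - 1  # toy prime modulus for the polynomial universal hash

def poly_hash(key, msg):
    """Polynomial-evaluation universal hash over GF(P) (toy parameters)."""
    acc = 0
    for block in (msg[i:i + 8] for i in range(0, len(msg), 8)):
        # the "+ 1" pads each block so trailing zero bytes still affect the hash
        acc = (acc * key + int.from_bytes(block, "big") + 1) % P
    return acc

def prf(key, r):
    """PRF stand-in: HMAC-SHA256 truncated into GF(P)."""
    return int.from_bytes(hmac.new(key, r, hashlib.sha256).digest(), "big") % P

def mac(prf_key, hash_key, msg):
    r = secrets.token_bytes(16)  # single PRF call on a fresh random input
    return r, (prf(prf_key, r) + poly_hash(hash_key, msg)) % P

def verify(prf_key, hash_key, msg, tag):
    r, t = tag
    return (t - prf(prf_key, r)) % P == poly_hash(hash_key, msg)
```

The open question the abstract refers to is whether the PRF call here can be replaced by a weak PRF, which is only guaranteed pseudorandom on random (non-adversarial) inputs.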
This work establishes tight upper and lower bounds that precisely characterize the minimal computational assumptions needed for the security of the CWS paradigm. On the negative side, we prove that weak PRFs are insufficient to instantiate the CWS paradigm. On the positive side, we introduce a new primitive, the 1-adaptive weak pseudorandom function (1-awPRF), which guarantees pseudorandomness for polynomially many non-adaptive queries followed by one adaptive query. We show that 1-awPRFs are sufficient to secure CWS in a black-box manner.
Finally, we construct 1-adaptive weak pseudorandom functions in a black-box way from standard cryptographic assumptions, using a new randomized design paradigm that treats randomization as a fundamental structural element. Instantiating our generic construction under the Decisional Diffie-Hellman and Learning with Errors assumptions yields concrete and efficient realizations. These lead to more efficient MAC schemes and illustrate how weak and abstract building blocks can be transformed into stronger and practically useful cryptographic constructions.
Alessandro Chiesa, Zijing Di, Zihan Hu, Yuxi Zheng
Hash-based succinct non-interactive arguments (SNARGs) are widely used in practice, owing to their ease of deployment, notable efficiency, and post-quantum properties. They are constructed via the BCS transformation, which combines an interactive oracle proof (IOP) and a hash-based vector commitment. This success has motivated the study of hash-based succinct non-interactive reductions (SNRDXs), used for recursively ensuring the correctness of distributed computations, by extending the BCS transformation to work with an interactive oracle reduction (IOR) rather than an IOP.
We prove that hash-based SNRDXs constructed from IORs are secure in the quantum random oracle model (QROM), provided the IOR satisfies a natural post-quantum analogue of state-restoration security; moreover, we show that (classical) round-by-round security implies post-quantum state-restoration security. Our results thus achieve a post-quantum analogue of the classical security of SNRDXs in the ROM, and generalize a prior result about SNARGs in the QROM to cover recent SNRDX constructions.
Moreover, for SNRDXs we propose and achieve an adaptively-secure straightline quantum extraction property in the QROM, while prior work obtains non-adaptive security for SNARGs in the QROM. Along the way, we develop a modular framework for proving the security of the (extended) BCS transformation based on a new quantum extraction property for vector commitments (which we prove is achieved by Merkle commitments), mirroring classical security analyses and departing from prior "monolithic" post-quantum analyses. This demands a new commutator bound that shows the almost-commutativity between quantum extraction and quantum oracle queries, by bounding a natural classical extraction property.
Songqiao Cui, Geng Luo, Junhan Bao, Josep Balasch, Ingrid Verbauwhede
Inner product masking (IPM) is a well-studied masking countermeasure against side-channel attacks. IPM-FD further extends the IPM scheme with fault detection capabilities. However, implementing IPM-FD in software, especially on embedded devices, results in high computational overhead. Therefore, in this work we perform a detailed analysis of all building blocks of the IPM-FD scheme and propose a Masked Processing Unit to accelerate all operations, for example multiplication and the IPM-FD-specific homogenization. We can then offload these computationally intensive operations to dedicated hardware support. With only a $4.05\%$ and $4.01\%$ increase in Look-Up Tables and Flip-Flops (Random Number Generator excluded), respectively, compared with the baseline CV32E40p RISC-V core, we achieve up to a $16.55\times$ speed-up factor with the optimal configuration. We then practically evaluate the side-channel security via uni- and bivariate Test Vector Leakage Assessment, which exhibits no leakage. Finally, we use two different methods to simulate the injected faults and confirm the fault detection capability of up to $k-1$ faults, with $k$ being the replication factor.
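As a rough illustration of plain inner product masking (without the fault-detection extension of IPM-FD), a secret byte $x$ is encoded as shares $s$ satisfying $x = \sum_i L_i \cdot s_i$ over $\mathrm{GF}(2^8)$ for a public vector $L$. The sketch below uses a hypothetical $L$ and is not the paper's hardware design.

```python
import secrets

def gf_mul(a, b):
    """Multiplication in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def ipm_share(x, L):
    """Encode x as shares s with x = sum_i L_i * s_i; assumes L[0] == 1."""
    assert L[0] == 1
    s = [secrets.randbelow(256) for _ in L]  # random shares s_1..s_{k-1}
    s[0] = x
    for Li, si in zip(L[1:], s[1:]):
        s[0] ^= gf_mul(Li, si)  # fix s_0 so the inner product equals x
    return s

def ipm_unmask(s, L):
    acc = 0
    for Li, si in zip(L, s):
        acc ^= gf_mul(Li, si)
    return acc
```

The field multiplications and the unmasking loop are exactly the kind of repeated GF($2^8$) work that the proposed Masked Processing Unit would offload to hardware.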
Xin Li, Songtao Mao, Zhaienhe Zhou
We study the Batch Learning Parity with Noise (LPN) variant, where the oracle returns $k$ samples in a batch, and draws the noise vector from a joint noise distribution $\mathcal{D}$ on $\mathbb{F}_2^k$ (instead of i.i.d.). This model captures a broad range of correlated or structured noise patterns studied in cryptography and learning theory, and was formally defined in recent work by Golowich, Moitra, and Rohatgi (FOCS 2024). Consequently, understanding which distributions preserve the hardness of LPN has become an important question.
On the hardness side, we design several reductions from standard LPN to Batch LPN. Our reductions provide a more comprehensive characterization of hard distributions. Specifically, we show that a Batch LPN instance is as hard as standard LPN with noise rate $\eta:=\frac{1}{2}-\varepsilon$ provided that its noise distribution $\mathcal{D}$ satisfies one of the following:
1. The noise distribution $\mathcal{D}$ satisfies a mild Fourier-analytic condition (specifically, $\sum_{s\neq 0}|\widehat{P}_{\mathcal{D}}(s)|\le 2\varepsilon$). 2. The noise distribution $\mathcal{D}$ is $\Omega(\eta \cdot k 2^{-k})$-dense (i.e., every error pattern occurs with probability at least $\Omega(\eta \cdot k 2^{-k})$) for $\eta < 1/k$. 3. The noise distribution $\mathcal{D}$ is a $\delta$-Santha-Vazirani source. Our reduction improves the allowable bias $\delta$ from $O(2^{-k}\varepsilon)$ (in Golowich et al.) to $O(2^{-k/2}\varepsilon)$.
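The first condition can be checked numerically for small $k$ using the standard Fourier coefficients $\widehat{P}_{\mathcal{D}}(s)=\sum_x P_{\mathcal{D}}(x)(-1)^{\langle s,x\rangle}$ over $\mathbb{F}_2^k$. The sketch below is a direct transcription of that formula, not the paper's reduction.

```python
from itertools import product

def fourier_coeffs(P, k):
    """Fourier coefficients P_hat(s) = sum_x P(x) * (-1)^<s,x> of a
    distribution P (a dict mapping k-bit tuples to probabilities) on F_2^k."""
    coeffs = {}
    for s in product((0, 1), repeat=k):
        coeffs[s] = sum(p * (-1) ** (sum(a & b for a, b in zip(s, x)) % 2)
                        for x, p in P.items())
    return coeffs

def satisfies_condition(P, k, eps):
    """Check the hardness condition sum_{s != 0} |P_hat(s)| <= 2*eps."""
    c = fourier_coeffs(P, k)
    mass = sum(abs(v) for s, v in c.items() if any(s))
    return mass <= 2 * eps
```

For instance, the uniform distribution on $\mathbb{F}_2^k$ has all nonzero coefficients equal to zero and trivially satisfies the condition, while a point mass concentrates all Fourier mass and fails it.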
On the algorithmic side, we design an algorithm for solving Batch LPN whenever the noise distribution assigns sufficiently small probability to at least one point, which gives an algorithm--hardness separation for Batch LPN. Our algorithm can be seen as an extension of Arora and Ge's (ICALP 2011) linearization attack.
Our reduction is based on random affine transformations, developed and analyzed through the lens of Fourier analysis, providing a general framework for studying various LPN variants.
Mohamed Abdelmonem, Lejla Batina, Durba Chatterjee, Håvard Raddum
This paper introduces a novel and practical fault injection attack targeting the randomized version of the MAYO post-quantum signature scheme. While prior attacks on MAYO either relied on deterministic signing modes or specific memory assumptions, our attack succeeds without such constraints. By exploiting the inherent structural properties of MAYO signatures, we combine targeted fault injections with signature correction techniques to extract partial information about the secret oil space. By systematically accumulating such partial information across multiple fault-induced signatures and utilizing linear dependencies among oil vectors, we present an efficient method for achieving full secret key recovery. The attack requires only one fault injection per oil coefficient, repeated a small number of times (8, 17, 10, or 12 for the different MAYO versions, respectively). We demonstrate the targeted fault injection attack on a MAYO implementation on an ARM Cortex-M3 processor via clock glitching, establishing the feasibility of the attack in practice. Our approach is validated through simulations, and a detailed computational cost analysis is provided. Additionally, we demonstrate the ineffectiveness of some previously proposed countermeasures against our attack, thereby highlighting the urgent need for developing more robust protection mechanisms for multivariate post-quantum signature schemes, such as MAYO.
Zhenzhi Lai, Ruiyi Zhang, Zhiyuan Zhang, Julius Hermelink, Michael Schwarz, Van-Thuan Pham, Udaya Parampalli
Hamming Quasi-Cyclic (HQC) has recently been selected by NIST, after the Round 4 submission, as a post-quantum key encapsulation mechanism (KEM) standard and will soon be widely deployed. Therefore, it is important to ensure its implementation is constant-time, i.e., resistant to side-channel attacks. Existing timing attacks on HQC exploit non-constant-time source code and a decryption procedure that is vulnerable to chosen-ciphertext attacks. These active attacks require constructing thousands of invalid ciphertexts, and thus, they can be easily detected. The latest HQC implementation has mitigated all these attacks by making its source code constant-time.
In this work, we provide a new perspective on reviewing the implementation of HQC and exploiting timing leakages. For the first time, we show that an attacker can recover the secret key of HQC without targeting the CCA-insecure decryption and internal states of message decryption. Specifically, an attacker can exploit the timing leakages that occur when processing sparse vectors, which are ciphertext-independent, to recover the secret key by measuring the leakages only once. We find two such timing leakages in the latest stable HQC implementation, supposedly constant-time, and practically extract the leakages even when the process is protected by AMD Secure Encryption Virtualization. We also show that a power side-channel can extract similar leakages on embedded devices.
Our findings apply to all code-based KEMs submitted to NIST's Round 4 of the PQC standardization process. We show that an attacker can also perform similar passive attacks to recover the session keys of BIKE and Classic McEliece. To help write constant-time code, we propose and test a workflow that uses CT-grind during development. We find that CT-grind can effectively find all timing leakages in various implementations of HQC. Therefore, we suggest that cryptographic developers consistently use constant-time analysis tools when developing code.
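The coding pattern at issue can be shown in miniature: expanding a secret sparse support into a dense vector by writing only at the secret positions leaks through the memory-access pattern, whereas a constant-trip-count variant touches every cell for every position and selects with an arithmetic mask. This sketch only illustrates the pattern; it is not taken from the HQC codebase.

```python
def sparse_to_dense_leaky(support, n):
    """Secret-dependent access pattern: writes happen only at secret positions."""
    v = [0] * n
    for pos in support:
        v[pos] = 1
    return v

def sparse_to_dense_ct(support, n):
    """Constant-trip-count variant: every cell is visited for every secret
    position, and the write is selected by an arithmetic mask rather than a
    branch or a secret array index (assumes n < 2**32)."""
    v = [0] * n
    for pos in support:
        for i in range(n):
            diff = i ^ pos
            eq = ((diff - 1) >> 32) & 1  # 1 iff diff == 0, without branching
            v[i] |= eq
    return v
```

Both functions compute the same vector; the difference is solely in whether the sequence of memory operations depends on the secret support.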
Jens Alich, Thomas Eisenbarth, Hossein Hadipour, Gregor Leander, Felix Mächtle, Yevhen Perehuda, Shahram Rasoolzadeh, Jonas Sander, Cihangir Tezcan
In this work, we address the critical yet understudied question of the security of the most widely deployed pseudorandom number generators (PRNGs) in AI applications. We show that these generators are vulnerable to practical and low-cost attacks. With this in mind, we conduct an extensive survey of randomness usage in current applications to understand the efficiency requirements imposed in practice. Finally, we present a cryptographically secure and well-understood alternative, which has a negligible effect on the overall AI/ML workloads. More generally, we recommend the use of cryptographically strong PRNGs in all contexts where randomness is required, as past experience has repeatedly shown that security requirements may arise unexpectedly even in applications that appear uncritical at first.
29 November 2025
Gal Arnon, Jesko Dujmovic, Eylon Yogev
SNARGs are cryptographic primitives that allow a prover to demonstrate membership in an NP language while sending a proof that is much smaller than the witness. In this work, we focus on the succinctness of publicly-verifiable group-based SNARGs, analyzed in a model that combines both a generic bilinear group $(\mathbb{G}_{1} \times \mathbb{G}_{2} \to \mathbb{G}_{T})$ and a random oracle (the GGM + ROM).
We construct the first publicly-verifiable SNARG in the GGM + ROM where the proof consists of exactly $2$ elements of $\mathbb{G}_{1}$ and no additional bits, achieving the smallest proof size among all known publicly verifiable group-based SNARGs. Our security analysis is tight, ensuring that the construction incurs no hidden security losses. Concretely, when instantiated with the BLS12-381 curve for 128-bit security, our scheme yields a proof size of $768$ bits, nearly a $2\times$ improvement over the best known pairing-based SNARG. While our scheme is not yet concretely efficient, it demonstrates the feasibility of ultra-short proofs and opens the door to future practical instantiations.
Complementing this construction, we establish a new lower bound for group-based SNARGs. We prove that under mild and natural restrictions on the verifier (which are satisfied by all known schemes) no SNARG exists in the Maurer GGM + ROM with a proof that consists of a single group element (assuming one-way functions). This substantially strengthens the lower bound of Groth, which was more restrictive and did not extend to settings with a random oracle.
Kang Li, Shouran Ma, Haochen Dou, Qian Guo
Falcon, a lattice-based signature scheme selected for NIST post-quantum standardization, is notable for its compact signature size alongside a complex signing procedure involving extensive floating-point arithmetic.
Prior side-channel attacks on Falcon, while demonstrating vulnerabilities, have consistently required a large number of power traces for successful key recovery; this critical efficiency gap means previously reported attacks are often impractical in real-world scenarios where trace collection is limited.
This paper presents a new single-trace attack on Falcon. We identify and exploit novel leakage points within the floating-point conversion and Fast Fourier Transform (FFT) routines during the secret key expansion, which allow us to progressively partition the possible values of the secret key coefficients. By identifying a sufficient number of these coefficients, we establish a system of linear equations that can be solved to recover the entire secret key. Our attack is particularly critical for the \texttt{sign\_dyn} design---the memory-efficient implementation adopted in important cryptographic libraries and reference implementations---as it executes key expansion during every signature operation. We emphasize that this is the \textbf{first single-trace attack on the Falcon signing procedure itself}, providing a more compelling threat scenario than previous work.
We validate our attack on an ARM Cortex-M4 microcontroller, demonstrating a 100\% key recovery success rate with just a single power trace for both Falcon-512 and Falcon-1024 in both signing designs—\texttt{sign\_tree} and \texttt{sign\_dyn}, compiled at the \texttt{-O0} level. While the \texttt{-O3} optimization level mitigates some leakages, our multi-trace attack remains effective in the practically used \texttt{sign\_dyn} design, recovering 80 out of 100 Falcon-512 keys with only 5 traces. Our findings expose a critical implementation vulnerability in Falcon, highlighting the urgent necessity of integrating countermeasures to protect Falcon in real-world applications.
Saisi Xiong, Yijian Zhang, Jie Chen
In this work, we present the first construction of a batched IBE scheme from lattices. The security is proven in the standard model under the succinct LWE assumption. Prior batched IBE schemes are only known from pairing-based assumptions. Moreover, our lattice-based batched IBE scheme is highly efficient, as the master public key, decryption key, and ciphertext are independent of the batch size $B$. In contrast, existing pairing-based schemes cannot provide post-quantum security guarantees, and their master public keys have to scale with $B$.
Technically, we mainly rely on an insightful observation: batched IBE can be obtained solely from Inner-Product Encryption (IPE). To satisfy the efficiency requirements of batched IBE, we require an IPE scheme with two important features: decomposable key generation and compact components. Finally, we show how to construct such an IPE scheme from the well-known BGG+14 IPE scheme via careful modifications.
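A standard way to realize IBE-style identity matching inside IPE is the folklore encoding of equality as a zero inner product: decryption with a key for identity $\mathrm{id}'$ succeeds on a ciphertext for identity $\mathrm{id}$ exactly when the inner product vanishes. The toy encoding below illustrates only this trick, not the paper's decomposable, compact construction.

```python
def enc_vector(id_ct, q):
    """Ciphertext attribute vector for identity id_ct, over Z_q."""
    return (1, id_ct % q)

def key_vector(id_key, q):
    """Key predicate vector for identity id_key, over Z_q."""
    return (id_key % q, (-1) % q)

def inner(x, y, q):
    """Inner product over Z_q; zero exactly when the identities match,
    i.e. <(1, id), (id', -1)> = id' - id (mod q)."""
    return sum(a * b for a, b in zip(x, y)) % q
```

In a zero-predicate IPE scheme, this means a key for $\mathrm{id}'$ decrypts exactly the ciphertexts addressed to $\mathrm{id} = \mathrm{id}'$, recovering IBE functionality; the batched setting additionally requires keys that decompose across many identities at once.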
Frank Hartmann
FrodoKEM provides conservative post-quantum security through unstructured lattices, yet its deployment on embedded systems is historically constrained by high memory requirements. While state-of-the-art implementations mitigate this by generating the public matrix on-the-fly, they remain bottlenecked by the sequential generation of secret matrices, which enforces a rigid trade-off between stack usage and recomputation overhead. To address this, we propose a blockwise secret generation mechanism that enables efficient random access to arbitrary matrix tiles. We formally prove that this modification is indistinguishable from the standard specification in the Random Oracle Model, thereby preserving IND-CCA2 security. By evaluating this approach on a 32-bit RISC-V core with custom cryptographic extensions, we demonstrate tiled encapsulation and key generation strategies that maintain constant transient stack usage ($\approx$2--3 kB) across all parameter sets. This design effectively decouples memory footprint from algorithmic complexity, enabling flexible buffering strategies that optimize performance according to the available heap memory of the target platform.
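The blockwise idea can be sketched as deriving each secret-matrix tile from a domain-separated XOF call on the seed and the tile indices, so that any tile is recomputable on demand in any order with constant transient memory. This is an illustrative sketch with a hypothetical helper name, not FrodoKEM's actual sampling procedure.

```python
import hashlib

def secret_tile(seed, row_block, col_block, tile_bytes):
    """Derive one tile of the secret matrix from a domain-separated SHAKE call.
    Fixed-width index encodings keep the domain separation unambiguous."""
    xof = hashlib.shake_128(seed
                            + row_block.to_bytes(2, "little")
                            + col_block.to_bytes(2, "little"))
    return xof.digest(tile_bytes)
```

Because each tile depends only on the seed and its own indices, tiled encapsulation can generate, use, and discard tiles one at a time, which is what decouples the stack footprint from the matrix dimensions.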
Jan Bobolz, Emad Heydari Beni, Anja Lehmann, Omid Mirzamohammadi, Cavit Özbay, Mahdi Sedaghat
Keyed-Verification anonymous credentials (KVAC) enable privacy-preserving authentication and can be seen as the symmetric primitive of conventional anonymous credentials: issuance and verification of credentials requires a shared secret key. The core advantage of KVACs is that they can be realized without pairings, which still appear to be a significant bottleneck when it comes to real-world adoption. KVACs provide all the benefits of anonymous credentials, in particular multi-show unlinkability, but only work in the setting where the issuer and verifier are the same entity, limiting the applications they can be used in. In this work we extend the idea of keyed-verification credentials to a setting where multiple verifiers are again supported, each sharing an individual secret key with the issuer. We formally introduce this as multi-verifier keyed-verification anonymous credentials (mKVACs). While users must now get verifier-specific credentials, each credential still provides multi-show unlinkability. In terms of security, mKVAC naturally strengthens the single-verifier variant, as it guarantees that corruption of any verifier does not impact unforgeability guarantees for other verifiers. The main challenge therein is to not trade this added flexibility for privacy, and to hide the verifier's identity in the credential issuance. We provide formal definitions of all desired security and privacy features and propose a provably secure and pairing-free construction. Along the way, we develop a new KVAC-like primitive that authenticates group elements and offers statistical privacy, solving the open problem of combining multi-verifier support and pairing-freeness. Finally, we demonstrate the practicality of our protocol via implementation benchmarks.
James Bartusek, Ruta Jawale, Justin Raizes, Kabir Tomer
We construct a publicly-verifiable non-interactive zero-knowledge argument system for QMA with the following properties of interest.
1. Transparent setup. Our protocol only requires a uniformly random string (URS) setup. The only prior publicly-verifiable NIZK for QMA (Bartusek and Malavolta, ITCS 2022) requires an entire obfuscated program as the common reference string.
2. Extractability. Valid QMA witnesses can be extracted directly from our accepting proofs. That is, we obtain a publicly-verifiable non-interactive argument of quantum knowledge, previously only known in a privately-verifiable setting (Coladangelo, Vidick, and Zhang, CRYPTO 2020).
Our construction introduces a novel ZX QMA verifier with "strong completeness" and builds upon the coset state authentication scheme from (Bartusek, Brakerski, and Vaikuntanathan, STOC 2024) within the context of QMA verification. Along the way, we establish new properties of the authentication scheme.
The security of our construction rests on the heuristic use of a post-quantum indistinguishability obfuscator. Rather than rely on the full-fledged classical oracle model (i.e. ideal obfuscation), we isolate a particular game-based property of the obfuscator that suffices for our proof, which we dub the evasive composability heuristic.
As an additional contribution, we study a general method for replacing heuristic use of obfuscation with heuristic use of hash functions in the post-quantum setting. In particular, we establish security of the ideal obfuscation scheme of Jain, Lin, Luo, and Wichs (CRYPTO 2023) in the quantum pseudorandom oracle model (QPrO), which can be heuristically instantiated with a hash function. This gives us NIZK arguments of quantum knowledge for QMA in the QPrO, and additionally allows us to translate several quantum-cryptographic results that were only known in the classical oracle model to results in the QPrO.
Sourav Das, Pratish Datta, Aditi Partap, Swagata Sasmal, Mark Zhandry
Threshold encryption distributes decryption capability across $n$ parties such that any $t$ of them can jointly decrypt a ciphertext, while smaller coalitions learn nothing. However, once $t$ or more parties collude, traditional threshold schemes provide no accountability: a coalition of $t$ or more parties can pool their keys into a pirate decoder that enables unrestricted decryption, all without any risk of being exposed. To address this, Boneh, Partap, and Rotem [CRYPTO '24] introduced threshold traitor tracing (TTT), which equips threshold encryption with traceability. Yet, all known TTT schemes either suffer from parameter sizes growing with at least $n^{1/3}$, or rely on indistinguishability obfuscation to achieve optimal parameters.
In this paper, we present the first TTT schemes with optimal parameters, where public keys, secret keys, and ciphertexts are all bounded by ${\sf poly}(\lambda,\log n)$, built solely from standard cryptographic tools and assumptions. Our first construction relies on the decisional Bilinear Diffie–Hellman (DBDH) assumption in prime order bilinear groups. Our second construction, based on the Learning with Errors (LWE) assumption, is plausibly post-quantum secure, and supports ramp-thresholds where decryption requires a larger coalition than those tolerated by security. Both of our constructions provide traceability against coalitions of arbitrary sizes.
To achieve these results, we introduce a new primitive, Attribute-Based Threshold Encryption (ABTE), which generalizes both threshold and attribute-based encryption. We then combine ABTE with Mixed Functional Encryption through a new compiler to obtain our TTT schemes. We believe ABTE is a powerful primitive that may have independent applications beyond optimal TTT.
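The $t$-of-$n$ access structure underlying any threshold scheme can be illustrated at the plaintext level with Shamir secret sharing, where the decryption key is the shared secret. This is only a sketch of the threshold concept, not the paper's DBDH- or LWE-based TTT constructions; the prime modulus and all function names here are illustrative choices.

```python
# Shamir t-of-n sharing over a prime field: any t shares recover the secret,
# while fewer than t reveal nothing information-theoretically. Illustrates the
# access structure only; real threshold decryption shares a decryption key and
# combines partial decryptions rather than reconstructing the key in the clear.
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def share(secret, t, n, rng=random.Random(42)):
    # random degree-(t-1) polynomial with constant term = secret
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

The accountability gap the paper addresses is visible here: any $t$ shareholders can reconstruct and leak the secret, and nothing in the shares themselves identifies who leaked; traitor tracing adds exactly that identification capability.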
Heng Guo, Kun Tian, Fengxia Liu, Zhiyong Zheng
In 2002, Johnson et al. posed an open problem at the Cryptographers' Track of the RSA Conference: how to construct a secure homomorphic signature on a semigroup, rather than on a group. In this paper, we introduce, for the first time, a semigroup-homomorphic signature scheme. Under certain conditions, we prove that the security of this scheme is based on the hardness of the Short Integer Solution (SIS) problem and is tightly secure. Furthermore, we extend it to a linearly semigroup-homomorphic signature scheme over lattices, and this scheme can also ensure privacy.
Dor Bitan, Zachary DeStefano, Shafi Goldwasser, Yuval Ishai, Yael Tauman Kalai, Justin Thaler
Motivated by the mismatch between floating-point arithmetic, which is intrinsically approximate, and verifiable computing protocols for exact computations, we develop a generalization of the classical sum-check protocol. Our generalization proves claims of the form $\sum_{x \in \{0,1\}^v} g(x) \approx H$, where $g$ is a low-degree $v$-variate polynomial over an integral domain $\mathbb{U}$. The verifier performs its check in each round of the protocol using a tunable error parameter $\delta$. If $\Delta$ is the error in the prover's initial claim, then the soundness error of our protocols degrades gracefully with $\delta/\Delta$. In other words, if the initial error $\Delta$ is large relative to $\delta$, then the soundness error is small, meaning the verifier is very likely to reject.
Unlike the classical sum-check protocol, which is fundamentally algebraic, our generalization exploits the metric structure of low-degree polynomials. The protocol can be instantiated over various domains, but is most natural over the complex numbers, where the analysis draws on the behavior of polynomials over the unit circle. We also analyze the protocol under the Fiat-Shamir transform, revealing a new intermediate security phenomenon that appears intrinsic to approximation.
Prior work on verifiable computing for numerical tasks typically verifies that a prover exactly executed a computation that only approximates the desired function. In contrast, our protocols treat approximation as a first-class citizen: the verifier's checks are relaxed to accept prover messages that are only approximately consistent with the claimed result.
This establishes the first black-box feasibility result for approximate arithmetic proof systems: the protocol compiler is independent of how arithmetic operations are implemented, requiring only that they satisfy error bounds. This opens a path to verifying approximate computations while sidestepping much of the prover overhead imposed by existing techniques that require encoding real-valued data into finite field arithmetic.
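The relaxed round check can be sketched as follows for a multilinear $g$: in each round the verifier accepts the prover's univariate restriction if its values at $0$ and $1$ sum to the running claim within tolerance $\delta$, and draws its challenge from the unit circle. This toy simulates an honest prover by brute-force partial sums; all names, the degree-1 restriction, and the challenge distribution are illustrative assumptions, not the paper's exact protocol.

```python
import cmath
import random

def approx_sumcheck(g, v, claimed_H, delta, rng=random.Random(0)):
    """Toy approximate sum-check: verify claimed_H ~ sum of g over {0,1}^v
    for a multilinear g taking a list of v numbers (ints or complex).
    The honest prover is simulated in-line by exponential brute force."""
    def partial_sum(prefix):
        # sum g over all boolean settings of the variables after `prefix`
        rem = v - len(prefix)
        total = 0.0
        for mask in range(2 ** rem):
            tail = [(mask >> j) & 1 for j in range(rem)]
            total += g(prefix + tail)
        return total

    claim = claimed_H
    prefix = []
    for _ in range(v):
        # prover message: the univariate restriction, sent via its values at 0, 1
        s0 = partial_sum(prefix + [0])
        s1 = partial_sum(prefix + [1])
        # verifier's relaxed round check, with tunable tolerance delta
        if abs((s0 + s1) - claim) > delta:
            return False
        # challenge drawn uniformly from the unit circle
        r = cmath.exp(2j * cmath.pi * rng.random())
        # for multilinear g the restriction is linear: s(r) = (1-r)*s0 + r*s1
        claim = (1 - r) * s0 + r * s1
        prefix = prefix + [r]
    # final check: one oracle query to g at the random point
    return abs(g(prefix) - claim) <= delta
```

With a correct initial claim the checks pass exactly (up to rounding), while an initial error $\Delta \gg \delta$ is caught in the first round, matching the graceful $\delta/\Delta$ soundness degradation described above.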
Thomas Debris-Alazard, Victor Dyseryn, Duong Hieu Phan
Code-based cryptography has reached maturity for standard primitives such as encryption and digital signatures. However, when it comes to advanced encryption functionalities, particularly in multi-user settings involving collusions among users holding different secret keys, there is still no foundational framework for code-based schemes.
In this work, we address this gap by introducing a multi-receiver encryption scheme with tracing capability based on coding assumptions. This primitive often serves as a foundation for more advanced constructions such as attribute-based or functional encryption.
To achieve this goal, we propose a kernel sampling technique that enables the sampling of secret keys under a common public key, thereby realizing multi-receiver encryption. The resulting construction is as efficient as the underlying public-key encryption scheme, namely Alekhnovich's scheme from FOCS'03.
We also introduce new hardness assumptions on problems with hints. These assumptions extend classical code-based problems to handle collusion among dishonest users in the multi-user setting. In particular, we define the $\ell$-Decoding Problem ($\ell$-DP), the code-based analogue of the $k$-LWE problem in lattices, and provide a reduction from the standard Decoding Problem (DP) to $\ell$-DP. We further introduce structural variants relying on Moderate Density Parity Check (MDPC) codes that we call $\ell$-MDPC-DP.
Based on $\ell$-MDPC-DP, we design the first code-based traitor-tracing encryption scheme. Interestingly, the security of our scheme relies on $\ell$-MDPC-DP and a slight variation called $(\ell,\ell')$-OM-MDPC-DP, the latter of which we prove to be as hard as $\ell$-DP via a polynomial reduction, therefore reducing the security of our scheme to only $\ell$-MDPC-DP. Although no formal reduction from $\ell$-DP to $\ell$-MDPC-DP is given, the definition of $\ell$-MDPC-DP is a natural variant of $\ell$-DP adapted to the multi-user code-based setting. Furthermore, we also provide evidence of $\ell$-MDPC-DP hardness by presenting detailed cryptanalytic arguments.
Dohyuk Kim, Sin Kim, Seunghwan Lee, Dong-Joon Shin
Fully Homomorphic Encryption over the Torus (TFHE) is a fully homomorphic encryption scheme that efficiently supports Boolean logic gates by performing gate operations and refreshing ciphertext noise with a single gate bootstrapping. However, its operation is limited to simple two-input gates such as AND, OR, XOR, and NAND, requiring deep circuits and multiple bootstrapping steps to support more complex arithmetic. In this paper, we propose Primitive Gate Bootstrapping, a new algebraic framework that significantly expands the class of Boolean functions evaluable with a single bootstrapping. By formalizing bootstrappable functions as compositions of linear maps and negacyclic functions, called the Blind-Rotational Function Family (BRFF), we define a subclass, the Primitive Gate Family (PGF). This family includes multi-input hybrid gates such as $\ell$-input XOR, 3-input Majority, and AND-XOR, all of which can be realized with a single bootstrapping. Building on the PGF, we further design a general circuit framework called the Parallel Prefix Group Circuit (PPGC) for efficiently implementing arithmetic and logical operations. PPGCs enable n-bit addition, subtraction, comparison, equality, select, minimum/maximum, absolute value, and negation with logarithmic depth O(log n) and significantly reduced latency compared to TFHE. In particular, our optimized implementations of addition and subtraction achieve a 1.92× speedup over TFHE, while the size of the blind rotation key is reduced by approximately 40%. Beyond two-input addition, we also introduce an efficient technique for multi-input addition, which is particularly useful in applications such as encrypted matrix multiplication. PGF-based constructions thus offer a practically scalable and efficient foundation for depth-sensitive homomorphic computations.
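The "linear map plus one negacyclic function" shape of a single-bootstrap gate can be sketched at the plaintext level: bits are encoded as small torus values, summed, and passed through one negacyclic function (the step that blind rotation performs homomorphically). The encodings and constants below are the standard illustrative choices for TFHE-style gates, not the paper's exact parameters or PGF definitions.

```python
# Plaintext model of single-bootstrap gate evaluation in TFHE style:
# a gate = a linear combination of torus encodings followed by ONE negacyclic
# function. Homomorphically, that one function evaluation is a blind rotation,
# i.e. a single bootstrapping, regardless of the number of inputs.

def torus(x):
    # reduce to the torus representative in [-1/2, 1/2)
    return (x + 0.5) % 1.0 - 0.5

def enc(b):
    # encode a bit as +/- 1/8 on the torus
    return 1 / 8 if b else -1 / 8

def negacyclic_sign(x):
    # f(x) = 1/8 on [0, 1/2), -1/8 on [-1/2, 0); satisfies f(x + 1/2) = -f(x),
    # the negacyclicity condition required for blind rotation
    return 1 / 8 if torus(x) >= 0 else -1 / 8

def majority3(a, b, c):
    # 3-input Majority: one linear combination + one negacyclic evaluation,
    # hence one bootstrapping instead of a two-gate, two-bootstrap circuit
    return negacyclic_sign(enc(a) + enc(b) + enc(c)) > 0

def and_gate(a, b):
    # classic TFHE AND: shift the sum so only (1,1) lands in the positive half
    return negacyclic_sign(enc(a) + enc(b) - 1 / 8) > 0
```

The saving the abstract claims is visible in `majority3`: a standard two-input gate set would need at least two bootstrapped gates for 3-input Majority, whereas a single negacyclic evaluation over the summed encodings suffices.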