IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
20 October 2025
David Balbás, Dario Fiore, Russell W. F. Lai
Batch arguments for NP (BARGs) are non-interactive proof systems that allow a prover to convince a verifier that $k$ NP statements $x_1, \ldots, x_k$ are valid relative to some circuit $C$, i.e. there exist witnesses $w_i$ such that $(x_i, w_i)$ satisfy $C$ for all $i$, while the proof size remains sublinear in $k$. Most existing BARG constructions achieve a proof size of $|\pi| = poly(\lambda, |C|, \log k)$ for large or not explicitly specified $poly$ acting on $|C|$, with two exceptions:
- Devadas et al. and Paneth and Pass's ''rate-1'' constructions [FOCS'22] achieve $|\pi| = |w| + O(|w|/\lambda) + poly(\lambda, \log k)$ (with matching verification time for Paneth and Pass), but for not explicitly specified $poly$ due to non-black-box use of cryptographic primitives.
- Waters and Wu's algebraic (pairing-based) construction [Crypto'22] and follow-up works achieve $|\pi| = O(\lambda \cdot |C|)$.
In this work, we give the first algebraic (pairing-based) construction of BARG that achieves proof size and online verifier runtime $O(\lambda \cdot |w|)$. We achieve our result by means of a compiler which builds a BARG generically from a projective chainable functional commitment (PCFC), which supports somewhere extraction, subvector projection, and functional openings. We then construct a PCFC from the standard MDDH assumption in bilinear groups by building on top of the functional commitment for circuits by Wee and Wu [Eurocrypt'24]. Our black-box transformation may be of independent interest for understanding the connection between functional commitments and BARGs and towards obtaining other algebraic constructions of the latter.
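In the notation above, the batch relation underlying a BARG for a circuit $C$ can be written as
$$\mathcal{R}_C^k = \big\{\big((x_1,\ldots,x_k),(w_1,\ldots,w_k)\big) \;:\; C(x_i, w_i) = 1 \ \text{for all } i \in [k]\big\},$$
and the efficiency requirement is that a single proof $\pi$ covers all $k$ statements while remaining sublinear in $k$.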
Agha Aghayev, Yadigar Imamverdiyev
Homomorphic Encryption (HE) allows parties to securely outsource data while enabling computation on encrypted data, protecting against malicious parties and data leakages. More recent HE schemes enable approximate arithmetic on complex vectors and approximation of non-linear functions, specifically useful for image processing algorithms. The Fourier Shape Descriptor (FSD) is a classical method for shape matching via frequency-domain representation, and we show that FSD can be computed entirely in the encrypted domain. To the best of our knowledge, this is the first work to implement secure shape descriptors and matching via HE. We also present two experiments for similar and different shapes, and measure the performance of the encrypted algorithm.
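As background on the plaintext descriptor (independent of the homomorphic evaluation in the paper; the function names and the normalization recipe below are illustrative, not the authors' code), a minimal Fourier Shape Descriptor computation on a closed contour can be sketched as:

```python
import numpy as np

def fourier_shape_descriptor(contour: np.ndarray, k: int = 16) -> np.ndarray:
    """Toy plaintext FSD sketch: DFT of the boundary, normalized for
    translation, scale, and rotation/start-point invariance."""
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary (x, y) points as a complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                          # drop DC term -> translation invariance
    mags = np.abs(coeffs)                    # keep magnitudes -> rotation invariance
    mags = mags / mags[1]                    # scale by first harmonic -> scale invariance
    return mags[1:k + 1]

def shape_distance(d1: np.ndarray, d2: np.ndarray) -> float:
    """Matching score: Euclidean distance between descriptors (smaller = more similar)."""
    return float(np.linalg.norm(d1 - d2))
```

Matching two shapes then reduces to comparing such descriptor vectors, which is the comparison the paper carries out on encrypted data.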
Guilhem Niot, Michael Reichle, Kaoru Takemure
Threshold signatures are an important tool for trust distribution, and preserving the interface of standardized signatures, such as Schnorr signatures, is crucial for their adoption. In practice, latency dominates end-to-end signing time, so minimizing the number of interaction rounds is critical. Ideally, this is achieved under minimal assumptions and with adaptive security, where the adversary can corrupt signers on-the-fly during the protocol.
While Schnorr signatures are proven secure under the Discrete Logarithm (DL) assumption in the random oracle model, the most round-efficient adaptively-secure threshold Schnorr protocols rely on stronger assumptions such as Decisional Diffie-Hellman (DDH), the Algebraic One-More Discrete Logarithm (AOMDL) or even the Low-Dimensional Vector Representation (LDVR) assumptions. The only adaptively-secure threshold Schnorr signature from the DL assumption requires five rounds, leaving a significant gap in our understanding of this fundamental primitive. Achieving low-round protocols with adaptive security from the DL assumption has remained an open question.
We resolve this question by presenting the first adaptively-secure threshold Schnorr scheme in three rounds (two online, one offline) in the random oracle model under the DL assumption. Our result demonstrates that achieving both low round complexity and adaptive security is possible while preserving the (so far) minimal assumptions for Schnorr signatures. To achieve this, we introduce new techniques, including a novel use of an equivocal commitment scheme paired with a simulation-extractable NIZK, and a masking-based aggregated opening strategy for homomorphic commitments. Our work also makes several contributions that might be of independent interest, including a formalization of a strong adaptive security notion, a stronger commitment equivocation property, and an analysis of the simulation-extractability of the randomized Fischlin transformation.
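For context (a textbook recap rather than a contribution of the paper), the single-signer Schnorr interface that such threshold protocols aim to preserve is: for secret key $x$ and public key $X = g^x$, a signature on a message $m$ is $(R, s)$ with
$$R = g^r, \qquad c = H(R, X, m), \qquad s = r + c \cdot x,$$
verified by checking $g^s = R \cdot X^{c}$; a threshold protocol must produce signatures of exactly this form without any single signer ever holding $x$.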
Shiduo Zhang, Huiwen Jia, Delong Ran, Yang Yu, Yu Yu, Xiaoyun Wang
The lattice trapdoor associated with Ajtai's function is the cornerstone of many lattice-based cryptosystems.
The current provably secure trapdoor framework, known as the GPV framework, uses a \emph{strong smoothness} condition, i.e. $\epsilon\ll \frac{1}{n^2}$ for smoothing parameter $\eta_{\epsilon}(\mathbb{Z}^{n})$, to ensure the correctness of the security reduction.
In this work, we investigate the feasibility of \emph{weak smoothness}, e.g. $\epsilon = O(\frac{1}{n})$ or even $O(1)$, in the GPV framework and present several positive results. First, we provide a theoretical security proof for GPV with weak smoothness under a new assumption. Then, we present Gaussian samplers that are compatible with the weak smoothness condition. As direct applications, we present two practical GPV signature instantiations based on a weak smoothness condition. Our first instantiation is a variant of Falcon achieving smaller size and higher security. The public key sizes are $21\%$ to $28\%$ smaller, and the signature sizes are $23.5\%$ to $29\%$ smaller than Falcon. We also showcase an NTRU-based GPV signature scheme that employs the Peikert sampler with weak smoothness. This offers a simple implementation, although its security level is considerably lower. Nevertheless, at the NIST-3 security level, our scheme achieves a $49\%$ reduction in size compared to Dilithium-3.
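For reference, the smoothing parameter being relaxed here is the standard one: for a lattice $\Lambda$ (here $\Lambda = \mathbb{Z}^{n}$) and $\epsilon > 0$,
$$\eta_{\epsilon}(\Lambda) = \min\{\, s > 0 : \rho_{1/s}(\Lambda^{*} \setminus \{0\}) \le \epsilon \,\}, \qquad \rho_{s}(\mathbf{x}) = e^{-\pi \|\mathbf{x}\|^2 / s^2},$$
so the strong-smoothness regime of the GPV framework takes $\epsilon \ll \frac{1}{n^2}$, while the weak-smoothness regime studied here allows $\epsilon = O(\frac{1}{n})$ or even $O(1)$.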
Jihoon Jang, Myeonghoon Lee, Donggeun Kwon, Seokhie Hong, Suhri Kim, Sangjin Lee
HQC, a code-based KEM selected by NIST for post-quantum standardization in March 2025, relies on fast polynomial multiplication over $\mathbb{F}_2[x]$ on embedded targets. On ARM Cortex-M4, where carry-less multiply is unavailable, prior work has focused on two approaches, Frobenius Additive FFT (FAFFT) and a radix-16 method that computes $\mathbb{F}_2[x]$ multiplications via 32-bit integer multiplications.
In this paper, we propose improved variants of FAFFT and the radix-16 method that optimize HQC on Cortex-M4. Regarding FAFFT, applying FAFFT to a polynomial of length $n=2^{k}+r$ with small $r$, as in hqc-128 and -192, requires operating at length $2^{k+2}\approx 4n$. To address this overhead, we use FAFFT with 2-Karatsuba. We divide at $2^{k}$, evaluate two subproducts of size $2^k$ with FAFFT at length $2^{k+1}$, and handle the residual of size $r$ via radix-16 multiplication. We further optimize the FAFFT butterfly by shortening XOR sequences that result from fixed-coefficient multiplications expressed as matrix-vector multiplications and by scheduling that fully utilizes all 14 general-purpose registers. In the final 5 layers, we implement bit swaps between registers with SWAPMOVE operations.
For the radix-16 method, we build a cost model based on operation counts to explore Karatsuba and Toom-Cook combinations, producing a small set of optimal candidates that we evaluate on the target device. We compare with the gf2x library using the same algorithm set. For hqc-128 and -192 the resulting combinations are identical, while for hqc-256 our combination achieves 21.7% fewer cycles.
On a NUCLEO-L4R5ZI board with a Cortex-M4 microcontroller, our implementations reduce polynomial-multiplication cycles by 11.3% (hqc-128) and 9.2% (hqc-192) with FAFFT, and by 24.5% (hqc-128) and 3.2% (hqc-192) with the radix-16 method. Overall, we achieve cycle reductions of 16.4%, 15.9%, and 14.7% for key generation, encapsulation, and decapsulation in hqc-128, and 5.8%, 5.8%, and 5.5% in hqc-192.
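To illustrate the splitting strategy described above (a plain Python sketch with polynomials in $\mathbb{F}_2[x]$ encoded as integers, bit $i$ being the coefficient of $x^i$; the schoolbook clmul helper below merely stands in for the FAFFT and radix-16 kernels used on Cortex-M4):

```python
def clmul(a: int, b: int) -> int:
    """Schoolbook carry-less multiplication over F_2[x] (stand-in for
    the optimized FAFFT / radix-16 kernels)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def split_mul(a: int, b: int, k: int) -> int:
    """Multiply a, b in F_2[x] for lengths n = 2^k + r with small r,
    splitting at 2^k: two size-2^k subproducts (FAFFT in the paper)
    plus one small residual product (radix-16 in the paper), combined
    by XOR since addition and subtraction coincide over F_2."""
    half = 1 << k                       # split position 2^k
    mask = (1 << half) - 1
    a0, a1 = a & mask, a >> half        # a = a0 + x^(2^k) * a1, with a1 small
    b0, b1 = b & mask, b >> half
    lo = clmul(a0, b0)                  # first size-2^k subproduct
    sp = clmul(a0 ^ a1, b0 ^ b1)        # second size-2^k subproduct (Karatsuba sum product)
    hi = clmul(a1, b1)                  # small residual of size ~r
    mid = sp ^ lo ^ hi                  # equals a0*b1 + a1*b0 over F_2
    return lo ^ (mid << half) ^ (hi << (2 * half))
```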
Alexander Frolov, Hal Triedman, Ian Miers
We are now entering an era where the large-scale deployment of anonymous credentials seems inevitable, driven both by legislation requiring age verification and the desire to distinguish humans from bots in the face of the proliferation of AI-generated content. However, the widespread deployment of anonymous credentials faces the same security and fraud concerns as existing credentials, but without the established techniques for securing them. For non-anonymous credentials on the web today, authentication is a continuous process in which servers collect large volumes of behavioral data to protect account holders (e.g., by detecting account compromise) or to combat fraudulent behavior.
In this paper, we propose Continuous Anonymous Authentication (CAA) schemes and give a concrete construction and applications for preventing credential sharing and theft. CAA schemes allow us to move the server-side collection, storage, and processing of these behavioral signals to the client while maintaining privacy and integrity. CAA schemes support, on the client side, a number of common behavioral analysis tests and analytics, both for determining fraudulent behavior and for updating security policies. We implement a prototype, zk-Cookies, which runs in the browser and supports common behavioral signals such as IP address and geolocation history, browser fingerprinting, and page view history. Using this, we build a prototype application for age verification based on legacy credentials (like passports). We implement these checks efficiently in zk-SNARKs, and also show how to securely implement differentially private behavioral analytics in a zk-SNARK. The simplest version of our construction can perform the computation for an update in under 200 ms.
Marc Damie, Federico Mazzone, Florian Hahn, Andreas Peter, Jan Ramon
Function Secret Sharing (FSS) schemes make it possible to share secret functions between multiple parties, with notable applications in anonymous communication and privacy-preserving machine learning. While two-party schemes offer logarithmic key sizes, multi-party schemes remain less practical due to significantly larger keys. Although several approaches have been proposed to improve multi-party schemes, a significant efficiency gap remains between the two-party and multi-party settings.
Our work introduces noisy FSS: a relaxation of FSS preserving the standard privacy guarantees but relaxing the correctness definition by allowing a small amount of noise in the output. We formally define noisy FSS and show how the noise introduced by the scheme can be leveraged to provide differentially private outputs in statistics applications.
To demonstrate the benefits of this relaxation, we adapt a scheme proposed by Corrigan-Gibbs et al. (S&P'15). While their scheme provides the smallest key sizes among multi-party schemes, it does not support some applications, notably in statistics, due to its non-linear share decoding. In contrast, recent works such as Goel et al. (CRYPTO'25) have larger keys but support all FSS applications. Our noisy adapted scheme offers the best of both worlds, matching the best key sizes while providing the properties needed for statistics applications.
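One way to read the relaxation (our informal rendering, not necessarily the paper's exact definition): with $p$ parties each evaluating their key share as $f_i$, standard FSS correctness requires the output shares to reconstruct the function value exactly, while noisy FSS only requires reconstruction up to a small error,
$$\text{standard: } \sum_{i=1}^{p} f_i(x) = f(x), \qquad \text{noisy: } \sum_{i=1}^{p} f_i(x) = f(x) + e \ \text{ with } e \text{ small},$$
and, as the abstract notes, this residual error can itself play the role of the noise needed for differentially private statistics.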
Vincent Grosso, Carlos Andres Lara-Nino
Masking is one of the most widespread countermeasures against side-channel attacks. However, its integration into hardware implementations is subject to physical hazards that can undermine its security. To counter glitches, the most studied physical hazard, an effective solution is to resynchronize the signals by integrating additional hardware layers into the architecture. However, these solutions have an impact on the performance of the implementation. A solution to avoid these limitations is to use more shares to compute higher-degree functions. We study the cost of this approach, known as ($td+n$)-masking. We first derive optimal dependence structures for the creation of non-complete sharings, which allow us to obtain efficient implementations of substitution boxes. As a case study, we use these elements to create a second-order masked architecture for the PRESENT cipher. We perform multiple TVLA tests to validate our approach. Our results confirm that the strategy is efficient in terms of performance, at the cost of increased hardware resources.
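As a generic software illustration of the countermeasure itself (a first-order Boolean masking toy, not the hardware ($td+n$)-masking architecture studied in the paper), each secret is split into shares that are processed separately, and nonlinear operations need fresh randomness to recombine the cross terms safely, which is exactly where extra shares and non-completeness come into play in hardware:

```python
import secrets

def mask(x: int) -> tuple[int, int]:
    """Split an 8-bit secret into two Boolean shares with x = s0 ^ s1."""
    s0 = secrets.randbits(8)
    return s0, x ^ s0

def masked_and(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    """First-order masked AND (ISW-style): the cross terms are blinded
    with fresh randomness r so no intermediate mixes both shares of a secret."""
    a0, a1 = a
    b0, b1 = b
    r = secrets.randbits(8)
    c0 = (a0 & b0) ^ r
    c1 = (a1 & b1) ^ ((r ^ (a0 & b1)) ^ (a1 & b0))
    return c0, c1

# Sanity check: recombining the output shares yields the unmasked AND.
x, y = 0xA5, 0x3C
c0, c1 = masked_and(mask(x), mask(y))
assert c0 ^ c1 == x & y
```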
Craig Gentry, Yongwoo Lee
We propose an efficient fully homomorphic encryption (FHE) scheme tailored for matrix arithmetic based on the Ring-Learning with Errors (RLWE) problem. The proposed scheme naturally supports matrix multiplication, addition, and Hadamard multiplication for batched matrices of various sizes over both complex numbers and integers. Encrypted matrix multiplication is reduced to four matrix multiplications of ciphertext elements, without the need for expensive operations such as slot-to-coefficient conversion or ring switching. In addition, the scheme efficiently supports matrix transformations, including general and conjugate transpositions, as well as matrix rotations: inter-matrix rotations across batched matrices and intra-matrix rotations within rows and columns. Moreover, the proposed FHE scheme can be directly combined with existing bootstrapping algorithms.
By eliminating the need for expensive operations such as repeated slot rotations and conversion between slot- and coefficient-encoding, the proposed construction achieves significant performance improvements. In our construction, encrypted multiplications of $n\times n$ matrices under slot encoding are decomposed into two parts: (1) matrix multiplication — four $n\times n$ matrix multiplications of ciphertext coefficients, and (2) key switching — with a total cost approximately 2–4 times that of Hadamard multiplication. We implemented the proposed scheme and utilized the FLINT library for the matrix multiplication component. Experimental results demonstrate that, even when leveraging highly optimized implementations, matrix multiplication remains the major cost, indicating that our construction substantially reduces auxiliary overheads and achieves strong overall efficiency.
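One way to see where "four matrix multiplications of ciphertext elements" come from (our reading of the standard RLWE tensoring step, not a description of the paper's exact algorithm): for ciphertexts $(b_1, a_1)$ and $(b_2, a_2)$ with $b_i + a_i \cdot s \approx m_i$, the product ciphertext before key switching is
$$d_0 = b_1 b_2, \qquad d_1 = b_1 a_2 + b_2 a_1, \qquad d_2 = a_1 a_2,$$
i.e. exactly four component products, which here become matrix products of ciphertext elements, followed by the key-switching step whose relative cost the abstract quantifies.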
Alessandra Dolmeta, Valeria Piscopo, Guido Masera, Maurizio Martina, Michael Hutter
This work presents a RISC-V extension for Post-Quantum Cryptography (PQC) called HORCRUX, which provides a unified Instruction-Set Extension (ISE) supporting all NIST-approved PQC algorithms. HORCRUX addresses the current fragmentation in hardware support, where existing extensions typically focus on individual algorithms or limited subsets of PQC schemes, and targets the common kernels shared across ML-KEM, ML-DSA, SLH-DSA, and HQC. To address the primary computational bottlenecks of all these algorithms (namely, modular arithmetic, matrix multiplication, and hash transformations), the extension introduces new RISC-V instructions executed by a tightly coupled coprocessor. This coprocessor requires only minimal hardware resources, making the architecture efficient for constrained devices while still providing substantial acceleration. An experimental evaluation on a Zynq UltraScale+ FPGA demonstrates speedups of up to 3× for lattice and hash-based schemes, and over 30× for code-based schemes, while adding less than 2,900 LUTs and 400 FFs to the design. The extension’s modular structure maintains backward compatibility with standard RISC-V cores, offering a scalable solution for deploying PQC on highly constrained embedded systems.
Xiaohan Wan, Mingqiang Wang, Xiaopeng Cheng, Haiyang Xue, Qi Zhang
Multi-key fully homomorphic encryption (MKFHE) extends the capability of fully homomorphic encryption by enabling homomorphic computations on ciphertexts encrypted under different keys. Multi-key blind rotation is the core and most computationally intensive component of MKFHE. The NTRU-based multi-key blind rotation proposed by Xiang et al. (ASIACRYPT 2024) has the potential to achieve smaller key sizes, faster blind rotation, and lower noise compared to its RLWE-based counterpart. However, while the multi-key blind rotation in the RLWE-based scheme proposed by Kwak et al. (PKC 2024) remains compatible with their single-key counterpart, the NTRU-based versions do not.
This motivates our work to advance NTRU-based schemes in both efficiency and compatibility. We propose a novel workflow for NTRU-based multi-key blind rotation that achieves compatibility with its single-key counterpart. Our approach significantly reduces both computational and storage complexity compared to the state-of-the-art NTRU-based design, while maintaining a comparable noise magnitude. Building upon this workflow, we further construct two MKFHE schemes for bootstrapping multi-key LWE ciphertexts and multi-key matrix NTRU ciphertexts, both supporting a super-constant number of parties with respect to the ring dimension $N$. Experimental results demonstrate that our method outperforms existing NTRU-based bootstrapping for TFHE-like MKFHE in both computational efficiency and bootstrapping key size. Specifically, our 2-key gate bootstrapping takes only 26ms and requires a bootstrapping key of size 7.34MB, achieving a 3.1× speedup and a 1.9× key size reduction compared to prior NTRU-based works.
Andrej Bogdanov, Rohit Chatterjee, Yunqi Li, Prashant Nalini Vasudevan
Prange's information set algorithm is a decoding algorithm for arbitrary linear codes. It decodes corrupted codewords of any $\mathbb{F}_2$-linear code $C$ of message length $n$ up to relative error rate $O(\log n / n)$ in $\mathsf{poly}(n)$ time. We show that the error rate can be improved to $O((\log n)^2 / n)$, provided: (1) the decoder has access to a polynomial-length advice string that depends on $C$ only, and (2) $C$ is $n^{-\Omega(1)}$-balanced.
As a consequence we improve the error tolerance in decoding random linear codes if inefficient preprocessing of the code is allowed. This reveals potential vulnerabilities in cryptographic applications of Learning Noisy Parities with low noise rate.
Our main technical result is that the Hamming weight of $Hw$, where $H$ is a random sample of *short dual* codewords, measures the proximity of a word $w$ to the code in the regime of interest. Given such $H$ as advice, our algorithm corrects errors by locally minimizing this measure. We show that for most codes, the error rate tolerated by our decoder is asymptotically optimal among all algorithms whose decision is based on thresholding $Hw$ for an arbitrary polynomial-size advice matrix $H$.
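A toy rendering of the proximity measure described above (a naive bit-flipping loop over a generic binary advice matrix $H$, meant only to convey the idea, not the paper's algorithm or parameter regime):

```python
import numpy as np

def proximity(H: np.ndarray, w: np.ndarray) -> int:
    """Hamming weight of H*w over F_2; for H built from short dual
    codewords, this weight tracks the distance of w from the code."""
    return int(((H @ w) % 2).sum())

def local_decode(H: np.ndarray, w: np.ndarray, max_rounds: int = 100) -> np.ndarray:
    """Greedy local minimization: flip the bit that most reduces the
    proximity measure, and stop at a local minimum."""
    w = w.copy()
    for _ in range(max_rounds):
        base = proximity(H, w)
        if base == 0:
            break
        best_j, best_score = -1, base
        for j in range(len(w)):
            w[j] ^= 1
            score = proximity(H, w)
            w[j] ^= 1
            if score < best_score:
                best_j, best_score = j, score
        if best_j < 0:
            break                      # local minimum reached
        w[best_j] ^= 1
    return w
```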
Shichang Wang, Meicheng Liu, Shiqi Hou, Dongdai Lin
At CHES 2017, Banik et al. proposed a lightweight block cipher GIFT consisting of two versions GIFT-64 and GIFT-128. Recently, many authenticated encryption schemes have adopted GIFT-128 as their underlying primitive, such as GIFT-COFB and HyENA. To promote a comprehensive perception of the soundness of the designs, we evaluate their security against differential-linear cryptanalysis.
For this, automatic tools have been developed to search for differential-linear approximations of S-box-based ciphers. With the assistance of the automatic tools, we find 13-round differential-linear approximations for GIFT-COFB and HyENA. Based on the distinguishers, 18-round key-recovery attacks are given for the message processing phase and initialization phase of both ciphers. Moreover, the resistance of GIFT-64/128 against differential-linear cryptanalysis is also evaluated. We find 12-round and 17-round differential-linear approximations for GIFT-64 and GIFT-128 respectively, which lead to 18-round and 19-round key-recovery attacks respectively. Here, we stress that our attacks do not threaten the security of these ciphers.
Taiyu Wang, Cong Zhang, Hong-Sheng Zhou, Xin Wang, Pengfei Chen, Wenli Wang, Kui Ren, Chun Chen
The generic group model (GGM) is fundamental for evaluating the feasibility and limitations of group-based cryptosystems. Two prominent versions of the GGM exist in the literature: Shoup's GGM and Maurer's GGM. Zhandry (CRYPTO 2022) points out inherent limitations in Maurer's GGM by demonstrating that several textbook cryptographic primitives, which are provably secure in Shoup's GGM, cannot be proven secure in Maurer's model.
In this work, we further investigate Shoup's GGM and identify novel limitations that have been previously overlooked. Specifically, to prevent generic algorithms from generating valid group elements without querying the oracle, the model typically employs sufficiently large encoding lengths. This leads to sparse encodings, a setting referred to as the sparse generic group model (sparse GGM). We emphasize that this sparseness introduces several constraints:
--Groups with AE and Black-Box Separation: Shoup's GGM is typically instantiated with elliptic curve groups, which admit admissible encodings (AE)—functions mapping from $\mathbb{Z}_N$ to elliptic curve points. We establish a black-box separation, showing that the sparse GGM fails to capture cryptographic groups that are both (1) computational Diffie-Hellman (CDH) secure and (2) compatible with admissible encodings.
--Comparison with EC-GGM: We examine the relationship between the sparse GGM and the Elliptic Curve Generic Group Model (EC-GGM) introduced by Groth and Shoup (EUROCRYPT 2022), which inherently yields CDH-secure groups with admissible encodings. Within the framework of indifferentiability, we prove that EC-GGM is strictly stronger than sparse GGM.
--Dense Groups and Black-Box Separation: We revisit groups with dense encodings and establish a black-box separation between CDH-secure dense groups and the sparse GGM.
--Extension to Bilinear Settings: Our results naturally extend to the sparse Generic Bilinear Group Model (GBM), demonstrating that the aforementioned constraints still hold.
In conclusion, our findings indicate that both feasibility and impossibility results in Shoup's GGM should be reinterpreted in a fine-grained manner, encouraging further exploration of cryptographic constructions and black-box separations in EC-GGM or dense GGM.
Agha Aghayev, Nour-eddine Rahmani
Asymmetric cryptographic constructions built upon number-theoretic hardness assumptions are rendered insecure by Shor's quantum algorithm and will be vulnerable to large-scale quantum computers. Hence, the transition to quantum-resistant cryptosystems is a major task. Digital signatures are a fundamental primitive in numerous applications. Recently, a new approach by Nguyen et al. [9] has claimed post-quantum security by basing the signature algorithm's security on a variant of the discrete logarithm problem. In this paper, we present a cryptanalysis of this construction and demonstrate a practical forgery attack that allows generating an unlimited number of valid signatures without access to a signing oracle.
Jonas Schupp, Marco Gianvecchio, Alessandro Barenghi, Patrick Karl, Gerardo Pelosi, Georg Sigl
Post-quantum cryptosystems are currently attracting significant research efforts due to the continuous improvements in quantum computing technologies, which led the National Institute of Standards and Technology (NIST) to open standardization competitions to foster proposals and public scrutiny of cryptosystems and digital signatures. Whilst NIST has chosen, after four selection rounds, three digital signature algorithms, it has also opened a new selection process, as the chosen candidates either relied only on lattice-based computationally hard problems or had unsatisfactory performance figures. In this work, we propose two optimized implementations of the Codes and Restricted Objects Signature Scheme (CROSS) targeting the Cortex-M4 platform. One implementation targets the minimal possible stack size, while the other trades some memory space for performance, using vectorization for some performance-critical arithmetic operations. We show that all parameter sets fit within at most 24 kB of stack, which corresponds to a reduction by a factor of 15 to 45 with respect to the reference implementation. The memory footprint of our implementation, also taking the size of the binary and the signature into account, is less than 128 kB. We additionally outline different stack-reduction options which allow for a fine-grained trade-off between memory footprint and performance of the algorithm. Notably, we also show that our memory optimizations alone have no significant impact on the signature verification of CROSS, while we even achieve a speed-up factor of up to 1.7 when taking the stack and speed optimizations into account.
Diamante Simone CRESCENZO, Emanuele VALEA, Alberto BOSIO
Conventional cryptographic algorithms rely on hard mathematical problems to ensure an appropriate level of security. However, with the advent of quantum computing, classical cryptographic algorithms are expected to become vulnerable. For this reason, Post-Quantum Cryptography (PQC) algorithms have emerged as a response, being designed to resist quantum attacks. Most PQC algorithms rely on the Learning With Errors (LWE) problem, where generating pseudo-random controlled errors is crucial. A well-known solution is the use of hash functions followed by error samplers, implemented according to specific error distributions, whose implementation is challenging. This paper provides a proof of concept demonstrating how Approximate Computing (AxC) can be exploited in LWE-based cryptographic algorithms to alleviate this implementation bottleneck. The main idea is to use AxC circuits to run some of the algorithm's operations, introducing the required error for free thanks to the approximation. Our key contribution is demonstrating how AxC techniques can be effectively applied to LWE-based algorithms, highlighting a novel approach to generating and introducing the error. This concept has proven effective in an approximate implementation of the FrodoKEM algorithm, where we achieve a 50.3% reduction in the need for Gaussian sampling. Additionally, we observe a performance improvement of 2.19%, which further supports the feasibility of this approach. Overall, this work introduces and validates a new design direction for LWE-based cryptography through AxC, opening the way for further research.
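To place the idea (a generic LWE recap, not the paper's FrodoKEM/AxC implementation): each LWE sample hides the secret behind a small additive error term,
$$b = \langle \mathbf{a}, \mathbf{s} \rangle + e \pmod{q}, \qquad e \leftarrow \chi,$$
where $\chi$ is the (near-)Gaussian error distribution that the scheme normally samples explicitly; letting approximation error from AxC circuits contribute part of this term is what the reported 50.3% reduction in Gaussian sampling refers to.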
Dimitri Koshelev
This article considers batch hashing to elliptic curves. This kind of hash function has found numerous applications in elliptic curve cryptography. In practice, a hash-to-curve function is often evaluated by the same entity at many different inputs at a time. It turns out that, under certain mild conditions, simultaneous evaluation can be carried out several times faster than separate evaluations. In this regard, the article introduces a new class of elliptic curves over finite fields, more appropriate for multiple hashing to them. Moreover, two explicit hashing-friendly Montgomery/twisted Edwards curves (of $\approx 128$ security bits) have been generated: one of CM discriminant $-7$, i.e., a GLV-friendly curve, and one of huge CM discriminant, i.e., a CM-secure curve. The new elliptic curves are intentionally covered by so-called Klein's and Bring's curves of geometric genera $3$ and $4$, respectively. The latter are well studied in various algebraic geometry contexts, although they have not yet been (reasonably) applied in cryptography to the author's knowledge. Such a mathematical complication is justified, since conventional curves (from existing standards or of $j$-invariants $0$, $1728$) are seemingly less efficient for batch hashing.
Debranjan Pal, Anubhab Baksi, Surajit Mandal, Santanu Sarkar
A common way to perform classical cryptanalysis on symmetric-key ciphers is to encode the problem as an instance of the Mixed Integer Linear Programming (MILP) problem and then run the instance with an efficient solver. For this purpose, it is essential to model the components in a way that is compatible with the MILP formulation while preserving the characteristics of the cipher. In this work, we aim at the problem of efficiently encoding a substitution box (SBox for short). More specifically, we take the evaluation tables, namely, the Difference Distribution Table (DDT) that models for the differential cryptanalysis, the Linear Approximation Table (LAT) that models for the linear cryptanalysis, the Division Property Table (DPT) that models for the integral cryptanalysis, and the Boomerang Connectivity Table (BCT) that models for the boomerang cryptanalysis.
In the current stage, the only model proposed/used in the literature is to look for the non-zero entries in the evaluation table and encode for that (this ultimately leads to the minimum number of active SBoxes for the target cipher). In our first contribution, we experiment with the complementary concept of using the zero entries (i.e., treating the zero entries as nonzero entries and vice versa). In our second contribution, we observe that the top row and column of each table (which are always taken into consideration) are actually redundant, as those are inherently taken care of through additional constraints at another level (which we call the `secondary structure').
We show the efficacy of our proposals by furnishing results on the evaluation tables on multiple SBoxes. In fact, the first proposal reduces the number of constraints for certain SBox evaluation tables (depending on the properties of the table). The second proposal, on the other hand, always reduces the number of constraints from the state-of-the-art results.
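For concreteness, the DDT that the MILP models encode entry by entry is the standard table sketched below for any $n$-bit SBox (the PRESENT SBox is used purely as a familiar example, not as one of the paper's targets):

```python
def ddt(sbox: list[int]) -> list[list[int]]:
    """Difference Distribution Table: DDT[a][b] counts inputs x with
    S(x ^ a) ^ S(x) == b.  The non-zero entries (or, in this work, the
    complementary zero entries) are what the MILP constraints encode;
    the top row and column (a = 0 or b = 0) are the ones the paper
    identifies as redundant."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for a in range(n):
        for x in range(n):
            table[a][sbox[x ^ a] ^ sbox[x]] += 1
    return table

# Example with the 4-bit PRESENT SBox.
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
T = ddt(PRESENT_SBOX)
assert T[0][0] == 16   # trivial top-left entry, part of the redundant row/column
```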
Benedikt Bünz, Kevin Choi, Chelsea Komlo
In this work, we present Golden, a non-interactive Distributed Key Generation (DKG) protocol. The core innovation of Golden is how it achieves public verifiability in a lightweight manner, allowing all participants to non-interactively verify that all other participants followed the protocol correctly. For this reason, Golden can be performed with only one round of (broadcast) communication. Non-interactive DKGs are important for distributed applications; as parties may go offline at any moment, reducing rounds of communication is a desirable feature. Golden outputs Shamir secret shares of a field element $sk \in \mathbb{Z}_p$ to all participants, and a public key $PK = g^{sk}$ that is a discrete-logarithm commitment to $sk$. Further, the security of Golden requires only the hardness of discrete-logarithm assumptions, and so it can be used over any elliptic curve where these assumptions hold.
Golden is more efficient than prior related schemes in both bandwidth and computation. For 50 participants, Golden requires only 223 kb bandwidth and 13.5 seconds of total runtime for each participant, in comparison to ElGamal-based non-interactive DKG, which requires 27.8 MB bandwidth and 40.5 seconds runtime per participant.
As a building block, we define a new exponent Verifiable Random Function (eVRF). Our eVRF uses a non-interactive key exchange (NIKE) as a building block to derive a Diffie-Hellman shared secret key, and proves correctness of this key with respect to the corresponding Diffie-Hellman public keys. Within Golden, participants use this eVRF in a pairwise manner to generate a one-time pad to encrypt Shamir secret shares to their respective recipients while ensuring public verifiability. As such, Golden avoids the use of public-key encryption schemes such as ElGamal, Paillier, or class groups, departing from prior schemes in the literature. Finally, our eVRF may be of independent interest to settings where publicly-verifiable encryption is desirable.
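A highly simplified sketch of the share-distribution pattern described above (toy modulus, a hash of a pairwise secret standing in for the eVRF output, and no commitments or NIZK proofs, so this illustrates only the data flow, not the Golden protocol itself):

```python
import hashlib
import secrets

P = 2**255 - 19   # toy prime standing in for the group order p

def shamir_shares(sk: int, t: int, ids: list[int]) -> dict[int, int]:
    """Degree-(t-1) Shamir sharing of sk over Z_P: participant i gets f(i)."""
    coeffs = [sk] + [secrets.randbelow(P) for _ in range(t - 1)]
    return {i: sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P for i in ids}

def pairwise_pad(pair_secret: bytes, sender: int, receiver: int) -> int:
    """One-time pad derived from a pairwise (eVRF / DH-style) shared secret."""
    h = hashlib.sha256(pair_secret + sender.to_bytes(4, "big")
                       + receiver.to_bytes(4, "big")).digest()
    return int.from_bytes(h, "big") % P

def encrypt_share(share: int, pad: int) -> int:
    """The sender broadcasts share + pad; only the intended recipient,
    who can derive the same pad, recovers the share."""
    return (share + pad) % P
```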