IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
20 October 2025
Jan Sebastian Götte
Germany is currently rolling out an opt-out, nation-scale database of the medical records of the majority of its population, with low-income people being disproportionately represented among its users. While there has been considerable criticism of the system from civil society, independent academic analysis of the system by the cryptography and information security community has been largely absent. In this paper, we aim to raise awareness of the system’s existence and, based on the system’s public specifications, highlight several concerning cryptographic engineering decisions. Our core observation is that the system’s most sensitive long-term user keys are derived by a rudimentary, home-grown centralized key escrow mechanism. This mechanism relies on a per-use salt and only 256 bits of entropy, shared globally across millions of users. Furthermore, the system’s specification mandates only level 3 compliance with the obsolete FIPS 140-2 security standard, which requires “hard, opaque potting” but lacks active tamper sensing. As a result, the system remains vulnerable to attacks by nation states and other well-funded adversaries.
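To make the criticized design pattern concrete, here is a minimal sketch of a centralized derivation in which every user key descends from one global 256-bit secret plus a salt. All names and derivation details are illustrative assumptions, not the system's actual specification:

```python
import hmac, hashlib, os

GLOBAL_MASTER_SECRET = os.urandom(32)   # 256 bits shared across all users

def derive_user_key(user_id: str, salt: bytes) -> bytes:
    # HKDF-style derivation: key = HMAC(master, salt || user_id)
    return hmac.new(GLOBAL_MASTER_SECRET, salt + user_id.encode(),
                    hashlib.sha256).digest()

salt = os.urandom(16)                   # per-use salt
key = derive_user_key("patient-42", salt)
# Anyone holding GLOBAL_MASTER_SECRET (plus the public salt) can
# re-derive every user's key -- the single point of failure at issue.
```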
Jan Sebastian Götte, Björn Scheuermann
Security meshes are patterns of sensing traces covering an area, used in Hardware Security Modules (HSMs) and other systems to detect attempts to physically intrude into the device's protective shell. State-of-the-art solutions manufacture meshes in bespoke processes from carefully chosen materials, which is expensive and makes replication challenging. Additionally, state-of-the-art monitoring circuits sacrifice either monitoring precision or cost efficiency. In this paper, we present an embeddable security mesh monitoring circuit constructed from low-cost, standard components that utilizes Time Domain Reflectometry (TDR) to create a unique fingerprint of a mesh. Our approach is both low-cost and precise, and enables the use of inexpensive standard Printed Circuit Boards (PCBs) as security mesh material. We demonstrate a working prototype of our TDR circuit costing less than 10 € in components that achieves both time resolution and rise time better than 200 ps, a 25× improvement over previous work. We demonstrate a simple classifier that detects, with no false negatives, several types of advanced attacks such as probing with an oscilloscope probe or micro-soldering.
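As a rough illustration of fingerprint-based monitoring, the sketch below enrolls a mesh fingerprint from reference TDR traces and flags traces that decorrelate from it. The waveforms, correlation statistic, and threshold are stand-in assumptions, not the paper's classifier:

```python
import numpy as np

def enroll(traces: np.ndarray) -> np.ndarray:
    """Average several reference TDR traces into a mesh fingerprint."""
    return traces.mean(axis=0)

def is_tampered(fingerprint, trace, threshold=0.99) -> bool:
    """Flag the mesh if the trace decorrelates from the fingerprint."""
    r = np.corrcoef(fingerprint, trace)[0, 1]
    return r < threshold

rng = np.random.default_rng(0)
ref = rng.normal(size=(10, 2048)).cumsum(axis=1)   # stand-in waveforms
fp = enroll(ref)
probe = fp + rng.normal(scale=5.0, size=2048)      # simulated probing attack
print(is_tampered(fp, probe))
```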
Adrian Cinal, Przemysław Kubiak, Mirosław Kutyłowski, Gabriel Wechta
In this paper, we analyze the clash between privacy-oriented cryptocurrencies and emerging legal frameworks for combating financial crime, focusing in particular on the recent European Union regulations. We analyze Monero, a leading "privacy coin" and a major point of concern for law enforcement, and study the scope of due diligence that must be exercised under the new law with regard to Monero trading platforms and how it translates to the technical capabilities of the Monero protocol. We both recognize flaws in the legislation and identify technical pitfalls threatening either the effective compliance of Monero trading platforms, such as exchanges, or the anonymity endeavour of Monero itself. Of independent interest, we turn to anamorphic cryptography (marking one of the first practical applications of the concept) and leverage it to build a hidden transaction layer embedded in the Monero blockchain that obfuscates illegal money flows and circumvents transaction-level attempts at enforcing the EU law.
Xiaobin Yu, Meicheng Liu
Over the past decades, extensive research has been conducted on lightweight cryptographic primitives. The linear layer plays an important role in their security.
In this paper, we propose a family of linear layers consisting of XORs and rotations, called multiple rows mixers (MRM). The family is designed for LS-type ciphers but mixes elements from several rows. We investigate the impact of the linear layers on the 3-round trail weight of permutations and explore the properties of the inverse of the linear layers with a low XOR count. We employ a generic and extensible approach to determine the parameters of MRM. This approach can automatically generate linear layers that meet a given branch-number requirement.
By applying these design principles and methods, we derive a linear layer that has a dimension of 5 × 64, a differential branch number of 12, a linear branch number of 5 and a computational cost of 2.6 XOR operations per bit. MRM is not limited to a fixed dimension and can be extended to other dimensions. In addition, we present a concrete instantiation of a 320-bit permutation using a more efficient instance of MRM, named Hsilu. Its non-linear layer employs χ operating on columns.
Compared with the permutations of Gaston and the NIST lightweight standard Ascon, the round function of Hsilu requires fewer XOR operations. Hsilu exhibits competitive security and performance with Ascon and Gaston. We demonstrate that the best-found 3-round differential and linear trails of Hsilu have much higher weights than those of Ascon. Hsilu outperforms Gaston and Ascon in terms of both software and hardware performance.
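For intuition, the toy below computes the differential branch number of a small rotation-XOR linear layer by exhaustive search. The 2 × 8-bit state and rotation offsets are arbitrary assumptions chosen for tractability, not MRM's actual 5 × 64 parameters:

```python
W = 8
def rot(x, r):  # left-rotate an 8-bit word
    return ((x << r) | (x >> (W - r))) & 0xFF

def layer(a, b):
    # mix bits across both rows, in the spirit of a multiple-row mixer
    return a ^ rot(b, 1) ^ rot(a ^ b, 3), b ^ rot(a, 2)

def hw(x):
    return bin(x).count("1")

# for a GF(2)-linear layer, the differential branch number is
#   min over nonzero inputs x of hw(x) + hw(layer(x))
branch = min(hw(a) + hw(b) + sum(map(hw, layer(a, b)))
             for a in range(256) for b in range(256) if a | b)
print("differential branch number:", branch)
```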
Reo Eriguchi
Private Simultaneous Messages (PSM) and Conditional Disclosure of Secrets (CDS) are two fundamental primitives in information-theoretic cryptography. In a PSM protocol, each party sends a single message to a referee, who can then compute a function $f$ of their private inputs but learns nothing else. A CDS protocol follows the same model as PSM, but the goal is to disclose a secret shared among all parties to the referee if and only if the function $f$ evaluates to $1$. Minimizing the communication complexity of these primitives is a central question in this area. Since the best known constructions for general functions require high communication complexity, recent studies have attempted to obtain more efficient constructions by focusing on symmetric functions, whose outputs are invariant under permutations of inputs. However, the extent to which exploiting symmetry can improve efficiency has remained unclear. In this work, we show upper and lower bounds that relate the optimal communication complexities of PSM and CDS for symmetric functions to those for general functions. When the number of parties is larger than the input domain size, our upper bound for PSM improves the best known communication complexity for symmetric functions. Furthermore, our upper bound for CDS demonstrates for the first time that symmetry can be exploited to reduce communication in CDS protocols. In contrast, when the number of parties is constant, our lower bounds show that focusing on symmetric functions yields only a constant-factor improvement. We also derive an analogous implication for PSM protocols under a plausible conjecture. In addition, our new constructions for symmetric functions lead to improvements over the state-of-the-art results in related models such as ad hoc PSM and secret sharing.
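To make the CDS model concrete, here is the folklore one-message-each CDS for the equality predicate over a prime field, included as standard background rather than a construction from the paper:

```python
import secrets

P = 2**61 - 1   # prime modulus

def cds_equality(x, y, secret):
    # shared randomness known to both parties but not the referee
    r1, r2 = secrets.randbelow(P), secrets.randbelow(P)
    m1 = (r1 * x + r2 + secret) % P     # Alice's single message
    m2 = (r1 * y + r2) % P              # Bob's single message
    return (m1 - m2) % P                # referee outputs m1 - m2

s = 1234
assert cds_equality(7, 7, s) == s       # x == y: referee learns s
print(cds_equality(7, 8, s))            # x != y: s masked by r1*(x - y)
```

When $x \neq y$, the term $r_1 (x - y)$ is uniform over the field, so the referee's view is independent of the secret.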
Oleksandra Lapiha, Thomas Prest
We present a simple IND-CCA lattice-based threshold KEM. At a high level, our design is based on the BCHK transform (Canetti et al., EUROCRYPT 2004), which we adapt to the lattice setting by combining it with the FO transform (Fujisaki and Okamoto, PKC 1999) in order to achieve decryption consistency.
As in the BCHK transform, our construction requires a threshold identity-based encryption (TIBE) scheme with suitable properties. We build such a TIBE by combining the ABB IBE (Agrawal, Boneh, Boyen, EUROCRYPT 2010) with recent advances in lattice threshold cryptography, such as the threshold-friendly signature scheme Plover (Esgin et al., EUROCRYPT 2024) and a variant of the Threshold Raccoon scheme (Katsumata et al., CRYPTO 2024).
The security proof of our scheme relies on a new assumption which we call the Coset-Hint-MLWE assumption, and which is a natural generalisation of the Hint-MLWE assumption (Kim et al., CRYPTO 2023). We prove the hardness of Coset-Hint-MLWE under standard assumptions. We believe this new assumption may be of independent interest.
Unlike prior works on IND-CCA lattice-based threshold KEMs, our construction only relies on simple algorithmic tools and does not use heavy machinery such as multi-party computation or threshold fully homomorphic encryption.
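For readers unfamiliar with the FO transform mentioned above, here is a minimal runnable sketch of its encapsulate/decapsulate pattern, with an intentionally insecure symmetric stand-in in place of the lattice-based PKE; everything below is an illustrative assumption, not the paper's construction:

```python
import hashlib, os

def H(*parts):                           # random-oracle stand-in
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy "PKE" with sk == pk, standing in for the real lattice scheme.
# It is insecure; it exists only so the consistency check below runs.
def pke_enc(key, m, coins):
    return xor(coins, H(b"c1", key)), xor(m, H(b"c2", key, coins))

def pke_dec(key, ct):
    coins = xor(ct[0], H(b"c1", key))
    return xor(ct[1], H(b"c2", key, coins))

def encaps(key):
    m = os.urandom(32)
    coins = H(b"G", key, m)              # derandomize encryption via RO
    ct = pke_enc(key, m, coins)
    return ct, H(b"K", m, *ct)           # shared key

def decaps(key, ct):
    m = pke_dec(key, ct)
    if pke_enc(key, m, H(b"G", key, m)) != ct:   # re-encryption check
        return H(b"reject", *ct)                 # implicit rejection
    return H(b"K", m, *ct)

key = os.urandom(32)
ct, k1 = encaps(key)
assert decaps(key, ct) == k1
```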
Jung Hee Cheon, Minsik Kang, Junho Lee
Encrypted matrix multiplication (MM) is a fundamental primitive in privacy-preserving machine learning and encrypted data search, but it remains a significant performance bottleneck. Recently, Bae et al. (Crypto’24) and Park (Eurocrypt’25) introduced novel algorithms for ciphertext–plaintext (CPMM) and ciphertext–ciphertext (CCMM) matrix multiplications. These algorithms reduce encrypted MM operations to plaintext matrix multiplications (PPMM), enabling implementation through highly optimized BLAS libraries.
While these reduction-based methods offer significant improvements, their applicability is limited to scenarios where the matrix dimension $d$ is comparable to the ring dimension $N$ in RLWE-based CKKS schemes. As a result, they fall short for matrix multiplications involving small or medium-sized matrices.
We extend the reduction-based CPMM/CCMM to small matrix operations by batching instances. We use the Slots-in-Coefficient (SinC) encoding, where a ring element is represented by a polynomial whose coefficients are the Discrete Fourier Transforms of the matrix entries at the same position. This encoding enables reductions of encrypted batch MM algorithms to a small number of batch PPMMs, which can be efficiently accelerated by BLAS libraries. Our batch encrypted MM flexibly accommodates diverse matrix dimensions and batch sizes independent of the ring dimension $N$, thereby extending its applicability to practical real-world settings.
For two $d \times d$ matrices with $N/d$ batches, our batch CPMM and CCMM algorithms achieve complexity $O(d^2N)$, improving upon Bae et al.'s $O(dN^2)$ and Jiang et al.'s (CCS’18) $O(d^2 N\log (N))$. We further extend our techniques to rectangular matrices, achieving $O(dN^2)$ for multiplying a $d \times N$ matrix by an $N \times N$ matrix, improving on previous $O(N^3)$ methods. A proof-of-concept implementation validates these improvements: multiplying 128 batches of $64 \times 64$ matrices takes $0.20$s (CPMM) and $0.91$s (CCMM), yielding $205\times$ and $64\times$ speedups over previous methods. For a $64 \times 2048$ by $2048 \times 2048$ multiplication, our CCMM completes in $7.8$s, achieving a $28\times$ speedup compared to Park's algorithm.
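The payoff of reducing encrypted MM to batched PPMM can be seen directly in how numerical libraries dispatch one batched call to optimized BLAS kernels instead of looping over Python-level products. The shapes below mirror the 128 × (64 × 64) experiment, but the code is only a plaintext illustration, not the encrypted algorithm:

```python
import numpy as np

batch, d = 128, 64                 # N/d batches of d x d matrices
rng = np.random.default_rng(0)
A = rng.standard_normal((batch, d, d))
B = rng.standard_normal((batch, d, d))

C_batched = A @ B                  # one batched PPMM call (BLAS-backed)
C_looped = np.stack([A[i] @ B[i] for i in range(batch)])
assert np.allclose(C_batched, C_looped)
```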
Hao Zhang, Zewen Ye, Teng Wang, Yuanming Zhang, Tianyu Wang, Chengxuan Wang, Kejie Huang
The NIST Post-Quantum Cryptography (PQC) standardization has entered its fourth round, underscoring the critical importance of addressing side-channel attacks (SCA), a dominant threat in real-world cryptographic implementations, especially on embedded devices. This paper presents a novel chosen-ciphertext side-channel attack against CRYSTALS-Kyber (standardized as ML-KEM) implementations with Fisher-Yates shuffled polynomial reduction. We propose an efficient and fault-tolerant key recovery algorithm that, by crafting malicious ciphertexts, induces changes in the Hamming weight distribution of an intermediate polynomial's coefficients (the output of the shuffled polynomial reduction during decapsulation), enabling recovery of secret key coefficients from these changes. To ensure robustness, we propose an error-correction strategy that leverages the Hamming weight classifier's behavior to constrain and shrink the correction search space, maintaining effectiveness even with less accurate classifiers or in low-SNR environments. A Multi-Layer Perceptron (MLP) is employed for Hamming weight classification from side-channel traces, achieving 97.11\% accuracy. We combine statistical analysis with explainable deep learning for precise trace segmentation during pre-processing. Experimental results demonstrate full key recovery with only an average of $10 + 354 \times 3$ ciphertext queries and a success rate of 97.98\%, reducing the adversarial effort by 95.36\% compared to contemporary bit-flip techniques. Although shuffling aims to disrupt temporal correlations, our results show that statistical features persist and leak through shuffled implementations. This work reveals enduring SCA risks in shuffled implementations and informs a broader reassessment of PQC side-channel resilience.
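The sketch below illustrates only the generic Hamming-weight classification step on synthetic traces, assuming scikit-learn is available; the data, architecture, and the 97.11% figure in the abstract come from the paper's real measurements and are not reproduced here:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
coeffs = rng.integers(0, 2**12, size=4000)          # 12-bit intermediates
hw = np.array([bin(c).count("1") for c in coeffs])  # labels: Hamming weight
# synthetic "traces" whose samples leak the Hamming weight plus noise
traces = hw[:, None] + rng.normal(scale=1.0, size=(4000, 50))

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
clf.fit(traces[:3000], hw[:3000])
print("HW classification accuracy:", clf.score(traces[3000:], hw[3000:]))
```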
Yusuke Sakai
Aggregate signatures allow compressing multiple single-signer signatures into a single short aggregate signature. This primitive has attracted new attention due to applications in blockchains and cryptocurrencies. In multisig addresses, which is one of such applications, aggregate signatures reduce the sizes of transactions from multisig addresses. Security of aggregate signatures under adaptive corruptions of signing keys is important, since one of the motivations of multisig addresses was a countermeasure against signing key exposures. We propose the first aggregate signature scheme tightly secure under adaptive corruptions using pairings. An aggregate signature includes two source group elements of bilinear groups plus a bit vector whose length is equal to the number of single-signer signatures being aggregated. To construct a scheme, we employ a technique from quasi-adaptive non-interactive zero-knowledge arguments. Our construction can be seen as modularization and tightness improvement of Libert et al.'s threshold signature scheme supporting signature aggregation (Theoretical Computer Science 645) in a non-threshold setting.
Trevor Yap, Shivam Bhasin, Liu Zhang
Side-channel analysis (SCA) exploits physical leakages such as power consumption or electromagnetic emissions to extract secret information from cryptographic implementations. The leakage model is the critical link between observable physical emanations (e.g., power consumption) and internal cryptographic states. When a leakage model fails to match the device’s underlying physical leakage, even powerful attacks like Correlation Power Analysis (CPA) are rendered ineffective. This fundamental challenge limits the success of traditional SCA, especially against noisy or masked targets.
In this work, we introduce the Neural Leakage Model (NLM), a novel neural network architecture trained to learn and accurately characterize complex physical leakage to enhance CPA. Moving beyond NLM's superior attack performance, we directly address the critical ``black box'' problem in deep learning-based side-channel analysis (DLSCA) by proposing a novel methodology to transform the trained NLM into a mathematically equivalent, closed-form polynomial expression. This provides interpretability, offering evaluators transparent insight into the precise leakage model the network has learned. We validate NLM’s performance across four diverse datasets spanning three platforms, from low-end microcontrollers to high-end multi-core ARM systems and dedicated FPGAs. Notably, NLM successfully recovers the secret key from the highly challenging and noisy CHES2025 dataset, representing the first known successful CPA attack on this benchmark. Furthermore, NLM demonstrates superior or comparable performance when extended to higher-order CPA against masking countermeasures. Our findings establish NLM as a powerful, generalizable, and interpretable approach to modern side-channel analysis.
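As a reference point for what NLM generalizes, here is the textbook CPA loop with a plain Hamming-weight leakage model on synthetic traces. The XOR target and noise level are simplifying assumptions (real attacks typically model an S-box output to sharpen key discrimination):

```python
import numpy as np

rng = np.random.default_rng(1)
true_key = 0x5A
pts = rng.integers(0, 256, size=2000)               # known plaintext bytes

def hw(v):
    return np.vectorize(lambda x: bin(x).count("1"))(v)

# simulated traces: leakage = HW(pt XOR key) + Gaussian noise
traces = hw(pts ^ true_key) + rng.normal(scale=2.0, size=pts.size)

# CPA: Pearson correlation between model and traces, per key guess
corrs = [abs(np.corrcoef(hw(pts ^ k), traces)[0, 1]) for k in range(256)]
print("recovered key:", hex(int(np.argmax(corrs))))
```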
Renas Bacho, Yanbo Chen, Julian Loss, Stefano Tessaro, Chenzhi Zhu
Very recently, Crites et al. (CRYPTO 2025) gave a proof for the full adaptive security of FROST (Komlo and Goldberg, SAC 2020), the state-of-the-art two-round threshold Schnorr signature scheme, which is currently used in real-world applications and is covered by an RFC standard. Their security proof, however, relies on the computational hardness of a new search problem they call “low-dimensional vector representation” (LDVR). In fact, the authors show that hardness of LDVR is necessary for adaptive security of a large class of threshold Schnorr signatures to hold, including FROST and its two-round variants. Given that LDVR is a new assumption and its hardness has not been seriously scrutinized, it remains an open problem whether a two-round threshold Schnorr signature with full adaptive security can be constructed based on more well-established assumptions.
In this paper, we resolve this open problem by presenting ms-FROST. Our scheme is partially non-interactive and supports any t - 1 < n adaptive corruptions, where n is the number of signers and t is the signing threshold. Its security relies on the algebraic one-more discrete logarithm (AOMDL) assumption, the algebraic group model (AGM), and the random oracle model (ROM). Further, it achieves the strongest security notion (TS-UF-4) in the security hierarchy of Bellare et al. (CRYPTO 2022). To justify our use of the algebraic group model, we show an impossibility result: We rule out any black-box algebraic security reduction in the ROM from AOMDL to the adaptive TS-UF-0 security of ms-FROST.
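For readers who want the baseline object in mind, here is a toy single-signer Schnorr scheme over a tiny Schnorr group. The parameters are deliberately small illustrative assumptions (real deployments use elliptic-curve groups), and this is background notation, not ms-FROST itself:

```python
import hashlib, secrets

# toy Schnorr group: p = 2q + 1, g of prime order q
p, q, g = 2903, 1451, 4

def Hc(R, m):  # challenge hash c = H(R, m) mod q
    return int.from_bytes(hashlib.sha256(f"{R}|{m}".encode()).digest(),
                          "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)          # secret x, public X = g^x

def sign(x, m):
    r = secrets.randbelow(q - 1) + 1
    R = pow(g, r, p)                # nonce commitment
    return R, (r + Hc(R, m) * x) % q   # z = r + c*x

def verify(X, m, sig):
    R, z = sig
    return pow(g, z, p) == R * pow(X, Hc(R, m), p) % p

x, X = keygen()
assert verify(X, "msg", sign(x, "msg"))
# Threshold (FROST-style) signing splits x and r across signers so that
# the z-shares combine into a valid z under the same verification equation.
```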
Ali Arastehfard, Weiran Liu, Qixian Zhou, Zinan Shen, Liqiang Peng, Lin Qu, Shuya Feng, Yuan Hong
Private Information Retrieval (PIR) enables clients to retrieve data from a server without revealing their query. Keyword PIR (KPIR), an extension that enables PIR over keyword-based queries, is crucial for privacy-preserving two-party analytics in unbalanced settings. However, existing KPIR solutions face two challenges: efficiently supporting arbitrary server-side computations and handling mismatched queries non-interactively.
To the best of our knowledge, we are the first to introduce Keyword PIR with Computation (``KPIR-C''), a novel PIR primitive that enables arbitrary non-interactive computation on responses while preserving query privacy. We overcome the arbitrary-computation challenge by introducing TFHE into KPIR, which ensures efficient bootstrapping and allows arbitrary server-side computations. We address the mismatch challenge by identifying an important KPIR-C subroutine, referred to as KPIR with Default (``KPIR-D''), which removes the disturbance that mismatched responses would otherwise introduce into the computation. We instantiate KPIR-C with two constructions, one based on constant-weight codes and the other on recent LWE-based KPIR approaches. Both constructions enable efficient post-computation and offer trade-offs between communication overhead and runtime. Experiments show that our implemented constructions achieve competitive performance, and in some cases even outperform state-of-the-art KPIR solutions that do not support arbitrary computation.
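The following mock captures only the functional (plaintext) semantics of KPIR-with-Default: the client learns the stored value if its keyword is present, and a fixed default otherwise, so downstream computation is never disturbed by a mismatch. In the real protocol this selection is evaluated obliviously under TFHE so the server learns nothing; the names below are illustrative assumptions:

```python
DEFAULT = 0

def kpir_d(server_db: dict, keyword) -> int:
    """Functional spec of KPIR-D: value on match, default on mismatch."""
    return server_db.get(keyword, DEFAULT)

db = {"alice": 17, "bob": 42}
assert kpir_d(db, "bob") == 42       # match: real value
assert kpir_d(db, "carol") == 0      # mismatch: default, non-interactively
```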
Diego F. Aranha, Nikolas Melissaris
The European Commission's 2022 proposal for a regulation on child sexual abuse material, popularly labelled ChatControl, obliges online services to detect, report, and remove prohibited content through client-side scanning.
This paper examines the proposal as a case of undone science in computer security ethics: a domain where technical feasibility and rights-compatibility questions remain systematically underexplored. Combining legal analysis with philosophy of technology, the paper argues that client-side scanning transforms end-to-end encryption from a right to secrecy into a conditional privilege of use. By integrating Isaiah Berlin's concept of negative liberty, Langdon Winner’s account of the politics of artifacts, and David Hess’s notion of undone science, the analysis traces how design choices become moral constraints.
The discussion situates the European debate within broader concerns about proportionality, epistemic selectivity, and the governance of digital infrastructures. Ultimately, the study shows that the controversy over ChatControl is not only about privacy or child protection but about the epistemic norms that define what counts as legitimate technological knowledge.
Ruben Baecker, Paul Gerhart, Davide Li Calsi, Luigi Russo, Dominique Schröder, Arkady Yerukhimovich
We present the first round-optimal Schnorr threshold signature scheme that achieves full adaptive security against algebraic adversaries, relying solely on the Algebraic One-More Discrete Log (AOMDL) assumption.
Our scheme, FaFROST, builds on the FROST framework, preserving its two-round signing structure and communication efficiency.
By avoiding binding commitments to partial public keys, FaFROST circumvents the recent impossibility results from CRYPTO’25 and requires no reliance on the newly introduced, tailor-made LDVR assumption.
This establishes that round-optimal, adaptively secure Schnorr threshold signatures are achievable under well-established algebraic assumptions.
Jacob Leiken, Sunoo Park
Over time, cryptographically deniable systems have come to be associated in computer-science literature with the idea of "denying" evidence in court — specifically, with the ability to convincingly forge evidence in courtroom scenarios, and relatedly, an inability to authenticate evidence in such contexts. Indeed, in some cryptographic models, the ability to falsify mathematically implies the inability to authenticate. Evidentiary processes in courts, however, have been developed over centuries to account for the reality that evidence has always been forgeable, and relies on factors outside of cryptographic models to seek the truth "as well as possible" while acknowledging that all evidence is imperfect. We argue that deniability does not and need not change this paradigm.
Our analysis highlights a gap between technical deniability notions and their application to the real world. In realistic situations, there will essentially always be factors outside a cryptographic model that influence perceptions of a message's authenticity. We propose the broader concept of credibility to capture these factors. The credibility of a system is determined by (1) a threshold of quality that a forgery must pass to be "believable" as an original communication, which varies based on sociotechnical context and threat model, (2) the ease of creating a forgery that passes this threshold, which is also context- and threat-model-dependent, and (3) default system retention policy and retention settings. All three aspects are important for designing secure communication systems for real-world threat models, and some aspects of (2) and (3) may be incorporated directly into technical system design. We hope that our model of credibility will facilitate system design and deployment that addresses threats that are not and cannot be captured by purely technical definitions and existing cryptographic models, and support more nuanced discourse on the strengths and limitations of cryptographic guarantees within specific legal and sociotechnical contexts.
Yingyao Zhou, Natasha Devroye, Onur Günlü
We consider reversely-degraded wiretap channels, for which the secrecy capacity is zero if there is no channel feedback. This work focuses on a seeded modular code design for the Gaussian wiretap channel with channel output feedback, combining universal hash functions for security and learned feedback-based codes for reliability to achieve positive secrecy rates. We study the trade-off between communication reliability and information leakage, illustrating that feedback enables agreeing on a secret key shared between legitimate parties, overcoming the security advantage of the wiretapper. Our findings also motivate code designs for sensing-assisted secure communication, to be used in next-generation integrated sensing and communication methods.
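The "universal hash functions for security" step is typically instantiated with Toeplitz hashing over GF(2). The sketch below shows that privacy-amplification step on stand-in bit strings; all sizes are arbitrary assumptions, and the reliability (learned feedback code) half of the design is not modeled:

```python
import numpy as np

def toeplitz_hash(seed_bits, x_bits, out_len):
    """Compress x to out_len bits via a seeded random Toeplitz matrix."""
    n = len(x_bits)
    # T[i][j] = seed[i - j + n - 1]: constant along diagonals (Toeplitz);
    # the public seed needs out_len + n - 1 bits in total
    T = np.array([[seed_bits[i - j + n - 1] for j in range(n)]
                  for i in range(out_len)])
    return T.dot(x_bits) % 2

rng = np.random.default_rng(0)
raw = rng.integers(0, 2, size=256)               # partially secret bits
seed = rng.integers(0, 2, size=128 + 256 - 1)    # public random seed
key = toeplitz_hash(seed, raw, 128)              # 128-bit distilled key
```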
Cruz Barnum, Mohammad Hajiabadi, David Heath, Jake Januzelli, Namar Kumar, Mike Rosulek
Oblivious Pseudorandom Function (OPRF) protocols can be categorized into two variants: chosen-key OPRFs -- where the server provides the PRF key $k$ as an input -- and ephemeral-key OPRFs -- where the functionality randomly samples $k$ on behalf of the server. Ephemeral-key OPRFs can be constructed from simple cryptography, such as black-box OT and a random oracle. Chosen-key OPRFs, on the other hand, are only known by employing one of the following (more expensive) approaches:
- Express the underlying PRF as a circuit, then use generic MPC techniques to evaluate it in a non-black-box manner.
- Base the underlying PRF on some specific public-key assumption, such as RSA or DDH, then build a custom protocol that achieves obliviousness by leveraging the PRF's algebraic structure.
Thus, there exists a qualitative gap between known instantiations of these two OPRF variants, in terms of both assumptions and efficiency.
We show that this gap is inherent. In particular, one might hope for an OPRF with all of the following characteristics: the protocol (1) is chosen-key, (2) supports domains of super-polynomial size, and (3) is constructed from "simple" cryptography, such as the minimal assumption of OT, plus a random oracle. We show that no such protocol can exist.
Let $\Pi$ be any chosen-key OPRF protocol defined in terms of a PRF $F$, where $F$ is defined with respect to (only) a random oracle. This restriction on $F$ rules out the above approaches of either using a PRF in a non-black-box way or basing the underlying PRF itself on public-key cryptography. While $F$ is restricted in its use of cryptography, the protocol $\Pi$ is not: $\Pi$ may use arbitrary cryptography, e.g., OT, FHE, iO, etc. We show that each invocation of any such $\Pi$ necessarily leaks information about the server's key $k$. After a bounded number of queries, an adversarial client can effectively recover $k$, breaking server privacy.
To complement our negative result, we provide a matching positive result: we construct a chosen-key OPRF from black-box OT and RO, where server privacy holds for some bounded number of queries $n$. This protocol's underlying PRF is constructed from an $(n+1)$-wise independent hash function and RO; the server's key $k$ has length scaling linearly in $n$, which, by our lower bound, is optimal. Thus, our two results tightly (i.e., up to $\mathrm{poly}(\lambda)$ factors) characterize (im)possibility for chosen-key OPRFs, unless one uses non-black-box cryptography or a public-key-style PRF.
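A minimal sketch of the positive construction's core idea, under illustrative parameter assumptions: a random degree-$n$ polynomial over a prime field is an $(n+1)$-wise independent hash, and composing it with a random-oracle stand-in yields a PRF whose key grows linearly in the query bound $n$:

```python
import hashlib, secrets

P = 2**61 - 1                                # Mersenne prime field
n = 4                                        # query bound
key = [secrets.randbelow(P) for _ in range(n + 1)]  # n + 1 coefficients

def prf(x: int) -> bytes:
    y = 0
    for c in reversed(key):                  # Horner evaluation of the
        y = (y * x + c) % P                  # (n+1)-wise independent hash
    return hashlib.sha256(y.to_bytes(8, "big")).digest()  # RO on top

print(prf(42).hex()[:16])
```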
Linghe Yang, Jian Liu, Jingyi Cui, Guangquan Xu, Mingzi Zuo, Lei Zhang, Zhongshan Li
Distributed Key Generation (DKG) is essential for secure, decentralized cryptographic systems, enabling collaborative key pair generation without a trusted authority. This capability underpins critical applications such as threshold signatures and blockchain-based protocols. To achieve post-quantum security, existing robust lattice-based DKG protocols, tailored for synchronous networks, rely on complaint-based Verifiable Secret Sharing (VSS). However, these protocols lack public verifiability and compatibility with asynchronous environments, constraining their use in Byzantine fault-tolerant settings.
This paper presents LADKG, a Lattice-Based Asynchronous Distributed Key Generation framework designed for post-quantum secure and scalable distributed systems. LADKG integrates Asynchronous Verifiable Short Secret Sharing (AV3S) with an Approximate Asynchronous Common Subset (AACS) protocol to achieve efficient key generation. By deferring verification and leveraging deterministic approximate agreement, LADKG reduces computational and communication overhead while maintaining security and robustness. Evaluations on geo-distributed AWS EC2 clusters demonstrate that LADKG is comparable to or better than classical Asynchronous Distributed Key Generation (ADKG) schemes in scalability and efficiency. Under optimistic conditions with $n=121$ nodes, completion is achieved in 45 seconds, ensuring robust key generation for post-quantum secure applications.
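As generic background for any DKG-style protocol (not LADKG's AV3S scheme), here is the Shamir sharing that such constructions build on: each share is an evaluation of a random degree-$(t-1)$ polynomial, and any $t$ shares reconstruct the secret via Lagrange interpolation:

```python
import secrets

P = 2**127 - 1  # Mersenne prime field

def share(secret, t, n):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):                       # Horner evaluation mod P
        y = 0
        for c in reversed(coeffs):
            y = (y * x + c) % P
        return y
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1               # Lagrange basis evaluated at x = 0
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, -1, P)) % P
    return s

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```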
Daniel Apon
In this note we identify major issues with “Exact Coset Sampling for Quantum Lattice Algorithms” by Zhang. The issues include:
• in the first version, the proposed algorithm requires foreknowledge of the answer to compute the answer;
• in the latest version, the proposed algorithm displays basic misunderstandings of quantum mechanics.
We believe the existence of these issues refutes the author’s claims.
Siddhartha Chowdhury, Nimish Mishra, Sarani Bhattacharya, Debdeep Mukhopadhyay
Software masking—particularly through threshold implementations—has long been regarded as a foundational defense mechanism against side-channel attacks. These schemes enforce the principles of non-completeness and uniformity, offering provable first-order resistance even under realistic leakage assumptions. However, such assurances were primarily developed under the simplified assumption of scalar or in-order execution, where instruction flow and data dependencies are well-behaved and predictable.
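As background for the discussion that follows, here is a minimal first-order Boolean masking sketch: the secret exists only as two shares whose XOR equals it, and the masked AND is the classic two-share ISW gadget. This is generic background under toy assumptions, not the paper's artifact:

```python
import secrets

def mask(x):                       # split x into (x0, x1) with x = x0 ^ x1
    r = secrets.randbits(1)
    return r, x ^ r

def masked_and(a0, a1, b0, b1):
    r = secrets.randbits(1)        # fresh randomness for the cross terms
    c0 = (a0 & b0) ^ r
    c1 = (a1 & b1) ^ ((r ^ (a0 & b1)) ^ (a1 & b0))
    return c0, c1                  # invariant: c0 ^ c1 == (a0^a1) & (b0^b1)

a0, a1 = mask(1); b0, b1 = mask(1)
c0, c1 = masked_and(a0, a1, b0, b1)
assert c0 ^ c1 == 1
# An OoO core that transiently routes a0 and a1 through the same physical
# register (renaming/reuse) leaks transition information on a0 ^ a1 --
# exactly the share-recombination hazard studied below.
```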
Contemporary processors, by contrast, employ deeply optimized out-of-order (OoO) pipelines that rely on dynamic scheduling, register renaming, operand forwarding, and speculative execution. These aggressive microarchitectural behaviors, while improving performance, also create intricate dataflow interactions that can inadvertently violate the independence of masked shares. As a result, secret shares that are theoretically isolated may recombine transiently through register reuse, speculative reordering, or transition-based leakage in the physical register file. This raises a critical question: to what extent do masking countermeasures remain secure when executed on modern OoO architectures?
To explore this question, we develop a systematic methodology for architectural leakage analysis based on fine-grained execution traces from an OoO RISC-V processor. The analysis correlates microarchitectural events with instruction-level behavior, reconstructs the evolution of physical register assignments, and identifies instances where fundamental masking principles such as non-completeness or uniformity are violated. Through this structured approach, we establish a taxonomy of microarchitectural leakage classes and formalize two dominant categories: (1) Rename-Induced Transition Leakage (RIL), arising from register reuse and transitional dependencies between masked shares; and (2) IEW-Induced Dispatch Leakage, originating from speculative instruction issue and execution reordering.
The methodology is demonstrated across three representative studies: (i) a masked Toffoli gate, where violations of non-completeness are revealed; (ii) a masked PRESENT cipher, where 571 leakage events are identified due to register-induced transitions; and (iii) a Probe Isolating Non-Interference (PINI) experiment, which exposes how speculative and dynamic scheduling can compromise isolation guarantees, thereby weakening composability of masked implementations that are provably secure under idealized models.
These results underline a fundamental insight: the security of masking countermeasures cannot be evaluated in abstraction from the hardware on which they execute. Ensuring true resistance to side-channel attacks demands a holistic view—one that jointly considers both the algorithmic soundness of masking constructions and the microarchitectural realities of modern processors.