IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
03 September 2025
Aleck Nash, Christian Eduardo Terron Garcia, Henry Chimal-Dzul, Kim-Kwang Raymond Choo
Consensus protocols are an important building block in blockchain and blockchain-based systems. The recent focus on developing practical quantum computers reinforces the importance of designing quantum-resistant cryptographic protocols and, in the context of this paper, quantum-resistant consensus protocols. In this paper, we systematically review the extant literature on quantum-resistant consensus protocols published between 2019 and 2024. As part of the review, we identify a number of limitations and challenges and discuss potential future research directions.
Taehun Kang, Dongheo Heo, Jeonghwan Lee, Suhri Kim, Changmin Lee
Since supersingular isogeny Diffie-Hellman (SIDH) was broken by a polynomial-time attack, several countermeasures have been proposed. Among them, terSIDH has been highlighted for its high performance, yet it exposes a side-channel vulnerability: the total isogeny degree depends on the private key, causing variation in isogeny computation times. This dependency makes terSIDH susceptible to timing attacks; the ratio between the worst- and best-case execution times of terSIDH is about 32.67. In this work, we propose the first constant-time terSIDH. By adding dummy operations, we reduce the dependency of the isogeny computations on the private key, ensuring that the algorithm’s execution time remains constant regardless of the private key. Since such countermeasures are generally slower than their non-constant-time counterparts, we apply additional optimizations to mitigate this overhead. Moreover, we present the first C implementation of terSIDH with 128-bit security. We compare our method against CSIDH-4096, a representative isogeny-based key exchange protocol with the same security level. For key generation, the worst case of terSIDH, CSIDH-4096, and our constant-time method take 1.77 s, 5.18 s, and 1.58 s, respectively. These results demonstrate that our proposed constant-time method is faster than the worst case of terSIDH while also being significantly more efficient than CSIDH-4096.
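The countermeasure named above, padding a key-dependent computation with dummy operations up to a fixed worst-case bound, can be illustrated with a small Python sketch. This is a generic constant-time pattern, not the authors' terSIDH implementation; the names `apply_isogeny_step` and `MAX_STEPS` are placeholders.

```python
import secrets

MAX_STEPS = 32  # fixed worst-case bound, independent of the private key (placeholder value)

def apply_isogeny_step(state):
    # placeholder for one real isogeny computation step
    return (state * 3 + 1) % (1 << 61)

def dummy_step(state):
    # placeholder dummy operation: costs the same as a real step, result is discarded
    apply_isogeny_step(state)
    return state

def constant_time_walk(state, key_dependent_steps):
    """Always performs MAX_STEPS operations: key_dependent_steps real ones, then
    dummies, so the operation count no longer depends on the private key."""
    assert 0 <= key_dependent_steps <= MAX_STEPS
    for i in range(MAX_STEPS):
        if i < key_dependent_steps:            # a hardened implementation would also
            state = apply_isogeny_step(state)  # make this selection branch-free
        else:
            state = dummy_step(state)
    return state

# the number of executed operations is MAX_STEPS for every key
print(constant_time_walk(secrets.randbelow(1 << 61), key_dependent_steps=5))
```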
Manuel Barbosa, Matthias J. Kannwischer, Thing-han Lim, Peter Schwabe, Pierre-Yves Strub
Decryption errors play a crucial role in the security of KEMs based on Fujisaki-Okamoto because the concrete security guarantees provided by this transformation directly depend on the probability of such an event being bounded by a small real number. In this paper we present an approach to formally verify the claims of statistical probabilistic bounds for incorrect decryption in lattice-based KEM constructions. Our main motivating example is the PKE encryption scheme underlying ML-KEM. We formalize the statistical event that is used in the literature to heuristically approximate ML-KEM decryption errors and confirm that the upper bounds given in the literature for this event are correct. We consider FrodoKEM as an additional example, to demonstrate the wider applicability of the approach and the verification of a correctness bound without heuristic approximations. We also discuss other (non-approximate) approaches to bounding the probability of ML-KEM decryption.
Maria Leslie, Ratna Dutta
In a t-out-of-n threshold secret sharing scheme, accountability is crucial when a subset of f < t servers collude to leak secret shares. Traceable Threshold Secret Sharing (TTSS) ensures that leaked shares can be traced back to the compromised servers while preventing false accusations through non-imputability. At Crypto’24, Boneh et al. proposed new definitions and more practical constructions for TTSS based on Shamir’s and Blakley’s secret sharing schemes, removing the practical limitations of existing TTSS schemes.
Our work presents a traceable secret sharing scheme built upon an additive variant of the Asmuth-Bloom scheme, relying only on black-box access to the reconstruction box R. In our model, a subset of f < t colluding servers can construct a reconstruction box R that recovers the secret with the assistance of an additional t − f random shares. We note that integrating tracing into the standard (t, n)-Asmuth-Bloom Secret Sharing (ABSS) scheme exhibits a tracing leakage issue. We fix this limitation by introducing additive variants of ABSS, named ABSS-I and ABSS-II, that retain the security of the original ABSS scheme while splitting the secret s into t additive components and generating all shares from these components. Based on ABSS-I, we construct a TTSS scheme, TTSS-I, that introduces traceability into the framework and is proven universally traceable in the random oracle model, assuming R is a universally good reconstruction box. We integrate a tracing mechanism into ABSS-II and propose a second scheme, TTSS-II, which extends TTSS-I by additionally concealing partial information about the additive components of the secret s. This yields non-imputability, preventing the tracer from falsely accusing any honest party by fabricating evidence of its corruption. The security of TTSS-II is also proven in the random oracle model and relies on the hardness of the discrete logarithm problem.
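For readers unfamiliar with the underlying primitive, here is a minimal Python sketch of standard (t, n)-Asmuth-Bloom secret sharing via the Chinese Remainder Theorem, with toy parameters. It reflects only the textbook scheme, not the additive variants ABSS-I/ABSS-II or the tracing mechanism introduced in the paper.

```python
import secrets
from functools import reduce

# toy parameters: m0 and pairwise-coprime moduli satisfying the (t=3)-out-of-(n=4)
# Asmuth-Bloom condition  m[0]*m[1]*m[2] > m0 * m[3]*m[2]
m0 = 3
m = [11, 13, 17, 19]
t = 3

def share(secret):
    assert 0 <= secret < m0
    bound = m[0] * m[1] * m[2]              # product of the t smallest moduli
    # lift the secret to y = secret + A*m0 with y below the reconstruction bound
    A = secrets.randbelow((bound - secret) // m0)
    y = secret + A * m0
    return [(mi, y % mi) for mi in m]       # share i is (modulus, residue)

def crt(residues, moduli):
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, mi in zip(residues, moduli):
        Mi = M // mi
        x += r * Mi * pow(Mi, -1, mi)       # Mi^{-1} mod mi exists: moduli are coprime
    return x % M

def reconstruct(shares):                    # any t shares suffice
    moduli = [mi for mi, _ in shares]
    residues = [ri for _, ri in shares]
    y = crt(residues, moduli)
    return y % m0

shares = share(2)
print(reconstruct(shares[:t]))   # -> 2
print(reconstruct(shares[1:]))   # any other t-subset also -> 2
```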
Yuhang Zeng, Zhixin Dong, Xian Xu
HotStuff has gained widespread application in scenarios such as consortium chains in recent years due to its linear view change and pipelined decision-making mechanisms. Although there have been studies on the performance of this algorithm, there remains a lack of analysis and formal termination proofs regarding its composability. This paper, for the first time, constructs a comprehensive formal system for the HotStuff protocol in a partially synchronous network environment under the Universally Composable (UC) framework and proves the termination of the HotStuff protocol within the UC framework.
Specifically, we define the ideal functionality and demonstrate that the HotStuff protocol UC-realizes it, which guarantees indistinguishability between the real HotStuff protocol and the ideal functionality.
Notably, in the context of network delay attacks, this paper uses a phased time analysis method with an exponential backoff timeout mechanism to show that the protocol can still achieve consensus within a bounded amount of time.
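A minimal sketch of the exponential-backoff timeout idea used in the termination argument. This is a generic illustration (the parameter names `base_timeout` and `network_delay` are ours), not the paper's formal model.

```python
def exponential_backoff_views(base_timeout, network_delay, max_views=64):
    """After GST, message delays are bounded by network_delay; a view succeeds once
    its timeout is at least that bound. Doubling the timeout on each failed view
    therefore guarantees success within O(log(network_delay / base_timeout)) views."""
    timeout = base_timeout
    for view in range(1, max_views + 1):
        if timeout >= network_delay:      # all phases of the view complete in time
            return view, timeout
        timeout *= 2                      # view change: back off exponentially
    raise RuntimeError("max_views too small for this delay bound")

print(exponential_backoff_views(base_timeout=1.0, network_delay=37.0))
# -> (7, 64.0): consensus is reached after a bounded number of view changes
```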
The results of this work not only establish a formal proof paradigm for analyzing termination of BFT protocols in a composable framework, but also provide important theoretical foundations for ensuring reliable termination in industrial-grade blockchain systems (such as the Meta Diem project and Chang’an Chain that employ HotStuff).
Michel Seck, Abdoul Aziz Ciss
Recently, Cotan and Teseleanu (NordSec 2023) published an RSA-like cryptosystem. While in RSA, the public exponent $e$ and the private exponent $d$ are related by the equation $ed - k(p-1)(q-1) = 1$, in their scheme, $e$ and $d$ are related by the equation $ed - k(p^n-1)(q^n-1) = 1$ for some positive integer $n$. Teseleanu (CSCML 2024) showed that one can factor the modulus $N$ using a lattice attack if $d$ is small. In this paper, we extend his attack by showing that if the private exponent is either too small or too large, one can factor $N$ in polynomial time by solving the generalized equation $eu - (p^n - 1)(q^n - 1)v = \pm 1$ using lattice reduction techniques.
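As a small numerical illustration of the key relation in question (toy parameters only; this shows the relation itself, not the lattice attack):

```python
# toy instance of the key equation e*d - k*(p^n - 1)*(q^n - 1) = 1
p, q, n = 1009, 1013, 2
phi_n = (p**n - 1) * (q**n - 1)

e = 65537                      # public exponent, coprime to phi_n
d = pow(e, -1, phi_n)          # private exponent
k = (e * d - 1) // phi_n

assert e * d - k * phi_n == 1  # the relation exploited when d is too small or too large
print(d, k)
```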
Hamza Abusalah, Gaspard Anthoine, Gennaro Avitabile, Emanuele Giunta
We study the inherent limitations of additive accumulators and updatable vector commitments (VCs) with constant-size digest (i.e., independent of the number of committed elements).
Specifically, we prove two lower bounds on the expected number of membership proofs that must be updated when a \emph{single} element is added (or updated) in such data structures. Our results imply that when the digest bit length approaches the concrete security level, the expected number of proofs invalidated by an append operation, for a digest committing to $n$ elements, is nearly maximal: $n-\mathsf{negl}(\lambda)$ in the case of exponential-size universes, and $n-o(n)$ for super-polynomial universes. Our results have significant implications for stateless blockchain designs relying on constant-size VCs, suggesting that the overhead of frequent proof updates may offset the benefits of reducing global state storage.
Anasuya Acharya, Carmit Hazay, Muthuramakrishnan Venkitasubramaniam
The notion of Best-of-Both-Worlds, introduced in the work of Ishai et al. (CRYPTO 2006), investigated whether an MPC protocol can simultaneously provide two incomparable security guarantees, namely guaranteed output delivery against an honest majority and security with abort against a dishonest majority, and provided tight upper and lower bounds in the presence of computationally bounded (i.e., PPT) adversaries. Another line of work, starting from the work of Chaum (CRYPTO 1989), considered protocols that simultaneously achieve security against an unbounded adversary corrupting a minority of the parties and security against arbitrary corruption by a PPT adversary.
In this work, we generalize previous work to investigate a fundamental challenge: designing MPC in a multiverse where security is specified with respect to (1) GOD, (2) fairness, (3) security w.r.t. unbounded adversaries, and (4) security with abort. The work of Lucas et al. (PODC 2010) resolved this question for threshold adversaries; however, the case of general adversary structures remained open. Our main result completely characterizes when a protocol can simultaneously achieve all properties. Namely, given adversary structures $Z_{\mathsf{GOD}}, Z_{\mathsf{fair}}, Z_{\mathsf{S}}$ and $Z_{\mathsf{C}}$, we provide tight upper and lower bounds for when an MPC protocol can provide GOD, fairness, and security with abort respectively for unbounded and PPT adversaries w.r.t. these adversary structures.
Wei Ao, Vishnu Naresh Boddeti
Face recognition is central to many authentication, security, and personalized applications. Yet, it suffers from significant privacy risks, particularly arising from unauthorized access to sensitive biometric data. This paper introduces CryptoFace, the first end-to-end encrypted face recognition system with fully homomorphic encryption (FHE). It enables secure processing of facial data across all stages of a face-recognition process—feature extraction, storage, and matching—without exposing raw images or features. We introduce a mixture of shallow patch convolutional networks to support higher-dimensional tensors via patch-based processing while reducing the multiplicative depth and, thus, inference latency. Parallel FHE evaluation of these networks ensures near-resolution-independent latency. On standard face recognition benchmarks, CryptoFace significantly accelerates inference and increases verification accuracy compared to the state-of-the-art FHE neural networks adapted for face recognition. CryptoFace will facilitate secure face recognition systems requiring robust and provable security. The code is available at https://github.com/human-analysis/CryptoFace.
Ashish Choudhury, Ivan Damgård, Shravani Patil, Arpita Patra
We study the feasibility of constant communication-overhead information-theoretic Multiparty Computation tolerating malicious adversaries in both the synchronous and asynchronous network settings. A simple observation based on the results of Damgård, Larsen and Nielsen (CRYPTO 2019) allows us to conclude that (a) any statistically-secure MPC with $n = 2t+s$, $s > 0$ requires a communication complexity of $\Theta(\frac{n|C|}{s})$ bits in the synchronous setting; (b) any statistically-secure MPC with $n = 3t+s$, $s > 0$ requires a communication complexity of $\Theta(\frac{n|C|}{s})$ bits in the asynchronous setting; where $C$ denotes the circuit, $n$ denotes the number of parties and $t$ denotes the number of corruptions. Both results hold even for abort security.
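A quick arithmetic check (ours, not taken from the paper) of why $s = \Theta(t)$ is the constant-overhead regime targeted below: with $n = 2t + s$ (or $n = 3t + s$) and $s = \Theta(t)$ we have $n = \Theta(s)$, so
$$\Theta\!\left(\frac{n\,|C|}{s}\right) = \Theta(|C|)\ \text{bits},$$
i.e., the lower bound no longer grows with the number of parties, and a protocol communicating $O(|C|\kappa)$ bits matches it up to the statistical security parameter $\kappa$.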
To match the above bounds for $s = \Theta(t)$, we propose two protocols, one in the synchronous and one in the asynchronous setting, both with guaranteed output delivery and a communication complexity of $O(|C|\kappa)$ bits for a statistical security parameter $\kappa$, and a running time of $O(D)$ for circuit depth $D$ (in expectation in the asynchronous setting). For this, we require circuits with a SIMD structure. Our constructions are the first constant-overhead statistically-secure MPC protocols with optimal resilience.
Our first technical contribution is a verifiable secret sharing (VSS) protocol with constant overhead per secret through two-dimensional packing. A second contribution is a degree-reduction protocol (from a higher-degree packed sharing to a lower-degree packed sharing) with constant overhead.
As a bonus, a simplified version of our statistical MPC protocols, plugged with the VSSs derived from the earlier works of Abraham, Asharov, Patil and Patra (EUROCRYPT 2023 and 2024), allows us to obtain perfectly-secure MPC protocols with a communication cost of $O(|C|\kappa)$ bits, where $\kappa = \Theta(\log(n))$, and a running time of $O(D)$, with resiliency $n = 3t+\Theta(t)$ in the synchronous setting and $n=4t+\Theta(t)$ in the asynchronous setting. These resiliencies are optimal in the perfect setting due to the lower bounds of Damgård, Patil, Patra and Roy (ePrint 2025), even for abort security.
Chenke Wang, Yu Long, Xian Xu, Shi-Feng Sun, Yiqi Liu, Dawu Gu
Cross-chain payment technologies have gained broad recognition from industry and academia, as they enable assets to circulate across the boundaries of various blockchains. However, existing cross-chain payment protocols are tailored to a limited set of blockchains, inflexible in providing privacy guarantees, and unsatisfactory in scalability.
To address these issues, this paper proposes a universal cross-chain payment framework. This framework enables payments across a wide range of blockchains since it is independent of any specific blockchain features. Moreover, it provides on-demand privacy and high scalability. To instantiate the framework, we introduce $\mathsf{UniCross}$, a novel universal cross-chain payment protocol. Concretely, we utilize a ring learning with errors (RLWE)-based encryption scheme and propose a new non-interactive zero-knowledge (NIZK) protocol, named $\mathsf{HybridProof}$, to construct $\mathsf{UniCross}$. We formally define the security of the universal cross-chain payment framework and prove the universal composability (UC) security of $\mathsf{UniCross}$. The proof-of-concept implementation and evaluation demonstrate that (1) $\mathsf{UniCross}$ incurs up to 78\% less communication cost and 94\% less computation cost than the state-of-the-art work; (2) $\mathsf{UniCross}$ achieves a throughput ($\sim$360 tps) that is 36$\times$ that of the state-of-the-art work ($\sim$10 tps).
Anne Canteaut, Merlin Fruchon
Many design strategies and differential attacks rely on the so-called hypothesis of stochastic equivalence, which states that the differential behaviour of a cipher for any fixed key can be approximated by its average behaviour over all keys. However, this assumption is known to be invalid in general. For instance, all two-round differential characteristics of AES are plateau, meaning that their probabilities depend highly on the key. While such discrepancies were traditionally expected to occur only for a small number of rounds, several counter-examples have recently been exhibited thanks to the quasi-differential framework.
This paper aims to show why the divergence between fixed-key and average behaviour can be even more pronounced, and definitely harder to anticipate, when analyzing differentials rather than individual characteristics; differentials are the quantity actually relevant in differential cryptanalysis. Indeed, when a differential aggregates many characteristics that do not satisfy the hypothesis of stochastic equivalence, several scenarios may arise: the sets of weak keys may be identical across all characteristics, resulting in a strong clustering effect, or entirely disjoint, eliminating any such effect. Focusing on (truncated) differentials that include plateau characteristics, we demonstrate how this mechanism explains the unexpected differential behaviour observed in Midori64 and Scream over an arbitrary number of rounds, as recently reported by Baudrin et al. We also clarify how this situation differs from invariant subspaces, except in a few specific cases. Furthermore, we identify the properties of the Sbox that give rise to this weakness, as well as all vulnerable Sboxes among the optimal 4-bit functions.
Patrick Derbez, Marie Euler
This paper introduces a new MILP modelling to find impossible differential (ID) distinguishers and attacks. Standard models for ID are negative models, in the sense that a differential is impossible if and only if the model has no solution. Our new modelling technique focuses on probable IDs, differentials that are probably impossible. While this might lead to false positives, the main advantage is that searching for such probable IDs can be achieved through a positive model. This facilitates the search for the best impossible differential attacks without first exhausting all possible ID distinguishers on a target. We also propose to simplify the modelling by only considering two possible states for internal cells: inactive and unknown. In this case there are no longer direct contradictions but only indirect ones, assuming that it is impossible that all cells are inactive.
With these two simple ideas, we are able to retrieve the longest impossible differential distinguishers on MIDORI, SKINNY, PRESENT, SIMON, Simeck and SPECK. Furthermore, since the model looking for candidates is based on satisfiability, it can be incorporated into a larger model that looks directly for the best attacks, enumerating the distinguishers in order of the complexity of the associated attacks, which we did for AES, ARADI, SIMON and SKINNY.
M&M: Secure Two-Party Machine Learning through Efficient Modulus Conversion and Mixed-Mode Protocols
Ye Dong, Wen-jie Lu, Xiaoyao Hou, Kang Yang, Jian Liu
Secure two-party machine learning has made substantial progress through the use of mixed-mode protocols. Despite these advancements, existing approaches often suffer from efficiency bottlenecks due to the inherent mismatch between the optimal domains of various cryptographic primitives, such as Homomorphic Encryption and Oblivious Transfer. In response to these challenges, we introduce the M&M framework, which features an efficient modulus conversion protocol. This breakthrough enables seamless integration of the most suitable cryptographic subprotocols within their optimal modulus domains, allowing linear computations to be executed over a prime modulus and nonlinear computations over a two-power modulus, with a minimal modulus conversion overhead. We further establish new benchmarks for the performance of fundamental primitives, namely comparison and multiplication, across various two-party techniques, together with our practical optimizations to improve efficiency. By incorporating these techniques, M&M demonstrates significant performance enhancements over state-of-the-art solutions: (i) we report a $6\times$-$100\times$ improvement for approximated truncations with 1-bit error tolerance; (ii) an average of $5\times$ (resp. $4\times$) reduction in communication (resp. runtime) for machine learning functions; and (iii) a 25\%-99\% improvement in cost-efficiency for private inference of deep neural networks and a 50\% improvement in private training of gradient boosting decision trees.
Jian Guo, Wenjie Nan, Yiran Yao
We present an analysis of time-space tradeoffs for both the search and decision variants of the $k$-collision problem from an algorithmic perspective, where $k \in \left[2, O(\operatorname{polylog}(N))\right]$ and the underlying function is $f_{N,M} : [N] \rightarrow [M]$ with $M \geq N$. In contrast to prior work that focuses either on 2-collisions or on random functions with $M = N$, our results apply to both random and arbitrary functions and extend to a broader range of $k$. The tradeoffs are derived from explicit algorithmic constructions developed in this work, especially for the decision problems with $k\geq 3$.
For 2-collision problems, we show that for any random function $f_{N,M}$ with $M \geq N$, the time-space tradeoff for finding all 2-collisions follows a single curve $T=\widetilde{O}\left(\frac{N^{3/2}}{\sqrt{S}}\right)$, where $T$ denotes time complexity and $S$ denotes available space. This tradeoff also extends to arbitrary functions with at most $O(N)$ total 2-collisions.
For 3-collision problems, we identify two time-space tradeoff curves for the search variant over random functions, depending on the available space $S$. For arbitrary functions, we show that the decision problem can be solved with a tradeoff of $T=\widetilde{O}\left(\frac{N^{3/2}}{\sqrt{S}} + \frac{N}{S}\frac{n_2}{n_3}\right)$, where $n_{i}$ denotes the number of $i$-collisions. Surprisingly, for random functions, the decision problem for 3-collision shares the same time-space tradeoff as the 2-collision case $T=\widetilde{O}\left(\frac{N^{3/2}}{\sqrt{S}}\right)$.
For general $k$-collision problems, we extend these results to show that the decision problem over arbitrary functions can be solved in time $T=\widetilde{O}\left(\frac{N^{3/2}}{\sqrt{S}} + \frac{N}{S}\frac{n_2}{n_k}\right)$. For the search problem over random functions, we derive two time-space tradeoffs based on the space $S$, yielding approximately $S^{1/(k-2)}$ or $S^{1/(2k-2)}$-fold speedups compared to the low-memory setting $S = O(\log M)$. When $M = N$, the tradeoff simplifies to one single curve with $S^{1/(k-2)}$-fold speedup.
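For intuition about where curves like $T=\widetilde{O}\left(\frac{N^{3/2}}{\sqrt{S}}\right)$ come from, here is a compact Python sketch of the classical memory-bounded 2-collision search via distinguished points (van Oorschot-Wiener). It is a standard textbook illustration, not one of the algorithms constructed in the paper.

```python
import hashlib, random

N = 1 << 20          # toy domain size
DP_BITS = 6          # a point is "distinguished" if its low DP_BITS bits are zero
S = 1 << 8           # table size: the memory budget

def f(x):
    # toy random-looking function [N] -> [N]
    return int.from_bytes(hashlib.sha256(x.to_bytes(8, "big")).digest()[:8], "big") % N

def is_dp(x):
    return x & ((1 << DP_BITS) - 1) == 0

def walk_to_dp(start, max_len=20 * (1 << DP_BITS)):
    x, length = start, 0
    while not is_dp(x):
        x, length = f(x), length + 1
        if length > max_len:
            return None           # trapped in a cycle without a distinguished point
    return x, length

def merge_point(s1, l1, s2, l2):
    # align both chains so they are the same distance from the common distinguished point
    a, b = s1, s2
    for _ in range(l1 - l2):
        a = f(a)
    for _ in range(l2 - l1):
        b = f(b)
    if a == b:
        return None               # one start lies on the other chain: no new collision
    while True:
        fa, fb = f(a), f(b)
        if fa == fb:
            return a, b           # distinct inputs, same image: a 2-collision
        a, b = fa, fb

table = {}                        # distinguished point -> (chain start, chain length)
while True:
    start = random.randrange(N)
    out = walk_to_dp(start)
    if out is None:
        continue
    dp, length = out
    if dp in table and table[dp][0] != start:
        hit = merge_point(*table[dp], start, length)
        if hit:
            a, b = hit
            assert f(a) == f(b) and a != b
            print(f"collision: f({a}) == f({b}) == {f(a)}")
            break
    table[dp] = (start, length)
    if len(table) > S:            # respect the memory budget (toy eviction policy)
        table.pop(next(iter(table)))
```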
Subeen Cho, Yulim Hyoung, Hagyeong Kim, Minjoo Sim, Anupam Chattopadhyay, Hwajeong Seo, Hyunji Kim
The advancement of quantum computing threatens traditional public-key cryptographic algorithms such as RSA and ECC, both vulnerable to Shor’s algorithm.
As most Transport Layer Security (TLS) deployments still rely on these quantum-vulnerable algorithms for key exchange and digital signatures, the transition to Post-Quantum Cryptography (PQC), standardized by NIST, has become increasingly urgent.
Given the critical role of TLS in securing Internet communications, identifying and replacing these algorithms in operational deployments is a key priority for ensuring long-term security.
We present an automated method for analyzing TLS network packets to detect the use of quantum-vulnerable algorithms. Our approach combines hierarchical packet filtering, protocol-aware parsing, and a hybrid certificate extraction technique that enables analysis of encrypted TLS~1.3 certificates without full decryption. The framework achieved over 96\% detection accuracy, and our certificate parsing strategy improves overall throughput. Applying it to domestic and international TLS deployments revealed that domestic systems lag behind in quantum-readiness, underscoring the need for greater adoption of TLS~1.3, hybrid key exchanges (RSA/ECC with PQC), and short-lived certificates. Beyond TLS, the underlying methodology can be extended to other secure communication protocols, offering a versatile foundation for post-quantum migration strategies. These results highlight the practicality of our method for large-scale, real-time TLS assessments and its potential to guide PQC adoption.
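A minimal local illustration of the kind of check the framework automates, assuming the third-party `cryptography` package; it inspects a single PEM certificate's public-key algorithm and is not the paper's packet-level pipeline.

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import dsa, ec, ed448, ed25519, rsa

# public-key types breakable by Shor's algorithm
CLASSICAL = (rsa.RSAPublicKey, ec.EllipticCurvePublicKey, dsa.DSAPublicKey,
             ed25519.Ed25519PublicKey, ed448.Ed448PublicKey)

def classify_certificate(pem_bytes: bytes) -> str:
    """Report whether the certificate's public key uses a quantum-vulnerable algorithm."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    key = cert.public_key()
    if isinstance(key, CLASSICAL):
        return f"quantum-vulnerable: {type(key).__name__}"
    return "not RSA/ECC/DSA/EdDSA (possibly post-quantum or unrecognized)"

# usage: classify_certificate(open("server.pem", "rb").read())
```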
Susan Hohenberger, Brent Waters, David J. Wu
An aggregate signature scheme allows a user to take $N$ signatures from $N$ users and aggregate them into a single short signature. One approach to aggregate signatures uses general-purpose tools like indistinguishability obfuscation or batch arguments for NP. These techniques are general, but lead to schemes with very high concrete overhead. On the practical end, the seminal work of Boneh, Gentry, Lynn, and Shacham (EUROCRYPT 2003) gives a simple and practical scheme, but in the random oracle model. In the plain model, current practical constructions either rely on interactive aggregation or impose restrictions on how signatures can be aggregated (e.g., same-message aggregation, same-signer aggregation, or support for only sequential or synchronized aggregation).
In this work, we focus on simple aggregate signatures in the plain model. We construct a pairing-based aggregate signature scheme that supports aggregating an a priori bounded number of signatures $N$. The size of the aggregate signature is just two group elements. Security relies on the (bilateral) computational Diffie-Hellman (CDH) problem in a pairing group. To our knowledge, this is the first group-based aggregate signature in the plain model where (1) there is no restriction on what type of signatures can be aggregated; (2) the aggregated signature contains a constant number of group elements; and (3) security is based on static falsifiable assumptions in the plain model. The limitation of our scheme is that it relies on a set of public parameters (whose size scales with $N$), and individual signatures (before aggregation) also have size that scales with $N$. Essentially, individual signatures contain some additional hints to enable aggregation.
Our starting point is a new notion of slotted aggregate signatures. Here, each signature is associated with a "slot" and we only support aggregating signatures associated with distinct slots. We then show how to generically lift a slotted aggregate signature scheme into a standard aggregate signature scheme at the cost of increasing the size of the original signatures.
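For context, the random-oracle-model scheme of Boneh, Gentry, Lynn, and Shacham cited above aggregates BLS signatures as follows (recalled here only for comparison; it is not the construction of this paper). With secret keys $x_i$, public keys $\mathsf{pk}_i = g^{x_i}$, and a hash-to-group function $H$ modeled as a random oracle,
$$\sigma_i = H(m_i)^{x_i}, \qquad \sigma = \prod_{i=1}^{N} \sigma_i, \qquad \text{verify: } e(\sigma, g) \stackrel{?}{=} \prod_{i=1}^{N} e\big(H(m_i), \mathsf{pk}_i\big).$$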
Brent Waters, David J. Wu
Threshold cryptography is a standard technique for distributing trust by splitting cryptographic keys into multiple shares held by different parties. The classic model of threshold cryptography assumes either that a trusted dealer distributes the shares to the different parties (and in doing so, also knows the overall secret) or that the users participate in an interactive distributed key-generation protocol to derive their individual shares. In recent years, several works have proposed a new model where users can independently choose a public key, and there is a public and deterministic function that derives the joint public key associated with a group of users from their individual keys. Schemes with this silent (i.e., non-interactive) setup procedure allow us to take advantage of the utility of threshold cryptography without needing to rely on a trusted dealer or an expensive interactive setup phase.
Existing works have primarily focused on threshold policies. This includes notions like threshold signatures (resp., encryption) with silent setup (where only quorums with at least $T$ users can sign (resp., decrypt) a message) and distributed broadcast encryption (a special case of threshold encryption where the threshold is 1). Currently, constructions that support general threshold policies either rely on strong tools such as indistinguishability obfuscation and witness encryption, or analyze security in idealized models like the generic bilinear group model. The use of idealized models is due to the reliance on techniques for constructing succinct non-interactive arguments of knowledge (SNARKs).
In this work, we introduce a new pairing-based approach for constructing threshold signatures and encryption schemes with silent setup. On the one hand, our techniques directly allow us to support expressive policies like monotone Boolean formulas in addition to thresholds. On the other hand, we only rely on basic algebraic tools (i.e., a simple cross-term cancellation strategy), which yields constructions with shorter signatures and ciphertexts compared to previous pairing-based constructions. As an added bonus, we can also prove (static) security under $q$-type assumptions in the plain model. Concretely, the signature size in our distributed threshold signature scheme is 3 group elements and the ciphertext size in our distributed threshold encryption scheme is 4 group elements (together with a short tag).
Pratish Datta, Abhishek Jain, Zhengzhong Jin, Alexis Korb, Surya Mathialagan, Amit Sahai
Incrementally verifiable computation (IVC) [Valiant, TCC'08] allows one to iteratively prove that a configuration $x_0$ reaches another configuration $x_T$ after repeated applications of a (possibly non-deterministic) transition function $\mathcal{M}$. The key requirement is that the size of the proof and the time to update the proof is sublinear in the number of steps $T$. IVC has numerous applications, notably including proving correctness of virtual machine executions in blockchains.
Currently, IVC for $\mathsf{NP}$ is only known to exist in non-standard idealized models, or based on knowledge assumptions. No constructions are known from standard assumptions, or even in the random oracle model. Furthermore, as observed in prior works, since IVC for $\mathsf{NP}$ implies adaptive succinct non-interactive arguments for $\mathsf{NP}$, the work of Gentry-Wichs [STOC'11] seemingly poses barriers to constructing IVC for $\mathsf{NP}$ from falsifiable assumptions.
In this work, we observe that the Gentry-Wichs barrier can be overcome for IVC for NP. We show the following two results:
- Assuming subexponential $i\mathcal{O}$ and LWE (or bilinear maps), we construct IVC for all $\mathsf{NP}$ with proof size $\mathsf{poly}(|x_i|,\log T)$.
- Assuming subexponential $i\mathcal{O}$ and injective PRGs, we construct IVC for trapdoor IVC languages where the proof size is $\mathsf{poly}(\log T)$. Informally, an IVC language has a trapdoor if there exists a (not necessarily easy to find) polynomial-sized circuit that determines if a configuration $x_i$ is reachable from $x_0$ in $i$ steps.
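To make the IVC syntax used above concrete, here is a small, deliberately non-succinct Python sketch: the "proof" is simply the full list of witnesses, so it illustrates the prove-step/verify interface but not the $\mathsf{poly}(\log T)$ proof-size requirement; the transition function `M` is a placeholder.

```python
def M(x, w):
    # placeholder (possibly non-deterministic) transition function: x_{i+1} = M(x_i, w_i)
    return (x + w) % 1000003

def prove_step(x_i, w_i, proof_i):
    """Extend a proof that x_0 reaches x_i in i steps to one for x_{i+1} = M(x_i, w_i).
    Here the proof is just the list of witnesses, so it grows linearly in T;
    a true IVC scheme must keep it poly(|x_i|, log T) (or poly(log T))."""
    return proof_i + [w_i]

def verify(x_0, x_T, T, proof_T):
    if len(proof_T) != T:
        return False
    x = x_0
    for w in proof_T:
        x = M(x, w)              # re-executes all T steps; a real verifier must be sublinear in T
    return x == x_T

# usage: prove that configuration 7 reaches some x_T after 3 steps
proof, x = [], 7
for w in [5, 11, 2]:
    proof, x = prove_step(x, w, proof), M(x, w)
print(verify(7, x, 3, proof))    # -> True
```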
Gideon Samid
A trivial ciphertext is decrypted, per all its bits, to a corresponding, singular plaintext. A non-trivial ciphertext (NTC) comprises decryption-proper bits as well as decryption-unfitting bits (decoys), with a decryption-discrimination key needed to identify each category. A non-trivial ciphertext may also decrypt to different plaintext options, determined by the choice of decryption key. NTC offer an important advantage: they give the user ad-hoc power to buy extra security paid for with an inflated ciphertext, prospectively changing the strategic balance between cryptography and cryptanalysis. NTC involve a plaintext alphabet that includes a 'plaintext empty letter' (PEL). Using a particular decryption key, certain ciphertext bits may decrypt to the PEL and hence have no impact on the decrypted plaintext message constructed from the other bits. Using a different key, different ciphertext bits will decrypt to the PEL, while a different collection of the remaining bits will decrypt to a different plaintext message. Thereby one achieves contents discrimination and decryption variety. The number of decoy bits is open-ended and is unilaterally controlled by the transmitter, who thus determines the level of projected security -- asymptotically up to Vernam's perfection.
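A toy Python sketch of the decoy idea described above (our own illustration, with a shared set of positions playing the role of the decryption-discrimination key; it conveys the decoy/PEL mechanism only and offers no real security):

```python
import secrets
import string

ALPHABET = string.ascii_lowercase + " "

def encrypt_with_decoys(message, real_positions, total_len):
    """Place the message symbols at the key-designated positions and fill every other
    position with a random decoy symbol, which the key-holder will treat as the
    'plaintext empty letter' and drop."""
    assert len(real_positions) == len(message) and total_len >= len(message)
    ciphertext = [secrets.choice(ALPHABET) for _ in range(total_len)]   # decoys
    for pos, ch in zip(real_positions, message):
        ciphertext[pos] = ch
    return "".join(ciphertext)

def decrypt(ciphertext, real_positions):
    # symbols outside real_positions decrypt to the empty letter, i.e. are dropped
    return "".join(ciphertext[p] for p in sorted(real_positions))

msg = "attack at dawn"
total = 40                                   # transmitter-chosen ciphertext inflation
key = sorted(secrets.SystemRandom().sample(range(total), len(msg)))
ct = encrypt_with_decoys(msg, key, total)
print(ct)
print(decrypt(ct, key))                      # -> "attack at dawn"
```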
A trivial ciphertext is decrypted per all its bits to a corresponding, singular plaintext. A non-trivial ciphertext (NTC) is comprising decryption-proper bits as well as decryption unfitting bits (decoys) with a decryption discrimination key needed to identify each category. A non-trivial ciphertext may also decrypt to different plaintext options, determined by choice of decryption keys. NTC offer important advantages: giving ad-hoc power to buy extra security paid for with an inflated ciphertext, prospectively changing the strategic balance between cryptography and cryptanalysis. NTC involve plaintext alphabet that includes a 'plaintext empty letter' (PEL). Using a particular decryption key, certain ciphertext bits may decrypt to the PEL, and hence have no impact on the decrypted plaintext message constructed from the other bits. Using a different key, different ciphertext bits will decrypt to the PEL while a different collection of the remaining bits will decrypt to a different plaintext message. Thereby one achieves contents discrimination and decryption variety. The number of decoy bits is open-ended and is unilaterally controlled by the transmitter who thus determines the level of projected security -- asymptotically up to Vernam's perfection.