CryptoDB
Recently updated IACR publications
CryptoDB is periodically updated by manual and automatic processes. Whenever a paper is added or modified (for example, when a video becomes available), it appears in this list.
A separate history of changes tracks schema and process changes. There is further information about CryptoDB in the documentation.
2025
TCC
Time/Space Tradeoffs for Generic Attacks on Delay Functions
Abstract
Delay functions have the goal of being inherently slow to compute.
They have been shown to be useful for generating public randomness in distributed systems in the presence of a dishonest majority of network participants, a task that is impossible to solve without such functions due to Cleve's seminal impossibility result (STOC 1986).
Currently, little is known on how to construct secure delay functions or how to analyze the security of newly proposed candidate constructions.
In this work, we explore the time/space tradeoffs of generic attacks for a large class of potential delay function designs.
We consider delay functions $F^T$, which are computed as
\[
F^T := \underbrace{F(\dots F(F}_T(x)) \dots ),
\]
where $F : [N] \to [N]$ is some round function, in the presence of an adversary who is given an advice string of some bounded size, has oracle access to $F$, and would like to compute $F^T$ on a random input $x$ using fewer than $T$ sequential oracle calls.
We show that for both random and arbitrary functions $F$ there exist non-trivial adversaries who successfully evaluate $F^T$ using only $T/4$ sequential calls to the oracle $F$, when given a large enough advice string.
We also show that there exist round functions $F$ for which the adversary cannot compute $F^T$ using fewer than $T/2$ sequential queries, unless they receive a large advice string or can perform a large number of oracle queries to $F$ in parallel.
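To make the iterated-evaluation setting concrete, here is a toy Python sketch (illustrative only; this is not the paper's $T/4$ attack, and all sizes are arbitrary). Honest evaluation of $F^T$ takes $T$ sequential oracle calls, while an evaluator holding chain-based advice, i.e., $F^{T/2}$ precomputed on a few points, finishes in only $T/2$ sequential calls whenever its chain meets an advice point.

    # Toy model: F is a random function [N] -> [N], accessed as an oracle.
    import random

    N, T = 1 << 16, 1 << 10
    table = [random.randrange(N) for _ in range(N)]
    F = lambda x: table[x]

    def iterate(x, k):              # k sequential oracle calls
        for _ in range(k):
            x = F(x)
        return x

    half = T // 2
    S = [random.randrange(N) for _ in range(64)]
    advice = {y: iterate(y, half) for y in S}   # offline advice: F^{T/2} on a few points

    def advised_eval(x):
        # Walk the chain; on hitting an advice point after s steps, jump T/2
        # steps ahead for free, then finish: s + (T/2 - s) = T/2 calls in total.
        for s in range(half):
            if x in advice:
                return iterate(advice[x], half - s)
            x = F(x)
        return iterate(x, half)     # no hit: plain evaluation, T calls in total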
2025
TCC
Generalized and Unified Equivalences between Hardness and Pseudoentropy
Abstract
Pseudoentropy characterizations provide a quantitatively precise demonstration of the close relationship between computational hardness and computational randomness. We prove a unified pseudoentropy characterization that generalizes and strengthens previous results for both uniform and non-uniform models of computation. Our characterization holds for a general family of entropy notions that encompasses the common notions of Shannon entropy and min entropy as special cases. Moreover, we show that the characterizations for different entropy notions can be simultaneously achieved by a single, universal function that simultaneously witnesses computational hardness and computational randomness. A key technical insight of our work is that the notion of weight-restricted calibration from the recent literature on algorithm fairness, along with standard computational indistinguishability (known as multiaccuracy in the fairness literature), suffices for proving pseudoentropy characterizations for general entropy notions. This demonstrates the power of weight-restricted calibration to enhance the classic Complexity-Theoretic Regularity Lemma (Trevisan, Tulsiani, and Vadhan, 2009) and Leakage Simulation Lemma (Jetchev and Pietrzak, 2014) and allows us to achieve an exponential improvement in the complexity dependency on the alphabet size compared to the pseudoentropy characterizations by Casacuberta, Dwork, and Vadhan (2024) based on the much stronger notion of multicalibration. We show that the exponential dependency on the alphabet size is inevitable for multicalibration as well as for the weaker notion of calibrated multiaccuracy.
2025
TCC
Efficient Garbled Pseudorandom Functions and Lookup Tables from Minimal Assumption
Abstract
Yao's garbled circuits have received huge attention in both theory and practice. While garbled circuits can be constructed using minimal assumption (i.e., the existence of pseudorandom functions or one-way functions), the state-of-the-art constructions (e.g., Rosulek-Roy, Crypto 2021) are based on stronger assumptions. In particular, the ``Free-XOR'' technique (Kolesnikov-Schneider, ICALP 2008) is essential in these state-of-the-art constructions, and their security can only be proven in the random oracle model, or rely on the ``circular-correlation robust hash'' assumption.
In this paper, we aim to develop new techniques to construct efficient garbling schemes using minimal assumptions. Instead of generically replacing the Free-XOR technique, we focus on garbling schemes for specific functionalities. We successfully eliminate the need for Free-XOR in several state-of-the-art schemes, including one-hot garbling (Heath and Kolesnikov, CCS 2021), garbled pseudorandom functions, and garbled lookup tables (Heath, Kolesnikov and Ng, Eurocrypt 2024). Our schemes are based on minimal assumptions, i.e., standard pseudorandom functions (PRFs); in particular, we remove the need for circular security.
The performance of our schemes is almost as efficient as the best known results, up to a small constant factor. Namely, for any lookup table $\{0,1\}^n \to \{0,1\}^m$, our scheme takes $n + (5n+9)m\lambda + 2^n \cdot m$ bits of communication, where $\lambda$ is the security parameter of the PRF.
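For a rough sense of scale, with illustrative parameters $n = 8$, $m = 16$, and $\lambda = 128$ (our own choice, not values from the paper), the communication cost evaluates to
\[
n + (5n+9)m\lambda + 2^n \cdot m = 8 + 49 \cdot 16 \cdot 128 + 256 \cdot 16 = 104456 \text{ bits} \approx 12.8\ \text{KiB}.
\]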
2025
TCC
Lower Bounds on Inner-Product Functional Encryption from All-or-Nothing Encryption Primitives
Abstract
A functional encryption (FE) scheme is a versatile form of encryption in which a functional key ${\sf sk}_{\bf v}$ decrypts a ciphertext to ${\bf v}({\bf m})$, the function evaluated on the plaintext ${\bf m}$. Security requires that even an adversary who collects multiple functional keys learns only the evaluations but nothing more about the plaintext. This work considers an adversary obtaining an \emph{unbounded number of colluding keys}, a natural setting for many FE applications. We aim to understand which cryptographic primitives are sufficient to construct an unbounded-collusion FE.
This work proves negative results: we separate unbounded-collusion FEs from many encryption primitives that are ``all-or-nothing.'' Introduced by Garg, Mahmoody and Mohammed (CRYPTO '17), an all-or-nothing encryption scheme allows an adversary to learn either \emph{the whole plaintext}, if the authorized secret key is given, or \emph{nothing}. Hence, we rule out such FE constructions from fully homomorphic encryption (FHE), attribute-based encryption (ABE), and predicate encryption (PE). Our impossibility holds for private-key and selectively secure FEs for a minimal function class, \emph{inner products}, making the impossibility stronger.
Our separation is in the ``monolithic model,'' similar to the black-box model, but the primitive is allowed to evaluate a circuit that queries the primitive itself as an oracle.
Our result extends the beautiful work of Hajiabadi et al.~(TCC '24), which separated inner-product functional encryption (IPFE) from any primitive that exists in the random oracle model. Our result is perhaps surprising, as IPFE was proposed by Abdalla et al.~(PKC '15) as a functionality that is easier to construct, and indeed there are several IPFEs based on standard \emph{algebraic} assumptions, such as Decisional Diffie-Hellman or Learning-with-Errors. Moreover, \emph{bounded}-collusion FEs for all circuits can be constructed in a black-box way from minimal assumptions, i.e., public-key FE from public-key encryption and secret-key FE from one-way functions, by Ananth and Vaikuntanathan (TCC '19).
2025
TCC
A Computational Monogamy of Entanglement Theorem with Applications to Non-Interactive Quantum Key Distribution
Abstract
Quantum key distribution (QKD) enables Alice and Bob to exchange a secret key over a public, untrusted quantum channel.
Compared to classical key exchange, QKD achieves \emph{everlasting security}: after the protocol execution the key is secure against adversaries that can do unbounded computations.
On the flip side, while classical key exchange can be achieved non-interactively (with two simultaneous messages between Alice and Bob), no non-interactive protocol is known that provides everlasting security, even using quantum information.
In this work, we make progress on this problem. Our main technical contribution is a \emph{computational} variant of the celebrated \emph{monogamy of entanglement} game, where the secret is only computationally hidden from the players, rather than information-theoretically. In this setting, we prove a negligible bound on the maximal winning probability over all strategies.
As a direct application, we obtain a non-interactive (simultaneous message) QKD protocol from any post-quantum classical non-interactive key exchange, which satisfies everlasting security \emph{assuming Alice and Bob agree on the same key}.
The protocol only uses EPR pairs and standard and Hadamard basis measurements, making it suitable for near-term quantum hardware.
We also propose how to convert this protocol into a two-round protocol that satisfies the standard notion of everlasting security.
Finally, we prove a \emph{no-go theorem} which establishes that (in contrast to the case of ordinary multi-round QKD) entanglement is necessary for non-interactive QKD, i.e., the messages sent by Alice and Bob cannot both be unentangled with their respective quantum memories if the protocol is to be everlastingly secure.
2025
TCC
Optimal Bounds on the Existence of Pseudorandom Codes
Abstract
A pseudorandom code is a keyed error-correction scheme with the property that any polynomial number of encodings appear random to any computationally bounded adversary. We show that the pseudorandomness of any code tolerating a constant rate of random errors cannot be based on black-box reductions to almost any generic cryptographic primitive: for instance, anything that can be built from random oracles, generic multilinear groups, and virtual black-box obfuscation. Our result is optimal, as Ghentiyala and Guruswami (2024) observed that pseudorandom codes tolerating any sub-constant rate of random errors exist assuming just one-way functions.
The key technical ingredient in our proof is the hypercontractivity theorem for Boolean functions, which we use to prove our impossibility in the random oracle model. It turns out that this easily extends to a separation of pseudorandom codes tolerating a constant rate of random errors from ``crypto oracles,'' a notion introduced and shown to be capable of implementing all the primitives mentioned above by Lin, Mook, and Wichs (EUROCRYPT 2025).
2025
TCC
Seedless Condensers for Efficiently Samplable Sources
Abstract
Is it possible to efficiently convert any arbitrary source with sufficiently high (min-)entropy into nearly uniformly random bits? Information-theoretically, this is achievable using seeded extractors, provided the source is independent of the seed. However, in the absence of any such independence guarantee, no solution is possible for general computationally unbounded sources. Even for efficiently samplable sources, we cannot extract randomness that is statistically close to uniform, but can at best condense the randomness into an output that is only missing logarithmically many bits of entropy compared to the uniform distribution. Dodis, Ristenpart and Vadhan (TCC '12) constructed such seeded condensers for efficiently samplable sources that can depend arbitrarily on the seed, assuming near-optimal security of collision-resistant hash functions. In this work, we investigate whether such condensers can be made seedless. We present several new results:
- We construct seedless condensers for all efficiently samplable sources assuming near-optimal security of keyless multi-collision-resistant hash functions. We justify this assumption by showing it holds in the auxiliary-input random oracle model.
- We show that such seedless condensers cannot be proven secure via a black-box reduction from any standard game-based assumption, even if we assume optimal exact security. In fact, we give a general black-box separation that applies to a broad class of seedless primitives, with seedless condensers as a special case.
- We consider the class of efficiently samplable distributed sources where two parties jointly sample randomness $x = (x_0, x_1)$, with one party honestly choosing $x_b$ uniformly at random while the other party adversarially chooses $x_{1-b}$ depending on $x_b$. We show how to construct seedless condensers for such sources assuming near-optimal security of game-based assumptions: either (1) adaptive one-way functions, or (2) a certain form of (single-input) correlation-intractable hash functions that can be instantiated from key-dependent-message (KDM) secure encryption.
2025
TOSC
New General MDS Matrix Construction Method Towards Low Area
Abstract
Maximum Distance Separable (MDS) matrices have been widely used in symmetric cryptographic primitives because of their excellent cryptographic properties. However, due to their heavy area cost, MDS matrices larger than $4 \times 4$ are rarely used in ciphers, even though they have a larger branch number. In this paper, we propose a general method for constructing MDS matrices with low implementation area cost, using matrix decomposition, automatic search, and symbolic computation techniques. According to matrix decomposition theory, every invertible matrix can be decomposed into a sequence of elementary matrices: Type-1 (row switching), Type-2 (row multiplication) and Type-3 (row addition) elementary matrices. We first propose a greedy algorithm to construct MDS matrix patterns with as few Type-3 elementary matrices as possible. Then, we build an automatic search model to minimize the implementation area of the multiplication coefficients used in Type-3 elementary matrices. Lastly, another greedy strategy is introduced to further reduce the implementation area of MDS matrix patterns by adding a few Type-2 elementary matrices. In comparison to previous methods, our approach is more general and effective for constructing lower-area MDS matrices. To demonstrate the efficiency of our method, we apply the framework to constructing $m \times m$ MDS matrices over $\mathbb{F}_{2^n}$ or $\mathrm{GL}(n, \mathbb{F}_2)$, where $m \in \{4, 5, 6, 7, 8\}$ and $n \in \{4, 8, 16, 32, 64\}$. The $4 \times 4$ MDS matrices constructed by our method can also reach the minimum area, and the $5 \times 5$, $6 \times 6$ and $7 \times 7$ MDS matrices constructed by our method have lower area than previous ones, as do the $8 \times 8$ MDS matrices with $n \in \{16, 32, 64\}$.
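As a toy illustration of the elementary-matrix viewpoint (a Python sketch under our own simplifications, not the paper's search algorithm): we compose random Type-3 elementary matrices over the prime field $\mathbb{F}_{257}$, standing in for $\mathbb{F}_{2^n}$ or $\mathrm{GL}(n, \mathbb{F}_2)$, and test the MDS property of the product; Type-1 and Type-2 matrices are shown for completeness.

    # Elementary matrices over F_257 and an MDS test (every square submatrix
    # of an MDS matrix is nonsingular). Sizes and the field are illustrative.
    import itertools, random
    import numpy as np

    P, m = 257, 4

    def type1(i, j):                 # Type-1: row switching
        E = np.eye(m, dtype=int); E[[i, j]] = E[[j, i]]; return E

    def type2(i, c):                 # Type-2: row multiplication by c
        E = np.eye(m, dtype=int); E[i, i] = c % P; return E

    def type3(i, j, c):              # Type-3: add c * row j to row i
        E = np.eye(m, dtype=int); E[i, j] = c % P; return E

    def det_int(M):                  # exact integer determinant (tiny matrices)
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] * det_int([r[:j] + r[j + 1:] for r in M[1:]])
                   for j in range(len(M)))

    def is_mds(M):
        return all(det_int(M[np.ix_(rs, cs)].tolist()) % P != 0
                   for k in range(1, m + 1)
                   for rs in itertools.combinations(range(m), k)
                   for cs in itertools.combinations(range(m), k))

    M = np.eye(m, dtype=int)
    while not is_mds(M):             # retry random Type-3 products until MDS
        M = np.eye(m, dtype=int)
        for _ in range(10):
            i, j = random.sample(range(m), 2)
            M = type3(i, j, random.randrange(1, P)) @ M % P
    print(M)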
2025
TOSC
LEAP: High-Performance Lattice-Based Pseudorandom Number Generator
Abstract
At EUROCRYPT 2012, Banerjee, Peikert, and Rosen introduced the Ring Learning With Rounding (RLWR) problem and constructed lattice-based pseudorandom functions for the first time. Subsequently, Banerjee, Brenner, Leurent, Peikert, and Rosen named this family of lattice-based pseudorandom functions SPRING, reanalyzed its security, and gave two practical instances. Building upon the SPRING family, Bouillaguet, Delaplace, Fouque, and Kirchner further extended it to a pseudorandom number generator called SPRING-RS. It is quite fast but still has a certain gap compared with classical pseudorandom number generators based on symmetric cryptography, and its key size is large. In this work, we present LEAP, a lattice-based pseudorandom number generation scheme characterized by high performance, adaptable parameter selection, and extensive support for parallel processing. Unlike the RLWR problem as used in public-key cryptography, LEAP treats the public parameter in the RLWR problem as part of the key as well. Hiding the public parameters leads to larger lattice dimensions and higher standard deviations of error in the concrete security analysis compared to RLWR under identical parameters. These adjustments imply enhanced security, allowing smaller parameters while maintaining the same security level, thereby improving performance. Additionally, we introduce a novel framework that reuses multiple parameters, significantly enhancing overall performance. To mitigate the increase in key size caused by treating the public parameter as the key, we design a pseudorandom number generator leveraging the small key size of a variant of the NTRU assumption, which provides the key required for the high-performance pseudorandom number generator. Compared with SPRING-RS, LEAP reduces the key size by 1.71x while improving performance by 3.30x at the same security level. Under the AVX2 and AVX512 implementations, the performance reaches 1.61 cycles/byte and 1.14 cycles/byte, and the throughput reaches 16.12 Gbps and 22.60 Gbps, respectively.
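For background, the rounding map at the heart of (R)LWR deterministically maps $\mathbb{Z}_q$ down to a smaller modulus $\mathbb{Z}_p$; the following minimal scalar Python sketch uses toy moduli of our own choosing (not LEAP's parameters).

    # Scalar Learning-With-Rounding map: derandomize a*s mod q by rounding it
    # from Z_q down to Z_p (q > p); RLWR-based PRFs apply the ring analogue.
    q, p = 2**12, 2**8

    def lwr(a, s):
        # floor variant of round_{q -> p}(a * s mod q)
        return ((a * s % q) * p) // q

    print(lwr(0x123, 0x45))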
2025
TOSC
HCTR+: An Optimally Secure TBC-Based Accordion Mode
Abstract
The design of tweakable wide-block ciphers has advanced significantly over the past two decades. This evolution began with the wide-block cipher by Naor and Reingold. Since then, numerous constructions have been proposed, many of which are built on existing block ciphers and are secure up to the birthday bound for the total number of blocks queried. Although there has been a recent slowdown in the development of such ciphers, the latest NIST proposal for Accordion modes has reignited the interest and momentum in the design and analysis of these ciphers. Although new designs have emerged, their security often falls short of optimal (i.e., n-bit) security, where n is the output size of the primitive. In this direction, designing an efficient tweakable wide-block cipher with n-bit security seems to be an interesting research problem to the symmetric key research community. An optimally secure tweakable wide-block cipher mode can easily be turned into a misuse-resistant RUP secure authenticated encryption scheme with optimal security. This paper proposes HCTR+, which turns an n-bit tweakable block cipher (TBC) with n-bit tweak into a variable input length tweakable wide block cipher. Unlike tweakable HCTR, HCTR+ ensures n-bit security regardless of tweak repetitions. We also propose two TBC-based almost-xor-universal hash functions, named PHASH+ and ZHASH+, and use them as the underlying hash functions in the HCTR+ construction to create two TBC-based n-bit secure tweakable wide block cipher modes, PHCTR+ and ZHCTR+. Experimental results show that both PHCTR+ and ZHCTR+ exhibit excellent software performance when their underlying TBC is instantiated with Deoxys-BC-256.
2025
TOSC
Minimized PRFs from Public Permutations
Abstract
The sum of permutations is a popular way to turn a PRP (like a block cipher) into a PRF. However, with the rise of permutation-based cryptography, it makes sense to investigate the possibility of designing a PRF as the sum of externally keyed public permutations. This challenge was initiated by Chen et al. (CRYPTO 2019), who presented the Sum of Even-Mansours (SoEM) construction. Sibleyras and Todo (CT-RSA 2023) later minimized the amount of key maskings in this construction with their Keyed Sum of Permutations (KSoP). However, both constructions have in common that their security proofs require two independent keys and two independent public random permutations. In this work, we investigate the possibilities to reduce this amount of randomness by introducing three constructions: sirP, which uses two independent permutations but one key; sirK, which uses two independent keys but one permutation; and sirX, which uses a single permutation and a single key. The constructions are further generalized by a parameter prescribing the data input size relative to the permutation size. We present general security results for all three variants, and demonstrate that, for certain parameter choices, the security bounds match those of SoEM and KSoP, but with reduced randomness.
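For context, a toy Python sketch of the two-key, two-permutation shape of SoEM (the exact key masking in the paper may differ, and the sizes here are illustrative):

    # Sum of two independently keyed public permutations:
    # F(x) = P1(x ^ k1) XOR P2(x ^ k2).
    import secrets

    N = 1 << 16
    rng = secrets.SystemRandom()

    def random_perm(n):              # a public random permutation, as a table
        perm = list(range(n))
        rng.shuffle(perm)
        return perm

    P1, P2 = random_perm(N), random_perm(N)      # two independent public permutations
    k1, k2 = rng.randrange(N), rng.randrange(N)  # two independent secret keys

    def soem(x):
        return P1[x ^ k1] ^ P2[x ^ k2]

    print(soem(0x1234))

Roughly, in this picture sirP keeps two permutations but one key, sirK one permutation but two keys, and sirX one of each; the paper's actual constructions and bounds differ in the details.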
2025
TOSC
Revisiting Vector-Input MACs
Abstract
At Eurocrypt 2006, Rogaway and Shrimpton (RS06) presented the idea of a vector-input MAC that accepts a vector consisting of variable-length bit strings. A vector-input MAC could be built on a conventional (bit) string-input MAC, e.g., CMAC, with an injective encoding. RS06 pointed out an efficiency loss in this method and presented a general construction S2V that is more efficient than the encoding-based method. RS06 made significant scientific and practical impacts on the field of mode constructions, in particular for the invention of deterministic authenticated encryption. However, despite its potential, their work on vector-input MAC has been largely overlooked for more than 18 years. We revisit RS06's treatment of vector-input MAC and show that the topic is more subtle than initially considered. We first formally define the problem of vector-input MAC and propose a natural efficiency goal for vector-input MACs as a counterpart of what has been considered for string-input MACs. Since S2V with any string-input MAC mode never achieves this efficiency goal, we propose a family of new MAC modes, VecMAC, that achieves this goal. VecMAC is similar to the popular PMAC, in particular in its use of tweaks. However, the purpose of introducing tweaks differs from PMAC: here, tweaks are tailored to handle vectors without redundant block cipher calls. We also provide implementation results that show an advantage over the conventional method.
2025
TOSC
SoK: On Shallow Weak PRFs: A Common Symmetric Building Block for MPC Protocols
Abstract
A growing number of advanced cryptographic protocols and constructions rely on symmetric primitives known as weak pseudo-random functions (wPRFs). These functions differ significantly from traditional PRFs: they operate in constrained models where inputs are sampled uniformly at random and are not chosen by the adversary. In practice, many of these functions are implemented as shallow, non-iterated constructions with simple circuit representations.
This Systematization of Knowledge (SoK) provides a unified view of shallow wPRFs (swPRFs), which we define as wPRFs computable by low-depth circuits and primarily used in different secure computation protocols. We identify and classify four main families of swPRFs: alternating-moduli wPRFs, Goldreich's PRG family, and the VDLPN and EALPN constructions. For each, we present formal definitions, algorithmic descriptions, known variants, cryptanalytic results, and concrete parameter sets.
In addition to surveying the literature, our goal is to shift the focus from asymptotic analyses to concrete cryptanalysis. To this end, we provide a set of cryptanalytic challenges along with reference SAGE implementations for all the primitives discussed. We aim to encourage the symmetric cryptography community, particularly cryptanalysts, to rigorously evaluate the practical security levels offered by swPRFs, as concrete analyses are currently lacking. Given their growing use in high-level protocols and constructions, any cryptanalytic breakthrough on these primitives could directly affect the security of the broader cryptographic systems that rely on them.
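As a concrete member of the first family, here is a minimal Python sketch assuming the mod-2/mod-3 alternating-moduli wPRF of Boneh et al. (TCC 2018); the key size is illustrative.

    # Depth-2 "alternating moduli" weak PRF: a secret mod-2 linear layer
    # followed by a mod-3 sum. Inputs are uniformly random bit vectors,
    # matching the weak-PRF security model described above.
    import secrets

    n = 128
    A = [[secrets.randbelow(2) for _ in range(n)] for _ in range(n)]  # secret key

    def wprf(x):
        w = [sum(a & b for a, b in zip(row, x)) % 2 for row in A]  # mod-2 layer
        return sum(w) % 3                                          # mod-3 layer

    x = [secrets.randbelow(2) for _ in range(n)]                   # random input
    print(wprf(x))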
2025
TOSC
Attacking Split-and-Lookup-Based Primitives Using Probabilistic Polynomial System Solving: Applications to Round-Reduced Monolith and Full-Round Skyscraper
Abstract
In recent years, many hash functions have been introduced to satisfy the pressing need of some zero-knowledge protocols for primitives allowing a low-degree verification of their round function when arithmetized over a large field. While this can be achieved by restricting their sub-components to low-degree functions (and their inverses), the newest primitives in this category also leverage the intricacies of some proof systems to use “Split-and-Lookup” non-linear functions that essentially apply a small S-box in parallel over the binary representation of a field element. Such components excel at hindering attacks relying on polynomial system solving, but they offer poor security against statistical attacks. On the other hand, low-degree monomials offer the opposite guarantees, being strong against statistical attacks. Several primitives have recently been proposed that combine such components in different ways in order to get the best of both.
In this paper, we target such primitives by relying on the low-degree components to allow a low-cost polynomial solving step. The weakness of Split-and-Lookups against linear attacks is used to simplify these systems, and their weakness against differential attacks is then used to propagate across many rounds the differential patterns obtained during polynomial solving. We instantiate this general approach by attacking round-reduced Monolith and providing a distinguisher on full-round Skyscraper. These results then shed some light on how best to combine the different types of components to achieve the highest security.
2025
TOSC
Multidimensional Linear Cryptanalysis of AEGIS
Abstract
AEGIS is a family of authenticated encryption with associated data (AEAD) ciphers targeting highly efficient software implementations. The main operation in AEGIS is the AES round function, allowing it to make full use of the latter's cryptographic properties and efficient implementations. AEGIS includes three variants, AEGIS-128, AEGIS-128L, and AEGIS-256, which claim 128, 128, and 256 bits of security, respectively. AEGIS-128 has been selected for the final portfolio of the CAESAR competition. In this paper, we perform multidimensional linear cryptanalysis of AEGIS. We first dig into the reason for the inconsistency between the byte and bit trails in AEGIS and propose an improved truncated model to efficiently derive the accurate minimum number of active S-boxes. Based on the derived byte trails, we perform a deep theoretical analysis of the correlation propagation in AEGIS and derive linear approximations with high correlations. Moreover, we find interesting properties of AEGIS that enable us to derive a number of equivalent but independent linear approximations. By combining these linear approximations, we perform multidimensional linear distinguishing attacks on AEGIS-128, AEGIS-256, and AEGIS-128L with complexities $2^{126.46}$, $2^{154.11}$, and $2^{144.44}$, respectively. These results suggest that AEGIS-128 and AEGIS-256 do not meet their security claims. We also apply the improved truncated model to two AES-based stream cipher families, LOL and Rocca, for their linear cryptanalysis. In particular, for LOL-MINI, we give a fast correlation attack with complexity $2^{250.5}$, thereby breaking its security claim if we ignore the restriction on the keystream length under a single pair of key and IV.
2025
TOSC
Chosen-Key Distinguishing Attacks on Full AES-192, AES-256, Kiasu-BC, and More
Abstract
At CRYPTO 2020, Liu et al. demonstrated that many differentials on Gimli are, in fact, incompatible. Similar incompatibilities also arise in related-key differentials on AES, which are typically addressed in an ad-hoc manner by incorporating additional constraints into the search models. However, such ad-hoc methods are insufficient to eliminate all incompatibilities and may still produce false-positive related-key differentials. At CRYPTO 2022, a novel approach was introduced that combines a Constraint Programming (CP) tool with a triangulation algorithm to search for rebound attacks against AES-like hashing. In this paper, we extend and unify these techniques to develop a comprehensive related-key differential search model. Our model not only identifies valid related-key differentials for AES and similar ciphers, but also enables immediate verification that at least one key pair satisfying the differentials exists. Using this enhanced automatic tool, we discover new related-key differentials for full-round AES-192, AES-256, and Kiasu-BC, and for round-reduced Deoxys-BC. Based on these findings, we present full-round limited-birthday chosen-key distinguishing attacks on AES-192, AES-256, and Kiasu-BC, as well as the first chosen-key distinguisher on reduced-round Deoxys-BC. Furthermore, we identify, for the first time, a limited-birthday distinguisher on 9-round Kiasu-BC with practical complexities.
2025
TOSC
Improved Differential Cryptanalysis of SPEEDY
Abstract
SPEEDY is a family of lightweight block ciphers designed by Leander et al. Several differential attacks have been reported on the SPEEDY variants. However, nearly all of these attacks are based on differential characteristics whose probabilities differ from their reported values. These discrepancies arise from incorrect calculations of the (key-averaged) probability, particularly for consecutive steps within one round without intermediate key addition. In this paper, we revisit all reported differential characteristics and accurately calculate their key-averaged probabilities using quasidifferential trails. We extend this to also estimate the fixed-key probability. Our analysis reveals several characteristics with zero or significantly altered probability, invalidating several proposed attacks. We further implement a search algorithm and find a 5.5-round differential distinguisher that can be used to mount a full-round key-recovery attack with a data complexity of $2^{183}$ and a time complexity of $2^{185}$. The memory complexity varies: in the chosen-plaintext setting, it is $2^{156}$, whereas in the chosen-ciphertext setting, it is $2^{36}$.
2025
TOSC
Trail-Estimator: An Automated Verifier for Differential Trails in Block Ciphers
Abstract
Differential cryptanalysis is a powerful technique for attacking block ciphers, wherein the Markov cipher assumption and stochastic hypothesis are commonly employed to simplify the search and probability estimation of differential trails. However, these assumptions often neglect inherent algebraic constraints, potentially resulting in invalid trails and inaccurate probability estimates. Some studies identified violations of these assumptions and explored how they impose constraints on key material, but they have not yet fully captured all relevant ones. This study proposes Trail-Estimator, an automated verifier for differential trails on block ciphers, consisting of two parts: a constraint detector Cons-Collector and a solving tool Cons-Solver. We first establish the fundamental principles that will allow us to systematically identify all constraint subsets within a differential trail, upon which Cons-Collector is built. Then, Cons-Solver utilizes specialized preprocessing techniques to efficiently solve the detected constraint subsets, thereby determining the key space and providing a comprehensive probability distribution of differential trails. To validate its effectiveness, Trail-Estimator is applied to verify 17 differential trails for the SKINNY, LBLOCK, TWINE, and AES block ciphers. Experimental results show that Trail-Estimator consistently identifies previously undetected constraints for SKINNY and AES, and discovers constraints for the first time for LBLOCK and TWINE. Notably, it is the first tool to discover long nonlinear constraints extending beyond five rounds in these ciphers. Furthermore, Trail-Estimator’s accuracy is validated by experiments showing its predictions closely match the real probability distribution of short-round differential trails.
2025
TOSC
Divide-and-Conquer SAT for Exploring Optimal Differential and Linear Characteristics and Its Applications
Abstract
Developing automatic search tools to derive optimal characteristics is crucial for both the design and cryptanalysis of symmetric-key primitives. However, evaluating primitives that employ large S-boxes and complex linear layers remains a computationally demanding task. In this paper, we introduce a novel solver-aided automatic search tool based on the divide-and-conquer strategy that leverages the advantages of both MILP and SAT methods. Our method divides a given SAT model into multiple smaller SAT models, allowing us to pre-eliminate as much as possible of the space of Boolean variable assignments that make a given SAT model always “UNSAT”. In addition, we propose a new method for large S-boxes that involves the decimal parts of values, enabling us to efficiently derive optimal linear characteristics for a large-S-box-based primitive within practical time for the first time. The new tool is able to obtain optimal differential and linear characteristics for a significant number of rounds of AES, Camellia without the FL function, ARIA, LED, Midori-128, SKINNY-128, and Rijndael-256-256. Our results improve the required number of rounds for differential and linear attacks based on a single characteristic for Camellia, LED, and Midori-128. Besides, our tool identifies the longest distinguishers for the extensively analyzed ciphers Camellia/ARIA/Midori-128 and SKINNY-128, via optimal linear and differential characteristics, respectively.
2025
TOSC
SAT-Based Space Partitioning and Applications to Ascon-Hash256 Cryptanalysis
Abstract
We introduce an efficient SAT-based space partitioning technique that enables systematic exploration of large search spaces in cryptanalysis. The approach divides complex search spaces into manageable subsets through combinatorial necklace generation, allowing precise tracking of explored regions while maintaining search completeness.
We demonstrate the technique's effectiveness through extensive cryptanalysis of Ascon-Hash256. For differential-based collision attacks, we conduct an exhaustive search of 2-round collision trails, proving that no collision trail with weight less than 156 exists. Through detailed complexity analysis and parameter optimization, we present an improved 2-round collision attack with complexity $2^{61.79}$. We also discover new Semi-Free-Start (SFS) collision trails that enable practical attacks on both 3-round and 4-round Ascon-Hash256, in particular improving the best known 4-round SFS trail from weight 295 to 250.
Furthermore, applying the technique to Meet-in-the-Middle structure search yields improved attacks on 3-round Ascon-Hash256. We reduce the collision attack complexity from $2^{116.74}$ to $2^{114.13}$ with memory complexity $2^{112}$ (improved from $2^{116}$), and the preimage attack complexity from $2^{162.80}$ to $2^{160.75}$ with memory complexity $2^{160}$ (improved from $2^{162}$).
2025
TOSC
Linear Cancellations in the MitM Attacks on Sponge Functions
Abstract
At EUROCRYPT 2023, Qin et al. proposed the MitM attack framework on sponge functions by separating the message bits into two sets of neutral bits. By assigning bit cancellations on one of the two sets, the states of the two sets can be computed independently and then filtered by some matching equations. To solve the bit cancellations, Qin et al. exhaustively compute the cancellations of all message bits and store them in a huge hash table, which leads to attacks with huge memory. In this paper, we separate the bit cancellations into linear and nonlinear cancellations for the MitM attacks, where the linear cancellations are solved by Gaussian elimination and only the nonlinear cancellations are handled by the hash table. Hence, the memory cost is significantly reduced. In order to search for new attacks with efficient memory (fewer nonlinear cancellations and more linear cancellations), we propose a new MILP model whose encoding scheme can distinguish linear from nonlinear cancellations. Besides, dedicated tricks such as the so-called weak-diffusion structure and a two-stage search are proposed to further accelerate solving the MILP models.
Finally, the memory complexities of MitM attacks on 4-round Keccak[1024], 3-round Xoodyak-Xof, 4-round Ascon-Xof, and full Subterranean 2.0 are reduced by factors of $2^{49}$, $2^{59}$, $2^{16}$, and $2^{36}$, respectively. Besides, our memory-efficient approach can turn invalid MitM attacks (where memory complexity is the dominant factor in the old framework) into valid ones. For example, we propose the first MitM preimage attack on 4-round Keccak[768] and the first 3-round collision attack on Xoodyak-Xof with a 128-bit tag.
2025
TOSC
Mix-Basis Geometric Approach to Boomerang Distinguishers
Abstract
Differential cryptanalysis relies on assumptions like the Markov cipher assumption and the hypothesis of stochastic equivalence. The probability of a differential characteristic estimated by classical methods is the key-averaged probability under these two assumptions. However, the real probability can vary significantly between keys. Hence, tools for differential cryptanalysis in the fixed-key model are desirable. Recently, Beyne and Rijmen applied the geometric approach to differential cryptanalysis and proposed a systematic framework called quasi-differential (CRYPTO 2022).
As a variant of differential cryptanalysis, boomerang attacks rely on similar assumptions, so it is important to study their probability in the fixed-key model as well. A direct extension of the quasi-differential for boomerang attacks leads to the quasi-3-differential framework (IEEE-IT 2024). However, such a straightforward approach is difficult in practical applications, as there are too many quasi-3-differential trails.
We tackle this problem by applying the mix-basis style geometric approach (CRYPTO 2025) to boomerang attacks and construct the quasi-boomerang framework. By choosing a suitable pair of bases, the boomerang probability can be computed by summing correlations of quasi-boomerang characteristics. The transition matrix of the key-XOR operation is also a diagonal matrix; thus, the influence of keys can be analyzed in a similar way to the quasi-differential framework.
We apply the quasi-boomerang framework to SKINNY-64 and GIFT-64. For SKINNY-64, we check and confirm 4 boomerang distinguishers with high probability (2 with probability 1 and 2 with probability $2^{-4}$) generated from Hadipour, Bagheri, and Song's tool (ToSC 2021/1), through the analysis of key dependencies and the probability calculation from quasi-boomerang characteristics. We also propose a divide-and-conquer approach following the sandwich framework, to apply the quasi-boomerang framework to boomerangs with small probability or many rounds. After checking 2/1 boomerang distinguisher(s) of SKINNY-64/GIFT-64, we find that the previously considered invalid 19-round distinguisher of GIFT-64 is valid.
In addition, as a contribution of independent interest, we revisit Boura, Derbez, and Germon's work extending the quasi-differential framework to the related-key scenario (ToSC 2025/1), and show an alternative way to derive the same formulas in their paper by regarding the key-XOR as a normal cipher component.
2025
TOSC
Cryptanalysis: Theory Versus Practice: Correcting Cryptanalysis Results on Ascon, ChaCha, and Serpent Using GPUs
Abstract
Most modern cryptanalysis results are obtained through theoretical analysis, often relying on simplifications and idealized assumptions. In this work, we use the parallel computational power of GPUs to experimentally verify a small portion of the cryptanalysis results that have been published in recent years. Our focus is on the ciphers Ascon, ChaCha, and Serpent. In none of the attacks we considered did the theoretical estimates fully match the actual practical values. More precisely, we show that the 4.5-round truncated differential with probability one, the 6-round differential-linear (DL), and the 6-round impossible differential distinguishers on Ascon, as well as the best known 7- and 7.5-round DL distinguishers on ChaCha, do not actually work in practice. Moreover, we demonstrate that the best known 10-, 11-, and 12-round DL attacks on Serpent perform better in practice than previously estimated. Additionally, we provide a new experimentally obtained 9-round DL distinguisher on Serpent, which can be used in 10- and 11-round attacks with reduced data complexity. In a broader sense, we recommend that cryptanalysts experimentally verify reduced versions of their theoretically obtained analysis results whenever possible. To simplify this process, we make our optimized code for the ciphers treated here available for future use.
2025
TOSC
Observations on the BayesianKeySearch with Applications to Simon and Simeck
Abstract
At CRYPTO 2019, Gohr pioneered the integration of machine learning with differential cryptanalysis, demonstrating that differential-neural distinguishers can outperform classical techniques in distinguishing attacks. He also introduced a novel key-recovery strategy based on Bayesian optimization, termed BayesianKeySearch, enhancing key recovery for Speck32/64. However, the impact of parameter selection on the complexity and success probability of key-recovery attacks using BayesianKeySearch remains underexplored.
This paper investigates the impact of parameter selection on key-recovery effectiveness. Gohr's key-recovery attack involves two stages, each using a cutoff value to filter candidate guesses for the last and second-to-last subkeys. Previous works selected these cutoffs independently. We propose connecting the cutoff selections, enhancing coordination between the two stages and improving the attack's complexity and success probability.
Applying our parameter optimization, we enhance the single-key recovery attacks on 16-round Simon32/64 and on 16- and 17-round Simeck32/64, achieving higher success rates and lower time complexities compared to previous works. Additionally, for related-key differential-neural attacks on Simon32/64, we exploit both single-key and related-key features from cross-paired ciphertexts, developing advanced neural distinguishers for up to 13 rounds. Using these neural distinguishers combined with carefully selected classical differentials, we devise an 18-round related-key recovery attack on Simon32/64. Our results validate the practical effectiveness of the proposed strategies and are expected to contribute to the advancement of machine learning-aided cryptanalysis.
2025
TOSC
Generalizations of ChiChi: Families of Low-Latency Permutations in Any Even Dimension
Abstract
At Eurocrypt '25, Belkheyar et al. introduced a new non-linear layer called ChiChi or χχ. ChiChi is built on Daemen's χ function, but crucially gives a permutation in even dimension divisible by four.
In this work, we generalize their construction in multiple ways, prove their open conjecture regarding the algebraic degree of the inverse of ChiChi, and investigate its properties against key-recovery attacks.
2025
TOSC
MDS Diffusion Layers for Arithmetization-Oriented Symmetric Ciphers: The Rotational-Add Construction
Abstract
We introduce rotational-add diffusion layers aimed at applications in the design of arithmetization-oriented (AO) symmetric ciphers, such as fully homomorphic encryption (FHE)-friendly symmetric ciphers. This generalizes the rotational-XOR diffusion layers that have been utilized in the design of many important conventional symmetric ciphers like SHA-256, SM4, ZUC and Ascon. A rotational-add diffusion layer is defined over the finite field $\mathbb{F}_p$ for an arbitrary prime $p$, enabling implementations using only rotations and modular additions/subtractions. The advantage of using such diffusion layers in AO ciphers is that the cost of scalar multiplications can be reduced, since the only scalars appearing are $\pm 1$; the total cost thus depends on the sizes of the rotation offsets. In this paper, we investigate characterizations and constructions of the lightest rotational-add diffusion layers over $(\mathbb{F}_p^m)^n$ that are maximum distance separable (MDS), with a focus on the case $n = 4$. It turns out that the minimum achievable number of rotations is 5, subject to the MDS property constraint. We specify a large class of rotational-add diffusion layers with 5 rotations and traverse all possible patterns of appearance of the scalars $\pm 1$. In four cases we can derive computationally tractable necessary and sufficient conditions for the rotational-add diffusion layers to attain the MDS property. These conditions enable explicit characterization of suitable primes $p$ for given parameters. Leveraging these results, we construct three distinct families of rotational-add MDS diffusion layers applicable to AO ciphers. Although a rotational-add diffusion layer with 7 rotations and only additions has recently been used in the design of the FHE-friendly block cipher YuX, to our knowledge our work presents the first systematic theoretical characterization of rotational-add MDS diffusion layers, together with explicit constructions.
2025
TOSC
The Large Block Cipher Vistrutah
Abstract
Vistrutah is a block cipher with block sizes of 256 and 512 bits. It iterates a step function consisting of two AES rounds applied to each 128-bit block of the state, followed by a state-wide cell permutation. Building upon established design principles from Simpira, Haraka, Pholkos, and ASURA, Vistrutah leverages AES instructions to achieve high performance.
For each component of Vistrutah, we conduct a systematic evaluation of functions that can be efficiently implemented on both Intel and Arm architectures. We therefore expect them to perform efficiently on any recent vector instruction set architecture (ISA) with AES support. Our evaluation methodology combines, for each combination of the various choices of the cipher's components, a security analysis with a latency estimation on an abstracted ISA. The goal is to maximize the ratio of “bits of security per unit of time,” i.e., to achieve the highest security for a given performance target, or equivalently, the best performance for a given security level within this class of designs. Implementations confirm the accuracy of our latency model. Vistrutah even performs significantly better than Rijndael-256-256.
Our security claims are backed by a comprehensive ad-hoc cryptanalysis. An isomorphism between Vistrutah-512, the 512-bit wide variant, and the AES allows us to also leverage the extensive cryptanalysis of AES and apply it to Vistrutah-512. A core design principle is the use of an inline key schedule, computed during each encryption or decryption operation without requiring storage in any external memory. In fact, rekeying Vistrutah has no associated overheads. Key schedules like the AES's must precompute and store round keys in memory for acceptable performance. However, in 2010 Kamal and Youssef showed that this makes cold boot attacks significantly more effective. Vistrutah's approach minimizes leakage to at most two byte-permutations of the original key during context switches. Furthermore, expensive key schedules reduce key agility, limiting the design of modes of operation. Vistrutah is particularly well-suited for birthday-bound modes of operation, including Synthetic IV modes and Accordion modes for 256-bit block ciphers. It can serve as a building block for compression functions (such as Matyas-Meyer-Oseas) in wide Merkle–Damgård hash functions. Additionally, it can implement “ZIP” wide pseudo-random functions as recently proposed by Flórez-Gutiérrez et al. in 2024.
Finally, we present short, i.e., reduced-round, versions of Vistrutah which are analyzed taking into account the restrictions posed on attackers by specific modes of operation. In particular, we model the use of the block cipher in Hash-Encrypt-Hash (HEH) constructions such as HCTR2, as well as in forkciphers. These short versions of Vistrutah can be used to accelerate modes of operation without sacrificing security.
2025
TCC
Obfuscating Pseudorandom Functions is Post-Quantum Complete
Abstract
The last decade has seen remarkable success in designing and uncovering new applications of indistinguishability obfuscation ($i\mathcal{O}$). The main pressing question in this area is whether {\em post-quantum} $i\mathcal{O}$ exists. All current lattice-based candidates rely on new, non-standard assumptions, many of which are known to be broken.
To make systematic progress on this front, we investigate the following question: can general-purpose $i\mathcal{O}$ be reduced, assuming {\em only} learning with errors (LWE), to obfuscating a smaller class of functions? The specific class of functions we consider are {\em pseudorandom functions} (PRFs), which constitute a natural functionality of independent interest. We show the following results:
\begin{itemize}
\item We construct exponentially-efficient $i\mathcal{O}$ (xi$\mathcal{O}$) for general circuits based on LWE in the pseudorandom oracle model -- a variant of the random oracle model (Jain et al., CRYPTO '23). Our construction requires the pseudorandom oracle model heuristic to hold for a \emph{specific} pseudorandom function, and we prove its security against classical adversaries.
\item We construct (post-quantum) $i\mathcal{O}$ for general circuits in the standard model based on (post-quantum) sub-exponentially secure LWE and (post-quantum) sub-exponentially secure {\em average-case} $i\mathcal{O}$ -- a natural notion of $i\mathcal{O}$ for pseudorandom functions that we define.
\end{itemize}
To obtain these results, we generalize the ``encrypt-evaluate-decrypt'' paradigm used in prior works by replacing the use of fully homomorphic encryption with succinct secure two-party computation where parties obtain additive output shares (Boyle et al., EUROCRYPT'25 and Abram et al., STOC'25).
2025
TCC
Cryptography with Weak Privacy
Abstract
We initiate a systematic study of information-theoretic cryptography with weak privacy, requiring that no secret can be ruled out by the adversary. For a parameter $0 < p \leq 1$, we say that a primitive has $p$-weak differential privacy ($p$-WDP) if for every possible view $V$ and inputs $x_1, x_2$, the ratio between the probabilities of the adversary observing $V$ on $x_1$ and on $x_2$ is at least $p$; a toy numeric check of this condition is sketched after the results below. This generalizes both perfect privacy, which is $p$-WDP for $p = 1$, and a previous "combinatorial" notion of weak privacy (WP), which is $p$-WDP for some $p > 0$. We obtain the following main results.
- Positive results. We present efficient WDP constructions for generalized secret sharing, decomposable randomized encodings (and the related notions of garbling schemes and PSM protocols), and secure two-party computation with correlated randomness. For secret sharing, we settle a question of Beimel and Franklin (TCC 2007), showing that every $n$-party access structure admits a WP scheme with per-party share size $O(n)$. When all unauthorized sets have constant size, we get a $p$-WDP scheme with constant share size and $p \geq 1/\mathrm{poly}(n)$.
- Negative result. For decomposable randomized encodings, we show that a previous lower-bound technique of Ball et al. (ITCS 2020) applies also to the WP notion. Together with our upper bound, this shows that the optimal WP garbling size of the worst-case $f : \{0,1\}^n \to \{0,1\}$ is $\tilde{\Theta}(n^2)$.
- Application. Under the standard LPN assumption, we show that any $p$-WDP secret sharing scheme with inverse-polynomial $p$ implies a computationally secure secret sharing scheme for a related access structure. Together with our positive results for WDP secret sharing, this implies a super-polynomial improvement in the share size of computational secret sharing for a natural class of access structures.
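As promised above, a toy numeric check of the $p$-WDP condition for binary randomized response (an illustrative mechanism chosen here for simplicity, not taken from the paper):

    # Randomized response on one bit: report the true bit x with probability q,
    # the flipped bit with probability 1 - q. We compute the largest p such
    # that Pr[view | x1] / Pr[view | x2] >= p over all views and input pairs.
    def rr_prob(view, x, q=0.75):
        return q if view == x else 1 - q

    p = min(rr_prob(v, x1) / rr_prob(v, x2)
            for v in (0, 1) for x1 in (0, 1) for x2 in (0, 1))
    print(p)  # 1/3: this mechanism is (1/3)-WDP, so no secret is ever ruled out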
2025
TCC
Zeroizing Attacks against Evasive and Circular Evasive LWE
Abstract
We develop new attacks against the Evasive LWE family of assumptions, in both the public and private-coin regime. To the best of our knowledge, ours are the "first" attacks against Evasive LWE in the "public-coin" regime, for any instantiation from the family. Our attacks are summarized below.
Public-Coin Attacks.
1. The recent work by Hsieh, Lin and Luo [HLL23] constructed the first Attribute-Based Encryption (ABE) for unbounded-depth circuits by relying on the ``circular'' evasive LWE assumption. This assumption has been widely considered a safe, public-coin instance of Evasive LWE, in contrast to its ``private-coin'' cousins (for instance, see [CW25, DJM+25a]).
We provide the first attack against this assumption, challenging the widely held belief that it is a public-coin assumption.
2. We demonstrate a counter-example against vanilla public-coin evasive LWE by Wee [Wee22] in an unnatural parameter regime. Our attack crucially relies on the error in the pre-condition being larger than the error in the post-condition, necessitating a refinement of the assumption.
Private-Coin Attacks.
1. The recent work by Agrawal, Kumari and Yamada [AKY24a] constructed the first functional encryption scheme for pseudorandom functionalities (PRFE) and extended this to obfuscation for pseudorandom functionalities (PRIO) [AKY24c] by relying on private-coin evasive LWE. We provide a new attack against the assumption stated in the first posting of their work (subsequently refined to avoid these attacks).
2. The recent work by Branco et al. [BDJ+24] (concurrently to [AKY24c]) provides a construction of obfuscation for pseudorandom functionalities by relying on private-coin evasive LWE. We provide a new attack against their stated assumption.
3. Branco et al. [BDJ+24] showed that there exist contrived, ``self-referential'' classes of pseudorandom functionalities for which pseudorandom obfuscation cannot exist. We extend their techniques to develop an analogous result for pseudorandom functional encryption.
While Evasive LWE was developed to specifically avoid ``zeroizing attacks'', our work shows that in certain settings, such attacks can still apply.
2025
TCC
An Unstoppable Ideal Functionality for Signatures and a Modular Analysis of the Dolev-Strong Broadcast
Abstract
Many foundational results in the consensus literature follow the Dolev-Yao model (FOCS '81), which treats digital signatures as ideal objects with perfect correctness and unforgeability. However, no work has yet formalized an ideal signature scheme that is both suitable for this methodology and possible to instantiate, or a composition theorem that ensures security when the scheme is instantiated cryptographically.
The Universal Composition (UC) framework would ensure composition if we could specify an ideal functionality for signatures and prove it UC-realizable. Unfortunately, all signature functionalities heretofore proposed are problematic when used to construct higher-level protocols: either the functionality internally computes a computationally secure signature, and therefore higher-level protocols must rely upon computational assumptions, or else the functionality introduces a new attack surface that does not exist when the functionality is realized. As a consequence, no consensus protocol has ever been analyzed in a modular way using existing ideal signature functionalities.
We propose a new unstoppable ideal functionality for signatures that is UC-realized exactly by the set of standard EUF-CMA signature schemes that are consistent and work in linear time in the length of the message. No adversary can prevent honest parties from obtaining perfectly ideal signature services from our functionality. We showcase its usefulness by presenting the first modular analysis of the Dolev-Strong broadcast protocol (SICOMP '83). Our result can be interpreted as a step toward a sound realization of the Dolev-Yao methodology. We also generalize our result to the threshold setting.
2025
TCC
Universally Composable Succinct Vector Commitments and Applications
Abstract
We develop a toolbox for modular construction and analysis of succinct, non-interactive commitments and vector commitments in the random oracle model, while guaranteeing universally composable security. To demonstrate its power, we use the toolbox to construct and analyze a modular variant of the Kilian-Micali ZK-SNARK.
Along the way we also propose a new UC formulation of a global random oracle, that avoids a weakness in existing formulations and also enables expressing more nuanced, session-specific abstractions.
We hope that this toolbox will be useful for building secure applications in settings where both succinctness and non-interactivity are key.
2025
TCC
Vive Galois! Part 1: Optimal SIMD Packing and Packed Bootstrapping for FHE
Abstract
The vast majority of work on the efficiency of lattice-based cryptography, including fully homomorphic encryption~(FHE), has relied on *cyclotomic* number fields and rings.
This is because cyclotomics offer a wide variety of benefits, including good geometrical properties, fast ring arithmetic, and rich homomorphic operations like vectorized (SIMD) operations on "packed" plaintexts, automorphisms, and ring-switching.
However, selecting a suitable cyclotomic that has the desired number and type of plaintext "slots," while also balancing security and efficiency, is a highly constrained problem that often lacks an ideal solution, resulting in wasted SIMD capacity and lost efficiency.
This work provides a suite of tools for instantiating ring-based lattice cryptography to work over *subfields* of cyclotomics, which provide more flexibility and better-fitting parameters for applications.
A particular focus is on realizing FHE with *optimal plaintext packing* and homomorphic SIMD parallelism for *any* plaintext characteristic, along with efficient *packed bootstrapping* that fully exploits this parallelism.
Toward this end, this (two-part) work makes the following main technical contributions, all of which are catalyzed by Galois theory:
* For sampling and decoding errors in encryption and decryption (respectively), we construct *geometrically short, structured bases* for the number rings of arbitrary subfields of prime-power cyclotomics (and hence their composites as well).
* For fast ring arithmetic, we define and establish analogous structural properties for Chinese Remainder Theorem~(CRT) bases in *abelian* number rings, and give *specialized fast transforms* that map between CRT bases and any similarly structured bases.
* For packed bootstrapping and homomorphic linear algebra, we give a general framework for *homomorphic evaluation of structured linear transforms* in abelian number rings, and show that CRT transforms can be evaluated using relatively few homomorphic operations.
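As a concrete illustration of the slot-selection constraint described above (standard in the cyclotomic setting): modulo a prime $p$ coprime to $m$, the $m$-th cyclotomic ring splits into $\varphi(m)/d$ plaintext slots, where $d$ is the multiplicative order of $p$ modulo $m$. The sketch below, with purely hypothetical parameter targets, shows how constrained the search for a well-fitting $m$ can be.

from math import gcd

def euler_phi(m):
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def mult_order(p, m):
    # Multiplicative order of p modulo m (requires gcd(p, m) = 1).
    assert gcd(p, m) == 1
    d, x = 1, p % m
    while x != 1:
        x = (x * p) % m
        d += 1
    return d

def num_slots(m, p):
    # Phi_m factors mod p into phi(m)/d irreducibles of degree d.
    return euler_phi(m) // mult_order(p, m)

# Hypothetical search: which m near 2^12 give at least 64 slots for p = 2?
for m in range(4095, 4200):
    if gcd(m, 2) == 1 and num_slots(m, 2) >= 64:
        print(m, euler_phi(m), num_slots(m, 2))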
2025
TCC
Securing Unbounded Differential Privacy Against Timing Attacks
Abstract
Recent works~\cite{ben2023resistance,ratliff2024framework} have started to theoretically investigate how we can protect differentially private programs against timing attacks, by making the joint distribution of the output and the runtime differentially private (JOT-DP). However, the existing approaches to JOT-DP have some limitations, particularly in the setting of unbounded DP (which protects the size of the dataset and applies to arbitrarily large datasets). First, the known conversion of pure DP programs to pure JOT-DP programs in the unbounded setting~\cite{ben2023resistance} (a) incurs a constant additive increase in error probability (and thus does not provide vanishing error as $n\to\infty$), (b) produces JOT-DP programs that fail to preserve the computational efficiency of the original pure DP program, and (c) is analyzed in a toy computational model in which the runtime is defined to be the number of coin flips. For approximate JOT-DP, an efficient conversion with vanishing error in the RAM model is known~\cite{haeberlen2011differential,ratliff2024framework}, but it only applies to programs that run in $O(n)$ time on datasets of size $n$, as linear runtime is implied by ``timing stability,'' the timing analogue of global sensitivity. In this work, we overcome these limitations. Specifically:
\begin{enumerate}
\item We show that the error required for pure JOT-DP in the unbounded setting depends on the model of computation.
\begin{itemize}
\item In a randomized RAM model where the dataset size $n$ is given (or can be computed in constant time) and we can generate random numbers (not just random bits) in constant time, polynomially small error probability is necessary and sufficient.
\item If $n$ is not given or we only have a random-bit generator, an (arbitrarily small) constant error probability is necessary and sufficient.
\end{itemize}
\item The aforementioned positive results are proven by efficient procedures to convert any pure JOT-DP program $P$ in the upper-bounded setting to a pure JOT-DP program $P'$ in the unbounded setting, such that the output distribution of $P'$ is $\gamma$-close in total variation distance to that of $P$, where $\gamma$ is either an arbitrarily small constant or polynomially small, depending on the model of computation.
\end{enumerate}
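As a toy illustration of the joint output/timing idea, in the upper-bounded setting that serves as the input to the conversions above: the sketch below releases a Laplace-noised count and pads the computation so that the running time reveals little about the data. This is a conceptual sketch under simplifying assumptions, not the paper's construction; in particular, rigorous analyses must fix a precise model of runtime.

import math, random

def laplace(scale):
    # Inverse-CDF sampling of a Laplace variate.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def jot_dp_count(dataset, predicate, eps, n_cap=10**6):
    # Output side: a standard eps-DP Laplace count (sensitivity 1).
    assert len(dataset) <= n_cap  # upper-bounded setting
    noisy = sum(1 for row in dataset if predicate(row)) + laplace(1.0 / eps)
    # Timing side (toy): scan up to a fixed cap, then add random dummy work,
    # so the elapsed time is (nearly) independent of the dataset.
    for _ in range(n_cap - len(dataset)):
        pass
    for _ in range(int(abs(laplace(1000.0)))):
        pass
    return noisy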
2025
TCC
(Multi-Input) FE for Randomized Functionalities, Revisited
Abstract
Randomized functional encryption (rFE) generalizes functional encryption (FE) by incorporating randomized functionalities. Randomized multi-input functional encryption (rMIFE) extends rFE to accommodate multi-input randomized functionalities.
In this paper, we reassess the framework of rFE/rMIFE, enhancing our understanding of this primitive and laying the groundwork for more secure and flexible constructions in this field. Specifically, we make three key contributions:
- Stronger IND definition: We show that the prevailing indistinguishability-based security definition protects *only* against malicious *decryptors* and leaves systems *vulnerable* to malicious *encryptors* -- security against which has been a critical requirement for rFE/rMIFE since their inception. We then propose a refined IND notion that simultaneously handles both threats.
- Separating counterexample: Illustrating this definitional gap, we craft an rFE scheme -- using standard tools (FE, PRF, PKE, simulation-sound NIZK) -- that satisfies the old definition yet is blatantly insecure in practice (an insecurity that would be precluded by our enhanced definition).
- Adaptive, unbounded-message rMIFE: The sole viable prior rMIFE construction, by Goldwasser et al. [EUROCRYPT 2014], permits only a fixed message bound per encryption slot and offers merely selective security. Leveraging sub-exponentially secure indistinguishability obfuscation and the techniques of Goyal et al. [ASIACRYPT 2016] built for deterministic MIFE, we give the first rMIFE scheme that supports an unbounded number of messages per slot and attains full adaptive security.
2025
TCC
Adaptively Secure Streaming Functional Encryption
Abstract
This paper presents the *first adaptively secure* streaming functional encryption (sFE) scheme for P/Poly. sFE extends traditional FE to vast and/or dynamically evolving data sets, enabling incremental encryption of streaming inputs and iterative partial decryption over encrypted prefixes.
The proposed sFE scheme is built from indistinguishability obfuscation for P/Poly and injective PRGs. We achieve this result via two core technical contributions which may be of independent interest. First, we introduce a novel "gluing" mechanism to achieve adaptive security by intertwining two schemes, each possessing one aspect of adaptive security. Second, we enhance the powerful Koppula-Lewko-Waters [STOC 2015] framework with a "sliding window" technique, enabling authenticated iterative computation with fresh inputs at each iteration.
Prior work by Guan, Korb, and Sahai [CRYPTO 2023] constructed sFE but only under a (semi-adaptive) function-selective security model, requiring all functional keys to be queried before observing any ciphertext. This severe limitation precludes practical scenarios and leaves adaptive security as a crucial *open challenge* — explicitly highlighted by their work — which we address in this paper.
2025
TCC
Linear-Time Accumulation Schemes
Abstract
Proof-carrying data (PCD) is a powerful cryptographic primitive for computational integrity in a distributed setting. State-of-the-art constructions of PCD are based on accumulation schemes (and, closely related, folding schemes).
We present WARP, the first accumulation scheme with linear prover time and logarithmic verifier time. Our scheme is hash-based (secure in the random oracle model), plausibly post-quantum secure, and supports unbounded accumulation depth.
We achieve our result by constructing an interactive oracle reduction of proximity that works with any linear code over a sufficiently large field. We take a novel approach by constructing a straightline extractor that relies on erasure correction, rather than error-tolerant decoding like prior extractors. Along the way, we introduce a variant of straightline round-by-round knowledge soundness that is compatible with our extraction strategy.
2025
TCC
Relationships among FuncCPA and Its Related Notions
Abstract
Akavia, Gentry, Halevi, and Vald (TCC'22, JoC'25) introduced the security notion of function-chosen-plaintext-attack (FuncCPA security) for public-key encryption schemes.
FuncCPA is defined by adding a functional re-encryption oracle to the IND-CPA game.
This notion is crucial for secure computation applications where the server is allowed to delegate a part of the computation to the client.
Dodis, Halevi, and Wichs (TCC’23) introduced a stronger variant called FuncCPA+, and conjectured that FuncCPA+ is strictly stronger than FuncCPA, while they showed FuncCPA+ implies FuncCPA.
Seeking insights into this conjecture, they showed that ReEncCPA+ is strictly stronger than ReEncCPA, where ReEncCPA and ReEncCPA+ are restricted versions of FuncCPA and FuncCPA+ respectively.
In this paper, contrary to their conjecture, we show that FuncCPA+ is equivalent to FuncCPA.
We also introduce new variants of FuncCPA: WeakFuncCPA, OW-FuncCPA, and OW-WeakFuncCPA. WeakFuncCPA is a restricted variant of FuncCPA in that an oracle query is prohibited after the challenge query (like IND-CCA1).
OW-FuncCPA and OW-WeakFuncCPA are the one-way (OW) versions of FuncCPA and WeakFuncCPA, respectively.
This paper shows that WeakFuncCPA and OW-FuncCPA are equivalent to FuncCPA, that is, all of FuncCPA, FuncCPA+, WeakFuncCPA, and OW-FuncCPA are equivalent.
Considering the separation of IND-CCA1 and IND-CCA2, and that of OW-CPA and IND-CPA, these results are surprising.
To show the equivalence, we develop novel techniques to utilize functional re-encryption oracles.
We then provide separation results showing that OW-WeakFuncCPA does not imply FuncCPA and that ReEncCPA+ does not imply FuncCPA.
2025
TCC
Plonk is Simulation Extractable in ROM Under Falsifiable Assumptions
Abstract
Solving a long-standing open problem, Faonio, Fiore, and Russo proved that the widely used Plonk zk-SNARK is simulation extractable. However, their proof assumes both the random oracle model (ROM) and the algebraic group model. We prove that the same holds in the ROM under falsifiable assumptions. We combine the template of Faust et al., who proved that simulation extractability follows from knowledge soundness, (weak) unique response, and trapdoorless zero-knowledge, with the recent result of Lipmaa, Parisella, and Siim (Crypto 2025), who proved that Plonk has knowledge soundness in the ROM under falsifiable assumptions. For this, we prove that Plonk satisfies new variants of the weak unique response and trapdoorless zero-knowledge properties. We prove that several commonly used gadgets, like the linearization trick, are not trapdoorless zero-knowledge when considered as independent commit-and-prove zk-SNARKs.
2025
TCC
SNARK Lower Bounds via Communication Complexity
Abstract
We initiate the study of lower bounding the verification time of Succinct Non-interactive ARguments of Knowledge (SNARKs) built in the Polynomial Interactive Oracle Proof + Polynomial Commitment Scheme paradigm. The verification time of these SNARKs is generally dominated by the polynomial commitment scheme, and so we want to understand if polynomial commitment schemes admit lower bounds on the verification time. By recognizing that polynomial commitment schemes are also often built by applying cryptography to some information-theoretic core protocol, we seek to separate this core from the cryptography in a way that meaningfully captures the verification time required by the polynomial commitment scheme verifier.
We provide strong evidence that several polynomial commitment schemes have (nearly) \emph{optimal} verifier times. Our evidence comes from connecting polynomial commitment schemes to certain information-theoretic protocols known as communication protocols from the field of communication complexity, a link which we believe to be of independent interest. Through this lens, we model the verifier work in the cryptographic protocols as information (i.e., number of bits) exchanged between parties in the communication protocols, allowing us to leverage lower bounds from communication complexity. These lower bounds give strong evidence that the verifier time in these polynomial commitment schemes must be at least the number of bits exchanged in the communication protocol.
We extract the communication protocol cores of three polynomial commitment schemes and lower bound the bits exchanged in these cores. The lower bounds we obtain match (up to poly-logarithmic factors) the best-known (asymptotic) verification times of the polynomial commitment schemes we examine in this work. Specifically, we show that for univariate/multilinear polynomials of size $N = 2^n$:
- the communication core of Hyrax PCS (Wahby et al., S&P 2016) requires $\Omega(\sqrt{N})$ bits to be exchanged;
- the communication core of Bulletproofs PCS (Bootle et al., EUROCRYPT 2016; Bünz et al., S&P 2018) requires $\Omega(N)$ bits to be exchanged; and
- the communication core of Dory PCS (Lee, TCC 2021) requires $\Omega(\log(N))$ bits to be exchanged.
Our results strongly suggest a negative answer to a longstanding open question on whether the Bulletproofs verifier can be made sublinear time.
2025
TCC
Clifford Strategies in Interactive Protocols are Classically Simulatable
Abstract
MIP* is the class of languages decidable by an efficient classical verifier interacting with multiple quantum provers that share entangled qubits but cannot communicate. Notably, MIP* was proved to equal RE, the class of all recursively enumerable languages.
We introduce the complexity class Clifford–MIP*, which restricts quantum provers to Clifford operations and classical post-processing of measurement results, while still allowing shared entangled qubits in any quantum state. We show that any strategy in this model can be simulated by classical provers with shared random bits, and therefore admits a local hidden-variable description. Consequently, Clifford–MIP*=MIP, a vastly smaller complexity class compared to RE.
Moreover, we resolve an open question posed by Kalai et al. (STOC 2023), by showing that quantum advantage in any single-round non-local game requires at least two provers operating outside the Clifford–MIP* computational model. This rules out a proposed approach for significantly improving the efficiency of quantum advantage tests that are based on compiling non-local games into single-prover interactive protocols.
2025
TCC
Multi-Server Doubly Efficient PIR in the Classical Model and Beyond
Abstract
A Doubly Efficient Private Information Retrieval (DEPIR) is defined as a Private Information Retrieval (PIR) scheme with both near-linear server-side preprocessing time and sublinear query time. For many years it has been a strong open question to devise a DEPIR scheme from standard assumptions.
Very recently, Lin et al. (STOC '23) devised the first single-server DEPIR scheme from standard assumptions. However, their work left one major question open: can we build a DEPIR scheme in the multi-server setting, without relying on heavy cryptographic machinery?
In this work, we discuss two different settings for multi-server DEPIR.
In the non-colluding, non-communicating servers setting, we propose the first doubly efficient multi-server PIR scheme. Our scheme is information-theoretic. For a database of size $N$, our scheme allows servers to answer an unbounded number of client queries in $N^{o(1)}$ time, after a single preprocessing phase which takes $N^{1+o(1)}$ time. Our result is only an $N^{o(1)}$ factor from the lower bound proven by Persiano and Yeo (SODA '22) for this setting.
In addition, we devise a second information-theoretic PIR scheme in this setting, which pushes the state of the art for the case where the number of servers is more constrained. It achieves the same fast query times as our first scheme above, with a preprocessing time of $N^{2+o(1)}$, and uses $O(\log N)$ servers. Neither scheme requires any cryptographic primitives.
Furthermore, in the setting where the servers are additionally allowed to communicate and keep state, we show a family of information-theoretic DEPIR schemes which achieve $N\polylog N$ preprocessing time, and $\log^3 N$ communication and server time, for any number of servers $k$ greater than three. We also discuss how introducing more servers or computational assumptions in this setting may improve concrete efficiency of the PIR. This setting of communicating servers requires online communication between servers to answer queries, in contrast to the standard multi-server PIR setting where servers only communicate to coordinate the database contents and do not communicate at query time.
2025
TCC
Linear Prover IOPs in Log Star Rounds
Abstract
Interactive Oracle Proofs (IOPs) form the backbone of some of the most efficient general-purpose cryptographic proof-systems. In an IOP, the prover can interact with the verifier over multiple rounds, where in each round the prover sends a long message, from which the verifier only queries a few symbols.
State-of-the-art IOPs achieve a linear-size prover and a poly-logarithmic time verifier but require a relatively large, logarithmic, number of rounds. While the Fiat-Shamir heuristic can be used to eliminate the need for actual interaction, in modern highly-parallelizable computer architectures such as GPUs, the large number of rounds still translates into a major bottleneck for the prover, since it needs to alternate between computing the IOP messages and the Fiat-Shamir hashes. Motivated by this fact, in this work we study the round complexity of linear-prover IOPs.
Our main result is an IOP for a large class of Boolean circuits, with only $O(\log^*(S))$ rounds, where $\log^*$ denotes the iterated logarithm function (and $S$ is the circuit size). The prover has linear size $O(S)$, and the verifier runs in time $\polylog(S)$ and has query complexity $O(\log^*(S))$. The protocol is both conceptually simpler and strictly more efficient than prior linear-prover IOPs for Boolean circuits.
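Since the round complexity above is governed by the iterated logarithm, a tiny reference implementation may help convey how slowly $\log^*$ grows: it is at most 5 for every input up to $2^{65536}$.

import math

def log_star(x, base=2.0):
    # Iterated logarithm: how many times log can be applied until x <= 1.
    n = 0
    while x > 1:
        x = math.log(x, base)
        n += 1
    return n

print(log_star(2**64))  # 5: 2^64 -> 64 -> 6 -> 2.58... -> 1.37... -> 0.45...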
2025
TCC
On the Limitations of Pseudorandom Unitaries
Abstract
Pseudorandom unitaries (PRUs), one of the key quantum pseudorandom notions, are efficiently computable unitaries that are computationally indistinguishable from Haar random unitaries. While there is evidence to believe that PRUs are weaker than one-way functions, so far its relationship with other quantum cryptographic primitives (that are plausibly weaker than one-way functions) has not been fully established.
In this work, we focus on quantum cryptographic primitives with classical communication, referred to as QCCC primitives. Our main result shows that QCCC bit commitments and QCCC key agreement cannot be constructed from pseudorandom unitaries in a black-box manner.
Our core technical contribution is to show (in a variety of settings) the difficulty of distinguishing identical versus independent Haar unitaries by separable channels. Our result strictly improves upon prior works which studied similar problems in the context of learning theory [Anshu, Landau, Liu, STOC 2022] and cryptography [Ananth, Gulati, Lin, TCC 2024].
2025
TCC
Provably Memory-Hard Proofs of Work With Memory-Easy Verification
Abstract
A Proof of Work (PoW) is an important construction for spam-mitigation and distributed consensus protocols. Intuitively, a PoW is a short proof that is easy for the verifier to check but moderately expensive for a prover to generate. However, existing proofs of work are not egalitarian in the sense that the amortized cost to generate a PoW proof using customized hardware is often several orders of magnitude lower than the cost for an honest party to generate a proof on a personal computer. Because Memory-Hard Functions (MHFs) appear to be egalitarian, there have been multiple attempts to construct Memory-Hard Proofs of Work (MHPoW) which require memory-hard computation to generate, but are efficient to verify. Biryukov and Khovratovich (Usenix, 2016) developed a MHPoW candidate called Merkle Tree Proofs (MTP) using the Argon2d MHF. However, they did not provide a formal security proof, and Dinur and Nadler (Crypto, 2017) found an attack which exploited the data-dependencies of the underlying Argon2d graph.
We revisit the security of the MTP framework and formally prove, in the parallel random oracle model, that the MTP framework is sound when instantiated with a suitable {\em data-independent} Memory-Hard Function (iMHF). We generically lower bound the cumulative memory cost (cmc) of any prover for the protocol by the pebbling cost of the {\em ex-post facto} graph. We also prove that, as long as the underlying graph of the original iMHF is sufficiently depth-robust, the {\em ex-post facto} graph will, except with negligible probability, have high cumulative memory cost. In particular, if we instantiate the iMHF with DRSample then we obtain a MHPoW with the following properties: (1) an honest prover for the protocol can run in sequential time $O(N)$; (2) the proofs have size $\mathtt{polylog}(N)$ and can be verified in time $\mathtt{polylog}(N)$; and (3) any malicious prover who produces a valid proof must incur high cumulative memory complexity, at least $\Omega\left(\frac{N^2}{\log N}\right)$. We also develop general pebbling attacks which we use to show that any iMHF-based MHPoW using the MTP framework has proof size at least (1) $\Omega\left(\log^2 N/\log \log N \right)$ in general, and (2) $\tilde{\Omega}(N^{0.32})$ when the iMHF is instantiated with Argon2i, the data-independent version of Argon2.
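The MTP framework referenced above has the prover commit to the MHF's memory array with a Merkle tree and then open randomly selected leaves. The following is a generic commit/open/verify sketch of that standard building block (illustrative only; it is not this paper's protocol, and the domain-separation choices are arbitrary).

import hashlib

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def build_tree(leaves):
    # Level 0 holds hashed leaves; each level hashes adjacent pairs,
    # duplicating the last node when a level has odd size.
    levels = [[H(b"leaf", x) for x in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]
        levels.append([H(b"node", cur[i], cur[i + 1])
                       for i in range(0, len(cur), 2)])
    return levels

def open_leaf(levels, i):
    # Authentication path: the sibling at every level below the root.
    path = []
    for level in levels[:-1]:
        sib = i ^ 1
        path.append(level[sib] if sib < len(level) else level[i])
        i //= 2
    return path

def verify(root, leaf, i, path):
    h = H(b"leaf", leaf)
    for sib in path:
        h = H(b"node", h, sib) if i % 2 == 0 else H(b"node", sib, h)
        i //= 2
    return h == root

leaves = [bytes([b]) for b in range(8)]   # toy "memory array"
levels = build_tree(leaves)
root = levels[-1][0]
assert verify(root, leaves[3], 3, open_leaf(levels, 3))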
2025
TCC
Slightly Sublinear Trapdoor Hash Functions and PIR from Low-Noise LPN
Abstract
Trapdoor hash functions (TDHs) are compressing hash functions with an additional trapdoor functionality: given an encoding key for a function $f$, a hash on $x$ together with a (small) input encoding allows one to recover $f(x)$. TDHs are a versatile tool and a useful building block for more complex cryptographic protocols.
In this work, we propose the first TDH construction assuming the (quasi-polynomial) hardness of the LPN problem with noise rate $\epsilon = O(\log^{1+\beta} n / n)$ for $\beta>0$, i.e., in the so-called low-noise regime. The construction achieves a $2^{\Theta(\log^{1-\beta} \lambda)}$ compression factor. As an application, we obtain private information retrieval (PIR) with communication complexity $L / 2^{\Theta(\log^{1-\beta} L)}$ for a database of size $L$. This is the first PIR scheme with non-trivial communication complexity (asymptotically smaller than $L$) from any code-based assumption.
2025
TCC
Justvengers: Batched VOLE ZK Disjunctions in O(R+B+C) Communication
Abstract
Recent progress on zero-knowledge proofs (ZKPs) based on vector oblivious linear evaluation (VOLE) offers a promising paradigm for scaling ZKPs over extremely large statements. In particular, VOLE-based ZK is currently the best choice in terms of end-to-end execution time. However, VOLE-based ZK incurs high communication overhead — it usually scales linearly with the circuit size.
To mitigate this, existing literature considers VOLE-based ZK over structured statements. In this work, we focus on the batched disjunctive statement: P and V agree on B fan-in-2 circuits C_1, ..., C_B over a field F, each of size C with n_in inputs. P's goal is to demonstrate knowledge of R witnesses (id_j ∈ [B], w_j ∈ F^{n_in}) such that C_{id_j}(w_j) = 0 for every j ∈ [R], where neither w_j nor id_j is revealed. Batched disjunctive statements are effective, e.g., in emulating CPU execution inside ZK. Note that the naïve solution results in a circuit of size O(RBC).
To prove such a statement using VOLE-based ZK, the prior state-of-the-art protocol Antman (Weng et al., CCS’22) incurred O(BC + R) communication by additionally relying on AHE, whereas Batchman (Yang et al., CCS’23) achieved O(RC + B) communication using only VOLE.
In this work, we combine these two protocols non-trivially and present a novel protocol Justvengers — targeting the batched disjunctive statement — that incurs only O(R + B + C) communication and O(BC + (B + C)R log R) computation for the prover, using AHE and VOLE. As in Antman, Justvengers requires an AHE scheme that achieves linear targeted malleability, which is a non-falsifiable assumption.
2025
TCC
Space-Deniable Proofs
Abstract
We introduce and construct a new proof system called Non-interactive Arguments of Knowledge or Space (NArKoS), where a space-bounded prover can convince a verifier that they know a secret, while access to sufficient space allows one to forge indistinguishable proofs without the secret.
An application of NArKoS is space-deniable proofs, which are proofs of knowledge (say, for authentication in access control) that are sound when executed by a lightweight device like a smart-card or an RFID chip that cannot have much storage, but are deniable (in the strong sense of online deniability) as the verifier, like a card reader, can efficiently forge such proofs.
We construct NArKoS in the random oracle model using an OR-proof combining a sigma protocol (for the proof of knowledge of the secret) with a new proof system called simulatable Proof of Transient Space (simPoTS). We give two different constructions of simPoTS, one based on labelling graphs with high pebbling complexity, a technique used in the construction of memory-hard functions and proofs of space, and a more practical construction based on the verifiable space-hard functions from TCC'24 where a prover must compute a root of a sparse polynomial. In both cases, the main challenge is making the proofs efficiently simulatable.
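For context on the OR-composition above, here is a minimal Fiat-Shamir-compiled Schnorr sigma protocol for knowledge of a discrete logarithm, i.e., the "proof of knowledge of the secret" arm of such an OR-proof. The tiny group parameters are an illustrative assumption (real deployments use cryptographically sized groups), and the simPoTS arm is not shown.

import hashlib, random

# Toy parameters (illustration only): p = 2q + 1, and g generates the
# order-q subgroup of squares mod p.
p, q, g = 167, 83, 4

def fiat_shamir_challenge(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):                      # prover knows x with y = g^x mod p
    y = pow(g, x, p)
    r = random.randrange(q)        # commitment randomness
    a = pow(g, r, p)
    c = fiat_shamir_challenge(g, y, a)
    z = (r + c * x) % q
    return y, (a, c, z)

def verify(y, proof):
    a, c, z = proof
    # Accept iff c was derived honestly and g^z = a * y^c (mod p).
    return (c == fiat_shamir_challenge(g, y, a)
            and pow(g, z, p) == (a * pow(y, c, p)) % p)

y, proof = prove(x=17)
assert verify(y, proof)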
2025
TCC
Quantum Interactive Oracle Proofs
Abstract
We initiate the study of quantum Interactive Oracle Proofs (qIOPs), a generalization of both quantum Probabilistically Checkable Proofs and quantum Interactive Proofs, as well as a quantum analogue of classical Interactive Oracle Proofs.
In the model of quantum Interactive Oracle Proofs, we allow multiple rounds of quantum interaction between the quantum prover and the quantum verifier, but the verifier has limited access to quantum resources. This includes both queries to the prover’s messages and the complexity of the quantum circuits applied by the verifier. The question of whether QMA admits a quantum interactive oracle proof system is a relaxation of the quantum PCP Conjecture.
We show the following two main constructions of qIOPs, both of which are unconditional:
\begin{itemize}
\item We construct a quantum IOP protocol for QMA in which the verifier shares polynomially many EPR pairs with the prover at the start of the protocol and reads only a constant number of qubits from the prover’s messages.
\item We provide a stronger construction of quantum IOP for QMA in which the verifier not only reads a constant number of qubits but also operates on a constant number of qubits overall, including those in their private registers. However, in this stronger setting, the communication complexity becomes exponential. This leaves open the question of whether strong quantum IOPs for QMA with polynomial communication complexity exist.
\end{itemize}
As a key component of our construction, we introduce novel single-prover many-qubit tests, which may be of independent interest.
2025
TCC
Simplified PIR and CDS Protocols and Improved Linear Secret-Sharing Schemes
Abstract
We consider three related cryptographic primitives: private information retrieval (PIR) protocols, conditional disclosure of secrets (CDS) protocols, and secret-sharing schemes; these primitives have many applications in cryptography. We study these primitives requiring information-theoretic security. The complexity of the three primitives has been dramatically improved in the last few years, and they are closely related: e.g., the 2-server PIR protocol of Dvir and Gopi (J. ACM 2016) was transformed to construct the CDS protocols of Liu, Vaikuntanathan, and Wee (CRYPTO 2017, Eurocrypt 2018), and these CDS protocols are the main ingredient in the construction of the best-known secret-sharing schemes. To date, the message size required in PIR and CDS protocols and the share size required in secret-sharing schemes are not well understood, and there are large gaps between their upper bounds and lower bounds. The goal of this paper is to better understand the upper bounds by simplifying current constructions and supplying tools for improving their complexity.
We obtain the following results:
- We simplify, abstract, and generalize the 2-server PIR protocol of Dvir and Gopi (J. ACM 2016), the recent multi-server PIR protocol of Ghasemi, Kopparty, and Sudan (STOC 2025), and the 2-server and multi-server CDS protocols of Liu et al.\ (CRYPTO 2017, Eurocrypt 2018) and Beimel, Farr\`as, and Lasri (TCC 2023). In particular, we present one PIR protocol generalizing both the 2-server and multi-server PIR protocols using general share conversions. For $k\geq 3$ servers, our main contribution is a simpler proof of the correctness of the protocol, avoiding the partial derivatives and interpolation polynomials used by Ghasemi et al.
- In addition to simplifying previous protocols, our 2-server protocols can use a relaxed variant of matching vectors over any $m$ that is the product of two distinct primes. Our constructions do not improve the communication complexity of PIR and CDS protocols; however, a construction of better relaxed matching vectors over \emph{any} $m$ that is the product of two distinct primes would improve the communication complexity of $2$-server PIR and $2$-server CDS protocols.
- In many applications of secret-sharing schemes it is important that the scheme is linear, e.g., by using the fact that parties can locally add shares of two secrets and obtain shares of the sum of the secrets. In an independent result, we provide a construction of linear secret-sharing schemes for $n$-party access structures with improved share size of $2^{0.7563n}$. Previously, the best share size for linear secret-sharing schemes was $2^{0.7576n}$, and it is known that for most $n$-party access structures the shares' size is at least $2^{0.5n}$. This result is achieved by a reduction to unbalanced CDS protocols (compared to balanced CDS protocols in previous constructions).
2025
TCC
Differentially Private Learning Beyond the Classical Dimensionality Regime
Abstract
We initiate the study of differentially private learning in the \emph{proportional dimensionality regime}, in which the number of data samples $n$ and problem dimension $d$ approach infinity at rates proportional to one another, meaning that $d / n \to \delta$ as $n \to \infty$ for an arbitrary, given constant $\delta \in (0, \infty)$. This setting is significantly more challenging than that of all prior theoretical work in high-dimensional differentially private learning, which, despite the name, has assumed that $\delta = 0$ or is sufficiently small for problems of sample complexity $O(d)$, a regime typically considered ``low-dimensional'' or ``classical'' by modern standards in high-dimensional statistics.
We provide sharp theoretical estimates of the error of several well-studied differentially private algorithms for robust linear regression and logistic regression, including output perturbation, objective perturbation, and noisy stochastic gradient descent, in the proportional dimensionality regime. The $1 + o(1)$ factor precision of our error estimates enables a far more nuanced understanding of the price of privacy of these algorithms than that afforded by existing, coarser analyses, which are essentially vacuous in the regime we consider. Using our estimates, we discover a previously unobserved ``double descent''-like phenomenon in the training error of objective perturbation for robust linear regression. We also identify settings in which output perturbation outperforms objective perturbation on average, and vice versa.
To prove our main theorems, we introduce -- and strengthen, to handle perturbations required for privacy -- several probabilistic tools that have not previously been used to analyze differentially private learning algorithms, such as a modern Gaussian comparison inequality and recent universality laws with origins in statistical physics.
2025
TCC
A Meta-Complexity Theoretic Approach to Indistinguishability Obfuscation and Witness Pseudo-Canonicalization
Abstract
Indistinguishability Obfuscation (IO) is a central hub of modern cryptography, with far-reaching applications. Over the last decade, a handful of constructions have emerged, ranging from heuristic ones to, more recently, following the breakthrough of Jain, Lin and Sahai [STOC'21], provably secure constructions based on well-founded assumptions. The provably secure constructions, however, rely on various different assumptions, some of which can be broken in quantum polynomial time.
In this work we propose meta-complexity-theoretic characterizations of IO. First, we consider a notion of \emph{witness pseudo-canonicalization (WPC)}---which, roughly speaking, requires, given a witness $w$ for some $NP$ statement $x$, efficiently outputting a pseudo-canonical witness $w'$ for the same statement $x$ (i.e., one that is \emph{computationally independent} of $w$). We remark that WPC for the Minimum Circuit Size Problem (MCSP) is essentially equivalent to the notion of (exponentially-efficient) IO, which in turn can be bootstrapped into IO (assuming LWE and subexponential security).
We next provide a further study of the notion of WPC, noting that this notion captures not only IO but also non-interactive witness indistinguishability (NIWI); at the same time, (assuming OWFs) it is impossible to achieve it for any witness relation that is $NP$-complete w.r.t. (honest) Levin reductions.
Finally, we provide a purely meta-complexity (worst-case) characterization of WPC w.r.t. some witness relation $R$ through a problem referred to as the \emph{Decisional Computational Shallowness} ($DCS$) problem. Intuitively, the $DCS$ problem w.r.t. witness relation $R$ and an instance $\varphi$ requires deciding, given $x, y, z \in R(\varphi)$, which of $CD^t(z|x)$ and $CD^t(z|y)$ is smaller, where $CD^t(z|x) = K^t(z|x) - K(z|x)$ is the (Kolmogorov) \emph{Computational Depth} and $t(|z|)$ is some polynomial. We show that $DCS$ w.r.t. $R$ is essentially equivalent to a notion of ``weak'' WPC (i.e., with weak indistinguishability), which leads to our new complexity-theoretic characterization of (weak) IO.
2025
TCC
NISQ Security and Complexity via Simple Classical Reasoning
Abstract
We give novel lifting theorems for security games in the quantum random oracle model (QROM) in Noisy Intermediate-Scale Quantum (NISQ) settings such as the hybrid query model, the noisy oracle and the bounded-depth models.
We provide, for the first time, a hybrid lifting theorem for hybrid algorithms that can perform both quantum and classical queries, as well as a lifting theorem for quantum algorithms with access to noisy oracles or bounded quantum depth.
At the core of our results lies a novel measure-and-reprogram framework, called hybrid coherent measure-and-reprogramming, tailored specifically for hybrid algorithms.
Equipped with the lifting theorem, we are able to directly prove NISQ security and complexity results by calculating a single combinatorial quantity, relying solely on classical reasoning.
As applications, we derive the first direct product theorems in the average case in the hybrid setting---i.e., an enabling tool to determine the hybrid hardness of solving multi-instance security games. This allows us to derive in a straightforward manner the NISQ hardness of various security games, such as (i) the non-uniform hardness of salted games, (ii) the hardness of specific cryptographic tasks such as the multi-instance versions of one-wayness and collision-resistance, and (iii) the uniform or non-uniform hardness of many other games.
2025
TCC
Polynomial Secret Sharing Schemes and Algebraic Matroids
Abstract
Polynomial secret sharing schemes generalize the linear ones by adding more expressivity and giving room for efficiency improvements. In these schemes, the secret is an element of a finite field, and the shares are obtained by evaluating polynomials on the secret and some random field elements, i.e., for every party there is a set of polynomials that computes the share of the party. Notably, for general access structures, the best known polynomial schemes have better share size than the best known linear ones. This work investigates the properties of the polynomials and the fields that allow these efficiency gains and aims to characterize the access structures that benefit from them.
We focus first on ideal schemes, which are optimal in the sense that the size of each share is the size of the secret. We prove that, under some restrictions, if the degrees of the sharing polynomials are not too big compared to the size of the field, then the access structure of the scheme is a port of an algebraic matroid. This result establishes a new connection between ideal schemes and matroids, extending the known connection between ideal linear schemes and linearly representable matroids.
For general polynomial schemes, we extend these results and analyze their privacy and correctness. Additionally, we show that given a set of polynomials over a field of large characteristic, one can construct linear schemes that realize the access structure determined by these polynomials; as a consequence, polynomial secret sharing schemes over these fields are not stronger than linear schemes.
While there exist ports of algebraic matroids that do not admit ideal schemes, we demonstrate that they always admit schemes with statistical security and information ratio tending to one. This is achieved by considering schemes where the sharings are points in algebraic varieties.
2025
TCC
Sandwich BUFF: Achieving Non-Resignability Using Iterative Hash Functions
Abstract
We revisit the BUFF transform, which was proposed by Cremers et al. (S&P'21) as a means to achieve security properties beyond standard unforgeability for digital signature schemes. One of these properties, non-resignability (NR), has recently drawn some attention due to a strong impossibility result for the original definition of the property. Recent follow-up work then considered a variant (sNR) of the original definition, and showed that it is satisfied by the BUFF transform when the underlying hash function is modeled as a random oracle --- while the original impossibility result still applies for the plain model. This raises the natural question of whether the BUFF transform satisfies sNR in a more fine-grained use of the random oracle model, when we consider a real-life iterative-hash-function design (such as Merkle-Damgard or Sponge) and instead idealize the round function. Our discoveries in this direction are two-fold:
First, contrary to what one might expect, we show that there is a simple attack on the non-resignability property sNR of the BUFF-transform when instantiated with an iterative hash function. The attack relies on leaking an intermediate result of the hash computation to the adversary who is challenged to ``resign'' the message. This negative result once more shows the subtlety in the non-resignability property.
Second, on the positive side, we propose a small modification to the original BUFF transform, which we call Sandwich BUFF (for reasons to become clear), and prove the non-resignability property sNR of Sandwich BUFF both for Merkle-Damgard-based hash functions in the random oracle model, and for Sponge-based hash functions in the random permutation model.
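To see why an intermediate hash state constitutes meaningful leakage, recall the plain Merkle-Damgard iteration, sketched below with SHA-256 standing in for an idealized compression function (an assumption for illustration; the paper idealizes the round functions of real designs, and this toy omits many details).

import hashlib

def compress(state, block):
    # Toy compression function standing in for an idealized round function.
    return hashlib.sha256(state + block).digest()

def merkle_damgard(msg, iv=b"\x00" * 32, block_size=64):
    # Pad with a 0x80 byte, zeros, and the 8-byte message length (in the
    # spirit of MD strengthening), then iterate over the blocks.
    pad_len = (-len(msg) - 9) % block_size
    data = msg + b"\x80" + b"\x00" * pad_len + len(msg).to_bytes(8, "big")
    state = iv
    for i in range(0, len(data), block_size):
        state = compress(state, data[i:i + block_size])
        # Each intermediate `state` here is exactly the kind of value whose
        # leakage enables the attack described above.
    return state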
2025
TCC
A Fiat-Shamir Transformation From Duplex Sponges
Abstract
We analyze a variant of the Fiat--Shamir transformation based on an ideal permutation. The transformation relies on the popular duplex sponge paradigm, and minimizes the number of calls to the permutation (given the information to absorb/squeeze). This closely models deployed variants of the Fiat--Shamir transformation, and our analysis provides concrete security bounds to guide security parameters in practice. Our results contribute to the ongoing community-wide effort to achieve rigorous, and ultimately formally verified, security proofs for deployed cryptographic proofs. Along the way, we find that indifferentiability (a property proven for many modes of operation, including the duplex sponge) is ill-suited for establishing the knowledge soundness and zero knowledge of a non-interactive argument.
We additionally contribute SpongeFiSh, an open-source Rust library implementing our Fiat--Shamir transformation. The library is interoperable across multiple cryptographic frameworks, and works with any choice of permutation. The library comes equipped with Keccak and Poseidon permutations, as well as several ``codecs'' for re-mapping prover and verifier messages to the permutation's domain.
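To fix ideas about the duplex paradigm analyzed above: the sponge state splits into a rate and a capacity, absorbing XORs data into the rate, and each verifier challenge is squeezed from the rate after a permutation call. Below is a toy Python sketch; SHAKE-256 stands in for the ideal permutation purely for illustration (it is not a permutation), and the byte sizes are arbitrary. SpongeFiSh itself is a Rust library built on Keccak or Poseidon.

import hashlib

RATE, CAPACITY = 16, 32   # toy sizes, in bytes

def perm(state):
    # Stand-in for the ideal permutation (SHAKE-256, illustration only).
    return hashlib.shake_256(state).digest(RATE + CAPACITY)

class DuplexSponge:
    def __init__(self, domain_sep=b"toy-fs"):
        self.state = perm(domain_sep.ljust(RATE + CAPACITY, b"\x00"))
    def absorb(self, data):
        # XOR each chunk into the rate, then apply the permutation.
        for i in range(0, len(data), RATE):
            chunk = data[i:i + RATE].ljust(RATE, b"\x00")
            rate = bytes(a ^ b for a, b in zip(self.state[:RATE], chunk))
            self.state = perm(rate + self.state[RATE:])
    def squeeze_challenge(self):
        out = self.state[:RATE]      # read the rate as the challenge
        self.state = perm(self.state)
        return out

# Fiat-Shamir usage: absorb each prover message, squeeze each challenge.
fs = DuplexSponge()
fs.absorb(b"first prover message")
c1 = fs.squeeze_challenge()
fs.absorb(b"second prover message")
c2 = fs.squeeze_challenge()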
2025
TCC
Pseudorandom FE and iO with Applications
Abstract
We propose the abstractions of Functional Encryption (FE) and Indistinguishability Obfuscation (iO) for {\it pseudorandom} functionalities which are strictly weaker than their general counterparts. Intuitively, a pseudorandom functionality means that the output of the circuit is indistinguishable from uniform for {\it every} input seen by the adversary. We then leverage weak indistinguishability style security of these tools to obtain the following applications:
1. {\it Attribute Based Encryption for Unbounded Depth Circuits.} Assuming $\IND$-secure FE for pseudorandom functionalities and LWE, we construct Attribute Based Encryption (ABE) for circuits of unbounded depth. Previously, such ABE required the circular Evasive LWE assumption (Hsieh, Lin and Luo, FOCS 2023), which has recently been subject to zeroizing attacks.
2. {\it Attribute Based Encryption for Turing Machines.} Assuming $\IND$-secure FE for pseudorandom functionalities and circular small-secret LWE, we construct Attribute Based Encryption (ABE) for Turing machines. Previously, such ABE required either private coin Evasive LWE (Agrawal, Kumari and Yamada, Crypto 2024) or circular Evasive LWE (Cini and Wee, Eurocrypt 2025), both of which admit attacks in the general case.
3. {\it Multi Input Predicate Encryption for Polynomial Arity.} Assuming $\IND$-secure multi-input FE for pseudorandom functionalities, we construct Multi Input Predicate Encryption (${\sf MIPE}$) for ${\sf P}$ with polynomial arity. Previously, ${\sf MIPE}$ for ${\sf P}$ was known only for {\it constant} arity, using private coin evasive LWE (Agrawal, Rossi, Yadav and Yamada, Crypto 2023).
4. {\it Instantiating the Random Oracle.} We use our $\IND$-secure iO for pseudorandom functionalities to instantiate the random oracle in several applications that previously used iO (Hohenberger, Sahai and Waters, Eurocrypt 2014), such as full-domain hash signatures based on trapdoor permutations and more. Our pseudorandom $\iO$ can be used to instantiate these applications, thus reducing their security to strong evasive $\LWE$ and $\LWE$ assumptions.
We provide heuristic constructions of FE and MIFE for pseudorandom functionalities from private coin evasive LWE and plain LWE, where private coin evasive LWE is suitably parametrized to avoid all known attacks for the functionalities we consider in this work. This implies iO for pseudorandom functionalities from the same assumptions.
2025
TCC
Multiparty Homomorphic Secret Sharing and More from LPN and MQ
Abstract
We give the first constructions of multiparty pseudorandom correlation generators, distributed point functions, and (negligible-error) homomorphic secret sharing for constant-degree polynomials for any number of parties without using LWE or iO. Our constructions are proven secure under the combination of LPN with dimension $n$, $2n$ samples, and noise rate $n^{\eps-1}$ for a small constant $\eps$, and MQ with $n$ variables and $n^{1+\delta}$ equations.
As applications of our results, we obtain from the same assumptions secure multiparty computation protocols with sublinear communication and silent preprocessing, as well as private information retrieval for $M$ servers and size-$\secpar^d$ databases with optimal download rate and client-to-server communication $M^d\cdot \secpar^3$.
2025
TCC
Untelegraphable Encryption and its Applications
Abstract
We initiate the study of untelegraphable encryption (UTE), founded on the no-telegraphing principle, which allows an encryptor to encrypt a message such that a binary string representation of the ciphertext cannot be decrypted by a user with the secret key, a task that is classically impossible. This is a natural relaxation of unclonable encryption (UE), inspired by the recent work of Nehoran and Zhandry (ITCS 2024), who showed a computational separation between the no-cloning and no-telegraphing principles.
In this work, we define and construct UTE information-theoretically in the plain model. Building on this, we give several applications of UTE and study the interplay of UTE with UE and well-studied tasks in quantum state learning, yielding the following contributions:
- A construction of collusion-resistant UTE from standard secret-key encryption (SKE). We additionally show that hyper-efficient shadow tomography (HEST) is impossible assuming collusion-resistant UTE exists. By considering a relaxation of collusion-resistant UTE, we are able to show the impossibility of HEST assuming only pseudorandom state generators (which may not imply one-way functions). This almost unconditionally answers an open inquiry of Aaronson (STOC 2018).
- A construction of UTE from a quasi-polynomially secure one-shot message authentication code (OSMAC) in the classical oracle model, such that there is an explicit attack that breaks UE security for an unbounded polynomial number of decryptors.
- A construction of everlasting secure collusion-resistant UTE, where the decryptor adversary can run in unbounded time, in the quantum random oracle model (QROM), and formal evidence that a construction in the plain model is a challenging task. We additionally show that HEST with unbounded post-processing time (which we call weakly-efficient shadow tomography) is impossible assuming everlasting secure collusion-resistant UTE exists.
- A construction of secret sharing for all polynomial-size policies that is resilient to joint and unbounded classical leakage from collusion-resistant UTE and classical secret sharing for all policies.
- A construction (and definition) of collusion-resistant untelegraphable secret-key functional encryption (UTSKFE) from single-decryptor functional encryption and plain secret-key functional encryption, and a construction of collusion-resistant untelegraphable public-key functional encryption from UTSKFE, plain SKE, and plain public-key functional encryption.
2025
TCC
Pseudorandom Correlation Functions for Garbled Circuits
Abstract
In this paper, we define the notion of pseudorandom correlation generators (PCGs) and functions (PCFs) for garbling correlations. With our Garbling PCG or PCF, two parties can non-interactively generate a virtually unbounded number of secret-shared garbled circuits and corresponding secret-shared garbled inputs. With the shares of the garbled circuit and garbled input, anyone can recover the garbled circuit and evaluate it to obtain the result of the computation in the clear.
In the process of constructing Garbling PCFs, we introduce a new primitive that we call a Topology-Adaptive PCF (TAPCF), which we construct from two different variants of the learning parity with noise (LPN) assumption. Informally, a TAPCF is a PCF that additionally allows the target correlation to be specified on-demand (i.e., at evaluation time). Using our TAPCF construction as a building block, we construct a Garbling PCF that allows the parties to specify the circuit they wish to garble on-the-fly. Under realistic parameter settings, we estimate that, with our construction, two parties can generate one garbled circuit per second, for garbled circuits with 10,000 AND gates.
We show that Garbling PCFs have several applications: We provide constructions for (1) an efficient homomorphic secret-sharing scheme for specific circuits, (2) a zero-knowledge proof system for homomorphic secret sharing that supports checking unstructured languages, and (3) a semi-honest reusable two-round, two-party computation protocol supporting non-interactive public outputs.
2025
TCC
How to Verify that a Small Device is Quantum, Unconditionally
Abstract
A proof of quantumness (PoQ) allows a classical verifier to efficiently test if a quantum machine is performing a computation that is infeasible for any classical machine. In this work, we propose a new approach for constructing PoQ protocols where soundness holds unconditionally assuming a bound on the memory of the prover, but otherwise no restrictions on its runtime. In this model, we propose two protocols:
1. A simple protocol with a quadratic gap between the memory required by the honest parties and the memory bound of the adversary. The soundness of this protocol relies on Raz's (classical) memory lower bound for matrix inversion (Raz, FOCS 2016).
2. A protocol that achieves an exponential gap, building on techniques from the literature on the bounded storage model (Dodis et al., Eurocrypt 2023).
Both protocols are also efficiently verifiable. Despite having worse asymptotics, our first protocol is conceptually simple and relies only on arithmetic modulo 2, which can be implemented with one-qubit Hadamard and CNOT gates, plus a single one-qubit non-Clifford gate.
2025
TCC
Constrained Verifiable Random Functions Without Obfuscation and Friends
Abstract
Constrained verifiable random functions (CVRFs) are PRFs that unify the properties of verifiable and constrained PRFs. Since they were introduced concurrently by Fuchsbauer and Chandran-Raghuraman-Vinayagamurthy in 2014, it has been an open problem to construct CVRFs without using heavy machinery such as multilinear maps, obfuscation or functional encryption.
We solve this problem by constructing a prefix-constrained verifiable PRF that does not rely on the aforementioned assumptions. Essentially, our construction is a verifiable version of the Goldreich-Goldwasser-Micali PRF. To achieve verifiability we leverage degree-2 algebraic PRGs and bilinear groups. In short, proofs consist of group elements carrying intermediate values of the Goldreich-Goldwasser-Micali PRF in their exponents. These outputs can be verified using pairings since the underlying PRG is of degree 2.
We prove the selective security of our construction under the Decisional Square Diffie-Hellman (DSDH) assumption and a new assumption, which we dub recursive Decisional Diffie-Hellman (recursive DDH).
We prove the soundness of recursive DDH in the generic group model assuming the hardness of the Multivariate Quadratic (MQ) problem and a new variant thereof, which we call MQ+.
Last, in terms of applications, we observe that our CVRF is also an exponent (C)VRF in the plain model. Exponent VRFs were recently introduced by Boneh et al. (Eurocrypt'25) with various applications to threshold cryptography in mind. In addition to that, we give further applications for prefix-CVRFs in the blockchain setting, namely, stake-pooling and compressible randomness beacons.
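For background on the construction sketched above: the Goldreich-Goldwasser-Micali PRF evaluates a binary tree of PRG calls, and releasing the intermediate seed reached after consuming a prefix yields a key constrained to that prefix. A minimal (non-verifiable) sketch follows; the HMAC-based length-doubling PRG is an illustrative stand-in, whereas the paper needs degree-2 algebraic PRGs so that outputs can be verified with pairings.

import hmac, hashlib

def G(seed):
    # Length-doubling PRG: one 32-byte seed -> two 32-byte children.
    return (hmac.new(seed, b"0", hashlib.sha256).digest(),
            hmac.new(seed, b"1", hashlib.sha256).digest())

def ggm_prf(key, bits):
    # Walk the GGM tree, taking the left/right PRG half per input bit.
    s = key
    for b in bits:
        s = G(s)[b]
    return s

# Releasing the node for prefix [1, 0] constrains evaluation to that prefix:
constrained = ggm_prf(b"\x00" * 32, [1, 0])
assert ggm_prf(constrained, [1, 1]) == ggm_prf(b"\x00" * 32, [1, 0, 1, 1])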
2025
TCC
Time-Space Trade-Offs for Sumcheck
Abstract
The sumcheck protocol is a fundamental building block in the design of probabilistic proof systems, and has become a key component of recent work on efficient succinct arguments.
We study time-space trade-offs for the prover of the sumcheck protocol in the streaming model, and provide upper and lower bounds that tightly characterize the efficiency achievable by the prover.
For sumcheck claims about a single multilinear polynomial we demonstrate an algorithm that runs in time $O(kN)$ and uses space $O(N^{1/k})$ for any $k \geq 1$. For non-adaptive provers (a class which contains all known sumcheck prover algorithms) we show this trade-off is optimal.
For sumcheck claims about products of multilinear polynomials, we describe a prover algorithm that runs in time $O(N(\log \log N + k))$ and uses space $O(N^{1/k})$ for any $k \geq 1$. We show that, conditioned on the hardness of a natural problem about multiplication of multilinear polynomials, any "natural" prover algorithm that uses space $O(N^{1/2 - \varepsilon})$ for some $\varepsilon > 0$ must run in time $\Omega(N(\log \log N + \log \varepsilon))$.
We implement and evaluate the prover algorithm for products of multilinear polynomials. We show that our algorithm consumes up to $120\times$ less memory compared to the linear-time prover algorithm, while incurring a time overhead of less than $2\times$.
The foregoing algorithms and lower bounds apply in the interactive proof model. We show that in the polynomial interactive oracle proof model one can in fact design a new protocol that achieves a better time-space trade-off of $O(N^{1/k})$ space and $O(N(\log^* N + k))$ time for any $k \geq 1$.
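For readers new to the protocol, here is a reference implementation of sumcheck for a single multilinear polynomial, given by its table of evaluations over the Boolean hypercube. This is the baseline linear-space prover (the trade-offs above reduce that space); the field size and variable ordering are arbitrary choices for the sketch.

import random

P = 2**61 - 1   # toy prime field

def sumcheck(table, n):
    # `table` holds f's evaluations on {0,1}^n (length 2^n), f multilinear,
    # with the first variable as the most significant index bit.
    claim = sum(table) % P
    transcript, challenges = [], []
    for _ in range(n):
        half = len(table) // 2
        g0 = sum(table[:half]) % P           # g_i(0): first variable = 0
        g1 = sum(table[half:]) % P           # g_i(1): first variable = 1
        transcript.append((g0, g1))
        r = random.randrange(P)              # verifier challenge (simulated)
        challenges.append(r)
        # Fold: restrict the first variable to r (multilinear interpolation).
        table = [(table[i] + r * (table[half + i] - table[i])) % P
                 for i in range(half)]
    return claim, transcript, challenges, table[0]

def verify(claim, transcript, challenges, final_eval):
    expected = claim
    for (g0, g1), r in zip(transcript, challenges):
        if (g0 + g1) % P != expected:
            return False
        expected = (g0 + r * (g1 - g0)) % P  # g_i(r)
    return expected == final_eval            # one oracle query to f

n = 3
table = [random.randrange(P) for _ in range(2**n)]
assert verify(*sumcheck(table, n))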
2025
TCC
Practical Secure Delegated Linear Algebra with Trapdoored Matrices
Abstract
Most heavy computation occurs on servers owned by a second party. This reduces data privacy, resulting in interest in data-oblivious computation, which typically severely degrades performance. Secure and fast delegated computation is particularly important for linear algebra, which comprises a large fraction of total computation and is best run on highly specialized hardware often accessible only through the cloud.
We state the natural efficiency and security desiderata for fast and data-oblivious delegated linear algebra. We demonstrate the existence of \textit{Trapdoored}-\textit{Matrix} families based on an LPN assumption, and provide a scheme for secure delegated matrix-matrix and matrix-vector multiplication based on the existence of trapdoored matrices. We achieve sublinear overhead for the server, dramatically reduced computation for the client, and various practical advantages over previous protocols.
2025
TCC
Unconditional foundations for supersingular isogeny-based cryptography
Abstract
In this paper, we prove that the supersingular isogeny problem (Isogeny), endomorphism ring problem (EndRing) and maximal order problem (MaxOrder) are equivalent under probabilistic polynomial time reductions, unconditionally.
Isogeny-based cryptography is founded on the presumed hardness of these problems, and their interconnection is at the heart of the design and analysis of cryptosystems like the SQIsign digital signature scheme. Previously known reductions relied on unproven assumptions such as the generalized Riemann hypothesis. In this work, we present unconditional reductions, and extend this network of equivalences to the problem of computing the lattice of all isogenies between two supersingular elliptic curves (HomModule).
For cryptographic applications, one requires computational problems to be hard on average for random instances. It is well-known that if Isogeny is hard (in the worst case), then it is hard for random instances. We extend this result by proving that if any of the above-mentioned classical problems is hard in the worst case, then all of them are hard on average. In particular, if there exist hard instances of Isogeny, then all of Isogeny, EndRing, MaxOrder and HomModule are hard on average.
2025
TCC
Information-Theoretic Broadcast-Optimal MPC
Abstract
Broadcast, though often used as a black box in cryptographic protocols, is expensive to realize in terms of rounds and communication complexity. We investigate the minimal use of broadcast in round-optimal information-theoretic MPC, with statistical security. For information-theoretic MPC with guaranteed output delivery, four rounds of communication are necessary and sufficient (Applebaum, Kachlon and Patra, FOCS 2020; Applebaum, Kachlon and Patra, STOC 2023). We show that broadcast is unavoidable in the second and third rounds of statistical MPC protocols. To complement our lower bounds, we modify the protocol of Applebaum, Kachlon and Patra (STOC 2023) to make use of broadcast only in the second and third round. Along the way, we show that the sharing phase of any three-round information-theoretic VSS protocol must also make use of broadcast in the second and third rounds.
2025
TCC
Threshold Signatures from One-Way Functions
Abstract
A threshold signature allows a party to delegate its signing rights to $n$ parties, such that any subset of size $t$ can sign a message on its behalf. In this work, we show how to construct threshold signatures for any $t$ and $n$ from one-way functions, thus establishing the latter as a necessary and sufficient computational assumption. Our protocol makes non-black-box use of one-way functions, and can be generalized to other access structures, such as monotone policies.
2025
TCC
Separating Pseudorandom Codes from Local Oracles
Abstract
Pseudorandom codes (PRCs) are error-correcting codes with the distinguishing feature that their codewords are computationally indistinguishable from random strings. Introduced by Christ and Gunn (CRYPTO 2024), PRCs have found applications in areas such as AI watermarking, where both robustness and pseudorandomness are essential. All known constructions of PRCs rely on coding-theoretic hardness assumptions. In this work, we study how inherent the use of coding-theoretic hardness is in the construction of pseudorandom codes.
We show that there is no black-box construction of PRCs with binary alphabets capable of decoding from a constant fraction of Bernoulli noise from a class of oracles we call local oracles. The class of local oracles includes random oracles and trapdoor permutation oracles, and can be interpreted as a meaningful notion of oracles that are not resilient against noise. Our separation result is cast in the Impagliazzo-Rudich framework and crucially relies on the Bonami-Beckner hypercontractivity theorem on the Boolean hypercube.
As a complementary result, we show that PRCs with large alphabets that can tolerate high error rates can indeed be constructed in a black-box manner from one-way functions.
2025
TCC
Differentially Private Compression and the Sensitivity of LZ77
Abstract
We initiate the study of differentially private data-compression schemes motivated by the insecurity of the popular "Compress-Then-Encrypt" framework. Data compression is a useful tool that exploits redundancy in data to reduce storage/bandwidth when files are stored or transmitted. However, if the contents of a file are confidential then the \emph{length} of a compressed file might leak confidential information about the content of the file itself. Encrypting a compressed file does not eliminate this leakage as data encryption schemes are only designed to hide the \emph{content} of a confidential message, not its \emph{length}. In our proposed \emph{Differentially Private Compress-Then-Encrypt} framework, we add a random positive amount of padding to the compressed file to ensure that any leakage satisfies the rigorous privacy guarantee of $(\epsilon,\delta)$-differential privacy. The amount of padding that needs to be added depends on the sensitivity of the compression scheme to small changes in the input, i.e., to what degree changing a single character of the input message can impact the length of the compressed file. While some popular compression schemes are highly sensitive to small changes in the input, we argue that effective data compression schemes do not necessarily have high sensitivity. Our primary technical contribution is analyzing the fine-grained sensitivity of the LZ77 compression scheme (IEEE Trans. Inf. Theory 1977) which is one of the most common compression schemes used in practice. We show that the global sensitivity of the LZ77 compression scheme has the upper bound $O(W^{2/3}\log n)$ where $W\leq n$ denotes the size of the sliding window. When $W=n$, we show the lower bound $\Omega(n^{2/3}\log^{1/3}n)$ for the global sensitivity of the LZ77 compression scheme which is tight up to a sublogarithmic factor.
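As a toy illustration of the quantity being analyzed, the sketch below (our own construction, not the paper's analysis) implements a naive greedy LZ77-style parse and measures how much flipping a single input character can change the compressed length:

```python
# Naive greedy LZ77-style parse; token count stands in for compressed
# length. A toy model for probing length sensitivity, not real LZ77.
def lz77_parse(s, window=64):
    i, tokens = 0, 0
    while i < len(s):
        best = 0
        for j in range(max(0, i - window), i):   # candidate match starts
            k = 0
            while i + k < len(s) and j + k < i and s[j + k] == s[i + k]:
                k += 1
            best = max(best, k)
        i += max(best, 1)    # consume the match, or one literal character
        tokens += 1
    return tokens

def sensitivity_at(s, pos, window=64):
    """Change in parse length when the character at pos is flipped."""
    t = s[:pos] + ('b' if s[pos] == 'a' else 'a') + s[pos + 1:]
    return abs(lz77_parse(t, window) - lz77_parse(s, window))

msg = 'ab' * 100                                  # highly compressible
print(lz77_parse(msg))                            # few tokens
print(max(sensitivity_at(msg, p) for p in range(len(msg))))
```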
2025
TCC
Scalable Multi-Server Private Information Retrieval
Abstract
We revisit multi-server Private Information Retrieval (PIR), where the client interacts with $S$ non-colluding servers. Ideally, we want a *scalable* family of multi-server PIR schemes, where all the performance metrics of the scheme decrease as $S$ increases. However, no prior work achieved scalability in any setting, under any hardness assumption.
In this paper, we construct new multi-server, information-theoretically secure *scalable* PIR schemes for three natural settings. First, we give a construction where all the performance metrics scale at an equal rate. Second, we give a scalable construction that minimizes the per-query bandwidth. Third, we give a scalable construction that minimizes the per-query online bottleneck cost (the maximum of the bandwidth and computation). For the first two settings, our constructions are *doubly efficient* with only a super-constant number of servers. In comparison, the best known prior works in the information-theoretic setting required super-logarithmically many servers to achieve the doubly efficient notion.
Our techniques for achieving scalable PIR also enable us to advance the state of the art in the polynomial space setting. In this setting, we show how to improve the space consumption of prior works by a polynomial factor while preserving all other metrics. Further, we show a new balancing technique that allows us to further minimize the bandwidth per query by trading off the computation and server space, thus enabling a smoother tradeoff between the metrics and generalizing the design space.
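For orientation, the classical 2-server information-theoretic PIR protocol (folklore background, not one of the paper's constructions) illustrates the non-colluding-server setting: each query is marginally uniform, so neither server alone learns the target index.

```python
import random

rng = random.Random(0)

def client_query(n, i):
    """Two queries, each marginally uniform, differing exactly at index i."""
    q0 = {j for j in range(n) if rng.random() < 0.5}
    q1 = q0 ^ {i}                 # symmetric difference flips membership of i
    return q0, q1

def server_answer(db, q):
    """A server XORs the requested bits; one query alone reveals nothing."""
    a = 0
    for j in q:
        a ^= db[j]
    return a

db = [1, 0, 1, 1, 0, 0, 1, 0]
q0, q1 = client_query(len(db), 5)
print(server_answer(db, q0) ^ server_answer(db, q1) == db[5])   # True
```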
2025
TCC
Offline-Online Indifferentiability of Cryptographic Systems
Abstract
The indifferentiability framework has become a standard methodology that enables us to study the security of cryptographic constructions in idealized models of computation. Unfortunately, while indifferentiability provides strong guarantees whenever the security of a construction is captured by a "single-stage" security game, it may generally provide no meaningful guarantees when the security is captured by a "multi-stage" one. In particular, the indifferentiability framework does not capture offline-online games, where the adversary can perform an extensive offline computation to later speed up the online phase. Such security games are extremely common, both in practice and in theory. Over the past decade, there have been numerous attempts to meaningfully extend the indifferentiability framework to offline-online games; however, they all ultimately met with little success.
In this work, our contribution is threefold. First, we propose an extension of the classical indifferentiability framework, which we refer to as *offline-online-indifferentiability*, that applies in the context of attackers with an expensive offline phase (\`{a} la Ghoshal and Tessaro, CRYPTO '23). Second, we show that our notion lends itself to a natural and meaningful composition theorem for offline-online security games. Lastly, as our main technical contribution, we analyze the offline-online-indifferentiability of two classical variants of the Merkle-Damgård hashing mechanism, one where the key is fed only to the first block in the chain and the other where the key is fed to each block in the chain. For both constructions, we prove a *tight* bound on their offline-online-indifferentiability (i.e., an upper bound and an attack that matches it). Notably, our bound for the second variant shows that the construction satisfies *optimal* offline-online-indifferentiability.
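To make the two constructions concrete, here is a plaintext sketch of the two keyed Merkle-Damgård variants as we read them from the abstract; the compression function, block size, and padding below are simplifying assumptions of ours, not the paper's formalization.

```python
import hashlib

def f(chain, block):
    """Toy compression function (SHA-256 stands in for the ideal f)."""
    return hashlib.sha256(chain + block).digest()

IV = bytes(32)

def md_key_first(key, blocks):
    """Variant 1: the key is fed only to the first block of the chain."""
    h = f(IV, key)
    for m in blocks:
        h = f(h, m)
    return h

def md_key_every(key, blocks):
    """Variant 2: the key is fed to each block in the chain."""
    h = IV
    for m in blocks:
        h = f(h, key + m)
    return h

msg = [b'a' * 32, b'b' * 32]
print(md_key_first(b'k' * 32, msg).hex()[:16])
print(md_key_every(b'k' * 32, msg).hex()[:16])
```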
2025
TCC
Relativized Succinct Arguments in the ROM Do Not Exist
Abstract
A relativized succinct argument in the random oracle model (ROM) is a succinct argument in the ROM that can prove/verify the correctness of computations that involve queries to the random oracle. We prove that relativized succinct arguments in the ROM do not exist. The impossibility holds even if the succinct argument is interactive, and even if soundness is computational (rather than statistical).
This impossibility puts on a formal footing the commonly-held belief that succinct arguments in the ROM require non-relativizing techniques. Moreover, our results stand in sharp contrast with other oracle models, for which a recent line of work has constructed relativized succinct non-interactive arguments (SNARGs). Indeed, relativized SNARGs are a powerful primitive that, e.g., can be used to obtain constructions of IVC (incrementally-verifiable computation) and PCD (proof-carrying data) based on falsifiable cryptographic assumptions. Our results rule out this approach for IVC and PCD in the ROM.
2025
TCC
Quantum Rewinding for IOP-Based Succinct Arguments
Abstract
We analyze the post-quantum security of succinct interactive arguments constructed from interactive oracle proofs (IOPs) and vector commitment schemes. Specifically, we prove that an interactive variant of the *BCS transformation* is secure in the standard model against quantum adversaries when the vector commitment scheme is collapse binding.
Prior work established the post-quantum security of Kilian's succinct interactive argument, a special case of the BCS transformation for one-message IOPs (i.e., PCPs). That analysis is inherently limited to one message because the reduction, like all prior quantum rewinding reductions, aims to extract classical information (a PCP string) from the quantum argument adversary. Our reduction overcomes this limitation by instead extracting a *quantum algorithm* that implements an IOP adversary; representing such an adversary classically may in general require exponential complexity.
Along the way we define *collapse position binding*, which we propose as the ``correct'' definition of collapse binding for vector commitment schemes, eliminating shortcomings of prior definitions.
As an application of our results, we obtain post-quantum secure succinct arguments, in the standard model (no oracles), with the *best asymptotic complexity known*.
2025
TCC
Security Amplification of Threshold Signatures in the Standard Model
Abstract
The current standardization calls for threshold signatures have highlighted the need for appropriate security notions providing security guarantees strong enough for broad application. To address this, Bellare et al. [Crypto'22] introduced a hierarchy of unforgeability notions for threshold signatures. Recently, Navot and Tessaro [Asiacrypt'24] introduced a new game-based definition of strong (one-more) unforgeability for threshold signatures, which, however, does not achieve the strongest level of security in the hierarchy of Bellare et al.
Navot and Tessaro analyzed several existing schemes w.r.t. this security notion, but all positive results rely on idealized models. This is in contrast to the weaker security notion of (standard) unforgeability, for which standard-model constructions exist. This leaves open a fundamental question: is getting strong unforgeability fundamentally harder than standard unforgeability for threshold signatures?
In this paper we bridge this gap by showing a generic construction lifting any unforgeable threshold signature scheme to strong unforgeability. The building blocks of our construction can be instantiated in the standard model under standard assumptions. The achieved notion of strong unforgeability extends the definition of Navot and Tessaro to achieve the strongest level of security according to the hierarchy of Bellare et al. (following a recent classification of security notions for (blind) threshold signatures by Lehmann, Nazarian, and Özbay [Eurocrypt'25]).
The starting point for our transformation is an existing construction for single-user signatures from chameleon hash functions by Steinfeld, Pieprzyk and Wang [RSA'07]. We first simplify their construction by relying on a stronger security notion for chameleon hash functions. The bulk of our technical contribution is then to translate this framework into the threshold setting. Towards this goal, we introduce a game-based definition for threshold chameleon hash functions (TCHF) and provide a construction of TCHF that is secure under DDH in the standard model. We believe that our new notion of TCHF might also be of independent interest.
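For background, the following toy sketch shows a textbook discrete-log chameleon hash of the kind such transformations build on (parameters are insecurely small, and this is not the paper's DDH-based threshold construction): the trapdoor holder can open the hash to any message.

```python
q, p = 11, 23        # toy safe-prime group (insecurely small, illustration)
g = 4                # generator of the order-q subgroup of Z_p^*
x = 7                # trapdoor
h = pow(g, x, p)     # public hash key

def ch(m, r):
    """Chameleon hash CH(m, r) = g^m * h^r mod p."""
    return (pow(g, m, p) * pow(h, r, p)) % p

def collide(m, r, m2):
    """With trapdoor x, find r2 such that CH(m2, r2) == CH(m, r)."""
    return (r + (m - m2) * pow(x, -1, q)) % q

m, r, m2 = 3, 5, 9
r2 = collide(m, r, m2)
print(ch(m, r) == ch(m2, r2))   # True: a trapdoor collision
```

Without the trapdoor, finding such collisions is as hard as computing discrete logarithms, which is what makes the primitive useful for lifting unforgeability.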
2025
TCC
Pseudorandom Function-like States from Common Haar Unitary
Abstract
Recent active studies have demonstrated that cryptography without one-way functions (OWFs) could be possible in the quantum world. Many fundamental primitives that are natural quantum analogs of OWFs or pseudorandom generators (PRGs) have been introduced, and their mutual relations and applications have been studied. Among them, pseudorandom function-like state generators (PRFSGs) [Ananth, Qian, and Yuen, Crypto 2022] are one of the most important primitives. PRFSGs are a natural quantum analog of pseudorandom functions (PRFs), and imply many applications such as IND-CPA secret-key encryption (SKE) and EUF-CMA message authentication codes (MACs). However, the only known constructions of (many-query-secure) PRFSGs are from OWFs or pseudorandom unitaries (PRUs).
In this paper, we construct classically-accessible adaptively secure PRFSGs in the invertible quantum Haar random oracle (QHRO) model, introduced in [Chen and Movassagh, Quantum]. The invertible QHRO model is an idealized model where any party can access a single public Haar random unitary and its inverse, which can be considered a quantum analog of the random oracle model. Our PRFSG constructions resemble the classical Even-Mansour encryption based on a single permutation, and are secure against any unbounded polynomial number of queries to the oracle and construction. To our knowledge, this is the first application in the invertible QHRO model without any assumption or conjecture. The previous best constructions in idealized models are PRFSGs secure up to $o(\lambda/\log \lambda)$ queries in the common Haar state model [Ananth, Gulati, and Lin, TCC 2024] and (inverseless) PRUs in a relaxed QHRO model without inverse access [Ananth, Bostanci, Gulati, and Lin, Eurocrypt 2025].
We develop new techniques on Haar random unitaries to prove the selective and adaptive security of our PRFSGs. For selective security, we introduce a new formula, which we call the Haar twirl approximation formula. For adaptive security, we show the unitary reprogramming lemma and the unitary resampling lemma, along with several technical tools for unitary oracle security proofs with pure-state queries. These are of independent interest and may have many further applications. In particular, using the approximation formula, we give an alternative proof of the non-adaptive security of the PFC ensemble [Metger, Poremba, Sinha, and Yuen, FOCS 2024] as an additional result.
Finally, we prove that our construction is neither a PRU nor a quantum-accessible non-adaptive PRFSG, by presenting quantum polynomial-time attacks. Our attacks are based on a generalization of the hidden subgroup problem in which the relevant function outputs quantum states.
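For reference, the classical Even-Mansour cipher the abstract alludes to whitens a single public random permutation with key material. Below is a toy 8-bit sketch of the classical scheme; in the paper's quantum analogue, the public permutation's role is played by the Haar random unitary oracle.

```python
import random

rng = random.Random(0)
P = list(range(256))             # a public random permutation on 8 bits
rng.shuffle(P)
P_inv = [0] * 256
for a, b in enumerate(P):
    P_inv[b] = a

def em_enc(k1, k2, x):
    """Even-Mansour: whiten a single public permutation with the key."""
    return P[x ^ k1] ^ k2

def em_dec(k1, k2, y):
    return P_inv[y ^ k2] ^ k1

k1, k2 = 0x3A, 0xC5
ct = em_enc(k1, k2, 0x42)
print(em_dec(k1, k2, ct) == 0x42)   # True
```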
2025
TCC
Is It Even Possible? On the Parallel Composition of Asynchronous MPC Protocols
Abstract
Despite several known idiosyncrasies separating the synchronous and the asynchronous models, asynchronous secure multi-party computation (MPC) protocols demonstrate high-level similarities to synchronous MPC, both in design philosophy and abstract structure. As such, a coveted, albeit elusive, {\em desideratum} is to devise automatic translators (e.g., protocol compilers) of feasibility and efficiency results from one model to the other.
In this work, we demonstrate new challenges associated with this goal. Specifically, we study the case of \emph{parallel composition} in the asynchronous setting. We provide formal definitions of this composition operation in the UC framework, which, somewhat surprisingly, have been missing from the literature. Using these definitions, we then turn to charting the feasibility landscape of asynchronous parallel composition.
We first prove strong impossibility results for composition operators that do not assume knowledge of the functions and/or the protocols that are being composed. These results draw a grim feasibility picture, which is in sharp contrast with the synchronous model, and highlight the question:
\begin{center}
{\em Is asynchronous parallel composition even a realistic goal?}
\end{center}
To answer the above (in the affirmative), we provide conditions on the composed protocols that enable a useful form of asynchronous parallel composition, one that turns out to be common in existing constructions.
2025
TCC
On the Impossibility of Actively Secure Distributed Samplers
Abstract
One-round secure computation is generally believed impossible due to the \emph{residual function attack}: any honest-but-curious participant can replay the protocol in their head with a different input and thereby learn a new output. Inputless functionalities are among the few that are immune to this problem.
This paper studies one-round multi-party computation (MPC) protocols that implement the most natural inputless functionality: one that generates a random sample from a fixed distribution. These are called \emph{distributed samplers}. At Eurocrypt 2022, Abram, Scholl and Yakoubov showed how to build this primitive in the semi-honest model with dishonest majority. In this work, we give a lower bound for constructing distributed samplers with a malicious adversary in the standard model. In more detail, we show that for any construction in the stand-alone model with black-box simulation, even with a CRS and honest majority, the output of the sampling protocol must have low entropy. This essentially implies that this type of construction is useless in applications. Our proof is based on an entropic argument, drawing a new connection between computationally secure MPC, information theory and learning theory.
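A toy illustration of the residual function attack described above (all names here are hypothetical, and the point is structural): in a one-round protocol, a party's output is a local function of its own input and the single messages it received, so a curious party can re-evaluate that function on arbitrary inputs, however the messages are encoded.

```python
# Residual function attack sketch: party 1's output computation is a pure
# function of (x1, received message), so it can be replayed "in the head".
f = lambda x1, x2: min(x1, x2)         # hypothetical two-party functionality

def other_party_message(x2):
    """Stand-in for party 2's single protocol message (encoding irrelevant)."""
    return x2

def local_output(x1, msg):
    """Party 1's local output computation from its input and the message."""
    return f(x1, msg)

msg = other_party_message(30)
print(local_output(10, msg))                        # the intended output
print([local_output(x, msg) for x in (0, 20, 40)])  # replayed: f(., 30) leaks
```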
2025
TCC
Deniable Secret Sharing
Abstract
We introduce deniable secret sharing (DSS), which, analogously to deniable encryption, enables shareholders to produce fake shares that are consistent with a target ``fake message'', regardless of the original secret. In contrast to deniable encryption, in a DSS scheme an adversary sees multiple shares, some of which might be real, and some fake. This makes DSS a more difficult task, especially in situations where the fake shares need to be generated by individual shareholders, with limited or no coordination with other shareholders.
We define several desirable properties for DSS, and show both positive and negative results for each. The strongest property is fake hiding, which is a natural analogue of deniability for encryption: given a complete set of shares, an adversary cannot determine whether any shares are fake. We show a construction based on Shamir secret sharing that achieves fake hiding as long as (1) the fakers are qualified (numbering $t$ or more), and (2) the set of real shares which the adversary sees is unqualified. Next, we show a construction based on indistinguishability obfuscation that relaxes condition (1) and achieves fake hiding even when the fakers are unqualified (as long as they comprise more than half of the shareholders).
We also extend the first construction to provide the weaker property of faker anonymity for all thresholds. (Faker anonymity requires that given some real shares and some fake shares, an adversary should not be able to tell which are fake, even if it can tell that some fake shares are present.) All of these constructions require the fakers to coordinate in order to produce fake shares.
On the negative side, we first show that fake hiding is unachievable when the fakers are a minority, even if they coordinate. Further, if the fakers do not coordinate, then even faker anonymity is unachievable as soon as $t < n$ (namely the reconstruction threshold is smaller than the number of parties), if faking is not unanimous. (If faking is unanimous, we show a construction based on indistinguishability obfuscation.)
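A minimal sketch of how Shamir-based faking can work in the qualified-faker case (our toy reading of the mechanism, not the paper's exact scheme): the coordinating fakers fit a fresh degree-$(t-1)$ polynomial through the fake secret and the unqualified set of real shares the adversary may hold, and hand out its evaluations as their fake shares.

```python
P = 2**61 - 1    # prime field (toy choice)

def interp_at(points, x):
    """Lagrange-interpolate the polynomial through `points`, evaluate at x."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# Real Shamir sharing of secret 42 with threshold t = 3 among parties 1..5.
f = lambda x: (42 + 7 * x + 11 * x * x) % P
real = {i: f(i) for i in range(1, 6)}

# Fakers {1,2,3} (qualified) target fake secret 99; the adversary holds at
# most the real shares of {4,5} (unqualified). Fit a fresh degree-2
# polynomial through the fake secret and those real shares.
anchor = [(0, 99), (4, real[4]), (5, real[5])]
fake = {i: interp_at(anchor, i) for i in (1, 2, 3)}

# Any qualified mix of fake and adversary-held real shares opens to 99.
print(interp_at([(1, fake[1]), (4, real[4]), (5, real[5])], 0))  # 99
print(interp_at([(1, fake[1]), (2, fake[2]), (3, fake[3])], 0))  # 99
```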
2025
TCC
Fully-Homomorphic Encryption from Lattice Isomorphism
Abstract
The lattice isomorphism problem (LIP) asks, given two lattices $\Lambda_0$ and $\Lambda_1$, to decide whether there exists an orthogonal linear map from $\Lambda_0$ to $\Lambda_1$. In this work, we show that the hardness of (a circular variant of) LIP implies the existence of a fully-homomorphic encryption scheme for all classical and quantum circuits. Prior to our work, LIP was only known to imply the existence of basic cryptographic primitives, such as public-key encryption or digital signatures.
2025
TCC
Dimensional e$\mathsf{ROS}$ion: Improving the $\mathsf{ROS}$ Attack with Decomposition in Higher Bases
Abstract
We revisit the polynomial attack on the $\mathsf{ROS}$ problem modulo $p$ from \cite{JC:BLLOR22}. Our new algorithm achieves a polynomial-time solution in dimension $\ell \gtrsim 0.726 \cdot \log_2 p$, extending the range of dimensions for which a polynomial attack is known beyond the previous bound of $\ell > \log_2 p$.
We also combine our new algorithm with Wagner's attack to improve the complexity of the general $\mathsf{ROS}$ attack for a range of dimensions where a polynomial solution is still not known.
We implement our polynomial attack and break the one-more unforgeability of blind Schnorr signatures over 256-bit elliptic curves in a few seconds with 192 concurrent sessions.
2025
TCC
Four-round Statistical Non-malleable Zero-knowledge
Abstract
We present a 4-round statistical non-malleable zero-knowledge (NMZK) argument in the plain model under standard hardness assumptions. Our construction can be based on any collision-resistant hash function and injective one-way function, and it guarantees simulation extractability in the delayed-input one-many setting. Before this work, 4-round constructions were known for computational NMZK but not for statistical NMZK.
2025
TCC
Incompressible Encryption with Everlasting Security
Abstract
Recently, the concept of incompressible encryption has emerged as a powerful enhancement to key-leakage resilience. In an incompressible encryption scheme, an adversary who intercepts ciphertexts is forced to dedicate a significant amount of memory to store them in full if they wish to extract any information about the plaintext later when the secret key becomes available. Given two messages, the security game involves two adversaries: the first adversary receives an encryption of one of the messages and produces a compressed state. Then, the second adversary, given both the secret key and the compressed state, attempts to determine which message was encrypted.
Several positive results exist in incompressible cryptography. On the one hand, there are constructions based on minimal assumptions but with a poor rate (i.e., the rate tends to 0). On the other hand, there are rate-1 constructions that achieve optimal efficiency but rely on strong cryptographic assumptions, such as obfuscation.
A stronger security notion, known as \emph{everlasting security}, has been proposed for incompressible encryption. In this formulation, the second adversary, who receives the compressed state and the secret key, is allowed to be computationally \emph{unbounded}. While this notion is conceptually appealing, no constructions of everlasting incompressible encryption were previously known under any assumption, or even in idealized models.
In this work, we give the first construction of everlasting incompressible encryption. In fact, we show that everlasting incompressible encryption is \emph{inherent} in any sufficiently secure public-key encryption scheme. Specifically, we prove that any public-key encryption scheme with subexponential security (when instantiated with an appropriate security parameter) already satisfies the definition of everlasting incompressible encryption with subexponential security. Furthermore, our scheme achieves rate-1, improving upon existing results even for the weaker notion of standard incompressible encryption.
Our results raise concerns about whether the current definition of incompressible encryption adequately captures the expected efficiency properties of such schemes, indicating that refinements may be needed. As one concrete step in this direction, we propose a storage-rate definition for ciphertexts, and show how to construct schemes with constant storage-rate.
2025
TCC
Foundations of Single-Decryptor Encryption
Abstract
Single-decryptor encryption (SDE) is public-key encryption (PKE) where the decryption key is an unclonable quantum state. Coladangelo, Liu, Liu, and Zhandry (CRYPTO 2021) realized the first SDE assuming subexponentially secure indistinguishability obfuscation (iO) and one-way functions (OWFs), along with the polynomial hardness of the learning with errors (LWE) assumption. Since then, SDE has played a pivotal role in recent advances in quantum cryptography. However, despite its central importance in unclonable cryptography, many fundamental questions about SDE remain unanswered. For example, a line of works has proposed various security notions for SDE, but their relationships have hardly been discussed. Moreover, while many subsequent works have adopted the construction methodology of Coladangelo et al., none has explored improving it, leaving open the possibility of a more efficient approach to SDE.
In this work, we address these fundamental questions concerning SDE. Our contributions are threefold.
New security notion: We introduce a strengthened indistinguishability-based security notion for SDE, which we call CPA+ anti-piracy security. We show that CPA+ security unifies the existing security notions for SDE, as detailed in the third item.
New construction: We present an SDE scheme that satisfies CPA+ anti-piracy security, based solely on polynomially secure iO and OWFs. In addition to relying on weaker and more general assumptions, our SDE scheme offers a significant advantage over the scheme of Coladangelo et al., as both the construction and its security proof are much simpler.
Relationships among security notions: We demonstrate that CPA+ anti-piracy security implies all existing security notions for SDE, with the sole exception of identical challenge ciphertext security proposed by Georgiou and Zhandry (EPRINT 2020). Although we do not establish a direct implication from CPA+ anti-piracy security to identical challenge ciphertext security, we provide a generic transformation from an SDE scheme satisfying the former to one achieving the latter in the quantum random oracle model. Additionally, we establish various relationships among different security notions for SDE. By combining these results with our SDE construction, we derive several new feasibility results.
2025
TCC
Large-Plaintext Functional Bootstrapping in FHE with Small Bootstrapping Keys
Abstract
Functional bootstrapping is a core technique in Fully Homomorphic Encryption (FHE). For large plaintexts, to evaluate a general function homomorphically over a ciphertext in the FHEW/TFHE approach, the function's look-up table is encoded in the coefficients of a test polynomial, so the degree of the polynomial must be high enough to hold the entire table.
This increases the bootstrapping time complexity and memory cost, as the sizes of the bootstrapping keys and key-switching keys need to grow accordingly.
In this paper, we propose to encode the look-up table of any function in a polynomial vector, whose coefficients can hold more data. The corresponding representation of the additive group ${\mathbb Z}_q$ used in the RGSW-based bootstrapping is the group of monic monomial permutation matrices, which integrates the permutation matrix representation used by Alperin-Sheriff and Peikert in 2014 and the monic monomial representation used in the FHEW/TFHE scheme. We conduct a comprehensive investigation of the new representation and propose a new bootstrapping algorithm based on it.
The new algorithm supports functional bootstrapping of large plaintexts, and achieves a polynomial reduction in key sizes and a constant-factor improvement in run-time cost.
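For intuition, here is a plaintext-level illustration (no encryption, toy parameters of our own) of the look-up-table encoding that standard FHEW/TFHE-style bootstrapping uses: the table sits in the coefficients of a test polynomial over $\mathbb{Z}[X]/(X^N+1)$, and multiplying by $X^{-m}$ rotates entry $f(m)$ into the constant coefficient.

```python
N = 8                               # toy ring degree (assumption)

def times_x_power(coeffs, k):
    """Multiply a polynomial in Z[X]/(X^N + 1) by X^k (k may be negative)."""
    out = [0] * N
    for i, c in enumerate(coeffs):
        j = (i + k) % (2 * N)       # X^(2N) = 1 and X^N = -1 in this ring
        if j < N:
            out[j] += c
        else:
            out[j - N] -= c
    return out

f = lambda m: (3 * m + 1) % 16         # the function to bootstrap (toy)
test_poly = [f(m) for m in range(N)]   # the look-up table as coefficients

m = 5
rotated = times_x_power(test_poly, -m)
print(rotated[0] == f(m))           # True: f(m) lands in the constant term
```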
2025
ASIACRYPT
Pseudorandom Correlation Functions from Ring-LWR
Abstract
State-of-the-art actively secure multi-party computation protocols, like SPDZ (Damgård et al., CRYPTO 2012), use correlated randomness, like Beaver triples, to achieve a highly efficient online phase. For a long time, the generation of the correlated randomness in the offline phase relied on classical cryptographic primitives, like somewhat homomorphic encryption or oblivious transfer, that required significant communication.
More recently, Boyle et al. (FOCS 2020) defined a new primitive called pseudorandom correlation functions (PCFs) to generate correlated randomness non-interactively. PCFs set up keys for each party in an initial interactive phase, which can then be used by the parties to generate a large number of shares of the correlated randomness without further communication. In the random oracle model (ROM), two-party PCFs can be generically constructed based on evaluating a weak pseudorandom function (WPRF) using a powerful-enough homomorphic secret sharing scheme. However, the concrete efficiency of instantiations of this approach has not been analyzed so far. There are also some works that construct PCFs based on other approaches, but they cannot be used for correlations of degree $\geq 2$ (e.g., Beaver triples) over large rings/fields (such as those used in SPDZ).
In this paper, we improve the complexity and concrete efficiency of PCFs over large rings/fields by presenting a new generic PCF based on the hardness of the ring-learning with rounding (Ring-LWR) problem and FHE. We only share BFV keys in the initial interactive phase. The two parties then use the random oracle to locally sample BFV (pseudo-)ciphertexts encrypting pseudorandom plaintexts. We use a new bootstrapping algorithm for these (pseudo-)ciphertexts that reduces initially saturated noise to a level where the parties can use the homomorphic properties of the BFV scheme to correlate the encrypted randomness locally. Both parties can then produce, without further interaction, shares of the correlated randomness with their secret key share. Our new PCF works with any form of correlated randomness that can be expressed as an arithmetic circuit over a base ring Z_t or field F_p^d, e.g., Beaver or matrix triples.
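For context on the target correlation, this is how one Beaver triple $(a, b, c = ab)$ is consumed in a SPDZ-style online multiplication; the two-party additive sharing and the modulus below are toy assumptions of ours.

```python
import random

t = 2**16                       # toy modulus (assumption; SPDZ uses larger)
rng = random.Random(1)

def share(v):
    """Additive 2-party secret sharing over Z_t."""
    r = rng.randrange(t)
    return (r, (v - r) % t)

# One Beaver triple (a, b, c = a*b): the correlation a PCF would output.
a, b = rng.randrange(t), rng.randrange(t)
c = a * b % t
a_sh, b_sh, c_sh = share(a), share(b), share(c)

x_sh, y_sh = share(1234), share(567)        # shares of the actual inputs

# Parties open d = x - a and e = y - b; these masked values leak nothing.
d = (x_sh[0] - a_sh[0] + x_sh[1] - a_sh[1]) % t
e = (y_sh[0] - b_sh[0] + y_sh[1] - b_sh[1]) % t

# Local shares of x*y = c + d*b + e*a + d*e (the d*e term goes to party 0).
z0 = (c_sh[0] + d * b_sh[0] + e * a_sh[0] + d * e) % t
z1 = (c_sh[1] + d * b_sh[1] + e * a_sh[1]) % t
print((z0 + z1) % t == 1234 * 567 % t)      # True
```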
2025
TCC
Honest Majority Constant-Round MPC with Linear Communication from One-Way Functions
Abstract
Secure multiparty computation (MPC) faces a fundamental efficiency trade-off between round complexity and communication complexity: without fully homomorphic encryption, protocols with constant round complexity (e.g., protocols based on garbled circuits) incur high communication cost, while communication-efficient approaches (e.g., protocols based on secret sharing) have round complexity linear in the depth of the circuit. In this work, we focus on reducing the communication complexity of constant-round MPC protocols in the honest majority setting. Existing results either rely on strong assumptions (e.g., random oracles, DDH, LPN) or incur high communication of $\Omega(|C|n^2\kappa)$ bits under one-way functions (OWFs). However, non-constant-round MPC protocols can achieve linear communication in the number of parties even with information-theoretic security.
We resolve this gap by presenting the first \emph{constant-round} honest majority MPC protocol with \emph{linear} communication complexity of $O(|C|n\kappa + n^2\kappa^2+n^4\kappa)$ only from OWFs. We introduce novel techniques for computing garbled circuits via party virtualization and efficient local computation of virtual parties, which optimize the existing protocols on multiparty garbling. These allow us to overcome the per-gate communication bottleneck of $O(n^2\kappa)$ bits in prior protocols, matching the scalability of the best non-constant-round protocols in the same setting.
2025
ASIACRYPT
Taming Iterative Grinding Attacks on Blockchain Beacons
Abstract
Random beacons play a critical role in blockchain protocols by providing publicly verifiable, unpredictable randomness essential for secure assignment of protocol roles such as block producers and committee membership. In the interest of efficiency, many deployed blockchains adopt beacon algorithms that suffer from grinding: an adversarial attack in which a party exploits freedom given by the protocol to bias the outcome of the random beacon by resampling it several times and picking the most desirable outcome. To compound the problem, beacons often operate in an iterative manner, where the beacon output produced during one protocol epoch serves as the random seed for the beacon's invocation in the next epoch. This amplifies the security threat, as such attacks may then aggregate their power over many epochs.
In this article, we formulate a generic framework for information-theoretic analysis of grinding in iterated randomness beacons. We define the natural grinding capacity of a beacon, intuitively corresponding to the amount of grinding it allows with a uniformly random seed. We then prove that sufficiently strong tail bounds on this quantity can be transformed into a guarantee on smooth min-entropy of the iterated beacon's output, even conditioned on all past outputs and irrespective of the inner workings of the beacon. Such min-entropy guarantees can immediately be translated into corresponding statements about various applications of the beacon to committee selection, incentives, or underlying protocol security.
Our main technical result concerns conventional longest-chain protocols, where we establish that the combinatorial structure of the forest of longest chains can be leveraged to control grinding. Instantiating the generic framework with these grinding upper bounds, we establish that the randomness beacon of the Ouroboros Praos protocol is secure against adversaries controlling up to about 12% of stake—even without any assumptions bounding the adversarial computational power invested into grinding. This is a qualitatively new guarantee for the protocol.
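A toy simulation of the attack being modeled (our own numbers, apart from the 12% stake figure quoted from the abstract): an adversary who can resample the beacon $k$ times per epoch keeps the most favorable outcome, inflating its effective success probability far beyond its stake.

```python
import random

rng = random.Random(42)
stake = 0.12          # adversarial stake fraction (figure from the abstract)
k = 16                # grinding attempts per epoch (our toy choice)
trials = 100_000

# Probability the adversary's preferred outcome (e.g., it is elected
# leader) occurs in at least one of its k resampled beacon values.
wins = sum(
    any(rng.random() < stake for _ in range(k))
    for _ in range(trials)
)
print(wins / trials)  # about 1 - (1 - 0.12)**16 = 0.87, far above 0.12
```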
2025
ASIACRYPT
Persistence of Hourglass(-like) Structure: Improved Differential-Linear Distinguishers for Several ARX Ciphers
Abstract
The ARX structure plays a crucial role in symmetric-key primitives, with differential-linear (DL) attacks being among the most effective cryptanalysis techniques against ARX ciphers. In this paper, we present a systematic re-decomposition technique for DL distinguishers of ARX ciphers and identify, for the first time, the hourglass(-like) structural commonalities among the optimal DL distinguishers found by various deduction techniques. These commonalities, also supported by comprehensive experiments, motivate an efficient and generalized approach to constructing optimal hourglass(-like) structural DL distinguishers. Our method yields significant advances when applied to \speck, \alzette, and the underlying permutations of \siphash{} and \chaskey: (1) the first 11- to 14-round DL distinguishers of \alzette; (2) the first (valid) DL distinguishers for 11-round \speck32, 12-round \speck48, and 16-round \speck96; (3) \emph{deterministic} (correlation $\pm1$) 3-round DL distinguishers for \siphash-2-4 and significantly improved 4-round ones. All these distinguishers come with both theoretical and experimental verification. We further analyze ARX-based Latin dance stream ciphers, achieving improved DL distinguishers for 7/7.25-round \chacha, 8-round \salsa, and 5.5-round \forro. Though some of the improvements are modest, they verify the effectiveness of our method across a broader range of instances. This work provides new insights for DL distinguisher construction and enhances understanding of the security of ARX ciphers.
2025
ASIACRYPT
Randomized Agreement, Verifiable Secret Sharing, and Multi Party Computation in Granular Synchrony
Abstract
Granular Synchrony (Giridharan et al. DISC 2024) is a new network model where the network is viewed as a graph consisting of synchronous, partially synchronous and asynchronous communication links. It has been shown that Granular Synchrony allows deterministic Byzantine agreement protocols to achieve a corruption threshold $n/3 \leq t < n/2$ in between complete asynchrony and complete synchrony if and only if the network satisfies the right condition, namely, if no two groups of honest parties of size $n-2t$ can be partitioned from each other.
In this work, we show that the same network condition is also tight for Agreement on a Common Subset (ACS), Verifiable Secret Sharing (VSS) and secure Multi-Party Computation (MPC) with guaranteed output delivery when the corruption threshold is between one-third and one-half. Our protocols, being randomized, assume that all links are either synchronous or asynchronous, and do not assume any partially synchronous links. Our ACS protocol incurs an amortized communication cost of $O(n^3\lambda)$ bits per input, and our VSS and MPC protocols incur amortized communication costs of $O(n^3)$ and $O(n^4)$ field elements per secret and per multiplication gate, respectively. To design our protocols, we also construct protocols for Reliable Broadcast and Externally Valid Byzantine Agreement (EVBA), which are of independent interest.
2025
ASIACRYPT
Scalable zkSNARKs for Matrix Computations: A Generic Framework for Verifiable Deep Learning
Abstract
Sublinear proof sizes have recently become feasible in verifiable machine learning (VML), yet no approach achieves the trio of strictly linear prover time, logarithmic proof size and verification time, and architecture privacy. Hurdles persist because we lack a succinct commitment to the full neural network and a framework for heterogeneous models, leaving verification dependent on architecture knowledge. Existing limits motivate our new approach: a unified proof-composition framework that casts VML as the design of zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) for matrix computations. Representing neural networks with linear and non-linear layers as a directed acyclic graph of atomic matrix operations enables topology-aware composition without revealing the graph. Modeled this way, we split proving into a reduction layer and a compression layer that attests to the reduction with a proof of a proof. At the reduction layer, inspired by reductions of knowledge (Crypto '23), root-node proofs are reduced to leaf-node proofs under an interface standardized for heterogeneous linear and non-linear operations. Next, a recursive zkSNARK compresses the transcript into a single proof while preserving architecture privacy.
2025
TCC
Special Genera of Hermitian Lattices and Applications to HAWK
Abstract
In its decisional form, the module-Lattice Isomorphism Problem (decision module-LIP) has recently received initial attention in a paper by Ling, Liu and Mendelsohn. The authors gave a polynomial-time algorithm to distinguish spinor genera within the genus of a quadratic binary $\mathcal{O}_F$-lattice, assuming that $\mathcal{O}_F$ is a principal ideal domain. However, this algorithm would not impact cryptographic schemes based on decision module-LIP for lattices such as those employed in HAWK, i.e., for binary $\mathcal{O}_K$-lattices equipped with a Hermitian form (with $K$ a cyclotomic number field). Motivated by HAWK's framework, we investigate a concept that serves as an analogue of the spinor genus for Hermitian lattices, called the special genus. This notion was studied by Shimura, who provided a complete set of invariants for describing special genera. Building on this result, we propose an algorithm to determine whether two Hermitian lattices belong to the same special genus. Specifically, for HAWK's lattice and its siblings, our algorithm runs in classical polynomial time. Nevertheless, we provide numerical evidence suggesting that the ability to distinguish special genera does not, in practice, constitute a significant advantage for solving decision module-LIP.
2025
ASIACRYPT
Accelerating TFHE with Sorted Bootstrapping Techniques
Abstract
Fully Homomorphic Encryption (FHE) enables secure computation over encrypted data, offering a breakthrough in privacy-preserving computing. Despite its promise, the practical deployment of FHE has been hindered by significant computational overhead, especially in general-purpose bootstrapping schemes. In this work, we build upon the recent advancements of~\cite{LY23} to introduce a variant of functional/programmable bootstrapping. By carefully sorting the steps of the blind rotation, we reduce the overall number of external products without compromising correctness. To further enhance efficiency, we propose a novel modulus-switching technique that increases the likelihood of satisfying pruning conditions, reducing computational overhead.
Extensive benchmarks demonstrate that our method achieves a speedup ranging from 1.75x to 8.28x compared to traditional bootstrapping and from 1.26x to 2.14x compared to the bootstrapping techniques of~\cite{LY23}.
Moreover, we show that this technique is better adapted to the IND-CPA$^D$ security model, as it reduces the performance downgrade that this model implies.
2025
ASIACRYPT
IVC in the Open-and-sign Random Oracle Model
Abstract
Incrementally verifiable computation (IVC) is a powerful cryptographic primitive, particularly suited for proving long-running machine computations.
Previous work shows that IVC can be constructed by recursively composing SNARKs.
Unfortunately, theoretical challenges limit the provable security of known IVC constructions.
Recursive composition may quickly lead to a blowup in extraction time and may require arithmetic circuits to enforce constraints about random oracle calls.
Furthermore, composition presents a practical challenge: proofs are often expressed in a form that is not friendly to the arithmetic circuits that produce them.
To mitigate the theoretical challenges, we present the Open-and-Sign Random Oracle Model (osROM) as an extension of the signed random oracle of Chiesa and Tromer (ICS '10).
This model, while strictly harder to instantiate than the Random Oracle Model, allows the design of protocols that can efficiently verify calls to the oracle and support straight-line extractors.
As a result, IVC constructions in the osROM can be shown to have provable security for polynomial depths of computation.
Under our new model, we construct a framework to build secure IVC schemes from simple non-interactive reductions of knowledge.
Our construction natively supports cycles of elliptic curves in the style of Ben-Sasson \textit{et al}. (CRYPTO '14), thus answering the practical challenge outlined above.
Finally, we analyze the HyperNova (CRYPTO '24) IVC scheme in the osROM and show that it is secure over a two-cycle of elliptic curves, for polynomial depths of computation.
2025
ASIACRYPT
Constraint-Friendly Map-to-Elliptic-Curve-Group Relations and Their Applications
Abstract
Hashing to elliptic curve groups is a fundamental primitive underlying numerous cryptographic applications, including multiset hashing and BLS signatures. With the recent rise of zero-knowledge applications, these primitives are increasingly used in constraint programming settings. For example, multiset hashing enables memory consistency checks in zkVMs, while BLS signatures are widely used in zkPoS protocols. In such cases, it becomes critical for hash-to-elliptic-curve-group constructions to be constraint-friendly. However, existing constructions rely on cryptographic hash functions that are expensive to represent in arithmetic constraint systems, resulting in high proving costs in these applications.
We propose a constraint-efficient alternative: a map-to-elliptic-curve-group relation that bypasses the need for cryptographic hash functions and can serve as a drop-in replacement for hash-to-curve constructions in practical settings, including the aforementioned applications. Our relation naturally supports witness-based instantiations within constraint programming frameworks, enabling efficient integration into zero-knowledge circuits. We formally analyze the security of our approach in the elliptic curve generic group model (EC-GGM).
Our implementation in Noir/Barretenberg demonstrates the efficiency of our construction in constraint programming: it requires over 60x fewer constraints than the best hash-to-elliptic-curve-group alternatives, and enables 50-100x faster proving times at scale.
2025
ASIACRYPT
On Split-State Quantum Tamper Detection
Abstract
Tamper-detection codes (TDCs) are fundamental objects at the intersection of cryptography and coding theory. A TDC encodes messages in such a manner that tampering with the codeword causes the decoder to either output the original message or reject. In this work, we study quantum analogs of one of the most well-studied adversarial tampering models: the so-called $t$-split-state tampering model, where the codeword is divided into $t$ shares, and each share is tampered with ``locally''.
It is impossible to achieve tamper detection in the split-state model using classical codewords. Nevertheless, we demonstrate that the situation changes significantly if the message can be encoded into a multipartite quantum state, entangled across the $t$ shares. Concretely, we construct a family of quantum TDCs on any $t\geq 3$ shares, which can detect arbitrary split-state tampering so long as the adversaries are unentangled, or even limited to a finite amount of pre-shared entanglement. Previously, this was only known in the limit of asymptotically large $t$.
As our flagship application, we show how to augment threshold secret sharing schemes with similar tamper-detecting guarantees. We complement our results by establishing connections between quantum TDCs and quantum encryption schemes.
2025
ASIACRYPT
On the Provable Dual Attack for LWE by Modulus Switching
Abstract
As a theoretical cornerstone of post-quantum cryptography, the Learning With Errors (LWE) problem serves as the security foundation for standardized algorithms such as Kyber and Dilithium. Recently, a framework for provable dual attacks on LWE was proposed by Pouly et al. at Eurocrypt 2024, addressing the limitations in effectiveness caused by existing methods' reliance on heuristic assumptions in LWE dual attacks. Their paper also poses an open problem: how to formally integrate modulus switching into this framework to reduce attack costs. The main purpose of this paper is to give a solution to this open problem by presenting an improved provable dual attack method that incorporates modulus switching and Chinese Remainder Theorem (CRT) techniques. First, we design a modulus switching mechanism that eliminates practical errors via the Poisson summation formula. By embedding the inherent noise from modulus switching into a rational lattice framework, our approach effectively prevents the risk of attack failure caused by the merging of such errors with LWE noise. Theoretical guarantees (Theorems \ref{main_theorem} and \ref{main_theorem_entire}) rigorously quantify the parameter ranges for successful attacks. Second, we introduce a CRT-based secret recovery method that aggregates partial secrets from independent sub-attacks. By leveraging the Chinese Remainder Theorem to reconstruct full secrets from congruence relations, our method adapts to arbitrary secret distributions. Furthermore, by using a tighter variant of Banaszczyk's measure inequality, we obtain a precise parameter range for the dual attack's efficacy through rigorous mathematical proof, and achieve the same complementary gap with the contradictory regime (proposed by Ducas et al.) as in Pouly et al.'s work. Experiments show a $15$-$29$ bit improvement in attack estimates compared to the original framework.
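For reference, the standard modulus-switching map that such attacks build on scales and rounds each coordinate (the error bookkeeping below is the textbook version, not the paper's refined rational-lattice treatment):
\[
a \in \mathbb{Z}_q \;\longmapsto\; a' := \left\lfloor \tfrac{p}{q}\, a \right\rceil \in \mathbb{Z}_p,
\qquad
a' = \tfrac{p}{q}\, a + e_{\mathrm{round}}, \quad |e_{\mathrm{round}}| \le \tfrac{1}{2},
\]
so the rounding error is carried alongside the scaled LWE noise, and the attack must keep the combined error within the regime where the dual distinguisher still succeeds.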
2025
ASIACRYPT
Beyond-Birthday-Bound Security with HCTR2: Cascaded Construction and Tweak-based Key Derivation
Abstract
The design of block cipher (BC) modes realizing a variable-input-length strong tweakable pseudorandom permutation (VIL-STPRP), also known as accordion modes, is a rapidly growing research field driven by NIST's standardization project, which considers \textsf{AES} as a primitive. Widely used VIL-STPRP modes, such as \textsf{HCTR2}, have birthday-bound security and provide only 64-bit security with \textsf{AES}. To provide higher security, NIST is considering two directions: to develop new modes with beyond-birthday-bound (BBB) security and to use \textsf{Rijndael-256-256} with \textsf{HCTR2}. This paper pursues the first direction while maintaining compatibility with \textsf{HCTR2}. In particular, we provide two solutions to achieve BBB security under two different approaches: (i) general cases without any conditions on the tweak, and (ii) under the condition that the same tweak is not repeated too often, as adopted in \textit{bbb-ddd-AES}, recently presented at Eurocrypt 2025. For the first approach, we propose a new mode, \textsf{CHCTR}, that iterates \textsf{HCTR2} with two independent keys, which achieves $2n/3$-bit security in the multi-user (mu) setting and satisfies NIST's requirements. For the second approach, we prove mu security of \textsf{HCTR2}, which allows us to apply the tweak-based key derivation (\textsf{TwKD}) to \textsf{HCTR2} in a provable manner. When the number of BC calls processed by a single tweak is upper-bounded by $2^{n/3}$, \textsf{HCTR2-TwKD} achieves $2n/3$-bit mu security. By benchmarking optimized software implementations, we show that \textsf{CHCTR} with \textsf{AES-256} outperforms \textsf{HCTR2} with \textsf{Rijndael-256-256} on all twelve processor models examined. Similarly, \textsf{HCTR2-TwKD} outperforms \textit{bbb-ddd-AES} in general cases, and it is even comparable to \textit{bbb-ddd-AES} rigorously optimized for tweak-repeating use cases using precomputation.
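Read literally, the cascade for the first approach is (our rendering; per-layer tweak handling is an assumption on our part, not stated in the abstract):
\[
\textsf{CHCTR}_{K_1,K_2}(T, M) \;=\; \textsf{HCTR2}_{K_2}\big(T,\, \textsf{HCTR2}_{K_1}(T, M)\big),
\]
with independent keys $K_1$ and $K_2$.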
2025
ASIACRYPT
Vectorial Fast Correlation Attacks
Abstract
In this paper, we develop a new framework for vectorial fast correlation attacks, which exploits vector-wise correlations in a novel approach different from Golić's earlier attack and gives complete theoretical predictions of the attack complexities. First, the concept of a correlation profile is introduced to characterize both the correlation of a linear approximation and the number of approximations having this correlation, which is not captured by the current notions of capacity or the Squared Euclidean Imbalance (SEI). It is shown how to construct the attack vector by carefully selecting the component-wise linear approximations to make maximal use of the inherent correlations. Second, we show how to transform and deliver the secret key information in the constructed vector by sequentially deriving linear subspaces from the original vector when the correlation profile is favorable. We further devise an efficient decoding algorithm to restore the partial secret key information retained in the last linear subspace, which subsequently allows for the recovery of the full secret information. Last, we present improved state recovery attacks on the ISO/IEC 29167-13 standard Grain-128a and the eSTREAM finalists Grain v1 and Sosemanuk with the new method. We resolve the open problem from Crypto 2018 of detecting the output masks for Grain-like ciphers by means other than MILP, and propose a new algorithm based on graph theory to dissect complicated Boolean functions with many variables and compute their distributions efficiently. For Grain-128a, given around $2^{106.3}$ bits of keystream, the time complexity is $2^{107.7}$, while for Grain v1, given $2^{67.0}$ bits of keystream, the attack has a time complexity of $2^{69.6}$. These attacks are around $2^{12}$ times better than the best published results from Crypto 2018. For Sosemanuk, we propose a flexible assign-and-solve strategy to mount the first attack faster than exhaustive search over the 128-bit secret key.
2025
ASIACRYPT
Lattice-based Multi-message Multi-recipient KEM/PKE with Malicious Security
Abstract
The efficiency of Public Key Encryption (PKE) and Key Encapsulation Mechanism (KEM), and in particular their large ciphertext size, is a bottleneck in real-world systems. This worsens in post-quantum secure schemes (e.g., lattice-based ones), whose ciphertexts are an order of magnitude larger than prior ones.
The work of Kurosawa (PKC '02) introduced multi-message multi-recipient PKE (mmPKE) to reduce the amortized ciphertext size when sending messages to more than one recipient. This notion naturally extends to multi-message multi-recipient KEM (mmKEM).
In this work, we first show concrete attacks on existing lattice-based mmPKE schemes: using maliciously generated public keys, these attacks fully break semantic security and key privacy, and are inherently undetectable.
We then introduce the first lattice-based mmKEM scheme (and thereby mmPKE) that maintains full privacy even in the presence of maliciously generated public keys. Concretely, the ciphertext size of our mmKEM for 100 recipients is more than 10x smaller than naively using Crystals-Kyber. We additionally show a similar efficiency gain when applied to batched random oblivious transfer and group oblivious message retrieval.
Our scheme is proven secure under a new Module-LWE variant assumption, Oracle Module-LWE. We reduce standard MLWE to this new assumption for some parameter regimes, which also gives intuition on why this assumption holds for the parameters we are interested in (along with additional cryptanalysis).
Furthermore, we show an asymptotically efficient compiler that removes the assumption, made in prior works, that recipients know their position in the list of intended recipients for every ciphertext.
2025
TCC
A Hidden-Bits Approach to Statistical ZAPs from LWE
Abstract
We give a new approach for constructing statistical ZAP arguments (a two-message public-coin statistically witness indistinguishable argument) from quasi-polynomial hardness of the learning with errors (LWE) assumption with a polynomial modulus-to-noise ratio. Previously, all ZAP arguments from lattice-based assumptions relied on correlation-intractable hash functions. In this work, we present the first construction of a ZAP from LWE via the classic hidden-bits paradigm. Our construction matches previous lattice-based schemes by being public-coin and satisfying statistical witness indistinguishability. Moreover, our construction is the first lattice-based ZAP that is fully black-box in the use of cryptography. Previous lattice-based ZAPs based on correlation-intractable hash functions all made non-black-box use of cryptography.