CryptoDB
Papers from TCC 2025
2025
TCC
(Multi-Input) FE for Randomized Functionalities, Revisited
Abstract
Randomized functional encryption (rFE) generalizes functional encryption (FE) by incorporating randomized functionalities. Randomized multi-input functional encryption (rMIFE) extends rFE to accommodate multi-input randomized functionalities.
In this paper, we reassess the framework of rFE/rMIFE enhancing our understanding of this primitive and laying the groundwork for more secure and flexible constructions in this field. Specifically, we make three key contributions:
- Stronger IND definition: We show that the prevailing indistinguishability-based security definition protects *only* against malicious *decryptors* and leaves systems *vulnerable* to malicious *encryptors*, even though security against the latter has been a critical requirement for rFE/rMIFE since their inception. We then propose a refined IND notion that simultaneously handles both threats.
- Separating counterexample: Illustrating this definitional gap, we meticulously craft an rFE scheme -- using standard tools (FE, PRF, PKE, simulation‑sound NIZK) -- that satisfies the old definition yet is blatantly insecure in practice (and where this insecurity would be precluded by our enhanced definition).
- Adaptive, unbounded‑message rMIFE: The sole, viable prior rMIFE construction by Goldwasser et al. [EUROCRYPT 2014] permits only a fixed message bound per encryption slot and offers merely selective security. Leveraging sub‑exponentially secure indistinguishability obfuscation and techniques of Goyal et al. [ASIACRYPT 2016] built for deterministic MIFE, we give the first rMIFE scheme that supports an unbounded number of messages per slot and attains full adaptive security.
2025
TCC
A Computational Monogamy of Entanglement Theorem with Applications to Non-Interactive Quantum Key Distribution
Abstract
Quantum key distribution (QKD) enables Alice and Bob to exchange a secret key over a public, untrusted quantum channel.
Compared to classical key exchange, QKD achieves \emph{everlasting security}: after the protocol execution the key is secure against adversaries that can do unbounded computations.
On the flip side, while classical key exchange can be achieved non-interactively (with two simultaneous messages between Alice and Bob), no non-interactive protocol is known that provides everlasting security, even using quantum information.
In this work, we make progress on this problem. Our main technical contribution is a \emph{computational} variant of the celebrated \emph{monogamy of entanglement} game, where the secret is only computationally hidden from the players, rather than information-theoretically. In this setting, we prove a negligible bound on the maximal winning probability over all strategies.
As a direct application, we obtain a non-interactive (simultaneous message) QKD protocol from any post-quantum classical non-interactive key exchange, which is everlastingly secure \emph{assuming Alice and Bob agree on the same key}.
The protocol only uses EPR pairs and standard and Hadamard basis measurements, making it suitable for near-term quantum hardware.
We also propose how to convert this protocol into a two-round protocol that satisfies the standard notion of everlasting security.
Finally, we prove a \emph{no-go theorem} which establishes that (in contrast to the case of ordinary multi-round QKD) entanglement is necessary for non-interactive QKD, i.e., the messages sent by Alice and Bob cannot both be unentangled with their respective quantum memories if the protocol is to be everlastingly secure.
2025
TCC
A Fiat-Shamir Transformation From Duplex Sponges
Abstract
We analyze a variant of the Fiat--Shamir transformation based on an ideal permutation. The transformation relies on the popular duplex sponge paradigm, and minimizes the number of calls to the permutation (given the information to absorb/squeeze). This closely models deployed variants of the Fiat--Shamir transformation, and our analysis provides concrete security bounds to guide security parameters in practice. Our results contribute to the ongoing community-wide effort to achieve rigorous, and ultimately formally verified, security proofs for deployed cryptographic proof systems. Along the way, we find that indifferentiability (a property proven for many modes of operation, including the duplex sponge) is ill-suited for establishing the knowledge soundness and zero knowledge of a non-interactive argument.
We additionally contribute SpongeFiSh, an open-source Rust library implementing our Fiat--Shamir transformation. The library is interoperable across multiple cryptographic frameworks, and works with any choice of permutation. The library comes equipped with Keccak and Poseidon permutations, as well as several ``codecs'' for re-mapping prover and verifier messages to the permutation's domain.
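For intuition, here is a minimal Python model of the duplex-sponge Fiat--Shamir pattern the abstract describes: absorb each prover message into the state, squeeze each verifier challenge out of it. This is a sketch only; the permutation is a toy SHA-256-based stand-in for Keccak or Poseidon, the rate/capacity sizes are arbitrary, and none of the names correspond to the actual SpongeFiSh API.

```python
import hashlib

RATE, CAPACITY = 16, 16            # toy sizes in bytes; real sponges are larger
STATE = RATE + CAPACITY

def permute(state: bytes) -> bytes:
    # Toy stand-in for an ideal permutation such as Keccak-f or Poseidon.
    # (SHA-256 is not a permutation; this is for illustration only.)
    return hashlib.sha256(b"perm" + state).digest()[:STATE]

class DuplexSponge:
    def __init__(self, domain_sep: bytes):
        self.state = permute(domain_sep.ljust(STATE, b"\x00")[:STATE])

    def absorb(self, msg: bytes) -> None:
        # XOR each RATE-sized chunk into the rate part, then permute once:
        # one permutation call per absorbed block.
        for i in range(0, len(msg), RATE):
            chunk = msg[i:i + RATE].ljust(RATE, b"\x00")
            rate = bytes(a ^ b for a, b in zip(self.state[:RATE], chunk))
            self.state = permute(rate + self.state[RATE:])

    def squeeze(self, n: int) -> bytes:
        # Read challenge bytes from the rate part, permuting between blocks.
        out = b""
        while len(out) < n:
            out += self.state[:RATE]
            self.state = permute(self.state)
        return out[:n]

# Fiat-Shamir: each verifier challenge is squeezed after absorbing the
# prover's messages, so it is bound to the entire transcript so far.
fs = DuplexSponge(b"example-protocol-v1")
fs.absorb(b"prover commitment")
challenge1 = fs.squeeze(16)
fs.absorb(b"prover response to " + challenge1)
challenge2 = fs.squeeze(16)
print(challenge1.hex(), challenge2.hex())
```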
2025
TCC
A Hidden-Bits Approach to Statistical ZAPs from LWE
Abstract
We give a new approach for constructing statistical ZAP arguments (a two-message public-coin statistically witness indistinguishable argument) from quasi-polynomial hardness of the learning with errors (LWE) assumption with a polynomial modulus-to-noise ratio. Previously, all ZAP arguments from lattice-based assumptions relied on correlation-intractable hash functions. In this work, we present the first construction of a ZAP from LWE via the classic hidden-bits paradigm. Our construction matches previous lattice-based schemes by being public-coin and satisfying statistical witness indistinguishability. Moreover, our construction is the first lattice-based ZAP that is fully black-box in the use of cryptography. Previous lattice-based ZAPs based on correlation-intractable hash functions all made non-black-box use of cryptography.
2025
TCC
A Meta-Complexity Theoretic Approach to Indistinguishability Obfuscation and Witness Pseudo-Canonicalization
Abstract
Indistinguishability Obfuscation (IO) is a central hub of modern cryptography, with far-reaching applications. Over the last decade, a handful of constructions have emerged, ranging from heuristic ones to -- more recently, following the breakthrough of Jain, Lin and Sahai [STOC'21] -- provably secure constructions based on well-founded assumptions. The provably secure constructions, however, rely on various different assumptions, some of which can be broken in quantum polynomial time.
In this work we propose meta-complexity-theoretic characterizations of IO. First, we consider a notion of \emph{witness pseudo-canonicalization (WPC)}, which, roughly speaking, requires, given a witness $w$ for some $NP$ statement $x$, efficiently outputting a pseudo-canonical witness $w'$ for the same statement $x$ (i.e., one that is \emph{computationally independent} of $w$). We remark that WPC for the Minimum Circuit Size Problem (MCSP) is essentially equivalent to the notion of (exponentially-efficient) IO, which in turn can be bootstrapped into IO (assuming LWE and subexponential security).
We next provide a further study of the notion of WPC, noting that this notion captures not only IO but also non-interactive witness indistinguishability (NIWI); at the same time, (assuming OWFs) it is impossible to achieve it for any witness relation that is $NP$-complete w.r.t. (honest) Levin reductions.
Finally, we provide a purely meta-complexity-theoretic (worst-case) characterization of WPC w.r.t. some witness relation $R$ through a problem referred to as the \emph{Decisional Computational Shallowness} ($DCS$) problem. Intuitively, the $DCS$ problem w.r.t. a witness relation $R$ and an instance $\varphi$ requires deciding, given $x, y, z \in R(\varphi)$, which of $CD^t(z|x)$ and $CD^t(z|y)$ is smaller, where $CD^t(z|x) = K^t(z|x) - K(z|x)$ is the (Kolmogorov) \emph{Computational Depth} and $t(|z|)$ is some polynomial.
We show that $DCS$ w.r.t. $R$ is essentially equivalent to a notion of ``weak'' WPC (i.e., with weak indistinguishability), which leads to our new complexity-theoretic characterization of (weak) IO.
2025
TCC
Adaptively Secure Streaming Functional Encryption
Abstract
This paper presents the *first adaptively secure* streaming functional encryption (sFE) scheme for P/Poly. sFE extends traditional FE to vast and/or dynamically evolving data sets, enabling incremental encryption of streaming inputs and iterative partial decryption over encrypted prefixes.
The proposed sFE scheme is built from indistinguishability obfuscation for P/Poly and injective PRGs. We achieve this result via two core technical contributions which may be of independent interest. First, we introduce a novel "gluing" mechanism to achieve adaptive security by intertwining two schemes, each possessing one aspect of adaptive security. Second, we enhance the powerful Koppula-Lewko-Waters [STOC 2015] framework with a "sliding window" technique, enabling authenticated iterative computation with fresh inputs at each iteration.
Prior work by Guan, Korb, and Sahai [CRYPTO 2023] constructed sFE but only under a (semi-adaptive) function-selective security model, requiring all functional keys to be queried before observing any ciphertext. This severe limitation precludes practical scenarios and leaves adaptive security as a crucial *open challenge* — explicitly highlighted by their work — which we address in this paper.
2025
TCC
An Unstoppable Ideal Functionality for Signatures and a Modular Analysis of the Dolev-Strong Broadcast
Abstract
Many foundational results in the literature of consensus follow the Dolev-Yao model (FOCS '81), which treats digital signatures as ideal objects with perfect correctness and unforgeability. However, no work has yet formalized an ideal signature scheme that is both suitable for this methodology and possible to instantiate, or a composition theorem that ensures security when instantiating it cryptographically.
The Universal Composition (UC) framework would ensure composition if we could specify an ideal functionality for signatures and prove it UC-realizable. Unfortunately, all signature functionalities heretofore proposed are problematic when used to construct higher-level protocols: either the functionality internally computes a computationally secure signature, and therefore higher-level protocols must rely upon computational assumptions, or else the functionality introduces a new attack surface that does not exist when the functionality is realized. As a consequence, no consensus protocol has ever been analyzed in a modular way using existing ideal signature functionalities.
We propose a new unstoppable ideal functionality for signatures that is UC-realized exactly by the set of standard EUF-CMA signature schemes that are consistent and work in linear time in the length of the message. No adversary can prevent honest parties from obtaining perfectly ideal signature services from our functionality. We showcase its usefulness by presenting the first modular analysis of the Dolev-Strong broadcast protocol (SICOMP '83). Our result can be interpreted as a step toward a sound realization of the Dolev-Yao methodology. We also generalize our result to the threshold setting.
2025
TCC
Clifford Strategies in Interactive Protocols are Classically Simulatable
Abstract
MIP* is the class of languages decidable by an efficient classical verifier interacting with multiple quantum provers that share entangled qubits but cannot communicate. Notably, MIP* was proved to equal RE, the class of all recursively enumerable languages.
We introduce the complexity class Clifford–MIP*, which restricts quantum provers to Clifford operations and classical post-processing of measurement results, while still allowing shared entangled qubits in any quantum state. We show that any strategy in this model can be simulated by classical provers with shared random bits, and therefore admits a local hidden-variable description. Consequently, Clifford–MIP*=MIP, a vastly smaller complexity class compared to RE.
Moreover, we resolve an open question posed by Kalai et al. (STOC 2023), by showing that quantum advantage in any single-round non-local game requires at least two provers operating outside the Clifford–MIP* computational model. This rules out a proposed approach for significantly improving the efficiency of quantum advantage tests that are based on compiling non-local games into single-prover interactive protocols.
2025
TCC
Constrained Verifiable Random Functions Without Obfuscation and Friends
Abstract
Constrained verifiable random functions (CVRFs) are PRFs that unify the properties of verifiable and constrained PRFs. Since they were introduced concurrently by Fuchsbauer and Chandran-Raghuraman-Vinayagamurthy in 2014, it has been an open problem to construct CVRFs without using heavy machinery such as multilinear maps, obfuscation or functional encryption.
We solve this problem by constructing a prefix-constrained verifiable PRF that does not rely on the aforementioned assumptions. Essentially, our construction is a verifiable version of the Goldreich-Goldwasser-Micali PRF. To achieve verifiability we leverage degree-2 algebraic PRGs and bilinear groups. In short, proofs consist of the intermediate values of the Goldreich-Goldwasser-Micali PRF placed in the exponents of group elements. These outputs can be verified using pairings since the underlying PRG is of degree 2.
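For intuition, the following Python sketch shows the plain Goldreich-Goldwasser-Micali tree that the construction makes verifiable. It is illustrative only: the length-doubling PRG is played here by SHA-256, whereas the paper uses a degree-2 algebraic PRG and publishes the intermediate seeds in the exponents of bilinear-group elements, which this sketch does not model.

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    # Toy length-doubling PRG G(s) = (G_0(s), G_1(s)); SHA-256 is a stand-in
    # for the degree-2 algebraic PRG used in the actual construction.
    return (hashlib.sha256(seed + b"\x00").digest(),
            hashlib.sha256(seed + b"\x01").digest())

def ggm_prf(key: bytes, x: str) -> bytes:
    # Walk the GGM tree: at depth i, keep the half of G(s) selected by bit x_i.
    # A CVRF proof would reveal these intermediate seeds only inside group
    # exponents, so each degree-2 PRG step can be checked with a pairing.
    s = key
    for bit in x:                      # x is a bit string such as "0110"
        s = prg(s)[int(bit)]
    return s

print(ggm_prf(b"k" * 32, "0110").hex())
```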
We prove the selective security of our construction under the Decisional Square Diffie-Hellman (DSDH) assumption and a new assumption, which we dub recursive Decisional Diffie-Hellman (recursive DDH).
We prove the soundness of recursive DDH in the generic group model assuming the hardness of the Multivariate Quadratic (MQ) problem and a new variant thereof, which we call MQ+.
Last, in terms of applications, we observe that our CVRF is also an exponent (C)VRF in the plain model. Exponent VRFs were recently introduced by Boneh et al. (Eurocrypt'25) with various applications to threshold cryptography in mind. In addition to that, we give further applications for prefix-CVRFs in the blockchain setting, namely, stake-pooling and compressible randomness beacons.
2025
TCC
Cryptography with Weak Privacy
Abstract
We initiate a systematic study of information-theoretic cryptography with weak privacy, requiring that no secret can be ruled out by the adversary. For a parameter 0 < p <= 1, we say that a primitive has p-weak differential privacy (p-WDP) if for every possible view V and inputs x_1, x_2, the ratio between the probabilities of the adversary observing V on x_1 and on x_2 is at least p. This generalizes both perfect privacy, which is p-WDP for p = 1, and a previous "combinatorial" notion of weak privacy (WP), which is p-WDP for some p > 0. We obtain the following main results.
- Positive results. We present efficient WDP constructions for generalized secret sharing, decomposable randomized encodings (and the related notions of garbling schemes and PSM protocols), and secure two-party computation with correlated randomness. For secret sharing, we settle a question of Beimel and Franklin (TCC 2007), showing that every n-party access structure admits a WP scheme with per-party share size O(n). When all unauthorized sets have constant size, we get a p-WDP scheme with constant share size and p >= 1 / poly(n).
- Negative result. For decomposable randomized encodings, we show that a previous lower bound technique of Ball et al. (ITCS 2020) applies also to the WP notion. Together with our upper bound, this shows that the optimal WP garbling size of the worst-case f : {0, 1}^n -> {0, 1} is \tilde{\Theta}(n^2).
- Application. Under the standard LPN assumption, we show that any p-WDP secret sharing scheme with inverse-polynomial p implies a computationally secure secret sharing scheme for a related access structure. Together with our positive results for WDP secret sharing, this implies a super-polynomial improvement in the share size of computational secret sharing for a natural class of access structures.
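To make the p-WDP ratio concrete, here is a small self-contained Python check on a toy example of our own (not a construction from the paper): a one-bit secret s is shared as (r, s XOR r) with the mask r biased toward 0, and we compute the worst-case view-probability ratio p for the holder of the second share.

```python
# Toy p-WDP check: share a bit s as (r, s ^ r) with Pr[r = 0] = q.
# The second party's view is v = s ^ r; p-WDP asks for the minimum over
# views v and secret pairs (s0, s1) of Pr[v | s0] / Pr[v | s1].

def view_prob(secret: int, view: int, q: float) -> float:
    # view == secret exactly when the mask r is 0, which happens w.p. q.
    return q if view == secret else 1.0 - q

def wdp_ratio(q: float) -> float:
    return min(view_prob(s0, v, q) / view_prob(s1, v, q)
               for v in (0, 1) for s0 in (0, 1) for s1 in (0, 1))

for q in (0.5, 0.6, 0.9):
    print(f"q = {q}: p-WDP holds with p = {wdp_ratio(q):.3f}")
# q = 0.5 gives p = 1 (perfect privacy); biasing the mask degrades p but
# never to 0, so no secret can be ruled out -- the weak-privacy requirement.
```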
2025
TCC
Deniable Secret Sharing
Abstract
We introduce deniable secret sharing (DSS), which, analogously to deniable encryption, enables shareholders to produce fake shares that are consistent with a target ``fake message'', regardless of the original secret. In contrast to deniable encryption, in a DSS scheme an adversary sees multiple shares, some of which might be real, and some fake. This makes DSS a more difficult task, especially in situations where the fake shares need to be generated by individual shareholders, with limited or no coordination with other shareholders.
We define several desirable properties for DSS, and show both positive and negative results for each. The strongest property is fake hiding, which is a natural analogy of deniability for encryption: given a complete set of shares, an adversary cannot determine whether any shares are fake. We show a construction based on Shamir secret sharing that achieves fake hiding as long as (1) the fakers are qualified (number $t$ or more), and (2) the set of real shares which the adversary sees is unqualified. Next we show a construction based on indistinguishability obfuscation that relaxes condition (1) and achieves fake hiding even when the fakers are unqualified (as long as they comprise more than half of the shareholders).
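The interpolation idea behind the Shamir-based construction can be sketched in a few lines of Python. This is our own illustration of why the two conditions suffice, not the paper's exact scheme: because the adversary's real shares form an unqualified set, the t coordinating fakers can fit a fresh degree-(t-1) polynomial through those real shares and the fake secret, and output its values as their shares.

```python
# Toy sketch (not the paper's exact scheme): Shamir sharing over GF(P) with
# threshold t = 3 of n = 5. The adversary holds the real shares of parties
# {4, 5} (unqualified); the qualified fakers {1, 2, 3} forge consistent shares.
import random

P = 2**61 - 1  # prime field modulus

def interpolate_at(points, x_eval):
    # Lagrange interpolation over GF(P), evaluated at x_eval.
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * ((x_eval - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Deal real shares: random degree-2 polynomial f with f(0) = real secret.
coeffs = [1234] + [random.randrange(P) for _ in range(2)]
f = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
adversary_view = [(4, f(4)), (5, f(5))]       # unqualified set of real shares

# Fakers 1..3 interpolate a fresh degree-2 polynomial through the fake
# secret at x = 0 and the adversary's real shares, then evaluate it.
anchor = [(0, 9999)] + adversary_view
fake_shares = [(x, interpolate_at(anchor, x)) for x in (1, 2, 3)]

# Every qualified subset of {fake 1..3, real 4, real 5} reconstructs 9999,
# so the full set of shares is consistent with the fake message.
combined = fake_shares + adversary_view
print(interpolate_at(combined[:3], 0), interpolate_at(combined[2:], 0))
```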
We also extend the first construction to provide the weaker property of faker anonymity for all thresholds. (Faker anonymity requires that given some real shares and some fake shares, an adversary should not be able to tell which are fake, even if it can tell that some fake shares are present.) All of these constructions require the fakers to coordinate in order to produce fake shares.
On the negative side, we first show that fake hiding is unachievable when the fakers are a minority, even if they coordinate. Further, if the fakers do not coordinate, then even faker anonymity is unachievable as soon as $t < n$ (namely the reconstruction threshold is smaller than the number of parties), if faking is not unanimous. (If faking is unanimous, we show a construction based on indistinguishability obfuscation.)
2025
TCC
Differentially Private Compression and the Sensitivity of LZ77
Abstract
We initiate the study of differentially private data-compression schemes, motivated by the insecurity of the popular "Compress-Then-Encrypt" framework. Data compression is a useful tool which exploits redundancy in data to reduce storage/bandwidth when files are stored or transmitted. However, if the contents of a file are confidential then the \emph{length} of a compressed file might leak confidential information about the content of the file itself. Encrypting a compressed file does not eliminate this leakage, as data encryption schemes are only designed to hide the \emph{content} of a confidential message, not its \emph{length}. In our proposed \emph{Differentially Private Compress-Then-Encrypt} framework, we add a random positive amount of padding to the compressed file to ensure that any leakage satisfies the rigorous privacy guarantee of $(\epsilon,\delta)$-differential privacy. The amount of padding that needs to be added depends on the sensitivity of the compression scheme to small changes in the input, i.e., to what degree changing a single character of the input message can impact the length of the compressed file.
While some popular compression schemes are highly sensitive to small changes in the input, we argue that effective data compression schemes do not necessarily have high sensitivity. Our primary technical contribution is analyzing the fine-grained sensitivity of the LZ77 compression scheme (IEEE Trans. Inf. Theory 1977), which is one of the most common compression schemes used in practice. We show that the global sensitivity of the LZ77 compression scheme has the upper bound $O(W^{2/3}\log n)$ where $W\leq n$ denotes the size of the sliding window. When $W=n$, we show the lower bound $\Omega(n^{2/3}\log^{1/3}n)$ for the global sensitivity of the LZ77 compression scheme, which is tight up to a sublogarithmic factor.
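A minimal Python sketch of the compress-then-pad step, under our own assumptions: the one-sided truncated-geometric padding below is one textbook way to hide a sensitivity-bounded length within $(\epsilon,\delta)$-DP, and the paper's concrete mechanism and parameters may well differ.

```python
import math
import random
import zlib

def dp_compress(message: bytes, eps: float, sensitivity: int, pad_cap: int) -> bytes:
    # Compress, then append a random non-negative amount of padding so that
    # the framed length is (eps, delta)-DP, assuming neighboring messages
    # change the compressed length by at most `sensitivity` bytes. The framed
    # result would then be encrypted with any standard scheme.
    body = zlib.compress(message)
    # One-sided geometric noise: Pr[pad = k] ~ (1 - p)^k with
    # (1 - p) = exp(-eps / sensitivity); truncation at pad_cap costs delta.
    p = 1.0 - math.exp(-eps / sensitivity)
    pad = 0
    while pad < pad_cap and random.random() > p:
        pad += 1
    # Length-prefix the body so padding can be stripped after decryption.
    return len(body).to_bytes(4, "big") + body + b"\x00" * pad

msg = b"attack at dawn, attack at dawn, attack at dawn"
print(len(zlib.compress(msg)), len(dp_compress(msg, eps=1.0, sensitivity=8, pad_cap=64)))
```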
2025
TCC
Differentially Private Learning Beyond the Classical Dimensionality Regime
Abstract
We initiate the study of differentially private learning in the \emph{proportional dimensionality regime}, in which the number of data samples $n$ and problem dimension $d$ approach infinity at rates proportional to one another, meaning that $d / n \to \delta$ as $n \to \infty$ for an arbitrary, given constant $\delta \in (0, \infty)$. This setting is significantly more challenging than that of all prior theoretical work in high-dimensional differentially private learning, which, despite the name, has assumed that $\delta = 0$ or is sufficiently small for problems of sample complexity $O(d)$, a regime typically considered ``low-dimensional'' or ``classical'' by modern standards in high-dimensional statistics.
We provide sharp theoretical estimates of the error of several well-studied differentially private algorithms for robust linear regression and logistic regression, including output perturbation, objective perturbation, and noisy stochastic gradient descent, in the proportional dimensionality regime. The $1 + o(1)$ factor precision of our error estimates enables a far more nuanced understanding of the price of privacy of these algorithms than that afforded by existing, coarser analyses, which are essentially vacuous in the regime we consider. Using our estimates, we discover a previously unobserved ``double descent''-like phenomenon in the training error of objective perturbation for robust linear regression. We also identify settings in which output perturbation outperforms objective perturbation on average, and vice versa.
To prove our main theorems, we introduce -- and strengthen, to handle perturbations required for privacy -- several probabilistic tools that have not previously been used to analyze differentially private learning algorithms, such as a modern Gaussian comparison inequality and recent universality laws with origins in statistical physics.
2025
TCC
Dimensional e$\mathsf{ROS}$ion: Improving the $\mathsf{ROS}$ Attack with Decomposition in Higher Bases
Abstract
We revisit the polynomial attack on the $\mathsf{ROS}$ problem modulo $p$ of Benhamouda et al. (J. Cryptology 2022). Our new algorithm achieves a polynomial-time solution in dimension $\ell \gtrsim 0.726 \cdot \log_2 p$, extending the range of dimensions for which a polynomial attack is known beyond the previous bound of $\ell > \log_2 p$.
We also combine our new algorithm with Wagner's attack to improve the general $\mathsf{ROS}$ attack complexity for a range of dimensions where a polynomial solution is still not known.
We implement our polynomial attack and break the one-more unforgeability of blind Schnorr signatures over 256-bit elliptic curves in a few seconds with 192 concurrent sessions.
2025
TCC
Efficient Garbled Pseudorandom Functions and Lookup Tables from Minimal Assumption
Abstract
Yao's garbled circuits have received huge attention in both theory and practice. While garbled circuits can be constructed from minimal assumptions (i.e., the existence of pseudorandom functions or one-way functions), the state-of-the-art constructions (e.g., Rosulek-Roy, Crypto 2021) are based on stronger assumptions. In particular, the ``Free-XOR'' technique (Kolesnikov-Schneider, ICALP 2008) is essential in these state-of-the-art constructions, and their security can only be proven in the random oracle model, or must rely on the ``circular-correlation robust hash'' assumption.
In this paper, we aim to develop new techniques to construct efficient garbling schemes using minimal assumptions. Instead of generically replacing the Free-XOR technique, we focus on garbling schemes for specific functionalities. We successfully eliminate the need for Free-XOR in several state-of-the-art schemes, namely one-hot garbling (Heath and Kolesnikov, CCS 2021) as well as garbled pseudorandom functions and garbled lookup tables (Heath, Kolesnikov and Ng, Eurocrypt 2024). Our schemes are based on minimal assumptions, i.e., standard pseudorandom functions (PRFs)---in particular, we remove the need for circular security.
Our scheme is almost as efficient as the best prior results, up to a small constant factor. Namely, for any lookup table $\{0,1\}^n \to \{0,1\}^m$, our scheme takes $n + (5n+9)m\lambda + 2^n \cdot m$ bits of communication, where $\lambda$ is the security parameter of the PRF.
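Plugging illustrative numbers into the stated communication formula (the choice $\lambda = 128$ and the table sizes below are ours):

```python
# Communication in bits for a garbled lookup table {0,1}^n -> {0,1}^m,
# per the formula n + (5n + 9) * m * lam + 2^n * m from the abstract.
def lut_comm_bits(n: int, m: int, lam: int = 128) -> int:
    return n + (5 * n + 9) * m * lam + 2**n * m

for n, m in [(8, 8), (12, 16), (16, 32)]:
    print(f"n = {n:2d}, m = {m:2d}: {lut_comm_bits(n, m) / 8 / 1024:9.1f} KiB")
# For small n the (5n + 9) * m * lam term dominates; the 2^n * m term
# takes over once the table itself becomes large.
```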
2025
TCC
Foundations of Single-Decryptor Encryption
Abstract
Single decryptor encryption (SDE) is public key encryption (PKE) where the decryption key is an unclonable quantum state. Coladangelo, Liu, Liu, and Zhandry (CRYPTO 2021) realized the first SDE assuming subexponentially secure indistinguishability obfuscation (iO) and one-way functions (OWFs), along with the polynomial hardness of the learning with errors (LWE) assumption. Since then, SDE has played a pivotal role in recent advances in quantum cryptography. However, despite its central importance in unclonable cryptography, many fundamental questions about SDE remain unanswered. For example, a line of works has proposed various security notions for SDE, but their relationships have hardly been discussed. Moreover, while many subsequent works have adopted the construction methodology of Coladangelo et al., none have explored its improvement, leaving the possibility of a more efficient approach to SDE.
In this work, we address these fundamental questions concerning SDE. Our contributions are threefold.
New security notion: We introduce a strengthened indistinguishability-based security notion for SDE, which we call CPA+ anti-piracy security. We show that CPA+ security unifies the existing security notions for SDE, as detailed in the third item.
New construction: We present an SDE scheme that satisfies CPA+ anti-piracy security, based solely on polynomially secure iO and OWFs. In addition to relying on weaker and more general assumptions, our SDE scheme offers a significant advantage over the scheme of Coladangelo et al., as both the construction and its security proof are much simpler.
Relationships among security notions: We demonstrate that CPA+ anti-piracy security implies all existing security notions for SDE, with the sole exception of identical challenge ciphertext security proposed by Georgiou and Zhandry (ePrint 2020). Although we do not establish a direct implication from CPA+ anti-piracy security to identical challenge ciphertext security, we provide a generic transformation from an SDE scheme satisfying the former to one achieving the latter in the quantum random oracle model. Additionally, we establish various relationships among different security notions for SDE. By combining these results with our SDE construction, we derive several new feasibility results.
2025
TCC
Four-round Statistical Non-malleable Zero-knowledge
Abstract
We present a 4-round statistical non-malleable zero-knowledge (NMZK) argument in the plain model under standard hardness assumptions. Our construction can be based on any collision-resistant hash function and injective one-way function, and it guarantees simulation extractability in the delayed-input one-many setting. Before this work, 4-round constructions were known for computational NMZK but not for statistical NMZK.
2025
TCC
Fully-Homomorphic Encryption from Lattice Isomorphism
Abstract
The lattice isomorphism problem (LIP) asks, given two lattices $\Lambda_0$ and $\Lambda_1$, to decide whether there exists an orthogonal linear map from $\Lambda_0$ to $\Lambda_1$. In this work, we show that the hardness of (a circular variant of) LIP implies the existence of a fully-homomorphic encryption scheme for all classical and quantum circuits. Prior to our work, LIP was only known to imply the existence of basic cryptographic primitives, such as public-key encryption or digital signatures.
2025
TCC
Generalized and Unified Equivalences between Hardness and Pseudoentropy
Abstract
Pseudoentropy characterizations provide a quantitatively precise demonstration of the close relationship between computational hardness and computational randomness. We prove a unified pseudoentropy characterization that generalizes and strengthens previous results for both uniform and non-uniform models of computation. Our characterization holds for a general family of entropy notions that encompasses the common notions of Shannon entropy and min-entropy as special cases. Moreover, we show that the characterizations for different entropy notions can be simultaneously achieved by a single, universal function that simultaneously witnesses computational hardness and computational randomness.
A key technical insight of our work is that the notion of weight-restricted calibration from the recent literature on algorithmic fairness, along with standard computational indistinguishability (known as multiaccuracy in the fairness literature), suffices for proving pseudoentropy characterizations for general entropy notions. This demonstrates the power of weight-restricted calibration to enhance the classic Complexity-Theoretic Regularity Lemma (Trevisan, Tulsiani, and Vadhan, 2009) and Leakage Simulation Lemma (Jetchev and Pietrzak, 2014), and allows us to achieve an exponential improvement in the complexity dependency on the alphabet size compared to the pseudoentropy characterizations by Casacuberta, Dwork, and Vadhan (2024), which are based on the much stronger notion of multicalibration. We show that the exponential dependency on the alphabet size is inevitable for multicalibration, as well as for the weaker notion of calibrated multiaccuracy.
2025
TCC
Honest Majority Constant-Round MPC with Linear Communication from One-Way Functions
Abstract
Secure multiparty computation (MPC) faces a fundamental efficiency trade-off between round complexity and communication complexity: without fully homomorphic encryption, protocols with constant round complexity (e.g., protocols based on garbled circuits) incur high communication cost, while communication-efficient approaches (e.g., protocols based on secret sharing) have round complexity linear in the depth of the circuit. In this work, we focus on reducing the communication complexity of constant-round MPC protocols in the honest majority setting. Existing results either rely on strong assumptions (e.g., random oracles, DDH, LPN) or incur high communication of $\Omega(|C|n^2\kappa)$ bits under one-way functions (OWFs). However, non-constant-round MPC protocols can achieve linear communication in the number of parties even with information-theoretic security.
We resolve this gap by presenting the first \emph{constant-round} honest majority MPC protocol with \emph{linear} communication complexity of $O(|C|n\kappa + n^2\kappa^2 + n^4\kappa)$ bits from OWFs alone. We introduce novel techniques for computing garbled circuits via party virtualization and efficient local computation of virtual parties, which optimize the existing protocols on multiparty garbling. These allow us to overcome the $O(n^2\kappa)$ bits-per-gate communication bottleneck of prior protocols, matching the scalability of the best non-constant-round protocols in the same setting.
2025
TCC
How to Verify that a Small Device is Quantum, Unconditionally
Abstract
A proof of quantumness (PoQ) allows a classical verifier to efficiently test if a quantum machine is performing a computation that is infeasible for any classical machine. In this work, we propose a new approach for constructing PoQ protocols where soundness holds unconditionally assuming a bound on the memory of the prover, but otherwise no restrictions on its runtime. In this model, we propose two protocols:
1. A simple protocol with a quadratic gap between the memory required by the honest parties and the memory bound of the adversary. The soundness of this protocol relies on Raz's (classical) memory lower bound for matrix inversion (Raz, FOCS 2016).
2. A protocol that achieves an exponential gap, building on techniques from the literature on the bounded storage model (Dodis et al., Eurocrypt 2023).
Both protocols are also efficiently verifiable. Despite having worse asymptotics, our first protocol is conceptually simple and relies only on arithmetic modulo 2, which can be implemented with one-qubit Hadamard and CNOT gates, plus a single one-qubit non-Clifford gate.
2025
TCC
Incompressible Encryption with Everlasting Security
Abstract
Recently, the concept of incompressible encryption has emerged as a powerful enhancement to key-leakage resilience. In an incompressible encryption scheme, an adversary who intercepts ciphertexts is forced to dedicate a significant amount of memory to store them in full if they wish to extract any information about the plaintext later when the secret key becomes available. Given two messages, the security game involves two adversaries: the first adversary receives an encryption of one of the messages and produces a compressed state. Then, the second adversary, given both the secret key and the compressed state, attempts to determine which message was encrypted.
Several positive results exist in incompressible cryptography. On the one hand, there are constructions based on minimal assumptions but with a poor rate (i.e., rate tends to 0). On the other hand, there are rate-1 constructions that achieve optimal efficiency but rely on strong cryptographic assumptions, such as obfuscation.
A stronger security notion, known as \emph{everlasting security}, has been proposed for incompressible encryption. In this formulation, the second adversary, who receives the compressed state and the secret key, is allowed to be computationally \emph{unbounded}. While this notion is conceptually appealing, no constructions of everlasting incompressible encryption are currently known, regardless of the underlying assumption or even in idealized models.
In this work, we give the first construction of everlasting incompressible encryption. In fact, we show that everlasting incompressible encryption is \emph{inherent} in any sufficiently secure public-key encryption scheme. Specifically, we prove that any public-key encryption scheme with subexponential security (when instantiated with an appropriate security parameter) already satisfies the definition of everlasting incompressible encryption with subexponential security. Furthermore, our scheme achieves rate-1, improving upon existing results even for the weaker notion of standard incompressible encryption.
Our results raise concerns about whether the current definition of incompressible encryption adequately captures the expected efficiency properties of such schemes, indicating that refinements may be needed. As one concrete step in this direction, we propose a storage-rate definition for ciphertexts, and show how to construct schemes with constant storage-rate.
2025
TCC
Information-Theoretic Broadcast-Optimal MPC
Abstract
Broadcast, though often used as a black box in cryptographic protocols, is expensive to realize in terms of rounds and communication complexity. We investigate the minimal use of broadcast in round-optimal information-theoretic MPC, with statistical security. For information-theoretic MPC with guaranteed output delivery, four rounds of communication are necessary and sufficient (Applebaum, Kachlon and Patra, FOCS 2020; Applebaum, Kachlon and Patra, STOC 2023). We show that broadcast is unavoidable in the second and third rounds of statistical MPC protocols. To complement our lower bounds, we modify the protocol of Applebaum, Kachlon and Patra (STOC 2023) to make use of broadcast only in the second and third round. Along the way, we show that the sharing phase of any three-round information-theoretic VSS protocol must also make use of broadcast in the second and third rounds.
2025
TCC
Is It Even Possible? On the Parallel Composition of Asynchronous MPC Protocols
Abstract
Despite several known idiosyncrasies separating the synchronous and the asynchronous models, asynchronous secure multi-party computation (MPC) protocols demonstrate high-level similarities to synchronous MPC, both in design philosophy and abstract structure. As such, a coveted, albeit elusive, {\em desideratum} is to devise automatic translators (e.g., protocol compilers) of feasibility and efficiency results from one model to the other.
In this work, we demonstrate new challenges associated with this goal. Specifically, we study the case of \emph{parallel composition} in the asynchronous setting. We provide formal definitions of this composition operation in the UC framework, which, somewhat surprisingly, have been missing from the literature. Using these definitions, we then turn to charting the feasibility landscape of asynchronous parallel composition.
We first prove strong impossibility results for composition operators that do not assume knowledge of the functions and/or the protocols that are being composed. These results draw a grim feasibility picture, which is in sharp contrast with the synchronous model, and highlight the question:
\begin{center}
{\em Is asynchronous parallel composition even a realistic goal?}
\end{center}
To answer the above (in the affirmative), we provide conditions on the composed protocols that enable a useful form of asynchronous parallel composition, as it turns out to be common in existing constructions.
2025
TCC
Justvengers: Batched VOLE ZK Disjunctions in O(R+B+C) Communication
Abstract
Recent progress on zero-knowledge proofs (ZKPs) based on vector oblivious linear evaluation (VOLE) offers a promising paradigm for scaling ZKPs over extremely large statements. In particular, VOLE-based ZK is currently the best choice in terms of end-to-end execution time. However, VOLE-based ZK incurs high communication overhead — it usually scales linearly with the circuit size.
To mitigate this, existing literature considers VOLE-based ZK over structured statements. In this work, we focus on the batched disjunctive statement — P and V agree on B fan-in 2 circuits C_1, ..., C_B over a field F, each of size C with n_in inputs. P’s goal is to demonstrate knowledge of R witnesses (id_j ∈ [B], w_j ∈ F^{n_in}), j ∈ [R], such that C_{id_j}(w_j) = 0 for every j ∈ [R], where neither w_j nor id_j is revealed. Batched disjunctive statements are effective, e.g., in emulating CPU execution inside ZK. Note, the naïve solution results in a circuit of size O(RBC).
To prove such a statement using VOLE-based ZK, the prior state-of-the-art protocol Antman (Weng et al., CCS’22) incurred O(BC + R) communication by additionally relying on AHE, whereas Batchman (Yang et al., CCS’23) achieved O(RC + B) communication using only VOLE.
In this work, we combine these two protocols non-trivially and present a novel protocol Justvengers — targeting the batched disjunctive statement — that incurs only O(R + B + C) communication and O(BC + (B + C)R log R) computation for prover, using AHE and VOLE. As in Antman, Justvengers requires an AHE scheme that achieves linear targeted malleability, which is a non-falsifiable assumption.
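Ignoring constants and lower-order terms, the four communication bounds are easy to compare numerically; the parameter sizes below are our own, purely for illustration.

```python
# Leading-term communication for R proofs over B branch circuits of size C:
# naive O(RBC), Antman O(BC + R), Batchman O(RC + B), Justvengers O(R + B + C).
R, B, C = 10_000, 1_000, 100_000

costs = {
    "naive":       R * B * C,
    "Antman":      B * C + R,
    "Batchman":    R * C + B,
    "Justvengers": R + B + C,
}
for name, cost in costs.items():
    print(f"{name:12s} ~{cost:.1e}")
# Justvengers' O(R + B + C) is the only bound that stays additive when the
# batch size R, the number of branches B, and the circuit size C all grow.
```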
2025
TCC
Large-Plaintext Functional Bootstrapping in FHE with Small Bootstrapping Keys
Abstract
Functional bootstrapping is a core technique in Fully Homomorphic Encryption (FHE). For large plaintexts, evaluating a general function homomorphically over a ciphertext in the FHEW/TFHE approach requires encoding the function, in look-up table form, in the coefficients of a test polynomial; the degree of the polynomial must therefore be high enough to hold the entire table.
This increases the bootstrapping time complexity and memory cost, as the size of the bootstrapping keys and keyswitching keys must grow accordingly.
In this paper, we propose to encode the look-up table of any function in a polynomial vector, whose coefficients can hold more data. The corresponding representation of the additive group ${\mathbb Z}_q$ used in the RGSW-based bootstrapping is the group of monic monomial permutation matrices, which integrates the permutation-matrix representation used by Alperin-Sheriff and Peikert in 2014 and the monic monomial representation used in the FHEW/TFHE scheme. We comprehensively investigate the new representation and propose a new bootstrapping algorithm based on it.
The new algorithm supports functional bootstrapping of large plaintexts, and achieves a polynomial reduction in key sizes and a constant-factor improvement in run-time cost.
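The test-polynomial idea that the paper generalizes can be seen in the clear with a few lines of Python. This sketches the standard FHEW/TFHE-style encoding over $\mathbb{Z}_q[X]/(X^N+1)$, where blind rotation multiplies by $X^{-m}$ under encryption; the paper's new polynomial-vector encoding is not modeled here.

```python
# Encode a lookup table f: {0,...,N-1} -> Z_q in the coefficients of a test
# polynomial T(X) = sum_i f(i) X^i over Z_q[X]/(X^N + 1). Multiplying by
# X^{-m} (what blind rotation does under encryption) moves f(m) into the
# constant coefficient.
N, q = 16, 257
f = lambda i: (3 * i + 1) % q            # an arbitrary lookup table
T = [f(i) for i in range(N)]             # test polynomial coefficients

def negacyclic_shift(poly, k):
    # Multiply poly by X^k in Z_q[X]/(X^N + 1); wrapping past X^N flips sign.
    out = [0] * N
    for i, c in enumerate(poly):
        j = (i + k) % (2 * N)
        if j < N:
            out[j] = (out[j] + c) % q
        else:
            out[j - N] = (out[j - N] - c) % q
    return out

for m in (0, 5, 11):
    rotated = negacyclic_shift(T, -m)
    assert rotated[0] == f(m)            # f(m) lands in the constant term
    print(f"f({m}) = {rotated[0]}")
```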
2025
TCC
Linear Prover IOPs in Log Star Rounds
Abstract
Interactive Oracle Proofs (IOPs) form the backbone of some of the most efficient general-purpose cryptographic proof-systems. In an IOP, the prover can interact with the verifier over multiple rounds, where in each round the prover sends a long message, from which the verifier only queries a few symbols.
State-of-the-art IOPs achieve a linear-size prover and a poly-logarithmic time verifier but require a relatively large, logarithmic, number of rounds. While the Fiat-Shamir heuristic can be used to eliminate the need for actual interaction, in modern highly-parallelizable computer architectures such as GPUs, the large number of rounds still translates into a major bottleneck for the prover, since it needs to alternate between computing the IOP messages and the Fiat-Shamir hashes. Motivated by this fact, in this work we study the round complexity of linear-prover IOPs.
Our main result is an IOP for a large class of Boolean circuits, with only $O(\log^*(S))$ rounds, where $\log^*$ denotes the iterated logarithm function (and $S$ is the circuit size). The prover has linear size $O(S)$ and the verifier runs in time $\mathrm{polylog}(S)$ and has query complexity $O(\log^*(S))$. The protocol is both conceptually simpler and strictly more efficient than prior linear-prover IOPs for Boolean circuits.
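To see how slowly the round count grows, here is the iterated logarithm computed in Python (base-2 convention, one of several equivalent choices up to constants):

```python
import math

def log_star(s) -> int:
    # Iterated logarithm: how many times log2 must be applied to s
    # before the value drops to at most 1.
    rounds = 0
    while s > 1:
        s = math.log2(s)
        rounds += 1
    return rounds

for exp in (10, 20, 64, 1024):           # circuit sizes S = 2^exp
    print(f"log*(2^{exp}) = {log_star(2**exp)}")
# Even for astronomically large circuits, log*(S) is 4 or 5, so the
# IOP's round complexity is a small constant for all practical purposes.
```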
2025
TCC
Linear-Time Accumulation Schemes
Abstract
Proof-carrying data (PCD) is a powerful cryptographic primitive for computational integrity in a distributed setting. State-of-the-art constructions of PCD are based on accumulation schemes (and, closely related, folding schemes).
We present WARP, the first accumulation scheme with linear prover time and logarithmic verifier time. Our scheme is hash-based (secure in the random oracle model), plausibly post-quantum secure, and supports unbounded accumulation depth.
We achieve our result by constructing an interactive oracle reduction of proximity that works with any linear code over a sufficiently large field. We take a novel approach by constructing a straightline extractor that relies on erasure correction, rather than error-tolerant decoding like prior extractors. Along the way, we introduce a variant of straightline round-by-round knowledge soundness that is compatible with our extraction strategy.
2025
TCC
Lower Bounds on Inner-Product Functional Encryption from All-or-Nothing Encryption Primitives
Abstract
Functional encryption (FE) is a versatile form of encryption in which the functional key ${\sf sk}_{\bf v}$ decrypts a ciphertext to the function value ${\bf v}({\bf m})$ evaluated on the plaintext ${\bf m}$. Security requires that even when an adversary obtains multiple functional keys by collusion, it learns only the evaluations but nothing more about the plaintext. This work considers the adversary obtaining an \emph{unbounded number of colluded keys}, a natural setting for many FE applications. We aim to understand which cryptographic primitives are sufficient to construct an unbounded-collusion FE.
This work proves negative results: we separate unbounded-collusion FEs from many encryption primitives that are ``all-or-nothing.'' Introduced by Garg, Mahmoody and Mohammed (CRYPTO '17), an all-or-nothing encryption scheme allows an adversary to learn either \emph{the whole plaintext}, if the authorized secret key is given, or \emph{nothing}. Hence, we rule out such FE constructions from fully homomorphic encryption (FHE), attribute-based encryption (ABE), and predicate encryption (PE). Our impossibility holds for private-key and selectively secure FEs for a minimal function class, \emph{inner products}, making the impossibility stronger.
Our separation is in the ``monolithic model,'' similar to the black-box model, but the primitive is allowed to evaluate a circuit that queries the primitive itself as an oracle.
Our result extends the beautiful work of Hajiabadi et al.~(TCC '24), which separated inner-product functional encryption (IPFE) from any primitives that exist in the random oracle model. Our result is perhaps surprising as IPFE was proposed by Abdalla et al.~(PKC '15) to be easier to construct, and indeed there are several IPFEs based on standard \emph{algebraic} assumptions, such as Decisional Diffie-Hellman or Learning-with-Errors. Moreover, \emph{bounded}-collusion FEs for all circuits are constructed in a black-box way from minimal assumptions, i.e., public-key FE from public-key encryption, and secret-key FE from one-way functions, by Ananth and Vaikuntanathan (TCC '19).
2025
TCC
Multi-Server Doubly Efficient PIR in the Classical Model and Beyond
Abstract
A Doubly Efficient Private Information Retrieval (DEPIR) is defined as a Private Information Retrieval (PIR) scheme with both near-linear server-side preprocessing time and sublinear query time. For many years it has been a strong open question to devise a DEPIR scheme from standard assumptions.
Very recently, Lin et al. (STOC '23) devised the first single-server DEPIR scheme from standard assumptions. However, their work left one major question open: can we build a DEPIR scheme in the multi-server setting, without relying on heavy cryptographic machinery?
In this work, we discuss two different settings for multi-server DEPIR.
In the non-colluding, non-communicating servers setting, we propose the first doubly efficient multi-server PIR scheme. Our scheme is information-theoretic. For a database of size $N$, our scheme allows servers to answer an infinite number of client queries in $N^{o(1)}$ time, after a single preprocessing phase which takes $N^{1+o(1)}$ time. Our result is only an $N^{o(1)}$ factor away from the lower bound proven by Persiano and Yeo (SODA '22) for this setting.
In addition, we devise a second information-theoretic PIR scheme in this setting which pushes the state of the art for the case where the number of servers is more constrained. It achieves query times as fast as our first scheme above, with a preprocessing time of $N^{2+o(1)}$. This scheme uses $O(\log N)$ servers. Neither scheme requires any cryptographic primitives.
Furthermore, in the setting where the servers are additionally allowed to communicate and keep state, we show a family of information-theoretic DEPIR schemes which achieve $N \cdot \mathrm{polylog}(N)$ preprocessing time, and $\log^3 N$ communication and server time, for any number of servers $k$ greater than three. We also discuss how introducing more servers or computational assumptions in this setting may improve the concrete efficiency of the PIR. This setting of communicating servers requires online communication between servers to answer queries, in contrast to the standard multi-server PIR setting where servers only communicate to coordinate the database contents and do not communicate at query time.
2025
TCC
Multiparty Homomorphic Secret Sharing and More from LPN and MQ
Abstract
We give the first constructions of multiparty pseudorandom correlation generators, distributed point functions, and (negligible-error) homomorphic secret sharing for constant-degree polynomials for any number of parties without using LWE or iO. Our constructions are proven secure under the combination of LPN with dimension $n$, $2n$ samples, and noise rate $n^{\varepsilon-1}$ for a small constant $\varepsilon$, and MQ with $n$ variables and $n^{1+\delta}$ equations.
As applications of our results, we obtain from the same assumptions secure multiparty computation protocols with sublinear communication and silent preprocessing, as well as private information retrieval for $M$ servers and size-$\lambda^d$ databases with optimal download rate and client-to-server communication $M^d \cdot \lambda^3$.
2025
TCC
NISQ Security and Complexity via Simple Classical Reasoning
Abstract
We give novel lifting theorems for security games in the quantum random oracle model (QROM) in Noisy Intermediate-Scale Quantum (NISQ) settings such as the hybrid query model, the noisy oracle and the bounded-depth models.
We provide, for the first time, a hybrid lifting theorem for hybrid algorithms that can perform both quantum and classical queries, as well as a lifting theorem for quantum algorithms with access to noisy oracles or bounded quantum depth.
At the core of our results lies a novel measure-and-reprogram framework, called hybrid coherent measure-and-reprogramming, tailored specifically for hybrid algorithms.
Equipped with the lifting theorems, we can directly prove NISQ security and complexity results by calculating a single combinatorial quantity, relying solely on classical reasoning.
As applications, we derive the first direct product theorems in the average case, in the hybrid setting---i.e., an enabling tool to determine the hybrid hardness of solving multi-instance security games.
This allows us to derive in a straightforward manner the NISQ hardness of various security games, such as (i) the non-uniform hardness of salted games, (ii) the hardness of specific cryptographic tasks such as the multi-instance versions of one-wayness and collision-resistance, and (iii) the uniform or non-uniform hardness of many other games.
2025
TCC
Obfuscating Pseudorandom Functions is Post-Quantum Complete
Abstract
The last decade has seen remarkable success in designing and uncovering new applications of indistinguishability obfuscation (i$\mathcal{O}$). The main pressing question in this area is whether {\em post-quantum} i$\mathcal{O}$ exists. All current lattice-based candidates rely on new, non-standard assumptions, many of which are known to be broken.
To make systematic progress on this front, we investigate the following question: can general-purpose i$\mathcal{O}$ be reduced, assuming {\em only} learning with errors (LWE), to obfuscating a smaller class of functions? The specific class of functions we consider is {\em pseudorandom functions} (PRFs), which constitute a natural functionality of independent interest. We show the following results:
\begin{itemize}
\item We construct exponentially-efficient i$\mathcal{O}$ (xi$\mathcal{O}$) for general circuits based on LWE in the pseudorandom oracle model -- a variant of the Random Oracle model (Jain et al., CRYPTO'23). Our construction requires the pseudorandom oracle model heuristic to hold for a \emph{specific} pseudorandom function and we prove its security against classical adversaries.
\item We construct (post-quantum) i$\mathcal{O}$ for general circuits in the standard model based on (post-quantum) sub-exponentially secure LWE and (post-quantum) sub-exponentially secure {\em average-case} i$\mathcal{O}$ -- a natural notion of i$\mathcal{O}$ for pseudorandom functions that we define.
\end{itemize}
To obtain these results, we generalize the ``encrypt-evaluate-decrypt'' paradigm used in prior works by replacing the use of fully homomorphic encryption with succinct secure two-party computation where parties obtain additive output shares (Boyle et al., EUROCRYPT'25 and Abram et al., STOC'25).
2025
TCC
Offline-Online Indifferentiability of Cryptographic Systems
Abstract
The indifferentiability framework has become a standard methodology that enables us to study the security of cryptographic constructions in idealized models of computation. Unfortunately, while indifferentiability provides strong guarantees whenever the security of a construction is captured by a "single-stage" security game, it may generally provide no meaningful guarantees when the security is captured by a "multi-stage" one. In particular, the indifferentiability framework does not capture offline-online games, where the adversary can perform an extensive offline computation to later speed up the online phase. Such security games are extremely common, both in practice and in theory. Over the past decade, there have been numerous attempts to meaningfully extend the indifferentiability framework to offline-online games; however, they have all ultimately met with little success.
In this work, our contribution is threefold. First, we propose an extension of the classical indifferentiability framework, which we refer to as *offline-online-indifferentiability*, that applies in the context of attackers with an expensive offline phase (à la Ghoshal and Tessaro, CRYPTO '23). Second, we show that our notion lends itself to a natural and meaningful composition theorem for offline-online security games. Lastly, as our main technical contribution, we analyze the offline-online-indifferentiability of two classical variants of the Merkle-Damgård hashing mechanism: one where the key is fed only to the first block in the chain, and one where the key is fed to each block in the chain. For both constructions, we prove a *tight* bound on their offline-online-indifferentiability (i.e., an upper bound and an attack that matches it). Notably, our bound for the second variant shows that the construction satisfies *optimal* offline-online-indifferentiability.
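The two analyzed variants are easy to state in code. Below is a toy Python model of our own, with the compression function played by a SHA-256 call; it fixes only the structure being analyzed (where the key enters the chain), not the paper's formal model or bounds.

```python
import hashlib

IV = b"\x00" * 32

def compress(chaining: bytes, block: bytes) -> bytes:
    # Toy compression function h(chaining, block) -> 32-byte chaining value.
    return hashlib.sha256(chaining + block).digest()

def md_key_first(key: bytes, blocks: list[bytes]) -> bytes:
    # Variant 1: the key is fed only to the first block in the chain.
    state = compress(IV, key + blocks[0])
    for block in blocks[1:]:
        state = compress(state, block)
    return state

def md_key_every(key: bytes, blocks: list[bytes]) -> bytes:
    # Variant 2: the key is fed to each block in the chain (the variant
    # shown to achieve optimal offline-online-indifferentiability).
    state = IV
    for block in blocks:
        state = compress(state, key + block)
    return state

msg = [b"block-1", b"block-2", b"block-3"]
print(md_key_first(b"secret-key", msg).hex()[:16])
print(md_key_every(b"secret-key", msg).hex()[:16])
```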
2025
TCC
On Achieving ``Best-in-the-Multiverse'' MPC
Abstract
The notion of Best-of-Both-Worlds introduced in the work of Ishai et al. (CRYPTO 2006) investigated whether an MPC protocol can simultaneously provide two incomparable security guarantees: guaranteed output delivery against an honest majority and security with abort against a dishonest majority and provided tight upper and lower bounds in the presence of computationally bounded, i.e., PPT adversaries. Another line of works starting from the work of Chaum (CRYPTO 1989) considered protocols that simultaneously achieved security against an unbounded adversary corrupting a minority of the parties and security against arbitrary corruption by a PPT adversary.
In this work, we generalize previous work to investigate a fundamental challenge of designing an MPC protocol in a \emph{multiverse} where security is specified with respect to (1) GOD, (2) fairness, (3) security w.r.t. unbounded adversaries, and (4) security with abort.
The work of Lucas et al. (PODC 2010) resolved this question when considering threshold adversaries; however, the case of general adversary structures remains open.
Our main result completely characterizes when a protocol can simultaneously achieve all properties. Namely, given adversary structures $Z_{GOD}, Z_{fair}, Z_{Stat}$ and $Z_{Comp}$, we provide tight upper and lower bounds for when an MPC protocol can provide GOD, fairness, and security with abort respectively for unbounded and PPT adversaries w.r.t. these adversary structures.
2025
TCC
On the Impossibility of Actively Secure Distributed Samplers
Abstract
One-round secure computation is generally believed impossible due to the \emph{residual function attack}: any honest-but-curious participant can replay the protocol in their head changing their input, and learn, in this way, a new output. Inputless functionalities are among the few that are immune to this problem.
This paper studies one-round, multi-party computation protocols (MPC) that implement the most natural inputless functionality: one that generates a random sample from a fixed distribution. These are called \emph{distributed samplers}. At Eurocrypt 2022, Abram, Scholl and Yakoubov showed how to build this primitive in the semi-honest model with dishonest majority. In this work, we give a lower bound for constructing distributed samplers with a malicious adversary in the standard model. In more detail, we show that for any construction in the stand-alone model with black-box simulation, even with a CRS and honest majority, the output of the sampling protocol must have low entropy. This essentially implies that this type of construction is useless in applications. Our proof is based on an entropic argument, drawing a new connection between computationally secure MPC, information theory and learning theory.
2025
TCC
On the Limitations of Pseudorandom Unitaries
Abstract
Pseudorandom unitaries (PRUs), one of the key quantum pseudorandom notions, are efficiently computable unitaries that are computationally indistinguishable from Haar random unitaries. While there is evidence to believe that PRUs are weaker than one-way functions, so far its relationship with other quantum cryptographic primitives (that are plausibly weaker than one-way functions) has not been fully established.
In this work, we focus on quantum cryptographic primitives with classical communication, referred to as QCCC primitives. Our main result shows that QCCC bit commitments and QCCC key agreement cannot be constructed from pseudorandom unitaries in a black-box manner.
Our core technical contribution is to show (in a variety of settings) the difficulty of distinguishing identical versus independent Haar unitaries by separable channels. Our result strictly improves upon prior works which studied similar problems in the context of learning theory [Anshu, Landau, Liu, STOC 2022] and cryptography [Ananth, Gulati, Lin, TCC 2024].
2025
TCC
Optimal Bounds on the Existence of Pseudorandom Codes
Abstract
A pseudorandom code is a keyed error-correction scheme with the property that any polynomial number of encodings appear random to any computationally bounded adversary. We show that the pseudorandomness of any code tolerating a constant rate of random errors cannot be based on black-box reductions to almost any generic cryptographic primitive: for instance, anything that can be built from random oracles, generic multilinear groups, and virtual black-box obfuscation. Our result is optimal, as Ghentiyala and Guruswami (2024) observed that pseudorandom codes tolerating any sub-constant rate of random errors exist assuming just one-way functions.
The key technical ingredient in our proof is the hypercontractivity theorem for Boolean functions, which we use to prove our impossibility in the random oracle model. It turns out that this easily extends to a separation of pseudorandom codes tolerating a constant rate of random errors from ``crypto oracles,'' a notion introduced and shown to be capable of implementing all the primitives mentioned above by Lin, Mook, and Wichs (EUROCRYPT 2025).
2025
TCC
Plonk is Simulation Extractable in ROM Under Falsifiable Assumptions
Abstract
Solving a long-standing open problem, Faonio, Fiore, and Russo proved that the widely used Plonk zk-SNARK is simulation extractable. However, their proof assumes both the random oracle model (ROM) and the algebraic group model. We prove that the same holds in the ROM under falsifiable assumptions. We combine the template of Faust et al., who proved that simulation extractability follows from knowledge soundness, (weak) unique response, and trapdoorless zero-knowledge, with the recent result of Lipmaa, Parisella, and Siim (Crypto 2025), who proved that Plonk has knowledge soundness in the ROM under falsifiable assumptions. For this, we prove that Plonk satisfies new variants of the weak unique response and trapdoorless zero-knowledge properties. We prove that several commonly used gadgets, like the linearization trick, are not trapdoorless zero-knowledge when considered as independent commit-and-prove zk-SNARKs.
2025
TCC
Polynomial Secret Sharing Schemes and Algebraic Matroids
Abstract
Polynomial secret sharing schemes generalize the linear ones by adding more expressivity and giving room for efficiency improvements. In these schemes, the secret is an element of a finite field, and the shares are obtained by evaluating polynomials on the secret and some random field elements, i.e., for every party there is a set of polynomials that computes the share of the party. Notably, for general access structures, the best known polynomial schemes have better share size than the best known linear ones. This work investigates the properties of the polynomials and the fields that allow these efficiency gains and aims to characterize the access structures that benefit from them.
We focus first on ideal schemes, which are optimal in the sense that the size of each share is the size of the secret. We prove that, under some restrictions, if the degrees of the sharing polynomials are not too big compared to the size of the field, then the access structure of the scheme is a port of an algebraic matroid. This result establishes a new connection between ideal schemes and matroids, extending the known connection between ideal linear schemes and linearly representable matroids.
For general polynomial schemes, we extend these results and analyze their privacy and correctness. Additionally, we show that given a set of polynomials over a field of large characteristic, one can construct linear schemes that realize the access structure determined by these polynomials; as a consequence, polynomial secret sharing schemes over these fields are not stronger than linear schemes.
While there exist ports of algebraic matroids that do not admit ideal schemes, we demonstrate that they always admit schemes with statistical security and information ratio tending to one. This is achieved by considering schemes where the sharings are points in algebraic varieties.
2025
TCC
Practical Secure Delegated Linear Algebra with Trapdoored Matrices
Abstract
Much of today's heavy computation occurs on servers owned by a second party. This reduces data privacy, prompting interest in data-oblivious computation, which typically degrades performance severely. Secure and fast delegated computation is particularly important for linear algebra, which comprises a large fraction of total computation and is best run on highly specialized hardware that is often accessible only through the cloud.
We state the natural efficiency and security desiderata for fast and data-oblivious delegated linear algebra. We demonstrate the existence of \textit{Trapdoored}-\textit{Matrix} families based on an LPN assumption, and provide a scheme for secure delegated matrix-matrix and matrix-vector multiplication based on the existence of trapdoored matrices. We achieve sublinear overhead for the server, dramatically reduced computation for the client, and various practical advantages over previous protocols.
2025
TCC
Privately Constrained PRFs from DCR: Puncturing and Bounded Waring Rank
Abstract
A privately constrained pseudorandom function (pCPRF) is a PRF with the additional property that one can derive a constrained key that allows evaluating the PRF only on inputs satisfying a constraint predicate $C$, without revealing $C$ itself or leaking information about the PRF’s output on inputs that do not satisfy the constraint.
Existing privately constrained PRFs face significant limitations: either (1) they rely on assumptions known to imply fully-homomorphic encryption or indistinguishability obfuscation, (2) they support only highly restricted classes of constraints—notably, no known group-based pCPRF even supports the simple class of puncturing constraints (where the constrained key permits evaluation on all but one point while hiding the punctured point), or (3) they are limited to polynomial-size input domains. A long-standing open question has been whether one can construct a privately constrained PRF from group-based assumptions for more expressive classes of constraints. In this work, we present a pCPRF based on the decisional composite residuosity (DCR) assumption that supports a highly expressive class of predicates, namely constraints whose degree and Waring rank are both polynomially bounded, which includes puncturing.
From a technical perspective, our work follows the general template of Couteau, Meyer, Passelègue, and Riahinia (Eurocrypt'23), who constructed a pCPRF from group-based homomorphic secret-sharing but were limited to inner-product constraints in the constraint-hiding setting. Leveraging novel techniques for computing with distributed discrete logarithms (DDLog), we enable the non-interactive authentication of powers of linear combinations of shares of some value. This, in turn, allows us to express constraints with polynomially bounded Waring rank.
Our construction is single-key, selectively secure, and supports an exponential-size domain.
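For intuition on the puncturing constraint class, the following is a minimal Python sketch of the classic GGM-style puncturable PRF. Note that this toy is not constraint-hiding: the punctured key reveals the punctured point, which is exactly the gap that privately constrained PRFs close; all names are illustrative.

import hashlib

def G(seed, bit):                       # length-halved PRG branches, modeled by a hash
    return hashlib.sha256(seed + bytes([bit])).digest()

def eval_prf(key, x_bits):              # GGM: walk the tree along the bits of x
    s = key
    for b in x_bits:
        s = G(s, b)
    return s

def puncture(key, x_bits):
    # Keep the co-path: the sibling seed at every level of the path to x.
    copath, s = [], key
    for b in x_bits:
        copath.append(G(s, 1 - b))
        s = G(s, b)
    return x_bits, copath               # x is revealed (not constraint-hiding!)

def eval_punctured(pkey, y_bits):
    x_bits, copath = pkey
    assert y_bits != x_bits, "cannot evaluate at the punctured point"
    # Start from the sibling seed at the first bit where y diverges from x.
    d = next(i for i in range(len(y_bits)) if y_bits[i] != x_bits[i])
    s = copath[d]
    for b in y_bits[d + 1:]:
        s = G(s, b)
    return s

key, x, y = b"\x00" * 32, [1, 0, 1], [1, 1, 0]
pk = puncture(key, x)
print(eval_punctured(pk, y) == eval_prf(key, y))  # True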
2025
TCC
Provably Memory-Hard Proofs of Work With Memory-Easy Verification
Abstract
A Proof of Work (PoW) is an important construction for spam-mitigation and distributed consensus protocols. Intuitively, a PoW is a short proof that is easy for the verifier to check but moderately expensive for a prover to generate. However, existing proofs of work are not egalitarian in the sense that the amortized cost to generate a PoW proof using customized hardware is often several orders of magnitude lower than the cost for an honest party to generate a proof on a personal computer. Because Memory-Hard Functions (MHFs) appear to be egalitarian, there have been multiple attempts to construct Memory-Hard Proofs of Work (MHPoW) which require memory-hard computation to generate, but are efficient to verify. Biryukov and Khovratovich (Usenix, 2016) developed an MHPoW candidate called Merkle Tree Proofs (MTP) using the Argon2d MHF. However, they did not provide a formal security proof, and Dinur and Nadler (Crypto, 2017) found an attack which exploited the data-dependencies of the underlying Argon2d graph.
We revisit the security of the MTP framework and formally prove, in the parallel random oracle model, that the MTP framework is sound when instantiated with a suitable {\em data-independent} Memory-Hard Function (iMHF). We generically lower bound the cumulative memory cost (cmc) of any prover for the protocol by the pebbling cost of the {\em ex-post facto} graph. We also prove that, as long as the underlying graph of the original iMHF is sufficiently depth-robust, then, except with negligible probability, the {\em ex-post facto} graph will have high cmc. In particular, if we instantiate the iMHF with DRSample then we obtain an MHPoW with the following properties: (1) an honest prover for the protocol can run in sequential time $O(N)$; (2) the proofs have size $\mathtt{polylog}(N)$ and can be verified in time $\mathtt{polylog}(N)$; (3) any malicious prover who produces a valid proof must incur cumulative memory complexity at least $\Omega\left(\frac{N^2}{\log N}\right)$. We also develop general pebbling attacks which we use to show that any iMHF-based MHPoW using the MTP framework has proof size at least (1) $\Omega\left(\log^2 N/\log \log N \right)$ in general, and (2) $\tilde{\Omega}(N^{0.32})$ when the iMHF is instantiated with Argon2i, the data-independent version of Argon2.
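To make the commit-and-spot-check pattern behind the MTP framework concrete, the following is a schematic Python toy: the prover Merkle-commits to the memory array of an MHF computation and opens a few randomly chosen leaves against the root. This is not the MTP protocol itself, and all names are illustrative.

import hashlib, secrets

H = lambda *xs: hashlib.sha256(b"".join(xs)).digest()

def merkle_tree(leaves):
    level, tree = [H(x) for x in leaves], []
    while len(level) > 1:
        tree.append(level)
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    tree.append(level)                   # tree[-1][0] is the root / commitment
    return tree

def open_leaf(tree, i):
    path = []
    for level in tree[:-1]:
        path.append(level[i ^ 1])        # sibling hash at each level
        i //= 2
    return path

def verify(root, leaf, i, path):
    h = H(leaf)
    for sib in path:
        h = H(h, sib) if i % 2 == 0 else H(sib, h)
        i //= 2
    return h == root

memory = [secrets.token_bytes(8) for _ in range(8)]   # toy MHF memory array
tree = merkle_tree(memory)
i = 5                                                 # verifier's random challenge
print(verify(tree[-1][0], memory[i], i, open_leaf(tree, i)))  # True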
2025
TCC
Pseudorandom Correlation Functions for Garbled Circuits
Abstract
In this paper, we define the notion of pseudorandom correlation generators (PCGs) and functions (PCFs) for garbling correlations. With our Garbling PCG or PCF, two parties can non-interactively generate a virtually unbounded number of secret-shared garbled circuits and corresponding secret-shared garbled inputs. With the shares of the garbled circuit and garbled input, anyone can recover the garbled circuit and evaluate it to obtain the result of the computation in the clear.
In the process of constructing Garbling PCFs, we introduce a new primitive that we call a Topology-Adaptive PCF (TAPCF), which we construct from two different variants of the learning parity with noise (LPN) assumption. Informally, a TAPCF is a PCF that additionally allows the target correlation to be specified on-demand (i.e., at evaluation time). Using our TAPCF construction as a building block, we construct a Garbling PCF that allows the parties to specify the circuit they wish to garble on-the-fly. Under realistic parameter settings, we estimate that, with our construction, two parties can generate one garbled circuit per second, for garbled circuits with 10,000 AND gates.
We show that Garbling PCFs have several applications: We provide constructions for (1) an efficient homomorphic secret-sharing scheme for specific circuits, (2) a zero-knowledge proof system for homomorphic secret sharing that supports checking unstructured languages, and (3) a semi-honest reusable two-round, two-party computation protocol supporting non-interactive public outputs.
2025
TCC
Pseudorandom FE and iO with Applications
Abstract
We propose the abstractions of Functional Encryption (FE) and Indistinguishability Obfuscation (iO) for {\it pseudorandom} functionalities, which are strictly weaker than their general counterparts. Intuitively, a pseudorandom functionality means that the output of the circuit is indistinguishable from uniform for {\it every} input seen by the adversary. We then leverage the weak, indistinguishability-style security of these tools to obtain the following applications:
1. {\it Attribute Based Encryption for Unbounded Depth Circuits.} Assuming IND-secure FE for pseudorandom functionalities and LWE, we construct Attribute Based Encryption (ABE) for circuits of unbounded depth. Previously, such ABE required the circular Evasive LWE assumption (Hsieh, Lin and Luo, FOCS 2023), which has recently been subject to zeroizing attacks.
2. {\it Attribute Based Encryption for Turing Machines.} Assuming IND-secure FE for pseudorandom functionalities and circular small-secret LWE, we construct ABE for Turing machines. Previously, such ABE required either private coin Evasive LWE (Agrawal, Kumari and Yamada, Crypto 2024) or circular Evasive LWE (Cini and Wee, Eurocrypt 2025), both of which admit attacks in the general case.
3. {\it Multi Input Predicate Encryption for Polynomial Arity.} Assuming IND-secure multi-input FE for pseudorandom functionalities, we construct Multi Input Predicate Encryption (${\sf MIPE}$) for ${\sf P}$ with polynomial arity. Previously, ${\sf MIPE}$ for ${\sf P}$ was known only for {\it constant} arity, using private coin evasive LWE (Agrawal, Rossi, Yadav and Yamada, Crypto 2023).
4. {\it Instantiating the Random Oracle.} We use our IND-secure iO for pseudorandom functionalities to instantiate the random oracle in several applications that previously used iO (Hohenberger, Sahai and Waters, Eurocrypt 2014), such as the full-domain hash signature based on trapdoor permutations and more, thus reducing their security to strong evasive LWE and LWE assumptions.
We provide heuristic constructions of FE and MIFE for pseudorandom functionalities from private coin evasive LWE and plain LWE, where private coin evasive LWE is suitably parametrized to avoid all known attacks for the functionalities we consider in this work. This implies iO for pseudorandom functionalities from the same assumptions.
2025
TCC
Pseudorandom Function-like States from Common Haar Unitary
Abstract
Recent active studies have demonstrated that cryptography without one-way functions (OWFs) could be possible in the quantum world. Many fundamental primitives that are natural quantum analogs of OWFs or pseudorandom generators (PRGs) have been introduced, and their mutual relations and applications have been studied. Among them, pseudorandom function-like state generators (PRFSGs) [Ananth, Qian, and Yuen, Crypto 2022] are one of the most important primitives. PRFSGs are a natural quantum analogue of pseudorandom functions (PRFs), and imply many applications such as IND-CPA secret-key encryption (SKE) and EUF-CMA message authentication codes (MACs). However, the only known constructions of (many-query-secure) PRFSGs are from OWFs or pseudorandom unitaries (PRUs).
In this paper, we construct classically-accessible adaptively secure PRFSGs in the invertible quantum Haar random oracle (QHRO) model, introduced in [Chen and Movassagh, Quantum]. The invertible QHRO model is an idealized model where any party can access a public single Haar random unitary and its inverse, which can be considered a quantum analog of the random oracle model. Our PRFSG constructions resemble the classical Even-Mansour encryption based on a single permutation, and are secure against any unbounded polynomial number of queries to the oracle and construction. To our knowledge, this is the first application in the invertible QHRO model without any assumption or conjecture. The previous best constructions in idealized models are PRFSGs secure up to $o(\lambda/\log \lambda)$ queries in the common Haar state model [Ananth, Gulati, and Lin, TCC 2024] and (inverseless) PRUs in a relaxed QHRO model without inverse access [Ananth, Bostanci, Gulati, and Lin, Eurocrypt 2025].
We develop new techniques on Haar random unitaries to prove the selective and adaptive security of our PRFSGs. For selective security, we introduce a new formula, which we call the Haar twirl approximation formula. For adaptive security, we show the unitary reprogramming lemma and the unitary resampling lemma, along with several technical tools for unitary oracle security proofs with pure state queries. These are of independent interest and may have many further applications. In particular, by using the approximation formula, we give an alternative proof of the non-adaptive security of the PFC ensemble [Metger, Poremba, Sinha, and Yuen, FOCS 2024] as an additional result.
Finally, we prove that our construction is neither a PRU nor a quantum-accessible non-adaptive PRFSG by presenting quantum polynomial time attacks. Our attack is based on generalizing the hidden subgroup problem where the relevant function outputs quantum states.
2025
TCC
Quantum Interactive Oracle Proofs
Abstract
We initiate the study of quantum Interactive Oracle Proofs (qIOPs), a generalization of both quantum Probabilistically Checkable Proofs and quantum Interactive Proofs, as well as a quantum analogue of classical Interactive Oracle Proofs.
In the model of quantum Interactive Oracle Proofs, we allow multiple rounds of quantum interaction between the quantum prover and the quantum verifier, but the verifier has limited access to quantum resources. This includes both queries to the prover’s messages and the complexity of the quantum circuits applied by the verifier. The question of whether QMA admits a quantum interactive oracle proof system is a relaxation of the quantum PCP Conjecture.
We show the following two main constructions of qIOPs, both of which are unconditional:
\begin{itemize}
\item We construct a quantum IOP protocol for QMA in which the verifier shares polynomially many EPR pairs with the prover at the start of the protocol and reads only a constant number of qubits from the prover’s messages.
\item We provide a stronger construction of quantum IOP for QMA in which the verifier not only reads a constant number of qubits but also operates on a constant number of qubits overall, including those in their private registers. However, in this stronger setting, the communication complexity becomes exponential. This leaves open the question of whether strong quantum IOPs for QMA with polynomial communication complexity exist.
\end{itemize}
As a key component of our construction, we introduce a novel single-prover many-qubit test, which may be of independent interest.
2025
TCC
Quantum Rewinding for IOP-Based Succinct Arguments
Abstract
We analyze the post-quantum security of succinct interactive arguments constructed from interactive oracle proofs (IOPs) and vector commitment schemes. Specifically, we prove that an interactive variant of the *BCS transformation* is secure in the standard model against quantum adversaries when the vector commitment scheme is collapse binding.
Prior work established the post-quantum security of Kilian's succinct interactive argument, a special case of the BCS transformation for one-message IOPs (i.e., PCPs). That analysis is inherently limited to one message because the reduction, like all prior quantum rewinding reductions, aims to extract classical information (a PCP string) from the quantum argument adversary. Our reduction overcomes this limitation by instead extracting a *quantum algorithm* that implements an IOP adversary; representing such an adversary classically may in general require exponential complexity.
Along the way we define *collapse position binding*, which we propose as the ``correct'' definition of collapse binding for vector commitment schemes, eliminating shortcomings of prior definitions.
As an application of our results, we obtain post-quantum secure succinct arguments, in the standard model (no oracles), with the *best asymptotic complexity known*.
2025
TCC
Relationships among FuncCPA and Its Related Notions
Abstract
Akavia, Gentry, Halevi, and Vald (TCC’22, JoC'25) introduced the security notion of function-chosen-plaintext-attack (FuncCPA) security for public-key encryption schemes.
FuncCPA is defined by adding a functional re-encryption oracle to the IND-CPA game.
This notion is crucial for secure computation applications where the server is allowed to delegate a part of the computation to the client.
Dodis, Halevi, and Wichs (TCC’23) introduced a stronger variant called FuncCPA+; they showed that FuncCPA+ implies FuncCPA and conjectured that FuncCPA+ is strictly stronger.
Seeking insights into this conjecture, they showed that ReEncCPA+ is strictly stronger than ReEncCPA, where ReEncCPA and ReEncCPA+ are restricted versions of FuncCPA and FuncCPA+ respectively.
In this paper, contrary to their conjecture, we show that FuncCPA+ is equivalent to FuncCPA.
We also introduce new variants of FuncCPA: WeakFuncCPA, OW-FuncCPA, and OW-WeakFuncCPA. WeakFuncCPA is a restricted variant of FuncCPA in that an oracle query is prohibited after the challenge query (like IND-CCA1).
OW-FuncCPA and OW-WeakFuncCPA are the one-way (OW) versions of FuncCPA and WeakFuncCPA, respectively.
This paper shows that WeakFuncCPA and OW-FuncCPA are equivalent to FuncCPA, that is, all of FuncCPA, FuncCPA+, WeakFuncCPA, and OW-FuncCPA are equivalent.
Considering the separation of IND-CCA1 and IND-CCA2, and that of OW-CPA and IND-CPA, these results are surprising.
To show the equivalence, we develop novel techniques to utilize functional re-encryption oracles.
We then provide separation results: OW-WeakFuncCPA does not imply FuncCPA, and ReEncCPA+ does not imply FuncCPA.
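For intuition, the following is a minimal Python sketch of the functional re-encryption oracle that distinguishes FuncCPA from IND-CPA: on a query consisting of ciphertexts and a function f, the oracle returns a fresh encryption of f applied to the decrypted values. The toy "encryption scheme" here is illustrative only and has no security.

import secrets

def keygen():
    k = secrets.randbits(64)
    return k, k                          # (pk, sk), identical in this toy

def enc(pk, m):
    r = secrets.randbits(64)
    return (r, m ^ pk ^ r)               # toy randomized "encryption"

def dec(sk, c):
    r, body = c
    return body ^ sk ^ r

def func_reenc_oracle(pk, sk, cts, f):
    # The FuncCPA oracle: decrypt the queried ciphertexts, apply f, and
    # return a *fresh* encryption of the result.
    return enc(pk, f(*[dec(sk, c) for c in cts]))

pk, sk = keygen()
c1, c2 = enc(pk, 10), enc(pk, 32)
c3 = func_reenc_oracle(pk, sk, [c1, c2], lambda a, b: a + b)
print(dec(sk, c3))                       # 42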
2025
TCC
Relativized Succinct Arguments in the ROM Do Not Exist
Abstract
A relativized succinct argument in the random oracle model (ROM) is a succinct argument in the ROM that can prove/verify the correctness of computations that involve queries to the random oracle. We prove that relativized succinct arguments in the ROM do not exist. The impossibility holds even if the succinct argument is interactive, and even if soundness is computational (rather than statistical).
This impossibility puts on a formal footing the commonly-held belief that succinct arguments in the ROM require non-relativizing techniques. Moreover, our results stand in sharp contrast with other oracle models, for which a recent line of work has constructed relativized succinct non-interactive arguments (SNARGs). Indeed, relativized SNARGs are a powerful primitive that, e.g., can be used to obtain constructions of IVC (incrementally-verifiable computation) and PCD (proof-carrying data) based on falsifiable cryptographic assumptions. Our results rule out this approach for IVC and PCD in the ROM.
2025
TCC
Sandwich BUFF: Achieving Non-Resignability Using Iterative Hash Functions
Abstract
We revisit the BUFF transform, which was proposed by Cremers et al. (S&P'21) as a means to achieve security properties beyond standard unforgeability for digital signature schemes. One of these properties, non-resignability (NR), has recently drawn some attention due to a strong impossibility result for the original definition of the property. Recent follow-up work then considered a variant (sNR) of the original definition, and showed that it is satisfied by the BUFF transform when the underlying hash function is modeled as a random oracle --- while the original impossibility result still applies for the plain model. This raises the natural question of whether the BUFF transform satisfies sNR in a more fine-grained use of the random oracle model, when we consider a real-life iterative-hash-function design (such as Merkle-Damgard or Sponge) and instead idealize the round function. Our discoveries in this direction are two-fold:
First, contrary to what one might expect, we show that there is a simple attack on the non-resignability property sNR of the BUFF-transform when instantiated with an iterative hash function. The attack relies on leaking an intermediate result of the hash computation to the adversary who is challenged to ``resign'' the message. This negative result once more shows the subtlety in the non-resignability property.
Second, on the positive side, we propose a small modification to the original BUFF transform, which we call Sandwich BUFF (for reasons to become clear), and prove the non-resignability property sNR of Sandwich BUFF both for Merkle-Damgard-based hash functions in the random oracle model, and for Sponge-based hash functions in the random permutation model.
2025
TCC
Scalable Multi-Server Private Information Retrieval
Abstract
We revisit multi-server Private Information Retrieval (PIR), where the client interacts with $S$ non-colluding servers. Ideally, we want a *scalable* family of multi-server PIR schemes where all the performance metrics of the scheme decrease as $S$ increases. However, no prior work achieved scalability in any setting, under any hardness assumption.
In this paper we construct new multi-server, information-theoretically secure *scalable* PIR schemes for three natural settings. First, we give a construction where all the performance metrics scale at equal rate. Second, we give a scalable construction that minimizes the per-query bandwidth. Third, we give a scalable construction that minimizes the per-query online bottleneck cost (the maximum of the bandwidth and computation). For the first two settings, our constructions are *doubly efficient* with only a super-constant number of servers. In comparison, the best known prior works in the information-theoretic setting required super-logarithmically many servers to achieve the doubly efficient notion.
Our techniques for achieving scalable PIR also enable us to advance the state of the art in the polynomial space setting. In this setting, we show how to improve the space consumption of prior works by a polynomial factor while preserving all other metrics. Further, we show a new balancing technique that allows us to further minimize the bandwidth per query by trading off the computation and server space, thus enabling a more smooth tradeoff between the metrics and generalizing the design space.
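For background on the model, here is the classic 2-server XOR-based PIR protocol ($S = 2$) as a minimal Python sketch; it is the textbook baseline, not one of this paper's scalable schemes. Each query alone is a uniformly random subset of indices, so neither non-colluding server learns the retrieved index.

import secrets

def query(n, i):
    q0 = [secrets.randbits(1) for _ in range(n)]  # uniformly random subset
    q1 = q0.copy()
    q1[i] ^= 1                                    # differs only at index i
    return q0, q1

def answer(db, q):
    # Each server XORs the database bits selected by its query.
    a = 0
    for bit, sel in zip(db, q):
        a ^= bit & sel
    return a

db = [1, 0, 1, 1, 0, 0, 1, 0]
i = 5
q0, q1 = query(len(db), i)
# The two answers differ exactly in the contribution of db[i]:
print(answer(db, q0) ^ answer(db, q1) == db[i])   # True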
2025
TCC
Securing Unbounded Differential Privacy Against Timing Attacks
Abstract
Recent works~\cite{ben2023resistance,ratliff2024framework} have started to theoretically investigate how we can protect differentially private programs against timing attacks, by making the joint distribution of the output and the runtime differentially private (JOT-DP). However, the existing approaches to JOT-DP have some limitations, particularly in the setting of unbounded DP (which protects the size of the dataset and applies to arbitrarily large datasets). First, the known conversion of pure DP programs to pure JOT-DP programs in the unbounded setting~\cite{ben2023resistance} (a) incurs a constant additive increase in error probability (and thus does not provide vanishing error as $n\to\infty$), (b) produces JOT-DP programs that fail to preserve the computational efficiency of the original pure DP program, and (c) is analyzed in a toy computational model in which the runtime is defined to be the number of coin flips. For approximate JOT-DP, an efficient conversion with vanishing error in the RAM model is known~\cite{haeberlen2011differential,ratliff2024framework}, but it only applies to programs that run in $O(n)$ time on datasets of size $n$, as linear runtime is implied by ``timing stability,'' the timing analogue of global sensitivity. In this work, we overcome these limitations. Specifically:
\begin{enumerate}
\item We show that the error required for pure JOT-DP in the unbounded setting depends on the model of computation.
\begin{itemize}
\item In a randomized RAM model where the dataset size $n$ is given (or can be computed in constant time) and we can generate random numbers (not just random bits) in constant time, polynomially small error probability is necessary and sufficient.
\item If $n$ is not given or we only have a random-bit generator, an (arbitrarily small) constant error probability is necessary and sufficient.
\end{itemize}
\item The aforementioned positive results are proven by efficient procedures to convert any pure JOT-DP program $P$ in the upper-bounded setting to a pure JOT-DP program $P'$ in the unbounded setting, such that the output distribution of $P'$ is $\gamma$-close in total variation distance to that of $P$, where $\gamma$ is either an arbitrarily small constant or polynomially small, depending on the model of computation.
\end{enumerate}
2025
TCC
Security Amplification of Threshold Signatures in the Standard Model
Abstract
The current standardization calls for threshold signatures have highlighted the need for appropriate security notions providing security guarantees strong enough for broad application. To address this, Bellare et al. [Crypto'22] introduced a hierarchy of unforgeability notions for threshold signatures. Recently, Navot and Tessaro [Asiacrypt'24] introduced a new game-based definition of strong (one-more) unforgeability for threshold signatures, which however does not achieve the strongest level of security in the hierarchy of Bellare et al.
Navot and Tessaro analyzed several existing schemes w.r.t. this security notion, but all positive results rely on idealized models. This is in contrast to the weaker security notion of (standard) unforgeability, for which standard-model constructions exist. This leaves open a fundamental question: is getting strong unforgeability fundamentally harder than standard unforgeability for threshold signatures?
In this paper we bridge this gap, by showing a generic construction lifting any unforgeable threshold signature scheme to strong unforgeability. The building blocks of our construction can be instantiated in the standard model under standard assumptions. The achieved notion of strong unforgeability extends the definition of Navot and Tessaro to achieve the strongest level of security according to the hierarchy of Bellare et al. (following a recent classification of security notions for (blind) threshold signatures by Lehmann, Nazarian, and Özbay [Eurocrypt'25]).
The starting point for our transformation is an existing construction for single-user signatures from chameleon hash functions by Steinfeld, Pieprzyk and Wang [RSA'07]. We first simplify their construction by relying on a stronger security notion for chameleon hash functions. The bulk of our technical contribution is then to translate this framework into the threshold setting. Towards this goal, we introduce a game-based definition for threshold chameleon hash functions (TCHF) and provide a construction of TCHF that is secure under DDH in the standard model. We believe that our new notion of TCHF might also be of independent interest.
2025
TCC
Seedless Condensers for Efficiently Samplable Sources
Abstract
Is it possible to efficiently convert any arbitrary source with sufficiently high (min-)entropy into nearly uniformly random bits? Information-theoretically, this is achievable using seeded extractors, provided the source is independent of the seed. However, in the absence of any such independence guarantee, no solution is possible for general computationally unbounded sources. Even for efficiently samplable sources, we cannot extract randomness that is statistically close to uniform, but can at best condense the randomness into an output that is only missing logarithmically many bits of entropy compared to the uniform distribution. Dodis, Ristenpart and Vadhan (TCC ’12) constructed such seeded condensers for efficiently samplable sources that can depend arbitrarily on the seed, assuming near-optimal security of collision-resistant hash functions. In this work, we investigate whether such condensers can be made seedless. We present several new results:
- We construct seedless condensers for all efficiently samplable sources assuming near-optimal security of keyless multi-collision resistant hash functions. We justify this assumption by showing it holds in the auxiliary-input random oracle model.
- We show that such seedless condensers cannot be proven secure via a black-box reduction from any standard game-based assumption, even if we assume optimal exact security. In fact, we give a general black-box separation that applies to a broad class of seedless primitives, with seedless condensers as a special case.
- We consider the class of efficiently samplable distributed sources where two parties jointly sample randomness $x = (x_0, x_1)$, with one party honestly choosing $x_b$ uniformly at random while the other party adversarially chooses $x_{1-b}$ depending on $x_b$. We show how to construct seedless condensers for such sources assuming near-optimal security of game-based assumptions: either (1) adaptive one-way functions, or (2) a certain form of (single-input) correlation-intractable hash functions that can be instantiated from key-dependent-message (KDM) secure encryption.
2025
TCC
Separating Pseudorandom Codes from Local Oracles
Abstract
Pseudorandom codes (PRCs) are error-correcting codes with the distinguishing feature that their codewords are computationally indistinguishable from random strings. Introduced by Christ and Gunn (CRYPTO 2024), PRCs have found applications in areas such as AI watermarking, where both robustness and pseudorandomness are essential. All known constructions of PRCs rely on coding-theoretic hardness assumptions. In this work, we study how inherent the use of coding-theoretic hardness is in the construction of pseudorandom codes.
We show that there is no black-box construction of PRCs with binary alphabets capable of decoding from a constant fraction of Bernoulli noise from a class of oracles we call local oracles. The class of local oracles includes random oracles and trapdoor permutation oracles, and can be interpreted as a meaningful notion of oracles that are not resilient against noise. Our separation result is cast in the Impagliazzo-Rudich framework and crucially relies on the Bonami-Beckner hypercontractivity theorem on the Boolean hypercube.
As a complementary result, we show that PRCs with large alphabets that can tolerate high error rates can indeed be constructed in a black-box manner from one-way functions.
2025
TCC
Simplified PIR and CDS Protocols and Improved Linear Secret-Sharing Schemes
Abstract
We consider three related cryptographic primitives: private information retrieval (PIR) protocols, conditional disclosure of secrets (CDS) protocols, and secret-sharing schemes; these primitives have many applications in cryptography. We study these primitives requiring information-theoretic security. The complexity of the three primitives has been dramatically improved in the last few years, and they are closely related, e.g., the 2-server PIR protocol of Dvir and Gopi (J. ACM 2016) was transformed to construct the CDS protocols of Liu, Vaikuntanathan, and Wee (CRYPTO 2017, Eurocrypt 2018), and these CDS protocols are the main ingredient in the construction of the best-known secret-sharing schemes. To date, the message size required in PIR and CDS protocols and the share size required in secret-sharing schemes are not well understood, and there are big gaps between their upper bounds and lower bounds. The goal of this paper is to try to better understand the upper bounds by simplifying current constructions and supplying tools for improving their complexity.
We obtain the following results:
- We simplify, abstract, and generalize the 2-server PIR protocol of Dvir and Gopi (J. ACM 2016), the recent multi-server PIR protocol of Ghasemi, Kopparty, and Sudan (STOC 2025), and the 2-server and multi-server CDS protocols of Liu et al.\ (CRYPTO 2017, Eurocrypt 2018) and Beimel, Farr\`as, and Lasri (TCC 2023). In particular, we present one PIR protocol generalizing both the 2-server and multi-server PIR protocols using general share conversions. For $k\geq 3$ servers, our main contribution is a simpler proof of the correctness of the protocol, avoiding the partial derivatives and interpolation polynomials used by Ghasemi et al.
- In addition to simplifying previous protocols, our 2-server protocols can use a relaxed variant of matching vectors over any $m$ that is the product of two distinct primes. Our constructions do not improve the communication complexity of PIR and CDS protocols; however, a construction of better relaxed matching vectors over \emph{any} $m$ that is the product of two distinct primes would improve the communication complexity of 2-server PIR and 2-server CDS protocols.
- In many applications of secret-sharing schemes it is important that the scheme is linear, e.g., so that parties can locally add shares of two secrets and obtain shares of the sum of the secrets. In an independent result, we provide a construction of linear secret-sharing schemes for $n$-party access structures with improved share size of $2^{0.7563n}$. Previously, the best share size for linear secret-sharing schemes was $2^{0.7576n}$, and it is known that for most $n$-party access structures the share size is at least $2^{0.5n}$. This result is achieved by a reduction to unbalanced CDS protocols (compared to balanced CDS protocols in previous constructions).
2025
TCC
Slightly Sublinear Trapdoor Hash Functions and PIR from Low-Noise LPN
Abstract
Trapdoor hash functions (TDHs) are compressing hash functions with an additional trapdoor functionality: given an encoding key for a function $f$, a hash on $x$ together with a (small) input encoding allows one to recover $f(x)$. TDHs are a versatile tool and a useful building block for more complex cryptographic protocols.
In this work, we propose the first TDH construction assuming the (quasi-polynomial) hardness of the LPN problem with noise rate $\epsilon = O(\log^{1+\beta} n / n)$ for $\beta>0$, i.e., in the so-called low-noise regime. The construction achieves a $2^{\Theta(\log^{1-\beta} \lambda)}$ compression factor. As an application, we obtain private-information retrieval (PIR) with communication complexity $L / 2^{\Theta(\log^{1-\beta} L)}$, for a database of size $L$. This is the first PIR scheme with non-trivial communication complexity (asymptotically smaller than $L$) from any code-based assumption.
2025
TCC
SNARK Lower Bounds via Communication Complexity
Abstract
We initiate the study of lower bounding the verification time of Succinct Non-interactive ARguments of Knowledge (SNARKs) built in the Polynomial Interactive Oracle Proof + Polynomial Commitment Scheme paradigm. The verification time of these SNARKs is generally dominated by the polynomial commitment scheme, and so we want to understand if polynomial commitment schemes admit lower bounds on the verification time. By recognizing that polynomial commitment schemes are also often built by applying cryptography to some information-theoretic core protocol, we seek to separate this core from the cryptography in a way that meaningfully captures the verification time required by the polynomial commitment scheme verifier.
We provide strong evidence that several polynomial commitment schemes have (nearly) \emph{optimal} verifier times. Our evidence comes from connecting polynomial commitment schemes to certain information-theoretic protocols known as communication protocols from the field of communication complexity, a link which we believe to be of independent interest. Through this lens, we model the verifier work in the cryptographic protocols as information (i.e., number of bits) exchanged between parties in the communication protocols, allowing us to leverage lower bounds from communication complexity. These lower bounds give strong evidence that the verifier time in these polynomial commitment schemes must be at least the number of bits exchanged in the communication protocol.
We extract the communication protocol cores of three polynomial commitment schemes and lower bound the bits exchanged in these cores. The lower bounds we obtain match (up to poly-logarithmic factors) the best-known (asymptotic) verification times of the polynomial commitment schemes we examine in this work. Specifically, we show that for univariate/multilinear polynomials of size $N = 2^n$:
- the communication core of Hyrax PCS (Wahby et al., S&P 2018) requires $\Omega(\sqrt{N})$ bits to be exchanged;
- the communication core of Bulletproofs PCS (Bootle et al., EUROCRYPT 2016; Bünz et al., S&P 2018) requires $\Omega(N)$ bits to be exchanged; and
- the communication core of Dory PCS (Lee, TCC 2021) requires $\Omega(\log(N))$ bits to be exchanged.
Our results strongly suggest a negative answer to a longstanding open question on whether the Bulletproofs verifier can be made sublinear time.
2025
TCC
Space-Deniable Proofs
Abstract
We introduce and construct a new proof system called Non-interactive Arguments of Knowledge or Space (NArKoS), where a space-bounded prover can convince a verifier that they know a secret, while access to sufficient space allows one to forge indistinguishable proofs without the secret.
One application of NArKoS is space-deniable proofs, which are proofs of knowledge (say, for authentication in access control) that are sound when executed by a lightweight device like a smart-card or an RFID chip that cannot have much storage, but are deniable (in the strong sense of online deniability) since the verifier, like a card reader, can efficiently forge such proofs.
We construct NArKoS in the random oracle model using an OR-proof combining a sigma protocol (for the proof of knowledge of the secret) with a new proof system called simulatable Proof of Transient Space (simPoTS). We give two different constructions of simPoTS, one based on labelling graphs with high pebbling complexity, a technique used in the construction of memory-hard functions and proofs of space, and a more practical construction based on the verifiable space-hard functions from TCC'24 where a prover must compute a root of a sparse polynomial. In both cases, the main challenge is making the proofs efficiently simulatable.
2025
TCC
Special Genera of Hermitian Lattices and Applications to HAWK
Abstract
In its decisional form, the module-Lattice Isomorphism Problem (decision module-LIP) has recently received first attention in a paper by Ling, Liu and Mendelsohn. The authors gave a polynomial-time algorithm to distinguish spinor genera within the genus of a quadratic binary $\mathcal{O}_F$-lattice, assuming that $\mathcal{O}_F$ is a principal ideal domain. However, this algorithm would not impact cryptographic schemes based on decision module-LIP for lattices such as those employed in HAWK, i.e., for binary $\mathcal{O}_K$-lattices equipped with a Hermitian form (with $K$ a cyclotomic number field). Motivated by HAWK's framework, we investigate a concept that serves as an analogue of the spinor genus for Hermitian lattices, called the special genus. This notion was studied by Shimura, who provided a complete set of invariants for describing special genera. Building on this result, we propose an algorithm to determine whether two Hermitian lattices belong to the same special genus. Specifically for HAWK's lattice and its siblings, our algorithm runs in classical polynomial time. Nevertheless, we provide numerical evidence suggesting that the ability to distinguish special genera does not, in practice, constitute a significant advantage for solving decision module-LIP.
2025
TCC
The Pseudorandomness of Legendre Symbols under the Quadratic-Residuosity Assumption
Abstract
The Legendre signature of an integer $x$ modulo a prime~$p$ with respect to offsets $\vec a = (a_1, \dots, a_\ell)$ is the string of Legendre symbols $(\frac{x+a_1}{p}), \dots, (\frac{x+a_\ell}{p})$. Under the quadratic-residuosity assumption, we show that the function that maps the pair $(x,p)$ to the Legendre signature of $x$ modulo $p$, with respect to public random offsets $\vec a$, is a pseudorandom generator. Our result applies to cryptographic settings in which the prime modulus $p$ is secret; the result does not extend to the case—common in applications—in which the modulus $p$ is public. At the same time, this paper is the first to relate the pseudorandomness of Legendre symbols to any pre-existing cryptographic assumption.
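The following minimal Python sketch (with toy, insecure parameters) computes the Legendre signature from the abstract; the paper's PRG interpretation additionally requires the prime $p$ to stay secret.

def legendre(a, p):
    # Legendre symbol via Euler's criterion: a^((p-1)/2) mod p.
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def legendre_signature(x, p, offsets):
    # Map symbols {1, -1} to bits {0, 1}; the symbol 0 occurs with
    # negligible probability for random x and is ignored in this toy.
    return [0 if legendre(x + a, p) == 1 else 1 for a in offsets]

p = 1000003                              # toy secret prime (far too small)
x = 424242                               # secret seed
offsets = [1, 2, 3, 5, 8, 13, 21, 34]    # public random offsets in practice
print(legendre_signature(x, p, offsets))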
2025
TCC
Threshold Signatures from One-Way Functions
Abstract
A threshold signature allows a signer to delegate their signing rights to $n$ parties, such that any subset of size $t$ can sign a message on their behalf. In this work, we show how to construct threshold signatures for any $t$ and $n$ from one-way functions, thus establishing the latter as a necessary and sufficient computational assumption. Our protocol makes non-black-box use of one-way functions, and can be generalized to other access structures, such as monotone policies.
2025
TCC
Time-Space Trade-Offs for Sumcheck
Abstract
The sumcheck protocol is a fundamental building block in the design of probabilistic proof systems, and has become a key component of recent work on efficient succinct arguments.
We study time-space trade-offs for the prover of the sumcheck protocol in the streaming model, and provide upper and lower bounds that tightly characterize the efficiency achievable by the prover.
For sumcheck claims about a single multilinear polynomial we demonstrate an algorithm that runs in time $O(kN)$ and uses space $O(N^{1/k})$ for any $k \geq 1$. For non-adaptive provers (a class which contains all known sumcheck prover algorithms) we show this trade-off is optimal.
For sumcheck claims about products of multilinear polynomials, we describe a prover algorithm that runs in time $O(N(\log \log N + k))$ and uses space $O(N^{1/k})$ for any $k \geq 1$. We show that, conditioned on the hardness of a natural problem about multiplication of multilinear polynomials, any "natural" prover algorithm that uses space $O(N^{1/2 - \varepsilon})$ for some $\varepsilon > 0$ must run in time $\Omega(N(\log \log N + \log \varepsilon))$.
We implement and evaluate the prover algorithm for products of multilinear polynomials. We show that our algorithm consumes up to $120\times$ less memory compared to the linear-time prover algorithm, while incurring a time overhead of less than $2\times$.
The foregoing algorithms and lower bounds apply in the interactive proof model. We show that in the polynomial interactive oracle proof model one can in fact design a new protocol that achieves a better time-space trade-off of $O(N^{1/k})$ space and $O(N(\log^* N + k))$ time for any $k \geq 1$.
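For reference, here is the standard linear-time sumcheck prover for a single multilinear polynomial as a minimal Python sketch; it keeps the full $O(N)$ evaluation table in memory, which is exactly the cost the streaming provers in this work reduce to $O(N^{1/k})$ space. Challenges are supplied up front for brevity; in the protocol the verifier sends them adaptively.

P = 2**61 - 1  # a Mersenne-prime field, a common implementation choice

def sumcheck_prover(evals, challenges):
    # evals: values of f on {0,1}^n in lexicographic order, with the
    # current round's variable as the high-order bit.
    table = list(evals)
    rounds = []
    for r in challenges:
        half = len(table) // 2
        g0 = sum(table[:half]) % P       # g_j(0): variable fixed to 0
        g1 = sum(table[half:]) % P       # g_j(1): variable fixed to 1
        rounds.append((g0, g1))
        # Fold: bind the variable to r by multilinear interpolation.
        table = [(table[i] + r * (table[half + i] - table[i])) % P
                 for i in range(half)]
    return rounds, table[0]              # final value f(r_1, ..., r_n)

evals = [3, 1, 4, 1, 5, 9, 2, 6]         # f on {0,1}^3
rounds, final = sumcheck_prover(evals, [7, 11, 13])
print(rounds[0][0] + rounds[0][1] == sum(evals) % P)  # the claimed sum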
2025
TCC
Time/Space Tradeoffs for Generic Attacks on Delay Functions
Abstract
Delay functions have the goal of being inherently slow to compute.
They have been shown to be useful for generating public randomness in distributed systems in the presence of a dishonest majority of network participants, a task that is impossible to solve without such functions due to Cleve's seminal impossibility result (STOC 1986).
Currently, little is known about how to construct secure delay functions or how to analyze the security of newly proposed candidate constructions.
In this work, we explore the time/space tradeoffs of generic attacks for a large class of potential delay function designs.
We consider delay functions $F^T$, which are computed as
\[
F^T(x) := \underbrace{F(F(\cdots F}_{T \text{ times}}(x)\cdots)),
\]
where $F : [N] \to [N]$ is some round function, in the presence of an adversary who is given an advice string of some bounded size, has oracle access to $F$, and would like to compute $F^T$ on a random input $x$ using fewer than $T$ sequential oracle calls.
We show that for both random and arbitrary functions $F$ there exist non-trivial adversaries who successfully evaluate $F^T$ using only $T/4$ sequential calls to the oracle $F$, when given a large enough advice string.
We also show that there exist round functions $F$ for which the adversary cannot compute $F^T$ using less than $T/2$ sequential queries, unless they receive a large advice string or they can perform a large number of oracle queries to $F$ in parallel.
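The following Python toy (with hypothetical parameters) illustrates the flavor of such time/space tradeoffs: an adversary holding a precomputed table of $F^{T/2}$ evaluates $F^T$ with no sequential oracle calls at all. The paper's attacks are far more refined, achieving $T/4$ sequential calls with bounded advice.

N, T = 1024, 256

def F(x):
    return (x * x + 1) % N               # arbitrary toy round function on [N]

def iterate(x, t):
    for _ in range(t):                   # honest evaluation: t sequential calls
        x = F(x)
    return x

# Advice: the full table of F^{T/2}, computed offline (N words of space).
advice = {x: iterate(x, T // 2) for x in range(N)}

x = 123
fast = advice[advice[x]]                 # F^T(x) via two lookups, 0 online calls
print(fast == iterate(x, T))             # True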
2025
TCC
Unconditional foundations for supersingular isogeny-based cryptography
Abstract
In this paper, we prove that the supersingular isogeny problem (Isogeny), endomorphism ring problem (EndRing) and maximal order problem (MaxOrder) are equivalent under probabilistic polynomial time reductions, unconditionally.
Isogeny-based cryptography is founded on the presumed hardness of these problems, and their interconnection is at the heart of the design and analysis of cryptosystems like the SQIsign digital signature scheme. Previously known reductions relied on unproven assumptions such as the generalized Riemann hypothesis. In this work, we present unconditional reductions, and extend this network of equivalences to the problem of computing the lattice of all isogenies between two supersingular elliptic curves (HomModule).
For cryptographic applications, one requires computational problems to be hard on average for random instances. It is well-known that if Isogeny is hard (in the worst case), then it is hard for random instances. We extend this result by proving that if any of the above-mentioned classical problems is hard in the worst case, then all of them are hard on average. In particular, if there exist hard instances of Isogeny, then all of Isogeny, EndRing, MaxOrder and HomModule are hard on average.
2025
TCC
Universally Composable Succinct Vector Commitments and Applications
Abstract
We develop a toolbox for modular construction and analysis of succinct, non-interactive commitments and vector commitments in the random oracle model, while guaranteeing universally composable security. To demonstrate its power, we use the toolbox to construct and analyze a modular variant of the Kilian-Micali ZK-SNARK.
Along the way, we also propose a new UC formulation of a global random oracle that avoids a weakness in existing formulations and also enables expressing more nuanced, session-specific abstractions.
We hope that this toolbox will be useful for building secure applications in settings where both succinctness and non-interactivity are key.
2025
TCC
Untelegraphable Encryption and its Applications
Abstract
We initiate the study of untelegraphable encryption (UTE), founded on the no-telegraphing principle, which allows an encryptor to encrypt a message such that a binary string representation of the ciphertext cannot be decrypted by a user with the secret key, a task that is classically impossible. This is a natural relaxation of unclonable encryption (UE), inspired by the recent work of Nehoran and Zhandry (ITCS 2024), who showed a computational separation between the no-cloning and no-telegraphing principles.
In this work, we define and construct UTE information-theoretically in the plain model. Building off this, we give several applications of UTE and study the interplay of UTE with UE and well-studied tasks in quantum state learning, yielding the following contributions:
- A construction of collusion-resistant UTE from standard secret-key encryption (SKE). We additionally show that hyper-efficient shadow tomography (HEST) is impossible assuming collusion-resistant UTE exists. By considering a relaxation of collusion-resistant UTE, we are able to show the impossibility of HEST assuming only pseudorandom state generators (which may not imply one-way functions). This almost unconditionally answers an open inquiry of Aaronson (STOC 2018).
- A construction of UTE from a quasi-polynomially secure one-shot message authentication code (OSMAC) in the classical oracle model, such that there is an explicit attack that breaks UE security for an unbounded polynomial number of decryptors.
- A construction of everlasting secure collusion-resistant UTE, where the decryptor adversary can run in unbounded time, in the quantum random oracle model (QROM), and formal evidence that a construction in the plain model is a challenging task. We additionally show that HEST with unbounded post-processing time (which we call weakly-efficient shadow tomography) is impossible assuming everlasting secure collusion-resistant UTE exists.
- A construction of secret sharing for all polynomial-size policies that is resilient to joint and unbounded classical leakage from collusion-resistant UTE and classical secret sharing for all policies.
- A construction (and definition) of collusion-resistant untelegraphable secret-key functional encryption (UTSKFE) from single-decryptor functional encryption and plain secret-key functional encryption, and a construction of collusion-resistant untelegraphable public-key functional encryption from UTSKFE, plain SKE, and plain public-key functional encryption.
2025
TCC
Vive Galois! Part 1: Optimal SIMD Packing and Packed Bootstrapping for FHE
Abstract
The vast majority of work on the efficiency of lattice-based cryptography, including fully homomorphic encryption~(FHE), has relied on *cyclotomic* number fields and rings.
This is because cyclotomics offer a wide variety of benefits, including good geometrical properties, fast ring arithmetic, and rich homomorphic operations like vectorized (SIMD) operations on "packed" plaintexts, automorphisms, and ring-switching.
However, selecting a suitable cyclotomic that has the desired number and type of plaintext "slots," while also balancing security and efficiency, is a highly constrained problem that often lacks an ideal solution, resulting in wasted SIMD capacity and lost efficiency.
This work provides a suite of tools for instantiating ring-based lattice cryptography to work over *subfields* of cyclotomics, which provide more flexibility and better-fitting parameters for applications.
A particular focus is on realizing FHE with *optimal plaintext packing* and homomorphic SIMD parallelism for *any* plaintext characteristic, along with efficient *packed bootstrapping* that fully exploits this parallelism.
Toward this end, this (two-part) work makes the following main technical contributions, all of which are catalyzed by Galois theory:
* For sampling and decoding errors in encryption and decryption (respectively), we construct *geometrically short, structured bases* for the number rings of arbitrary subfields of prime-power cyclotomics (and hence their composites as well).
* For fast ring arithmetic, we define and establish analogous structural properties for Chinese Remainder Theorem~(CRT) bases in *abelian* number rings, and give *specialized fast transforms* that map between CRT bases and any similarly structured bases.
* For packed bootstrapping and homomorphic linear algebra, we give a general framework for *homomorphic evaluation of structured linear transforms* in abelian number rings, and show that CRT transforms can be evaluated using relatively few homomorphic operations.
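As background for the SIMD-packing theme above, the following Python toy illustrates the CRT idea in the simplest possible setting (integers rather than number rings): several plaintext slots are packed into one ring element, and a single ring multiplication acts on all slots at once.

from math import prod

primes = [7, 11, 13]            # toy slot moduli; pairwise coprime
M = prod(primes)                # the "packed" ring Z_M

def pack(slots):
    # CRT: find x mod M with x = slots[i] mod primes[i].
    x = 0
    for s, p in zip(slots, primes):
        q = M // p
        x = (x + s * q * pow(q, -1, p)) % M
    return x

def unpack(x):
    return [x % p for p in primes]

a, b = pack([1, 2, 3]), pack([4, 5, 6])
print(unpack((a * b) % M))      # [4, 10, 18]: one multiplication, all slots at once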
2025
TCC
Zeroizing Attacks against Evasive and Circular Evasive LWE
Abstract
We develop new attacks against the Evasive LWE family of assumptions, in both the public and private-coin regimes. To the best of our knowledge, ours are the ``first'' attacks against Evasive LWE in the ``public-coin'' regime, for any instantiation from the family. Our attacks are summarized below.
Public-Coin Attacks.
1. The recent work by Hsieh, Lin and Luo [HLL23] constructed the first Attribute Based Encryption (ABE) for unbounded depth circuits by relying on the ``circular'' evasive LWE assumption. This assumption has popularly been considered a safe, public-coin instance of Evasive LWE, in contrast to its ``private-coin'' cousins (for instance, see [CW25, DJM+25a]).
We provide the first attack against this assumption, challenging the widely held belief that this is a public-coin assumption.
2. We demonstrate a counter-example against vanilla public-coin evasive LWE by Wee [Wee22] in an unnatural parameter regime. Our attack crucially relies on the error in the pre-condition being larger than the error in the post-condition, necessitating a refinement of the assumption.
Private-Coin Attacks.
1. The recent work by Agrawal, Kumari and Yamada [AKY24a] constructed the first functional encryption scheme for pseudorandom functionalities (PRFE) and extended this to obfuscation for pseudorandom functionalities (PRIO) [AKY24c] by relying on private-coin evasive LWE. We provide a new attack against the assumption stated in the first posting of their work (subsequently refined to avoid these attacks).
2. The recent work by Branco et al. [BDJ+24] (concurrently to [AKY24c]) provides a construction of obfuscation for pseudorandom functionalities by relying on private-coin evasive LWE. We provide a new attack against their stated assumption.
3. Branco et al. [BDJ+24] showed that there exist contrived, ``self-referential'' classes of pseudorandom functionalities for which pseudorandom obfuscation cannot exist. We extend their techniques to develop an analogous result for pseudorandom functional encryption.
While Evasive LWE was developed to specifically avoid ``zeroizing attacks'', our work shows that in certain settings, such attacks can still apply.