International Association for Cryptologic Research

CryptoDB

Nir Bitansky

Publications

Year
Venue
Title
2023
CRYPTO
Non-interactive Universal Arguments
In 2002, Barak and Goldreich introduced the notion of a {\em universal argument} and constructed an interactive universal argument for non-deterministic computations based on polynomially hard collision-resistant hash functions. Since then, and especially in recent years, there have been tremendous developments in the construction of {\em non-interactive} succinct arguments for deterministic computations under standard hardness assumptions. However, the constructed succinct arguments can be proven universal only under {\em sub-exponential} assumptions. Assuming {\em polynomially hard} fully homomorphic encryption and a widely believed worst-case complexity assumption, we prove a general lifting theorem showing that all existing non-interactive succinct arguments can be made universal. The required complexity assumption is that non-uniformity does not allow arbitrary polynomial speedup. In the setting of uniform adversaries, this extra assumption is not needed.
2022
EUROCRYPT
Non-malleable Commitments Against Quantum Attacks
Nir Bitansky, Huijia Lin, Omri Shmueli
We construct, under standard hardness assumptions, the first non-malleable commitments secure against quantum attacks. Our commitments are statistically binding and satisfy the standard notion of {\em non-malleability with respect to commitment}. We obtain a $\log^\star(\lambda)$-round classical protocol, assuming the existence of post-quantum one-way functions. Previously, non-malleable commitments with quantum security were only known against a restricted class of adversaries known as {\em synchronizing adversaries}. At the heart of our results is a new general technique that allows us to modularly obtain non-malleable commitments from any extractable commitment protocol, independently of the underlying extraction strategy (black-box or non-black-box) and of its round complexity. The transformation may also be of interest in the classical setting.
2022
CRYPTO
Constructive Post-Quantum Reductions
Is it possible to convert classical reductions into post-quantum ones? It is customary to argue that while this is problematic in the interactive setting, non-interactive reductions do carry over. However, when considering quantum auxiliary input, this conversion results in a *non-constructive* post-quantum reduction that requires duplicating the quantum auxiliary input, which is in general inefficient or even impossible. This violates the win-win premise of provable cryptography: an attack against a cryptographic primitive should lead to an algorithmic advantage. We initiate the study of constructive quantum reductions and present positive and negative results for converting large classes of classical reductions to the post-quantum setting in a constructive manner. We show that any non-interactive non-adaptive reduction from assumptions with a polynomial solution space (such as decision assumptions) can be made post-quantum constructive. In contrast, assumptions with super-polynomial solution space (such as general search assumptions) cannot be generally converted. Along the way, we make several additional contributions:
1. We put forth a framework for reductions (or general interaction) with *stateful* solvers for a computational problem, that may change their internal state between consecutive calls. We show that such solvers can still be utilized. This framework and our results are meaningful even in the classical setting.
2. A consequence of our negative result is that quantum auxiliary input that is useful against a problem with a super-polynomial solution space cannot be generically ``restored'' post-measurement. This shows that the novel rewinding technique of Chiesa et al. (FOCS 2021) is tight in the sense that it cannot be extended beyond a polynomial measurement space.
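As a concrete reference point for the stateful-solver framework of contribution 1, here is a minimal Python sketch of such an interface; the class names and the toy budget-limited solver are illustrative assumptions of ours, not objects defined in the paper.

```python
# Minimal sketch (names ours, not the paper's) of a stateful solver whose
# internal state may change between consecutive calls, interacting with a
# non-adaptive reduction.
from abc import ABC, abstractmethod

class StatefulSolver(ABC):
    @abstractmethod
    def solve(self, instance):
        """Return a candidate solution; may mutate internal state."""

class BudgetedSolver(StatefulSolver):
    # Toy example: usefulness is consumed by calls, loosely modeling quantum
    # auxiliary input that degrades when measured.
    def __init__(self, oracle, budget):
        self.oracle, self.budget = oracle, budget

    def solve(self, instance):
        if self.budget <= 0:
            return None  # state exhausted; no further guarantee
        self.budget -= 1
        return self.oracle(instance)

def non_adaptive_reduction(instances, solver):
    # Non-adaptive: the query list is fixed in advance, then asked in order.
    return [solver.solve(x) for x in instances]

solver = BudgetedSolver(oracle=lambda x: x * x, budget=2)
print(non_adaptive_reduction([2, 3, 4], solver))  # [4, 9, None]
```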
2022
CRYPTO
Statistically Sender-Private OT From LPN and Derandomization
Nir Bitansky, Sapir Freizeit
We construct a two-message oblivious transfer protocol with statistical sender privacy (SSP OT) based on the Learning Parity with Noise (LPN) Assumption and a standard Nisan-Wigderson style derandomization assumption. Beyond being of interest on their own, SSP OT protocols have proven to be a powerful tool toward minimizing the round complexity in a wide array of cryptographic applications, from proof systems, through secure computation protocols, to hard problems in statistical zero knowledge (SZK). The protocol is plausibly post-quantum secure. The only other constructions with plausible post-quantum security are based on the Learning with Errors (LWE) Assumption. Lacking the geometric structure of LWE, our construction and analysis rely on a different set of techniques. Technically, we first construct an SSP OT protocol in the common random string model from LPN alone, and then derandomize the common random string. Most of the technical difficulty lies in the first step. Here we prove a robustness property of the inner-product randomness extractor against a certain type of linear splitting attacks. A caveat of our construction is that it relies on the so-called low-noise regime of LPN. This aligns with our current complexity-theoretic understanding of LPN, which is known to imply hardness in SZK only in the low-noise regime.
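For readers less familiar with the two ingredients named above, the following toy Python sketch shows the LPN sample distribution and the inner-product extractor; parameters and helper names are illustrative assumptions, and this is not the paper's protocol.

```python
# Toy sketch (ours) of the LPN sample distribution and the inner-product
# extractor whose robustness to linear splitting attacks the paper analyzes.
import secrets

def rand_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

def bernoulli(eps):
    # Noise bit equal to 1 with probability eps; "low noise" means small eps.
    return 1 if secrets.randbelow(10**6) < eps * 10**6 else 0

def lpn_sample(s, eps):
    # One sample (a, <a, s> + e mod 2); LPN asserts that many such samples
    # are hard to distinguish from uniform.
    a = rand_bits(len(s))
    b = (sum(x * y for x, y in zip(a, s)) + bernoulli(eps)) % 2
    return a, b

def inner_product_extract(x, r):
    # The inner-product randomness extractor over GF(2).
    return sum(a * b for a, b in zip(x, r)) % 2

s = rand_bits(128)
a, b = lpn_sample(s, eps=1 / 128)  # illustrative low-noise rate
```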
2022
TCC
PPAD is as Hard as LWE and Iterated Squaring
One of the most fundamental results in game theory is that every game has a Nash equilibrium, an assignment of (randomized) strategies to players with the stability property that no individual player can benefit from deviating from the assigned strategy. It is not known how to efficiently *compute* such a Nash equilibrium --- the computational complexity of this task is characterized by the class PPAD, but the relation of PPAD to other problems and well-known complexity classes is not precisely understood. In recent years there has been mounting evidence, based on cryptographic tools and techniques, showing the hardness of PPAD. We continue this line of research by showing that PPAD is as hard as *learning with errors* (LWE) and the *iterated squaring* (IS) problem, two standard problems in cryptography. Our work improves over prior hardness results, which relied either on (1) sub-exponential assumptions or (2) ``obfustopia,'' which can currently be based on a particular combination of three assumptions. Our work additionally establishes *public-coin* hardness for PPAD (computational hardness for a publicly sampleable distribution of instances), which seems out of reach of the obfustopia approach. Following the work of Choudhuri et al. (STOC 2019) and subsequent works, our hardness result is obtained by constructing an *unambiguous and incrementally-updateable* succinct non-interactive argument for IS, whose soundness relies on polynomial hardness of LWE. The result also implies a verifiable delay function with unique proofs, which may be of independent interest.
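The iterated squaring problem underlying the result is simple to state in code. The sketch below, a toy with insecure parameters, shows the sequential computation and the trapdoor shortcut; unique proofs of this computation give the verifiable delay function mentioned in the abstract.

```python
# Iterated squaring (IS) in code (toy, insecure parameters): computing
# x^(2^T) mod N is conjectured to require T sequential squarings unless one
# knows the factorization of N, which yields a fast shortcut via phi(N).
def iterated_square(x: int, T: int, N: int) -> int:
    for _ in range(T):     # the slow, sequential path
        x = (x * x) % N
    return x

def iterated_square_trapdoor(x: int, T: int, N: int, phi: int) -> int:
    # Fast path: x^(2^T) = x^(2^T mod phi(N)) mod N when gcd(x, N) = 1.
    return pow(x, pow(2, T, phi), N)

p, q = 1000003, 1000033    # toy primes (insecure)
N, phi = p * q, (p - 1) * (q - 1)
assert iterated_square(5, 1000, N) == iterated_square_trapdoor(5, 1000, N, phi)
```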
2022
JOFC
A Note on Perfect Correctness by Derandomization
Nir Bitansky, Vinod Vaikuntanathan
We show a general compiler that transforms a large class of erroneous cryptographic schemes (such as public-key encryption, indistinguishability obfuscation, and secure multiparty computation schemes) into perfectly correct ones. The transformation works for schemes that are correct on all inputs with probability noticeably larger than half, and are secure under parallel repetition. We assume the existence of one-way functions and of functions with deterministic (uniform) time complexity $2^{O(n)}$ and non-deterministic circuit complexity $2^{\Omega(n)}$. Our transformation complements previous results showing that public-key encryption and indistinguishability obfuscation that err on a noticeable fraction of inputs can be turned into ones that are correct for all inputs with high probability; we show that such schemes can in fact be made perfectly correct. The technique relies on the idea of “reverse randomization” [Naor, Crypto 1989] and on Nisan–Wigderson style derandomization, previously used in cryptography to remove interaction from witness-indistinguishable proofs and commitment schemes [Barak, Ong and Vadhan, Crypto 2003].
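The compiler's starting point, correctness noticeably above one half amplified by parallel repetition, can be illustrated with a short toy sketch; the `noisy_decrypt` stand-in below is hypothetical, and the paper then removes the remaining randomness via Nisan–Wigderson style derandomization rather than leaving the scheme probabilistic.

```python
# Toy sketch: a scheme correct on every input with probability 1/2 + delta,
# amplified by parallel repetition and a majority vote.
import secrets
from collections import Counter

def noisy_decrypt(bit: int, delta: float = 0.1) -> int:
    # Stand-in erroneous scheme: returns the right bit w.p. 1/2 + delta.
    ok = secrets.randbelow(10**6) < (0.5 + delta) * 10**6
    return bit if ok else 1 - bit

def amplified_decrypt(bit: int, k: int = 101) -> int:
    # Error probability drops to exp(-Omega(delta^2 * k)) by Chernoff.
    votes = Counter(noisy_decrypt(bit) for _ in range(k))
    return votes.most_common(1)[0][0]

assert amplified_decrypt(1) in (0, 1)  # equals 1 except with tiny probability
```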
2022
JOFC
Succinct Non-Interactive Arguments via Linear Interactive Proofs
Succinct non-interactive arguments (SNARGs) enable verifying NP statements with lower complexity than required for classical NP verification. Traditionally, the focus has been on minimizing the length of such arguments; nowadays, researchers have focused also on minimizing verification time, by drawing motivation from the problem of delegating computation. A common relaxation is a preprocessing SNARG, which allows the verifier to conduct an expensive offline phase that is independent of the statement to be proven later. Recent constructions of preprocessing SNARGs have achieved attractive features: they are publicly-verifiable, proofs consist of only O(1) encrypted (or encoded) field elements, and verification is via arithmetic circuits of size linear in the NP statement. Additionally, these constructions seem to have “escaped the hegemony” of probabilistically-checkable proofs (PCPs) as a basic building block of succinct arguments. We present a general methodology for the construction of preprocessing SNARGs, as well as resulting new efficiency features. Our contribution is threefold:
(1) We introduce and study a natural extension of the interactive proof model that considers algebraically-bounded provers; this new setting is analogous to the common study of algebraically-bounded “adversaries” in other fields, such as pseudorandomness and randomness extraction. More concretely, in this work we focus on linear (or affine) provers, and provide several constructions of (succinct two-message) linear interactive proofs (LIPs) for NP. Our constructions are based on general transformations applied to both linear PCPs (LPCPs) and traditional “unstructured” PCPs.
(2) We give conceptually simple cryptographic transformations from LIPs to preprocessing SNARGs, whose security can be based on different forms of linear targeted malleability (implied by previous knowledge assumptions). Our transformations convert arbitrary (two-message) LIPs into designated-verifier SNARGs, and LIPs with degree-bounded verifiers into publicly-verifiable SNARGs. We also extend our methodology to obtain zero-knowledge LIPs and SNARGs. Our techniques yield SNARGs of knowledge and thus can benefit from known recursive composition and bootstrapping techniques.
(3) Following this methodology, we exhibit several constructions achieving new efficiency features, such as “single-ciphertext preprocessing SNARGs.” We also offer a new perspective on existing constructions of preprocessing SNARGs, revealing a direct connection of these to LPCPs and LIPs.
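The linear interactive proof model of contribution (1) is easy to render as an interface: the prover is restricted to answering queries by a fixed linear function over a finite field. The sketch below is a minimal illustration of that restriction, with a toy field and names of our choosing, not the paper's constructions.

```python
# Illustrative model (ours) of a one-round linear interactive proof: the
# prover's strategy is a fixed linear function pi over a finite field, so
# its answer to a query vector q can only be <pi, q>.
P = 2**61 - 1  # toy prime field modulus

class LinearProver:
    def __init__(self, pi):
        self.pi = pi  # proof vector; answers are forced to be linear in q

    def answer(self, queries):
        return [sum(p * q for p, q in zip(self.pi, qv)) % P for qv in queries]

# Any linear prover's answers respect linear relations among the queries,
# which the cryptographic compilers to preprocessing SNARGs enforce via
# linear targeted malleability.
prover = LinearProver([3, 1, 4, 1, 5])
q = [2, 7, 1, 8, 2]
a1, a2 = prover.answer([q, [2 * x % P for x in q]])
assert a2 == 2 * a1 % P
```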
2021
TCC
Post-quantum Resettably-Sound Zero Knowledge
We study post-quantum zero-knowledge (classical) protocols that are sound against quantum resetting attacks. Our model is inspired by the classical model of resetting provers (Barak-Goldreich-Goldwasser-Lindell, FOCS ‘01), providing a malicious efficient prover with oracle access to the verifier’s next-message function, fixed to some initial random tape, thereby allowing it to effectively reset (or equivalently, rewind) the verifier. In our model, the prover has quantum access to the verifier’s function, and in particular can query it in superposition. The motivation behind quantum resettable soundness is twofold: first, ensuring a strong security guarantee in scenarios where quantum resetting may be possible (e.g., smart cards, or virtual machines); second, drawing intuition from the classical setting, we hope to improve our understanding of basic questions regarding post-quantum zero knowledge. We prove the following results:
– Black-Box Barriers. Quantum resetting exactly captures the power of black-box zero-knowledge quantum simulators. Accordingly, resettable soundness cannot be achieved in conjunction with black-box zero knowledge, except for languages in BQP. Leveraging this, we prove that constant-round public-coin, or three-message, protocols cannot be black-box post-quantum zero-knowledge. For this, we show how to transform such protocols into quantumly resettably sound ones. The transformations are similar to classical ones, but their analysis is very different due to the essential difference between classical and quantum resetting.
– A Resettably-Sound Non-Black-Box Zero-Knowledge Protocol. Under the (quantum) Learning with Errors assumption and quantum fully-homomorphic encryption, we construct a post-quantum resettably sound zero-knowledge protocol for NP. We rely on non-black-box simulation techniques, thus overcoming the black-box barrier for such protocols.
– From Resettable Soundness to the Impossibility of Quantum Obfuscation. Assuming one-way functions, we prove that any quantumly resettably-sound zero-knowledge protocol for NP implies the impossibility of quantum obfuscation. Combined with the above result, this gives an alternative proof to several recent results on quantum unobfuscatability.
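The classical resetting model that the paper lifts to the quantum setting can be sketched in a few lines: once the random tape is fixed, the verifier's next-message function is deterministic, so oracle access to it lets the prover rewind at will. The hash-based verifier below is a hypothetical stand-in, not a protocol from the paper.

```python
# Sketch of the classical resetting model. In the paper's quantum model the
# prover may additionally query this function in superposition.
import hashlib, secrets

random_tape = secrets.token_bytes(32)  # fixed once, before any queries

def verifier_next_message(transcript: bytes) -> bytes:
    # Deterministic in (tape, transcript).
    return hashlib.sha256(random_tape + transcript).digest()

# A resetting prover explores two branches from the same verifier state:
m_a = verifier_next_message(b"prover-message-A")
m_b = verifier_next_message(b"prover-message-B")  # effectively a rewind
```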
2021
TCC
Classical Binding for Quantum Commitments
Nir Bitansky, Zvika Brakerski
In classical commitments, statistical binding means that for almost any commitment transcript there is at most one possible opening. While quantum commitments (for classical messages) sometimes have benefits over their classical counterparts (e.g., in terms of assumptions), they provide a weaker notion of binding: essentially, that the sender cannot open a given commitment to a random value with probability noticeably greater than 1/2. We introduce a notion of classical binding for quantum commitments which provides guarantees analogous to the classical case. In our notion, the receiver performs a (partial) measurement of the quantum commitment string, and the outcome of this measurement determines a single value that the sender may open. We expect that our notion can replace classical commitments in various settings, leaving the security proof essentially unchanged. As an example, we show a soundness proof for the GMW zero-knowledge proof system. We construct a non-interactive quantum commitment scheme which is classically statistically-binding and has a classical opening, based on the existence of any post-quantum one-way function. Prior candidates had inherently quantum openings and were not classically binding. In contrast, we show that it is impossible to achieve classical binding for statistically hiding commitments, regardless of assumption or round complexity. Our scheme is simply Naor's commitment scheme (which classically requires a common random string, CRS), but executed in superposition over all possible values of the CRS, and repeated several times. We hope that this technique for using quantum communication to remove a CRS may find other uses.
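For reference, here is a toy transcription of Naor's classical commitment scheme, the scheme that the construction runs in superposition over the CRS; the hash-based PRG is an insecure stand-in for illustration only.

```python
# Toy transcription (insecure, illustrative) of Naor's statistically binding
# bit commitment in the CRS model.
import hashlib, secrets

N = 16  # seed length in bytes; the PRG stretches N bytes to 3N bytes

def prg(seed: bytes) -> bytes:
    out = b"".join(hashlib.sha256(seed + bytes([i])).digest() for i in range(3))
    return out[:3 * N]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

crs = secrets.token_bytes(3 * N)  # receiver's common random string

def commit(b: int):
    s = secrets.token_bytes(N)
    c = prg(s) if b == 0 else xor(prg(s), crs)
    return c, (b, s)  # commitment string, opening

def verify(c: bytes, opening) -> bool:
    b, s = opening
    return c == (prg(s) if b == 0 else xor(prg(s), crs))

# Statistical binding: a cheating opening requires crs = prg(s0) XOR prg(s1),
# which a random 3N-byte crs hits only with negligible probability.
c, op = commit(1)
assert verify(c, op)
```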
2020
TCC
Weakly Extractable One-Way Functions
A family of one-way functions is extractable if, given a random function in the family, an efficient adversary can only output an element in the image of the function if it knows a corresponding preimage. This knowledge extraction guarantee is particularly powerful since it does not require interaction. However, extractable one-way functions (EFs) are subject to a strong barrier: assuming indistinguishability obfuscation, no EF can have a knowledge extractor that works against all polynomial-size non-uniform adversaries. This holds even for non-black-box extractors that use the adversary's code. Accordingly, the literature considers either EFs based on non-falsifiable knowledge assumptions, where the extractor is not explicitly given but only assumed to exist, or EFs against a restricted class of adversaries with bounded non-uniform advice. This falls short of cryptography's gold standard of security, which requires an explicit reduction against non-uniform adversaries of arbitrary polynomial size. Motivated by this gap, we put forward a new notion of weakly extractable one-way functions (WEFs) that circumvents the known barrier. We then prove that WEFs are inextricably connected to the long-standing question of three-message zero-knowledge protocols. We show that different flavors of WEFs are sufficient and necessary for three-message zero knowledge to exist. The exact flavor depends on whether the protocol is computational or statistical zero knowledge and whether it is publicly or privately verifiable. Combined with recent progress on constructing three-message zero knowledge, we derive a new connection between keyless multi-collision resistance, the notion of incompressibility, and the feasibility of non-interactive knowledge extraction. Another interesting corollary of our result is that, in order to construct three-message zero-knowledge arguments, it suffices to construct such arguments where the honest prover strategy is unbounded.
2019
EUROCRYPT
Distributional Collision Resistance Beyond One-Way Functions
Distributional collision resistance is a relaxation of collision resistance that only requires that it is hard to sample a collision (x, y) where x is uniformly random and y is uniformly random conditioned on colliding with x. The notion lies between one-wayness and collision resistance, but its exact power is still not well-understood. On one hand, distributional collision resistant hash functions cannot be built from one-way functions in a black-box way, which may suggest that they are stronger. On the other hand, so far, they have not yielded any applications beyond one-way functions. Assuming distributional collision resistant hash functions, we construct a constant-round statistically hiding commitment scheme. Such commitments are not known based on one-way functions and are impossible to obtain from one-way functions in a black-box way. Our construction relies on the reduction from inaccessible entropy generators to statistically hiding commitments by Haitner et al. (STOC ’09). In the converse direction, we show that two-message statistically hiding commitments imply distributional collision resistance, thereby establishing a loose equivalence between the two notions. A corollary of the first result is that constant-round statistically hiding commitments are implied by average-case hardness in the class $\textsf{SZK}$ (which is known to imply distributional collision resistance). This implication seems to be folklore, but to the best of our knowledge has not been proven explicitly. We provide yet another proof of this implication, which is arguably more direct than the one going through distributional collision resistance.
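The target distribution in the definition, x uniform and y uniform among x's colliders, can be sampled by brute force on a tiny domain, which may help make the notion concrete; the compressing function h below is a toy of ours, not from the paper.

```python
# Brute-force illustration (tiny domain only) of the distribution that is
# hard to sample under distributional collision resistance.
import secrets

def dcr_sample(h, domain):
    x = secrets.choice(domain)
    colliders = [y for y in domain if h(y) == h(x)]  # includes x itself
    y = secrets.choice(colliders)  # uniform conditioned on h(y) == h(x)
    return x, y

h = lambda v: v % 37  # toy compressing function on 8-bit inputs
x, y = dcr_sample(h, list(range(256)))
assert h(x) == h(y)
```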
2019
CRYPTO
On Round Optimal Statistical Zero Knowledge Arguments
Nir Bitansky, Omer Paneth
We construct the first three-message statistical zero-knowledge arguments for all of NP, matching the known lower bound. We do so based on keyless multi-collision resistant hash functions and the Learning with Errors assumption—the same assumptions used to obtain round optimal computational zero knowledge. The main component in our construction is a statistically witness indistinguishable argument of knowledge based on a new notion of statistically hiding commitments with subset opening.
2019
TCC
On the Complexity of Collision Resistant Hash Functions: New and Old Black-Box Separations
Nir Bitansky, Akshay Degwekar
The complexity of collision-resistant hash functions has been long studied in the theory of cryptography. While we often think about them as a Minicrypt primitive, black-box separations demonstrate that constructions from one-way functions are unlikely. Indeed, theoretical constructions of collision-resistant hash functions are based on rather structured assumptions. We make two contributions to this study:
1. A New Separation: We show that collision-resistant hashing does not imply hard problems in the class Statistical Zero Knowledge in a black-box way.
2. New Proofs: We show new proofs for the results of Simon, ruling out black-box reductions of collision-resistant hashing to one-way permutations, and of Asharov and Segev, ruling out black-box reductions to indistinguishability obfuscation. The new proofs are quite different from the previous ones and are based on simple coupling arguments.
2019
JOFC
From Cryptomania to Obfustopia Through Secret-Key Functional Encryption
Functional encryption lies at the frontiers of current research in cryptography; some variants have been shown sufficiently powerful to yield indistinguishability obfuscation (IO), while other variants have been constructed from standard assumptions such as LWE. Indeed, most variants have been classified as belonging to either the former or the latter category. However, one mystery that has remained is the case of secret-key functional encryption with an unbounded number of keys and ciphertexts. On the one hand, this primitive is not known to imply anything outside of minicrypt, the land of secret-key cryptography, but, on the other hand, we do not know how to construct it without the heavy hammers in obfustopia. In this work, we show that (subexponentially secure) secret-key functional encryption is powerful enough to construct indistinguishability obfuscation if we additionally assume the existence of (subexponentially secure) plain public-key encryption. In other words, secret-key functional encryption provides a bridge from cryptomania to obfustopia. On the technical side, our result relies on two main components. As our first contribution, we show how to use secret-key functional encryption to get “exponentially efficient indistinguishability obfuscation” (XIO), a notion recently introduced by Lin et al. (PKC ’16) as a relaxation of IO. Lin et al. show how to use XIO and the LWE assumption to build IO. As our second contribution, we improve on this result by replacing its reliance on the LWE assumption with any plain public-key encryption scheme. Lastly, we ask whether secret-key functional encryption can be used to construct public-key encryption itself and therefore take us all the way from minicrypt to obfustopia. A result of Asharov and Segev (FOCS ’15) shows that this is not the case under black-box constructions, even for exponentially secure functional encryption. We show, through a non-black-box construction, that subexponentially secure secret-key functional encryption indeed leads to public-key encryption. The resulting public-key encryption scheme, however, is at most quasi-polynomially secure, which is insufficient to take us to obfustopia.
2019
JOFC
Verifiable Random Functions from Non-interactive Witness-Indistinguishable Proofs
Nir Bitansky
Verifiable random functions (VRFs) are pseudorandom functions where the owner of the seed, in addition to computing the function’s value $y$ at any point $x$, can also generate a non-interactive proof $\pi$ that $y$ is correct, without compromising pseudorandomness at other points. Being a natural primitive with a wide range of applications, considerable efforts have been directed toward the construction of such VRFs. While these efforts have resulted in a variety of algebraic constructions (from bilinear maps or the RSA problem), the relation between VRFs and other general primitives is still not well understood. We present new constructions of VRFs from general primitives, the main one being non-interactive witness-indistinguishable proofs (NIWIs). This includes: (1) a selectively secure VRF assuming NIWIs and non-interactive commitments. As usual, the VRF can be made adaptively secure assuming subexponential hardness of the underlying primitives. (2) An adaptively secure VRF assuming (polynomially hard) NIWIs, non-interactive commitments, and (single-key) constrained pseudorandom functions for a restricted class of constraints. The above primitives can be instantiated under various standard assumptions, which yields corresponding VRF instantiations, under different assumptions than were known so far. One notable example is a non-uniform construction of VRFs from subexponentially hard trapdoor permutations, or more generally, from verifiable pseudorandom generators (the construction can be made uniform under a standard derandomization assumption). This partially answers an open question by Dwork and Naor (FOCS ’00). The construction and its analysis are quite simple. Both draw from ideas commonly used in the context of indistinguishability obfuscation.
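To fix the VRF interface, here is a toy sketch in the RSA full-domain-hash style, one of the algebraic approaches the abstract contrasts with; parameters are tiny and insecure, and this illustrates only the syntax of proving and verifying, not the paper's NIWI-based constructions.

```python
# Toy VRF in the RSA-FDH style: the proof pi = H(x)^d mod N is unique
# because RSA is a permutation; verification checks pi^e = H(x) mod N.
# Requires Python 3.8+ for pow(e, -1, phi).
import hashlib

p, q = 999983, 1000003            # toy primes (insecure)
N, phi = p * q, (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)

def H(x: bytes) -> int:
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % N

def vrf_prove(x: bytes):
    pi = pow(H(x), d, N)          # the unique, non-interactive proof
    y = hashlib.sha256(pi.to_bytes(8, "big")).digest()  # the VRF output
    return y, pi

def vrf_verify(x: bytes, y: bytes, pi: int) -> bool:
    return pow(pi, e, N) == H(x) and \
           y == hashlib.sha256(pi.to_bytes(8, "big")).digest()

y, pi = vrf_prove(b"some point")
assert vrf_verify(b"some point", y, pi)
```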
2018
TCC
One-Message Zero Knowledge and Non-malleable Commitments
Nir Bitansky, Huijia Lin
We introduce a new notion of one-message zero-knowledge (1ZK) arguments that satisfy a weak soundness guarantee—the number of false statements that a polynomial-time non-uniform adversary can convince the verifier to accept is not much larger than the size of its non-uniform advice. The zero-knowledge guarantee is given by a simulator that runs in (mildly) super-polynomial time. We construct such 1ZK arguments based on the notion of multi-collision-resistant keyless hash functions, recently introduced by Bitansky, Kalai, and Paneth (STOC 2018). Relying on the constructed 1ZK arguments, subexponentially-secure time-lock puzzles, and other standard assumptions, we construct one-message fully-concurrent non-malleable commitments. This is the first construction that is based on assumptions that do not already incorporate non-malleability, as well as the first based on (subexponentially) falsifiable assumptions.
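The time-lock puzzles assumed by the construction can be illustrated with a toy Rivest–Shamir–Wagner style sketch: the puzzle generator uses the factorization trapdoor to lock a message quickly, while opening requires T sequential squarings. Parameters and helper names below are insecure toys of ours.

```python
# Toy RSW-style time-lock puzzle (illustrative, insecure parameters).
import hashlib, secrets

p, q = 1000037, 1000039
N, phi = p * q, (p - 1) * (q - 1)

def lock(msg: bytes, T: int):
    # msg up to 32 bytes in this toy
    x = secrets.randbelow(N - 2) + 2  # coprime to N with overwhelming prob.
    y = pow(x, pow(2, T, phi), N)     # x^(2^T) mod N, computed fast
    pad = hashlib.sha256(y.to_bytes(8, "big")).digest()[:len(msg)]
    return x, T, bytes(m ^ k for m, k in zip(msg, pad))

def open_puzzle(x: int, T: int, ct: bytes) -> bytes:
    y = x
    for _ in range(T):                # inherently sequential work
        y = (y * y) % N
    pad = hashlib.sha256(y.to_bytes(8, "big")).digest()[:len(ct)]
    return bytes(c ^ k for c, k in zip(ct, pad))

assert open_puzzle(*lock(b"msg", 5000)) == b"msg"
```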
2017 EUROCRYPT
2017 EUROCRYPT
2017 CRYPTO
2017 TCC
2017 JOFC
2016 TCC
2016 TCC
2016 TCC
2016 TCC
2015 TCC
2014 CRYPTO
2014 CRYPTO
2014 CRYPTO
2014 TCC
2014 JOFC
2013 TCC
2013 TCC
2012 TCC
2012 TCC
2012 CRYPTO
2011 ASIACRYPT
2010 CRYPTO

Program Committees

Crypto 2024
TCC 2022
Crypto 2021
Eurocrypt 2021
Eurocrypt 2019
TCC 2019
Eurocrypt 2018
Eurocrypt 2017
TCC 2017
PKC 2017
TCC 2016
TCC 2015