## CryptoDB

### Salil P. Vadhan

#### Affiliation: Harvard University

#### Publications

**Year / Venue / Title**

2010

EPRINT

Universal One-Way Hash Functions via Inaccessible Entropy
Abstract

This paper revisits the construction of Universal One-Way Hash Functions (UOWHFs) from any one-way function due to Rompel (STOC 1990). We give a simpler construction of UOWHFs which also obtains better efficiency and security. The construction exploits a strong connection to the recently introduced notion of *inaccessible entropy* (Haitner et al. STOC 2009). With this perspective, we observe that a small tweak of any one-way function f is already a weak form of a UOWHF: Consider F(x, i) that outputs the i-bit long prefix of f(x). If F were a UOWHF then given a random x and i it would be hard to come up with x' \neq x such that F(x, i) = F(x', i). While this may not be the case, we show (rather easily) that it is hard to sample x' with almost full entropy among all the possible such values of x'. The rest of our
construction simply amplifies and exploits this basic property.
With this and other recent works we have that the constructions of three fundamental cryptographic primitives (Pseudorandom Generators, Statistically Hiding Commitments and UOWHFs) out of one-way functions are to a large extent unified. In particular, all three constructions rely on and manipulate computational notions of entropy in similar ways. Pseudorandom Generators rely on the well-established notion of pseudoentropy, whereas Statistically Hiding Commitments and UOWHFs rely on the newer notion of inaccessible entropy.
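The "small tweak" described above can be sketched in a few lines. This is a toy illustration only: SHA-256 stands in for an arbitrary one-way function f (an assumption, not the paper's choice), and the actual construction amplifies this weak form as the abstract explains.

```python
import hashlib

def f(x: bytes) -> bytes:
    """Stand-in one-way function (SHA-256 here, purely for illustration)."""
    return hashlib.sha256(x).digest()

def bits(data: bytes):
    """Expand bytes into a list of bits, most significant bit first."""
    return [(byte >> (7 - j)) & 1 for byte in data for j in range(8)]

def F(x: bytes, i: int):
    """F(x, i): the i-bit prefix of f(x), the 'small tweak' from the abstract."""
    return tuple(bits(f(x))[:i])

# Prefixes are consistent: a shorter prefix is an initial segment of a longer one.
x = b"hello"
assert F(x, 16) == F(x, 32)[:16]
```

The point of the randomly chosen index i is that, even though collisions under a short prefix certainly exist, an adversary given a random (x, i) cannot sample a colliding x' with almost full entropy.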

2010

EPRINT

Improved Delegation of Computation using Fully Homomorphic Encryption
Abstract

Following Gennaro, Gentry, and Parno (Cryptology ePrint Archive 2009/547), we use fully homomorphic encryption to design improved schemes for delegating computation. In such schemes, a delegator outsources the computation of a function $F$ on many, dynamically chosen inputs $x_i$ to a worker in such a way that it is infeasible for the worker to make the delegator accept a result other than $F(x_i)$. The "online stage" of the Gennaro et al. scheme is very efficient: the parties exchange two messages, the delegator runs in time $poly(log T)$, and the worker runs in time $poly(T)$, where $T$ is the time complexity of $F$. However, the "offline stage" (which depends on the function $F$ but not the inputs to be delegated) is inefficient: the delegator runs in time $poly(T)$ and generates a public key of length $poly(T)$ that needs to be accessed by the worker during the online stage.
Our first construction eliminates the large public key from the Gennaro et al. scheme. The delegator still invests $poly(T)$ time in the offline stage, but does not need to communicate or publish anything. Our second construction reduces the work of the delegator in the offline stage to $poly(log T)$ at the price of a 4-message (offline) interaction with a $poly(T)$-time worker (which need not be the same as the workers used in the online stage). Finally, we describe a "pipelined" implementation of the second construction that avoids the need to re-run the offline construction after errors are detected (assuming errors are not too frequent).

2009

EPRINT

Proofs of Retrievability via Hardness Amplification
Abstract

Proofs of Retrievability (PoR), introduced by Juels and Kaliski, allow the client to store a file $F$ on an untrusted server, and later run an efficient audit protocol in which
the server proves that it (still) possesses the client's data.
Constructions of PoR schemes attempt to minimize the client and
server storage, the communication complexity of an audit, and even
the number of file-blocks accessed by the server during the audit.
In this work, we identify several different variants of the problem
(such as bounded-use vs. unbounded-use, knowledge-soundness vs.
information-soundness), and give nearly optimal PoR schemes for
each of these variants. Our constructions either improve (and
generalize) the prior PoR constructions, or give the first known PoR
schemes with the required properties. In particular, we
\begin{itemize}
\item Formally prove the security of an (optimized) variant of the
bounded-use scheme of Juels and Kaliski~\cite{JuelsK07}, without
making any simplifying assumptions on the behavior of the
adversary.
\item Build the first unbounded-use PoR scheme where the communication
complexity is linear in the security parameter and which does not rely on Random Oracles, resolving an open
question of Shacham and Waters~\cite{ShachamW08}.
\item Build the first bounded-use scheme with {\em
information-theoretic} security.
\end{itemize}
The main insight of our work comes from a simple
connection between PoR schemes and the notion of {\em hardness
amplification}, extensively studied in complexity theory. In
particular, our improvements come from first abstracting a purely
information-theoretic notion of {\em PoR codes},
and then building nearly optimal PoR codes using state-of-the-art
tools from coding and complexity theory.

2008

EPRINT

Fairness with an Honest Minority and a Rational Majority
Abstract

We provide a simple protocol for secret reconstruction in any threshold secret sharing scheme, and prove that it is fair when executed with many rational parties together with a small minority of honest parties. That is, all parties will learn the secret with high probability when the honest parties follow the protocol and the rational parties act in their own self-interest (as captured by the notion of a Bayesian subgame perfect equilibrium). The protocol only
requires a standard (synchronous) broadcast channel, and tolerates fail-stop deviations (i.e. early stopping, but not incorrectly computed messages).
Previous protocols for this problem in the cryptographic or economic models have either required an honest majority, used strong communication channels that enable simultaneous exchange of information, or settled for approximate notions of security/equilibria.

2007

EPRINT

Interactive and Noninteractive Zero Knowledge Coincide in the Help Model
Abstract

We show that a problem in AM has an interactive zero-knowledge
proof system {\em if and only if} it has a noninteractive zero
knowledge proof system in the `help model' of Ben-Or and Gutfreund
({\em J. Cryptology}, 2003). In this model, the shared reference
string is generated by a probabilistic polynomial-time dealer who
is given access to the statement to be proven. Our result holds
for both computational zero knowledge and statistical zero
knowledge, and does not rely on any unproven complexity
assumptions.
We also show that help does not add power to interactive
computational zero-knowledge proofs, paralleling a result of
Ben-Or and Gutfreund for the case of statistical zero knowledge.

2007

EPRINT

Interactive and Noninteractive Zero Knowledge are Equivalent in the Help Model
Abstract

We show that interactive and noninteractive zero-knowledge are
equivalent in the `help model' of Ben-Or and Gutfreund ({\em J.
Cryptology}, 2003). In this model, the shared reference string is
generated by a probabilistic polynomial-time dealer who is given
access to the statement to be proven. Our results do not rely on
any unproven complexity assumptions and hold for statistical zero
knowledge, for computational zero knowledge restricted to AM,
and for quantum zero knowledge when the help is a pure quantum
state.

2006

EPRINT

Statistical Zero-Knowledge Arguments for NP from Any One-Way Function
Abstract

We show that every language in NP has a *statistical* zero-knowledge
argument system under the (minimal) complexity assumption that
one-way functions exist. In such protocols, even a computationally
unbounded verifier cannot learn anything other than the fact that the
assertion being proven is true, whereas a polynomial-time prover
cannot convince the verifier to accept a false assertion except with
negligible probability. This resolves an open question posed by Naor,
Ostrovsky, Venkatesan, and Yung (CRYPTO `92, J. Cryptology `98).
Departing from previous works on this problem, we do not construct
standard statistically hiding commitments from any one-way function.
Instead, we construct a relaxed variant of commitment schemes called
"1-out-of-2-binding commitments," recently introduced by Nguyen and
Vadhan (STOC `06).

2006

EPRINT

Zero Knowledge and Soundness are Symmetric
Abstract

We give a complexity-theoretic characterization of the class of problems in NP having zero-knowledge argument systems. This characterization is symmetric in its treatment of the zero knowledge and the soundness conditions, and thus we deduce that the class of problems in NP intersect coNP having zero-knowledge arguments is closed under complement. Furthermore, we show that a problem in NP has a *statistical* zero-knowledge argument system if and only if its complement has a computational zero-knowledge *proof* system. What is novel about these results is that they are *unconditional*, i.e., do not rely on unproven complexity assumptions such as the existence of one-way functions.
Our characterization of zero-knowledge arguments also enables us to prove a variety of other unconditional results about the class of problems in NP having zero-knowledge arguments, such as equivalences between honest-verifier and malicious-verifier zero knowledge, private coins and public coins, inefficient provers and efficient provers, and non-black-box simulation and black-box simulation. Previously, such results were only known unconditionally for zero-knowledge *proof systems*, or under the assumption that one-way functions exist for zero-knowledge argument systems.

2005

EPRINT

Concurrent Zero Knowledge without Complexity Assumptions
Abstract

We provide unconditional constructions of concurrent statistical zero-knowledge proofs for a variety of non-trivial problems (not known to have probabilistic polynomial-time algorithms). The problems include Graph Isomorphism, Graph Nonisomorphism, Quadratic Residuosity, Quadratic Nonresiduosity, a restricted version of Statistical Difference, and approximate versions of the (coNP forms of the) Shortest Vector Problem and Closest Vector Problem in lattices.
For some of the problems, such as Graph Isomorphism and Quadratic Residuosity, the proof systems have provers that can be implemented in polynomial time (given an NP witness) and have $\tilde{O}(\log n)$ rounds, which is known to be essentially optimal for black-box simulation.
To the best of our knowledge, these are the first constructions of concurrent zero-knowledge protocols in the asynchronous model (without timing assumptions) that do not require complexity assumptions (such as the existence of one-way functions).

2005

EPRINT

Derandomization in Cryptography
Abstract

We give two applications of Nisan--Wigderson-type ("non-cryptographic") pseudorandom generators in cryptography. Specifically, assuming the existence of an appropriate NW-type generator, we construct:
A one-message witness-indistinguishable proof system for every language in NP, based on any trapdoor permutation. This proof system does not assume a shared random string or any setup assumption, so it is actually an "NP proof system."
A noninteractive bit commitment scheme based on any one-way function.
The specific NW-type generator we need is a hitting set generator fooling nondeterministic circuits. It is known how to construct such a generator if E = DTIME(2^{O(n)}) has a function of nondeterministic circuit complexity 2^{\Omega(n)} (Miltersen and Vinodchandran, FOCS `99).
Our witness-indistinguishable proofs are obtained by using the NW-type generator to derandomize the ZAPs of Dwork and Naor (FOCS `00). To our knowledge, this is the first construction of an NP proof system achieving a secrecy property.
Our commitment scheme is obtained by derandomizing the interactive commitment scheme of Naor (J. Cryptology, 1991). Previous constructions of noninteractive commitment schemes were only known under incomparable assumptions.

2004

EPRINT

Simpler Session-Key Generation from Short Random Passwords
Abstract

Goldreich and Lindell (CRYPTO `01) recently presented the first protocol for
password-authenticated key exchange in the standard model (with no common reference string
or set-up assumptions other than the shared password). However, their protocol uses several
heavy tools and has a complicated analysis.
We present a simplification of the Goldreich--Lindell (GL) protocol and analysis for the
special case when the dictionary is of the form $D=\{0,1\}^d$, i.e. the password is a
short random string (like an ATM PIN). Our protocol can be converted into one
for arbitrary dictionaries using a common reference string of logarithmic length. The
security bound achieved by our protocol is somewhat worse than the GL protocol. Roughly
speaking, our protocol guarantees that the adversary can ``break'' the scheme with
probability at most $O(\mathrm{poly}(n)/|D|)^{\Omega(1)}$, whereas the GL protocol
guarantees a bound of $O(1/|D|)$.
We also present an alternative, more natural definition of security than the ``augmented
definition'' of Goldreich and Lindell, and prove that the two definitions are equivalent.

2004

EPRINT

Lower Bounds for Non-Black-Box Zero Knowledge
Abstract

We show new lower bounds and impossibility results for general (possibly *non-black-box*) zero-knowledge proofs and arguments. Our main results are that, under reasonable complexity assumptions:
1. There does not exist a two-round zero-knowledge *proof* system with perfect completeness for an NP-complete language. The previous impossibility result for two-round zero knowledge, by Goldreich and Oren (J. Cryptology, 1994) was only for the case of *auxiliary-input* zero-knowledge proofs and arguments.
2. There does not exist a constant-round zero-knowledge *strong* proof or argument of knowledge (as defined by Goldreich (2001)) for a nontrivial language.
3. There does not exist a constant-round public-coin *proof* system for a nontrivial language that is *resettable zero knowledge*. This result also extends to "bounded-resettable" zero knowledge, in which the number of resets is a priori bounded by a polynomial in the input length and prover-to-verifier communication. In contrast, we show that under reasonable assumptions, there does exist such a (computationally sound) *argument* system that is bounded-resettable zero knowledge.
The complexity assumptions we use are not commonly used in cryptography. However, in all cases, we show that assumptions similar to ours are necessary for the above results.
Most previously known lower bounds, such as those of Goldreich and Krawczyk (SIAM J. Computing, 1996), were only for *black-box* zero knowledge. However, a result of Barak (FOCS 2001) shows that many (or even most) of these black-box lower bounds do *not* extend to the case of general zero knowledge.

2003

CRYPTO

2002

EPRINT

An Improved Pseudorandom Generator Based on Hardness of Factoring
Abstract

We present a simple-to-implement and efficient pseudorandom generator based on the factoring assumption. It outputs more than pn/2 pseudorandom bits per p exponentiations, each with the same base and an exponent shorter than n/2 bits. Our generator is based on results by Håstad, Schrift and Shamir [HSS93], but unlike their generator and its improvement by Goldreich and Rosen [GR00], it does not use hashing or extractors, and is thus simpler and somewhat more efficient.
In addition, we present a general technique that can be used to speed up pseudorandom generators based on iterating one-way permutations. We construct our generator by applying this technique to results of [HSS93]. We also show how the generator given by Gennaro [Gen00] can be simply derived from results of Patel and Sundaram [PS98] using our technique.

2002

EPRINT

On Constructing Locally Computable Extractors and Cryptosystems in the Bounded Storage Model
Abstract

We consider the problem of constructing randomness extractors
which are {\em locally computable}, i.e. only read a small number
of bits from their input. As recently shown by Lu (CRYPTO `02), locally computable extractors directly yield secure private-key cryptosystems in Maurer's bounded storage model (J. Cryptology, 1992).
In this note, we observe that a fundamental lemma of Nisan and
Zuckerman (J. Computer and System Sciences, 1996) yields a general technique for constructing locally computable extractors. Specifically, we obtain a locally computable extractor by combining any extractor with any randomness-efficient (averaging) sampler. Plugging in known extractor and sampler constructions, we obtain locally computable extractors, and hence cryptosystems in the bounded storage model, whose parameters improve upon previous constructions and come quite close to the lower bounds.
Along the way, we also present a refinement of the Nisan--Zuckerman lemma, showing that random sampling bits from a weak random source preserves the min-entropy rate up to an arbitrarily small additive loss (whereas the original lemma loses a logarithmic factor).
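The sample-then-extract recipe can be sketched as follows. This is a toy illustration, not the paper's construction: the seeded-PRG sampler and the random-subset-parity "extractor" below are stand-ins for a genuine randomness-efficient averaging sampler and a strong extractor.

```python
import random

def sample_positions(seed: int, n: int, t: int):
    """Toy sampler: t positions in [0, n) chosen by a seeded PRG
    (a stand-in for a randomness-efficient averaging sampler)."""
    rng = random.Random(seed)
    return [rng.randrange(n) for _ in range(t)]

def toy_extractor(sampled_bits, seed: int, m: int):
    """Toy extractor: m random-subset parities of the sampled bits
    (a stand-in for a strong randomness extractor)."""
    rng = random.Random(seed)
    out = []
    for _ in range(m):
        subset = [i for i in range(len(sampled_bits)) if rng.random() < 0.5]
        out.append(sum(sampled_bits[i] for i in subset) % 2)
    return out

def local_extractor(source_bits, seed1: int, seed2: int, t: int, m: int):
    """Locally computable extractor: read only t of the n source bits
    (chosen by the sampler), then extract from just those bits."""
    pos = sample_positions(seed1, len(source_bits), t)
    return toy_extractor([source_bits[i] for i in pos], seed2, m)

out = local_extractor([1, 0] * 50, seed1=1, seed2=2, t=10, m=8)
```

The Nisan--Zuckerman lemma is what makes this composition sound: the t sampled bits of a weak random source retain (up to a small loss) the min-entropy rate of the whole source, so the extractor still has enough entropy to work with.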

2001

EPRINT

On the (Im)possibility of Obfuscating Programs
Abstract

Informally, an {\em obfuscator} $O$ is an (efficient,
probabilistic) ``compiler'' that takes as input a program (or
circuit) $P$ and produces a new program $O(P)$ that has the
same functionality as $P$ yet is ``unintelligible'' in some sense.
Obfuscators, if they exist, would have a wide variety of
cryptographic and complexity-theoretic applications, ranging from
software protection to homomorphic encryption to
complexity-theoretic analogues of Rice's theorem. Most of these
applications are based on an interpretation of the
``unintelligibility'' condition in obfuscation as meaning that
$O(P)$ is a ``virtual black box,'' in the sense that anything
one can efficiently compute given $O(P)$, one could also
efficiently compute given oracle access to $P$.
In this work, we initiate a theoretical investigation of
obfuscation. Our main result is that, even under very
weak formalizations of the above intuition, obfuscation
is impossible. We prove this by constructing a family of
functions $F$ that are {\em inherently
unobfuscatable} in the following sense: there is a
property $\pi : F \rightarrow \{0,1\}$ such that (a) given {\em
any program} that computes a function $f\in F$, the
value $\pi(f)$ can be efficiently computed, yet (b) given
{\em oracle access} to a (randomly selected) function
$f\in F$, no efficient algorithm can compute $\pi(f)$
much better than random guessing.
We extend our impossibility result in a number of ways, including
even obfuscators that (a) are not necessarily computable in
polynomial time, (b) only {\em approximately} preserve the
functionality, and (c) only need to work for very restricted
models of computation ($TC^0$). We also rule out several
potential applications of obfuscators, by constructing
``unobfuscatable'' signature schemes, encryption schemes, and
pseudorandom function families.

2000

EPRINT

A Complete Problem for Statistical Zero Knowledge
Abstract

We present the first complete problem for SZK, the class of
(promise) problems
possessing statistical zero-knowledge proofs (against an
honest verifier).
The problem, called STATISTICAL DIFFERENCE,
is to decide whether two efficiently samplable distributions
are either statistically close or far apart. This
gives a new characterization of
SZK that makes no reference to interaction or zero knowledge.
We propose the use of complete problems to unify and
extend the study of statistical zero knowledge. To this end,
we examine several
consequences of our Completeness Theorem and its proof, such as:
(1) A way to make every (honest-verifier) statistical
zero-knowledge proof very communication efficient,
with the prover sending only one bit
to the verifier (to achieve soundness error 1/2).
(2) Simpler proofs of many of the previously known
results about statistical zero knowledge, such as
the Fortnow and Aiello--Håstad upper bounds on the complexity of SZK
and
Okamoto's result that SZK is closed under complement.
(3) Strong closure properties of SZK which amount to
constructing statistical zero-knowledge proofs for complex assertions
built out of simpler assertions already shown to be in SZK.
(4) New results about the various measures of "knowledge complexity,"
including a collapse in the hierarchy corresponding
to knowledge complexity in the "hint" sense.
(5) Algorithms for manipulating the statistical difference
between efficiently samplable distributions, including transformations
which "polarize" and "reverse" the statistical relationship
between a pair of distributions.

1999

CRYPTO

1998

EPRINT

Many-to-one Trapdoor Functions and their Relation to Public-key Cryptosystems
Abstract

The heart of the task of building public key cryptosystems is viewed
as that of ``making trapdoors;'' in fact, public key cryptosystems and
trapdoor functions are often discussed as synonymous. How accurate is
this view? In this paper we endeavor to get a better understanding of
the nature of ``trapdoorness'' and its relation to public key
cryptosystems, by broadening the scope of the investigation: we look
at general trapdoor functions; that is, functions that are not
necessarily injective (i.e., one-to-one). Our first result is somewhat
surprising: we show that non-injective trapdoor functions (with
super-polynomial pre-image size) can be constructed {from} any one-way
function (and hence it is unlikely that they suffice for public key
encryption). On the other hand, we show that trapdoor functions with
polynomial pre-image size are sufficient for public key encryption.
Together, these two results indicate that the pre-image size is a
fundamental parameter of trapdoor functions. We then turn our
attention to the converse, asking what kinds of trapdoor functions can
be constructed from public key cryptosystems. We take a first step by
showing that in the random-oracle model one can construct injective
trapdoor functions from any public key cryptosystem.

1998

EPRINT

Comparing Entropies in Statistical Zero-Knowledge with Applications to the Structure of SZK
Abstract

We consider the following (promise) problem, denoted ED (for Entropy
Difference): The input is a pair of circuits, and YES instances
(resp., NO instances) are such pairs in which the first (resp.,
second) circuit generates a distribution with noticeably higher
entropy.
On one hand we show that any language having an (honest-verifier)
statistical zero-knowledge proof is Karp-reducible to ED. On the other
hand, we present a public-coin (honest-verifier) statistical
zero-knowledge proof for ED. Thus, we obtain an alternative proof of
Okamoto's result by which HVSZK (i.e., Honest-Verifier Statistical
Zero-Knowledge) equals public-coin HVSZK. The new proof is much simpler
than the original one. The above also yields a trivial proof that HVSZK
is closed under complementation (since ED easily reduces to its
complement). Among the new results obtained is an equivalence of a weak
notion of statistical zero-knowledge to the standard one.

#### Program Committees

- TCC 2017
- TCC 2014
- Crypto 2009
- TCC 2007
- Crypto 2006
- Eurocrypt 2005
- TCC 2004
- Crypto 2000

#### Coauthors

- Boaz Barak (5)
- Mihir Bellare (2)
- Eleanor Birrell (1)
- Mark Bun (1)
- Ran Canetti (1)
- André Chailloux (2)
- Yi-Hsiu Chen (1)
- Kai-Min Chung (2)
- Dragos Florin Ciocan (3)
- Nenad Dedic (1)
- Yevgeniy Dodis (4)
- Oded Goldreich (4)
- Ronen Gradwohl (1)
- Iftach Haitner (2)
- Shai Halevi (2)
- Thomas Holenstein (2)
- Russell Impagliazzo (2)
- Yael Tauman Kalai (2)
- Iordanis Kerenidis (2)
- Yehuda Lindell (1)
- Adriana López-Alt (1)
- Mohammad Mahmoody (1)
- Daniele Micciancio (3)
- Ilya Mironov (2)
- Tal Moran (1)
- Jack Murtagh (2)
- Minh-Huyen Nguyen (4)
- Shien Jin Ong (10)
- Omkant Pandey (1)
- David C. Parkes (2)
- Ananth Raghunathan (1)
- Omer Reingold (4)
- Leonid Reyzin (1)
- Thomas Ristenpart (1)
- Ronald L. Rivest (1)
- Alon Rosen (2)
- Steven Rudich (2)
- Amit Sahai (8)
- Gil Segev (1)
- Madhu Sudan (1)
- Luca Trevisan (2)
- Jonathan Ullman (1)
- Hoeteck Wee (3)
- Daniel Wichs (2)
- Ke Yang (2)
- Colin Jia Zheng (1)
- David Zuckerman (1)