## CryptoDB

### Daniel Wichs

#### Affiliation: Northeastern University

#### Publications

##### Private Anonymous Data Access (EUROCRYPT 2019)

We consider a scenario where a server holds a huge database that it wants to make accessible to a large group of clients. After an initial setup phase, clients should be able to read arbitrary locations in the database while maintaining privacy (the server does not learn which locations are being read) and anonymity (the server does not learn which client is performing each read). This should hold even if the server colludes with a subset of the clients. Moreover, the run-time of both the server and the client during each read operation should be low, ideally only poly-logarithmic in the size of the database and the number of clients. We call this notion Private Anonymous Data Access (PANDA). PANDA simultaneously combines aspects of Private Information Retrieval (PIR) and Oblivious RAM (ORAM). PIR has no initial setup, and allows anybody to privately and anonymously access a public database, but the server’s run-time is linear in the data size. On the other hand, ORAM achieves poly-logarithmic server run-time, but requires an initial setup after which only a single client with a secret key can access the database. The goal of PANDA is to get the best of both worlds: allow many clients to privately and anonymously access the database as in PIR, while having an efficient server as in ORAM.

In this work, we construct bounded-collusion PANDA schemes, where the efficiency scales linearly with a bound on the number of corrupted clients that can collude with the server, but is otherwise poly-logarithmic in the data size and the total number of clients. Our solution relies on standard assumptions, namely the existence of fully homomorphic encryption, and combines techniques from both PIR and ORAM. We also extend PANDA to settings where clients can write to the database.
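For context, the classic two-server PIR protocol of Chor, Goldreich, Kushilevitz and Sudan illustrates the PIR half of this trade-off: each query alone is a uniformly random subset and so hides the index, but every server must XOR over a linear fraction of the database. A minimal sketch (not from the paper; database contents and sizes are illustrative):

```python
import secrets

def pir_query(n, i):
    """Client: build queries for index i. S1 is a random subset of [n]
    (as a 0/1 selection vector); S2 differs from S1 exactly at position i."""
    s1 = [secrets.randbelow(2) for _ in range(n)]
    s2 = list(s1)
    s2[i] ^= 1
    return s1, s2

def pir_answer(db, subset):
    """Server: XOR of the database bits selected by the subset.
    Runs in time linear in the database size -- the PIR bottleneck."""
    acc = 0
    for bit, sel in zip(db, subset):
        if sel:
            acc ^= bit
    return acc

def pir_reconstruct(a1, a2):
    """Client: the two answers differ exactly by db[i]."""
    return a1 ^ a2

db = [1, 0, 1, 1, 0, 0, 1, 0]
q1, q2 = pir_query(len(db), 5)
assert pir_reconstruct(pir_answer(db, q1), pir_answer(db, q2)) == db[5]
```

Each single (non-colluding) server sees only a uniformly random subset, so it learns nothing about the queried index; PANDA targets the same privacy guarantee while replacing the linear server scan with poly-logarithmic work.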

##### Reusable Designated-Verifier NIZKs for all NP from CDH (EUROCRYPT 2019)

Non-interactive zero-knowledge proofs (NIZKs) are a fundamental cryptographic primitive. Despite a long history of research, we only know how to construct NIZKs under a few select assumptions, such as the hardness of factoring or using bilinear maps. Notably, there are no known constructions based on either the computational or decisional Diffie-Hellman (CDH/DDH) assumption without relying on a bilinear map.

In this paper, we study a relaxation of NIZKs in the designated verifier setting (DV-NIZK), in which the public common-reference string is generated together with a secret key that is given to the verifier in order to verify proofs. In this setting, we distinguish between one-time and reusable schemes, depending on whether they can be used to prove only a single statement or arbitrarily many statements. For reusable schemes, the main difficulty is to ensure that soundness continues to hold even when the malicious prover learns whether various proofs are accepted or rejected by the verifier. One-time DV-NIZKs are known to exist for general NP statements assuming only public-key encryption. However, prior to this work, we did not have any construction of reusable DV-NIZKs for general NP statements from any assumption under which we didn’t already also have standard NIZKs.

In this work, we construct reusable DV-NIZKs for general NP statements under the CDH assumption, without requiring a bilinear map. Our construction is based on the hidden-bits paradigm, which was previously used to construct standard NIZKs. We define a cryptographic primitive called a hidden-bits generator (HBG), along with a designated-verifier variant (DV-HBG), which modularly abstract out how to use this paradigm to get both standard NIZKs and reusable DV-NIZKs. We construct a DV-HBG scheme under the CDH assumption by relying on techniques from the Cramer-Shoup hash-proof system, and this yields our reusable DV-NIZK for general NP statements under CDH.

We also consider a strengthening of DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK) where the setup consists of an honestly generated common random string and the verifier then gets to choose his own (potentially malicious) public/secret key pair to generate/verify proofs. We construct MDV-NIZKs under the “one-more CDH” assumption without relying on bilinear maps.

##### Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing (EUROCRYPT 2019)

We present a worst case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires extremely high noise rate of $1/2-1/\mathrm{poly}(n)$. Thus we can only show that “very hard” LPN is harder than some “very mildly hard” worst case problem. We note that LPN with noise $1/2-1/\mathrm{poly}(n)$ already implies symmetric cryptography.

Specifically, we consider the (n, m, w)-nearest codeword problem ((n, m, w)-NCP) which takes as input a generating matrix for a binary linear code in m dimensions and rank n, and a target vector which is very close to the code (Hamming distance at most w), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $w/m \approx \log^2 n/n$, (n, m, w)-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $1/2-1/\mathrm{poly}(n)$.

Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that (n, m, w)-NCP with the aforementioned parameters lies in the complexity class $\mathrm{Search}\text{-}\mathcal{BPP}^{\mathcal{SZK}}$ (i.e. reducible to a problem that has a statistical zero knowledge protocol), implying that it is unlikely to be $\mathcal{NP}$-hard. We then show that the hardness of LPN with very low noise rate $\log^2(n)/n$ implies the existence of collision resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $\mathcal{BPP}^{\mathcal{SZK}}$).
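To make the parameter regime concrete, here is a toy sampler for LPN instances with noise rate $1/2-\varepsilon$ (purely illustrative, not part of the reduction): each sample is $(a, \langle a, s\rangle + e \bmod 2)$, and the “very hard” regime above corresponds to $\varepsilon = 1/\mathrm{poly}(n)$.

```python
import random

def lpn_samples(s, m, eps, rng):
    """Generate m LPN samples (a, <a,s> + e mod 2) for secret s,
    where the error bit e is 1 with probability 1/2 - eps.
    eps = 1/poly(n) gives the high-noise regime used in the reduction."""
    n = len(s)
    samples = []
    for _ in range(m):
        a = [rng.randrange(2) for _ in range(n)]
        noiseless = sum(ai * si for ai, si in zip(a, s)) % 2
        e = 1 if rng.random() < 0.5 - eps else 0
        samples.append((a, noiseless ^ e))
    return samples

n = 32
rng = random.Random(0)
s = [rng.randrange(2) for _ in range(n)]
samples = lpn_samples(s, 1000, 1.0 / n, rng)  # noise rate 1/2 - 1/n

# Sanity check: with eps = 1/2 the error probability is 0, so every
# sample agrees with the noiseless inner product.
clean = lpn_samples(s, 10, 0.5, rng)
assert all(b == sum(ai * si for ai, si in zip(a, s)) % 2 for a, b in clean)
```

At noise rate $1/2-1/n$ each sample is only marginally correlated with the secret, which is why distinguishers for this regime are so powerful in the reduction above.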

##### Hardness of Non-interactive Differential Privacy from One-Way Functions (CRYPTO 2018)

A central challenge in differential privacy is to design computationally efficient non-interactive algorithms that can answer large numbers of statistical queries on a sensitive dataset. That is, we would like to design a differentially private algorithm that takes a dataset $D \in X^n$ consisting of some small number of elements n from some large data universe X, and efficiently outputs a summary that allows a user to efficiently obtain an answer to any query in some large family Q.

Ignoring computational constraints, this problem can be solved even when X and Q are exponentially large and n is just a small polynomial; however, all algorithms with remotely similar guarantees run in exponential time. There have been several results showing that, under the strong assumption of indistinguishability obfuscation, no efficient differentially private algorithm exists when X and Q can be exponentially large. However, there are no strong separations between information-theoretic and computationally efficient differentially private algorithms under any standard complexity assumption.

In this work we show that, if one-way functions exist, there is no general purpose differentially private algorithm that works when X and Q are exponentially large, and n is an arbitrary polynomial. In fact, we show that this result holds even if X is just subexponentially large (assuming only polynomially-hard one-way functions). This result solves an open problem posed by Vadhan in his recent survey [52].

##### Multi-Key Searchable Encryption, Revisited (PKC 2018)

We consider a setting where users store their encrypted documents on a remote server and can selectively share documents with each other. A user should be able to perform keyword searches over all the documents she has access to, including the ones that others shared with her. The contents of the documents, and the search queries, should remain private from the server.

This setting was considered by Popa et al. (NSDI ’14) who developed a new cryptographic primitive called Multi-Key Searchable Encryption (MKSE), together with an instantiation and an implementation within a system called Mylar, to address this goal. Unfortunately, Grubbs et al. (CCS ’16) showed that the proposed MKSE definition fails to provide basic security guarantees, and that the Mylar system is susceptible to simple attacks. Most notably, if a malicious Alice colludes with the server and shares a document with an honest Bob then the privacy of all of Bob’s search queries is lost.

In this work we revisit the notion of MKSE and propose a new strengthened definition that rules out the above attacks. We then construct MKSE schemes meeting our definition. We first give a simple and efficient construction using only pseudorandom functions. This construction achieves our strong security definition at the cost of increasing the server storage overhead relative to Mylar, essentially replicating the document each time it is shared. We also show that high server storage overhead is not inherent, by giving an alternate (albeit impractical) construction that manages to avoid it using obfuscation.
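The flavor of a per-document-key approach, where every shared copy of a document is tagged under that document's own key, can be caricatured with a toy scheme (illustrative only, not the paper's construction; deterministic tags like these leak keyword-equality within a document, which the real definitions must account for):

```python
import hmac, hashlib, os

def tag(doc_key, word):
    """Deterministic keyword tag under a per-document key (HMAC as PRF)."""
    return hmac.new(doc_key, word.encode(), hashlib.sha256).digest()

class Server:
    def __init__(self):
        self.index = {}  # doc_id -> set of keyword tags

    def store(self, doc_id, tags):
        self.index[doc_id] = set(tags)

    def search(self, tokens):
        """tokens: doc_id -> search token under that document's key.
        Returns doc_ids whose tag set contains the matching token."""
        return [d for d, t in tokens.items() if t in self.index.get(d, set())]

# Alice indexes two documents under fresh per-document keys.
k1, k2 = os.urandom(32), os.urandom(32)
srv = Server()
srv.store("doc1", [tag(k1, w) for w in ["crypto", "lattice"]])
srv.store("doc2", [tag(k2, w) for w in ["crypto"]])

# Bob was granted access to both documents, so he holds both keys and
# issues one token per accessible document for the keyword "lattice".
tokens = {"doc1": tag(k1, "lattice"), "doc2": tag(k2, "lattice")}
assert srv.search(tokens) == ["doc1"]
```

The per-document token reflects the storage/communication trade-off discussed above: the client's work grows with the number of documents shared with her, which is the cost the paper's obfuscation-based construction shows is not inherent.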

##### Traitor-Tracing from LWE Made Simple and Attribute-Based (TCC 2018)

A traitor tracing scheme is a public key encryption scheme for which there are many secret decryption keys. Any of these keys can decrypt a ciphertext; moreover, even if a coalition of users collude, put together their decryption keys and attempt to create a new decryption key, there is an efficient algorithm to trace the new key to at least one of the colluders.

Recently, Goyal, Koppula and Waters (GKW, STOC 18) provided the first traitor tracing scheme from LWE with ciphertext and secret key sizes that grow polynomially in $\log n$, where n is the number of users. The main technical building block in their construction is a strengthening of (bounded collusion secure) secret-key functional encryption which they refer to as mixed functional encryption (FE).

In this work, we improve upon and extend the GKW traitor tracing scheme:

- We provide simpler constructions of mixed FE schemes based on the LWE assumption. Our constructions improve upon the GKW construction in terms of expressiveness, modularity, and security.
- We provide a construction of attribute-based traitor tracing for all circuits based on the LWE assumption.

##### Is There an Oblivious RAM Lower Bound for Online Reads? (TCC 2018)

Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), can be used to read and write to memory in a way that hides which locations are being accessed. The best known ORAM schemes have an $O(\log n)$ overhead per access, where $n$ is the data size. The work of Goldreich and Ostrovsky gave a lower bound showing that this is optimal for ORAM schemes that operate in a “balls and bins” model, where memory blocks can only be shuffled between different locations but not manipulated otherwise. The lower bound even extends to weaker settings such as offline ORAM, where all of the accesses to be performed need to be specified ahead of time, and read-only ORAM, which only allows reads but not writes. But can we get lower bounds for general ORAM, beyond “balls and bins”?

The work of Boyle and Naor (ITCS ’16) shows that this is unlikely in the offline setting. In particular, they construct an offline ORAM with $o(\log n)$ overhead assuming the existence of small sorting circuits. Although we do not have instantiations of the latter, ruling them out would require proving new circuit lower bounds. On the other hand, the recent work of Larsen and Nielsen (CRYPTO ’18) shows that there indeed is an $\Omega(\log n)$ lower bound for general online ORAM.

This still leaves the question open for online read-only ORAM or for read/write ORAM where we want very small overhead for the read operations. In this work, we show that a lower bound in these settings is also unlikely. In particular, our main result is a construction of online ORAM where reads (but not writes) have an $o(\log n)$ overhead, assuming the existence of small sorting circuits as well as very good locally decodable codes (LDCs). Although we do not have instantiations of either of these with the required parameters, ruling them out is beyond current lower bounds.

##### Watermarking PRFs Under Standard Assumptions: Public Marking and Security with Extraction Queries (TCC 2018)

A software watermarking scheme can embed some information called a mark into a program while preserving its functionality. No adversary can remove the mark without damaging the functionality of the program. Cohen et al. (STOC ’16) gave the first positive results for watermarking, showing how to watermark certain pseudorandom function (PRF) families using indistinguishability obfuscation (iO). Their scheme has a secret marking procedure to embed marks in programs and a public extraction procedure to extract the marks from programs; security holds even against an attacker that has access to a marking oracle. Kim and Wu (CRYPTO ’17) later constructed a PRF watermarking scheme under only the LWE assumption. In their scheme, both the marking and extraction procedures are secret, but security only holds against an attacker with access to a marking oracle but not an extraction oracle. In fact, it is possible to completely break the security of the latter scheme using extraction queries, which is a significant limitation in any foreseeable application.

In this work, we construct a new PRF watermarking scheme with the following properties.

- The marking procedure is public and therefore anyone can embed marks in PRFs from the family. Previously we had no such construction even using obfuscation.
- The extraction key is secret, but marks remain unremovable even if the attacker has access to an extraction oracle. Previously we had no such construction under standard assumptions.
- Our scheme is simple, uses generic components and can be instantiated under many different assumptions such as DDH, Factoring or LWE.

The above benefits come with one caveat compared to prior work: the PRF family that we can watermark depends on the public parameters of the watermarking scheme and the watermarking authority has a secret key which can break the security of all of the PRFs in the family. Since the watermarking authority is usually assumed to be trusted, this caveat appears to be acceptable.

##### TCC 2016

##### ASIACRYPT 2015

##### CRYPTO 2014

##### CRYPTO 2014

##### EPRINT 2014

##### Fully Leakage-Resilient Signatures (JOFC 2013)

A signature scheme is fully leakage resilient (Katz and Vaikuntanathan, ASIACRYPT ’09) if it is existentially unforgeable under an adaptive chosen-message attack even in a setting where an adversary may obtain bounded (yet arbitrary) leakage information on all intermediate values that are used throughout the lifetime of the system. This is a strong and meaningful notion of security that captures a wide range of side-channel attacks.

One of the main challenges in constructing fully leakage-resilient signature schemes is dealing with leakage that may depend on the random bits used by the signing algorithm, and constructions of such schemes are known only in the random-oracle model. Moreover, even in the random-oracle model, known schemes are only resilient to leakage of less than half the length of their signing key.

In this paper we construct the first fully leakage-resilient signature schemes without random oracles. We present a scheme that is resilient to any leakage of length $(1-o(1))L$ bits, where $L$ is the length of the signing key. Our approach relies on generic cryptographic primitives, and at the same time admits rather efficient instantiations based on specific number-theoretic assumptions. In addition, we show that our approach extends to the continual-leakage model, recently introduced by Dodis, Haralambiev, Lopez-Alt and Wichs (FOCS ’10), and by Brakerski, Tauman Kalai, Katz and Vaikuntanathan (FOCS ’10). In this model the signing key is allowed to be refreshed, while its corresponding verification key remains fixed, and the amount of leakage is assumed to be bounded only in between any two successive key refreshes.

##### EUROCRYPT 2012

##### On Symmetric Encryption and Point Obfuscation (EPRINT 2010)

We show tight connections between several cryptographic primitives, namely encryption with weakly random keys, encryption with key-dependent messages (KDM), and obfuscation of point functions with multi-bit output (which we call multi-bit point functions, or MBPFs, for short). These primitives, which have been studied mostly separately in recent works, bear some apparent similarities, both in the flavor of their security requirements and in the flavor of their constructions and assumptions. Still, rigorous connections have not been drawn.

Our results can be interpreted as indicating that MBPF obfuscators imply a very strong form of encryption that simultaneously achieves security for weakly-random keys and key-dependent messages as special cases. Similarly, each one of the other primitives implies a certain restricted form of MBPF obfuscation. Our results carry both constructions and impossibility results from one primitive to others. In particular:

- The recent impossibility result for KDM security of Haitner and Holenstein (TCC 09) carries over to MBPF obfuscators.
- The Canetti-Dakdouk construction of MBPF obfuscators based on a strong variant of the DDH assumption (EC 08) gives an encryption scheme which is secure w.r.t. any weak key distribution of super-logarithmic min-entropy (and in particular, also has very strong leakage resilient properties).
- All the recent constructions of encryption schemes that are secure w.r.t. weak keys imply a weak form of MBPF obfuscators.
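The shape of an MBPF obfuscator is easy to see in the folklore hash-based heuristic: publish a salted hash of the point $x$ together with the message masked by a second hash of $x$ (a sketch only, with the hash standing in for an idealized primitive; this is not one of the constructions discussed above):

```python
import hashlib, secrets

def _h(label, salt, x, n):
    """Domain-separated hash, truncated to n <= 32 bytes."""
    return hashlib.sha256(label + salt + x).digest()[:n]

def obfuscate(x, m):
    """Obfuscate the multi-bit point function that outputs m on input x
    and None elsewhere. Works for messages m of up to 32 bytes.
    The returned program reveals only a salted hash of x and m masked
    by a second, independent hash of x."""
    salt = secrets.token_bytes(16)
    check = _h(b"chk", salt, x, 32)
    pad = _h(b"pad", salt, x, len(m))
    masked = bytes(a ^ b for a, b in zip(m, pad))

    def program(y):
        if _h(b"chk", salt, y, 32) == check:
            p = _h(b"pad", salt, y, len(masked))
            return bytes(a ^ b for a, b in zip(masked, p))
        return None

    return program

P = obfuscate(b"secret-point", b"hidden message")
assert P(b"secret-point") == b"hidden message"
assert P(b"wrong guess") is None
```

On the correct input the mask cancels and the message appears; on any other input the program reveals nothing beyond the hash values, which is exactly the "encryption with the point as key" flavor the connections above exploit.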

##### Efficient Public-Key Cryptography in the Presence of Key Leakage (EPRINT 2010)

We study the design of cryptographic primitives resistant to a large class of side-channel attacks, called "memory attacks", where an attacker can repeatedly and adaptively learn information about the secret key, subject *only* to the constraint that the *overall amount* of such information is bounded by some parameter $\ell$. Although the study of such primitives was initiated only recently by Akavia et al. [AGV09], subsequent work already produced many such "leakage-resilient" primitives [NS09, ADW09, KV09], including signature, encryption, identification (ID) and authenticated key agreement (AKA) schemes. Unfortunately, every existing scheme, for any of the four fundamental primitives above, fails to satisfy at least one of the following desirable properties:

- **Efficiency.** While the construction may be generic, it should have some *efficient* instantiations, based on standard cryptographic assumptions, and without relying on random oracles.
- **Strong Security.** The construction should satisfy the strongest possible definition of security (even in the presence of leakage). For example, encryption schemes should be secure against chosen *ciphertext* attack (CCA), while signatures should be *existentially* unforgeable.
- **Leakage Flexibility.** It should be possible to set the parameters of the schemes so that the leakage bound $\ell$ can come arbitrarily close to the size of the secret key $sk$.

In this work we design the first signature, encryption, ID and AKA schemes which overcome these limitations, and satisfy all the properties above. Moreover, all our constructions are generic, in several cases elegantly simplifying and generalizing the prior constructions (which did not have any efficient instantiations). We also introduce several tools of independent interest, such as the abstraction (and constructions) of *simulation extractable* NIZK arguments, and a new *deniable* DH-based AKA protocol based on any CCA-secure encryption.

##### Cryptography Against Continuous Memory Attacks (EPRINT 2010)

We say that a cryptographic scheme is Continuous Leakage-Resilient (CLR), if it allows users to refresh their secret keys, using only fresh local randomness, such that:

1. The scheme remains functional after any number of key refreshes, although the public key never changes. Thus, the outside world is neither affected by these key refreshes, nor needs to know about their frequency.
2. The scheme remains secure even if the adversary can continuously leak arbitrary information about the current secret-key of the system, as long as the amount of leaked information is bounded in between any two successive key refreshes. There is no bound on the total amount of information that can be leaked during the lifetime of the system.

In this work, we construct a variety of practical CLR schemes, including CLR one-way relations, CLR signatures, CLR identification schemes, and CLR authenticated key agreement protocols. For each of the above, we give general constructions, and then show how to instantiate them efficiently using a well established assumption on bilinear groups, called the K-Linear assumption (for any constant K >= 1). Our constructions are highly modular, and we develop many interesting techniques and building-blocks along the way, including: leakage-indistinguishable re-randomizable relations, homomorphic NIZKs, and leakage-of-ciphertext non-malleable encryption schemes.

Prior to our work, no truly CLR schemes were known, as previous leakage-resilient schemes suffer from one or more of the following drawbacks: (a) restrictions are placed on the type of allowed leakage, such as the axiom that only computation leaks information; (b) the overall amount of key leakage is bounded a-priori for the lifetime of the system and there is no method for refreshing keys; (c) the efficiency of the scheme degrades proportionally with the number of refreshes; (d) the key updates require an additional leak-free master secret key to be stored securely; (e) the scheme is only proven secure under a strong non-standard assumption.

##### Proofs of Retrievability via Hardness Amplification (EPRINT 2009)

Proofs of Retrievability (PoR), introduced by Juels and Kaliski, allow the client to store a file $F$ on an untrusted server, and later run an efficient audit protocol in which the server proves that it (still) possesses the client's data. Constructions of PoR schemes attempt to minimize the client and server storage, the communication complexity of an audit, and even the number of file-blocks accessed by the server during the audit. In this work, we identify several different variants of the problem (such as bounded-use vs. unbounded-use, knowledge-soundness vs. information-soundness), and give nearly optimal PoR schemes for each of these variants. Our constructions either improve (and generalize) the prior PoR constructions, or give the first known PoR schemes with the required properties. In particular, we

- Formally prove the security of an (optimized) variant of the bounded-use scheme of Juels and Kaliski [JuelsK07], without making any simplifying assumptions on the behavior of the adversary.
- Build the first unbounded-use PoR scheme where the communication complexity is linear in the security parameter and which does not rely on Random Oracles, resolving an open question of Shacham and Waters [ShachamW08].
- Build the first bounded-use scheme with *information-theoretic* security.

The main insight of our work comes from a simple connection between PoR schemes and the notion of *hardness amplification*, extensively studied in complexity theory. In particular, our improvements come from first abstracting a purely information-theoretic notion of *PoR codes*, and then building nearly optimal PoR codes using state-of-the-art tools from coding and complexity theory.
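A bounded-use audit in the Juels-Kaliski spirit can be caricatured as follows: before uploading, the client fixes one challenge per future audit (a random subset of blocks plus a digest of their contents), and each audit burns one challenge. This toy sketch is illustrative only; block sizes and audit counts are made up, and it omits the erasure coding and authentication that real PoR schemes need:

```python
import hashlib, random

def precompute_audits(file_blocks, num_audits, blocks_per_audit, rng):
    """Client-side: for each future audit, fix random block indices and
    remember a digest of those blocks. Client storage is O(num_audits)."""
    audits = []
    for _ in range(num_audits):
        idx = sorted(rng.sample(range(len(file_blocks)), blocks_per_audit))
        h = hashlib.sha256(b"".join(file_blocks[i] for i in idx)).digest()
        audits.append((idx, h))
    return audits

def server_respond(stored_blocks, idx):
    """Server-side: return the challenged blocks (honest behavior)."""
    return [stored_blocks[i] for i in idx]

def client_verify(audit, response):
    idx, h = audit
    return hashlib.sha256(b"".join(response)).digest() == h

rng = random.Random(42)
blocks = [bytes([i]) * 64 for i in range(100)]
audits = precompute_audits(blocks, num_audits=3, blocks_per_audit=10, rng=rng)

# An honest server passes the first audit.
idx, _ = audits[0]
assert client_verify(audits[0], server_respond(blocks, idx))

# A server that corrupted a challenged block fails it.
bad = list(blocks)
bad[idx[0]] = b"tampered".ljust(64, b"!")
assert not client_verify(audits[0], server_respond(bad, idx))
```

A server that has lost a constant fraction of the blocks fails each audit with constant probability; the hardness-amplification view above is about driving that soundness error down without blowing up the audit cost.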

##### EUROCRYPT 2008

##### Detection of Algebraic Manipulation with Applications to Robust Secret Sharing and Fuzzy Extractors (EPRINT 2008)

Consider an abstract storage device $\Sigma(\mathcal{G})$ that can hold a single element $x$ from a fixed, publicly known finite group $\mathcal{G}$. Storage is private in the sense that an adversary does not have read access to $\Sigma(\mathcal{G})$ at all. However, $\Sigma(\mathcal{G})$ is non-robust in the sense that the adversary can modify its contents by adding some offset $\Delta \in \mathcal{G}$. Due to the privacy of the storage device, the value $\Delta$ can only depend on an adversary's *a priori* knowledge of $x$. We introduce a new primitive called an *algebraic manipulation detection* (AMD) code, which encodes a source $s$ into a value $x$ stored on $\Sigma(\mathcal{G})$ so that any tampering by an adversary will be detected, except with a small error probability $\delta$. We give a nearly optimal construction of AMD codes, which can flexibly accommodate arbitrary choices for the length of the source $s$ and security level $\delta$. We use this construction in two applications:

- We show how to efficiently convert any linear secret sharing scheme into a *robust secret sharing scheme*, which ensures that no *unqualified subset* of players can modify their shares and cause the reconstruction of some value $s' \neq s$.
- We show how to build nearly optimal *robust fuzzy extractors* for several natural metrics. Robust fuzzy extractors enable one to reliably extract and later recover random keys from noisy and non-uniform secrets, such as biometrics, by relying only on *non-robust public storage*. In the past, such constructions were known only in the random oracle model, or required the entropy rate of the secret to be greater than half. Our construction relies on a randomly chosen common reference string (CRS) available to all parties.
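The nearly optimal AMD construction can be illustrated over a prime field: encode $s \in \mathbb{F}_p^d$ as $(s, x, f(x,s))$ with a fresh random $x$ and $f(x,s) = x^{d+2} + \sum_{i=1}^{d} s_i x^i$; any additive offset passes the tag check with probability at most about $(d+1)/p$. This is a simplified sketch over $\mathbb{F}_p$ with $p > d+2$; the paper treats more general groups and parameter ranges:

```python
import random

P = 2**61 - 1  # prime modulus; the argument needs p > d + 2

def f(x, s):
    """Tag polynomial f(x, s) = x^(d+2) + sum_i s_i * x^i over F_p."""
    d = len(s)
    tag = pow(x, d + 2, P)
    for i, si in enumerate(s):
        tag = (tag + si * pow(x, i + 1, P)) % P
    return tag

def amd_encode(s, rng):
    """Encode source s in F_p^d as (s, x, f(x, s)) with fresh random x."""
    x = rng.randrange(P)
    return (list(s), x, f(x, s))

def amd_verify(codeword):
    s, x, t = codeword
    return t == f(x, s)

rng = random.Random(7)
s = [3, 1, 4, 1, 5]
c = amd_encode(s, rng)
assert amd_verify(c)

# Tampering with the tag alone is always caught.
s2, x2, t2 = c
assert not amd_verify((s2, x2, (t2 + 1) % P))

# An additive offset on the source (tag unchanged) shifts f by x,
# so it is detected whenever x != 0, i.e. except with probability 1/p.
off = ([(s2[0] + 1) % P] + s2[1:], x2, t2)
assert x2 == 0 or not amd_verify(off)
```

The key point is that the adversary must choose its offset without seeing the random $x$, so the offset polynomial it induces in $f$ is nonzero and has few roots.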

##### Somewhat Non-Committing Encryption and Efficient Adaptively Secure Oblivious Transfer (EPRINT 2008)

Designing efficient cryptographic protocols tolerating adaptive adversaries, who are able to corrupt parties on the fly as the computation proceeds, has been an elusive task. Indeed, thus far no *efficient* protocols achieve adaptive security for general multi-party computation, or even for many specific two-party tasks such as oblivious transfer (OT). In fact, it is difficult and expensive to achieve adaptive security even for the task of *secure communication*, which is arguably the most basic task in cryptography.

In this paper we make progress in this area. First, we introduce a new notion called *semi-adaptive* security which is slightly stronger than static security but *significantly weaker than fully adaptive security*. The main difference between adaptive and semi-adaptive security is that, for semi-adaptive security, the simulator is not required to handle the case where *both* parties start out honest and one becomes corrupted later on during the protocol execution. As such, semi-adaptive security is much easier to achieve than fully adaptive security. We then give a simple, generic protocol compiler which transforms any semi-adaptively secure protocol into a fully adaptively secure one. The compilation effectively decomposes the problem of adaptive security into two (simpler) problems which can be tackled separately: the problem of semi-adaptive security and the problem of realizing a weaker variant of secure channels.

We solve the latter problem by means of a new primitive that we call *somewhat non-committing encryption*, resulting in significant efficiency improvements over the standard method for realizing (fully) secure channels using (fully) non-committing encryption. Somewhat non-committing encryption has two parameters: an equivocality parameter $\ell$ (measuring the number of ways that a ciphertext can be "opened") and the message size $k$. Our implementation is very efficient for small values of $\ell$, *even* when $k$ is large. This translates into a very efficient compilation of many semi-adaptively secure protocols (in particular, for a task with small input/output domains such as bit-OT) into a fully adaptively secure protocol.

Finally, we showcase our methodology by applying it to the recent Oblivious Transfer protocol by Peikert et al. [Crypto 2008], which is only secure against static corruptions, to obtain the first efficient (expected-constant round, expected-constant number of public-key operations), adaptively secure and composable bit-OT protocol.

##### One-Round Authenticated Key Agreement from Weak Secrets (EPRINT 2008)

We study the question of information-theoretically secure authenticated key agreement from weak secrets. In this setting, Alice and Bob share an $n$-bit secret $W$, which might *not* be uniformly random but the adversary has at least $k$ bits of uncertainty about it (formalized using conditional min-entropy). Alice and Bob wish to use $W$ to agree on a nearly uniform secret key $R$, over a public channel controlled by an *active* adversary Eve. We show that non-interactive (single-message) protocols do not work when $k \le \frac{n}{2}$, and require poor parameters even when $\frac{n}{2} < k \ll n$.

On the other hand, for arbitrary values of $k$, we design a communication efficient *two-message (i.e., one-round!)* protocol extracting nearly $k$ random bits. This dramatically improves the only previously known protocol of Renner and Wolf [RennerW03], which required $O(\lambda)$ rounds where $\lambda$ is the security parameter. Our solution takes a new approach by studying and constructing *"non-malleable" seeded randomness extractors*: if an attacker sees a random seed $X$ and comes up with an arbitrarily related seed $X'$, then we bound the relationship between $R = \mathrm{Ext}(W;X)$ and $R' = \mathrm{Ext}(W;X')$.

We also extend our one-round key agreement protocol to the "fuzzy" setting, where Alice and Bob share "close" (but not equal) secrets $W_A$ and $W_B$, and to the Bounded Retrieval Model (BRM) where the size of the secret $W$ is huge.
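The two conditions sketched above can be stated precisely. Writing $\widetilde{H}_\infty(W \mid E)$ for Eve's uncertainty about the weak secret, the requirements are roughly (a standard formulation, spelled out here for concreteness):

```latex
% Conditional min-entropy of the weak secret W given Eve's view E:
\widetilde{H}_\infty(W \mid E)
  \;=\; -\log \, \mathbb{E}_{e \leftarrow E}
        \Big[\max_{w} \Pr[W = w \mid E = e]\Big]
  \;\ge\; k.

% Non-malleable extractor: for every adversarial seed modification A
% with A(X) \ne X, the honest output looks uniform even given the
% output on the tampered seed:
\big(\mathrm{Ext}(W; X),\, \mathrm{Ext}(W; A(X)),\, X\big)
  \;\approx_{\varepsilon}\;
\big(U,\, \mathrm{Ext}(W; A(X)),\, X\big).
```

The second condition is what lets a one-round protocol survive an active Eve: even after she maulingly rewrites Alice's seed, the key Alice derives remains close to uniform from Eve's perspective.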

##### Isolated Proofs of Knowledge and Isolated Zero Knowledge (EPRINT 2007)

We introduce a new notion called $\ell$-isolated proofs of knowledge ($\ell$-IPoK). These are proofs of knowledge where a cheating prover is allowed to exchange up to $\ell$ bits of communication with some external adversarial environment during the run of the proof. Without any additional setup assumptions, no witness hiding protocol can be an $\ell$-IPoK for *unbounded* values of $\ell$. However, for any *pre-defined* threshold $\ell$ and any relation in NP, we construct an $\ell$-IPoK protocol for that relation. The resulting protocols are zero knowledge (ZK) in the standard sense, i.e., w.r.t. a verifier that communicates only with the prover during the proof. The cost of having a large threshold $\ell$ is a large communication complexity of the constructed protocol. We analyze these costs and present a solution that is asymptotically optimal.

If a cheating verifier is allowed to communicate arbitrarily with an external environment, it is not possible to construct an $\ell$-IPoK that is also ZK with respect to such a verifier. As another new notion, we define $\ell$-isolated zero knowledge ($\ell$-IZK) where the verifier is $\ell$-isolated. For every relation in NP and every $\ell$, we construct an $\ell$-IPoK protocol that is also $\ell$-IZK.

We describe several applications of $\ell$-IPoK protocols under the physical assumption that one can $\ell$-isolate a prover for the duration of the proof phase. Firstly, we can use a witness indistinguishable (WI) $\ell$-IPoK to prevent "man-in-the-middle" attacks on identification schemes. Prior results for this scenario required all verifiers to register keys under a PKI, or the ability to fully isolate the prover. Secondly, a partially isolated prover can register a public key and use a WI $\ell$-IPoK to prove knowledge of the corresponding secret key to another party acting as a verifier. This allows us to set up a PKI where the key registrant does not need to trust the Certificate Authority. The PKI is not perfect since the proof is only witness indistinguishable and not zero knowledge. In a companion paper, we show how to set up such a PKI and use it to implement arbitrary multiparty computation securely in the UC framework without relying on any trusted third parties.

2007

EPRINT

Universally Composable Multiparty Computation with Partially Isolated Parties
Abstract

It is well known that universally composable multiparty computation
cannot, in general, be achieved in the standard model without setup
assumptions when the adversary can corrupt an arbitrary number of
players. One way to get around this problem is by having a trusted
third party generate some global setup such as a common reference
string (CRS) or a public key infrastructure (PKI). Recently, an
alternative solution was proposed by Katz in \cite{Katz}, suggesting
that one may rely on physical assumptions rather than trusted third
parties. Concretely, the solution assumed it is physically possible to
construct tamper-proof hardware tokens which can be run in complete
isolation from the surrounding environment. Here we improve upon the
work of \cite{Katz} by constructing a scheme in which the tokens
only need to be partially isolated and may have some {\em limited
communication with the environment}. In addition, we improve on
Katz's work by presenting a scheme which is secure against
\emph{adaptive adversaries} and is based on \emph{general
cryptographic assumptions}. We also consider an alternative
scenario, in which there are some trusted third parties but no
single such party is trusted by all of the players. This compromise
allows us to limit the use of the physical setup and hence might be
preferred in practice.

#### Program Committees

- Crypto 2018
- Eurocrypt 2017
- TCC 2017
- TCC 2015
- PKC 2014
- Asiacrypt 2014
- Crypto 2013
- TCC 2012

#### Coauthors

- Shweta Agrawal (1)
- Joël Alwen (3)
- Gilad Asharov (1)
- Allison Bishop (2)
- Nir Bitansky (4)
- Elette Boyle (2)
- Zvika Brakerski (1)
- Ran Canetti (2)
- David Cash (2)
- Yilei Chen (1)
- Ronald Cramer (2)
- Dana Dachman-Soled (1)
- Ivan Damgård (4)
- Srinivas Devadas (1)
- Yevgeniy Dodis (17)
- Stefan Dziembowski (2)
- Sebastian Faust (1)
- Serge Fehr (2)
- Christopher W. Fletcher (1)
- Juan A. Garay (2)
- Sanjam Garg (2)
- Rosario Gennaro (1)
- Craig Gentry (4)
- Shafi Goldwasser (1)
- Shai Halevi (5)
- Ariel Hamlin (2)
- Kristiyan Haralambiev (3)
- Carmit Hazay (2)
- Brett Hemenway (1)
- Dennis Hofheinz (2)
- Zahra Jafargholi (5)
- Abhishek Jain (3)
- Yael Tauman Kalai (3)
- Chethan Kamath (1)
- Tomasz Kazana (2)
- Eike Kiltz (1)
- Saleet Klein (1)
- Karen Klein (1)
- Ilan Komargodski (1)
- Lucas Kowalczyk (1)
- Stephan Krenn (1)
- Alptekin Küpçü (2)
- Adriana López-Alt (5)
- Steve Lu (1)
- Vadim Lyubashevsky (2)
- Tal Malkin (1)
- Tal Moran (1)
- Pratyay Mukherjee (3)
- Moni Naor (1)
- Jesper Buus Nielsen (4)
- Ryo Nishimaki (4)
- Tatsuaki Okamoto (2)
- Rafail Ostrovsky (3)
- Carles Padró (2)
- Omer Paneth (2)
- Alain Passelègue (1)
- Valerio Pastro (2)
- Krzysztof Pietrzak (6)
- Willy Quach (2)
- Rajmohan Rajaraman (2)
- Vanishree Rao (2)
- Mariana Raykova (3)
- Ling Ren (1)
- Ron D. Rothblum (2)
- Alessandra Scafuro (2)
- Gil Segev (3)
- Adi Shamir (2)
- Abhi Shelat (1)
- Elaine Shi (1)
- Noah Stephens-Davidowitz (2)
- Eran Tromer (1)
- Jonathan Ullman (1)
- Salil P. Vadhan (2)
- Vinod Vaikuntanathan (4)
- Marten van Dijk (1)
- Mayank Varia (2)
- Daniele Venturi (1)
- Shabsi Walfish (1)
- Brent Waters (3)
- Hoeteck Wee (3)
- Mor Weiss (3)
- Mark Zhandry (2)
- Hong-Sheng Zhou (2)
- Giorgos Zirdelis (1)