## CryptoDB

### Zvika Brakerski

#### Affiliation: Weizmann Institute of Science

#### Publications


**2019, EUROCRYPT: Degree 2 is Complete for the Round-Complexity of Malicious MPC**

We show, via a non-interactive reduction, that the existence of a secure multi-party computation (MPC) protocol for degree-2 functions implies the existence of a protocol with the same round complexity for general functions. Thus, when considering the round complexity of MPC, it suffices to consider very simple functions.

Our completeness theorem applies in various settings: information theoretic and computational, fully malicious and malicious with various types of aborts. In fact, we give a master theorem from which all individual settings follow as direct corollaries. Our basic transformation does not require any additional assumptions and incurs communication and computation blow-up which is polynomial in the number of players and in $$S, 2^D$$, where S and D are the circuit size and depth of the function to be computed. Using one-way functions as an additional assumption, the exponential dependence on the depth can be removed.

As a consequence, we are able to push the envelope on the state of the art in various settings of MPC, including the following cases.
- A 3-round perfectly-secure protocol (with guaranteed output delivery) against an active adversary that corrupts less than 1/4 of the parties.
- A 2-round statistically-secure protocol that achieves security with “selective abort” against an active adversary that corrupts less than half of the parties.
- Assuming one-way functions, a 2-round computationally-secure protocol that achieves security with (standard) abort against an active adversary that corrupts less than half of the parties. This gives a new and conceptually simpler proof of the recent result of Ananth et al. (Crypto 2018).
Technically, our non-interactive reduction draws from the encoding method of Applebaum, Brakerski and Tsabary (TCC 2018). We extend these methods to ones that can be meaningfully analyzed even in the presence of malicious adversaries.
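To see why degree-2 functions are expressive enough, note that over GF(2) the AND gate is a degree-2 monomial and XOR is linear, so any boolean circuit decomposes into degree-at-most-2 gate evaluations. A minimal illustration (this sketch is ours, not the paper's protocol; the gate and function names are illustrative):

```python
# Toy illustration: any boolean circuit decomposes into degree-<=2
# polynomial gates over GF(2).
# AND(x, y) = x*y  (degree 2),  XOR(x, y) = x + y  (degree 1),
# NOT(x)    = x + 1 (affine, degree 1).

def and_gate(x, y):  # degree-2 monomial over GF(2)
    return (x * y) % 2

def xor_gate(x, y):  # degree-1 (linear) over GF(2)
    return (x + y) % 2

def not_gate(x):     # affine, degree 1
    return (x + 1) % 2

# Majority of three bits, built only from degree-<=2 gates:
# MAJ(a, b, c) = ab XOR bc XOR ca
def majority(a, b, c):
    return xor_gate(xor_gate(and_gate(a, b), and_gate(b, c)),
                    and_gate(c, a))

# Exhaustive check against the truth table.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert majority(a, b, c) == (1 if a + b + c >= 2 else 0)
```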

**2019, EUROCRYPT: On Quantum Advantage in Information Theoretic Single-Server PIR**

In (single-server) Private Information Retrieval (PIR), a server holds a large database $${\mathtt {DB}}$$ of size n, and a client holds an index $$i \in [n]$$ and wishes to retrieve $${\mathtt {DB}}[i]$$ without revealing i to the server. It is well known that information theoretic privacy, even against an “honest but curious” server, requires $$\varOmega (n)$$ communication complexity. This is true even if quantum communication is allowed, and is due to the ability of such an adversarial server to execute the protocol on a superposition of databases instead of on a specific database (“input purification attack”).

Nevertheless, there have been some proposals of protocols that achieve sub-linear communication and appear to provide some notion of privacy. Most notably, a protocol due to Le Gall (ToC 2012) with communication complexity $$O(\sqrt{n})$$, and a protocol by Kerenidis et al. (QIC 2016) with communication complexity $$O(\log (n))$$ and O(n) shared entanglement.

We show that, in a sense, input purification is the only potent adversarial strategy, and protocols such as the two above are secure in a restricted variant of the quantum honest-but-curious (a.k.a. specious) model. More explicitly, we propose a restricted privacy notion called anchored privacy, where the adversary is forced to execute on a classical database (i.e. the execution is anchored to a classical database). We show that for measurement-free protocols, anchored security against honest adversarial servers implies anchored privacy even against specious adversaries.

Finally, we prove that even with (unlimited) pre-shared entanglement it is impossible to achieve security in the standard specious model with sub-linear communication, thus further substantiating the necessity of our relaxation. This lower bound may be of independent interest (in particular recalling that PIR is a special case of Fully Homomorphic Encryption).
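For contrast with the sub-linear protocols above, the $$\varOmega (n)$$ bound is matched by the trivial protocol in which the server simply sends the whole database; it is perfectly private because the response is independent of the query. A minimal sketch (function names are illustrative, not from any of the cited works):

```python
# Trivial single-server PIR baseline: the server sends the entire
# database. Communication is n entries (matching the Omega(n) lower
# bound for information-theoretic privacy), and the server learns
# nothing about the client's index i.

def server_respond(db):
    # The response does not depend on any query, so privacy is perfect.
    return list(db)

def client_retrieve(response, i):
    return response[i]

db = [0, 1, 1, 0, 1, 0, 0, 1]   # n = 8
i = 5
response = server_respond(db)    # n bits of communication
assert client_retrieve(response, i) == db[i]
```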

**2019, EUROCRYPT: Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing**

We present a worst case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires an extremely high noise rate of $$1/2-1/\mathrm{poly}(n)$$. Thus we can only show that “very hard” LPN is harder than some “very mildly hard” worst case problem. We note that LPN with noise $$1/2-1/\mathrm{poly}(n)$$ already implies symmetric cryptography.

Specifically, we consider the (n, m, w)-nearest codeword problem ((n, m, w)-NCP), which takes as input a generating matrix for a binary linear code in m dimensions and rank n, and a target vector which is very close to the code (Hamming distance at most w), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $$w/m \approx {\log ^2 n}/{n}$$, (n, m, w)-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $$1/2-1/\mathrm{poly}(n)$$.

Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that (n, m, w)-NCP with the aforementioned parameters lies in the complexity class $$\mathrm {{Search}\hbox {-}\mathcal {BPP}}^\mathcal {SZK}$$ (i.e. reducible to a problem that has a statistical zero knowledge protocol), implying that it is unlikely to be $$\mathcal {NP}$$-hard. We then show that the hardness of LPN with very low noise rate $$\log ^2(n)/n$$ implies the existence of collision resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $$\mathcal {BPP}^\mathcal {SZK}$$).
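As a rough illustration of the LPN regime discussed above (this sketch is ours, not the paper's): an LPN sample is a pair $$(a, \langle a, s\rangle + e \bmod 2)$$ with a uniform vector a and a Bernoulli error bit e, and at noise rate $$1/2-1/\mathrm{poly}(n)$$ each label is nearly an unbiased coin:

```python
import random

# Illustrative LPN sample generator (a sketch, not from the paper).
# A sample is (a, <a, s> + e mod 2) with a uniform in {0,1}^n and e a
# Bernoulli(eta) error bit. The paper's regime uses the extremely high
# noise rate eta = 1/2 - 1/poly(n).

def lpn_samples(s, m, eta, rng):
    n = len(s)
    out = []
    for _ in range(m):
        a = [rng.randrange(2) for _ in range(n)]
        e = 1 if rng.random() < eta else 0
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
        out.append((a, b))
    return out

rng = random.Random(0)
n = 32
s = [rng.randrange(2) for _ in range(n)]
eta = 0.5 - 1.0 / n              # noise rate 1/2 - 1/poly(n)
samples = lpn_samples(s, 100, eta, rng)
# At this noise rate each label is close to an unbiased coin, which is
# what makes this regime of LPN so hard to distinguish from random.
```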

**2019, TCC: (Pseudo) Random Quantum States with Binary Phase**

We prove a quantum information-theoretic conjecture due to Ji, Liu and Song (CRYPTO 2018) which suggested that a uniform superposition with random binary phase is statistically indistinguishable from a Haar random state. That is, any polynomial number of copies of the aforementioned state is within exponentially small trace distance from the same number of copies of a Haar random state.

As a consequence, we get a provable elementary construction of pseudorandom quantum states from post-quantum pseudorandom functions. Generating pseudorandom quantum states is desirable for physical applications as well as for computational tasks such as quantum money. We observe that replacing the pseudorandom function with a (2t)-wise independent function (either in our construction or in previous work) results in an explicit construction of quantum state t-designs for all t. In fact, we show that the circuit complexity (in terms of both circuit size and depth) of constructing t-designs is bounded by that of (2t)-wise independent functions. Explicitly, while in prior literature t-designs required linear depth (for $$t > 2$$), this observation shows that polylogarithmic depth suffices for all t.

We note that our constructions yield pseudorandom states and state designs with only real-valued amplitudes, which was not previously known. Furthermore, generating these states requires a quantum circuit of a restricted form: one layer of Hadamard gates, followed by a sequence of Toffoli gates. This structure may be useful for efficiency and simplicity of implementation.
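The state in question can be written out explicitly as $$2^{-n/2}\sum _x (-1)^{f(x)}|x\rangle $$. A tiny numerical sketch (ours, for illustration only) that builds this vector for a random binary function f and checks that it is a valid state with real amplitudes:

```python
import numpy as np

# The binary-phase state |psi_f> = 2^{-n/2} * sum_x (-1)^{f(x)} |x>,
# written out explicitly for a tiny n. Illustration only: the theorem
# concerns f drawn as a (pseudo)random function over a large domain.

def binary_phase_state(f, n):
    dim = 2 ** n
    amps = np.array([(-1) ** f(x) for x in range(dim)], dtype=float)
    return amps / np.sqrt(dim)

rng = np.random.default_rng(0)
n = 4
table = rng.integers(0, 2, size=2 ** n)   # a random binary function f
psi = binary_phase_state(lambda x: int(table[x]), n)

assert np.isclose(np.linalg.norm(psi), 1.0)  # a valid quantum state
assert np.all(np.isreal(psi))                # real-valued amplitudes
```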

**2019, TCC: Leveraging Linear Decryption: Rate-1 Fully-Homomorphic Encryption and Time-Lock Puzzles**

We show how to combine a fully-homomorphic encryption scheme with linear decryption and a linearly-homomorphic encryption scheme to obtain constructions with new properties. Specifically, we present the following new results.

1. **Rate-1 Fully-Homomorphic Encryption:** We construct the first scheme with message-to-ciphertext length ratio (i.e., rate) $$1-\sigma $$ for $$\sigma = o(1)$$. Our scheme is based on the hardness of the Learning with Errors (LWE) problem, and $$\sigma $$ is proportional to the noise-to-modulus ratio of the assumption. Our building block is a construction of a new high-rate linearly-homomorphic encryption. One application of this result is the first general-purpose secure function evaluation protocol in the preprocessing model where the communication complexity is within an additive factor of the optimal insecure protocol.
2. **Fully-Homomorphic Time-Lock Puzzles:** We construct the first time-lock puzzle where one can evaluate any function over a set of puzzles without solving them, from standard assumptions. Prior work required the existence of sub-exponentially hard indistinguishability obfuscation.

**2019, ASIACRYPT: Order-LWE and the Hardness of Ring-LWE with Entropic Secrets**

We propose a generalization of the celebrated Ring Learning with Errors (RLWE) problem (Lyubashevsky, Peikert and Regev, Eurocrypt 2010, Eurocrypt 2013), wherein the ambient ring is not the ring of integers of a number field, but rather an order (a full rank subring). We show that our Order-LWE problem enjoys worst-case hardness with respect to short-vector problems in invertible-ideal lattices of the order.

The definition allows us to provide a new analysis for the hardness of the abundantly used Polynomial-LWE (PLWE) problem (Stehlé et al., Asiacrypt 2009), different from the one recently proposed by Rosca, Stehlé and Wallet (Eurocrypt 2018). This suggests that Order-LWE may be used to analyze and possibly design useful relaxations of RLWE.

We show that Order-LWE can naturally be harnessed to prove security for RLWE instances where the “RLWE secret” (which often corresponds to the secret-key of a cryptosystem) is not sampled uniformly as required for RLWE hardness. We start by showing worst-case hardness even if the secret is sampled from a subring of the sample space. Then, we study the case where the secret is sampled from an ideal of the sample space or a coset thereof (equivalently, some of its CRT coordinates are fixed or leaked). In the latter case, we show an interesting threshold phenomenon where the amount of RLWE noise determines whether the problem is tractable.

Lastly, we address the long-standing question of whether a high-entropy secret is sufficient for RLWE to be intractable. Our result on sampling from ideals shows that simply requiring high entropy is insufficient. We therefore propose a broad class of distributions where we conjecture that hardness should hold, and provide evidence via reduction to a concrete lattice problem.

**2018, EUROCRYPT**

**2018, CRYPTO: Quantum FHE (Almost) As Secure As Classical**

Fully homomorphic encryption schemes (FHE) make it possible to apply arbitrary efficient computation to encrypted data without decrypting it first. In Quantum FHE (QFHE) we may want to apply an arbitrary quantumly efficient computation to (classical or quantum) encrypted data.

We present a QFHE scheme with classical key generation (and classical encryption and decryption if the encrypted message is itself classical) with properties comparable to classical FHE. Security relies on the hardness of the learning with errors (LWE) problem with polynomial modulus, which translates to the worst case hardness of approximating short vector problems in lattices to within a polynomial factor. Up to polynomial factors, this matches the best known assumption for classical FHE. Similarly to the classical setting, relying on LWE alone only implies leveled QFHE (where the public key length depends linearly on the maximal allowed evaluation depth). An additional circular security assumption is required to support completely unbounded depth. Interestingly, our circular security assumption is the same assumption that is made to achieve unbounded depth multi-key classical FHE.

Technically, we rely on the outline of Mahadev (arXiv 2017), which achieves this functionality by relying on a super-polynomial LWE modulus and on a new circular security assumption. We observe a connection between the functionality of evaluating quantum gates and the circuit privacy property of classical homomorphic encryption. While this connection is not sufficient to imply QFHE by itself, it leads us to a path that ultimately allows using classical FHE schemes with polynomial modulus towards constructing QFHE with the same modulus.

**2018, PKC: Learning with Errors and Extrapolated Dihedral Cosets**

The hardness of the learning with errors (LWE) problem is one of the most fruitful resources of modern cryptography. In particular, it is one of the most prominent candidates for secure post-quantum cryptography. Understanding its quantum complexity is therefore an important goal.

We show that under quantum polynomial time reductions, LWE is equivalent to a relaxed version of the dihedral coset problem (DCP), which we call extrapolated DCP (eDCP). The extent of extrapolation varies with the LWE noise rate. By considering different extents of extrapolation, our result generalizes Regev's famous proof that if DCP is in BQP (quantum poly-time) then so is LWE (FOCS 02). We also discuss a connection between eDCP and Childs and van Dam's algorithm for generalized hidden shift problems (SODA 07).

Our result implies that a BQP solution for LWE might not require the full power of solving DCP, but rather only a solution for its relaxed version, eDCP, which could be easier.
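For reference, an LWE sample is a pair $$(a, \langle a, s\rangle + e \bmod q)$$ with a uniform in $$\mathbb {Z}_q^n$$ and e a small error; the noise rate is what governs the extent of extrapolation above. A minimal sample generator (our sketch, not from the paper; toy parameters):

```python
import random

# Illustrative LWE sample generator (a sketch, not from the paper).
# A sample is (a, <a, s> + e mod q) with a uniform in Z_q^n and e a
# small error drawn from [-noise_bound, noise_bound].

def lwe_samples(s, q, m, noise_bound, rng):
    n = len(s)
    out = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]
        e = rng.randint(-noise_bound, noise_bound)
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        out.append((a, b))
    return out

rng = random.Random(0)
q, n = 97, 8                      # toy parameters, far too small
s = [rng.randrange(q) for _ in range(n)]
samples = lwe_samples(s, q, 20, 2, rng)
assert all(0 <= b < q for _, b in samples)
```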

**2018, TCC: Perfect Secure Computation in Two Rounds**

We show that any multi-party functionality can be evaluated using a two-round protocol with perfect correctness and perfect semi-honest security, provided that the majority of parties are honest. This settles the round complexity of information-theoretic semi-honest MPC, resolving a longstanding open question (cf. Ishai and Kushilevitz, FOCS 2000). The protocol is efficient for $${\mathrm {NC}}^1$$ functionalities. Furthermore, given black-box access to a one-way function, the protocol can be made efficient for any polynomial functionality, at the cost of only guaranteeing computational security.

Technically, we extend and relax the notion of randomized encoding to specifically address multi-party functionalities. The property of a multi-party randomized encoding (MPRE) is that if the functionality g is an encoding of the functionality f, then for any (permitted) coalition of players, their respective outputs and inputs in g allow them to simulate their respective inputs and outputs in f, without learning anything else, including the other outputs of f.

**2018, TCC: Two-Message Statistically Sender-Private OT from LWE**

We construct a two-message oblivious transfer (OT) protocol without setup that guarantees statistical privacy for the sender even against malicious receivers. Receiver privacy is game based and relies on the hardness of learning with errors (LWE). This flavor of OT has been a central building block for minimizing the round complexity of witness indistinguishable and zero knowledge proof systems, non-malleable commitment schemes and multi-party computation protocols, as well as for achieving circuit privacy for homomorphic encryption in the malicious setting. Prior to this work, all candidates in the literature from standard assumptions relied on number theoretic assumptions and were thus insecure in the post-quantum setting. This work provides the first (presumed) post-quantum secure candidate, and thus makes it possible to instantiate the aforementioned applications in a post-quantum secure manner.

Technically, we rely on the transference principle: either a lattice or its dual must have short vectors. Short vectors, in turn, can be translated to information loss in encryption. Thus, encrypting one message with respect to the lattice and one with respect to its dual guarantees that at least one of them will be statistically hidden.

**2016, EUROCRYPT**

**2010, EPRINT: A Framework for Efficient Signatures, Ring Signatures and Identity Based Encryption in the Standard Model**

In this work, we present a generic framework for constructing efficient signature schemes, ring signature schemes, and identity based encryption schemes, all in the standard model (without relying on random oracles).

We start by abstracting the recent work of Hohenberger and Waters (Crypto 2009), and specifically their “prefix method”. We show a transformation taking a signature scheme with a very weak security guarantee (a notion that we call a-priori-message unforgeability under static chosen message attack) and producing a fully secure signature scheme (i.e., existentially unforgeable under adaptive chosen message attack). Our transformation uses the “prefix method” and the notion of chameleon hash functions, defined by Krawczyk and Rabin (NDSS 2000). Constructing such weakly secure schemes seems to be significantly easier than constructing fully secure ones, and we present *simple* constructions based on the RSA assumption, the *short integer solution* (SIS) assumption, and the *computational Diffie-Hellman* (CDH) assumption over bilinear groups.

Next, we observe that this general transformation also applies to the regime of ring signatures. Using this observation, we construct new (provably secure) ring signature schemes: one is based on the *short integer solution* (SIS) assumption, and the other is based on the CDH assumption over bilinear groups. As a building block for these constructions, we define a primitive that we call *ring trapdoor functions*. We show that ring trapdoor functions imply ring signatures under a weak definition, which enables us to apply our transformation to achieve full security.

Finally, we show a connection between ring signatures and identity based encryption (IBE) schemes. Using this connection, and using our new constructions of ring signature schemes, we obtain two IBE schemes: the first is based on the *learning with errors* (LWE) assumption, and is similar to the recently introduced IBE schemes of Peikert, Agrawal-Boyen and Cash-Hofheinz-Kiltz (2009); the second is based on the $d$-linear assumption over bilinear groups.

**2010, EPRINT: Circular and Leakage Resilient Public-Key Encryption Under Subgroup Indistinguishability (or: Quadratic Residuosity Strikes Back)**

The main results of this work are new public-key encryption schemes that, under the quadratic residuosity (QR) assumption (or Paillier's decisional composite residuosity (DCR) assumption), achieve key-dependent message security as well as high resilience to secret key leakage and high resilience to the presence of auxiliary input information. In particular, under what we call the *subgroup indistinguishability assumption*, of which QR and DCR are special cases, we can construct a scheme that has:

- **Key-dependent message (circular) security.** The scheme achieves security even when encrypting affine functions of its own secret key (in fact, w.r.t. affine “key-cycles” of predefined length). It also meets the requirements for extending key-dependent message security to broader classes of functions beyond affine functions, using the techniques of [BGK, ePrint09] or [BHHI, ePrint09].
- **Leakage resilience.** The scheme remains secure even if any adversarial low-entropy (efficiently computable) function of the secret key is given to the adversary. A proper selection of parameters allows for a “leakage rate” of $(1-o(1))$ of the length of the secret key.
- **Auxiliary-input security.** The scheme remains secure even if any sufficiently *hard to invert* (efficiently computable) function of the secret key is given to the adversary.

Our scheme is the first to achieve key-dependent security and auxiliary-input security based on the DCR and QR assumptions. Previous schemes that achieved these properties relied either on the DDH or LWE assumptions. The proposed scheme is also the first to achieve leakage resilience for a leakage rate of $(1-o(1))$ of the secret-key length under the QR assumption. We note that leakage resilient schemes under the DCR and QR assumptions, for the restricted case of a composite modulus that is a product of safe primes, were implied by the work of [NS, Crypto09], using hash proof systems. However, under the QR assumption, known constructions of hash proof systems only yield a leakage rate of $o(1)$ of the secret-key length.
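For background on the QR assumption used above, here is a Goldwasser-Micali-style bit encryption sketch. This is emphatically not the scheme from the abstract; it only illustrates the residue/non-residue subgroup structure that the QR assumption provides (toy parameters, no real security):

```python
import math
import random

# Background sketch: Goldwasser-Micali-style bit encryption under the
# quadratic residuosity (QR) assumption. NOT the abstract's scheme.
# Toy primes -- far too small for any real security.

p, q = 499, 547                  # toy primes, both = 3 (mod 4)
N = p * q

def legendre(a, prime):
    # Euler's criterion: 1 if a is a QR mod prime, prime-1 otherwise.
    return pow(a, (prime - 1) // 2, prime)

def find_nonresidue(rng):
    # y must be a non-residue mod both p and q (Jacobi symbol +1 mod N).
    while True:
        y = rng.randrange(2, N)
        if legendre(y, p) == p - 1 and legendre(y, q) == q - 1:
            return y

def encrypt(bit, y, rng):
    r = rng.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = rng.randrange(2, N)
    return (pow(y, bit, N) * pow(r, 2, N)) % N   # QR mod N iff bit == 0

def decrypt(c):
    return 0 if legendre(c, p) == 1 else 1       # uses the factor p

rng = random.Random(1)
y = find_nonresidue(rng)
for b in (0, 1, 1, 0):
    assert decrypt(encrypt(b, y, rng)) == b
```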

**2010, EPRINT: Cryptography Resilient to Continual Memory Leakage**

In recent years, there has been a major effort to design cryptographic schemes that remain secure even if part of the secret key is leaked. This is due to a recent proliferation of side channel attacks which, through various physical means, can recover part of the secret key. We explore the possibility of achieving security even with continual leakage, i.e., even if some information is leaked each time the key is used.

We show how to securely update a secret key while information is leaked: we construct schemes that remain secure even if an attacker, *at each time period*, can probe the entire memory (containing a secret key) and “leak” up to a $(1-o(1))$ fraction of the secret key. The attacker may also probe the memory during the updates, and leak $O(\log k)$ bits, where $k$ is the security parameter (relying on subexponential hardness allows $k^\epsilon$ bits of leakage during each update process). All of the above is achieved without restricting the model as is done in previous works (e.g. by assuming that “only computation leaks information” [Micali-Reyzin, TCC04]).

Specifically, under the decisional linear assumption on bilinear groups (which allows for a leakage rate of $(1/2-o(1))$) or the symmetric external Diffie-Hellman assumption (which allows for a leakage rate of $(1-o(1))$), we achieve the above for public-key encryption, identity-based encryption, and signature schemes. Prior to this work, it was not known how to construct public-key encryption schemes even in the more restricted model of [MR].

The main contributions of this work are (1) showing how to securely update a secret key while information is leaked (in the more general model) and (2) giving public-key encryption (and IBE) schemes that are resilient to continual leakage.

#### Program Committees

- Eurocrypt 2020
- Crypto 2019
- TCC 2018
- PKC 2017
- TCC 2017
- TCC 2016
- Crypto 2015
- Eurocrypt 2014
- Crypto 2013

#### Coauthors

- Dorit Aharonov (1)
- Prabhanjan Ananth (1)
- Benny Applebaum (4)
- Boaz Barak (1)
- Mihir Bellare (1)
- Daniel Benarroch (1)
- Nir Bitansky (1)
- Madalina Bolboceanu (1)
- Chris Brzuska (1)
- David Cash (1)
- Kai-Min Chung (1)
- Nico Döttling (2)
- Nils Fleischhacker (1)
- Sanjam Garg (1)
- Craig Gentry (2)
- Shafi Goldwasser (4)
- Ayal Green (1)
- Shai Halevi (3)
- Yael Tauman Kalai (6)
- Jonathan Katz (2)
- Elena Kirshanova (1)
- Ilan Komargodski (4)
- Pravesh K. Kothari (1)
- Ching-Yi Lai (1)
- Tancrède Lepoint (2)
- Alex Lombardi (1)
- Vadim Lyubashevsky (1)
- Giulio Malavolta (1)
- Moni Naor (1)
- Omer Paneth (1)
- Renen Perlman (3)
- Antigoni Polychroniadou (1)
- Thomas Ristenpart (1)
- Guy N. Rothblum (4)
- Amit Sahai (1)
- Or Sattath (1)
- Gil Segev (13)
- Hovav Shacham (1)
- Devika Sharma (1)
- Omri Shmueli (1)
- Damien Stehlé (1)
- Mehdi Tibouchi (1)
- Rotem Tsabary (4)
- Vinod Vaikuntanathan (11)
- Hoeteck Wee (2)
- Weiqiang Wen (1)
- Daniel Wichs (1)
- Arkady Yerukhimovich (1)
- Scott Yilek (1)