## CryptoDB

### Omer Paneth

#### Publications

**2022, TCC: Verifiable Private Information Retrieval**

A computational PIR scheme allows a client to privately query a database hosted on a single server without downloading the entire database. We introduce the notion of verifiable PIR (vPIR) where the server can convince the client that the database satisfies certain properties without additional rounds and while keeping the communication sub-linear. For example, the server can prove that the number of rows in the database that satisfy a predicate P is exactly n.
We define security by modeling vPIR as an ideal functionality and following the real-ideal paradigm. Starting from a standard PIR scheme, we construct a vPIR scheme for any database property that can be verified by a machine that reads the database once and maintains a bounded size state between rows. We also construct vPIR with public verification based on LWE or on DLIN. The main technical hurdle is to demonstrate a simulator that extracts a long input from an adversary that sends a single short message.
Our vPIR constructions are based on the notion of batch arguments for NP. As a contribution of independent interest, we show that batch arguments are equivalent to quasi-arguments, a relaxation of SNARKs that is known to imply succinct arguments for various sub-classes of NP.
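
The supported class of database properties is exactly those checkable by a single-pass, bounded-state machine. As a purely illustrative sketch (all names hypothetical, no cryptography involved), here is such a machine in Python for the example property above: counting the rows that satisfy a predicate P.

```python
# Hypothetical sketch of a property in the class the vPIR construction
# supports: verifiable by a machine that reads the database once and
# keeps only a bounded-size state (here, a single counter) between rows.

def rows_satisfying(database, predicate):
    """Single pass over the rows; the only carried state is one counter."""
    state = 0
    for row in database:
        if predicate(row):
            state += 1
    return state

# The server would prove that this count equals n for a public predicate P,
# while the client's communication stays sub-linear in the database size.
database = [{"balance": 10}, {"balance": -3}, {"balance": 7}]
P = lambda row: row["balance"] > 0
assert rows_satisfying(database, P) == 2
```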

**2022, TCC: PPAD is as Hard as LWE and Iterated Squaring**

One of the most fundamental results in game theory is that every game has a Nash equilibrium, an assignment of (randomized) strategies to players with the stability property that no individual player can benefit from deviating from the assigned strategy. It is not known how to efficiently *compute* such a Nash equilibrium --- the computational complexity of this task is characterized by the class PPAD, but the relation of PPAD to other problems and well-known complexity classes is not precisely understood. In recent years there has been mounting evidence, based on cryptographic tools and techniques, showing the hardness of PPAD.
We continue this line of research by showing that PPAD is as hard as *learning with errors* (LWE) and the *iterated squaring* (IS) problem, two standard problems in cryptography. Our work improves over prior hardness results that relied on either (1) sub-exponential assumptions or (2) "obfustopia," which can currently be based on a particular combination of three assumptions. Our work additionally establishes *public-coin* hardness for PPAD (computational hardness for a publicly sampleable distribution of instances) that seems out of reach of the obfustopia approach.
Following the work of Choudhuri et al. (STOC 2019) and subsequent works, our hardness result is obtained by constructing an *unambiguous and incrementally-updateable* succinct non-interactive argument for IS, whose soundness relies on polynomial hardness of LWE. The result also implies a verifiable delay function with unique proofs, which may be of independent interest.
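
For concreteness, the iterated squaring computation referenced above is just T sequential modular squarings; a minimal Python sketch with toy parameters (the modulus and sizes are illustrative only):

```python
# Minimal sketch of the iterated squaring (IS) problem: given x and T,
# compute x^(2^T) mod N. When the factorization of N is unknown, no way
# to shortcut the T sequential squarings is known, which is why IS serves
# as a hardness source. Toy parameters only; real instances use a large T
# and a ~2048-bit modulus of unknown factorization.

def iterated_squaring(x: int, T: int, N: int) -> int:
    y = x % N
    for _ in range(T):
        y = (y * y) % N  # one inherently sequential squaring step
    return y

N = 3233  # toy modulus (61 * 53)
x, T = 5, 10
assert iterated_squaring(x, T, N) == pow(x, 2 ** T, N)
```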

**2020, CRYPTO: Delegation with Updatable Unambiguous Proofs and PPAD-Hardness**

In this work, we show the hardness of finding a Nash equilibrium, a PPAD-complete problem, based on the quasi-polynomial hardness of the decisional assumption on groups with bilinear maps introduced by Kalai, Paneth and Yang [STOC 2019].
Towards this goal, we construct an *unambiguous* and *updatable* delegation scheme under this assumption for deterministic computations running in super-polynomial time and polynomial space.
This delegation scheme, which is of independent interest, is publicly verifiable and non-interactive in the common reference string (CRS) model. It is *unambiguous*, meaning that it is hard to compute two different proofs for the same statement. It is *updatable*, meaning that given a proof for the statement that a Turing machine $M$ reaches configuration $\mathsf{conf}_T$ in $T$ steps, one can *efficiently* generate a proof for the statement that $M$ reaches configuration $\mathsf{conf}_{T+1}$ in $T+1$ steps.
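
As a purely hypothetical illustration of the updatable-proof interface (this is not the paper's construction, and the proof material below is a placeholder), the point is that extending a proof from $T$ to $T+1$ steps costs time independent of $T$:

```python
# Hypothetical interface sketch for an updatable delegation scheme.
# A proof that machine M reaches configuration conf_T in T steps can be
# extended to T+1 steps without reproving from scratch; unambiguity means
# finding two different accepting proofs for one statement is infeasible.
# The "proof material" below is a placeholder, not a real proof.

import hashlib
from dataclasses import dataclass

@dataclass
class Proof:
    steps: int   # T
    conf: str    # stand-in for configuration conf_T
    data: bytes  # stand-in for succinct, fixed-size proof material

def prove_base(conf0: str) -> Proof:
    return Proof(0, conf0, hashlib.sha256(conf0.encode()).digest())

def update(step_fn, proof: Proof) -> Proof:
    """Extend a proof for T steps to one for T+1 steps in O(1) time."""
    next_conf = step_fn(proof.conf)  # run one step of M
    next_data = hashlib.sha256(proof.data + next_conf.encode()).digest()
    return Proof(proof.steps + 1, next_conf, next_data)

# Example: a toy machine whose configuration is a counter encoded as text.
p = prove_base("0")
p = update(lambda c: str(int(c) + 1), p)  # now attests reaching conf_1 in 1 step
```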

**2020, TCC: Weakly Extractable One-Way Functions**

A family of one-way functions is extractable if given a random function in the family, an efficient adversary can only output an element in the image of the function if it knows a corresponding preimage.
This knowledge extraction guarantee is particularly powerful since it does not require interaction.
However, extractable one-way functions (EFs) are subject to a strong barrier: assuming indistinguishability obfuscation, no EF can have a knowledge extractor that works against all polynomial-size non-uniform adversaries. This holds even for non-black-box extractors that use the adversary's code.
Accordingly, the literature considers either EFs based on non-falsifiable knowledge assumptions, where the extractor is not explicitly given but only assumed to exist, or EFs secure against a restricted class of adversaries with bounded non-uniform advice. This falls short of cryptography's gold standard of security, which requires an explicit reduction against non-uniform adversaries of arbitrary polynomial size.
Motivated by this gap, we put forward a new notion of weakly extractable one-way functions (WEFs) that circumvents the known barrier. We then prove that WEFs are inextricably connected to the long-standing question of three-message zero-knowledge protocols. We show that different flavors of WEFs are sufficient and necessary for three-message zero knowledge to exist. The exact flavor depends on whether the protocol is computational or statistical zero knowledge and whether it is publicly or privately verifiable.
Combined with recent progress on constructing three-message zero knowledge, we derive a new connection between keyless multi-collision resistance, the notion of incompressibility, and the feasibility of non-interactive knowledge extraction. Another interesting corollary of our result is that in order to construct three-message zero-knowledge arguments, it suffices to construct such arguments where the honest prover strategy is unbounded.

**2020, JOFC: Reusable Fuzzy Extractors for Low-Entropy Distributions**

Fuzzy extractors (Dodis et al., in Advances in cryptology—EUROCRYPT 2014, Springer, Berlin, 2014, pp 93–110) convert repeated noisy readings of a secret into the same uniformly distributed key. To eliminate noise, they require an initial enrollment phase that takes the first noisy reading of the secret and produces a nonsecret helper string to be used in subsequent readings. Reusable fuzzy extractors (Boyen, in Proceedings of the 11th ACM conference on computer and communications security, CCS, ACM, New York, 2004, pp 82–91) remain secure even when this initial enrollment phase is repeated multiple times with noisy versions of the same secret, producing multiple helper strings (for example, when a single person’s biometric is enrolled with multiple unrelated organizations).

We construct the first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated. The extractor works for binary strings with Hamming noise; it achieves computational security under the existence of digital lockers (Canetti and Dakdouk, in Advances in cryptology—EUROCRYPT 2008, Springer, Berlin, 2008, pp 489–508). It is simple and tolerates near-linear error rates.

Our reusable extractor is secure for source distributions of linear min-entropy rate. The construction is also secure for sources with much lower entropy rates—lower than those supported by prior (nonreusable) constructions—assuming that the distribution has some additional structure, namely, that random subsequences of the source have sufficient min-entropy. Structure beyond entropy is necessary to support distributions with low entropy rates.

We then explore further how different structural properties of a noisy source can be used to construct fuzzy extractors when the error rates are high, building a computationally secure and an information-theoretically secure construction for large-alphabet sources.
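
To convey the flavor of the construction, here is a heavily simplified "sample-then-lock" sketch in Python, with SHA-256 standing in for an idealized digital locker; all parameters are toy values and the code is illustrative only, not a secure implementation.

```python
# Toy "sample-then-lock" sketch: lock a fresh key under many random
# subsequences of the reading; a later noisy reading recovers the key if
# any sampled subsequence happens to avoid all the errors. SHA-256 here
# stands in for an idealized digital locker; illustrative only.

import hashlib, os, secrets

def _locker(subseq: bytes, payload: bytes, nonce: bytes) -> bytes:
    pad = hashlib.sha256(nonce + subseq).digest()[: len(payload)]
    return bytes(a ^ b for a, b in zip(payload, pad))  # XOR lock/unlock

def gen(reading: str, num_lockers: int = 64, subset_size: int = 16):
    """Enrollment: produce a key and a nonsecret helper string."""
    key = os.urandom(16)
    rng, helper = secrets.SystemRandom(), []
    for _ in range(num_lockers):
        pos = sorted(rng.sample(range(len(reading)), subset_size))
        nonce = os.urandom(16)
        sub = "".join(reading[i] for i in pos).encode()
        tag = hashlib.sha256(nonce + key).digest()  # detects wrong unlocks
        helper.append((pos, nonce, _locker(sub, key, nonce), tag))
    return key, helper

def rep(reading: str, helper):
    """Reproduction: try each locker with the new (possibly noisy) reading."""
    for pos, nonce, ct, tag in helper:
        sub = "".join(reading[i] for i in pos).encode()
        cand = _locker(sub, ct, nonce)
        if hashlib.sha256(nonce + cand).digest() == tag:
            return cand
    return None

reading = "0110100110101100" * 4    # toy 64-bit reading
key, helper = gen(reading)
assert rep(reading, helper) == key  # a noiseless reading recovers the key
```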

**2019, CRYPTO: On Round Optimal Statistical Zero Knowledge Arguments**

We construct the first three-message statistical zero-knowledge arguments for all of NP, matching the known lower bound. We do so based on keyless multi-collision-resistant hash functions and the Learning with Errors assumption, the same assumptions used to obtain round-optimal computational zero knowledge. The main component in our construction is a statistically witness-indistinguishable argument of knowledge based on a new notion of statistically hiding commitments with subset opening.
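
For intuition about the subset-opening interface (commit to a vector, later open only a chosen subset of positions), here is a toy Merkle-tree sketch. One caveat: a plain Merkle tree is only computationally hiding, whereas the paper's notion requires statistical hiding, so this sketch illustrates the interface only, not the required security.

```python
# Toy vector commitment with subset opening via a Merkle tree. Each leaf
# hides its value behind fresh randomness; opening a subset reveals, per
# position, the value's randomness plus its sibling path. This is only
# computationally hiding, NOT the statistically hiding notion the paper
# needs; interface illustration only.

import hashlib, os

def _h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def commit(values):  # len(values) must be a power of two
    rands = [os.urandom(16) for _ in values]
    layers = [[_h(r + v.encode()) for r, v in zip(rands, values)]]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([_h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return layers[-1][0], (layers, rands)

def open_subset(state, indices):
    layers, rands = state
    openings = {}
    for i in indices:
        path, j = [], i
        for layer in layers[:-1]:
            path.append(layer[j ^ 1])  # sibling hash at each level
            j //= 2
        openings[i] = (rands[i], path)
    return openings

root, state = commit(["a", "b", "c", "d"])
openings = open_subset(state, [0, 2])  # open only positions 0 and 2
```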

**2019, TCC: Incrementally Verifiable Computation via Incremental PCPs**

If I commission a long computation, how can I check that the result is correct without re-doing the computation myself? This is the question that efficient verifiable computation deals with. In this work, we address the issue of verifying the computation as it unfolds. That is, at any intermediate point in the computation, I would like to see a proof that the current state is correct. Ideally, these proofs should be short, non-interactive, and easy to verify. In addition, the proof at each step should be generated efficiently by updating the previous proof, without recomputing the entire proof from scratch. This notion, known as incrementally verifiable computation, was introduced by Valiant [TCC 08] about a decade ago. Existing solutions follow the approach of recursive proof composition and can be based on strong and non-falsifiable cryptographic assumptions (so-called “knowledge assumptions”).

In this work, we present a new framework for constructing incrementally verifiable computation schemes in both the publicly verifiable and designated-verifier settings. Our designated-verifier scheme is based on somewhat homomorphic encryption (which can be based on Learning with Errors) and our publicly verifiable scheme is based on the notion of zero-testable homomorphic encryption, which can be constructed from ideal multi-linear maps [Paneth and Rothblum, TCC 17].

Our framework is anchored around the new notion of a probabilistically checkable proof (PCP) with incremental local updates. An incrementally updatable PCP proves the correctness of an ongoing computation, where after each computation step, the value of every symbol can be updated locally without reading any other symbol. This update results in a new PCP for the correctness of the next step in the computation. Our primary technical contribution is constructing such an incrementally updatable PCP. We show how to combine updatable PCPs with recently suggested (ordinary) verifiable computation to obtain our results.
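
The defining feature of an incrementally updatable PCP, as described above, is that each proof symbol's new value is computed locally. A hypothetical interface sketch (the update rule below is a placeholder, not the paper's construction):

```python
# Hypothetical sketch of the incremental-local-update interface: after one
# computation step, each PCP symbol is refreshed from its own old value
# (and the step) without reading any other symbol, yielding a PCP for the
# next step. The update rule below is a placeholder, not a real PCP.

def step(state: int) -> int:
    """One step of the underlying computation (toy stand-in)."""
    return state + 1

def update_symbol(i: int, old_symbol: int, new_state: int) -> int:
    """Local update: depends only on this symbol's old value and the step."""
    return hash((i, old_symbol, new_state)) & 0xFF  # placeholder rule

def advance(proof: list, state: int):
    new_state = step(state)
    new_proof = [update_symbol(i, s, new_state) for i, s in enumerate(proof)]
    return new_proof, new_state

proof, state = [0] * 8, 0
for _ in range(3):  # three verifiable computation steps
    proof, state = advance(proof, state)
```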

**2016, TCC**

**2015, TCC**

**2014, CRYPTO**

#### Program Committees

- Crypto 2021
- TCC 2021
- Crypto 2019
- TCC 2019
- Eurocrypt 2017
- TCC 2017

#### Coauthors

- Boaz Barak (2)
- Shany Ben-David (1)
- Nir Bitansky (12)
- Zvika Brakerski (1)
- Ran Canetti (9)
- Angelo De Caro (1)
- Alessandro Chiesa (1)
- Arka Rai Choudhuri (1)
- Henry Cohn (1)
- Noa Eizenstadt (1)
- Benjamin Fuller (2)
- Sanjam Garg (1)
- Shafi Goldwasser (1)
- Justin Holmgren (1)
- Vincenzo Iovino (1)
- Yuval Ishai (1)
- Abhishek Jain (2)
- Yael Tauman Kalai (9)
- Chethan Kamath (1)
- Huijia Lin (2)
- Alex Lombardi (1)
- Moni Naor (1)
- Adam O'Neill (1)
- Rafail Ostrovsky (1)
- Dimitrios Papadopoulos (1)
- Giuseppe Persiano (1)
- Leonid Reyzin (2)
- Alon Rosen (1)
- Ron D. Rothblum (1)
- Guy N. Rothblum (2)
- Amit Sahai (2)
- Adam Smith (2)
- Nikos Triandopoulos (1)
- Vinod Vaikuntanathan (1)
- Daniel Wichs (1)
- Lizhen Yang (1)