CryptoDB
Pratyush Mishra
Publications and invited talks
2025
CRYPTO
Malicious Security for PIR (almost) for Free
Abstract
Using Private Information Retrieval (PIR), a client can retrieve a database element from a semi-honest server without leaking which element was queried. Maliciously-secure PIR (mPIR) [Colombo et al., USENIX Security '23] strengthens the guarantees of plain (i.e., semi-honest) PIR by ensuring that even a misbehaving server (a) cannot compromise client privacy via selective-failure attacks, and (b) must answer every query with respect to the same database. These additional security properties are crucial for many real-world applications.
In this work we present a compiler that transforms _any_ single-server PIR scheme into an mPIR scheme in a black-box manner with minimal overhead and by relying only on collision-resistant hash functions. Since single-server PIR implies collision-resistant hash functions, our transformation requires _no_ additional cryptographic assumptions, establishing the equivalence of mPIR and PIR. Instantiating our compiler with appropriate base PIR schemes gives the first constructions of mPIR under assumptions such as Decisional Composite Residuosity, Quadratic Residuosity, and $\phi$-hiding.
Efficiency-wise, our compiler yields mPIR schemes with $O(N^\varepsilon)$ communication and $O(1)$ computation overhead. Applying a slight tweak of our compiler to the recent breakthrough construction of doubly-efficient PIR [Lin et al., STOC '23], we construct a _doubly-efficient mPIR_ scheme requiring only $\mathrm{polylog}(N)$ communication and server and client computation. In comparison, all prior mPIR constructions incur at least $\Omega(\sqrt{N})$ cost in all these metrics.
Along the way, we construct a novel local decoding procedure for special "subcode" locally decodable codes (LDCs), which guarantees that _for all_ corruption patterns, the decoding success probability is almost the same for _any_ two indices. Because standard LDC decoders only guarantee the decoding probability when the fraction of corruptions is _bounded_, this property does not follow simply from parallel repetition.
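For orientation, the plain (semi-honest) PIR guarantees that the compiler starts from can be summarized as follows; this is the standard formulation, paraphrased rather than quoted from the paper:
\[
\begin{aligned}
\textbf{Correctness:}\quad & \Pr\big[\mathsf{Decode}(\mathsf{st}, \mathsf{Answer}(\mathsf{DB}, q)) = \mathsf{DB}_i \;:\; (\mathsf{st}, q) \leftarrow \mathsf{Query}(1^\lambda, i)\big] = 1, \\
\textbf{Privacy:}\quad & \text{for all } i, j \in [N], \text{ the query produced by } \mathsf{Query}(1^\lambda, i) \text{ is computationally} \\
& \text{indistinguishable from the query produced by } \mathsf{Query}(1^\lambda, j).
\end{aligned}
\]
mPIR strengthens this by additionally requiring that a misbehaving server cannot correlate whether the client accepts with the queried index (ruling out selective-failure attacks), and that all accepted answers are consistent with a single database.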
2025
CRYPTO
Arc: Accumulation for Reed–Solomon Codes
Abstract
Proof-Carrying Data (PCD) is a foundational tool for ensuring the correctness of incremental distributed computations that has found numerous applications in theory and practice. The state-of-the-art PCD constructions are obtained via accumulation or folding schemes. Bünz, Mishra, Nguyen, and Wang recently introduced the first hash-based accumulation scheme, which is secure in the random oracle model and instantiable with any linear error-correcting code. However, their construction only supports a bounded number of accumulation steps.
We present Arc, a hash-based accumulation scheme that supports an unbounded number of accumulation steps. The core technique underlying our approach is a method for accumulating proximity claims to a Reed–Solomon code. Unlike prior work, we work in the list-decoding regime to obtain concrete efficiency improvements.
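To fix notation for the proximity claims mentioned above (standard coding-theory definitions, not specific to this paper): for a field $\mathbb{F}$, evaluation domain $L \subseteq \mathbb{F}$, and degree bound $d$,
\[
\mathrm{RS}[\mathbb{F}, L, d] = \big\{\, f : L \to \mathbb{F} \;\big|\; \exists\, p \in \mathbb{F}[X] \text{ with } \deg p < d \text{ and } f(x) = p(x) \text{ for all } x \in L \,\big\},
\]
and a proximity claim asserts that a committed word $f$ satisfies $\Delta(f, \mathrm{RS}[\mathbb{F}, L, d]) \le \delta$, where $\Delta$ denotes relative Hamming distance. The list-decoding regime refers to proximity parameters $\delta$ beyond the unique-decoding radius $(1-\rho)/2$ (for rate $\rho = d/|L|$), e.g. up to the Johnson bound $1 - \sqrt{\rho}$, where a word may be close to several codewords at once.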
We introduce the notion of Interactive Oracle Reductions (IORs) to enable a modular and simple security analysis. These extend prior notions of Reductions of Knowledge to the setting of Interactive Oracle Proofs.
2022
RWC
arkworks: A Rust Ecosystem for Programming zkSNARKs
Abstract
zkSNARKs are an exciting avenue for enhancing the privacy and scalability of decentralized systems. Indeed, researchers and practitioners are implementing and deploying decentralized applications atop zkSNARKs at breakneck speed. However, existing zkSNARK implementations live in their own “walled gardens”: optimizations and improvements in one implementation cannot easily be shared with other projects, leading to either inefficiency, or wasted effort due to reimplementation. In this talk, I will introduce *arkworks*: a set of Rust libraries that resolves the foregoing problem by providing all of the components required for zkSNARK programming, packaged into generic, efficient, and easy-to-use modules, such as the following:
* Generic implementations of finite fields, elliptic curves, and pairings, as well as instantiations of widely-used curves.
* State-of-the-art zkSNARKs such as Groth16, Groth-Maller17, and Marlin.
* Ergonomic libraries for writing constraints, along with implementations of many commonly-used constraint “gadgets”.
* Recursive composition of arbitrary SNARKs, including recursion from accumulation schemes.
* Libraries for aggregating proofs and signatures.

The modular design of our libraries means that improvements in one component (such as finite field arithmetic) are inherited for free by downstream components (such as zkSNARK implementations). We achieve this composability without sacrificing performance: our generic libraries are competitive with the best application-specific libraries. As a result, our libraries have been deployed in existing industry products such as Celo, MINA, and Aleo.
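To give a flavor of the programming model, here is a minimal sketch of generic field and pairing arithmetic with the arkworks crates. It assumes the `ark-bls12-381`, `ark-ec`, `ark-ff`, and `ark-std` crates roughly at the 0.4 series; exact module paths and trait names may differ between releases.

```rust
// Minimal sketch of generic field and pairing arithmetic with arkworks.
// Assumes ark-bls12-381, ark-ec, ark-ff, ark-std (~0.4.x); paths may vary by version.
use ark_bls12_381::{Bls12_381, Fr, G1Projective, G2Projective};
use ark_ec::{pairing::Pairing, Group};
use ark_ff::Field;
use ark_std::UniformRand;

fn main() {
    let mut rng = ark_std::test_rng();

    // Generic finite-field arithmetic over the BLS12-381 scalar field.
    let a = Fr::rand(&mut rng);
    let b = Fr::rand(&mut rng);
    assert_eq!((a + b) * a, a.square() + a * b);

    // Bilinearity sanity check: e(a*G1, b*G2) == e((a*b)*G1, G2).
    let g1 = G1Projective::generator();
    let g2 = G2Projective::generator();
    let lhs = Bls12_381::pairing(g1 * a, g2 * b);
    let rhs = Bls12_381::pairing(g1 * (a * b), g2);
    assert_eq!(lhs, rhs);
}
```

The same code works unchanged for any curve implementing the `Pairing` trait, which is the kind of composability described above.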
2021
CRYPTO
Proof-Carrying Data without Succinct Arguments
Abstract
Proof-carrying data (PCD) is a powerful cryptographic primitive that enables mutually distrustful parties to perform distributed computations that run indefinitely. Known approaches to construct PCD are based on succinct non-interactive arguments of knowledge (SNARKs) that have a succinct verifier or a succinct accumulation scheme.
In this paper we show how to obtain PCD without relying on SNARKs. We construct a PCD scheme given any non-interactive argument of knowledge (e.g., with linear-size arguments) that has a *split accumulation scheme*, which is a weak form of accumulation that we introduce.
Moreover, we construct a transparent non-interactive argument of knowledge for R1CS whose split accumulation is verifiable via a (small) *constant number of group and field operations*. Our construction is proved secure in the random oracle model based on the hardness of discrete logarithms, and it leads, via the random oracle heuristic and our result above, to concrete efficiency improvements for PCD.
Along the way, we construct a split accumulation scheme for Hadamard products under Pedersen commitments and for a simple polynomial commitment scheme based on Pedersen commitments.
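As a reminder of the kind of claim being accumulated, here is the standard Hadamard-product relation under Pedersen vector commitments (notation illustrative; the paper's exact formulation may differ):
\[
\mathsf{Com}(v; r) = r H + \sum_{i=1}^{n} v_i G_i, \qquad G_1, \dots, G_n, H \in \mathbb{G} \text{ public generators},
\]
and an instance $(C_a, C_b, C_c)$ is in the relation if there exists a witness $(a, b, c, r_a, r_b, r_c)$ with $C_x = \mathsf{Com}(x; r_x)$ for $x \in \{a, b, c\}$ and $c = a \circ b$, i.e. $c_i = a_i b_i$ for all $i \in [n]$.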
Our results are supported by a modular and efficient implementation.
2021
ASIACRYPT
Proofs for Inner Pairing Products and Applications
Abstract
We present a generalized inner product argument and demonstrate its applications to pairing-based languages. We apply our generalized argument to prove that an inner pairing product is correctly evaluated with respect to committed vectors of $n$ source group elements. With a structured reference string (SRS), we achieve a logarithmic-time verifier whose work is dominated by $6 \log n$ target group exponentiations. Proofs are of size $6 \log n$ target group elements, computed using $6n$ pairings and $4n$ exponentiations in each source group.
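Concretely, the inner pairing product of two committed source-group vectors is the following target-group element (standard notation, paraphrased):
\[
Z = \prod_{i=1}^{n} e(A_i, B_i) \in \mathbb{G}_T, \qquad A \in \mathbb{G}_1^{n}, \quad B \in \mathbb{G}_2^{n},
\]
where $e : \mathbb{G}_1 \times \mathbb{G}_2 \to \mathbb{G}_T$ is a bilinear pairing; the argument convinces the verifier that $Z$ is consistent with commitments to $A$ and $B$.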
We apply our inner product arguments to build the first polynomial commitment scheme with succinct (logarithmic) verification, $O(\sqrt{d})$ prover complexity for degree $d$ polynomials (not including the cost to evaluate the polynomial), and an SRS of size $O(\sqrt{d})$. Concretely, this means that for $d=2^{28}$, producing an evaluation proof in our protocol is $76\times$ faster than doing so in the KZG commitment scheme, and the SRS in our protocol is $1000\times$ smaller: $13$ MB vs $13$ GB for KZG.
As a second application, we introduce an argument for aggregating $n$ Groth16 zkSNARKs into an $O(\log n)$-sized proof. Our protocol is significantly faster ($>1000\times$) than aggregating SNARKs via recursive composition: we aggregate $\sim 130{,}000$ proofs in $25$ minutes, versus $90$ proofs via recursive composition in the same time. Finally, we apply our aggregation protocol to construct a low-memory SNARK for machine computations that does not rely on recursive composition. For a computation that requires time $T$ and space $S$, our SNARK produces proofs in space $\tilde{\mathcal{O}}(S+T)$, which is significantly more space efficient than a monolithic SNARK, which requires space $\tilde{\mathcal{O}}(S \cdot T)$.
2020
EUROCRYPT
Marlin: Preprocessing zkSNARKs with Universal and Updatable SRS
Abstract
We present a general methodology to construct preprocessing zkSNARKs where the structured reference string (SRS) is universal and updatable. This exploits a novel application of *holographic* IOPs, a natural generalization of holographic PCPs [Babai et al., STOC 1991].
We use our methodology to obtain a preprocessing zkSNARK where the SRS has linear size and arguments have constant size. Our construction improves on Sonic [Maller et al., CCS 2019], the prior state of the art in this setting, in all efficiency parameters: proving is an order of magnitude faster and verification is twice as fast, even with smaller SRS size and argument size. Our construction is most efficient when instantiated in the algebraic group model (also used by Sonic), but we also demonstrate how to realize it under concrete knowledge assumptions.
The core of our zkSNARK is a new holographic IOP for rank-1 constraint satisfiability (R1CS), which is the first to achieve linear proof length and constant query complexity (among other efficiency features).
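For concreteness, R1CS is the following standard relation (not specific to this paper):
\[
(Az) \circ (Bz) = Cz, \qquad z = (1, x, w) \in \mathbb{F}^{n},
\]
where $A, B, C \in \mathbb{F}^{m \times n}$ are public matrices, $x$ is the public input, $w$ is the witness, and $\circ$ is the entrywise (Hadamard) product. Here, holography means the verifier queries an encoding of $A, B, C$ prepared in an offline preprocessing step, instead of reading the matrices in full.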
2020
TCC
Proof-Carrying Data from Accumulation Schemes
Abstract
Recursive proof composition has been shown to lead to powerful primitives such as incrementally-verifiable computation (IVC) and proof-carrying data (PCD). All existing approaches to recursive composition take a succinct non-interactive argument of knowledge (SNARK) and use it to prove a statement about its own verifier. This technique requires that the verifier run in time sublinear in the size of the statement it is checking, a strong requirement that restricts the class of SNARKs from which PCD can be built. This in turn restricts the efficiency and security properties of the resulting scheme.
Bowe, Grigg, and Hopwood (ePrint 2019/1021) outlined a novel approach to recursive composition, and applied it to a particular SNARK construction which does *not* have a sublinear-time verifier. However, they omit details about this approach and do not prove that it satisfies any security property. Nonetheless, schemes based on their ideas have already been implemented in software.
In this work we present a collection of results that establish the theoretical foundations for a generalization of the above approach. We define an *accumulation scheme* for a non-interactive argument, and show that this suffices to construct PCD, even if the argument itself does not have a sublinear-time verifier. Moreover, we give constructions of accumulation schemes for SNARKs, which yield PCD schemes with novel efficiency and security features.
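At a high level, an accumulation scheme for a predicate $\Phi$ exposes the following interface (paraphrased; notation illustrative):
\[
\begin{aligned}
\text{Accumulation prover:}\quad & \mathsf{P}(q_1, \dots, q_m, \mathsf{acc}) \to (\mathsf{acc}', \pi), \\
\text{Accumulation verifier:}\quad & \mathsf{V}(q_1, \dots, q_m, \mathsf{acc}, \mathsf{acc}', \pi) \to \{0, 1\} \quad \text{(cheap, run at every step)}, \\
\text{Decider:}\quad & \mathsf{D}(\mathsf{acc}') \to \{0, 1\} \quad \text{(run once, at the end)}.
\end{aligned}
\]
Informally, soundness requires that if the decider accepts the final accumulator and the verifier accepted every accumulation step, then with overwhelming probability every accumulated input $q_i$ satisfies $\Phi$. For PCD, the point is that $\mathsf{V}$ can be far cheaper than checking $\Phi$ directly, so the expensive check is deferred to a single invocation of $\mathsf{D}$.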
Service
- Crypto 2025 Program committee
- Crypto 2023 Program committee
Coauthors
- Benedikt Bünz (4)
- Alessandro Chiesa (4)
- Brett Hemenway Falk (1)
- Matthew Green (1)
- Yuncong Hu (1)
- Wei-Kai Lin (1)
- Jingcheng Liu (1)
- Mary Maller (2)
- Peihan Miao (1)
- Ian Miers (1)
- Pratyush Mishra (8)
- Wilson Nguyen (1)
- Matan Shtepel (1)
- Nicholas Spooner (2)
- Nirvan Tyagi (1)
- Psi Vesely (2)
- William Wang (1)
- Nicholas P. Ward (1)