CryptoDB
Maya Farber Brodsky
Publications
Year: 2023
Venue: CRYPTO
Title: SNARGs for Monotone Policy Batch NP
Abstract
We construct a succinct non-interactive argument ($\mathsf{SNARG}$) for the class of monotone policy batch $\mathsf{NP}$ languages, under the Learning with Errors ($\mathsf{LWE}$) assumption. This class is a subclass of $\mathsf{NP}$ that is associated with a monotone function $f:\{0,1\}^k\rightarrow\{0,1\}$ and an $\mathsf{NP}$ language $\mathcal{L}$, and contains instances $(x_1,\ldots,x_k)$ such that $f(b_1,\ldots,b_k)=1$, where $b_j=1$ if and only if $x_j\in \mathcal{L}$. Our $\mathsf{SNARG}$s are arguments of knowledge in the non-adaptive setting, and satisfy a new notion of somewhere extractability against adaptive adversaries.
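To make the language definition concrete, here is a minimal Python sketch (not from the paper; the helper name, the verifier V, and the toy language are hypothetical) of the membership condition for a monotone policy batch NP language:

```python
# Illustrative membership check for a monotone policy batch NP language.
# Hypothetical names: V is the NP verifier for L, f is the monotone policy.
from typing import Callable, List

def in_language(
    f: Callable[[List[bool]], bool],   # monotone policy f: {0,1}^k -> {0,1}
    V: Callable[[str, str], bool],     # NP verifier: V(x, w) accepts iff w witnesses x in L
    xs: List[str],                     # instances x_1, ..., x_k
    ws: List[str],                     # candidate witnesses w_1, ..., w_k
) -> bool:
    # b_j = 1 iff x_j in L; here each b_j is certified by the supplied witness w_j.
    bs = [V(x, w) for x, w in zip(xs, ws)]
    # Since f is monotone, witness-certified bits can only understate the true
    # membership bits, so f(bs) = 1 suffices for membership in the language.
    return f(bs)

# Example: policy f = OR over k = 3 instances of a toy language
# L = {binary strings of even parity}, with the witness being the string itself.
if __name__ == "__main__":
    V = lambda x, w: w == x and x.count("1") % 2 == 0
    f = any  # OR is a monotone function
    print(in_language(f, V, ["0110", "011", "10"], ["0110", "011", "10"]))  # True
```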
This is the first $\mathsf{SNARG}$ under standard hardness assumptions for a subclass of $\mathsf{NP}$ that is not known to have a (computational) non-signaling $\mathsf{PCP}$ with parameters compatible with the standard framework for constructing $\mathsf{SNARG}$s, dating back to [Kalai-Raz-Rothblum, STOC '13]. Indeed, our approach necessarily departs from this framework.
Our construction combines existing quasi-arguments for $\mathsf{NP}$ (based on batch arguments for $\mathsf{NP}$) with a new type of cryptographic encoding of the instance and a new analysis going from local to global soundness. The main novel ingredient used in our encoding is a {\em predicate-extractable hash} ($\mathsf{PEH}$) family, a primitive that generalizes the notion of a somewhere extractable hash. Whereas a somewhere extractable hash allows extracting a single input coordinate, our $\mathsf{PEH}$ extracts a {\em global} property of the input. We view this primitive as being of independent interest, and believe that it will find other applications.
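As a rough illustration of the contrast between the two primitives, the following interface sketch (hypothetical names and types, not the paper's actual construction or API) shows that a somewhere extractable hash trapdoor recovers one committed coordinate, while a PEH trapdoor recovers a global predicate of the hashed input:

```python
# Interface sketch contrasting somewhere extractable hashing with a PEH.
# All names below are hypothetical placeholders, not the paper's construction.
from typing import Callable, List, Tuple

class SomewhereExtractableHash:
    """Key generation binds the (index-hiding) hash key to one coordinate i;
    the trapdoor later recovers x[i] from a digest of x."""
    def gen(self, i: int) -> Tuple[bytes, bytes]:          # (hash key, trapdoor)
        raise NotImplementedError
    def hash(self, hk: bytes, x: List[int]) -> bytes:      # short digest of x
        raise NotImplementedError
    def extract(self, td: bytes, digest: bytes) -> int:    # recovers x[i]
        raise NotImplementedError

class PredicateExtractableHash:
    """Key generation binds the hash key to a predicate P; the trapdoor
    recovers P(x), a *global* property of the input, from the digest."""
    def gen(self, P: Callable[[List[int]], bool]) -> Tuple[bytes, bytes]:
        raise NotImplementedError
    def hash(self, hk: bytes, x: List[int]) -> bytes:      # short digest of x
        raise NotImplementedError
    def extract(self, td: bytes, digest: bytes) -> bool:   # recovers P(x)
        raise NotImplementedError
```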
Coauthors
- Zvika Brakerski
- Yael Tauman Kalai
- Alex Lombardi
- Omer Paneth