
CryptoDB

Daniel Wichs

Publications

2024
EUROCRYPT
Laconic Function Evaluation, Functional Encryption and Obfuscation for RAMs with Sublinear Computation
Laconic function evaluation (LFE) is a ``flipped'' version of fully homomorphic encryption, where the server performing the computation gets the output. The server commits itself to a function $f$ by outputting a small digest. Clients can later efficiently encrypt inputs $x$ with respect to the digest in much less time than computing $f$, and ensure that the server only decrypts $f(x)$, but does not learn anything else about $x$. Prior works constructed LFE for \emph{circuits} under LWE, and for \emph{Turing Machines (TMs)} from indistinguishability obfuscation (iO). In this work we introduce LFE for \emph{Random-Access Machines} (RAM-LFE). The server commits itself to a potentially huge database $y$ via a short digest. Clients can later efficiently encrypt inputs $x$ with respect to the digest and the server decrypts $f(x,y)$ for some specified RAM program $f$ (e.g., a universal RAM), without learning anything else about $x$. The main advantage of RAM-LFE is that the server's decryption run-time only scales with the RAM run-time $T$ of the computation $f(x,y)$, which can be sublinear in both $|x|$ and $|y|$. We consider a \emph{weakly efficient} variant, where the client's run-time is also allowed to scale linearly with $T$, but not $|y|$, and a \emph{strongly efficient} variant, where the client's run-time must be sublinear in both $T$ and $|y|$. We construct the former from doubly efficient private information retrieval (DEPIR) and laconic OT (LOT), both of which are known from RingLWE, and the latter from an additional use of iO. We then show how to leverage strongly efficient RAM-LFE to also get (many-key) \emph{functional encryption for RAMs (RAM-FE)} where secret keys are associated with big databases $y$ and the decryption time is sublinear in $|y|$, as well as \emph{iO for RAMs} where the obfuscated program contains a big database $y$ and the evaluation time is sublinear in $|y|$.
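To make the syntax concrete, here is a minimal interface sketch of RAM-LFE in Python; all names and types are illustrative placeholders rather than the paper's notation, and the algorithm bodies are deliberately elided.

```python
# Interface sketch of RAM-LFE as described in the abstract; names are
# hypothetical placeholders and the algorithm bodies are elided.

class Digest:
    """Short commitment to a potentially huge database y."""

class Ciphertext:
    """Encryption of a client input x with respect to a digest."""

def commit(y: bytes) -> Digest:
    """Server: commit to the database y via a short digest."""
    ...

def encrypt(digest: Digest, x: bytes) -> Ciphertext:
    """Client: encrypt x w.r.t. the digest. In the weakly efficient variant
    this may take time linear in the RAM run-time T (but not |y|); in the
    strongly efficient variant it must be sublinear in both T and |y|."""
    ...

def decrypt(y: bytes, ct: Ciphertext, f) -> bytes:
    """Server: recover f(x, y) in time scaling with the RAM run-time T of
    f(x, y), which can be sublinear in |x| and |y|; nothing else about x
    is learned."""
    ...
```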
2023
PKC
A Map of Witness Maps: New Definitions and Connections
A \emph{witness map} deterministically maps a witness $w$ of some NP statement $x$ into a computationally sound proof that $x$ is true, with respect to a public common reference string (CRS). In other words, it is a deterministic, non-interactive, computationally sound proof system in the CRS model. A \emph{unique witness map} (UWM) ensures that for any fixed statement $x$, the witness map should output the same \emph{unique} proof for $x$, no matter what witness $w$ it is applied to. More generally, a \emph{compact witness map} (CWM) can only output one of at most $2^\alpha$ proofs for any given statement $x$, where $\alpha$ is some compactness parameter. Such compact/unique witness maps were proposed recently by Chakraborty, Prabhakaran and Wichs (PKC '20), who used them as a tool for building tamper-resilient signatures and showed how to construct UWMs from indistinguishability obfuscation (iO). In this work, we study CWMs and UWMs as primitives of independent interest and present a number of interesting connections to various notions in cryptography. - First, we show that UWMs lie somewhere between witness PRFs (Zhandry; TCC '16) and iO -- they imply the former and are implied by the latter. In particular, we show that a relaxation of UWMs to the ``designated verifier (dv-UWM)'' setting is \emph{equivalent} to witness PRFs. Moreover, we consider two flavors of such dv-UWMs, which correspond to two flavors of witness PRFs previously considered in the literature, and show that they are all in fact equivalent to each other in terms of feasibility. - Next, we consider CWMs that are extremely compact, with $\alpha = O(\log \lambda)$, where $\lambda$ is the security parameter. We show that such CWMs imply \emph{pseudo-UWMs} where the witness map is allowed to be \emph{pseudo-deterministic} -- i.e., for every true statement $x$, there is a unique proof such that, on any witness $w$, the witness map outputs this proof with $1-1/p(\lambda)$ probability, for a polynomial $p$ that we can set arbitrarily large. - Lastly, we consider CWMs that are mildly compact, with $\alpha = p(\lambda)$ for some a-priori fixed polynomial $p$, independent of the length of the statement $x$ or witness $w$. Such CWMs are implied by succinct non-interactive arguments (SNARGs). We show that such CWMs imply NIZKs, and therefore lie somewhere between NIZKs and SNARGs.
2023
EUROCRYPT
Speak Much, Remember Little: Cryptography in the Bounded Storage Model, Revisited
The goal of the bounded storage model (BSM) is to construct unconditionally secure cryptographic protocols, by only restricting the storage capacity of the adversary, but otherwise giving it unbounded computational power. Here, we consider a streaming variant of the BSM, where honest parties can stream huge amounts of data to each other so as to overwhelm the adversary's storage, even while their own storage capacity is significantly smaller than that of the adversary. Prior works showed several impressive results in this model, including key agreement and oblivious transfer, but only as long as the adversary's storage $m = O(n^2)$ is at most quadratically larger than the honest user storage $n$. Moreover, the work of Dziembowski and Maurer (DM) also gave a seemingly matching lower bound, showing that key agreement in the BSM is impossible when $m > n^2$. In this work, we observe that the DM lower bound only applies to a significantly more restricted version of the BSM, and does not apply to the streaming variant. Surprisingly, we show that it is possible to construct key agreement and oblivious transfer protocols in the streaming BSM, where the adversary's storage can be significantly larger, and even exponential $m = 2^{O(n)}$. The only price of accommodating larger values of $m$ is that the round and communication complexities of our protocols grow accordingly, and we provide lower bounds to show that an increase in rounds and communication is necessary. As an added benefit of our work, we also show that our oblivious transfer (OT) protocol in the BSM satisfies a simulation-based notion of security. In contrast, even for the restricted case of $m = O(n^2)$, prior solutions only satisfied a weaker indistinguishability based definition. As an application of our OT protocol, we get general multiparty computation (MPC) in the BSM that allows for up to exponentially large gaps between $m$ and $n$, while also achieving simulation-based security.
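For intuition about the model, here is a toy Python rendering of the classic "sample-and-intersect" key agreement from earlier BSM works; it illustrates the streaming setting only, and is not this paper's protocol, which tolerates far larger adversarial storage.

```python
# Toy "sample-and-intersect" BSM key agreement (classic approach, not the
# paper's protocol): a long public random stream goes by once; each party
# stores the bits at a few random positions, then the parties publicly
# compare positions and key on the intersection, which a storage-bounded
# adversary is unlikely to have recorded in full.
import random

def toy_bsm_key_agreement(stream_len=10_000, n_store=500, seed=0):
    rng = random.Random(seed)
    alice_pos = set(rng.sample(range(stream_len), n_store))
    bob_pos = set(rng.sample(range(stream_len), n_store))
    alice_bits, bob_bits = {}, {}
    for i in range(stream_len):           # the stream is seen once, bit by bit
        bit = rng.getrandbits(1)
        if i in alice_pos:
            alice_bits[i] = bit
        if i in bob_pos:
            bob_bits[i] = bit
    common = sorted(alice_pos & bob_pos)  # position sets are exchanged publicly
    key = [alice_bits[i] for i in common]
    assert key == [bob_bits[i] for i in common]
    return key

# Expected key length is about n_store^2 / stream_len (~25 bits here).
print(len(toy_bsm_key_agreement()))
```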
2023
JOFC
Adaptively Secure MPC with Sublinear Communication Complexity
Ran Cohen Abhi Shelat Daniel Wichs
A central challenge in the study of MPC is to balance between security guarantees, hardness assumptions, and resources required for the protocol. In this work, we study the cost of tolerating adaptive corruptions in MPC protocols under various corruption thresholds. In the strongest setting, we consider adaptive corruptions of an arbitrary number of parties (potentially all) and achieve the following results: (1) A two-round secure function evaluation (SFE) protocol in the CRS model, assuming LWE and indistinguishability obfuscation (iO). The communication, the CRS size, and the online computation are sublinear in the size of the function. The iO assumption can be replaced by secure erasures. Previous results required either the communication or the CRS size to be polynomial in the function size. (2) Under the same assumptions, we construct a “Bob-optimized” 2PC (where Alice talks first, Bob second, and Alice learns the output). That is, the communication complexity and total computation of Bob are sublinear in the function size and in Alice’s input size. We prove impossibility of “Alice-optimized” protocols. (3) Assuming LWE, we bootstrap adaptively secure NIZK arguments to achieve proof size sublinear in the circuit size of the NP relation. On a technical level, our results are based on laconic function evaluation (LFE) (Quach, Wee, and Wichs, FOCS’18) and shed light on an interesting duality between LFE and FHE. Next, we analyze adaptive corruptions of all-but-one of the parties and show a two-round SFE protocol in the threshold-PKI model (where keys of a threshold FHE scheme are pre-shared among the parties) with communication complexity sublinear in the circuit size, assuming LWE and NIZK. Finally, we consider the honest-majority setting and show a two-round SFE protocol with guaranteed output delivery under the same constraints. Our results highlight that the asymptotic cost of adaptive security can be reduced to be comparable to, and in many settings almost match, that of static security, with only a little sacrifice to the concrete round complexity and asymptotic communication complexity.
2023
CRYPTO
Universal Amplification of KDM Security: From 1-Key Circular to Multi-Key KDM
Daniel Wichs Brent Waters
An encryption scheme is Key Dependent Message (KDM) secure if it is safe to encrypt messages that can arbitrarily depend on the secret keys themselves. In this work, we show how to upgrade essentially the weakest form of KDM security into the strongest one. In particular, we assume the existence of a symmetric-key bit-encryption scheme that is circular-secure in the 1-key setting, meaning that it maintains security even if one can encrypt individual bits of a single secret key under itself. We also rely on a standard CPA-secure public-key encryption scheme. We construct a public-key encryption scheme that is KDM secure for general functions (of a-priori bounded circuit size) in the multi-key setting, meaning that it maintains security even if one can encrypt arbitrary functions of arbitrarily many secret keys under each of the public keys. As a special case, the latter guarantees security in the presence of arbitrary length key cycles. Prior work already showed how to amplify n-key circular to n-key KDM security for general functions. Therefore, the main novelty of our work is to upgrade from 1-key to n-key security for arbitrary n. As an independently interesting feature of our result, our construction does not need to know the actual specification of the underlying 1-key circular secure scheme, and we only rely on the existence of some such scheme in the proof of security. In particular, we present a universal construction of a multi-key KDM-secure encryption that is secure as long as some 1-key circular-secure scheme exists. While this feature is similar in spirit to Levin's universal construction of one-way functions, the way we achieve it is quite different technically, and does not come with the same galactic inefficiency.
2023
CRYPTO
The Pseudorandom Oracle Model and Ideal Obfuscation
We introduce a new idealized model of hash functions, which we refer to as the *pseudorandom oracle* (PrO) model. Intuitively, it allows us to model cryptosystems that use the code of an ideal hash function in a non-black-box way. Formally, we model hash functions via a combination of a pseudorandom function (PRF) family and an ideal oracle. A user can initialize the hash function by choosing a PRF key $k$ and mapping it to a public handle $h$ using the oracle. Given the handle $h$ and some input $x$, the oracle can also be called to evaluate the PRF at $x$ with the corresponding key $k$. A user who chooses the PRF key $k$ therefore has a complete description of the hash function and can use its code in non-black-box constructions, while an adversary, who just gets the handle $h$, only has black-box access to the hash function via the oracle. As our main result, we show how to construct ideal obfuscation in the PrO model, starting from functional encryption (FE), which in turn can be based on well-studied polynomial hardness assumptions. In contrast, we know that ideal obfuscation cannot be instantiated in the basic random oracle model under any assumptions. We believe our result provides heuristic justification for the following: (1) most natural security goals implied by ideal obfuscation can be achieved in the real world; (2) obfuscation can be constructed from FE at polynomial security loss. We also discuss how to interpret our result in the PrO model as a construction of ideal obfuscation using simple hardware tokens or as a way to bootstrap ideal obfuscation for PRFs to that for all functions.
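The oracle interface described above is small enough to sketch directly; the following Python toy mirrors it, with HMAC-SHA256 standing in for the PRF family and a random label standing in for the idealized handle. It illustrates the interface only, not an instantiation of the model.

```python
# Toy sketch of the pseudorandom oracle (PrO) interface: the oracle maps a
# PRF key k to a public handle h, and evaluates the PRF for handle holders.
# HMAC-SHA256 stands in for the PRF family; illustrative only.
import hmac, hashlib, os

class PseudorandomOracle:
    def __init__(self):
        self._handle_to_key = {}

    def init_hash(self, k: bytes) -> bytes:
        """Map a PRF key k to a public handle h (idealized as a random label)."""
        h = os.urandom(16)
        self._handle_to_key[h] = k
        return h

    def eval(self, h: bytes, x: bytes) -> bytes:
        """Evaluate the PRF at x under the key corresponding to handle h."""
        k = self._handle_to_key[h]
        return hmac.new(k, x, hashlib.sha256).digest()

# The user who chose k has the full code of the hash function; an adversary
# holding only h gets black-box access via oracle.eval(h, x).
oracle = PseudorandomOracle()
k = os.urandom(32)
h = oracle.init_hash(k)
assert oracle.eval(h, b"x") == hmac.new(k, b"x", hashlib.sha256).digest()
```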
2023
TCC
Lower Bounds on Anonymous Whistleblowing
Willy Quach LaKyah Tyner Daniel Wichs
Anonymous transfer, recently introduced by Agrikola, Couteau and Maier [ACM22] (TCC '22), allows a sender to leak a message anonymously by participating in a public non-anonymous discussion where everyone knows who said what. This opens up the intriguing possibility of using cryptography to ensure strong anonymity guarantees in a seemingly non-anonymous environment. The work of [ACM22] presented a lower bound on anonymous transfer, ruling out constructions with strong anonymity guarantees (where the adversary's advantage in identifying the sender is negligible) against arbitrary polynomial-time adversaries. They also provided a (heuristic) upper bound, giving a scheme with weak anonymity guarantees (the adversary's advantage in identifying the sender is inverse in the number of rounds) against fine-grained adversaries whose run-time is bounded by some fixed polynomial that exceeds the run-time of the honest users. This leaves a large gap between the lower bound and the upper bound, raising the intriguing possibility that one may be able to achieve weak anonymity against arbitrary polynomial-time adversaries, or strong anonymity against fine-grained adversaries. In this work, we present improved lower bounds on anonymous transfer that rule out both of the above possibilities: - We rule out the existence of anonymous transfer with any non-trivial anonymity guarantees against general polynomial-time adversaries. - Even if we restrict ourselves to fine-grained adversaries whose run-time is essentially equivalent to that of the honest parties, we cannot achieve strong anonymity, or even quantitatively improve over the inverse polynomial anonymity guarantees (heuristically) achieved by [ACM22]. Consequently, constructions of anonymous transfer can only provide security against fine-grained adversaries, and even in that case they achieve at most weak quantitative forms of anonymity.
2023
TCC
Security with Functional Re-Encryption from CPA
The notion of functional re-encryption security (funcCPA) for public-key encryption schemes was recently introduced by Akavia et al. (TCC'22), in the context of homomorphic encryption. This notion lies in between CPA security and CCA security: we give the attacker a *functional re-encryption oracle* instead of the decryption oracle of CCA security. This oracle takes a ciphertext ct and a function f, and returns a fresh encryption of f applied to the decryption of ct; in symbols, ct'=Enc(f(Dec(ct))). In this work we observe that funcCPA security may have applications beyond homomorphic encryption, and set out to study its properties. As our main contribution, we prove that funcCPA is "closer to CPA than to CCA"; that is, funcCPA-secure encryption can be constructed in a black-box manner from CPA-secure encryption. We stress that, prior to our work, this was not known even for regular re-encryption queries corresponding to the identity function f. At the core of our result is a new technique, showing how to handle *adaptive* functional re-encryption queries using tools previously developed in the context of non-malleable encryption, which roughly corresponds to a single *non-adaptive* parallel decryption query.
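To pin down the oracle, here is a minimal Python sketch of ct'=Enc(f(Dec(ct))); the toy nonce-based "PRF one-time pad" scheme (a standard CPA-secure construction, with HMAC-SHA256 standing in for the PRF) is an illustrative assumption, not the paper's scheme.

```python
# Sketch of the funcCPA functional re-encryption oracle: ct' = Enc(f(Dec(ct))).
# Toy scheme Enc(k, m) = (r, F_k(r) XOR m), the standard nonce-based PRF
# one-time pad (CPA-secure for fixed-length messages); HMAC-SHA256 stands
# in for the PRF F. Illustrative only -- not the paper's construction.
import hmac, hashlib, os

def _pad(k: bytes, r: bytes) -> bytes:
    return hmac.new(k, r, hashlib.sha256).digest()

def enc(k: bytes, m: bytes) -> tuple:
    r = os.urandom(16)                    # fresh nonce per encryption
    return (r, bytes(a ^ b for a, b in zip(_pad(k, r), m)))

def dec(k: bytes, ct: tuple) -> bytes:
    r, c = ct
    return bytes(a ^ b for a, b in zip(_pad(k, r), c))

def func_reenc_oracle(k: bytes, ct: tuple, f) -> tuple:
    """The funcCPA oracle: decrypt, apply f, re-encrypt with fresh randomness."""
    return enc(k, f(dec(k, ct)))

k = os.urandom(32)
m = bytes(range(32))
ct2 = func_reenc_oracle(k, enc(k, m), lambda x: x[::-1])  # f reverses the plaintext
assert dec(k, ct2) == m[::-1]
```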
2023
TCC
Multi-Instance Randomness Extraction and Security against Bounded-Storage Mass Surveillance
Jiaxin Guan Daniel Wichs Mark Zhandry
Consider a state-level adversary who observes and stores large amounts of encrypted data from all users on the Internet, but does not have the capacity to store it all. Later, it may target certain "persons of interest" in order to obtain their decryption keys. We would like to guarantee that, if the adversary's storage capacity is only (say) 1% of the total encrypted data size, then even if it can later obtain the decryption keys of arbitrary users, it can only learn something about the contents of (roughly) 1% of the ciphertexts, while the rest will maintain full security. This can be seen as an extension of incompressible cryptography (Dziembowski CRYPTO '06, Guan, Wichs and Zhandry EUROCRYPT '22) to the multi-user setting. We provide solutions in both the symmetric key and public key setting with various trade-offs in terms of computational assumptions and efficiency. As the core technical tool, we study an information-theoretic problem which we refer to as "multi-instance randomness extraction". Suppose $X_1$, $\ldots$, $X_t$ are correlated random variables whose total joint min-entropy rate is $\alpha$, but we know nothing else about their individual entropies. We choose $t$ random and independent seeds $S_1,\ldots, S_t$ and attempt to individually extract some small amount of randomness $Y_i = Ext(X_i; S_i)$ from each $X_i$. We'd like to say that roughly an $\alpha$-fraction of the extracted outputs $Y_i$ should be indistinguishable from uniform even given all the remaining extracted outputs and all the seeds. We show that this indeed holds for specific extractors based on Hadamard and Reed-Muller codes.
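As a toy illustration of the multi-instance extraction setup, the following Python sketch extracts one bit from each source with the inner-product extractor underlying the Hadamard-code-based instantiation; the sources here are placeholders, and the guarantee quoted in the comment is the paper's claim, not something the toy verifies.

```python
# Toy multi-instance randomness extraction: each X_i gets its own independent
# seed S_i, and we extract Y_i = Ext(X_i; S_i) with the one-bit inner-product
# extractor (the Hadamard-code-based instantiation). Illustrative only.
import random

def hadamard_ext(x_bits, seed_bits):
    """One-bit inner-product extractor <x, s> mod 2 (Hadamard code)."""
    return sum(xi & si for xi, si in zip(x_bits, seed_bits)) % 2

rng = random.Random(1)
t, n = 8, 64                        # t correlated sources, n bits each
xs = [[rng.getrandbits(1) for _ in range(n)] for _ in range(t)]     # stand-in sources
seeds = [[rng.getrandbits(1) for _ in range(n)] for _ in range(t)]  # independent seeds
ys = [hadamard_ext(x, s) for x, s in zip(xs, seeds)]
# The paper's result: if the joint min-entropy rate of X_1..X_t is alpha, then
# roughly an alpha-fraction of the Y_i are close to uniform, even given the
# remaining outputs and all the seeds, for this specific extractor family.
print(ys)
```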
2022
EUROCRYPT
Incompressible Cryptography
Jiaxin Guan Daniel Wichs Mark Zhandry
Incompressible encryption allows us to make the ciphertext size flexibly large and ensures that an adversary learns nothing about the encrypted data, even if the decryption key later leaks, unless she stores essentially the entire ciphertext. Incompressible signatures can be made arbitrarily large and ensure that an adversary cannot produce a signature on any message, even one she has seen signed before, unless she stores one of the signatures essentially in its entirety. In this work, we give simple constructions of both incompressible public-key encryption and signatures under minimal assumptions. Furthermore, large incompressible ciphertexts (resp. signatures) can be decrypted (resp. verified) in a streaming manner with low storage. In particular, these notions strengthen the related concepts of disappearing encryption and signatures, recently introduced by Guan and Zhandry (TCC 2021), whose previous constructions relied on sophisticated techniques and strong, non-standard assumptions. We extend our constructions to achieve an optimal "rate", meaning the large ciphertexts (resp. signatures) can contain almost equally large messages, at the cost of stronger assumptions.
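For intuition, here is a toy symmetric-key sketch in the spirit of forward-secure storage (Dziembowski, CRYPTO '06): the ciphertext is a large random string R plus a one-time pad of the message under an extractor applied to R. SHA-256 stands in for the extractor, and none of this is the paper's actual construction.

```python
# Toy symmetric-key incompressible encryption in the spirit of forward-secure
# storage: the ciphertext carries a huge random string R, and the message is
# padded with Ext(R; key). SHA-256 stands in for the extractor; toy only.
import hashlib, os

CT_PADDING = 1 << 20   # 1 MiB of random bulk; flexibly large in the real scheme

def enc(key: bytes, msg: bytes) -> tuple:
    R = os.urandom(CT_PADDING)                       # the incompressible bulk
    pad = hashlib.sha256(key + R).digest()[: len(msg)]
    return (R, bytes(a ^ b for a, b in zip(pad, msg)))

def dec(key: bytes, ct: tuple) -> bytes:
    R, c = ct
    pad = hashlib.sha256(key + R).digest()[: len(c)]
    return bytes(a ^ b for a, b in zip(pad, c))

# An adversary who cannot store essentially all of R learns nothing about
# msg even if key leaks later, since Ext(R; key) is then unpredictable.
key = os.urandom(32)
ct = enc(key, b"attack at dawn")
assert dec(key, ct) == b"attack at dawn"
```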
2022
EUROCRYPT
Authentication in the Bounded Storage Model
We consider the streaming variant of the Bounded Storage Model (BSM), where the honest parties can stream large amounts of data to each other, while only maintaining a small memory of size $n$. The adversary also operates as a streaming algorithm, but has a much larger memory size $m \gg n$. The goal is to construct unconditionally secure cryptographic schemes in the BSM, and prior works did so for symmetric-key encryption, key agreement, oblivious transfer and multiparty computation. In this work, we construct message authentication and signatures in the BSM. First, we consider the symmetric-key setting, where Alice and Bob share a small secret key. Alice can authenticate arbitrarily many messages to Bob by streaming long authentication tags of size $k \gg m$, while ensuring that the tags can be generated and verified using only $n$ bits of memory. We show a solution using local extractors (Vadhan; JoC '04), which allows for up to exponentially large adversarial memory $m = 2^{O(n)}$, and has tags of size $k = O(m)$. Second, we consider the same setting as above, but now additionally require each individual tag to be small, of size $k \leq n$. We show that a solution is still possible when the adversary's memory is $m = O(n^2)$, which is optimal. Our solution relies on a space lower bound for learning parities (Raz; FOCS '16). Third, we consider the public-key signature setting. A signer Alice initially streams a long verification key over an authentic channel, while only keeping a short signing key in her memory. A verifier Bob receives the streamed verification key and generates some short verification digest that he keeps in his memory. Later, Alice can sign arbitrarily many messages using her signing key by streaming the signatures to Bob, who can verify them using his verification digest. We show a solution for $m = O(n^2)$, which we show to be optimal. Our solution relies on a novel entropy lemma, of independent interest. We show that, if a sequence of blocks has sufficiently high min-entropy, then a large fraction of individual blocks must have high min-entropy. Naive versions of this lemma are false, but we show how to patch it to make it hold.
2022
CRYPTO
Nearly Optimal Property Preserving Hashing
Property-preserving hashing (PPH) consists of a family of compressing hash functions $h$ such that, for any two inputs $x,y$, we can correctly identify whether some property $P(x,y)$ holds given only the digests $h(x),h(y)$. In a basic PPH, correctness should hold with overwhelming probability over the choice of $h$ when $x,y$ are worst-case values chosen a-priori and independently of $h$. In an adversarially robust PPH (RPPH), correctness must hold even when $x,y$ are chosen adversarially and adaptively depending on $h$. Here, we study (R)PPH for the property that the Hamming distance between $x$ and $y$ is at most $t$. The notion of (R)PPH was introduced by Boyle, LaVigne and Vaikuntanathan (ITCS '19), and further studied by Fleischhacker, Simkin (Eurocrypt '21) and Fleischhacker, Larsen, Simkin (Eurocrypt '22). In this work, we obtain improved constructions that are conceptually simpler, have nearly optimal parameters, and rely on more general assumptions than prior works. Our results are: * We construct information-theoretic non-robust PPH for Hamming distance via syndrome list-decoding of linear error-correcting codes. We provide a lower bound showing that this construction is essentially optimal. * We make the above construction robust with little additional overhead, by relying on homomorphic collision-resistant hash functions, which can be constructed from either the discrete-logarithm or the short-integer-solution assumptions. The resulting RPPH achieves improved compression compared to prior constructions, and is nearly optimal. * We also show an alternate construction of RPPH for Hamming distance under the minimal assumption that standard collision-resistant hash functions exist. The compression is slightly worse than our optimized construction using homomorphic collision-resistance, but essentially matches the prior state-of-the-art constructions from specific algebraic assumptions. * Lastly, we study a new notion of randomized robust PPH (R2P2H) for Hamming distance, which relaxes RPPH by allowing the hashing algorithm itself to be randomized. We give an information-theoretic construction with optimal parameters.
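The syndrome idea behind the information-theoretic construction can be seen in a tiny brute-force Python sketch; the random parity-check matrix and exhaustive decoder are illustrative stand-ins for the list-decodable codes the paper actually uses.

```python
# Toy syndrome-based PPH for Hamming distance: hash x to its syndrome H x
# over GF(2); then h(x) XOR h(y) = H (x XOR y), so deciding dist(x, y) <= t
# reduces to syndrome-decoding the difference. Brute force, tiny parameters.
import itertools, random

n, m, t = 16, 10, 2                      # input bits, syndrome bits, threshold
rng = random.Random(0)
H = [[rng.getrandbits(1) for _ in range(n)] for _ in range(m)]  # random parity checks

def hash_syndrome(x):
    return tuple(sum(hi & xi for hi, xi in zip(row, x)) % 2 for row in H)

def close(hx, hy):
    """Decide whether dist(x, y) <= t from the two digests alone."""
    s = tuple(a ^ b for a, b in zip(hx, hy))
    for w in range(t + 1):               # brute-force syndrome decoding up to weight t
        for pos in itertools.combinations(range(n), w):
            e = [1 if i in pos else 0 for i in range(n)]
            if hash_syndrome(e) == s:
                return True
    return False

x = [rng.getrandbits(1) for _ in range(n)]
y = list(x); y[3] ^= 1                   # flip one bit: distance 1 <= t
assert close(hash_syndrome(x), hash_syndrome(y))
```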
2022
ASIACRYPT
Witness Encryption and Null-IO from Evasive LWE
Witness encryption (WE) allows us to use an arbitrary NP statement $x$ as a public key to encrypt a message, and the witness $w$ serves as a decryption key. Security ensures that, when the statement $x$ is false, the encrypted message remains computationally hidden. WE appears to be significantly weaker than indistinguishability obfuscation (iO). Indeed, WE is closely related to a highly restricted form of iO that only guarantees security for null circuits (null iO). However, all current approaches towards constructing WE under nice assumptions go through iO. Such constructions are quite complex and are unlikely to lead to practically instantiable schemes. In this work, we revisit a very simple WE and null iO candidate of Chen, Vaikuntanathan and Wee (CRYPTO 2018). We show how to prove its security under a nice and easy-to-state assumption that we refer to as {\em evasive LWE} following Wee (EUROCRYPT 2022). Roughly speaking, the evasive LWE assumption says the following: assume we have some joint distributions over matrices $\mathbf{P}$, $\mathbf{S}$ and auxiliary information $\mathsf{aux}$ such that $$(\mathbf{S}\mathbf{B} + \mathbf{E},\ \mathbf{S}\mathbf{P} + \mathbf{E}',\ \mathsf{aux}) \approx_c (\mathbf{U},\ \mathbf{U}',\ \mathsf{aux}),$$ for a uniformly random (and secret) matrix $\mathbf{B}$, where $\mathbf{U}, \mathbf{U}'$ are uniformly random matrices, and $\mathbf{E},\mathbf{E}'$ are chosen from the LWE error distribution with appropriate parameters. Then it must also be the case that: $$(\mathbf{S}\mathbf{B} + \mathbf{E},\ \mathbf{B}^{-1}(\mathbf{P}),\ \mathsf{aux}) \approx_c (\mathbf{U},\ \mathbf{B}^{-1}(\mathbf{P}),\ \mathsf{aux}).$$ Essentially the above says that given $\mathbf{S}\mathbf{B} + \mathbf{E}$, getting the additional component $\mathbf{B}^{-1}(\mathbf{P})$ is no more useful than just getting the product $(\mathbf{S}\mathbf{B} + \mathbf{E})\cdot \mathbf{B}^{-1}(\mathbf{P}) \approx \mathbf{S}\mathbf{P} + \mathbf{E}'$.
2022
TCC
Post-Quantum Insecurity from LWE
We show that for many fundamental cryptographic primitives, proving classical security under the learning-with-errors (LWE) assumption does \emph{not} imply post-quantum security. This is despite the fact that LWE is widely believed to be post-quantum secure, and our work does not give any evidence otherwise. Instead, it shows that post-quantum insecurity can arise inside cryptographic constructions, even if the assumptions are post-quantum secure. Concretely, our work provides (contrived) constructions of pseudorandom functions, CPA-secure symmetric-key encryption, message-authentication codes, signatures, and CCA-secure public-key encryption schemes, all of which are proven to be classically secure under LWE via black-box reductions, but demonstrably fail to be post-quantum secure. All of these cryptosystems are stateless and non-interactive, but their security is defined via an interactive game that allows the attacker to make oracle queries to the cryptosystem. The polynomial-time quantum attacker can break these schemes by only making a few \emph{classical} queries to the cryptosystem, and in some cases, a single query suffices. Previously, we only had examples of post-quantum insecurity under post-quantum assumptions for stateful/interactive protocols. Moreover, there appears to be a folklore belief that for stateless/non-interactive cryptosystems with black-box proofs of security, a quantum attack against the scheme should translate into a quantum attack on the assumption. This work shows otherwise. Our main technique is to carefully embed interactive protocols inside the interactive security games of the above primitives. As a result of independent interest, we also show a 3-round \emph{quantum disclosure of secrets (QDS)} protocol between a classical sender and a receiver, where a quantum receiver learns a secret message in the third round but, assuming LWE, a classical receiver does not.
2021
TCC
Succinct LWE Sampling, Random Polynomials, and Obfuscation
We present a construction of indistinguishability obfuscation (iO) that relies on the learning with errors (LWE) assumption together with a new notion of succinctly sampling pseudo-random LWE samples. We then present a candidate LWE sampler whose security is related to the hardness of solving systems of polynomial equations. Our construction improves on the recent iO candidate of Wee and Wichs (Eurocrypt 2021) in two ways: first, we show that a much weaker and simpler notion of LWE sampling suffices for iO; and secondly, our candidate LWE sampler is secure based on a compactly specified and falsifiable assumption about random polynomials, with a simple error distribution that facilitates cryptanalysis.
2021
EUROCRYPT
Candidate Obfuscation via Oblivious LWE Sampling
Hoeteck Wee Daniel Wichs
We present a new, simple candidate construction of indistinguishability obfuscation (iO). Our scheme is inspired by lattices and learning-with-errors (LWE) techniques, but we are unable to prove security under a standard assumption. Instead, we formulate a new falsifiable assumption under which the scheme is secure. Furthermore, the scheme plausibly achieves post-quantum security. Our construction is based on the recent ``split FHE'' framework of Brakerski, Döttling, Garg, and Malavolta (EUROCRYPT '20), and we provide a new instantiation of this framework. As a first step, we construct an iO scheme that is provably secure assuming that LWE holds and that it is possible to obliviously generate LWE samples without knowing the corresponding secrets. We define a precise notion of oblivious LWE sampling that suffices for the construction. It is known how to obliviously sample from any distribution (in a very strong sense) using iO, and our result provides a converse, showing that the ability to obliviously sample from the specific LWE distribution (in a much weaker sense) already also implies iO. As a second step, we give a heuristic construction of oblivious LWE sampling. On a very high level, we do this by homomorphically generating pseudorandom LWE samples using an encrypted pseudorandom function.
2021
CRYPTO
Limits on the Adaptive Security of Yao’s Garbling
Yao’s garbling scheme is one of the most fundamental cryptographic constructions. Lindell and Pinkas (Journal of Cryptology 2009) gave a formal proof of security in the selective setting assuming secure symmetric-key encryption (and hence one-way functions). This was followed by results, both positive and negative, concerning its security in the stronger, adaptive setting. Applebaum et al. (Crypto 2013) showed that it cannot satisfy adaptive security as is, due to a simple incompressibility argument. Jafargholi and Wichs (TCC 2017) considered a natural adaptation of Yao’s scheme that circumvents this negative result, and proved that it is adaptively secure, at least for shallow circuits. In particular, they showed that for the class of circuits of depth d, the loss in security is at most exponential in d. The above results all concern the simulation-based notion of security. In this work, we show that the upper bound of Jafargholi and Wichs is more or less optimal in a strong sense. As our main result, we show that there exists a family of Boolean circuits, one for each depth d ∈ N, such that any black-box reduction proving the adaptive indistinguishability-security of the natural adaptation of Yao’s scheme from any symmetric-key encryption has to lose a factor that is sub-exponential in d. Since indistinguishability is a weaker notion than simulation, our bound also applies to adaptive simulation. To establish our results, we build on the recent approach of Kamath et al. (Eprint 2021), which uses pebbling lower bounds in conjunction with oracle separations to prove fine-grained lower bounds on loss in cryptographic security.
2021
CRYPTO
Targeted Lossy Functions and Applications
Willy Quach Brent Waters Daniel Wichs
Lossy trapdoor functions, introduced by Peikert and Waters (STOC '08), can be initialized in one of two indistinguishable modes: in injective mode, the function preserves all information about its input, and can be efficiently inverted given a trapdoor, while in lossy mode, the function loses some information about its input. Such functions have found countless applications in cryptography, and can be constructed from a variety of Cryptomania assumptions. In this work, we introduce \emph{targeted lossy functions (TLFs)}, which relax lossy trapdoor functions along two orthogonal dimensions. Firstly, they do not require an inversion trapdoor in injective mode. Secondly, the lossy mode of the function is initialized with some target input, and the function is only required to lose information about this particular target. The injective and lossy modes should be indistinguishable even given the target. We construct TLFs from Minicrypt assumptions, namely, injective pseudorandom generators, or even one-way functions under a natural relaxation of injectivity. We then generalize TLFs to incorporate \emph{branches}, and construct \emph{all-injective-but-one} and \emph{all-lossy-but-one} variants. We show a wide variety of applications of targeted lossy functions. In several cases, we get the first Minicrypt constructions of primitives that were previously only known under Cryptomania assumptions. Our applications include: - \emph{Pseudo-entropy functions} from one-way functions. - Deterministic leakage-resilient message-authentication codes and improved leakage-resilient symmetric-key encryption from one-way functions. - Extractors for \emph{extractor-dependent sources} from one-way functions. - Selective-opening secure symmetric-key encryption from one-way functions. - A new construction of CCA PKE from (exponentially secure) trapdoor functions and injective pseudorandom generators. We also discuss a fascinating connection to distributed point functions.
2021
TCC
Updatable Public Key Encryption in the Standard Model
Forward security (FS) ensures that corrupting the current secret key in the system preserves the privacy or integrity of the prior usages of the system. Achieving forward security is especially hard in the setting of public-key encryption (PKE), where time is divided into periods, and in each period the receiver derives the next-period secret key from their current secret key, while the public key stays constant. Indeed, all current constructions of FS-PKE are built from hierarchical identity-based encryption (HIBE) and are rather complicated. Motivated by applications to secure messaging, recent works of Jost et al. (Eurocrypt’19) and Alwen et al. (CRYPTO’20) consider a natural relaxation of FS-PKE, which they term *updatable* PKE (UPKE). In this setting, the transition to the next period can be initiated by any sender, who can compute a special update ciphertext. This ciphertext directly produces the next-period public key and can be processed by the receiver to compute the next-period secret key. If done honestly, future (regular) ciphertexts produced with the new public key can be decrypted with the new secret key, but past such ciphertexts cannot be decrypted with the new secret key. Moreover, this is true even if all other previous-period updates were initiated by untrusted senders. Both papers also constructed a very simple UPKE scheme based on the CDH assumption in the random oracle model. However, they left open the question of building such schemes in the standard model, or based on other (e.g., post-quantum) assumptions, without using the heavy HIBE techniques. In this work, we construct two efficient UPKE schemes in the standard model, based on the DDH and LWE assumptions, respectively. Somewhat interestingly, our constructions gain their efficiency (compared to prior FS-PKE schemes from the same assumptions) by using tools from the area of circular-secure and leakage resilient public-key encryption schemes (rather than HIBE).
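The UPKE update flow is easy to see in a toy discrete-log sketch of the simple mechanism from the earlier random-oracle schemes (any sender shifts the key pair by a random delta); the group parameters are toy values, and the step where delta is sent encrypted under the current public key is elided.

```python
# Toy sketch of the discrete-log-style UPKE update from the earlier
# random-oracle schemes (Jost et al., Alwen et al.); this paper instead
# builds standard-model schemes from DDH and LWE. Toy parameters, not secure.
import random
rng = random.Random(42)

p = 2**127 - 1          # a Mersenne prime; toy group Z_p^*
g = 3

def keygen():
    s = rng.randrange(1, p - 1)
    return pow(g, s, p), s                  # (pk, sk)

def update_public(pk, delta):
    """Sender side: new pk = pk * g^delta. (Delta itself is sent to the
    receiver encrypted under the current pk; that step is elided here.)"""
    return (pk * pow(g, delta, p)) % p

def update_secret(sk, delta):
    """Receiver side: new sk = sk + delta; the old sk is erased, so past
    ciphertexts stay private even if the new sk later leaks."""
    return (sk + delta) % (p - 1)

pk, sk = keygen()
delta = rng.randrange(1, p - 1)
pk2, sk2 = update_public(pk, delta), update_secret(sk, delta)
assert pow(g, sk2, p) == pk2                # keys stay consistent after update
```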
2021
JOFC
Is There an Oblivious RAM Lower Bound for Online Reads?
Mor Weiss Daniel Wichs
Oblivious RAM (ORAM), introduced by Goldreich (STOC 1987) and Ostrovsky (STOC 1990), can be used to read and write to memory in a way that hides which locations are being accessed. The best known ORAM schemes have an $O(\log n)$ overhead per access, where $n$ is the data size. The work of Goldreich and Ostrovsky (JACM 1996) gave a lower bound, showing that this is optimal for ORAM schemes that operate in a “balls and bins” model, where memory blocks can only be shuffled between different locations but not manipulated otherwise (and the server is used solely as remote storage). The lower bound even extends to weaker settings such as offline ORAM, where all of the accesses to be performed need to be specified ahead of time, and read-only ORAM, which only allows reads but not writes. But can we get lower bounds for general ORAM, beyond “balls and bins”? The work of Boyle and Naor (ITCS 2016) shows that this is unlikely in the offline setting. In particular, they construct an offline ORAM with $o(\log n)$ overhead assuming the existence of small sorting circuits. Although we do not have instantiations of the latter, ruling them out would require proving new circuit lower bounds. On the other hand, the recent work of Larsen and Nielsen (CRYPTO 2018) shows that there indeed is an $\Omega(\log n)$ lower bound for general online ORAM. This still leaves the question open for online read-only ORAM or for read/write ORAM where we want very small overhead for the read operations. In this work, we show that a lower bound in these settings is also unlikely. In particular, our main result is a construction of online ORAM, in which the server is used solely as remote storage, where reads (but not writes) have an $o(\log n)$ overhead, assuming the existence of small sorting circuits as well as very good locally decodable codes (LDCs). Although we do not have instantiations of either of these with the required parameters, ruling them out is beyond current lower bounds.
2020
EUROCRYPT
Two-Round Oblivious Transfer from CDH or LPN
We show a new general approach for constructing maliciously-secure two-round oblivious transfer (OT). Specifically, we provide a generic sequence of transformations to upgrade a very basic notion of two-round OT, which we call elementary OT, to UC-secure OT. We then give simple constructions of elementary OT under the Computational Diffie-Hellman (CDH) assumption or the Learning Parity with Noise (LPN) assumption, yielding the first constructions of malicious (UC-secure) two-round OT under these assumptions. Since two-round OT is complete for two-round 2-party and multi-party computation in the malicious setting, we also achieve the first constructions of the latter under these assumptions.
2020
EUROCRYPT
Extracting Randomness from Extractor-Dependent Sources
We revisit the well-studied problem of extracting nearly uniform randomness from an arbitrary source of sufficient min-entropy. Strong seeded extractors solve this problem by relying on a public random seed, which is unknown to the source. Here, we consider a setting where the seed is reused over time and the source may depend on prior calls to the extractor with the same seed. Can we still extract nearly uniform randomness? In more detail, we assume the seed is chosen randomly, but the source can make arbitrary oracle queries to the extractor with the given seed before outputting a sample. We require that the sample has entropy and differs from any of the previously queried values. The extracted output should look uniform even to a distinguisher that gets the seed. We consider two variants of the problem, depending on whether the source only outputs the sample, or whether it can also output some correlated public auxiliary information that preserves the sample's entropy. Our results are: * Without Auxiliary Information: We show that every pseudo-random function (PRF) with a sufficiently high security level is a good extractor in this setting, even if the distinguisher is computationally unbounded. We further show that the source necessarily needs to be computationally bounded and that such extractors imply one-way functions. * With Auxiliary Information: We construct secure extractors in this setting, as long as both the source and the distinguisher are computationally bounded. We give several constructions based on different intermediate primitives, yielding instantiations based on the DDH, DLIN, LWE or DCR assumptions. On the negative side, we show that one cannot prove security against computationally unbounded distinguishers in this setting under any standard assumption via a black-box reduction. Furthermore, even when restricting to computationally bounded distinguishers, we show that there exist PRFs that are insecure as extractors in this setting and that a large class of constructions cannot be proven secure via a black-box reduction from standard assumptions.
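The first result above (no auxiliary information) says that any sufficiently secure PRF, keyed by the seed, is a good extractor in this setting; the following Python toy shows the usage pattern, with HMAC-SHA256 standing in for the PRF.

```python
# Toy illustration of using a PRF (keyed by the seed) as an extractor for
# extractor-dependent sources, per the paper's first result. HMAC-SHA256
# stands in for a sufficiently secure PRF family; illustrative only.
import hmac, hashlib, os

def ext(seed: bytes, sample: bytes) -> bytes:
    """Extractor Ext(sample; seed) instantiated as PRF_seed(sample)."""
    return hmac.new(seed, sample, hashlib.sha256).digest()

seed = os.urandom(32)

# The source may query the extractor with the *same* seed before producing
# its sample; security holds as long as the final sample has entropy and
# differs from all previously queried values.
queries = [ext(seed, bytes([i])) for i in range(4)]   # source probes the extractor
sample = os.urandom(32)                               # fresh, high-entropy sample
y = ext(seed, sample)    # should look uniform, even to a distinguisher given seed
```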
2020
EUROCRYPT
Statistical ZAPR Arguments from Bilinear Maps
Dwork and Naor (FOCS '00) defined ZAPs as 2-message witness-indistinguishable proofs that are public-coin. We relax this to \emph{ZAPs with private Randomness} (ZAPRs), where the verifier can use private coins to sample the first message (independently of the statement being proved), but the proof must remain publicly verifiable given only the protocol transcript. In particular, ZAPRs are \emph{reusable}, meaning that the first message can be reused for multiple proofs without compromising security. Known constructions of ZAPs from trapdoor permutations or bilinear maps are only computationally WI (and statistically sound). Two recent results of Badrinarayanan-Fernando-Jain-Khurana-Sahai and Goyal-Jain-Jin-Malavolta [EUROCRYPT '20] construct the first \emph{statistical ZAP arguments}, which are statistically WI (and computationally sound), from the quasi-polynomial LWE assumption. Here, we construct \emph{statistical ZAPR arguments} from the quasi-polynomial decision-linear (DLIN) assumption on groups with a bilinear map. Our construction relies on a combination of several tools including Groth-Ostrovsky-Sahai NIZK and NIWI [EUROCRYPT '06, CRYPTO '06, JACM '12], ``sometimes-binding statistically hiding commitments'' [Kalai-Khurana-Sahai, EUROCRYPT '18] and the ``MPC-in-the-head'' technique [Ishai-Kushilevitz-Ostrovsky-Sahai, STOC '07].
2020
PKC
Witness Maps and Applications
We introduce the notion of Witness Maps as a cryptographic notion of a proof system. A Unique Witness Map (UWM) deterministically maps all witnesses for an $\mathbf{NP}$ statement to a single representative witness, resulting in a computationally sound, deterministic-prover, non-interactive, witness-independent proof system. A relaxation of UWM, called Compact Witness Map (CWM), maps all the witnesses to a small number of witnesses, resulting in a “lossy” deterministic-prover, non-interactive proof system. We also define a Dual Mode Witness Map (DMWM) which adds an “extractable” mode to a CWM. Our main construction is a DMWM for all $\mathbf{NP}$ relations, assuming sub-exponentially secure indistinguishability obfuscation ($i\mathcal{O}$), along with standard cryptographic assumptions. The DMWM construction relies on a CWM and a new primitive called Cumulative All-Lossy-But-One Trapdoor Functions (C-ALBO-TDF), both of which are in turn instantiated based on $i\mathcal{O}$ and other primitives. Our instantiation of a CWM is in fact a UWM; in turn, we show that a UWM implies Witness Encryption. Along the way to constructing UWM and C-ALBO-TDF, we also construct, from standard assumptions, Puncturable Digital Signatures and a new primitive called Cumulative Lossy Trapdoor Functions (C-LTDF). The former improves upon a construction of Bellare et al. (Eurocrypt 2016), who relied on sub-exponentially secure $i\mathcal{O}$ and sub-exponentially secure OWF. As an application of our constructions, we show how to use a DMWM to construct the first leakage and tamper-resilient signatures with a deterministic signer, thereby solving a decade-old open problem posed by Katz and Vaikuntanathan (Asiacrypt 2009), by Boyle, Segev and Wichs (Eurocrypt 2011), as well as by Faonio and Venturi (Asiacrypt 2016). Our construction achieves the optimal leakage rate of $1 - o(1)$.
2020
CRYPTO
Leakage-Resilient Key Exchange and Two-Seed Extractors
Xin Li Fermi Ma Willy Quach Daniel Wichs
Can Alice and Bob agree on a uniformly random secret key without having any truly secret randomness to begin with? Here we consider a setting where Eve can get partial leakage on the internal state of both Alice and Bob individually before the protocol starts. They then run a protocol using their states without any additional randomness and need to agree on a shared key that looks uniform to Eve, even after observing the leakage and the protocol transcript. We focus on non-interactive (one round) key exchange (NIKE), where Alice and Bob send one message each without waiting for one another. We first consider this problem in the symmetric-key setting, where the states of Alice and Bob include a shared secret as well as individual uniform randomness. However, since Eve gets leakage on these states, Alice and Bob need to perform privacy amplification to derive a fresh secret key from them. Prior solutions require Alice and Bob to sample fresh uniform randomness during the protocol, while in our setting all of their randomness was already part of their individual states a priori and was therefore subject to leakage. We show an information-theoretic solution to this problem using a novel primitive that we call a two-seed extractor, which we in turn construct by drawing a connection to communication-complexity lower bounds in the number-on-forehead (NOF) model. We then turn to studying this problem in the public-key setting, where the states of Alice and Bob consist of independent uniform randomness. Unfortunately, we give a black-box separation showing that leakage-resilient NIKE in this setting cannot be proven secure via a black-box reduction under any game-based assumption when the leakage is super-logarithmic. This includes virtually all assumptions used in cryptography, and even very strong assumptions such as indistinguishability obfuscation (iO). Nevertheless, we also provide positive results that get around the above separation: - We show that every key exchange protocol (e.g., Diffie-Hellman) is secure when the leakage amount is logarithmic, or potentially even greater if we assume sub-exponential security without leakage. - We notice that the black-box separation does not extend to schemes in the common reference string (CRS) model, or to schemes with preprocessing, where Alice and Bob can individually pre-process their random coins to derive their secret state prior to leakage. We give a solution in the CRS model with preprocessing using bilinear maps. We also give solutions in just the CRS model alone (without preprocessing) or just with preprocessing (without a CRS), using iO and lossy functions.
2020
CRYPTO
Incompressible Encodings
Tal Moran Daniel Wichs
An incompressible encoding can probabilistically encode some data $m$ into a codeword $c$, which is not much larger. Anyone can decode $c$ to recover the original data $m$. However, $c$ cannot be efficiently compressed, even if the original data $m$ is given to the decompression procedure for free. In other words, $c$ is a representation of $m$, yet is computationally incompressible even given $m$. An incompressible encoding is composable if many encodings cannot be simultaneously compressed into anything sufficiently smaller than their concatenation. A recent work of Damgård, Ganesh and Orlandi (CRYPTO '19) defined a variant of incompressible encodings and gave an application to ``proofs of replicated storage''. They constructed incompressible encodings in an ideal permutation model over a structured domain, but it was left open if they can be constructed under standard assumptions, or even in the more basic random-oracle model. In this work, we give new constructions, negative results and applications of incompressible encodings: * We construct incompressible encodings in the common random string (CRS) model under the Decisional Composite Residuosity (DCR) or Learning with Errors (LWE) assumptions. However, the construction has several drawbacks: (1) it is not composable, (2) it only achieves selective security, and (3) the CRS is as long as the data $m$. * We leverage the above construction to also get a scheme in the random-oracle model, under the same assumptions, that avoids all of the above drawbacks. Furthermore, it is significantly more efficient than the prior ideal-model construction. * We give black-box separations, showing that incompressible encodings in the plain model cannot be proven secure under any standard hardness assumption, and incompressible encodings in the CRS model must inherently suffer from all of the drawbacks above. * We give a new application to ``big-key cryptography in the bounded-retrieval model'', where secret keys are made intentionally huge to make them hard to exfiltrate. Using incompressible encodings, we can get all the security benefits of a big key without wasting storage space, by having the key encode useful data.
2020
TCC
Optimal Broadcast Encryption from LWE and Pairings in the Standard Model
Broadcast Encryption with optimal parameters was a long-standing problem, whose first solution was provided in an elegant work by Boneh, Waters and Zhandry [BWZ14]. However, this work relied on multilinear maps of logarithmic degree, which is not considered a standard assumption. Recently, Agrawal and Yamada [AY20] improved this state of affairs by providing the first construction of optimal broadcast encryption from Bilinear Maps and Learning With Errors (LWE). However, their proof of security was in the generic bilinear group model. In this work, we improve upon their result by providing a new construction and proof in the standard model. In more detail, we rely on the Learning With Errors (LWE) assumption and the Knowledge of OrthogonALity Assumption (KOALA) [BW19] on bilinear groups. Our construction combines three building blocks: a (computational) nearly linear secret sharing scheme with compact shares which we construct from LWE, an inner-product functional encryption scheme with special properties which is constructed from the bilinear Matrix Decision Diffie Hellman (MDDH) assumption, and a certain form of hyperplane obfuscation, which is constructed using the KOALA assumption. While similar to that of Agrawal and Yamada, our construction provides a new understanding of how to decompose the construction into simpler, modular building blocks with concrete and easy-to-understand security requirements for each one. We believe this sheds new light on the requirements for optimal broadcast encryption, which may lead to new constructions in the future.
2019
EUROCRYPT
Private Anonymous Data Access
We consider a scenario where a server holds a huge database that it wants to make accessible to a large group of clients. After an initial setup phase, clients should be able to read arbitrary locations in the database while maintaining privacy (the server does not learn which locations are being read) and anonymity (the server does not learn which client is performing each read). This should hold even if the server colludes with a subset of the clients. Moreover, the run-time of both the server and the client during each read operation should be low, ideally only poly-logarithmic in the size of the database and the number of clients. We call this notion Private Anonymous Data Access (PANDA). PANDA simultaneously combines aspects of Private Information Retrieval (PIR) and Oblivious RAM (ORAM). PIR has no initial setup, and allows anybody to privately and anonymously access a public database, but the server’s run-time is linear in the data size. On the other hand, ORAM achieves poly-logarithmic server run-time, but requires an initial setup after which only a single client with a secret key can access the database. The goal of PANDA is to get the best of both worlds: allow many clients to privately and anonymously access the database as in PIR, while having an efficient server as in ORAM. In this work, we construct bounded-collusion PANDA schemes, where the efficiency scales linearly with a bound on the number of corrupted clients that can collude with the server, but is otherwise poly-logarithmic in the data size and the total number of clients. Our solution relies on standard assumptions, namely the existence of fully homomorphic encryption, and combines techniques from both PIR and ORAM. We also extend PANDA to settings where clients can write to the database.
2019
EUROCRYPT
Reusable Designated-Verifier NIZKs for all NP from CDH
Non-interactive zero-knowledge proofs (NIZKs) are a fundamental cryptographic primitive. Despite a long history of research, we only know how to construct NIZKs under a few select assumptions, such as the hardness of factoring or using bilinear maps. Notably, there are no known constructions based on either the computational or decisional Diffie-Hellman (CDH/DDH) assumption without relying on a bilinear map. In this paper, we study a relaxation of NIZKs in the designated verifier setting (DV-NIZK), in which the public common-reference string is generated together with a secret key that is given to the verifier in order to verify proofs. In this setting, we distinguish between one-time and reusable schemes, depending on whether they can be used to prove only a single statement or arbitrarily many statements. For reusable schemes, the main difficulty is to ensure that soundness continues to hold even when the malicious prover learns whether various proofs are accepted or rejected by the verifier. One-time DV-NIZKs are known to exist for general NP statements assuming only public-key encryption. However, prior to this work, we did not have any construction of reusable DV-NIZKs for general NP statements from any assumption under which we didn’t already also have standard NIZKs. In this work, we construct reusable DV-NIZKs for general NP statements under the CDH assumption, without requiring a bilinear map. Our construction is based on the hidden-bits paradigm, which was previously used to construct standard NIZKs. We define a cryptographic primitive called a hidden-bits generator (HBG), along with a designated-verifier variant (DV-HBG), which modularly abstract out how to use this paradigm to get both standard NIZKs and reusable DV-NIZKs. We construct a DV-HBG scheme under the CDH assumption by relying on techniques from the Cramer-Shoup hash-proof system, and this yields our reusable DV-NIZK for general NP statements under CDH. We also consider a strengthening of DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK) where the setup consists of an honestly generated common random string and the verifier then gets to choose his own (potentially malicious) public/secret key pair to generate/verify proofs. We construct MDV-NIZKs under the “one-more CDH” assumption without relying on bilinear maps.
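The hidden-bits generator abstraction is a small interface; the Python stub below sketches it with illustrative names and elided bodies, following the description above.

```python
# Interface sketch of a hidden-bits generator (HBG); names are hypothetical
# and the algorithm bodies are elided. In the hidden-bits paradigm, the
# prover generates a committed pseudorandom bit string and selectively opens
# positions, which compiles into a NIZK (or, with a secret verification key,
# a reusable DV-NIZK).

def gen_bits(crs, seed):
    """Prover: output (com, bits, openings) -- a short commitment to the
    hidden-bit string, the string itself, and per-position certificates."""
    ...

def verify_bit(crs, com, i, bit, opening) -> bool:
    """Verifier: check that position i of the committed string equals bit.
    In the designated-verifier variant this check uses a secret key."""
    ...
```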
2019
EUROCRYPT
Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing
We present a worst case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires extremely high noise rate of $1/2-1/\mathrm{poly}(n)$. Thus we can only show that “very hard” LPN is harder than some “very mildly hard” worst case problem. We note that LPN with noise $1/2-1/\mathrm{poly}(n)$ already implies symmetric cryptography. Specifically, we consider the (n, m, w)-nearest codeword problem ((n, m, w)-NCP) which takes as input a generating matrix for a binary linear code in m dimensions and rank n, and a target vector which is very close to the code (Hamming distance at most w), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $w/m \approx \log^2 n/n$, (n, m, w)-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $1/2-1/\mathrm{poly}(n)$. Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that (n, m, w)-NCP with the aforementioned parameters lies in the complexity class $\mathrm{Search}\text{-}\mathcal{BPP}^{\mathcal{SZK}}$ (i.e. reducible to a problem that has a statistical zero knowledge protocol) implying that it is unlikely to be $\mathcal{NP}$-hard. We then show that the hardness of LPN with very low noise rate $\log^2(n)/n$ implies the existence of collision resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $\mathcal{BPP}^{\mathcal{SZK}}$).
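A tiny brute-force Python instance makes the NCP definition concrete; the parameters are toy values, and the exhaustive search is only a stand-in for the oracle reduction in the paper.

```python
# Toy brute-force instance of the (n, m, w)-nearest codeword problem (NCP):
# given a generating matrix G for a rank-n binary code in m dimensions and
# a target within Hamming distance w of the code, find the nearest codeword.
import itertools, random

n, m, w = 4, 12, 1                       # tiny parameters; NCP is hard at scale
rng = random.Random(7)
G = [[rng.getrandbits(1) for _ in range(m)] for _ in range(n)]

def encode(msg):
    return tuple(sum(msg[i] & G[i][j] for i in range(n)) % 2 for j in range(m))

def nearest_codeword(target):
    """Exhaust all 2^n messages; the paper's reduction replaces this brute
    force with calls to an LPN distinguisher."""
    return min((encode(msg) for msg in itertools.product([0, 1], repeat=n)),
               key=lambda c: sum(a ^ b for a, b in zip(c, target)))

c = encode((1, 0, 1, 1))
noisy = list(c); noisy[5] ^= 1           # target at distance w = 1 from the code
best = nearest_codeword(tuple(noisy))
assert sum(a ^ b for a, b in zip(best, noisy)) <= w
```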
2019
CRYPTO
Non-malleable Codes for Decision Trees
Marshall Ball Siyao Guo Daniel Wichs
We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by decision trees of depth $d = n^{1/4-o(1)}$. In particular, each bit of the tampered codeword is set arbitrarily after adaptively reading up to d arbitrary locations within the original codeword. Prior to this work, no efficient unconditional non-malleable codes were known for decision trees beyond depth $O(\log^2 n)$. Our result also yields efficient, unconditional non-malleable codes that are $\exp(-n^{\Omega(1)})$-secure against constant-depth circuits of $\exp(n^{\Omega(1)})$-size. Prior work of Chattopadhyay and Li (STOC 2017) and Ball et al. (FOCS 2018) only provides protection against $\exp(O(\log^2 n))$-size circuits with $\exp(-O(\log^2 n))$-security. We achieve our result through simple non-malleable reductions of decision tree tampering to split-state tampering. As an intermediary, we give a simple and generic reduction of leakage-resilient split-state tampering to split-state tampering with improved parameters. Prior work of Aggarwal et al. (TCC 2015) only provides a reduction to split-state non-malleable codes with decoders that exhibit particular properties.
2019
CRYPTO
On the Plausibility of Fully Homomorphic Encryption for RAMs 📺
We initiate the study of fully homomorphic encryption for RAMs (RAM-FHE). This is a public-key encryption scheme where, given an encryption of a large database D, anybody can efficiently compute an encryption of P(D) for an arbitrary RAM program P. The running time over the encrypted data should be as close as possible to the worst-case running time of P, which may be sublinear in the data size.

A central difficulty in constructing a RAM-FHE scheme is hiding the sequence of memory addresses accessed by P. This is particularly problematic because an adversary may homomorphically evaluate many programs over the same ciphertext, therefore effectively “rewinding” any mechanism for making memory accesses oblivious.

We identify a necessary prerequisite towards constructing RAM-FHE that we call rewindable oblivious RAM (rewindable ORAM), which provides security even in this strong adversarial setting. We show how to construct rewindable ORAM using symmetric-key doubly efficient PIR (SK-DEPIR) (Canetti-Holmgren-Richelson, Boyle-Ishai-Pass-Wootters: TCC ’17). We then show how to use rewindable ORAM, along with virtual black-box (VBB) obfuscation for specific circuits, to construct RAM-FHE. The latter primitive can be heuristically instantiated using existing indistinguishability obfuscation candidates. Overall, we obtain a RAM-FHE scheme where the multiplicative overhead in running time is polylogarithmic in the database size N. Our basic scheme is single-hop, but we also extend it to obtain multi-hop RAM-FHE with overhead $N^\epsilon$ for arbitrarily small $\epsilon > 0$.

We view our work as the first evidence that RAM-FHE is likely to exist.
2019
CRYPTO
Adaptively Secure MPC with Sublinear Communication Complexity 📺
Ran Cohen Abhi Shelat Daniel Wichs
A central challenge in the study of MPC is to balance between security guarantees, hardness assumptions, and resources required for the protocol. In this work, we study the cost of tolerating adaptive corruptions in MPC protocols under various corruption thresholds. In the strongest setting, we consider adaptive corruptions of an arbitrary number of parties (potentially all) and achieve the following results:

- A two-round secure function evaluation (SFE) protocol in the CRS model, assuming LWE and indistinguishability obfuscation (iO). The communication, the CRS size, and the online computation are sublinear in the size of the function. The iO assumption can be replaced by secure erasures. Previous results required either the communication or the CRS size to be polynomial in the function size.

- Under the same assumptions, we construct a “Bob-optimized” 2PC (where Alice talks first, Bob second, and Alice learns the output). That is, the communication complexity and total computation of Bob are sublinear in the function size and in Alice’s input size. We prove impossibility of “Alice-optimized” protocols.

- Assuming LWE, we bootstrap adaptively secure NIZK arguments to achieve proof size sublinear in the circuit size of the NP relation.

On a technical level, our results are based on laconic function evaluation (LFE) (Quach, Wee, and Wichs, FOCS ’18) and shed light on an interesting duality between LFE and FHE.

Next, we analyze adaptive corruptions of all-but-one of the parties and show a two-round SFE protocol in the threshold PKI model (where keys of a threshold FHE scheme are pre-shared among the parties) with communication complexity sublinear in the circuit size, assuming LWE and NIZK. Finally, we consider the honest-majority setting, and show a two-round SFE protocol with guaranteed output delivery under the same constraints.
2019
CRYPTO
New Constructions of Reusable Designated-Verifier NIZKs 📺
Non-interactive zero-knowledge arguments (NIZKs) for $\mathsf{NP}$ are an important cryptographic primitive, but we currently only have instantiations under a few specific assumptions. Notably, we are missing constructions from the learning with errors (LWE) assumption, the Diffie-Hellman (CDH/DDH) assumption, and the learning parity with noise (LPN) assumption.

In this paper, we study a relaxation of NIZKs to the designated-verifier setting (DV-NIZK), where a trusted setup generates a common reference string together with a secret key for the verifier. We want reusable schemes, which allow the verifier to reuse the secret key to verify many different proofs, and soundness should hold even if the malicious prover learns whether various proofs are accepted or rejected. Such reusable DV-NIZKs were recently constructed under the CDH assumption, but it was open whether they can also be constructed under LWE or LPN.

We also consider an extension of reusable DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK). In this setting, the only trusted setup consists of a common random string. However, there is also an additional untrusted setup in which the verifier chooses a public/secret key needed to generate/verify proofs, respectively. We require that zero-knowledge holds even if the public key is chosen maliciously by the verifier. Such reusable MDV-NIZKs were recently constructed under the “one-more CDH” assumption, but constructions under CDH/LWE/LPN remained open.

In this work, we give new constructions of (reusable) DV-NIZKs and MDV-NIZKs using generic primitives that can be instantiated under CDH, LWE, or LPN.
2019
CRYPTO
Broadcast and Trace with $N^{\varepsilon }$ Ciphertext Size from Standard Assumptions 📺
We construct a broadcast and trace scheme (also known as trace and revoke or broadcast, trace and revoke) with $N$ users, where the ciphertext size can be made as low as $O(N^\varepsilon)$, for any arbitrarily small constant $\varepsilon > 0$. This improves on the prior best construction of broadcast and trace under standard assumptions by Boneh and Waters (CCS ’06), which had ciphertext size $O(N^{1/2})$. While that construction relied on bilinear maps, ours uses a combination of the learning with errors (LWE) assumption and bilinear maps.

Recall that, in both broadcast encryption and traitor-tracing schemes, there is a collection of $N$ users, each of which gets a different secret key $\mathsf{sk}_i$. In broadcast encryption, it is possible to create ciphertexts targeted to a subset $S \subseteq [N]$ of the users such that only those users can decrypt it correctly. In a traitor-tracing scheme, if a subset of users gets together and creates a decoder box $D$ that is capable of decrypting ciphertexts, then it is possible to trace at least one of the users responsible for creating $D$. A broadcast and trace scheme intertwines the two properties, in a way that results in more than just their union. In particular, it ensures that if a decoder $D$ is able to decrypt ciphertexts targeted toward a set $S$ of users, then it should be possible to trace one of the users in the set $S$ responsible for creating $D$, even if other users outside of $S$ also participated. We now have essentially optimal broadcast encryption (Boneh, Gentry, Waters CRYPTO ’05) under bilinear maps and traitor tracing (Goyal, Koppula, Waters STOC ’18) under LWE, where the ciphertext size is at most poly-logarithmic in $N$. The main contribution of our paper is to carefully combine LWE and bilinear-map based components, and get them to interact with each other, to achieve broadcast and trace.
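The syntax the abstract refers to can be summarized by the following Python interface sketch; the class and method names are hypothetical and the bodies are intentionally left abstract.

```python
# Interface sketch of a broadcast-and-trace scheme (names are hypothetical).
from typing import Set, Callable, Any

class BroadcastAndTrace:
    def setup(self, n: int) -> Any:
        """Return a public key pk, user keys sk_1..sk_n, and a tracing key tk."""

    def enc(self, pk, targets: Set[int], msg: bytes) -> Any:
        """Ciphertext of size O(n**eps), decryptable exactly by users in `targets`."""

    def dec(self, sk_i, ct) -> bytes:
        """Recover msg if user i was in the target set of ct; fail otherwise."""

    def trace(self, tk, targets: Set[int], decoder: Callable) -> int:
        """Given oracle access to a decoder that works for `targets`,
        output the index of at least one traitor inside `targets`."""
```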
2019
JOFC
From Cryptomania to Obfustopia Through Secret-Key Functional Encryption
Functional encryption lies at the frontier of current research in cryptography; some variants have been shown sufficiently powerful to yield indistinguishability obfuscation (IO), while other variants have been constructed from standard assumptions such as LWE. Indeed, most variants have been classified as belonging to either the former or the latter category. However, one mystery that has remained is the case of secret-key functional encryption with an unbounded number of keys and ciphertexts. On the one hand, this primitive is not known to imply anything outside of minicrypt, the land of secret-key cryptography, but, on the other hand, we do not know how to construct it without the heavy hammers in obfustopia.

In this work, we show that (subexponentially secure) secret-key functional encryption is powerful enough to construct indistinguishability obfuscation if we additionally assume the existence of (subexponentially secure) plain public-key encryption. In other words, secret-key functional encryption provides a bridge from cryptomania to obfustopia.

On the technical side, our result relies on two main components. As our first contribution, we show how to use secret-key functional encryption to get “exponentially efficient indistinguishability obfuscation” (XIO), a notion recently introduced by Lin et al. (PKC ’16) as a relaxation of IO. Lin et al. show how to use XIO and the LWE assumption to build IO. As our second contribution, we improve on this result by replacing its reliance on the LWE assumption with any plain public-key encryption scheme.

Lastly, we ask whether secret-key functional encryption can be used to construct public-key encryption itself and therefore take us all the way from minicrypt to obfustopia. A result of Asharov and Segev (FOCS ’15) shows that this is not the case under black-box constructions, even for exponentially secure functional encryption. We show, through a non-black-box construction, that subexponentially secure secret-key functional encryption indeed leads to public-key encryption. The resulting public-key encryption scheme, however, is at most quasi-polynomially secure, which is insufficient to take us to obfustopia.
2018
CRYPTO
Hardness of Non-interactive Differential Privacy from One-Way Functions 📺
A central challenge in differential privacy is to design computationally efficient non-interactive algorithms that can answer large numbers of statistical queries on a sensitive dataset. That is, we would like to design a differentially private algorithm that takes a dataset $D \in X^n$ consisting of some small number of elements $n$ from some large data universe $X$, and efficiently outputs a summary that allows a user to efficiently obtain an answer to any query in some large family $Q$.

Ignoring computational constraints, this problem can be solved even when $X$ and $Q$ are exponentially large and $n$ is just a small polynomial; however, all algorithms with remotely similar guarantees run in exponential time. There have been several results showing that, under the strong assumption of indistinguishability obfuscation, no efficient differentially private algorithm exists when $X$ and $Q$ can be exponentially large. However, there are no strong separations between information-theoretic and computationally efficient differentially private algorithms under any standard complexity assumption.

In this work we show that, if one-way functions exist, there is no general-purpose differentially private algorithm that works when $X$ and $Q$ are exponentially large and $n$ is an arbitrary polynomial. In fact, we show that this result holds even if $X$ is just subexponentially large (assuming only polynomially-hard one-way functions). This result solves an open problem posed by Vadhan in his recent survey [52].
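To fix ideas, the code below states the query-release task in Python and answers a small, fixed query family with the standard Laplace mechanism; this efficient approach only scales to modest $|Q|$, which is exactly why the exponentially-large-$Q$ regime studied above is hard. All names are illustrative.

```python
# The query-release task: given D in X^n, answer each statistical query
# q: X -> {0,1} by its dataset average. The Laplace mechanism below is
# computationally efficient but only handles modestly sized query families.
import random

def true_answer(D, q):
    return sum(q(x) for x in D) / len(D)

def laplace_answers(D, queries, eps=1.0):
    # Each average has sensitivity 1/n; basic composition over |Q| queries.
    scale = len(queries) / (eps * len(D))
    def lap():  # Laplace(scale) = random sign times Exp(mean = scale)
        return random.choice([-1, 1]) * random.expovariate(1 / scale)
    return [true_answer(D, q) + lap() for q in queries]

D = [random.randint(0, 99) for _ in range(1000)]      # data universe X = {0..99}
queries = [lambda x, t=t: int(x < t) for t in (25, 50, 75)]
print(laplace_answers(D, queries))
```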
2018
TCC
Encrypted Computation
Daniel Wichs
2018
PKC
Multi-Key Searchable Encryption, Revisited
We consider a setting where users store their encrypted documents on a remote server and can selectively share documents with each other. A user should be able to perform keyword searches over all the documents she has access to, including the ones that others shared with her. The contents of the documents, and the search queries, should remain private from the server.

This setting was considered by Popa et al. (NSDI ’14), who developed a new cryptographic primitive called Multi-Key Searchable Encryption (MKSE), together with an instantiation and an implementation within a system called Mylar, to address this goal. Unfortunately, Grubbs et al. (CCS ’16) showed that the proposed MKSE definition fails to provide basic security guarantees, and that the Mylar system is susceptible to simple attacks. Most notably, if a malicious Alice colludes with the server and shares a document with an honest Bob, then the privacy of all of Bob’s search queries is lost.

In this work we revisit the notion of MKSE and propose a new strengthened definition that rules out the above attacks. We then construct MKSE schemes meeting our definition. We first give a simple and efficient construction using only pseudorandom functions. This construction achieves our strong security definition at the cost of increasing the server storage overhead relative to Mylar, essentially replicating the document each time it is shared. We also show that high server storage overhead is not inherent, by giving an alternate (albeit impractical) construction that manages to avoid it using obfuscation.
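To illustrate the flavor of the simple PRF-based approach mentioned above, here is a hedged Python sketch in which each (document, user) share gets its own tag set on the server, mirroring the replicated storage; HMAC-SHA256 stands in for the PRF, and all names (share, search, server_index) are hypothetical rather than taken from the paper.

```python
# Sketch of a PRF-based multi-key searchable encryption idea: every time a
# document is shared with a user, the server stores a fresh set of keyword
# tags under a key known to that user (hence the replicated storage).
import hmac, hashlib, os

def prf(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

server_index = {}            # (doc_id, user_id) -> set of keyword tags

def share(doc_id, user_id, keywords):
    k = os.urandom(32)       # per-(document, user) key, handed to the user
    server_index[(doc_id, user_id)] = {prf(k, w) for w in keywords}
    return k

def search(user_id, user_keys, word):
    # The user derives one token per document she can access.
    return [doc for (doc, uid), tags in server_index.items()
            if uid == user_id and prf(user_keys[doc], word) in tags]

alice_keys = {"doc1": share("doc1", "alice", {"crypto", "oram"})}
print(search("alice", alice_keys, "oram"))     # -> ['doc1']
```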
2018
TCC
Traitor-Tracing from LWE Made Simple and Attribute-Based
A traitor-tracing scheme is a public-key encryption scheme for which there are many secret decryption keys. Any of these keys can decrypt a ciphertext; moreover, even if a coalition of users collude, put together their decryption keys, and attempt to create a new decryption key, there is an efficient algorithm to trace the new key to at least one of the colluders.

Recently, Goyal, Koppula and Waters (GKW, STOC ’18) provided the first traitor-tracing scheme from LWE with ciphertext and secret-key sizes that grow polynomially in $\log n$, where $n$ is the number of users. The main technical building block in their construction is a strengthening of (bounded collusion secure) secret-key functional encryption which they refer to as mixed functional encryption (FE).

In this work, we improve upon and extend the GKW traitor-tracing scheme:

- We provide simpler constructions of mixed FE schemes based on the LWE assumption. Our constructions improve upon the GKW construction in terms of expressiveness, modularity, and security.

- We provide a construction of attribute-based traitor tracing for all circuits based on the LWE assumption.
2018
TCC
Is There an Oblivious RAM Lower Bound for Online Reads?
Mor Weiss Daniel Wichs
Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), can be used to read and write to memory in a way that hides which locations are being accessed. The best known ORAM schemes have an $O(\log n)$ overhead per access, where $n$ is the data size. The work of Goldreich and Ostrovsky gave a lower bound showing that this is optimal for ORAM schemes that operate in a “balls and bins” model, where memory blocks can only be shuffled between different locations but not manipulated otherwise. The lower bound even extends to weaker settings such as offline ORAM, where all of the accesses to be performed need to be specified ahead of time, and read-only ORAM, which only allows reads but not writes. But can we get lower bounds for general ORAM, beyond “balls and bins”?

The work of Boyle and Naor (ITCS ’16) shows that this is unlikely in the offline setting. In particular, they construct an offline ORAM with $o(\log n)$ overhead assuming the existence of small sorting circuits. Although we do not have instantiations of the latter, ruling them out would require proving new circuit lower bounds. On the other hand, the recent work of Larsen and Nielsen (CRYPTO ’18) shows that there indeed is an $\Omega(\log n)$ lower bound for general online ORAM.

This still leaves the question open for online read-only ORAM, or for read/write ORAM where we want very small overhead for the read operations. In this work, we show that a lower bound in these settings is also unlikely. In particular, our main result is a construction of online ORAM where reads (but not writes) have an $o(\log n)$ overhead, assuming the existence of small sorting circuits as well as very good locally decodable codes (LDCs). Although we do not have instantiations of either of these with the required parameters, ruling them out is beyond current lower bounds.
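As a baseline for the overhead numbers discussed above, here is the simplest possible ORAM in Python: a linear scan that touches every cell on each access, making the physical access pattern trivially independent of the logical address at an $O(n)$ cost. Encryption of cell contents, which a real ORAM would also refresh on every access, is omitted from this sketch.

```python
# Trivial linear-scan ORAM: every access reads and rewrites every cell, so
# the server sees the same physical pattern regardless of the logical
# address. Overhead is O(n) per access, versus O(log n) for real schemes.
class LinearScanORAM:
    def __init__(self, data):
        self.mem = list(data)

    def access(self, addr, write_val=None):
        out = None
        for i in range(len(self.mem)):        # identical scan for every address
            v = self.mem[i]
            if i == addr:
                out = v
                if write_val is not None:
                    v = write_val
            self.mem[i] = v                   # dummy write to every cell
        return out

oram = LinearScanORAM([10, 20, 30])
assert oram.access(1) == 20
oram.access(2, write_val=99)
assert oram.access(2) == 99
```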
2018
TCC
Watermarking PRFs Under Standard Assumptions: Public Marking and Security with Extraction Queries
A software watermarking scheme can embed some information called a mark into a program while preserving its functionality. No adversary can remove the mark without damaging the functionality of the program. Cohen et al. (STOC ’16) gave the first positive results for watermarking, showing how to watermark certain pseudorandom function (PRF) families using indistinguishability obfuscation (iO). Their scheme has a secret marking procedure to embed marks in programs and a public extraction procedure to extract the marks from programs; security holds even against an attacker that has access to a marking oracle. Kim and Wu (CRYPTO ’17) later constructed a PRF watermarking scheme under only the LWE assumption. In their scheme, both the marking and extraction procedures are secret, but security only holds against an attacker with access to a marking oracle but not an extraction oracle. In fact, it is possible to completely break the security of the latter scheme using extraction queries, which is a significant limitation in any foreseeable application.

In this work, we construct a new PRF watermarking scheme with the following properties:

- The marking procedure is public, and therefore anyone can embed marks in PRFs from the family. Previously we had no such construction even using obfuscation.

- The extraction key is secret, but marks remain unremovable even if the attacker has access to an extraction oracle. Previously we had no such construction under standard assumptions.

Our scheme is simple, uses generic components, and can be instantiated under many different assumptions such as DDH, Factoring, or LWE. The above benefits come with one caveat compared to prior work: the PRF family that we can watermark depends on the public parameters of the watermarking scheme, and the watermarking authority has a secret key which can break the security of all of the PRFs in the family. Since the watermarking authority is usually assumed to be trusted, this caveat appears to be acceptable.
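The properties listed above translate into the following interface sketch; the class and method names are hypothetical and the bodies are deliberately left abstract.

```python
# Interface sketch of a publicly-markable PRF watermarking scheme, matching
# the syntax described above (all names are hypothetical).
class WatermarkablePRF:
    def setup(self):
        """Return public parameters pp (marking is public) and a secret
        extraction key xk held by the watermarking authority."""

    def keygen(self, pp):
        """Return a PRF key k from the watermarkable family, which in this
        scheme depends on the public parameters pp."""

    def mark(self, pp, k, msg: bytes):
        """Anyone can embed msg, producing a circuit close to PRF(k, .)."""

    def extract(self, xk, circuit):
        """Holder of xk recovers the embedded msg; unremovability holds even
        against adversaries with oracle access to this procedure."""
```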
2017
CRYPTO
2017
TCC
2017
TCC
2017
JOFC
2016
EUROCRYPT
2016
EUROCRYPT
2016
EUROCRYPT
2016
CRYPTO
2016
CRYPTO
2016
TCC
2016
TCC
2016
TCC
2016
TCC
2016
TCC
2016
JOFC
2015
TCC
2015
PKC
2015
ASIACRYPT
2014
CRYPTO
2014
CRYPTO
2014
EUROCRYPT
2014
EUROCRYPT
2014
EUROCRYPT
2013
TCC
2013
CRYPTO
2013
ASIACRYPT
2013
ASIACRYPT
2013
EUROCRYPT
2013
EUROCRYPT
2013
JOFC
Fully Leakage-Resilient Signatures
Elette Boyle Gil Segev Daniel Wichs
A signature scheme is fully leakage resilient (Katz and Vaikuntanathan, ASIACRYPT ’09) if it is existentially unforgeable under an adaptive chosen-message attack even in a setting where an adversary may obtain bounded (yet arbitrary) leakage information on all intermediate values that are used throughout the lifetime of the system. This is a strong and meaningful notion of security that captures a wide range of side-channel attacks.

One of the main challenges in constructing fully leakage-resilient signature schemes is dealing with leakage that may depend on the random bits used by the signing algorithm, and constructions of such schemes are known only in the random-oracle model. Moreover, even in the random-oracle model, known schemes are only resilient to leakage of less than half the length of their signing key.

In this paper we construct the first fully leakage-resilient signature schemes without random oracles. We present a scheme that is resilient to any leakage of length $(1-o(1))L$ bits, where $L$ is the length of the signing key. Our approach relies on generic cryptographic primitives, and at the same time admits rather efficient instantiations based on specific number-theoretic assumptions. In addition, we show that our approach extends to the continual-leakage model, recently introduced by Dodis, Haralambiev, Lopez-Alt and Wichs (FOCS ’10), and by Brakerski, Tauman Kalai, Katz and Vaikuntanathan (FOCS ’10). In this model the signing key is allowed to be refreshed, while its corresponding verification key remains fixed, and the amount of leakage is assumed to be bounded only in between any two successive key refreshes.
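A minimal harness for the bounded-leakage experiment described above might look as follows in Python; the class name and accounting are illustrative, and in the full game the secret state would also accumulate all signing randomness, not just the key.

```python
# Harness for the bounded-leakage game: the adversary may request arbitrary
# leakage functions of the full secret state, as long as the total number of
# leaked bits stays below a bound close to the key length.
class LeakageGame:
    def __init__(self, sk: bytes, bound_bits: int):
        self.state = sk              # would also grow with signing randomness
        self.leaked_bits = 0
        self.bound_bits = bound_bits

    def leak(self, f):
        out = f(self.state)          # any efficiently computable function
        self.leaked_bits += 8 * len(out)
        if self.leaked_bits > self.bound_bits:
            raise RuntimeError("leakage budget exhausted")
        return out

# Budget of (1 - o(1))L flavor: here 90% of a 32-byte key.
game = LeakageGame(sk=b"\x00" * 32, bound_bits=int(0.9 * 32 * 8))
first_byte = game.leak(lambda s: s[:1])    # adversary learns 8 bits
```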
2012
TCC
2012
EUROCRYPT
2012
EUROCRYPT
2011
TCC
2011
CRYPTO
2011
EUROCRYPT
2010
TCC
2010
ASIACRYPT
2010
EUROCRYPT
2009
TCC
2009
TCC
2009
CRYPTO
2009
CRYPTO
2008
EUROCRYPT
2008
EUROCRYPT

Program Committees

Crypto 2022
Eurocrypt 2021
TCC 2020
Crypto 2018
Eurocrypt 2017
TCC 2017
TCC 2015
Asiacrypt 2014
PKC 2014
Crypto 2013
TCC 2012

Coauthors

Shweta Agrawal (2)
Joël Alwen (3)
Gilad Asharov (1)
Marshall Ball (1)
Allison Bishop (1)
Nir Bitansky (4)
Elette Boyle (2)
Zvika Brakerski (1)
Ran Canetti (1)
David Cash (2)
Suvradip Chakraborty (2)
Yilei Chen (1)
Ran Cohen (2)
Ronald Cramer (1)
Dana Dachman-Soled (1)
Ivan Damgård (2)
Lalita Devadas (1)
Srinivas Devadas (1)
Yevgeniy Dodis (16)
Fangqi Dong (1)
Nico Döttling (1)
Stefan Dziembowski (2)
Sebastian Faust (1)
Serge Fehr (1)
Christopher W. Fletcher (1)
Juan A. Garay (1)
Sanjam Garg (3)
Rosario Gennaro (1)
Craig Gentry (2)
Shafi Goldwasser (1)
Rishab Goyal (1)
Jiaxin Guan (2)
Siyao Guo (1)
Mohammad Hajiabadi (1)
Shai Halevi (4)
Ariel Hamlin (3)
Zihan Hao (1)
Kristiyan Haralambiev (1)
Carmit Hazay (2)
Brett Hemenway (1)
Dennis Hofheinz (1)
Justin Holmgren (2)
Zahra Jafargholi (5)
Abhishek Jain (3)
Aayush Jain (1)
Yael Tauman Kalai (2)
Chethan Kamath (2)
Harish Karthikeyan (1)
Tomasz Kazana (2)
Eike Kiltz (1)
Saleet Klein (1)
Karen Klein (2)
Ilan Komargodski (1)
Lucas Kowalczyk (1)
Stephan Krenn (1)
Alptekin Küpçü (2)
Xin Li (1)
Huijia Lin (1)
Minghao Liu (1)
Alex Lombardi (3)
Adriana López-Alt (5)
Steve Lu (1)
Ji Luo (1)
Vadim Lyubashevsky (2)
Fermi Ma (1)
Tal Malkin (1)
Daniel Masny (1)
Ethan Mook (2)
Tal Moran (2)
Pratyay Mukherjee (2)
Moni Naor (1)
Jesper Buus Nielsen (2)
Ryo Nishimaki (3)
Tatsuaki Okamoto (1)
Rafail Ostrovsky (3)
Carles Padró (1)
Omer Paneth (1)
Alain Passelègue (2)
Valerio Pastro (1)
Krzysztof Pietrzak (6)
Manoj Prabhakaran (2)
Willy Quach (11)
Rajmohan Rajaraman (1)
Vanishree Rao (1)
Mariana Raykova (1)
Ling Ren (1)
Ron D. Rothblum (3)
Alessandra Scafuro (2)
Gil Segev (3)
Adi Shamir (1)
Abhi Shelat (3)
Elaine Shi (1)
Noah Stephens-Davidowitz (1)
Eran Tromer (1)
LaKyah Tyner (2)
Jonathan Ullman (1)
Salil P. Vadhan (1)
Vinod Vaikuntanathan (8)
Marten van Dijk (1)
Mayank Varia (1)
Daniele Venturi (1)
Shabsi Walfish (1)
Brent Waters (5)
Hoeteck Wee (6)
Mor Weiss (5)
David J. Wu (1)
Shota Yamada (1)
Mark Zhandry (3)
Hong-Sheng Zhou (1)
Giorgos Zirdelis (1)