International Association for Cryptologic Research


CryptoDB

Daniel Wichs

Publications

Year
Venue
Title
2021
TCC
Succinct LWE Sampling, Random Polynomials, and Obfuscation
We present a construction of indistinguishability obfuscation (iO) that relies on the learning with errors (LWE) assumption together with a new notion of succinctly sampling pseudo-random LWE samples. We then present a candidate LWE sampler whose security is related to the hardness of solving systems of polynomial equations. Our construction improves on the recent iO candidate of Wee and Wichs (Eurocrypt 2021) in two ways: first, we show that a much weaker and simpler notion of LWE sampling suffices for iO; and secondly, our candidate LWE sampler is secure based on a compactly specified and falsifiable assumption about random polynomials, with a simple error distribution that facilitates cryptanalysis.
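As background for the sampling notion above (standard LWE, not a detail from this abstract; notation ours): an LWE sample is a pair of the following form, and a succinct sampler must produce many pseudo-random samples of this form from a description much shorter than the samples themselves.

```latex
% Standard LWE samples (background; notation ours):
\[
  (\mathbf{a}_i,\; b_i = \langle \mathbf{a}_i, \mathbf{s} \rangle + e_i \bmod q),
  \qquad \mathbf{a}_i \leftarrow \mathbb{Z}_q^n, \quad e_i \leftarrow \chi,
\]
% for a fixed secret s and a small-error distribution chi.
```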
2021
EUROCRYPT
Candidate Obfuscation via Oblivious LWE Sampling
Hoeteck Wee, Daniel Wichs
We present a new, simple candidate construction of indistinguishability obfuscation (iO). Our scheme is inspired by lattices and learning-with-errors (LWE) techniques, but we are unable to prove security under a standard assumption. Instead, we formulate a new falsifiable assumption under which the scheme is secure. Furthermore, the scheme plausibly achieves post-quantum security. Our construction is based on the recent "split FHE" framework of Brakerski, Döttling, Garg, and Malavolta (EUROCRYPT '20), and we provide a new instantiation of this framework.

As a first step, we construct an iO scheme that is provably secure assuming that LWE holds and that it is possible to obliviously generate LWE samples without knowing the corresponding secrets. We define a precise notion of oblivious LWE sampling that suffices for the construction. It is known how to obliviously sample from any distribution (in a very strong sense) using iO, and our result provides a converse, showing that the ability to obliviously sample from the specific LWE distribution (in a much weaker sense) already implies iO.

As a second step, we give a heuristic construction of oblivious LWE sampling. At a very high level, we do this by homomorphically generating pseudorandom LWE samples using an encrypted pseudorandom function.
2021
CRYPTO
Limits on the Adaptive Security of Yao’s Garbling
Yao’s garbling scheme is one of the most fundamental cryptographic constructions. Lindell and Pinkas (Journal of Cryptology 2009) gave a formal proof of security in the selective setting, assuming secure symmetric-key encryption (and hence one-way functions). This was followed by results, both positive and negative, concerning its security in the stronger, adaptive setting. Applebaum et al. (Crypto 2013) showed that it cannot satisfy adaptive security as is, due to a simple incompressibility argument. Jafargholi and Wichs (TCC 2017) considered a natural adaptation of Yao’s scheme that circumvents this negative result, and proved that it is adaptively secure, at least for shallow circuits. In particular, they showed that for the class of circuits of depth d, the loss in security is at most exponential in d.

The above results all concern the simulation-based notion of security. In this work, we show that the upper bound of Jafargholi and Wichs is more or less optimal in a strong sense. As our main result, we show that there exists a family of Boolean circuits, one for each depth d ∈ N, such that any black-box reduction proving the adaptive indistinguishability-security of the natural adaptation of Yao’s scheme from any symmetric-key encryption has to lose a factor that is sub-exponential in d. Since indistinguishability is a weaker notion than simulation, our bound also applies to adaptive simulation.

To establish our results, we build on the recent approach of Kamath et al. (Eprint 2021), which uses pebbling lower bounds in conjunction with oracle separations to prove fine-grained lower bounds on the loss in cryptographic security.
2021
CRYPTO
Targeted Lossy Functions and Applications
Willy Quach, Brent Waters, Daniel Wichs
Lossy trapdoor functions, introduced by Peikert and Waters (STOC '08), can be initialized in one of two indistinguishable modes: in injective mode, the function preserves all information about its input, and can be efficiently inverted given a trapdoor, while in lossy mode, the function loses some information about its input. Such functions have found countless applications in cryptography, and can be constructed from a variety of Cryptomania assumptions.

In this work, we introduce targeted lossy functions (TLFs), which relax lossy trapdoor functions along two orthogonal dimensions. Firstly, they do not require an inversion trapdoor in injective mode. Secondly, the lossy mode of the function is initialized with some target input, and the function is only required to lose information about this particular target. The injective and lossy modes should be indistinguishable even given the target. We construct TLFs from Minicrypt assumptions, namely, injective pseudorandom generators, or even one-way functions under a natural relaxation of injectivity. We then generalize TLFs to incorporate branches, and construct all-injective-but-one and all-lossy-but-one variants.

We show a wide variety of applications of targeted lossy functions. In several cases, we get the first Minicrypt constructions of primitives that were previously only known under Cryptomania assumptions. Our applications include:

* Pseudo-entropy functions from one-way functions.
* Deterministic leakage-resilient message-authentication codes and improved leakage-resilient symmetric-key encryption from one-way functions.
* Extractors for extractor-dependent sources from one-way functions.
* Selective-opening secure symmetric-key encryption from one-way functions.
* A new construction of CCA PKE from (exponentially secure) trapdoor functions and injective pseudorandom generators.

We also discuss a fascinating connection to distributed point functions.
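To fix the syntax in mind, here is a minimal, hypothetical interface sketch for a TLF; all names and types are illustrative, not from the paper.

```python
# Hypothetical TLF interface sketch; names and types are illustrative only.

def gen_injective() -> bytes:
    """Injective mode: the evaluated function retains all information
    about its input. No inversion trapdoor is required."""
    raise NotImplementedError  # sketch only

def gen_lossy(target: bytes) -> bytes:
    """Lossy mode: keyed to one target input; the function need only lose
    information about this particular target. Must be indistinguishable
    from injective mode even given `target`."""
    raise NotImplementedError  # sketch only

def evaluate(fk: bytes, x: bytes) -> bytes:
    """Evaluate the deterministic function described by the key fk on x."""
    raise NotImplementedError  # sketch only
```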
2021
TCC
Updatable Public Key Encryption in the Standard Model
Forward security (FS) ensures that corrupting the current secret key in the system preserves the privacy or integrity of the prior usages of the system. Achieving forward security is especially hard in the setting of public-key encryption (PKE), where time is divided into periods, and in each period the receiver derives the next-period secret key from their current secret key, while the public key stays constant. Indeed, all current constructions of FS-PKE are built from hierarchical identity-based encryption (HIBE) and are rather complicated.

Motivated by applications to secure messaging, recent works of Jost et al. (Eurocrypt’19) and Alwen et al. (CRYPTO’20) consider a natural relaxation of FS-PKE, which they term *updatable* PKE (UPKE). In this setting, the transition to the next period can be initiated by any sender, who can compute a special update ciphertext. This ciphertext directly produces the next-period public key and can be processed by the receiver to compute the next-period secret key. If done honestly, future (regular) ciphertexts produced with the new public key can be decrypted with the new secret key, but past such ciphertexts cannot be decrypted with the new secret key. Moreover, this is true even if all other previous-period updates were initiated by untrusted senders. Both papers also constructed a very simple UPKE scheme based on the CDH assumption in the random oracle model. However, they left open the question of building such schemes in the standard model, or based on other (e.g., post-quantum) assumptions, without using the heavy HIBE techniques.

In this work, we construct two efficient UPKE schemes in the standard model, based on the DDH and LWE assumptions, respectively. Somewhat interestingly, our constructions gain their efficiency (compared to prior FS-PKE schemes from the same assumptions) by using tools from the area of circular-secure and leakage-resilient public-key encryption schemes (rather than HIBE).
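A minimal interface sketch of the UPKE syntax described above (our naming, illustrative rather than the paper's API):

```python
# Minimal UPKE interface sketch; names are illustrative only.

def keygen():
    """Receiver: produce the period-0 key pair (pk, sk)."""
    raise NotImplementedError

def encrypt(pk, msg: bytes):
    """Any sender: regular ciphertext under the current public key."""
    raise NotImplementedError

def decrypt(sk, ctxt) -> bytes:
    raise NotImplementedError

def update_public(pk):
    """Any sender: produce (update_ctxt, next_pk), moving the system to
    the next period. Only update_ctxt is sent to the receiver."""
    raise NotImplementedError

def update_secret(sk, update_ctxt):
    """Receiver: derive next_sk from the update ciphertext. Past
    ciphertexts must remain undecryptable under next_sk, even if earlier
    updates came from untrusted senders."""
    raise NotImplementedError
```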
2020
EUROCRYPT
Two-Round Oblivious Transfer from CDH or LPN
We show a new general approach for constructing maliciously-secure two-round oblivious transfer (OT). Specifically, we provide a generic sequence of transformations to upgrade a very basic notion of two-round OT, which we call elementary OT, to UC-secure OT. We then give simple constructions of elementary OT under the Computational Diffie-Hellman (CDH) assumption or the Learning Parity with Noise (LPN) assumption, yielding the first constructions of malicious (UC-secure) two-round OT under these assumptions. Since two-round OT is complete for two-round 2-party and multi-party computation in the malicious setting, we also achieve the first constructions of the latter under these assumptions.
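For orientation, the generic shape of a two-round OT protocol, as a sketch (the names are ours; this is the syntax being upgraded, not the paper's elementary-OT construction):

```python
# Generic two-round OT message flow; names are illustrative only.

def receiver_round1(choice_bit: int):
    """Receiver -> Sender: first message, hiding the choice bit b."""
    raise NotImplementedError  # returns (msg1, state)

def sender_round2(msg1, m0: bytes, m1: bytes):
    """Sender -> Receiver: second message, from which only m_b is
    recoverable; it must reveal nothing about the other input."""
    raise NotImplementedError  # returns msg2

def receiver_output(state, msg2) -> bytes:
    """Receiver: recover m_b."""
    raise NotImplementedError
```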
2020
EUROCRYPT
Extracting Randomness from Extractor-Dependent Sources
We revisit the well-studied problem of extracting nearly uniform randomness from an arbitrary source of sufficient min-entropy. Strong seeded extractors solve this problem by relying on a public random seed, which is unknown to the source. Here, we consider a setting where the seed is reused over time and the source may depend on prior calls to the extractor with the same seed. Can we still extract nearly uniform randomness?

In more detail, we assume the seed is chosen randomly, but the source can make arbitrary oracle queries to the extractor with the given seed before outputting a sample. We require that the sample has entropy and differs from any of the previously queried values. The extracted output should look uniform even to a distinguisher that gets the seed. We consider two variants of the problem, depending on whether the source only outputs the sample, or whether it can also output some correlated public auxiliary information that preserves the sample's entropy. Our results are:

* Without Auxiliary Information: We show that every pseudo-random function (PRF) with a sufficiently high security level is a good extractor in this setting, even if the distinguisher is computationally unbounded. We further show that the source necessarily needs to be computationally bounded and that such extractors imply one-way functions.

* With Auxiliary Information: We construct secure extractors in this setting, as long as both the source and the distinguisher are computationally bounded. We give several constructions based on different intermediate primitives, yielding instantiations based on the DDH, DLIN, LWE or DCR assumptions. On the negative side, we show that one cannot prove security against computationally unbounded distinguishers in this setting under any standard assumption via a black-box reduction. Furthermore, even when restricting to computationally bounded distinguishers, we show that there exist PRFs that are insecure as extractors in this setting and that a large class of constructions cannot be proven secure via a black-box reduction from standard assumptions.
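Schematically, the security experiment described above looks as follows (our notation, rendering the abstract's game):

```latex
% Extractor-dependent extraction game (notation ours). The source S gets
% oracle access to Ext(seed, .) before producing its sample:
\[
  \mathsf{seed} \leftarrow \{0,1\}^d, \qquad
  (x, \mathsf{aux}) \leftarrow \mathcal{S}^{\mathsf{Ext}(\mathsf{seed},\,\cdot\,)} .
\]
% Provided x has entropy and differs from all oracle queries, we require
\[
  \big(\mathsf{seed},\, \mathsf{aux},\, \mathsf{Ext}(\mathsf{seed}, x)\big)
  \;\approx\;
  \big(\mathsf{seed},\, \mathsf{aux},\, U\big),
\]
% where U is uniform; in the first variant, aux is empty.
```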
2020
EUROCRYPT
Statistical ZAPR Arguments from Bilinear Maps
Dwork and Naor (FOCS '00) defined ZAPs as 2-message witness-indistinguishable proofs that are public-coin. We relax this to "ZAPs with private Randomness" (ZAPRs), where the verifier can use private coins to sample the first message (independently of the statement being proved), but the proof must remain publicly verifiable given only the protocol transcript. In particular, ZAPRs are reusable, meaning that the first message can be reused for multiple proofs without compromising security.

Known constructions of ZAPs from trapdoor permutations or bilinear maps are only computationally WI (and statistically sound). Two recent results of Badrinarayanan-Fernando-Jain-Khurana-Sahai and Goyal-Jain-Jin-Malavolta [EUROCRYPT '20] construct the first statistical ZAP arguments, which are statistically WI (and computationally sound), from the quasi-polynomial LWE assumption. Here, we construct statistical ZAPR arguments from the quasi-polynomial decision-linear (DLIN) assumption on groups with a bilinear map. Our construction relies on a combination of several tools, including Groth-Ostrovsky-Sahai NIZK and NIWI [EUROCRYPT '06, CRYPTO '06, JACM '12], "sometimes-binding statistically hiding commitments" [Kalai-Khurana-Sahai, EUROCRYPT '18] and the "MPC-in-the-head" technique [Ishai-Kushilevitz-Ostrovsky-Sahai, STOC '07].
2020
PKC
Witness Maps and Applications
We introduce Witness Maps, a cryptographic notion of a proof system. A Unique Witness Map (UWM) deterministically maps all witnesses for an $\mathbf{NP}$ statement to a single representative witness, resulting in a computationally sound, deterministic-prover, non-interactive, witness-independent proof system. A relaxation of UWM, called Compact Witness Map (CWM), maps all the witnesses to a small number of witnesses, resulting in a "lossy" deterministic-prover, non-interactive proof system. We also define a Dual Mode Witness Map (DMWM), which adds an "extractable" mode to a CWM.

Our main construction is a DMWM for all $\mathbf{NP}$ relations, assuming sub-exponentially secure indistinguishability obfuscation ($i\mathcal{O}$), along with standard cryptographic assumptions. The DMWM construction relies on a CWM and a new primitive called Cumulative All-Lossy-But-One Trapdoor Functions (C-ALBO-TDF), both of which are in turn instantiated based on $i\mathcal{O}$ and other primitives. Our instantiation of a CWM is in fact a UWM; in turn, we show that a UWM implies Witness Encryption. Along the way to constructing UWM and C-ALBO-TDF, we also construct, from standard assumptions, Puncturable Digital Signatures and a new primitive called Cumulative Lossy Trapdoor Functions (C-LTDF). The former improves upon a construction of Bellare et al. (Eurocrypt 2016), who relied on sub-exponentially secure $i\mathcal{O}$ and sub-exponentially secure OWFs.

As an application of our constructions, we show how to use a DMWM to construct the first leakage and tamper-resilient signatures with a deterministic signer, thereby solving a decade-old open problem posed by Katz and Vaikuntanathan (Asiacrypt 2009), by Boyle, Segev and Wichs (Eurocrypt 2011), as well as by Faonio and Venturi (Asiacrypt 2016). Our construction achieves the optimal leakage rate of $1 - o(1)$.
2020
CRYPTO
Leakage-Resilient Key Exchange and Two-Seed Extractors
Xin Li, Fermi Ma, Willy Quach, Daniel Wichs
Can Alice and Bob agree on a uniformly random secret key without having any truly secret randomness to begin with? Here we consider a setting where Eve can get partial leakage on the internal state of both Alice and Bob individually before the protocol starts. They then run a protocol using their states without any additional randomness and need to agree on a shared key that looks uniform to Eve, even after observing the leakage and the protocol transcript. We focus on non-interactive (one round) key exchange (NIKE), where Alice and Bob send one message each without waiting for one another.

We first consider this problem in the symmetric-key setting, where the states of Alice and Bob include a shared secret as well as individual uniform randomness. However, since Eve gets leakage on these states, Alice and Bob need to perform privacy amplification to derive a fresh secret key from them. Prior solutions require Alice and Bob to sample fresh uniform randomness during the protocol, while in our setting all of their randomness was already part of their individual states a priori and was therefore subject to leakage. We show an information-theoretic solution to this problem using a novel primitive that we call a two-seed extractor, which we in turn construct by drawing a connection to communication-complexity lower bounds in the number-on-forehead (NOF) model.

We then turn to studying this problem in the public-key setting, where the states of Alice and Bob consist of independent uniform randomness. Unfortunately, we give a black-box separation showing that leakage-resilient NIKE in this setting cannot be proven secure via a black-box reduction under any game-based assumption when the leakage is super-logarithmic. This includes virtually all assumptions used in cryptography, and even very strong assumptions such as indistinguishability obfuscation (iO). Nevertheless, we also provide positive results that get around the above separation:

- We show that every key exchange protocol (e.g., Diffie-Hellman) is secure when the leakage amount is logarithmic, or potentially even greater if we assume sub-exponential security without leakage.

- We notice that the black-box separation does not extend to schemes in the common reference string (CRS) model, or to schemes with preprocessing, where Alice and Bob can individually pre-process their random coins to derive their secret state prior to leakage. We give a solution in the CRS model with preprocessing using bilinear maps. We also give solutions in just the CRS model alone (without preprocessing) or just with preprocessing (without a CRS), using iO and lossy functions.
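One natural way to write down the two-seed extractor guarantee implicit in the discussion above (our notation and an informal rendering, not necessarily the paper's exact definition):

```latex
% Informal two-seed extractor guarantee (notation ours): X_1, X_2 are the
% parties' individual random states; L_1, L_2 are bounded leakage
% functions applied to each state separately.
\[
  \big(L_1(X_1),\, L_2(X_2),\, \mathsf{2Ext}(X_1, X_2)\big)
  \;\approx\;
  \big(L_1(X_1),\, L_2(X_2),\, U\big).
\]
```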
2020
CRYPTO
Incompressible Encodings
Tal Moran, Daniel Wichs
An incompressible encoding can probabilistically encode some data $m$ into a codeword $c$, which is not much larger. Anyone can decode $c$ to recover the original data $m$. However, $c$ cannot be efficiently compressed, even if the original data $m$ is given to the decompression procedure for free. In other words, $c$ is a representation of $m$, yet is computationally incompressible even given $m$. An incompressible encoding is composable if many encodings cannot be simultaneously compressed into anything sufficiently smaller than their concatenation.

A recent work of Damgård, Ganesh and Orlandi (CRYPTO '19) defined a variant of incompressible encodings and gave an application to "proofs of replicated storage". They constructed incompressible encodings in an ideal permutation model over a structured domain, but it was left open whether they can be constructed under standard assumptions, or even in the more basic random-oracle model. In this work, we give new constructions, negative results and applications of incompressible encodings:

* We construct incompressible encodings in the common random string (CRS) model under the Decisional Composite Residuosity (DCR) or Learning with Errors (LWE) assumptions. However, the construction has several drawbacks: (1) it is not composable, (2) it only achieves selective security, and (3) the CRS is as long as the data $m$.

* We leverage the above construction to also get a scheme in the random-oracle model, under the same assumptions, that avoids all of the above drawbacks. Furthermore, it is significantly more efficient than the prior ideal-model construction.

* We give black-box separations, showing that incompressible encodings in the plain model cannot be proven secure under any standard hardness assumption, and incompressible encodings in the CRS model must inherently suffer from all of the drawbacks above.

* We give a new application to "big-key cryptography in the bounded-retrieval model", where secret keys are made intentionally huge to make them hard to exfiltrate. Using incompressible encodings, we can get all the security benefits of a big key without wasting storage space, by having the key encode useful data.
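In symbols, the requirements described above read roughly as follows (our notation):

```latex
% Incompressible encoding, schematically (notation ours):
\[
  c \leftarrow \mathsf{Enc}(m), \qquad \mathsf{Dec}(c) = m, \qquad
  |c| \approx |m|,
\]
% and for every efficient compressor/decompressor pair, even given m:
\[
  \Pr\big[\, \mathsf{Decomp}(\mathsf{Comp}(c, m),\, m) = c \,\big] \approx 0
  \quad \text{whenever } |\mathsf{Comp}(c, m)| \ll |c| .
\]
```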
2020
TCC
Optimal Broadcast Encryption from LWE and Pairings in the Standard Model
Broadcast Encryption with optimal parameters was a long-standing problem, whose first solution was provided in an elegant work by Boneh, Waters and Zhandry [BWZ14]. However, this work relied on multilinear maps of logarithmic degree, which is not considered a standard assumption. Recently, Agrawal and Yamada [AY20] improved this state of affairs by providing the first construction of optimal broadcast encryption from Bilinear Maps and Learning With Errors (LWE). However, their proof of security was in the generic bilinear group model.

In this work, we improve upon their result by providing a new construction and proof in the standard model. In more detail, we rely on the Learning With Errors (LWE) assumption and the Knowledge of OrthogonALity Assumption (KOALA) [BW19] on bilinear groups. Our construction combines three building blocks: a (computational) nearly linear secret sharing scheme with compact shares which we construct from LWE, an inner-product functional encryption scheme with special properties which is constructed from the bilinear Matrix Decision Diffie-Hellman (MDDH) assumption, and a certain form of hyperplane obfuscation, which is constructed using the KOALA assumption. While similar to that of Agrawal and Yamada, our construction provides a new understanding of how to decompose the construction into simpler, modular building blocks with concrete and easy-to-understand security requirements for each one. We believe this sheds new light on the requirements for optimal broadcast encryption, which may lead to new constructions in the future.
2019
EUROCRYPT
Private Anonymous Data Access
We consider a scenario where a server holds a huge database that it wants to make accessible to a large group of clients. After an initial setup phase, clients should be able to read arbitrary locations in the database while maintaining privacy (the server does not learn which locations are being read) and anonymity (the server does not learn which client is performing each read). This should hold even if the server colludes with a subset of the clients. Moreover, the run-time of both the server and the client during each read operation should be low, ideally only poly-logarithmic in the size of the database and the number of clients. We call this notion Private Anonymous Data Access (PANDA).

PANDA simultaneously combines aspects of Private Information Retrieval (PIR) and Oblivious RAM (ORAM). PIR has no initial setup, and allows anybody to privately and anonymously access a public database, but the server's run-time is linear in the data size. On the other hand, ORAM achieves poly-logarithmic server run-time, but requires an initial setup after which only a single client with a secret key can access the database. The goal of PANDA is to get the best of both worlds: allow many clients to privately and anonymously access the database as in PIR, while having an efficient server as in ORAM.

In this work, we construct bounded-collusion PANDA schemes, where the efficiency scales linearly with a bound on the number of corrupted clients that can collude with the server, but is otherwise poly-logarithmic in the data size and the total number of clients. Our solution relies on standard assumptions, namely the existence of fully homomorphic encryption, and combines techniques from both PIR and ORAM. We also extend PANDA to settings where clients can write to the database.
2019
EUROCRYPT
Reusable Designated-Verifier NIZKs for all NP from CDH
Non-interactive zero-knowledge proofs (NIZKs) are a fundamental cryptographic primitive. Despite a long history of research, we only know how to construct NIZKs under a few select assumptions, such as the hardness of factoring or using bilinear maps. Notably, there are no known constructions based on either the computational or decisional Diffie-Hellman (CDH/DDH) assumption without relying on a bilinear map.

In this paper, we study a relaxation of NIZKs in the designated verifier setting (DV-NIZK), in which the public common-reference string is generated together with a secret key that is given to the verifier in order to verify proofs. In this setting, we distinguish between one-time and reusable schemes, depending on whether they can be used to prove only a single statement or arbitrarily many statements. For reusable schemes, the main difficulty is to ensure that soundness continues to hold even when the malicious prover learns whether various proofs are accepted or rejected by the verifier. One-time DV-NIZKs are known to exist for general NP statements assuming only public-key encryption. However, prior to this work, we did not have any construction of reusable DV-NIZKs for general NP statements from any assumption under which we didn’t already also have standard NIZKs.

In this work, we construct reusable DV-NIZKs for general NP statements under the CDH assumption, without requiring a bilinear map. Our construction is based on the hidden-bits paradigm, which was previously used to construct standard NIZKs. We define a cryptographic primitive called a hidden-bits generator (HBG), along with a designated-verifier variant (DV-HBG), which modularly abstract out how to use this paradigm to get both standard NIZKs and reusable DV-NIZKs. We construct a DV-HBG scheme under the CDH assumption by relying on techniques from the Cramer-Shoup hash-proof system, and this yields our reusable DV-NIZK for general NP statements under CDH.

We also consider a strengthening of DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK) where the setup consists of an honestly generated common random string and the verifier then gets to choose his own (potentially malicious) public/secret key pair to generate/verify proofs. We construct MDV-NIZKs under the “one-more CDH” assumption without relying on bilinear maps.
2019
EUROCRYPT
Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing
We present a worst-case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst-case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst-case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires an extremely high noise rate of $1/2 - 1/\mathrm{poly}(n)$. Thus we can only show that "very hard" LPN is harder than some "very mildly hard" worst-case problem. We note that LPN with noise $1/2 - 1/\mathrm{poly}(n)$ already implies symmetric cryptography.

Specifically, we consider the (n, m, w)-nearest codeword problem ((n, m, w)-NCP), which takes as input a generating matrix for a binary linear code in m dimensions and rank n, and a target vector which is very close to the code (Hamming distance at most w), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $w/m \approx \log^2 n / n$, (n, m, w)-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $1/2 - 1/\mathrm{poly}(n)$.

Our proof relies on a smoothing lemma for codes which we show to have further implications: we show that (n, m, w)-NCP with the aforementioned parameters lies in the complexity class $\mathsf{Search}\text{-}\mathcal{BPP}^{\mathcal{SZK}}$ (i.e., reducible to a problem that has a statistical zero knowledge protocol), implying that it is unlikely to be $\mathcal{NP}$-hard. We then show that the hardness of LPN with very low noise rate $\log^2(n)/n$ implies the existence of collision-resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $\mathcal{BPP}^{\mathcal{SZK}}$).
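Concretely, the decoding problem discussed above can be written as follows (our notation):

```latex
% (n, m, w)-NCP, restated (notation ours): G generates a binary linear
% code of rank n in m dimensions; the target t is promised to be w-close.
\[
  G \in \mathbb{F}_2^{n \times m}, \qquad
  t = xG + e \ \text{ for some } x \in \mathbb{F}_2^{n},\ \mathrm{wt}(e) \le w
  \quad \Longrightarrow \quad \text{find the nearest codeword } xG .
\]
```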
2019
CRYPTO
Non-malleable Codes for Decision Trees
Marshall Ball, Siyao Guo, Daniel Wichs
We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by decision trees of depth $d = n^{1/4 - o(1)}$. In particular, each bit of the tampered codeword is set arbitrarily after adaptively reading up to d arbitrary locations within the original codeword. Prior to this work, no efficient unconditional non-malleable codes were known for decision trees beyond depth $O(\log^2 n)$.

Our result also yields efficient, unconditional non-malleable codes that are $\exp(-n^{\Omega(1)})$-secure against constant-depth circuits of $\exp(n^{\Omega(1)})$-size. Prior work of Chattopadhyay and Li (STOC 2017) and Ball et al. (FOCS 2018) only provides protection against $\exp(O(\log^2 n))$-size circuits with $\exp(-O(\log^2 n))$-security.

We achieve our result through simple non-malleable reductions of decision tree tampering to split-state tampering. As an intermediary, we give a simple and generic reduction of leakage-resilient split-state tampering to split-state tampering with improved parameters. Prior work of Aggarwal et al. (TCC 2015) only provides a reduction to split-state non-malleable codes with decoders that exhibit particular properties.
2019
CRYPTO
On the Plausibility of Fully Homomorphic Encryption for RAMs
We initiate the study of fully homomorphic encryption for RAMs (RAM-FHE). This is a public-key encryption scheme where, given an encryption of a large database D, anybody can efficiently compute an encryption of P(D) for an arbitrary RAM program P. The running time over the encrypted data should be as close as possible to the worst case running time of P, which may be sub-linear in the data size.

A central difficulty in constructing a RAM-FHE scheme is hiding the sequence of memory addresses accessed by P. This is particularly problematic because an adversary may homomorphically evaluate many programs over the same ciphertext, therefore effectively “rewinding” any mechanism for making memory accesses oblivious.

We identify a necessary prerequisite towards constructing RAM-FHE that we call rewindable oblivious RAM (rewindable ORAM), which provides security even in this strong adversarial setting. We show how to construct rewindable ORAM using symmetric-key doubly efficient PIR (SK-DEPIR) (Canetti-Holmgren-Richelson, Boyle-Ishai-Pass-Wootters: TCC ’17). We then show how to use rewindable ORAM, along with virtual black-box (VBB) obfuscation for specific circuits, to construct RAM-FHE. The latter primitive can be heuristically instantiated using existing indistinguishability obfuscation candidates. Overall, we obtain a RAM-FHE scheme where the multiplicative overhead in running time is polylogarithmic in the database size N. Our basic scheme is single-hop, but we also extend it to obtain multi-hop RAM-FHE with overhead $N^\epsilon$ for arbitrarily small $\epsilon > 0$.

We view our work as the first evidence that RAM-FHE is likely to exist.
2019
CRYPTO
Adaptively Secure MPC with Sublinear Communication Complexity
Ran Cohen, Abhi Shelat, Daniel Wichs
A central challenge in the study of MPC is to balance between security guarantees, hardness assumptions, and resources required for the protocol. In this work, we study the cost of tolerating adaptive corruptions in MPC protocols under various corruption thresholds. In the strongest setting, we consider adaptive corruptions of an arbitrary number of parties (potentially all) and achieve the following results:

- A two-round secure function evaluation (SFE) protocol in the CRS model, assuming LWE and indistinguishability obfuscation (iO). The communication, the CRS size, and the online computation are sublinear in the size of the function. The iO assumption can be replaced by secure erasures. Previous results required either the communication or the CRS size to be polynomial in the function size.

- Under the same assumptions, we construct a “Bob-optimized” 2PC (where Alice talks first, Bob second, and Alice learns the output). That is, the communication complexity and total computation of Bob are sublinear in the function size and in Alice’s input size. We prove impossibility of “Alice-optimized” protocols.

- Assuming LWE, we bootstrap adaptively secure NIZK arguments to achieve proof size sublinear in the circuit size of the NP-relation.

On a technical level, our results are based on laconic function evaluation (LFE) (Quach, Wee, and Wichs, FOCS’18) and shed light on an interesting duality between LFE and FHE.

Next, we analyze adaptive corruptions of all-but-one of the parties and show a two-round SFE protocol in the threshold PKI model (where keys of a threshold FHE scheme are pre-shared among the parties) with communication complexity sublinear in the circuit size, assuming LWE and NIZK. Finally, we consider the honest-majority setting, and show a two-round SFE protocol with guaranteed output delivery under the same constraints.
2019
CRYPTO
New Constructions of Reusable Designated-Verifier NIZKs
Non-interactive zero-knowledge arguments (NIZKs) for $\mathsf{NP}$ are an important cryptographic primitive, but we currently only have instantiations under a few specific assumptions. Notably, we are missing constructions from the learning with errors (LWE) assumption, the Diffie-Hellman (CDH/DDH) assumption, and the learning parity with noise (LPN) assumption.

In this paper, we study a relaxation of NIZKs to the designated-verifier setting (DV-NIZK), where a trusted setup generates a common reference string together with a secret key for the verifier. We want reusable schemes, which allow the verifier to reuse the secret key to verify many different proofs, and soundness should hold even if the malicious prover learns whether various proofs are accepted or rejected. Such reusable DV-NIZKs were recently constructed under the CDH assumption, but it was open whether they can also be constructed under LWE or LPN.

We also consider an extension of reusable DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK). In this setting, the only trusted setup consists of a common random string. However, there is also an additional untrusted setup in which the verifier chooses a public/secret key needed to generate/verify proofs, respectively. We require that zero-knowledge holds even if the public key is chosen maliciously by the verifier. Such reusable MDV-NIZKs were recently constructed under the “one-more CDH” assumption, but constructions under CDH/LWE/LPN remained open.

In this work, we give new constructions of (reusable) DV-NIZKs and MDV-NIZKs using generic primitives that can be instantiated under CDH, LWE, or LPN.
2019
CRYPTO
Broadcast and Trace with $N^\varepsilon$ Ciphertext Size from Standard Assumptions
We construct a broadcast and trace scheme (also known as trace and revoke or broadcast, trace and revoke) with N users, where the ciphertext size can be made as low as $O(N^\varepsilon)$, for any arbitrarily small constant $\varepsilon > 0$. This improves on the prior best construction of broadcast and trace under standard assumptions by Boneh and Waters (CCS ‘06), which had ciphertext size $O(N^{1/2})$. While that construction relied on bilinear maps, ours uses a combination of the learning with errors (LWE) assumption and bilinear maps.

Recall that, in both broadcast encryption and traitor-tracing schemes, there is a collection of N users, each of which gets a different secret key $\mathsf{sk}_i$. In broadcast encryption, it is possible to create ciphertexts targeted to a subset $S \subseteq [N]$ of the users such that only those users can decrypt it correctly. In a traitor tracing scheme, if a subset of users gets together and creates a decoder box D that is capable of decrypting ciphertexts, then it is possible to trace at least one of the users responsible for creating D. A broadcast and trace scheme intertwines the two properties, in a way that results in more than just their union. In particular, it ensures that if a decoder D is able to decrypt ciphertexts targeted toward a set S of users, then it should be possible to trace one of the users in the set S responsible for creating D, even if other users outside of S also participated.

By now, we have essentially optimal broadcast encryption (Boneh, Gentry, Waters CRYPTO ’05) under bilinear maps and traitor tracing (Goyal, Koppula, Waters STOC ’18) under LWE, where the ciphertext size is at most poly-logarithmic in N. The main contribution of our paper is to carefully combine LWE and bilinear-map based components, and get them to interact with each other, to achieve broadcast and trace.
2019
JOFC
From Cryptomania to Obfustopia Through Secret-Key Functional Encryption
Functional encryption lies at the frontiers of the current research in cryptography; some variants have been shown sufficiently powerful to yield indistinguishability obfuscation (IO), while other variants have been constructed from standard assumptions such as LWE. Indeed, most variants have been classified as belonging to either the former or the latter category. However, one mystery that has remained is the case of secret-key functional encryption with an unbounded number of keys and ciphertexts. On the one hand, this primitive is not known to imply anything outside of minicrypt, the land of secret-key cryptography, but, on the other hand, we do not know how to construct it without the heavy hammers in obfustopia.

In this work, we show that (subexponentially secure) secret-key functional encryption is powerful enough to construct indistinguishability obfuscation if we additionally assume the existence of (subexponentially secure) plain public-key encryption. In other words, secret-key functional encryption provides a bridge from cryptomania to obfustopia.

On the technical side, our result relies on two main components. As our first contribution, we show how to use secret-key functional encryption to get “exponentially efficient indistinguishability obfuscation” (XIO), a notion recently introduced by Lin et al. (PKC’16) as a relaxation of IO. Lin et al. show how to use XIO and the LWE assumption to build IO. As our second contribution, we improve on this result by replacing its reliance on the LWE assumption with any plain public-key encryption scheme.

Lastly, we ask whether secret-key functional encryption can be used to construct public-key encryption itself and therefore take us all the way from minicrypt to obfustopia. A result of Asharov and Segev (FOCS’15) shows that this is not the case under black-box constructions, even for exponentially secure functional encryption. We show, through a non-black-box construction, that subexponentially secure secret-key functional encryption indeed leads to public-key encryption. The resulting public-key encryption scheme, however, is at most quasi-polynomially secure, which is insufficient to take us to obfustopia.
2018
CRYPTO
Hardness of Non-interactive Differential Privacy from One-Way Functions
A central challenge in differential privacy is to design computationally efficient non-interactive algorithms that can answer large numbers of statistical queries on a sensitive dataset. That is, we would like to design a differentially private algorithm that takes a dataset $D \in X^n$ consisting of some small number of elements n from some large data universe X, and efficiently outputs a summary that allows a user to efficiently obtain an answer to any query in some large family Q.

Ignoring computational constraints, this problem can be solved even when X and Q are exponentially large and n is just a small polynomial; however, all algorithms with remotely similar guarantees run in exponential time. There have been several results showing that, under the strong assumption of indistinguishability obfuscation, no efficient differentially private algorithm exists when X and Q can be exponentially large. However, there are no strong separations between information-theoretic and computationally efficient differentially private algorithms under any standard complexity assumption.

In this work we show that, if one-way functions exist, there is no general-purpose differentially private algorithm that works when X and Q are exponentially large, and n is an arbitrary polynomial. In fact, we show that this result holds even if X is just subexponentially large (assuming only polynomially-hard one-way functions). This result solves an open problem posed by Vadhan in his recent survey [52].
2018
PKC
Multi-Key Searchable Encryption, Revisited
We consider a setting where users store their encrypted documents on a remote server and can selectively share documents with each other. A user should be able to perform keyword searches over all the documents she has access to, including the ones that others shared with her. The contents of the documents, and the search queries, should remain private from the server.

This setting was considered by Popa et al. (NSDI ’14) who developed a new cryptographic primitive called Multi-Key Searchable Encryption (MKSE), together with an instantiation and an implementation within a system called Mylar, to address this goal. Unfortunately, Grubbs et al. (CCS ’16) showed that the proposed MKSE definition fails to provide basic security guarantees, and that the Mylar system is susceptible to simple attacks. Most notably, if a malicious Alice colludes with the server and shares a document with an honest Bob then the privacy of all of Bob’s search queries is lost.

In this work we revisit the notion of MKSE and propose a new strengthened definition that rules out the above attacks. We then construct MKSE schemes meeting our definition. We first give a simple and efficient construction using only pseudorandom functions. This construction achieves our strong security definition at the cost of increasing the server storage overhead relative to Mylar, essentially replicating the document each time it is shared. We also show that high server storage overhead is not inherent, by giving an alternate (albeit impractical) construction that manages to avoid it using obfuscation.
2018
TCC
Traitor-Tracing from LWE Made Simple and Attribute-Based
A traitor tracing scheme is a public key encryption scheme for which there are many secret decryption keys. Any of these keys can decrypt a ciphertext; moreover, even if a coalition of users collude, put together their decryption keys and attempt to create a new decryption key, there is an efficient algorithm to trace the new key to at least one of the colluders.

Recently, Goyal, Koppula and Waters (GKW, STOC 18) provided the first traitor tracing scheme from LWE with ciphertext and secret key sizes that grow polynomially in $\log n$, where n is the number of users. The main technical building block in their construction is a strengthening of (bounded collusion secure) secret-key functional encryption which they refer to as mixed functional encryption (FE).

In this work, we improve upon and extend the GKW traitor tracing scheme:

- We provide simpler constructions of mixed FE schemes based on the LWE assumption. Our constructions improve upon the GKW construction in terms of expressiveness, modularity, and security.

- We provide a construction of attribute-based traitor tracing for all circuits based on the LWE assumption.
2018
TCC
Encrypted Computation
Daniel Wichs
2018
TCC
Is There an Oblivious RAM Lower Bound for Online Reads?
Mor Weiss Daniel Wichs
Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), can be used to read and write to memory in a way that hides which locations are being accessed. The best known ORAM schemes have an $O(\log n)$ overhead per access, where $n$ is the data size. The work of Goldreich and Ostrovsky gave a lower bound showing that this is optimal for ORAM schemes that operate in a “balls and bins” model, where memory blocks can only be shuffled between different locations but not manipulated otherwise. The lower bound even extends to weaker settings such as offline ORAM, where all of the accesses to be performed need to be specified ahead of time, and read-only ORAM, which only allows reads but not writes. But can we get lower bounds for general ORAM, beyond “balls and bins”?

The work of Boyle and Naor (ITCS ’16) shows that this is unlikely in the offline setting. In particular, they construct an offline ORAM with $o(\log n)$ overhead assuming the existence of small sorting circuits. Although we do not have instantiations of the latter, ruling them out would require proving new circuit lower bounds. On the other hand, the recent work of Larsen and Nielsen (CRYPTO ’18) shows that there indeed is an $\Omega(\log n)$ lower bound for general online ORAM.

This still leaves the question open for online read-only ORAM, or for read/write ORAM where we want very small overhead for the read operations. In this work, we show that a lower bound in these settings is also unlikely. In particular, our main result is a construction of online ORAM where reads (but not writes) have an $o(\log n)$ overhead, assuming the existence of small sorting circuits as well as very good locally decodable codes (LDCs). Although we do not have instantiations of either of these with the required parameters, ruling them out is beyond current lower bounds.
2018
TCC
Watermarking PRFs Under Standard Assumptions: Public Marking and Security with Extraction Queries
A software watermarking scheme can embed some information called a mark into a program while preserving its functionality. No adversary can remove the mark without damaging the functionality of the program. Cohen et al. (STOC ’16) gave the first positive results for watermarking, showing how to watermark certain pseudorandom function (PRF) families using indistinguishability obfuscation (iO). Their scheme has a secret marking procedure to embed marks in programs and a public extraction procedure to extract the marks from programs; security holds even against an attacker that has access to a marking oracle. Kim and Wu (CRYPTO ’17) later constructed a PRF watermarking scheme under only the LWE assumption. In their scheme, both the marking and extraction procedures are secret, but security only holds against an attacker with access to a marking oracle but not an extraction oracle. In fact, it is possible to completely break the security of the latter scheme using extraction queries, which is a significant limitation in any foreseeable application.

In this work, we construct a new PRF watermarking scheme with the following properties:

- The marking procedure is public and therefore anyone can embed marks in PRFs from the family. Previously we had no such construction even using obfuscation.

- The extraction key is secret, but marks remain unremovable even if the attacker has access to an extraction oracle. Previously we had no such construction under standard assumptions.

Our scheme is simple, uses generic components and can be instantiated under many different assumptions such as DDH, Factoring or LWE. The above benefits come with one caveat compared to prior work: the PRF family that we can watermark depends on the public parameters of the watermarking scheme, and the watermarking authority has a secret key which can break the security of all of the PRFs in the family. Since the watermarking authority is usually assumed to be trusted, this caveat appears to be acceptable.
2017
CRYPTO
2017
TCC
2017
TCC
2017
JOFC
2016
EUROCRYPT
2016
EUROCRYPT
2016
EUROCRYPT
2016
CRYPTO
2016
CRYPTO
2016
TCC
2016
TCC
2016
TCC
2016
TCC
2016
TCC
2016
JOFC
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
EPRINT
2015
TCC
2015
PKC
2015
ASIACRYPT
2014
CRYPTO
2014
CRYPTO
2014
EUROCRYPT
2014
EUROCRYPT
2014
EUROCRYPT
2014
EPRINT
2014
EPRINT
2014
EPRINT
2014
EPRINT
2013
TCC
2013
CRYPTO
2013
ASIACRYPT
2013
ASIACRYPT
2013
EUROCRYPT
2013
EUROCRYPT
2013
JOFC
Fully Leakage-Resilient Signatures
Elette Boyle, Gil Segev, Daniel Wichs
A signature scheme is fully leakage resilient (Katz and Vaikuntanathan, ASIACRYPT’09) if it is existentially unforgeable under an adaptive chosen-message attack even in a setting where an adversary may obtain bounded (yet arbitrary) leakage information on all intermediate values that are used throughout the lifetime of the system. This is a strong and meaningful notion of security that captures a wide range of side-channel attacks.

One of the main challenges in constructing fully leakage-resilient signature schemes is dealing with leakage that may depend on the random bits used by the signing algorithm, and constructions of such schemes are known only in the random-oracle model. Moreover, even in the random-oracle model, known schemes are only resilient to leakage of less than half the length of their signing key.

In this paper we construct the first fully leakage-resilient signature schemes without random oracles. We present a scheme that is resilient to any leakage of length (1−o(1))L bits, where L is the length of the signing key. Our approach relies on generic cryptographic primitives, and at the same time admits rather efficient instantiations based on specific number-theoretic assumptions. In addition, we show that our approach extends to the continual-leakage model, recently introduced by Dodis, Haralambiev, Lopez-Alt and Wichs (FOCS’10), and by Brakerski, Tauman Kalai, Katz and Vaikuntanathan (FOCS’10). In this model the signing key is allowed to be refreshed, while its corresponding verification key remains fixed, and the amount of leakage is assumed to be bounded only in between any two successive key refreshes.
2012
TCC
2012
EUROCRYPT
2012
EUROCRYPT
2011
TCC
2011
CRYPTO
2011
EUROCRYPT
2010
TCC
2010
ASIACRYPT
2010
EUROCRYPT
2010
EPRINT
On Symmetric Encryption and Point Obfuscation
We show tight connections between several cryptographic primitives, namely encryption with weakly random keys, encryption with key-dependent messages (KDM), and obfuscation of point functions with multi-bit output (which we call multi-bit point functions, or MBPFs, for short). These primitives, which have been studied mostly separately in recent works, bear some apparent similarities, both in the flavor of their security requirements and in the flavor of their constructions and assumptions. Still, rigorous connections have not been drawn. Our results can be interpreted as indicating that MBPF obfuscators imply a very strong form of encryption that simultaneously achieves security for weakly-random keys and key-dependent messages as special cases. Similarly, each one of the other primitives implies a certain restricted form of MBPF obfuscation. Our results carry both constructions and impossibility results from one primitive to others. In particular:

- The recent impossibility result for KDM security of Haitner and Holenstein (TCC ’09) carries over to MBPF obfuscators.

- The Canetti-Dakdouk construction of MBPF obfuscators based on a strong variant of the DDH assumption (EC ’08) gives an encryption scheme which is secure w.r.t. any weak key distribution of super-logarithmic min-entropy (and in particular, also has very strong leakage-resilient properties).

- All the recent constructions of encryption schemes that are secure w.r.t. weak keys imply a weak form of MBPF obfuscators.
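For reference, a multi-bit point function, the object being obfuscated above, is simply the following (our notation):

```latex
% Multi-bit point function (MBPF), notation ours:
\[
  I_{x,y}(x') =
  \begin{cases}
    y     & \text{if } x' = x, \\
    \bot  & \text{otherwise,}
  \end{cases}
\]
% and an MBPF obfuscator outputs a program computing I_{x,y} while
% revealing essentially nothing else about (x, y).
```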
2010
EPRINT
Efficient Public-Key Cryptography in the Presence of Key Leakage
We study the design of cryptographic primitives resistant to a large class of side-channel attacks, called "memory attacks", where an attacker can repeatedly and adaptively learn information about the secret key, subject *only* to the constraint that the *overall amount* of such information is bounded by some parameter $\ell$. Although the study of such primitives was initiated only recently by Akavia et al. [AGV09], subsequent work already produced many such "leakage-resilient" primitives [NS09, ADW09, KV09], including signature, encryption, identification (ID) and authenticated key agreement (AKA) schemes. Unfortunately, every existing scheme, for any of the four fundamental primitives above, fails to satisfy at least one of the following desirable properties:

- Efficiency. While the construction may be generic, it should have some *efficient* instantiations, based on standard cryptographic assumptions, and without relying on random oracles.

- Strong Security. The construction should satisfy the strongest possible definition of security (even in the presence of leakage). For example, encryption schemes should be secure against chosen *ciphertext* attack (CCA), while signatures should be *existentially* unforgeable.

- Leakage Flexibility. It should be possible to set the parameters of the schemes so that the leakage bound $\ell$ can come arbitrarily close to the size of the secret key $sk$.

In this work we design the first signature, encryption, ID and AKA schemes which overcome these limitations, and satisfy all the properties above. Moreover, all our constructions are generic, in several cases elegantly simplifying and generalizing the prior constructions (which did not have any efficient instantiations). We also introduce several tools of independent interest, such as the abstraction (and constructions) of *simulation-extractable* NIZK arguments, and a new *deniable* DH-based AKA protocol based on any CCA-secure encryption.
2010
EPRINT
Cryptography Against Continuous Memory Attacks
We say that a cryptographic scheme is Continuous Leakage-Resilient (CLR) if it allows users to refresh their secret keys, using only fresh local randomness, such that:

1. The scheme remains functional after any number of key refreshes, although the public key never changes. Thus, the "outside world" is neither affected by these key refreshes, nor needs to know about their frequency.

2. The scheme remains secure even if the adversary can continuously leak arbitrary information about the current secret key of the system, as long as the amount of leaked information is bounded in between any two successive key refreshes. There is no bound on the total amount of information that can be leaked during the lifetime of the system.

In this work, we construct a variety of practical CLR schemes, including CLR one-way relations, CLR signatures, CLR identification schemes, and CLR authenticated key agreement protocols. For each of the above, we give general constructions, and then show how to instantiate them efficiently using a well-established assumption on bilinear groups, called the K-Linear assumption (for any constant K >= 1). Our constructions are highly modular, and we develop many interesting techniques and building-blocks along the way, including: leakage-indistinguishable re-randomizable relations, homomorphic NIZKs, and leakage-of-ciphertext non-malleable encryption schemes.

Prior to our work, no "truly CLR" schemes were known, as previous leakage-resilient schemes suffer from one or more of the following drawbacks:

(a) restrictions are placed on the type of allowed leakage, such as the axiom that "only computation leaks information";
(b) the overall amount of key leakage is bounded a priori for the lifetime of the system and there is no method for refreshing keys;
(c) the efficiency of the scheme degrades proportionally with the number of refreshes;
(d) the key updates require an additional leak-free "master secret key" to be stored securely;
(e) the scheme is only proven secure under a strong non-standard assumption.
2009
TCC
2009
TCC
2009
CRYPTO
2009
CRYPTO
2009
EPRINT
Proofs of Retrievability via Hardness Amplification
Proofs of Retrievability (PoR), introduced by Juels and Kaliski, allow the client to store a file $F$ on an untrusted server, and later run an efficient audit protocol in which the server proves that it (still) possesses the client's data. Constructions of PoR schemes attempt to minimize the client and server storage, the communication complexity of an audit, and even the number of file blocks accessed by the server during the audit. In this work, we identify several different variants of the problem (such as bounded-use vs. unbounded-use, knowledge-soundness vs. information-soundness), and give nearly optimal PoR schemes for each of these variants. Our constructions either improve (and generalize) the prior PoR constructions, or give the first known PoR schemes with the required properties. In particular, we:

- Formally prove the security of an (optimized) variant of the bounded-use scheme of Juels and Kaliski [JuelsK07], without making any simplifying assumptions on the behavior of the adversary.

- Build the first unbounded-use PoR scheme where the communication complexity is linear in the security parameter and which does not rely on Random Oracles, resolving an open question of Shacham and Waters [ShachamW08].

- Build the first bounded-use scheme with *information-theoretic* security.

The main insight of our work comes from a simple connection between PoR schemes and the notion of *hardness amplification*, extensively studied in complexity theory. In particular, our improvements come from first abstracting a purely information-theoretic notion of *PoR codes*, and then building nearly optimal PoR codes using state-of-the-art tools from coding and complexity theory.
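As a feel for the object being studied, here is a toy spot-checking audit in the general spirit of bounded-use PoR schemes; this is a minimal illustrative sketch, not any of the paper's constructions, and all parameters are arbitrary.

```python
# Toy block-spot-checking audit; illustrative sketch only, not the
# paper's construction. The client MACs each (index, block) pair at
# setup and later spot-checks random blocks held by the server.
import hashlib
import hmac
import os
import random

BLOCK = 1024  # toy block size

def por_setup(data: bytes):
    """Client: split the file into blocks and MAC each (index, block).
    The blocks and tags go to the server; the client keeps the key."""
    key = os.urandom(32)
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    tags = [hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]
    return key, blocks, tags

def por_audit(key: bytes, n_blocks: int, server_read, challenges: int = 30):
    """Client: spot-check random blocks; server_read(i) -> (block, tag)."""
    for i in random.sample(range(n_blocks), min(challenges, n_blocks)):
        block, tag = server_read(i)
        expected = hmac.new(key, i.to_bytes(8, "big") + block,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
    return True
```

A server that has lost a constant fraction of the blocks fails each challenge with constant probability, so the audit's soundness error drops exponentially in the number of challenges; this is the simplest instance of the amplification flavor the abstract alludes to.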
2008
EUROCRYPT
2008
EUROCRYPT
2008
EPRINT
Detection of Algebraic Manipulation with Applications to Robust Secret Sharing and Fuzzy Extractors
Consider an abstract storage device $\Sigma(\G)$ that can hold a single element $x$ from a fixed, publicly known finite group $\G$. Storage is private in the sense that an adversary does not have read access to $\Sigma(\G)$ at all. However, $\Sigma(\G)$ is non-robust in the sense that the adversary can modify its contents by adding some offset $\Delta \in \G$. Due to the privacy of the storage device, the value $\Delta$ can only depend on an adversary's {\em a priori} knowledge of $x$. We introduce a new primitive called an {\em algebraic manipulation detection} (AMD) code, which encodes a source $s$ into a value $x$ stored on $\Sigma(\G)$ so that any tampering by an adversary will be detected, except with a small error probability $\delta$. We give a nearly optimal construction of AMD codes, which can flexibly accommodate arbitrary choices for the length of the source $s$ and security level $\delta$. We use this construction in two applications: \begin{itemize} \item We show how to efficiently convert any linear secret sharing scheme into a {\em robust secret sharing scheme}, which ensures that no \emph{unqualified subset} of players can modify their shares and cause the reconstruction of some value $s'\neq s$. \item We show how to build nearly optimal {\em robust fuzzy extractors} for several natural metrics. Robust fuzzy extractors enable one to reliably extract and later recover random keys from noisy and non-uniform secrets, such as biometrics, by relying only on {\em non-robust public storage}. In the past, such constructions were known only in the random oracle model, or required the entropy rate of the secret to be greater than half. Our construction relies on a randomly chosen common reference string (CRS) available to all parties. \end{itemize}
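The standard polynomial-based AMD construction from this line of work can be sketched concretely: over a field $\F_q$, encode a source $s = (s_1, \ldots, s_d)$ as $(s, x, x^{d+2} + \sum_{i=1}^{d} s_i x^i)$ for random $x$, assuming $d+2$ is not divisible by the field characteristic; any fixed additive offset then goes undetected only with probability about $(d+1)/q$. A minimal Python sketch over a toy prime field:

    import secrets

    q = 2**61 - 1                       # toy prime field F_q

    def amd_tag(s, x):
        # f(s, x) = x^(d+2) + sum_{i=1}^{d} s_i * x^i over F_q.
        d = len(s)
        f = pow(x, d + 2, q)
        for i, si in enumerate(s, start=1):
            f = (f + si * pow(x, i, q)) % q
        return f

    def amd_encode(s):
        x = secrets.randbelow(q)        # fresh randomness per encoding
        return s, x, amd_tag(s, x)

    def amd_verify(s, x, f):
        return amd_tag(s, x) == f

    s, x, f = amd_encode([7, 11, 13])
    assert amd_verify(s, x, f)
    # Additive tampering (an offset fixed without seeing the codeword)
    # slips through only with probability about (d + 1)/q.
    assert not amd_verify([(s[0] + 1) % q] + s[1:], x, f)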
2008
EPRINT
Somewhat Non-Committing Encryption and Efficient Adaptively Secure Oblivious Transfer
Designing efficient cryptographic protocols tolerating adaptive adversaries, who are able to corrupt parties on the fly as the computation proceeds, has been an elusive task. Indeed, thus far no \emph{efficient} protocols achieve adaptive security for general multi-party computation, or even for many specific two-party tasks such as oblivious transfer (OT). In fact, it is difficult and expensive to achieve adaptive security even for the task of \emph{secure communication}, which is arguably the most basic task in cryptography. In this paper we make progress in this area. First, we introduce a new notion called \emph{semi-adaptive} security which is slightly stronger than static security but \emph{significantly weaker than fully adaptive security}. The main difference between adaptive and semi-adaptive security is that, for semi-adaptive security, the simulator is not required to handle the case where \emph{both} parties start out honest and one becomes corrupted later on during the protocol execution. As such, semi-adaptive security is much easier to achieve than fully adaptive security. We then give a simple, generic protocol compiler which transforms any semi-adaptively secure protocol into a fully adaptively secure one. The compilation effectively decomposes the problem of adaptive security into two (simpler) problems which can be tackled separately: the problem of semi-adaptive security and the problem of realizing a weaker variant of secure channels. We solve the latter problem by means of a new primitive that we call {\em somewhat non-committing encryption}, resulting in significant efficiency improvements over the standard method for realizing (fully) secure channels using (fully) non-committing encryption. Somewhat non-committing encryption has two parameters: an equivocality parameter $\ell$ (measuring the number of ways that a ciphertext can be ``opened'') and the message size $k$. Our implementation is very efficient for small values of $\ell$, \emph{even} when $k$ is large. This translates into a very efficient compilation of many semi-adaptively secure protocols (in particular, for a task with small input/output domains such as bit-OT) into a fully adaptively secure protocol. Finally, we showcase our methodology by applying it to the recent Oblivious Transfer protocol by Peikert \etal\ [Crypto 2008], which is only secure against static corruptions, to obtain the first efficient (expected-constant round, expected-constant number of public-key operations), adaptively secure and composable bit-OT protocol.
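To see what the equivocality parameter $\ell$ buys, consider the following toy in Python. It conveys only the flavor of $\ell$-equivocality and is not the paper's construction (it ignores the efficiency and indistinguishability properties a real somewhat non-committing scheme must satisfy): a ciphertext carries $\ell$ one-time-pad slots, and an "opening" is an index plus the pad for that slot, so the same ciphertext can later be opened to any of $\ell$ pre-placed messages.

    import secrets

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def enc_equivocal(candidates):
        # One OTP slot per candidate opening. Honest use puts the real
        # message in every slot; a simulator spreads the l messages it
        # may later need to claim.
        pads = [secrets.token_bytes(len(m)) for m in candidates]
        return [xor(k, m) for k, m in zip(pads, candidates)], pads

    def open_to(pads, i):
        return i, pads[i]          # "the key": an index and one pad

    def dec(cts, i, pad):
        return xor(cts[i], pad)

    cts, pads = enc_equivocal([b"yes!", b"no!!"])   # l = 2 candidate openings
    i, pad = open_to(pads, 1)
    assert dec(cts, i, pad) == b"no!!"              # same ciphertext, other opening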
2008
EPRINT
One-Round Authenticated Key Agreement from Weak Secrets
Yevgeniy Dodis Daniel Wichs
We study the question of information-theoretically secure authenticated key agreement from weak secrets. In this setting, Alice and Bob share an $n$-bit secret $W$, which might \emph{not} be uniformly random, but the adversary has at least $k$ bits of uncertainty about it (formalized using conditional min-entropy). Alice and Bob wish to use $W$ to agree on a nearly uniform secret key $R$, over a public channel controlled by an \emph{active} adversary Eve. We show that non-interactive (single-message) protocols do not work when $k\le \frac{n}{2}$, and require poor parameters even when $\frac{n}{2} < k\ll n$. On the other hand, for arbitrary values of $k$, we design a communication-efficient {\em two-message (i.e., one-round!)} protocol extracting nearly $k$ random bits. This dramatically improves the only previously known protocol of Renner and Wolf~\cite{RennerW03}, which required $O(\lambda)$ rounds where $\lambda$ is the security parameter. Our solution takes a new approach by studying and constructing \emph{``non-malleable'' seeded randomness extractors} --- if an attacker sees a random seed $X$ and comes up with an arbitrarily related seed $X'$, then we bound the relationship between $R = \Ext(W;X)$ and $R' = \Ext(W;X')$. We also extend our one-round key agreement protocol to the ``fuzzy'' setting, where Alice and Bob share ``close'' (but not equal) secrets $W_A$ and $W_B$, and to the Bounded Retrieval Model (BRM), where the size of the secret $W$ is huge.
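The two-message flow can be sketched in Python as follows. This is roughly the structure suggested by the abstract, with HMAC standing in for the extractor; that substitution is only a computational heuristic and not the paper's information-theoretic non-malleable extractor, and all names here are illustrative.

    import hmac, hashlib, secrets

    def ext(w, seed):
        # Stand-in for Ext(W; seed); the protocol needs a *non-malleable*
        # seeded extractor, which HMAC only heuristically approximates.
        return hmac.new(seed, w, hashlib.sha256).digest()

    w = secrets.token_bytes(64)     # weak secret W, held by both Alice and Bob

    # Message 1 (Alice -> Bob): a uniformly random extractor seed X.
    x = secrets.token_bytes(32)

    # Message 2 (Bob -> Alice): a fresh seed Y, authenticated under a MAC
    # key derived as Ext(W; X), so an active Eve cannot swap in her own Y.
    y = secrets.token_bytes(32)
    tag = hmac.new(ext(w, x), y, hashlib.sha256).digest()

    # Alice recomputes Ext(W; X), checks the tag, and both sides output
    # the final key R = Ext(W; Y).
    assert hmac.compare_digest(tag, hmac.new(ext(w, x), y, hashlib.sha256).digest())
    r = ext(w, y)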
2007
EPRINT
Isolated Proofs of Knowledge and Isolated Zero Knowledge
We introduce a new notion called $\ell$-isolated proofs of knowledge ($\ell$-IPoK). These are proofs of knowledge where a cheating prover is allowed to exchange up to $\ell$ bits of communication with some external adversarial environment during the run of the proof. Without any additional setup assumptions, no witness hiding protocol can be an $\ell$-IPoK for \emph{unbounded} values of $\ell$. However, for any \emph{pre-defined} threshold $\ell$ and any relation in NP, we construct an $\ell$-IPoK protocol for that relation. The resulting protocols are zero knowledge (ZK) in the standard sense, i.e., w.r.t. a verifier that communicates only with the prover during the proof. The cost of supporting a large threshold $\ell$ is an increase in the communication complexity of the constructed protocol. We analyze these costs and present a solution that is asymptotically optimal. If a cheating verifier is allowed to communicate arbitrarily with an external environment, it is not possible to construct an $\ell$-IPoK that is also ZK with respect to such a verifier. As another new notion, we define $\ell$-isolated zero knowledge ($\ell$-IZK), where the verifier is $\ell$-isolated. For every relation in NP and every $\ell$, we construct an $\ell$-IPoK protocol that is also $\ell$-IZK. We describe several applications of $\ell$-IPoK protocols under the physical assumption that one can $\ell$-isolate a prover for the duration of the proof phase. Firstly, we can use a witness indistinguishable (WI) $\ell$-IPoK to prevent ``man-in-the-middle'' attacks on identification schemes. Prior results for this scenario required all verifiers to register keys under a PKI, or the ability to fully isolate the prover. Secondly, a partially isolated prover can register a public key and use a WI $\ell$-IPoK to prove knowledge of the corresponding secret key to another party acting as a verifier. This allows us to set up a PKI where the key registrant does not need to trust the Certificate Authority. The PKI is not perfect, since the proof is only witness indistinguishable and not zero knowledge. In a companion paper, we show how to set up such a PKI and use it to implement arbitrary multiparty computation securely in the UC framework without relying on any trusted third parties.
2007
EPRINT
Universally Composable Multiparty Computation with Partially Isolated Parties
It is well known that universally composable multiparty computation cannot, in general, be achieved in the standard model without setup assumptions when the adversary can corrupt an arbitrary number of players. One way to get around this problem is by having a trusted third party generate some global setup such as a common reference string (CRS) or a public key infrastructure (PKI). Recently, an alternative solution was proposed by Katz in \cite{Katz}, suggesting that one may rely on physical assumptions rather than trusted third parties. Concretely, the solution assumed it physically possible to construct tamper-proof hardware tokens which can be run in complete isolation from the surrounding environment. Here we improve upon the work of \cite{Katz} by constructing a scheme in which the tokens only need to be partially isolated and may have some {\em limited communication with the environment}. In addition, our scheme is secure against \emph{adaptive adversaries} and is based on \emph{general cryptographic assumptions}. We also consider an alternative scenario, in which there are some trusted third parties but no single such party is trusted by all of the players. This compromise allows us to limit the use of the physical set-up and hence might be preferred in practice.

Program Committees

TCC 2020
Crypto 2018
Eurocrypt 2017
TCC 2017
TCC 2015
PKC 2014
Asiacrypt 2014
Crypto 2013
TCC 2012

Coauthors

Shweta Agrawal (2)
Joël Alwen (3)
Gilad Asharov (1)
Marshall Ball (1)
Allison Bishop (2)
Nir Bitansky (5)
Elette Boyle (2)
Zvika Brakerski (1)
Ran Canetti (2)
David Cash (2)
Suvradip Chakraborty (1)
Yilei Chen (1)
Ran Cohen (1)
Ronald Cramer (2)
Dana Dachman-Soled (1)
Ivan Damgård (4)
Lalita Devadas (1)
Srinivas Devadas (1)
Yevgeniy Dodis (19)
Nico Döttling (1)
Stefan Dziembowski (2)
Sebastian Faust (1)
Serge Fehr (2)
Christopher W. Fletcher (1)
Juan A. Garay (2)
Sanjam Garg (3)
Rosario Gennaro (1)
Craig Gentry (4)
Shafi Goldwasser (1)
Rishab Goyal (1)
Siyao Guo (1)
Mohammad Hajiabadi (1)
Shai Halevi (5)
Ariel Hamlin (3)
Kristiyan Haralambiev (3)
Carmit Hazay (2)
Brett Hemenway (1)
Dennis Hofheinz (2)
Justin Holmgren (1)
Zahra Jafargholi (5)
Abhishek Jain (3)
Yael Tauman Kalai (3)
Chethan Kamath (2)
Harish Karthikeyan (1)
Tomasz Kazana (2)
Eike Kiltz (1)
Saleet Klein (1)
Karen Klein (2)
Ilan Komargodski (1)
Lucas Kowalczyk (1)
Stephan Krenn (1)
Alptekin Küpçü (2)
Xin Li (1)
Alex Lombardi (2)
Adriana López-Alt (5)
Steve Lu (1)
Vadim Lyubashevsky (2)
Fermi Ma (1)
Tal Malkin (1)
Daniel Masny (1)
Tal Moran (2)
Pratyay Mukherjee (3)
Moni Naor (1)
Jesper Buus Nielsen (4)
Ryo Nishimaki (5)
Tatsuaki Okamoto (2)
Rafail Ostrovsky (3)
Carles Padró (2)
Omer Paneth (2)
Alain Passelègue (2)
Valerio Pastro (2)
Krzysztof Pietrzak (7)
Manoj Prabhakaran (1)
Willy Quach (7)
Rajmohan Rajaraman (2)
Vanishree Rao (2)
Mariana Raykova (3)
Ling Ren (1)
Ron D. Rothblum (3)
Alessandra Scafuro (2)
Gil Segev (3)
Adi Shamir (2)
Abhi Shelat (2)
Elaine Shi (1)
Noah Stephens-Davidowitz (2)
Eran Tromer (1)
Jonathan Ullman (1)
Salil P. Vadhan (2)
Vinod Vaikuntanathan (7)
Marten van Dijk (1)
Mayank Varia (2)
Daniele Venturi (1)
Shabsi Walfish (1)
Brent Waters (5)
Hoeteck Wee (5)
Mor Weiss (4)
David J. Wu (1)
Shota Yamada (1)
Mark Zhandry (2)
Hong-Sheng Zhou (2)
Giorgos Zirdelis (1)