International Association for Cryptologic Research

CryptoDB

Papers from TCC 2018

Year
Venue
Title
2018
TCC
A Ciphertext-Size Lower Bound for Order-Preserving Encryption with Limited Leakage
We consider a security definition of Chenette, Lewi, Weis, and Wu for order-revealing encryption (ORE) and order-preserving encryption (OPE) (FSE 2016). Their definition says that the comparison of two ciphertexts should only leak the index of the most significant bit on which they differ. While their work could achieve order-revealing encryption with short ciphertexts that expand the plaintext by a factor $$\approx 1.58$$, it could only find order-preserving encryption with longer ciphertexts that expanded the plaintext by a security-parameter factor. We give evidence that this gap between ORE and OPE is inherent, by proving that any OPE meeting the information-theoretic version of their security definition (for instance, in the random oracle model) must have ciphertext length close to that of their constructions. We extend our result to identify an abstract security property of any OPE that will result in the same lower bound.
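As a point of reference for the leakage notion discussed here, below is a minimal Python sketch (ours, not from the paper) of what the Chenette et al. definition permits a comparison to reveal: the order of the two plaintexts and the index of the most significant bit on which they differ, and nothing more. The ≈1.58 expansion factor quoted above is log₂ 3, which is consistent with ciphertexts carrying roughly one ternary symbol per plaintext bit.

```python
# Illustrative only (not from the paper): the leakage that the Chenette et al.
# (FSE 2016) notion allows a ciphertext comparison to reveal -- the order of the
# plaintexts plus the index of the most significant bit on which they differ.

def msdb(x, y, m):
    """Index (0 = most significant) of the first bit on which two m-bit values differ."""
    if x == y:
        return None
    for i in range(m - 1, -1, -1):          # scan from the most significant bit down
        if (x >> i) & 1 != (y >> i) & 1:
            return m - 1 - i

def allowed_leakage(x, y, m):
    """Everything an ideal limited-leakage comparison may reveal about (x, y)."""
    order = (x > y) - (x < y)               # 1, 0, or -1
    return order, msdb(x, y, m)

if __name__ == "__main__":
    # 106 = 0b01101010 and 103 = 0b01100111 first differ at bit index 4 (from the top).
    print(allowed_leakage(106, 103, 8))     # -> (1, 4)
```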
2018
TCC
A Simple Construction of iO for Turing Machines
We give a simple construction of indistinguishability obfuscation for Turing machines where the time to obfuscate grows only with the description size of the machine and is otherwise independent of its running time and the space it uses. While this result is already known [Koppula, Lewko, and Waters, STOC 2015] from $$i\mathcal {O}$$ for circuits and injective pseudorandom generators, our construction and its analysis are conceptually much simpler. In particular, the main technical component in the proof of our construction is a simple combinatorial pebbling argument [Garg and Srinivasan, EUROCRYPT 2018]. Our construction makes use of indistinguishability obfuscation for circuits and somewhere statistically binding hash functions.
2018
TCC
Achieving Fair Treatment in Algorithmic Classification
Fairness in classification has become an increasingly relevant and controversial issue as computers replace humans in many of today’s classification tasks. In particular, a subject of much recent debate is that of finding, and subsequently achieving, suitable definitions of fairness in an algorithmic context. In this work, following the work of Hardt et al. (NIPS’16), we consider and formalize the task of sanitizing an unfair classifier $$\mathcal {C}$$ into a classifier $$\mathcal {C}'$$ satisfying an approximate notion of “equalized odds” or fair treatment. Our main result shows how to take any (possibly unfair) classifier $$\mathcal {C}$$ over a finite outcome space, and transform it—by just perturbing the output of $$\mathcal {C}$$—according to some distribution learned by just having black-box access to samples of labeled, and previously classified, data, to produce a classifier $$\mathcal {C}'$$ that satisfies fair treatment; we additionally show that our derived classifier is near-optimal in terms of accuracy. We also experimentally evaluate the performance of our method.
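As a concrete reading of the fair-treatment (approximate equalized odds) requirement above, the following Python sketch (ours, purely illustrative and not the paper's algorithm) estimates, from samples of labeled and previously classified data, the per-group rates Pr[prediction = 1 | group, true label] and reports the largest gap across groups for each true label; approximate fair treatment asks these gaps to be small.

```python
# Illustrative sketch (ours, not the paper's algorithm): empirically estimate an
# approximate "equalized odds" / fair-treatment gap from samples of
# (group, true_label, predicted_label), with labels in {0, 1}.
from collections import defaultdict

def fair_treatment_gaps(samples):
    """For each true label y, return the largest difference across groups of
    the empirical rate Pr[prediction = 1 | group, true label = y]."""
    counts = defaultdict(lambda: [0, 0])            # (group, y) -> [seen, predicted 1]
    for group, y, y_hat in samples:
        counts[(group, y)][0] += 1
        counts[(group, y)][1] += y_hat
    gaps = {}
    for y in (0, 1):
        rates = [ones / seen for (g, yy), (seen, ones) in counts.items() if yy == y]
        gaps[y] = max(rates) - min(rates) if rates else 0.0
    return gaps

if __name__ == "__main__":
    data = [("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
            ("B", 1, 1), ("B", 1, 1), ("B", 0, 1)]
    print(fair_treatment_gaps(data))   # {0: 1.0, 1: 0.5} for this toy data
```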
2018
TCC
Adaptively Secure Distributed PRFs from $\mathsf {LWE}$
In distributed pseudorandom functions (DPRFs), a PRF secret key SK is secret shared among N servers so that each server can locally compute a partial evaluation of the PRF on some input X. A combiner that collects t partial evaluations can then reconstruct the evaluation F(SK, X) of the PRF under the initial secret key. So far, all non-interactive constructions in the standard model are based on lattice assumptions. One caveat is that they are only known to be secure in the static corruption setting, where the adversary chooses the servers to corrupt at the very beginning of the game, before any evaluation query. In this work, we construct the first fully non-interactive adaptively secure DPRF in the standard model. Our construction is proved secure under the $$\mathsf {LWE}$$ assumption against adversaries that may adaptively decide which servers they want to corrupt. We also extend our construction in order to achieve robustness against malicious adversaries.
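To illustrate only the non-interactive syntax described above (a secret-shared key, locally computed partial evaluations, and threshold reconstruction), here is a toy Python sketch of ours. It is emphatically not the paper's LWE-based construction: the function SK·H(X) mod p used below is linear and therefore not a PRF, but it shows how Shamir shares of SK let any t servers' partial evaluations be recombined by Lagrange interpolation without interaction.

```python
# Toy sketch (ours) of the distributed-PRF workflow only; NOT the paper's LWE
# scheme, and SK * H(X) mod p is linear, hence not actually pseudorandom.
# SK is Shamir-shared among N servers; each server locally returns a partial
# evaluation; any t partials recombine to F(SK, X) via Lagrange interpolation at 0.
import hashlib, random

P = 2**127 - 1   # a Mersenne prime modulus (illustrative choice)

def H(x: bytes) -> int:
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % P

def share_key(sk: int, n: int, t: int):
    coeffs = [sk] + [random.randrange(P) for _ in range(t - 1)]
    def f(i):
        return sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
    return {i: f(i) for i in range(1, n + 1)}        # server i holds share sk_i

def partial_eval(sk_i: int, x: bytes) -> int:
    return sk_i * H(x) % P                            # computed locally, no interaction

def combine(partials: dict) -> int:
    total = 0
    for i, y_i in partials.items():                   # Lagrange coefficient of server i at 0
        lam = 1
        for j in partials:
            if j != i:
                lam = lam * (-j) * pow(i - j, -1, P) % P
        total = (total + lam * y_i) % P
    return total

if __name__ == "__main__":
    sk = random.randrange(P)
    shares = share_key(sk, n=5, t=3)
    x = b"some input X"
    partials = {i: partial_eval(shares[i], x) for i in (1, 3, 5)}
    assert combine(partials) == sk * H(x) % P         # any 3 of the 5 servers suffice
```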
2018
TCC
Best Possible Information-Theoretic MPC
We reconsider the security guarantee that can be achieved by general protocols for secure multiparty computation in the most basic of settings: information-theoretic security against a semi-honest adversary. Since the 1980s, we have had elegant solutions to this problem that offer full security, as long as the adversary controls a minority of the parties, but fail completely when that threshold is crossed. In this work, we revisit this problem, questioning the optimality of the standard notion of security. We put forward a new notion of information-theoretic security which is strictly stronger than the standard one, and which we argue to be “best possible.” This notion still requires full security against dishonest minority in the usual sense, and adds a meaningful notion of information-theoretic security even against dishonest majority. We present protocols for useful classes of functions that satisfy this new notion of security. Our protocols have the unique feature of combining the efficiency benefits of protocols for an honest majority and (most of) the security benefits of protocols for dishonest majority. We further extend some of the solutions to the malicious setting.
2018
TCC
Certifying Trapdoor Permutations, Revisited
The modeling of trapdoor permutations has evolved over the years. Indeed, finding an appropriate abstraction that bridges between the existing candidate constructions and the needs of applications has proved to be challenging. In particular, the notions of certifying permutations (Bellare and Yung, 96), enhanced and doubly enhanced trapdoor permutations (Goldreich, 04, 08, 11, Goldreich and Rothblum, 13) were added to bridge the gap between the modeling of trapdoor permutations and needs of applications. We identify an additional gap in the current abstraction of trapdoor permutations: Previous works implicitly assumed that it is easy to recognize elements in the domain, as well as uniformly sample from it, even for illegitimate function indices. We demonstrate this gap by using the (Bitansky-Paneth-Wichs, 16) doubly-enhanced trapdoor permutation family to instantiate the Feige-Lapidot-Shamir (FLS) paradigm for constructing non-interactive zero-knowledge (NIZK) protocols, and show that the resulting proof system is unsound. To close the gap, we propose a general notion of certifiably injective doubly enhanced trapdoor functions (DECITDFs), which provides a way of certifying that a given key defines an injective function over the domain defined by it, even when that domain is not efficiently recognizable and sampleable. We show that DECITDFs suffice for instantiating the FLS paradigm; more generally, we argue that certifiable injectivity is needed whenever the generation process of the function is not trusted. We then show two very different ways to construct DECITDFs: One is via the traditional method of RSA/Rabin with the Bellare-Yung certification mechanism, and the other using indistinguishability obfuscation and injective pseudorandom generators. In particular the latter is the first candidate injective trapdoor function, from assumptions other than factoring, that suffices for the FLS paradigm. Finally we observe that a similar gap appears also in other paths proposed in the literature for instantiating the FLS paradigm, specifically via verifiable pseudorandom generators and verifiable pseudorandom functions. Closing the gap there can be done in similar ways to the ones proposed here.
2018
TCC
Ciphertext Expansion in Limited-Leakage Order-Preserving Encryption: A Tight Computational Lower Bound
Order-preserving encryption emerged as a key ingredient underlying the security of practical database management systems. Boldyreva et al. (EUROCRYPT ’09) initiated the study of its security by introducing two natural notions of security. They proved that their first notion, a “best-possible” relaxation of semantic security allowing ciphertexts to reveal the ordering of their corresponding plaintexts, is not realizable. Later on Boldyreva et al. (CRYPTO ’11) proved that any scheme satisfying their second notion, indistinguishability from a random order-preserving function, leaks about half of the bits of a random plaintext. This unsettling state of affairs was recently changed by Chenette et al. (FSE ’16), who relaxed the above “best-possible” notion and constructed a scheme satisfying it based on any pseudorandom function. In addition to revealing the ordering of any two encrypted plaintexts, ciphertexts in their scheme reveal only the position of the most significant bit on which the plaintexts differ. A significant drawback of their scheme, however, is its substantial ciphertext expansion: Encrypting plaintexts of length m bits results in ciphertexts of length $$m \cdot \ell $$ bits, where $$\ell $$ determines the level of security (e.g., $$\ell = 80$$ in practice). In this work we prove a lower bound on the ciphertext expansion of any order-preserving encryption scheme satisfying the “limited-leakage” notion of Chenette et al. with respect to non-uniform polynomial-time adversaries, matching the ciphertext expansion of their scheme up to lower-order terms. This improves a recent result of Cash and Zhang (TCC ’18), who proved such a lower bound for schemes satisfying this notion with respect to computationally-unbounded adversaries (capturing, for example, schemes whose security can be proved in the random-oracle model without relying on cryptographic assumptions). Our lower bound applies, in particular, to schemes whose security is proved in the standard model.
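For a sense of scale (our arithmetic, combining only the figures quoted in this abstract and in the Cash-Zhang abstract above, and taking m = 64 as an example plaintext length):

```latex
% OPE ciphertext length m * ell versus the constant-factor ORE expansion (~1.58 = log_2 3).
\[
\underbrace{m \cdot \ell}_{\text{OPE of Chenette et al.}} = 64 \cdot 80 = 5120 \text{ bits}
\qquad\text{vs.}\qquad
\underbrace{m \cdot \log_2 3}_{\text{their ORE, factor}\ \approx 1.58} \approx 101 \text{ bits}.
\]
```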
2018
TCC
Classical Proofs for the Quantum Collapsing Property of Classical Hash Functions
Hash functions are of fundamental importance in theoretical and in practical cryptography, and with the threat of quantum computers possibly emerging in the future, it is an urgent objective to understand the security of hash functions in the light of potential future quantum attacks. To this end, we reconsider the collapsing property of hash functions, as introduced by Unruh, which replaces the notion of collision resistance when considering quantum attacks. Our contribution is a formalism and a framework that offers significantly simpler proofs for the collapsing property of hash functions. With our framework, we can prove the collapsing property for hash domain extension constructions entirely by means of decomposing the iteration function into suitable elementary composition operations. In particular, given our framework, one can argue purely classically about the quantum-security of hash functions; this is in contrast to previous proofs which are in terms of sophisticated quantum-information-theoretic and quantum-algorithmic reasoning.
2018
TCC
Continuous NMC Secure Against Permutations and Overwrites, with Applications to CCA Secure Commitments
Non-Malleable Codes (NMC) were introduced by Dziembowski, Pietrzak and Wichs in ICS 2010 as a relaxation of error correcting codes and error detecting codes. Faust, Mukherjee, Nielsen, and Venturi in TCC 2014 introduced an even stronger notion of non-malleable codes called continuous non-malleable codes, where security is achieved against continuous tampering of a single codeword without re-encoding. We construct an information-theoretically secure CNMC resilient to bit permutations and overwrites; this is the first continuous NMC constructed outside of the split-state model. In this work we also study relations between CNMCs and parallel CCA commitments. We show that the CNMC can be used to bootstrap a self-destruct parallel CCA bit commitment to a self-destruct parallel CCA string commitment, where self-destruct parallel CCA is a weak form of parallel CCA security. Then we can get rid of the self-destruct limitation, obtaining a parallel CCA commitment and requiring only one-way functions.
2018
TCC
Enhancements are Blackbox Non-trivial: Impossibility of Enhanced Trapdoor Permutations from Standard Trapdoor Permutations
Trapdoor permutations (TDP) are a fundamental primitive in cryptography. Several variants of this notion have emerged as a result of different applications. However, it is not clear whether these variants can be based on the standard notion of TDPs. We study the question of whether enhanced trapdoor permutations can be based on classical trapdoor permutations. The main motivation of our work is in the context of existing TDP-based constructions of oblivious transfer and non-interactive zero knowledge protocols, which require enhancements to the classical TDP notion. We prove that these enhancements are non-trivial, in the sense that there do not exist fully black-box constructions of enhanced TDPs from classical TDPs. On the technical side, we show that the enhanced TDP security of any construction in the random TDP oracle world can be broken via a polynomial number of queries to the TDP oracle as well as a weakening oracle, which provides inversion with respect to randomness. We also show that the standard one-wayness of the random TDP oracle stays intact in the presence of this weakening oracle.
2018
TCC
Exploring Crypto Dark Matter: New Simple PRF Candidates and Their Applications
Pseudorandom functions (PRFs) are one of the fundamental building blocks in cryptography. Traditionally, there have been two main approaches for PRF design: the “practitioner’s approach” of building concretely-efficient constructions based on known heuristics and prior experience, and the “theoretician’s approach” of proposing constructions and reducing their security to a previously-studied hardness assumption. While both approaches have their merits, the resulting PRF candidates vary greatly in terms of concrete efficiency and design complexity. In this work, we depart from these traditional approaches by exploring a new space of plausible PRF candidates. Our guiding principle is to maximize simplicity while optimizing complexity measures that are relevant to cryptographic applications. Our primary focus is on weak PRFs computable by very simple circuits—specifically, depth-2 $$\mathsf {ACC}^0$$ circuits. Concretely, our main weak PRF candidate is a “piecewise-linear” function that first applies a secret mod-2 linear mapping to the input, and then a public mod-3 linear mapping to the result. We also put forward a similar depth-3 strong PRF candidate. The advantage of our approach is twofold. On the theoretical side, the simplicity of our candidates enables us to draw many natural connections between their hardness and questions in complexity theory or learning theory (e.g., learnability of $$\mathsf {ACC}^0$$ and width-3 branching programs, interpolation and property testing for sparse polynomials, and new natural proof barriers for showing super-linear circuit lower bounds). On the applied side, the piecewise-linear structure of our candidates lends itself nicely to applications in secure multiparty computation (MPC). Using our PRF candidates, we construct protocols for distributed PRF evaluation that achieve better round complexity and/or communication complexity (often both) compared to protocols obtained by combining standard MPC protocols with PRFs like AES, LowMC, or Rasta (the latter two are specialized MPC-friendly PRFs). Finally, we introduce a new primitive we call an encoded-input PRF, which can be viewed as an interpolation between weak PRFs and standard (strong) PRFs. As we demonstrate, an encoded-input PRF can often be used as a drop-in replacement for a strong PRF, combining the efficiency benefits of weak PRFs and the security benefits of strong PRFs. We conclude by showing that our main weak PRF candidate can plausibly be boosted to an encoded-input PRF by leveraging standard error-correcting codes.
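The main weak PRF candidate described above is simple enough to write out. The sketch below is our illustration of that structure (a secret mod-2 linear map followed by a public mod-3 linear map); taking the public mod-3 map to be a plain sum of the output bits modulo 3 is an assumed instantiation for the purpose of the example, and the parameters are toy-sized.

```python
# Illustrative sketch (not from the paper): the "piecewise-linear" weak PRF
# candidate structure described in the abstract -- a secret mod-2 linear map
# applied to the input, followed by a public mod-3 linear map on the result
# (here, assumed to be a simple sum of the output bits modulo 3).
import random

def keygen(n: int, m: int):
    """Secret key: a random m-by-n binary matrix A (the mod-2 linear map)."""
    return [[random.randrange(2) for _ in range(n)] for _ in range(m)]

def weak_prf(A, x):
    """x: list of n bits. Output: an element of Z_3."""
    y = [sum(a_ij & x_j for a_ij, x_j in zip(row, x)) % 2 for row in A]  # A x mod 2
    return sum(y) % 3                                                    # public mod-3 map

if __name__ == "__main__":
    n = m = 16                      # toy parameters only
    A = keygen(n, m)
    x = [random.randrange(2) for _ in range(n)]
    print(weak_prf(A, x))           # a value in {0, 1, 2}
```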
2018
TCC
FE and iO for Turing Machines from Minimal Assumptions
We construct Indistinguishability Obfuscation ($$\mathsf {iO}$$) and Functional Encryption ($$\mathsf {FE}$$) schemes in the Turing machine model from the minimal assumption of compact $$\mathsf {FE}$$ for circuits ($$\mathsf {CktFE}$$). Our constructions overcome the barrier of sub-exponential loss incurred by all prior work. Our contributions are: (1) We construct $$\mathsf {iO}$$ in the Turing machine model from the same assumptions as required in the circuit model, namely, sub-exponentially secure $$\mathsf {FE}$$ for circuits. The previous best constructions [6, 41] require sub-exponentially secure $$\mathsf {iO}$$ for circuits, which in turn requires sub-exponentially secure $$\mathsf {FE}$$ for circuits [5, 15]. (2) We provide a new construction of single-input $$\mathsf {FE}$$ for Turing machines with unbounded length inputs and optimal parameters from polynomially secure, compact $$\mathsf {FE}$$ for circuits. The previously best known construction by Ananth and Sahai [7] relies on $$\mathsf {iO}$$ for circuits, or equivalently, sub-exponentially secure $$\mathsf {FE}$$ for circuits. (3) We provide a new construction of multi-input $$\mathsf {FE}$$ for Turing machines. Our construction supports a fixed number of encryptors (say k), who may each encrypt a string $$\mathbf {x}_i$$ of unbounded length. We rely on sub-exponentially secure $$\mathsf {FE}$$ for circuits, while the only previous construction [10] relies on a strong knowledge type assumption, namely, public coin differing inputs obfuscation. Our techniques are new and from first principles, and avoid usage of sophisticated $$\mathsf {iO}$$ specific machinery such as positional accumulators and splittable signatures that were used by all relevant prior work [6, 7, 41].
2018
TCC
Fine-Grained Secure Computation
This paper initiates a study of Fine Grained Secure Computation: i.e. the construction of secure computation primitives against “moderately complex” adversaries. We present definitions and constructions for compact Fully Homomorphic Encryption and Verifiable Computation secure against (non-uniform) $$\mathsf {NC}^1$$ adversaries. Our results do not require the existence of one-way functions and hold under a widely believed separation assumption, namely $$\mathsf {NC}^{1}\subsetneq \oplus \mathsf {L}/ {\mathsf {poly}}$$ . We also present two application scenarios for our model: (i) hardware chips that prove their own correctness, and (ii) protocols against rational adversaries potentially relevant to the Verifier’s Dilemma in smart-contracts transactions such as Ethereum.
2018
TCC
Game Theoretic Notions of Fairness in Multi-party Coin Toss
Coin toss has been extensively studied in the cryptography literature, and the well-accepted notion of fairness (henceforth called strong fairness) requires that a corrupt coalition cannot cause non-negligible bias. It is well-understood that two-party coin toss is impossible if one of the parties can prematurely abort; further, this impossibility generalizes to multiple parties with a corrupt majority (even if the adversary is computationally bounded and fail-stop only). Interestingly, the original proposal of (two-party) coin toss protocols by Blum in fact considered a weaker notion of fairness: imagine that the (randomized) transcript of the coin toss protocol defines a winner among the two parties. Now Blum’s notion requires that a corrupt party cannot bias the outcome in its favor (but self-sacrificing bias is allowed). Blum showed that this weak notion is indeed attainable for two parties assuming the existence of one-way functions. In this paper, we ask a very natural question which, surprisingly, has been overlooked by the cryptography literature: can we achieve Blum’s weak fairness notion in multi-party coin toss? What is particularly interesting is whether this relaxation allows us to circumvent the corrupt majority impossibility that pertains to strong fairness. Even more surprisingly, in answering this question, we realize that it is not even understood how to define weak fairness for multi-party coin toss. We propose several natural notions drawing inspirations from game theory, all of which equate to Blum’s notion for the special case of two parties. We show, however, that for multiple parties, these notions vary in strength and lead to different feasibility and infeasibility results.
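For reference, Blum's two-party protocol mentioned above fits in a few lines. The sketch below is our illustration: the SHA-256 based commitment is only a stand-in for a real cryptographic commitment (constructible from one-way functions), and the weak-fairness intuition is that a corrupt party can react to what it sees only by aborting, which by convention can be resolved against the aborting party.

```python
# Toy sketch (ours, not from the paper) of Blum's two-party coin toss.
# The SHA-256 "commitment" is only a stand-in for a real cryptographic
# commitment scheme (constructible from one-way functions).
import os, hashlib

def commit(bit: int):
    r = os.urandom(16)
    return hashlib.sha256(bytes([bit]) + r).hexdigest(), (bit, r)

def verify(com: str, bit: int, r: bytes) -> bool:
    return hashlib.sha256(bytes([bit]) + r).hexdigest() == com

# Party A commits to a random bit.
a = os.urandom(1)[0] & 1
com_a, opening = commit(a)
# Party B, having seen only the commitment, replies with its own random bit.
b = os.urandom(1)[0] & 1
# Party A opens the commitment; if A aborts instead, the convention can be
# that B wins, so aborting only hurts A (self-sacrificing bias).
bit_a, r_a = opening
assert verify(com_a, bit_a, r_a)
print("coin =", bit_a ^ b)
```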
2018
TCC
Impossibility of Order-Revealing Encryption in Idealized Models
An Order-Revealing Encryption (ORE) scheme gives a public procedure by which two ciphertexts can be compared to reveal the order of their underlying plaintexts. The ideal security notion for ORE is that only the order is revealed—anything else, such as the distance between plaintexts, is hidden. The only known constructions of ORE achieving such ideal security are based on cryptographic multilinear maps and are currently too impractical for real-world applications. In this work, we give evidence that building ORE from weaker tools may be hard. Indeed, we show black-box separations between ORE and most symmetric-key primitives, as well as public key encryption and anything else implied by generic groups in a black-box way. Thus, any construction of ORE must either (1) achieve weaker notions of security, (2) be based on more complicated cryptographic tools, or (3) require non-black-box techniques. This suggests that any ORE achieving ideal security will likely be somewhat inefficient. Central to our proof is a proof of impossibility for something we call information theoretic ORE, which has connections to tournament graphs and a theorem by Erdős. This impossibility proof will be useful for proving other black-box separations for ORE.
2018
TCC
Impossibility of Simulation Secure Functional Encryption Even with Random Oracles
In this work we study the feasibility of achieving simulation security in functional encryption (FE) in the random oracle model. Our main result is negative in that we give a functionality for which it is impossible to achieve simulation security even with the aid of random oracles. We begin by giving a formal definition of simulation security that explicitly incorporates the random oracles. Next, we show a particular functionality for which it is impossible to achieve simulation security. Here messages are interpreted as seeds to a (weak) pseudorandom function family F and private keys are ascribed to points in the domain of the function. On a message s and private key x one can learn F(s, x). We show that there exists an attacker that makes a polynomial number of private key queries followed by a single ciphertext query for which there exists no simulator. Our functionality and attacker access pattern closely matches the standard model impossibility result of Agrawal, Gorbunov, Vaikuntanathan and Wee (CRYPTO 2013). The crux of their argument is that no simulator can succinctly program in the outputs of an unbounded number of evaluations of a pseudorandom function family into a fixed size ciphertext. However, their argument does not apply in the random oracle setting since the oracle acts as an additional conduit of information which the simulator can program. We overcome this barrier by proposing an attacker who decrypts the challenge ciphertext with the secret keys issued earlier without using the random oracle, even though the decryption algorithm may require it. This involves collecting most of the useful random oracle queries in advance, without giving the simulator too many opportunities to program. On the flip side, we demonstrate the utility of the random oracle in simulation security. Given only public key encryption and low-depth PRGs we show how to build an FE system that is simulation secure for any poly-time attacker that makes an unbounded number of message queries, but an a-priori bounded number of key queries. This bests what is possible in the standard model where it is only feasible to achieve security for an attacker that is bounded both in the number of key and message queries it makes. We achieve this by creating a system that leverages the random oracle to get one-key security and then adapt previously known techniques to boost the system to resist up to q queries. Finally, we ask whether it is possible to achieve simulation security for an unbounded number of messages and keys, but where all key queries are made after the message queries. We show this too is impossible to achieve using a different twist on our first impossibility result.
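The functionality used in the impossibility result is easy to state concretely. The sketch below is ours; HMAC-SHA256 merely stands in for an abstract (weak) PRF family, and the snippet just spells out what a key for point x lets one learn about an encrypted seed s, which is the object the simulator would have to program consistently for unboundedly many keys.

```python
# Ours, illustration only: the functionality underlying the impossibility result.
# A message is a seed s for a PRF family F; a secret key is ascribed to a point x;
# decryption reveals F(s, x). HMAC-SHA256 stands in for an abstract (weak) PRF.
import hmac, hashlib, os

def F(seed: bytes, point: bytes) -> bytes:
    return hmac.new(seed, point, hashlib.sha256).digest()

if __name__ == "__main__":
    s = os.urandom(32)                          # the encrypted "message" (a PRF seed)
    points = [os.urandom(16) for _ in range(5)] # private-key queries x_1, ..., x_q
    learned = [F(s, x) for x in points]         # exactly what the key holders may learn
    # The crux quoted above: for large q, a simulator cannot squeeze q independent-looking
    # pseudorandom values into a fixed-size ciphertext produced before the keys are seen.
    print(len(learned), "evaluations F(s, x) available to the key holders")
```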
2018
TCC
Information-Theoretic Broadcast with Dishonest Majority for Long Messages
Byzantine broadcast is a fundamental primitive for secure computation. In a setting with n parties in the presence of an adversary controlling at most t parties, while a lot of progress in optimizing communication complexity has been made for $$t < n/2$$, little progress has been made for the general case $$t<n$$, especially for information-theoretic security. In particular, all information-theoretically secure broadcast protocols for $$\ell $$-bit messages and $$t<n$$ with optimal round complexity $${\mathcal {O}}(n)$$ have, so far, required a communication complexity of $${\mathcal {O}}(\ell n^2)$$. A broadcast extension protocol allows a long message to be broadcast more efficiently using a small number of single-bit broadcasts. Through broadcast extension, so far, the best achievable round complexity for the $$t<n$$ setting with the optimal communication complexity of $${\mathcal {O}}(\ell n)$$ is $${\mathcal {O}}(n^4)$$ rounds. In this work, we construct a new broadcast extension protocol for $$t<n$$ with information-theoretic security. Our protocol improves the round complexity to $${\mathcal {O}}(n^3)$$ while maintaining the optimal communication complexity for long messages. Our result shortens the gap between the information-theoretic setting and the computational setting, and between the optimal communication protocol and the optimal round protocol in the information-theoretic setting for $$t<n$$.
2018
TCC
Information-Theoretic Secret-Key Agreement: The Asymptotically Tight Relation Between the Secret-Key Rate and the Channel Quality Ratio
Information-theoretic secret-key agreement between two parties Alice and Bob is a well-studied problem that is provably impossible in a plain model with public (authenticated) communication, but is known to be possible in a model where the parties also have access to some correlated randomness. One particular type of such correlated randomness is the so-called satellite setting, where uniform random bits (e.g., sent by a satellite) are received by the parties and the adversary Eve over inherently noisy channels. The antenna size determines the error probability, and the antenna is the adversary’s limiting resource, much as computing power is the limiting resource in traditional complexity-based security. The natural assumption about the adversary is that her antenna is at most Q times larger than both Alice’s and Bob’s antenna, where, to be realistic, Q can be very large. The goal of this paper is to characterize the secret-key rate per transmitted bit in terms of Q. Traditional results in this so-called satellite setting are phrased in terms of the error probabilities $$\epsilon _A$$, $$\epsilon _B$$, and $$\epsilon _E$$ of the binary symmetric channels through which the parties receive the bits and, quite surprisingly, the secret-key rate has been shown to be strictly positive unless Eve’s channel is perfect ($$\epsilon _E=0$$) or either Alice’s or Bob’s channel output is independent of the transmitted bit (i.e., $$\epsilon _A=0.5$$ or $$\epsilon _B=0.5$$). However, the best proven lower bound, if interpreted in terms of the channel quality ratio Q, is only exponentially small in Q. The main result of this paper is that the secret-key rate decreases asymptotically only like $$1/Q^2$$ if the per-bit signal energy, affecting the quality of all channels, is treated as a system parameter that can be optimized. Moreover, this bound is tight if Alice and Bob have the same antenna sizes. Motivated by considering a fixed sending signal power, in which case the per-bit energy is inversely proportional to the bit-rate, we also propose a definition of the secret-key rate per second (rather than per transmitted bit) and prove that it decreases asymptotically only like 1/Q.
2018
TCC
Injective Trapdoor Functions via Derandomization: How Strong is Rudich’s Black-Box Barrier?
We present a cryptographic primitive $$\mathcal {P}$$ satisfying the following properties. First, Rudich’s seminal impossibility result (PhD thesis ’88) shows that $$\mathcal {P}$$ cannot be used in a black-box manner to construct an injective one-way function. Second, $$\mathcal {P}$$ can be used in a non-black-box manner to construct an injective one-way function assuming the existence of a hitting-set generator that fools deterministic circuits (such a generator is known to exist based on the worst-case assumption that $$\text{ E } = \text{ DTIME }(2^{O(n)})$$ has a function of deterministic circuit complexity $$2^{\Omega (n)}$$). Third, augmenting $$\mathcal {P}$$ with a trapdoor algorithm enables a non-black-box construction of an injective trapdoor function (once again, assuming the existence of a hitting-set generator that fools deterministic circuits), while Rudich’s impossibility result still holds. The primitive $$\mathcal {P}$$ and its augmented variant can be constructed based on any injective one-way function and on any injective trapdoor function, respectively, and they are thus unconditionally essential for the existence of such functions. Moreover, $$\mathcal {P}$$ can also be constructed based on various known primitives that are secure against related-key attacks, thus enabling us to base the strong structural guarantees of injective one-way functions on the strong security guarantees of such primitives. Our application of derandomization techniques is inspired mainly by the work of Barak, Ong and Vadhan (CRYPTO ’03), which on one hand relies on any one-way function, but on the other hand only results in a non-interactive perfectly-binding commitment scheme (offering significantly weaker structural guarantees compared to injective one-way functions), and does not seem to enable an extension to public-key primitives. The key observation underlying our approach is that Rudich’s impossibility result applies not only to one-way functions as the underlying primitive, but in fact to a variety of “unstructured” primitives. We put forward a condition for identifying such primitives, and then subtly tailor the properties of our primitives such that they are both sufficiently unstructured in order to satisfy this condition, and sufficiently structured in order to yield injective one-way and trapdoor functions. This circumvents the basic approach underlying Rudich’s long-standing evidence for the difficulty of constructing injective one-way functions (and, in particular, injective trapdoor functions) based on seemingly weaker or unstructured assumptions.
2018
TCC
Is There an Oblivious RAM Lower Bound for Online Reads?
Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), can be used to read and write to memory in a way that hides which locations are being accessed. The best known ORAM schemes have an $$O(\log n)$$ overhead per access, where $$n$$ is the data size. The work of Goldreich and Ostrovsky gave a lower bound showing that this is optimal for ORAM schemes that operate in a “balls and bins” model, where memory blocks can only be shuffled between different locations but not manipulated otherwise. The lower bound even extends to weaker settings such as offline ORAM, where all of the accesses to be performed need to be specified ahead of time, and read-only ORAM, which only allows reads but not writes. But can we get lower bounds for general ORAM, beyond “balls and bins”? The work of Boyle and Naor (ITCS ’16) shows that this is unlikely in the offline setting. In particular, they construct an offline ORAM with $$o(\log n)$$ overhead assuming the existence of small sorting circuits. Although we do not have instantiations of the latter, ruling them out would require proving new circuit lower bounds. On the other hand, the recent work of Larsen and Nielsen (CRYPTO ’18) shows that there indeed is an $$\varOmega (\log n)$$ lower bound for general online ORAM. This still leaves the question open for online read-only ORAM or for read/write ORAM where we want very small overhead for the read operations. In this work, we show that a lower bound in these settings is also unlikely. In particular, our main result is a construction of online ORAM where reads (but not writes) have an $$o(\log n)$$ overhead, assuming the existence of small sorting circuits as well as very good locally decodable codes (LDCs). Although we do not have instantiations of either of these with the required parameters, ruling them out is beyond current lower bounds.
2018
TCC
No-signaling Linear PCPs
In this paper, we give a no-signaling linear probabilistically checkable proof (PCP) system for polynomial-time deterministic computation, i.e., a PCP system for $${\mathcal {P}}$$ such that (1) the PCP oracle is a linear function and (2) the soundness holds against any (computational) no-signaling cheating prover, who is allowed to answer each query according to a distribution that depends on the entire query set in a certain way. To the best of our knowledge, our construction is the first PCP system that satisfies these two properties simultaneously. As an application of our PCP system, we obtain a 2-message scheme for delegating computation by using a known transformation. Compared with existing 2-message delegation schemes based on standard cryptographic assumptions, our scheme requires preprocessing but has a simpler structure and makes use of different (possibly cheaper) standard cryptographic primitives, namely additive/multiplicative homomorphic encryption schemes.
2018
TCC
Oblivious Transfer in Incomplete Networks
Secure message transmission and Byzantine agreement have been studied extensively in incomplete networks. However, information theoretically secure multiparty computation (MPC) in incomplete networks is less well understood. In this paper, we characterize the conditions under which a pair of parties can compute oblivious transfer (OT) information theoretically securely against a general adversary structure in an incomplete network of reliable, private channels. We provide characterizations for both semi-honest and malicious models. A consequence of our results is a complete characterization of networks in which a given subset of parties can compute any functionality securely with respect to an adversary structure in the semi-honest case and a partial characterization in the malicious case.
2018
TCC
On Basing Search SIVP on NP-Hardness
Best Student Paper
The possibility of basing cryptography on the minimal assumption $$\mathbf{NP }\nsubseteq \mathbf{BPP }$$ is at the very heart of complexity-theoretic cryptography. The closest we have gotten so far is lattice-based cryptography, whose average-case security is based on the worst-case hardness of approximate shortest vector problems on integer lattices. The state of the art is the construction of a one-way function (and collision-resistant hash function) based on the hardness of the $$\tilde{O}(n)$$-approximate shortest independent vector problem $${\textsf {SIVP}}_{\tilde{O}(n)}$$. Although $${\textsf {SIVP}}$$ is NP-hard in its exact version, Guruswami et al. (CCC 2004) showed that $${\textsf {gapSIVP}}_{\sqrt{n/\log n}}$$ is in $$\mathbf{NP } \cap \mathbf{coAM }$$ and thus unlikely to be $$\mathbf{NP }$$-hard. Indeed, any language that can be reduced to $${\textsf {gapSIVP}}_{\tilde{O}(\sqrt{n})}$$ (under general probabilistic polynomial-time adaptive reductions) is in $$\mathbf{AM } \cap \mathbf{coAM }$$ by the results of Peikert and Vaikuntanathan (CRYPTO 2008) and Mahmoody and Xiao (CCC 2010). However, none of these results apply to reductions to search problems, still leaving open a ray of hope: can $$\mathbf{NP }$$ be reduced to solving search SIVP with approximation factor $$\tilde{O}(n)$$? We eliminate this possibility by showing that any language that can be reduced to solving search $${\textsf {SIVP}}$$ with any approximation factor $$\lambda (n) = \omega (n\log n)$$ lies in $$\mathbf{AM } \cap \mathbf{coAM }$$.
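For readers who want the problem statement, the standard formulation of approximate SIVP (our addition, following the usual definition rather than anything specific to this paper) is:

```latex
% Standard statement of the approximate Shortest Independent Vectors Problem:
% given a basis B of a full-rank n-dimensional lattice L(B), output n linearly
% independent lattice vectors that are all short relative to the n-th successive
% minimum lambda_n of the lattice.
\[
\mathsf{SIVP}_{\gamma}:\quad
\text{given } B,\ \text{find linearly independent } v_1,\dots,v_n \in \mathcal{L}(B)
\ \text{with}\ \max_i \lVert v_i \rVert \le \gamma(n)\cdot \lambda_n\bigl(\mathcal{L}(B)\bigr).
\]
```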
2018
TCC
On the Complexity of Fair Coin Flipping
A two-party coin-flipping protocol is $$\varepsilon $$-fair if no efficient adversary can bias the output of the honest party (who always outputs a bit, even if the other party aborts) by more than $$\varepsilon $$. Cleve [STOC ’86] showed that r-round o(1/r)-fair coin-flipping protocols do not exist. Awerbuch et al. [Manuscript ’85] constructed a $$\varTheta (1/\sqrt{r})$$-fair coin-flipping protocol, assuming the existence of one-way functions. Moran et al. [Journal of Cryptology ’16] constructed an r-round coin-flipping protocol that is $$\varTheta (1/r)$$-fair (thus matching the aforementioned lower bound of Cleve [STOC ’86]), assuming the existence of oblivious transfer. The above gives rise to the intriguing question of whether oblivious transfer, or more generally “public-key primitives”, is required for an $$o(1/\sqrt{r})$$-fair coin flipping. This question was partially answered by Dachman-Soled et al. [TCC ’11] and Dachman-Soled et al. [TCC ’14], who showed that restricted types of fully black-box reductions cannot establish $$o(1/\sqrt{r})$$-fair coin-flipping protocols from one-way functions. In particular, for constant-round coin-flipping protocols, [10] yields that black-box techniques from one-way functions can only guarantee fairness of order $$1/\sqrt{r}$$. We make progress towards answering the above question by showing that, for any constant $$r$$, the existence of a $$1/(c\cdot \sqrt{r})$$-fair, r-round coin-flipping protocol implies the existence of an infinitely-often key-agreement protocol, where c denotes some universal constant (independent of r). Our reduction is non-black-box and makes a novel use of the recent dichotomy for two-party protocols of Haitner et al. [FOCS ’18] to facilitate a two-party variant of the attack of Beimel et al. [FOCS ’18] on multi-party coin-flipping protocols.
2018
TCC
On the Power of Amortization in Secret Sharing: d-Uniform Secret Sharing and CDS with Constant Information Rate
Consider the following secret-sharing problem. Your goal is to distribute a long file s between n servers such that $$(d-1)$$-subsets cannot recover the file, $$(d+1)$$-subsets can recover the file, and d-subsets should be able to recover s if and only if they appear in some predefined list L. How small can the information ratio (i.e., the number of bits stored on a server per each bit of the secret) be? We advocate the study of such d-uniform access structures as a useful scaled-down version of general access structures. Our main result shows that, for constant d, any d-uniform access structure admits a secret sharing scheme with a constant asymptotic information ratio of $$c_d$$ that does not grow with the number of servers n. This result is based on a new construction of d-party Conditional Disclosure of Secrets (CDS) for arbitrary predicates over n-size domain in which each party communicates at most four bits per secret bit. In both settings, previous results achieved a non-constant information ratio that grows asymptotically with n, even for the simpler (and widely studied) special case of $$d=2$$. Moreover, our multiparty CDS construction yields the first example of an access structure whose amortized information ratio is constant, whereas its best-known non-amortized information ratio is sub-exponential, thus providing a unique evidence for the potential power of amortization in the context of secret sharing. Our main result applies to exponentially long secrets, and so it should be mainly viewed as a barrier against amortizable lower-bound techniques. We also show that in some natural simple cases (e.g., low-degree predicates), amortization kicks in even for quasi-polynomially long secrets. Finally, we prove some limited lower-bounds, point out some limitations of existing lower-bound techniques, and describe some applications to the setting of private simultaneous messages.
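A tiny example of ours makes the d-uniform access structure definition concrete: with n = 4 servers and d = 2, singletons never qualify, triples always qualify, and a pair qualifies exactly when it is on the list L.

```python
# A toy d-uniform access structure (our example): n = 4 servers, d = 2,
# and the list L of qualified d-subsets is {{1,2}, {3,4}}. Subsets of size
# d-1 never recover the secret, subsets of size d+1 always do, and d-subsets
# recover it exactly when they appear in L.
L = [{1, 2}, {3, 4}]

def qualified(subset):
    subset = set(subset)
    if len(subset) <= 1:        # (d-1)-subsets and smaller
        return False
    if len(subset) >= 3:        # (d+1)-subsets and larger
        return True
    return subset in L          # d-subsets: only those on the list

if __name__ == "__main__":
    print([s for s in ({1, 2}, {1, 3}, {2, 4}, {3, 4}) if qualified(s)])  # [{1, 2}, {3, 4}]
```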
2018
TCC
On the Security Loss of Unique Signatures
We consider the question of whether the security of unique digital signature schemes can be based on game-based cryptographic assumptions using linear-preserving black-box security reductions—that is, black-box reductions for which the security loss (i.e., the ratio between “work” of the adversary and the “work” of the reduction) is some a priori bounded polynomial. A seminal result by Coron (Eurocrypt’02) shows limitations of such reductions; however, his impossibility result and its subsequent extensions all suffer from two notable restrictions: (1) they only rule out so-called “simple” reductions, where the reduction is restricted to only sequentially invoke “straight-line” instances of the adversary; and (2) they only rule out reductions to non-interactive (two-round) assumptions. In this work, we present the first full impossibility result: our main result shows that the existence of any linear-preserving black-box reduction for basing the security of unique signatures on some bounded-round assumption implies that the assumption can be broken in polynomial time.
2018
TCC
On the Structure of Unconditional UC Hybrid Protocols
We study the problem of secure two-party computation in the presence of a trusted setup. If there is an unconditionally UC-secure protocol for f that makes use of calls to an ideal g, then we say that f reduces to g (and write $$f \sqsubseteq g$$). Some g are complete in the sense that all functions reduce to g. However, almost nothing is known about the power of an incomplete g in this setting. We shed light on this gap by showing a characterization of $$f \sqsubseteq g$$ for incomplete g. Very roughly speaking, we show that f reduces to g if and only if it does so by the simplest possible protocol: one that makes a single call to ideal g and uses no further communication. Furthermore, such simple protocols can be characterized by a natural combinatorial condition on f and g. Looking more closely, our characterization applies only to a very wide class of f, and only for protocols that are deterministic or logarithmic-round. However, we give concrete examples showing that both of these limitations are inherent to the characterization itself. Functions not covered by our characterization exhibit qualitatively different properties. Likewise, randomized, superlogarithmic-round protocols are qualitatively more powerful than deterministic or logarithmic-round ones.
2018
TCC
One-Message Zero Knowledge and Non-malleable Commitments
We introduce a new notion of one-message zero-knowledge (1ZK) arguments that satisfy a weak soundness guarantee—the number of false statements that a polynomial-time non-uniform adversary can convince the verifier to accept is not much larger than the size of its non-uniform advice. The zero-knowledge guarantee is given by a simulator that runs in (mildly) super-polynomial time. We construct such 1ZK arguments based on the notion of multi-collision-resistant keyless hash functions, recently introduced by Bitansky, Kalai, and Paneth (STOC 2018). Relying on the constructed 1ZK arguments, subexponentially-secure time-lock puzzles, and other standard assumptions, we construct one-message fully-concurrent non-malleable commitments. This is the first construction that is based on assumptions that do not already incorporate non-malleability, as well as the first based on (subexponentially) falsifiable assumptions.
2018
TCC
Perfect Secure Computation in Two Rounds
We show that any multi-party functionality can be evaluated using a two-round protocol with perfect correctness and perfect semi-honest security, provided that the majority of parties are honest. This settles the round complexity of information-theoretic semi-honest MPC, resolving a longstanding open question (cf. Ishai and Kushilevitz, FOCS 2000). The protocol is efficient for $${\mathrm {NC}}^1$$ functionalities. Furthermore, given black-box access to a one-way function, the protocol can be made efficient for any polynomial functionality, at the cost of only guaranteeing computational security. Technically, we extend and relax the notion of randomized encoding to specifically address multi-party functionalities. The property of a multi-party randomized encoding (MPRE) is that if the functionality g is an encoding of the functionality f, then for any (permitted) coalition of players, their respective outputs and inputs in g allow them to simulate their respective inputs and outputs in f, without learning anything else, including the other outputs of f.
2018
TCC
Perfectly Secure Oblivious Parallel RAM
We show that PRAMs can be obliviously simulated with perfect security, incurring only $$O(\log N \log \log N)$$ blowup in parallel runtime, $$O(\log ^3 N)$$ blowup in total work, and O(1) blowup in space relative to the original PRAM. Our results advance the theoretical understanding of Oblivious (Parallel) RAM in several respects. First, prior to our work, no perfectly secure Oblivious Parallel RAM (OPRAM) construction was known; and we are the first in this respect. Second, even for the sequential special case of our algorithm (i.e., perfectly secure ORAM), we not only achieve logarithmic improvement in terms of space consumption relative to the state-of-the-art, but also significantly simplify perfectly secure ORAM constructions. Third, our perfectly secure OPRAM scheme matches the parallel runtime of earlier statistically secure schemes with negligible failure probability. Since we remove the dependence (in performance) on the security parameter, our perfectly secure OPRAM scheme in fact asymptotically outperforms known statistically secure ones if (sub-)exponentially small failure probability is desired. Our techniques for achieving small parallel runtime are novel and we employ special expander graphs to derandomize earlier statistically secure OPRAM techniques—this is the first time such techniques are used in the constructions of ORAMs/OPRAMs.
2018
TCC
Provable Time-Memory Trade-Offs: Symmetric Cryptography Against Memory-Bounded Adversaries
We initiate the study of symmetric encryption in a regime where the memory of the adversary is bounded. For a block cipher with n-bit blocks, we present modes of operation for encryption and authentication that guarantee security beyond $$2^n$$ encrypted/authenticated messages, as long as (1) the adversary’s memory is restricted to be less than $$2^n$$ bits, and (2) the key of the block cipher is long enough to mitigate memory-less key-search attacks. This is the first proposal of a setting which allows to bypass the $$2^n$$ barrier under a reasonable assumption on the adversarial resources. Motivated by the above, we also discuss the problem of stretching the key of a block cipher in the setting where the memory of the adversary is bounded. We show a tight equivalence between the security of double encryption in the ideal-cipher model and the hardness of a special case of the element distinctness problem, which we call the list-disjointness problem. Our result in particular implies a conditional lower bound on time-memory trade-offs to break PRP security of double encryption, assuming optimality of the worst-case complexity of existing algorithms for list disjointness.
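To connect the double-encryption result above to something concrete, the sketch below (ours, using a toy 16-bit Feistel cipher as a stand-in for an ideal cipher) shows double encryption and the classic meet-in-the-middle key recovery. The attack amounts to finding a common element between two lists of intermediate values, which is the flavor of task the paper's list-disjointness problem captures, and storing one of those lists is exactly the kind of memory a bounded adversary is denied.

```python
# Our toy illustration (not from the paper): double encryption with a 16-bit
# Feistel "block cipher" standing in for an ideal cipher, plus the classic
# meet-in-the-middle key recovery, phrased as finding a common element between
# two lists of intermediate values (storing one list is the memory cost).
import hashlib

def _f(key: int, r: int, half: int) -> int:
    return hashlib.sha256(f"{key}|{r}|{half}".encode()).digest()[0]

def E(key: int, block: int, rounds: int = 4) -> int:
    L, R = block >> 8, block & 0xFF
    for r in range(rounds):
        L, R = R, L ^ _f(key, r, R)
    return (L << 8) | R

def D(key: int, block: int, rounds: int = 4) -> int:
    L, R = block >> 8, block & 0xFF
    for r in reversed(range(rounds)):
        L, R = R ^ _f(key, r, L), L
    return (L << 8) | R

def double_encrypt(k1: int, k2: int, m: int) -> int:
    return E(k2, E(k1, m))

if __name__ == "__main__":
    KEYS = range(2**12)                 # tiny toy key space so the demo finishes instantly
    k1, k2, m = 1234, 3210, 0xBEEF
    c = double_encrypt(k1, k2, m)
    forward = {}                        # list 1: all E(g, m), indexed by intermediate value
    for g in KEYS:
        forward.setdefault(E(g, m), []).append(g)
    hits = [(g1, g2) for g2 in KEYS for g1 in forward.get(D(g2, c), [])]
    assert (k1, k2) in hits             # a collision between the two lists reveals the key pair
```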
2018
TCC
Registration-Based Encryption: Removing Private-Key Generator from IBE
In this work, we introduce the notion of registration-based encryption (RBE for short) with the goal of removing the trust parties need to place in the private-key generator in an IBE scheme. In an RBE scheme, users sample their own public and secret keys. There will also be a “key curator” whose job is only to aggregate the public keys of all the registered users and update the “short” public parameter whenever a new user joins the system. Encryption can still be performed to a particular recipient using the recipient’s identity and any public parameters released subsequent to the recipient’s registration. Decryption requires some auxiliary information connecting users’ public (and secret) keys to the public parameters. Because of this, as the public parameters get updated, a decryptor may need to obtain “a few” additional pieces of auxiliary information for decryption. More formally, if n is the total number of identities and $$\mathrm {\kappa }$$ is the security parameter, we require the following. Efficiency requirements: (1) A decryptor only needs to obtain updated auxiliary information for decryption at most $$O(\log n)$$ times in its lifetime, (2) each of these updates is computed by the key curator in time $${\text {poly}}(\mathrm {\kappa },\log n)$$, and (3) the key curator updates the public parameter upon the registration of a new party in time $${\text {poly}}(\mathrm {\kappa },\log n)$$. Properties (2) and (3) require the key curator to have random access to its data. Compactness requirements: (1) Public parameters are always at most $${\text {poly}}(\mathrm {\kappa },\log n)$$ bits, and (2) the total size of updates a user ever needs for decryption is also at most $${\text {poly}}(\mathrm {\kappa },\log n)$$ bits. We present feasibility results for constructions of RBE based on indistinguishability obfuscation. We further provide constructions of weakly efficient RBE, in which the registration step is done in $${\text {poly}}(\mathrm {\kappa },n)$$, based on CDH, Factoring or LWE assumptions. Note that registration is done only once per identity, and the more frequent operation of generating updates for a user, which can happen many times, still runs in time $${\text {poly}}(\mathrm {\kappa },\log n)$$. We leave open the problem of obtaining standard RBE (with $${\text {poly}}(\mathrm {\kappa },\log n)$$ registration time) from standard assumptions.
2018
TCC
Return of GGH15: Provable Security Against Zeroizing Attacks
The GGH15 multilinear maps have served as the foundation for a number of cutting-edge cryptographic proposals. Unfortunately, many schemes built on GGH15 have been explicitly broken by so-called “zeroizing attacks,” which exploit leakage from honest zero-test queries. The precise settings in which zeroizing attacks are possible have remained unclear. Most notably, none of the current indistinguishability obfuscation (iO) candidates from GGH15 have any formal security guarantees against zeroizing attacks. In this work, we demonstrate that all known zeroizing attacks on GGH15 implicitly construct algebraic relations between the results of zero-testing and the encoded plaintext elements. We then propose a “GGH15 zeroizing model” as a new general framework which greatly generalizes known attacks. Our second contribution is to describe a new GGH15 variant, which we formally analyze in our GGH15 zeroizing model. We then construct a new iO candidate using our multilinear map, which we prove secure in the GGH15 zeroizing model. This implies resistance to all known zeroizing strategies. The proof relies on the Branching Program Un-Annihilatability (BPUA) Assumption of Garg et al. [TCC 16-B] (which is implied by PRFs in $$\mathsf {NC}^1$$ secure against $$\mathsf {P}/\mathsf {poly}$$) and the complexity-theoretic p-Bounded Speedup Hypothesis of Miles et al. [ePrint 14] (a strengthening of the Exponential Time Hypothesis).
2018
TCC
Round Optimal Black-Box “Commit-and-Prove”
Motivated by theoretical and practical considerations, an important line of research is to design secure computation protocols that only make black-box use of cryptography. An important component in nearly all the black-box secure computation constructions is a black-box commit-and-prove protocol. A commit-and-prove protocol allows a prover to commit to a value and prove a statement about this value while guaranteeing that the committed value remains hidden. A black-box commit-and-prove protocol implements this functionality while only making black-box use of cryptography. In this paper, we build several tools that enable constructions of round-optimal, black-box commit-and-prove protocols. In particular, assuming injective one-way functions, we design the first round-optimal, black-box commit-and-prove arguments of knowledge satisfying strong privacy against malicious verifiers, namely: zero-knowledge in four rounds, and witness indistinguishability in three rounds. Prior to our work, the best known black-box protocols achieving commit-and-prove required more rounds. We additionally ensure that our protocols can be used, if needed, in the delayed-input setting, where the statement to be proven is decided only towards the end of the interaction. We also observe simple applications of our protocols towards achieving black-box four-round constructions of extractable and equivocal commitments. We believe that our protocols will provide a useful tool enabling several new constructions and easy round-efficient conversions from non-black-box to black-box protocols in the future.
2018
TCC
Round-Optimal Fully Black-Box Zero-Knowledge Arguments from One-Way Permutations
In this paper, we revisit the round complexity of designing zero-knowledge (ZK) arguments via a black-box construction from minimal assumptions. Our main result implements a 4-round ZK argument for any language in $$\textsf {NP}$$, based on injective one-way functions, that makes black-box use of the underlying function. As a corollary, we also obtain the first 4-round perfect zero-knowledge argument for $$\textsf {NP}$$ based on claw-free permutations via a black-box construction, and a 4-round input-delayed commit-and-prove zero-knowledge argument based on injective one-way functions.
2018
TCC
Secure Certification of Mixed Quantum States with Application to Two-Party Randomness Generation
We investigate sampling procedures that certify that an arbitrary quantum state on n subsystems is close to an ideal mixed state $$\varphi ^{\otimes n}$$ for a given reference state $$\varphi $$, up to errors on a few positions. This task makes no sense classically: it would correspond to certifying that a given bitstring was generated according to some desired probability distribution. However, in the quantum case, this is possible if one has access to a prover who can supply a purification of the mixed state. In this work, we introduce the concept of mixed-state certification, and we show that a natural sampling protocol offers secure certification in the presence of a possibly dishonest prover: if the verifier accepts then he can be almost certain that the state in question has been correctly prepared, up to a small number of errors. We then apply this result to two-party quantum coin-tossing. Given that strong coin tossing is impossible, it is natural to ask “how close can we get”. This question has been well studied and is nowadays well understood from the perspective of the bias of individual coin tosses. We approach and answer this question from a different—and somewhat orthogonal—perspective, where we do not look at individual coin tosses but at the global entropy instead. We show how two distrusting parties can produce a common high-entropy source, where the entropy is an arbitrarily small fraction below the maximum.
2018
TCC
Secure Computation Using Leaky Correlations (Asymptotically Optimal Constructions)
Most secure computation protocols can be effortlessly adapted to offload a significant fraction of their computationally and cryptographically expensive components to an offline phase so that the parties can run a fast online phase and perform their intended computation securely. During this offline phase, parties generate private shares of a sample generated from a particular joint distribution, referred to as the correlation. These shares, however, are susceptible to leakage attacks by adversarial parties, which can compromise the security of the secure computation protocol. The objective, therefore, is to preserve the security of the honest party despite the leakage performed by the adversary on her share. Prior solutions, starting with n-bit leaky shares, either used 4 messages or enabled the secure computation of only sub-linear size circuits. Our work presents the first 2-message secure computation protocol for 2-party functionalities that have $$\varTheta (n)$$ circuit-size despite $$\varTheta (n)$$-bits of leakage, a qualitatively optimal result. We compose a suitable 2-message secure computation protocol in parallel with our new 2-message correlation extractor. Correlation extractors, introduced by Ishai, Kushilevitz, Ostrovsky, and Sahai (FOCS–2009) as a natural generalization of privacy amplification and randomness extraction, recover “fresh” correlations from the leaky ones, which are subsequently used by other cryptographic protocols. We construct the first 2-message correlation extractor that produces $$\varTheta (n)$$-bit fresh correlations even after $$\varTheta (n)$$-bit leakage. Our principal technical contribution, which is of potential independent interest, is the construction of a family of multiplication-friendly linear secret sharing schemes that is simultaneously a family of small-bias distributions. We construct this family by randomly “twisting then permuting” appropriate Algebraic Geometry codes over constant-size fields.
2018
TCC
Smooth NIZK Arguments
We introduce a novel notion of smooth (-verifier) non-interactive zero-knowledge proofs (NIZK) which parallels the familiar notion of smooth projective hash functions (SPHF). We also show that the single group element quasi-adaptive NIZK (QA-NIZK) of Jutla and Roy (CRYPTO 2014) and Kiltz and Wee (Eurocrypt 2015) for linear subspaces can be easily extended to be computationally smooth. One important distinction of the new notion from SPHFs is that in a smooth NIZK the public evaluation of the hash on a language member using the projection key does not require the witness of the language member, but instead just requires its NIZK proof. This has the remarkable consequence that if one replaces the traditionally employed SPHFs with the novel smooth QA-NIZK in the Gennaro-Lindell paradigm of designing universally-composable password-authenticated key-exchange (UC-PAKE) protocols, one gets highly efficient UC-PAKE protocols that are secure even under adaptive corruption. This simpler and modular design methodology allows us to give the first single-round asymmetric UC-PAKE protocol, which is also secure under adaptive corruption in the erasure model. Previously, all asymmetric UC-PAKE protocols required at least two rounds. In fact, our protocol just requires each party to send a single message asynchronously. In addition, the protocol has short messages, with each party sending only four group elements. Moreover, the server password file needs to store only one group element per client. The protocol employs asymmetric bilinear pairing groups and is proven secure in the (limited programmability) random oracle model and under the standard bilinear pairing assumption SXDH.
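To make the contrast with SPHFs concrete, the following toy Python sketch implements the textbook SPHF for the Diffie-Hellman language, which is the classical notion this paper refines: the private hashing key evaluates the hash directly, while the projection key can only do so given the witness r; in the paper's smooth QA-NIZK, that witness would instead be replaced by a NIZK proof of membership. The prime, generators, and exponent ranges below are illustrative toy choices, not parameters taken from the paper.

import random

p = 2305843009213693951                 # toy Mersenne prime 2^61 - 1 (illustrative only)
g = 3
h = pow(g, 123456789, p)                 # second generator, a fixed power of g

def keygen():
    alpha, beta = random.randrange(p - 1), random.randrange(p - 1)
    hk = (alpha, beta)                                   # private hashing key
    hp = (pow(g, alpha, p) * pow(h, beta, p)) % p        # public projection key
    return hk, hp

def hash_private(hk, word):
    # Evaluate the hash on a word (u, e) using the private hashing key.
    (alpha, beta), (u, e) = hk, word
    return (pow(u, alpha, p) * pow(e, beta, p)) % p

def hash_projective(hp, word, r):
    # Projective evaluation: for this language only the witness r is needed.
    return pow(hp, r, p)

r = random.randrange(p - 1)
word = (pow(g, r, p), pow(h, r, p))      # language member (g^r, h^r) with witness r
hk, hp = keygen()
assert hash_private(hk, word) == hash_projective(hp, word, r)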
2018
TCC
Static-Memory-Hard Functions, and Modeling the Cost of Space vs. Time
A line of recent research starting with (Alwen and Serbinenko, STOC 2015) has deepened our understanding of the notion of memory-hardness in cryptography, a useful property of hash functions for deterring large-scale password-cracking attacks, and has shown memory-hardness to have intricate connections with the theory of graph pebbling. Definitions of memory-hardness are not yet unified in this somewhat nascent field, however, and the guarantees proven to date are with respect to a range of proposed definitions. In this paper, we observe two significant and practical considerations that are not analyzed by existing models of memory-hardness, and propose new models to capture them, accompanied by constructions based on new hard-to-pebble graphs. Our contribution is two-fold, as follows. First, existing measures of memory-hardness only account for dynamic memory usage (i.e., memory read/written at runtime), and do not consider static memory usage (e.g., memory on disk). Among other things, this means that memory requirements considered by prior models are inherently upper-bounded by a hash function’s runtime; in contrast, counting static memory would potentially allow quantification of much larger memory requirements, decoupled from runtime. We propose a new definition of static-memory-hard function (SHF) which takes static memory into account: we model static memory usage by oracle access to a large preprocessed string, which may be considered part of the hash function description. Static memory requirements are complementary to dynamic memory requirements: neither can replace the other, and to deter large-scale password-cracking attacks, a hash function will benefit from being both dynamic-memory-hard and static-memory-hard. We give two SHF constructions based on pebbling. To prove static-memory-hardness, we define a new pebble game (“black-magic pebble game”), and new graph constructions with optimal complexity under our proposed measure. Moreover, we provide a prototype implementation of our first SHF construction (which is based on pebbling of a simple “cylinder” graph), providing an initial demonstration of practical feasibility for a limited range of parameter settings. Second, existing memory-hardness models implicitly assume that the costs of space and time are more or less on par: they consider only linear ratios between the costs of time and space. We propose a new model to capture nonlinear time-space trade-offs: e.g., how is the adversary impacted when space is quadratically more expensive than time? We prove that nonlinear tradeoffs can in fact cause adversaries to employ different strategies from linear tradeoffs. Please refer to the full version of our paper for all results, proofs, appendices, and implementation details [DLP18].
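As a rough illustration of the static-memory model described above (not of the paper's pebbling-based constructions), the following toy Python sketch treats a large preprocessed string as an oracle: the table R is computed once, may be thought of as residing on disk, and each evaluation reads only a few positions of it. The table size, probe count, chunk size, and underlying hash are arbitrary toy choices.

import hashlib

TABLE_BYTES = 1 << 20     # 1 MiB of "static memory" (a toy size; real SHFs target far more)
CHUNK = 64                # bytes read per probe
PROBES = 8                # static-memory positions touched per evaluation

def preprocess(seed: bytes) -> bytes:
    # Fill the static table with pseudorandom chunks derived from a public seed.
    out, counter = bytearray(), 0
    while len(out) < TABLE_BYTES:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:TABLE_BYTES])

def shf_eval(table: bytes, x: bytes) -> bytes:
    # Derive a few probe positions from x, read those chunks of the
    # static table, and hash everything together.
    acc = hashlib.sha256(x)
    n_chunks = TABLE_BYTES // CHUNK
    for i in range(PROBES):
        idx = int.from_bytes(hashlib.sha256(x + bytes([i])).digest()[:8], "big") % n_chunks
        acc.update(table[idx * CHUNK:(idx + 1) * CHUNK])
    return acc.digest()

R = preprocess(b"public seed")           # done once, part of the function description
print(shf_eval(R, b"password123").hex())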
2018
TCC
Succinct Garbling Schemes from Functional Encryption Through a Local Simulation Paradigm
We study a simulation paradigm, referred to as local simulation, in garbling schemes. This paradigm captures simulation proof strategies in which the simulator consists of many local simulators that generate different blocks of the garbled circuit. A useful property of such a simulation strategy is that only a few of these local simulators depend on the input, whereas the rest of the local simulators only depend on the circuit. We formalize this notion by defining locally simulatable garbling schemes. By suitably realizing this notion, we give a new construction of succinct garbling schemes for Turing machines assuming the polynomial hardness of compact functional encryption and standard assumptions (such as either CDH or LWE). Prior constructions of succinct garbling schemes either assumed sub-exponential hardness of compact functional encryption or were designed only for small-space Turing machines. We also show that a variant of locally simulatable garbling schemes can be used to generically obtain adaptively secure garbling schemes for circuits. All prior constructions of adaptively secure garbling that use somewhere equivocal encryption can be seen as instantiations of our construction.
2018
TCC
The MMap Strikes Back: Obfuscation and New Multilinear Maps Immune to CLT13 Zeroizing Attacks
All known multilinear map candidates have suffered from a class of attacks known as “zeroizing” attacks, which render them unusable for many applications. We provide a new construction of polynomial-degree multilinear maps and show that our scheme is provably immune to zeroizing attacks under a strengthening of the Branching Program Un-Annihilatability Assumption (Garg et al., TCC 2016-B). Concretely, we build our scheme on top of the CLT13 multilinear maps (Coron et al., CRYPTO 2013). In order to justify the security of our new scheme, we devise a weak multilinear map model for CLT13 that captures zeroizing attacks and generalizations, reflecting all known classical polynomial-time attacks on CLT13. In our model, we show that our new multilinear map scheme achieves ideal security, meaning no known attacks apply to our scheme. Using our scheme, we give a new multiparty key agreement protocol that is several orders of magnitude more efficient than what was previously possible. We also demonstrate the general applicability of our model by showing that several existing obfuscation and order-revealing encryption schemes, when instantiated with CLT13 maps, are secure against known attacks. These are schemes that are actually being implemented for experimentation, but until our work had no rigorous justification for security.
2018
TCC
The Security of Lazy Users in Out-of-Band Authentication
Faced with the threats posed by man-in-the-middle attacks, messaging platforms rely on “out-of-band” authentication, assuming that users have access to an external channel for authenticating one short value. For example, assuming that users who recognize each other’s voice can authenticate a short value, Telegram and WhatsApp ask their users to compare 288-bit and 200-bit values, respectively. The existing protocols, however, do not take into account the plausible behavior of users who may be “lazy” and only compare parts of these values (rather than their entirety). Motivated by such a security-critical user behavior, we study the security of lazy users in out-of-band authentication. We start by showing that both the protocol implemented by WhatsApp and the statistically-optimal protocol of Naor, Segev and Smith (CRYPTO ’06) are completely vulnerable to man-in-the-middle attacks when the users consider only half of the out-of-band authenticated value. In this light, we put forward a framework that captures the behavior and security of lazy users. Our notions of security consider both statistical security and computational security, and for each flavor we derive a lower bound on the tradeoff between the number of positions that are considered by the lazy users and the adversary’s forgery probability. Within our framework we then provide two authentication protocols. First, in the statistical setting, we present a transformation that converts any out-of-band authentication protocol into one that is secure even when executed by lazy users. Instantiating our transformation with a new refinement of the protocol of Naor et al. results in a protocol whose tradeoff essentially matches our lower bound in the statistical setting. Then, in the computational setting, we show that the computationally-optimal protocol of Vaudenay (CRYPTO ’05) is secure even when executed by lazy users, and its tradeoff matches our lower bound in the computational setting.
2018
TCC
Topology-Hiding Computation Beyond Semi-Honest Adversaries
Topology-hiding communication protocols allow a set of parties, connected by an incomplete network with unknown communication graph, where each party only knows its neighbors, to construct a complete communication network such that the network topology remains hidden even from a powerful adversary who can corrupt parties. This communication network can then be used to perform arbitrary tasks, for example secure multi-party computation, in a topology-hiding manner. Previously proposed protocols could only tolerate passive corruption. This paper proposes protocols that can also tolerate fail-corruption (i.e., the adversary can crash any party at any point in time) and so-called semi-malicious corruption (i.e., the adversary can control a corrupted party’s randomness), without leaking more than an arbitrarily small fraction of a bit of information about the topology. A small-leakage protocol was recently proposed by Ball et al. [Eurocrypt’18], but only under the unrealistic set-up assumption that each party has a trusted hardware module containing secret correlated pre-set keys, and with the further two restrictions that only passively corrupted parties can be crashed by the adversary, and that semi-malicious corruption is not tolerated. Since leaking a small amount of information is unavoidable, as is the need to abort the protocol in case of failures, our protocols seem to achieve the best possible goal in a model with fail-corruption. Further contributions of the paper are applications of the protocol to obtain secure MPC protocols; this requires a way to bound the aggregated leakage when multiple small-leakage protocols are executed in parallel or sequentially. Moreover, while previous protocols are based on the DDH assumption, a new so-called PKCR public-key encryption scheme based on the LWE assumption is proposed, allowing topology-hiding computation to be based on LWE. Furthermore, a protocol using fully homomorphic encryption that achieves very low round complexity is proposed.
2018
TCC
Towards Tight Security of Cascaded LRW2
The Cascaded LRW2 tweakable block cipher was introduced by Landecker et al. at CRYPTO 2012, and proven secure up to $$2^{2n/3}$$ queries. There has not been any attack on the construction faster than the generic attack in $$2^n$$ queries. In this work we initiate the quest towards a tight bound. We first present a distinguishing attack in $$2n^{1/2}2^{3n/4}$$ queries against a generalized version of the scheme. The attack is supported with an experimental verification and a formal success probability analysis. We subsequently discuss non-trivial bottlenecks in proving tight security, most importantly the distinguisher’s freedom in choosing the tweak values. Finally, we prove that if every tweak value occurs at most $$2^{n/4}$$ times, Cascaded LRW2 is secure up to $$2^{3n/4}$$ queries.
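For readers unfamiliar with the construction, the following toy Python sketch shows the shape of Cascaded LRW2: two LRW2 layers $$E_K(M \oplus h(T)) \oplus h(T)$$ composed under independent keys, here with 8-bit blocks, uniformly random permutations standing in for the block cipher, and multiplication in GF($$2^8$$) as the XOR-universal hash. It illustrates the syntax of the scheme only and says nothing about the quantitative bounds studied in the paper.

import random

N = 8                      # toy block size in bits
AES_POLY = 0x11B           # irreducible polynomial x^8 + x^4 + x^3 + x + 1 for GF(2^8)

def gf_mul(a, b):
    # Carry-less multiplication modulo the AES polynomial.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= AES_POLY
        b >>= 1
    return r

def random_perm(seed):
    # A uniformly random permutation of {0, ..., 255} stands in for E_K.
    rng = random.Random(seed)
    table = list(range(1 << N))
    rng.shuffle(table)
    return table

def lrw2(perm, hkey, tweak, msg):
    # One LRW2 layer: E_K(M xor h(T)) xor h(T), with h(T) = hkey * T in GF(2^8).
    mask = gf_mul(hkey, tweak)
    return perm[msg ^ mask] ^ mask

def cascaded_lrw2(perm1, perm2, h1, h2, tweak, msg):
    return lrw2(perm2, h2, tweak, lrw2(perm1, h1, tweak, msg))

E1, E2 = random_perm(1), random_perm(2)
h1, h2 = 0x53, 0xCA        # independent nonzero hash keys
print(cascaded_lrw2(E1, E2, h1, h2, tweak=0x3D, msg=0x7F))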
2018
TCC
Traitor-Tracing from LWE Made Simple and Attribute-Based
A traitor tracing scheme is a public key encryption scheme for which there are many secret decryption keys. Any of these keys can decrypt a ciphertext; moreover, even if a coalition of users collude, put together their decryption keys and attempt to create a new decryption key, there is an efficient algorithm to trace the new key to at least one of the colluders. Recently, Goyal, Koppula and Waters (GKW, STOC 18) provided the first traitor tracing scheme from LWE with ciphertext and secret key sizes that grow polynomially in $$\log n$$, where n is the number of users. The main technical building block in their construction is a strengthening of (bounded collusion secure) secret-key functional encryption which they refer to as mixed functional encryption (FE). In this work, we improve upon and extend the GKW traitor tracing scheme. First, we provide simpler constructions of mixed FE schemes based on the LWE assumption; our constructions improve upon the GKW construction in terms of expressiveness, modularity, and security. Second, we provide a construction of attribute-based traitor tracing for all circuits based on the LWE assumption.
2018
TCC
Two-Message Statistically Sender-Private OT from LWE
We construct a two-message oblivious transfer (OT) protocol without setup that guarantees statistical privacy for the sender even against malicious receivers. Receiver privacy is game based and relies on the hardness of learning with errors (LWE). This flavor of OT has been a central building block for minimizing the round complexity of witness indistinguishable and zero knowledge proof systems, non-malleable commitment schemes and multi-party computation protocols, as well as for achieving circuit privacy for homomorphic encryption in the malicious setting. Prior to this work, all candidates in the literature from standard assumptions relied on number theoretic assumptions and were thus insecure in the post-quantum setting. This work provides the first (presumed) post-quantum secure candidate and thus makes it possible to instantiate the aforementioned applications in a post-quantum secure manner. Technically, we rely on the transference principle: either a lattice or its dual must have short vectors. Short vectors, in turn, can be translated to information loss in encryption. Thus encrypting one message with respect to the lattice and one with respect to its dual guarantees that at least one of them will be statistically hidden.
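For background, the transference principle alluded to here is usually made quantitative via Banaszczyk's transference theorem (the exact form used in the paper may differ): for any full-rank lattice $$\varLambda \subseteq \mathbb {R}^n$$ with dual $$\varLambda ^*$$, it holds that $$1 \le \lambda _1(\varLambda )\cdot \lambda _n(\varLambda ^*) \le n$$, where $$\lambda _i$$ denotes the i-th successive minimum. In particular $$\lambda _1(\varLambda )\cdot \lambda _1(\varLambda ^*) \le n$$, so at least one of the two lattices contains a nonzero vector of length at most $$\sqrt{n}$$, which is the informal “either the lattice or its dual has short vectors” statement above.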
2018
TCC
Two-Round Adaptively Secure Multiparty Computation from Standard Assumptions
We present the first two-round multiparty computation (MPC) protocols secure against malicious adaptive corruption in the common reference string (CRS) model, based on DDH, LWE, or QR. Prior two-round adaptively secure protocols were known only in the two-party setting against semi-honest adversaries, or in the general multiparty setting assuming the existence of indistinguishability obfuscation (iO). Our protocols are constructed in two steps. First, we construct two-round oblivious transfer (OT) protocols secure against malicious adaptive corruption in the CRS model based on DDH, LWE, or QR. We achieve this by generically transforming any two-round OT that is only secure against static corruption but has certain oblivious sampleability properties into a two-round adaptively secure OT. Prior constructions were only secure against semi-honest adversaries or based on iO. Second, building upon recent constructions of two-round MPC from two-round OT in the weaker static corruption setting [Garg and Srinivasan, Benhamouda and Lin, Eurocrypt’18] and using equivocal garbled circuits from [Canetti, Poburinnaya and Venkitasubramaniam, STOC’17], we show how to construct two-round adaptively secure MPC from two-round adaptively secure OT and constant-round adaptively secure MPC, with respect to both malicious and semi-honest adversaries. As a corollary, we also obtain the first 2-round MPC secure against semi-honest adaptive corruption in the plain model based on augmented non-committing encryption (NCE), which can be based on a variety of assumptions such as CDH, RSA, DDH, LWE, or factoring Blum integers. Finally, we mention that our OT and MPC protocols in the CRS model are, in fact, adaptively secure in the Universal Composability framework.
2018
TCC
Two-Round MPC: Information-Theoretic and Black-Box
We continue the study of protocols for secure multiparty computation (MPC) that require only two rounds of interaction. The recent works of Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) essentially settle the question by showing that such protocols are implied by the minimal assumption that a two-round oblivious transfer (OT) protocol exists. However, these protocols inherently make a non-black-box use of the underlying OT protocol, which results in poor concrete efficiency. Moreover, no analogous result was known in the information-theoretic setting, or alternatively based on one-way functions, given an OT correlations setup or an honest majority. Motivated by these limitations, we study the possibility of obtaining information-theoretic and “black-box” implementations of two-round MPC protocols. We obtain the following results. (1) Two-round MPC from OT correlations: given an OT correlations setup, we get protocols that make a black-box use of a pseudorandom generator (PRG) and are secure against a malicious adversary corrupting an arbitrary number of parties; for a semi-honest adversary, we get similar information-theoretic protocols for branching programs. (2) New NIOT constructions: towards realizing OT correlations, we extend the DDH-based non-interactive OT (NIOT) protocol of Bellare and Micali (Crypto’89) to the malicious security model, and present new NIOT constructions from the Quadratic Residuosity Assumption (QRA) and the Learning With Errors (LWE) assumption. (3) Two-round black-box MPC with strong PKI setup: combining the two previous results, we get two-round MPC protocols that make a black-box use of any DDH-hard or QRA-hard group; the protocols can offer security against a malicious adversary, and require a PKI setup that depends on the number of parties and the size of computation, but not on the inputs or the identities of the participating parties. (4) Two-round honest-majority MPC from secure channels: given secure point-to-point channels, we get protocols that make a black-box use of a pseudorandom generator (PRG), as well as information-theoretic protocols for branching programs; these protocols can tolerate a semi-honest adversary corrupting a strict minority of the parties, where in the information-theoretic case the complexity is exponential in the number of parties.
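As background for result (1), an “OT correlations setup” hands the parties correlated randomness that later makes oblivious transfer essentially free. The following Python sketch uses the standard OT-precomputation idea (commonly attributed to Beaver) to show what such a correlation looks like and how it is consumed with a single message in each direction; it is generic background rather than any protocol from this paper, and the 128-bit string length is an arbitrary choice.

import secrets

def deal_ot_correlation():
    # Trusted setup: sender gets random (r0, r1); receiver gets a random
    # choice bit c together with r_c.
    r0, r1 = secrets.randbits(128), secrets.randbits(128)
    c = secrets.randbits(1)
    return (r0, r1), (c, (r0, r1)[c])

def receiver_message(c, b):
    # The receiver masks its real choice bit b with the preprocessed bit c.
    return b ^ c

def sender_message(r0, r1, d, x0, x1):
    # The sender one-time-pads each input with the matching correlation string.
    r = (r0, r1)
    return x0 ^ r[d], x1 ^ r[1 ^ d]

def receiver_output(z0, z1, b, rc):
    return (z0, z1)[b] ^ rc

(r0, r1), (c, rc) = deal_ot_correlation()
b, (x0, x1) = 1, (secrets.randbits(128), secrets.randbits(128))
d = receiver_message(c, b)
z0, z1 = sender_message(r0, r1, d, x0, x1)
assert receiver_output(z0, z1, b, rc) == (x0, x1)[b]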
2018
TCC
Upgrading to Functional Encryption
The notion of Functional Encryption (FE) has recently emerged as a strong primitive with several exciting applications. In this work, we initiate the study of the following question: can existing public key encryption schemes be “upgraded” to Functional Encryption schemes without changing their public keys or the encryption algorithm? We call a public-key encryption scheme with this property FE-compatible. Indeed, assuming ideal obfuscation, it is easy to see that every CCA-secure public-key encryption scheme is FE-compatible. Despite the recent success in using indistinguishability obfuscation to replace ideal obfuscation for many applications, we show that this phenomenon most likely will not apply here. We show that assuming fully homomorphic encryption and the learning with errors (LWE) assumption, there exists a CCA-secure encryption scheme that is provably not FE-compatible. We also show that a large class of natural CCA-secure encryption schemes proven secure in the random oracle model are not FE-compatible in the random oracle model. Nevertheless, we identify a key structure that, if present, is sufficient to provide FE-compatibility. Specifically, we show that assuming sub-exponentially secure iO and sub-exponentially secure one way functions, there exists a class of public key encryption schemes, which we call Special-CCA secure encryption schemes, that are in fact FE-compatible. In particular, each of the following popular CCA secure encryption schemes (some of which existed even before the notion of FE was introduced) falls into the class of Special-CCA secure encryption schemes and is thus FE-compatible: (1) [CHK04] when instantiated with the IBE scheme of [BB04]; (2) [CHK04] when instantiated with any Hierarchical IBE scheme; and (3) [PW08] when instantiated with any Lossy Trapdoor Function.
2018
TCC
Watermarking PRFs Under Standard Assumptions: Public Marking and Security with Extraction Queries
A software watermarking scheme can embed some information called a mark into a program while preserving its functionality. No adversary can remove the mark without damaging the functionality of the program. Cohen et al. (STOC ’16) gave the first positive results for watermarking, showing how to watermark certain pseudorandom function (PRF) families using indistinguishability obfuscation (iO). Their scheme has a secret marking procedure to embed marks in programs and a public extraction procedure to extract the marks from programs; security holds even against an attacker that has access to a marking oracle. Kim and Wu (CRYPTO ’17) later constructed a PRF watermarking scheme under only the LWE assumption. In their scheme, both the marking and extraction procedures are secret, but security only holds against an attacker with access to a marking oracle but not an extraction oracle. In fact, it is possible to completely break the security of the latter scheme using extraction queries, which is a significant limitation in any foreseeable application. In this work, we construct a new PRF watermarking scheme with the following properties. First, the marking procedure is public, and therefore anyone can embed marks in PRFs from the family; previously, no such construction was known even using obfuscation. Second, the extraction key is secret, but marks remain unremovable even if the attacker has access to an extraction oracle; previously, no such construction was known under standard assumptions. Our scheme is simple, uses generic components, and can be instantiated under many different assumptions such as DDH, factoring, or LWE. The above benefits come with one caveat compared to prior work: the PRF family that we can watermark depends on the public parameters of the watermarking scheme, and the watermarking authority has a secret key which can break the security of all of the PRFs in the family. Since the watermarking authority is usually assumed to be trusted, this caveat appears to be acceptable.
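The syntax implied by this description can be summarized in the following hypothetical Python interface sketch: setup outputs public parameters (which enable public marking), a secret extraction key, and the authority's secret key. All names and types here are invented for illustration; this is a syntax-only stub, not code from the paper.

from abc import ABC, abstractmethod

class WatermarkablePRFScheme(ABC):
    """Hypothetical interface mirroring the properties described above."""

    @abstractmethod
    def setup(self):
        """Return (pp, ek, ak): public parameters pp used for public marking,
        a secret extraction key ek, and the authority's secret key ak, which,
        as noted above, can break the security of every PRF in the family."""

    @abstractmethod
    def keygen(self, pp):
        """Sample a PRF key from the family determined by pp."""

    @abstractmethod
    def mark(self, pp, prf_key, message):
        """Publicly embed `message` into the program computing the PRF."""

    @abstractmethod
    def extract(self, ek, program):
        """Use the secret extraction key to recover the embedded message,
        or report that the program is unmarked."""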