International Association for Cryptologic Research

CryptoDB

Gilad Asharov

Publications

Year
Venue
Title
2024
EUROCRYPT
Perfect Asynchronous MPC with Linear Communication Overhead
We study secure multiparty computation in the asynchronous setting with perfect security and optimal resilience (less than one-fourth of the participants are malicious). It has been shown that every function can be computed in this model [Ben-Or, Canetti, and Goldreich, STOC 1993]. Despite 30 years of research, all protocols in the asynchronous setting require $\Omega(n^2C)$ communication complexity for computing a circuit with $C$ multiplication gates. In contrast, for nearly 15 years, in the synchronous setting, it has been known how to achieve $\mathcal{O}(nC)$ communication complexity (Beerliova and Hirt, TCC 2008). The techniques for achieving this result in the synchronous setting are not known to be sufficient for obtaining an analogous result in the asynchronous setting. We close this gap between synchronous and asynchronous secure computation and show the first asynchronous protocol with $\mathcal{O}(nC)$ communication complexity for a circuit with $C$ multiplication gates. Linear overhead forms a natural barrier for general secret-sharing-based MPC protocols. Our main technical contribution is an asynchronous weak-binding secret sharing that achieves rate-1 communication (i.e., $\mathcal{O}(1)$ overhead per secret). To achieve this goal, we develop new techniques for the asynchronous setting, including the use of \emph{trivariate polynomials} (as opposed to bivariate polynomials).
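As background for this contribution, the following is a minimal sketch of classical bivariate polynomial sharing, the building block that the paper generalizes to trivariate polynomials; the field size, degree bounds, and function names are illustrative assumptions, and the verification and asynchronous machinery are omitted.

```python
import random

P = 2**61 - 1  # an illustrative Mersenne-prime field

def bivariate_share(secret: int, t: int, n: int):
    # Random degree-(t, t) bivariate polynomial S with S(0, 0) = secret.
    C = [[random.randrange(P) for _ in range(t + 1)] for _ in range(t + 1)]
    C[0][0] = secret

    def S(x: int, y: int) -> int:
        return sum(C[i][j] * pow(x, i, P) * pow(y, j, P)
                   for i in range(t + 1) for j in range(t + 1)) % P

    # Party i holds the row f_i(x) = S(x, i) and the column g_i(y) = S(i, y),
    # represented here by their evaluations at the points 0..n.
    rows = {i: [S(x, i) for x in range(n + 1)] for i in range(1, n + 1)}
    cols = {i: [S(i, y) for y in range(n + 1)] for i in range(1, n + 1)}
    return rows, cols

rows, cols = bivariate_share(42, t=2, n=7)
# Pairwise consistency f_i(j) == g_j(i), which verification phases cross-check:
assert all(rows[i][j] == cols[j][i] for i in range(1, 8) for j in range(1, 8))
```

Each party's row and column are themselves degree-$t$ univariate sharings, and it is this pairwise cross-checking structure that a trivariate construction lifts by one dimension.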
2024
EUROCRYPT
Perfect (Parallel) Broadcast in Constant Expected Rounds via Statistical VSS
Gilad Asharov Anirudh Chandramouli
We study broadcast protocols in the information-theoretic model under optimal conditions, where the number of corruptions $t$ is at most one-third of the parties, $n$. While broadcast is known to require $\Omega(n)$ rounds in the worst case, protocols with an expected constant number of rounds have been demonstrated since the seminal work of Feldman and Micali [STOC'88]. Communication complexity for such protocols has gradually improved over the years, reaching $O(nL)$ plus expected $O(n^4\log n)$ for broadcasting a message of size $L$ bits. This paper presents a perfectly secure broadcast protocol with expected constant rounds and communication complexity of $O(nL)$ plus expected $O(n^3 \log^2 n)$ bits. In addition, we consider the problem of parallel broadcast, where each of $n$ senders wishes to broadcast a message of size $L$. We show a parallel broadcast protocol with expected constant rounds and communication complexity of $O(n^2L)$ plus expected $O(n^3 \log^2 n)$ bits. Our protocol is optimal (up to expectation) for messages of length $L \in \Omega(n \log^2 n)$. Our main contribution is a framework for obtaining \emph{perfectly} secure broadcast with an expected constant number of rounds from a \emph{statistically} secure verifiable secret sharing. Moreover, we provide a new statistically secure verifiable secret sharing where the broadcast cost per participant is reduced from $O(n \log n)$ bits to only $O(\mathrm{polylog}\, n)$ bits. All our protocols are adaptively secure.
2023
EUROCRYPT
Detect, Pack and Batch: Perfectly-Secure MPC with Linear Communication and Constant Expected Time
We prove that perfectly-secure optimally-resilient secure Multi-Party Computation (MPC) for a circuit with $C$ gates and depth $D$ can be obtained in $O((Cn+n^4 + Dn^2)\log n)$ communication complexity and $O(D)$ expected time. For $D \ll n$ and $C\geq n^3$, this is the \textbf{first} perfectly-secure optimally-resilient MPC protocol with \textbf{linear} communication complexity per gate and \textbf{constant} expected time complexity per layer. Compared to state-of-the-art MPC protocols in the player elimination framework [Beerliova and Hirt TCC'08, and Goyal, Liu, and Song CRYPTO'19], for $C>n^3$ and $D \ll n$, our results significantly improve the run time from $\Theta(n+D)$ to expected $O(D)$ while keeping communication complexity at $O(Cn\log n)$. Compared to state-of-the-art MPC protocols that obtain an expected $O(D)$ time complexity [Abraham, Asharov, and Yanai TCC'21], for $C>n^3$, our results significantly improve the communication complexity from $O(Cn^4\log n)$ to $O(Cn\log n)$ while keeping the expected run time at $O(D)$. One salient part of our technical contribution is centered around a new primitive we call \textit{detectable secret sharing}. It is perfectly-hiding, weakly-binding, and has the property that either reconstruction succeeds, or $O(n)$ parties are (privately) detected. On the one hand, we show that detectable secret sharing is sufficiently powerful to generate multiplication triplets needed for MPC. On the other hand, we show how to share $p$ secrets via detectable secret sharing with communication complexity of just $O(n^4\log n+p \log n)$. When sharing $p\geq n^4$ secrets, the communication cost is amortized to just $O(1)$ per secret. Our second technical contribution is a new Verifiable Secret Sharing protocol that can share $p$ secrets at just $O(n^4\log n+pn\log n)$ word complexity. When sharing $p\geq n^3$ secrets, the communication cost is amortized to just $O(n)$ per secret. The best prior work required $O(n^3)$ communication per secret.
2023
JOFC
Oblivious RAM with Worst-Case Logarithmic Overhead
We present the first Oblivious RAM (ORAM) construction that for $N$ memory blocks supports accesses with worst-case $O(\log N)$ overhead for any block size $\Omega(\log N)$ while requiring a client memory of only a constant number of memory blocks. We rely on the existence of one-way functions and guarantee computational security. Our result closes a long line of research on fundamental feasibility results for ORAM constructions, as logarithmic overhead is necessary. The previous best logarithmic-overhead construction only guarantees it in an amortized sense, i.e., logarithmic overhead is achieved only for long enough access sequences, where some of the individual accesses incur $\Theta(N)$ overhead. The previously best ORAM in terms of worst-case overhead achieves $O(\log^2 N/\log\log N)$ overhead. Technically, we design a novel de-amortization framework for modern ORAM constructions that use the “shuffled inputs” assumption. Our framework significantly departs from all previous de-amortization frameworks, originating from Ostrovsky and Shoup (STOC'97), which seem to be fundamentally too weak to be applied to modern ORAM constructions.
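The amortized-versus-worst-case distinction drawn above can be seen in a toy example; this illustrates only the distinction, not the paper's de-amortization framework, and all names and numbers are illustrative:

```python
# A structure that fully rebuilds itself every n-th access has O(1)
# amortized cost per access, yet a single access can cost Theta(n).
# De-amortization spreads the rebuild work so EVERY access costs O(1).
class AmortizedStore:
    def __init__(self, n: int):
        self.n, self.ops = n, 0

    def access(self) -> int:
        self.ops += 1
        cost = 1
        if self.ops % self.n == 0:  # periodic full rebuild
            cost += self.n          # this one access pays Theta(n)
        return cost

store = AmortizedStore(n=100)
costs = [store.access() for _ in range(1000)]
print(max(costs), sum(costs) / len(costs))  # worst case 101, amortized 2.0
```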
2022
EUROCRYPT
A Complete Characterization of Game-Theoretically Fair, Multi-Party Coin Toss
Gilad Asharov Elaine Shi Ke Wu
Cleve's celebrated lower bound (STOC'86) showed that a de facto strong fairness notion is impossible in 2-party coin toss, i.e., the corrupt party always has a strategy of biasing the honest party's outcome by a noticeable amount. Nonetheless, Blum's famous coin-tossing protocol (CRYPTO'81) achieves a strictly weaker “game-theoretic” notion of fairness: specifically, it is a 2-party coin toss protocol in which neither party can bias the outcome towards its own preference; thus, the honest protocol forms a Nash equilibrium in which neither party would want to deviate. Surprisingly, an $n$-party analog of Blum's famous coin toss protocol was not studied until recently. The work by Chung et al. (TCC'18) was the first to explore the feasibility of game-theoretically fair $n$-party coin toss in the presence of a corrupt majority. We may assume that each party has a publicly stated preference for either the bit 0 or 1, and if the outcome agrees with the party's preference, it obtains utility 1; else it obtains nothing. A natural game-theoretic formulation is to require that the honest protocol form a coalition-resistant Nash equilibrium, i.e., no coalition should have an incentive to deviate from the honest behavior. Chung et al. phrased this game-theoretic notion as “cooperative-strategy-proofness”, or “CSP-fairness” for short. Unfortunately, Chung et al. showed that under $(n-1)$-sized coalitions, it is impossible to design such a CSP-fair coin toss protocol, unless all parties except one prefer the same bit. In this paper, we show that the impossibility of Chung et al. is in fact not as broad as it may seem. When coalitions are of majority size but smaller than $n-1$, we can indeed get feasibility results in some meaningful parameter regimes. We give a complete characterization of the regime in which CSP-fair coin toss is possible, by providing matching upper and lower bounds. Our complete characterization theorem also shows that the mathematical structure of game-theoretic fairness is starkly different from the de facto strong fairness notion in the multi-party computation literature.
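For concreteness, here is a minimal sketch of a Blum-style two-party coin toss; the hash-based commitment is an illustrative simplification (the protocol is usually stated with a generic commitment scheme), and all helper names are assumptions:

```python
import os
import hashlib

def commit(bit: int) -> tuple[bytes, bytes]:
    """Commit to a bit; returns (commitment, opening)."""
    opening = bytes([bit]) + os.urandom(32)  # random nonce hides the bit
    return hashlib.sha256(opening).digest(), opening

def verify(commitment: bytes, opening: bytes) -> bool:
    return hashlib.sha256(opening).digest() == commitment

# Round 1: Alice commits to a random bit a.
a = os.urandom(1)[0] & 1
com, opening = commit(a)

# Round 2: Bob, seeing only the commitment, replies with a random bit b.
b = os.urandom(1)[0] & 1

# Round 3: Alice opens her commitment; the outcome is a XOR b.
assert verify(com, opening)
print("coin:", opening[0] ^ b)
```

Note that Alice could refuse to open after seeing $b$; Cleve's bound formalizes why such aborts rule out strong fairness, while the game-theoretic notion above asks only that no party can bias the coin towards its own stated preference.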
2022
TCC
Asymptotically Free Broadcast in Constant Expected Time via Packed VSS
Broadcast is an essential primitive for secure computation. We focus in this paper on optimal resilience (i.e., when the number of corrupted parties $t$ is less than a third of the computing parties $n$), and with no setup or cryptographic assumptions. While broadcast with worst-case $t$ rounds is impossible, it has been shown [Feldman and Micali STOC'88, Katz and Koo CRYPTO'06] how to construct protocols with an expected constant number of rounds in the private channel model. However, those constructions have large communication complexity, specifically $O(n^2L+n^6\log n)$ expected number of bits transmitted for broadcasting a message of length $L$. This leads to a significant communication blowup in secure computation protocols in this setting. In this paper, we substantially improve the communication complexity of broadcast in constant expected time. Specifically, the expected communication complexity of our protocol is $O(nL+n^4\log n)$. For messages of length $L=\Omega(n^3 \log n)$, our broadcast has no asymptotic overhead (up to expectation), as each party has to send or receive $O(n^3 \log n)$ bits. We also consider parallel broadcast, where $n$ parties wish to broadcast $L$-bit messages in parallel. Our protocol has no asymptotic overhead for $L=\Omega(n^2\log n)$, which is a common communication pattern in perfectly secure MPC protocols. For instance, it is common that all parties share their inputs simultaneously at the same round, and verifiable secret sharing protocols require the dealer to broadcast a total of $O(n^2\log n)$ bits. Of independent interest, our broadcast is achieved by a \emph{packed verifiable secret sharing}, a new notion that we introduce. We show a protocol that verifies $O(n)$ secrets simultaneously at the same cost as verifying just a single secret. This improves the state of the art by a factor of $n$.
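To illustrate the packing idea (though not the verification that makes it a VSS), here is a minimal sketch of packed Shamir sharing over a toy field; the field size, evaluation points, and function names are illustrative assumptions:

```python
import random

P = 2**61 - 1  # an illustrative prime field

def interpolate(points: list[tuple[int, int]], x: int) -> int:
    """Evaluate at x the unique polynomial through `points` (Lagrange)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def packed_share(secrets: list[int], t: int, n: int) -> list[tuple[int, int]]:
    """Share k secrets at once via one degree-(t+k-1) polynomial."""
    k = len(secrets)
    # Pin the polynomial to the k secrets at dedicated points and to t
    # random values, fixing a degree-(t+k-1) polynomial; party i gets f(i).
    pins = [(-(j + 1) % P, s) for j, s in enumerate(secrets)]
    pins += [(n + 1 + j, random.randrange(P)) for j in range(t)]
    return [(i, interpolate(pins, i)) for i in range(1, n + 1)]

secrets = [11, 22, 33]
shares = packed_share(secrets, t=2, n=7)
# Any t+k = 5 shares determine the polynomial, hence all k secrets at once.
recovered = [interpolate(shares[:5], -(j + 1) % P) for j in range(3)]
assert recovered == secrets
```

Packing $k = O(n)$ secrets into one polynomial is what lets the cost of a verification pass be amortized over $O(n)$ secrets rather than paid per secret.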
2022
JOFC
Efficient Perfectly Secure Computation with Optimal Resilience
Secure computation enables $n$ mutually distrustful parties to compute a function over their private inputs jointly. In 1988, Ben-Or, Goldwasser, and Wigderson (BGW) proved that any function can be computed with perfect security in the presence of a malicious adversary corrupting at most $t < n/3$ parties. After more than 30 years, protocols with perfect malicious security, and round complexity proportional to the circuit's depth, still require (verifiably) sharing a total of $O(n^2)$ values per multiplication. In contrast, only $O(n)$ values need to be shared per multiplication to achieve semi-honest security. Sharing $\Omega(n)$ values for a single multiplication seems to be the natural barrier for polynomial secret-sharing-based multiplication. In this paper, we construct a new secure computation protocol with perfect, optimal resilience and malicious security that incurs (verifiably) sharing $O(n)$ values per multiplication. Our protocol requires a constant number of rounds per multiplication. Like BGW, it has an overall round complexity that is proportional only to the multiplicative depth of the circuit. Our improvement is obtained by a novel construction for weak VSS for polynomials of degree $2t$, which incurs the same communication and round complexities as the state-of-the-art constructions for VSS for polynomials of degree $t$. Our second contribution is a method for reducing the communication complexity for any depth-1 sub-circuit to be proportional only to the size of the input and output (rather than the size of the circuit). This implies protocols with sub-linear communication complexity (in the size of the circuit) for perfectly secure computation for important functions like matrix multiplication.
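For context, here is a minimal sketch of plain degree-$t$ Shamir sharing, the polynomial secret sharing at the heart of BGW-style protocols; the prime and helper names are illustrative assumptions, and the paper's verifiable (VSS) and weak-VSS machinery is not implemented here:

```python
import random

P = 2**61 - 1  # an illustrative prime; parties sit at evaluation points 1..n

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Hide `secret` in the constant term of a random degree-t polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate at x = 0 from any t+1 shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(12345, t=2, n=7)
assert reconstruct(shares[:3]) == 12345  # any t+1 = 3 shares suffice
```

Any $t$ shares reveal nothing about the secret; the abstract's cost measure counts how many such values must be (verifiably) shared per multiplication.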
2022
JOFC
On the Complexity of Compressing Obfuscation
Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct. In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple-to-describe primitives that is known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (Functional signatures and pseudorandom functions, PKC, 2016) and Bitansky et al. (From Cryptomania to Obfustopia through secret-key functional encryption, TCC, 2016) by allowing for a broader regime of parameters. We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation. First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives. Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions. Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.
2022
JOFC
Locality-Preserving Oblivious RAM
Oblivious RAMs, introduced by Goldreich and Ostrovsky [JACM'96], compile any RAM program into one that is “memory oblivious,” i.e., the access pattern to the memory is independent of the input. All previous ORAM schemes, however, completely break the locality of data accesses (for instance, by shuffling the data to pseudorandom positions in memory). In this work, we initiate the study of locality-preserving ORAMs: ORAMs that preserve locality of the accessed memory regions, while leaking only the lengths of contiguous memory regions accessed. Our main results demonstrate the existence of a locality-preserving ORAM with polylogarithmic overhead both in terms of bandwidth and locality. We also study the trade-off between locality, bandwidth and leakage, and show that any scheme that preserves locality and does not leak the lengths of the contiguous memory regions accessed suffers from prohibitive bandwidth. To further improve the parameters, we also consider a weaker notion of a File ORAM, which supports accesses to predefined non-overlapping regions. Assuming one-way functions, we present a computationally secure File ORAM that has a work overhead and locality of roughly $O(\log^2 N)$, while ignoring $\log\log N$ factors. To the best of our knowledge, before our work, the only works combining locality and obliviousness were for symmetric searchable encryption [e.g., Cash and Tessaro (EUROCRYPT'14), Asharov et al. (STOC'16)]. Symmetric searchable encryption ensures obliviousness if each keyword is searched only once, whereas ORAM provides obliviousness to any input program. Thus, our work generalizes that line of work to the much more challenging task of preserving locality in ORAMs.
2021
EUROCRYPT
Towards Accountability in CRS Generation
It is well known that several cryptographic primitives cannot be achieved without a common reference string (CRS). Those include, for instance, non-interactive zero-knowledge for NP, or maliciously secure computation in fewer than four rounds. The security of those primitives heavily relies on the assumption that the trusted authority, which generates the CRS, does not misuse the randomness used in the CRS generation. However, we argue that there is no such thing as an unconditionally trusted authority, and every authority must be held accountable for any trust to be well-founded. Indeed, a malicious authority can, for instance, recover private inputs of honest parties given transcripts of the protocols executed with respect to the CRS it has generated. While eliminating trust in the trusted authority may not be entirely feasible, can we at least move towards achieving some notion of accountability? We propose a new notion in which, if the CRS authority releases the private inputs of protocol executions to others, we can then provide a publicly verifiable proof that certifies that the authority misbehaved. We study the feasibility of this notion in the context of non-interactive zero-knowledge and two-round secure two-party computation.
2021
CRYPTO
Oblivious RAM with Worst-Case Logarithmic Overhead
We present the first Oblivious RAM (ORAM) construction that for $N$ memory blocks supports accesses with \emph{worst-case} $O(\log N)$ overhead for any block size $\Omega(\log N)$ while requiring a client memory of only a constant number of memory blocks. We rely on the existence of one-way functions and guarantee computational security. Our result closes a long line of research on fundamental feasibility results for ORAM constructions, as logarithmic overhead is necessary. The previous best logarithmic-overhead construction only guarantees it in an \emph{amortized} sense, i.e., logarithmic overhead is achieved only for long enough access sequences, where some of the individual accesses incur $\Theta(N)$ overhead. The previously best ORAM in terms of \emph{worst-case} overhead achieves $O(\log^2 N/\log\log N)$ overhead. Technically, we design a novel de-amortization framework for modern ORAM constructions that use the “shuffled inputs” assumption. Our framework significantly departs from all previous de-amortization frameworks, originating from Ostrovsky and Shoup (STOC'97), which seem to be fundamentally too weak to be applied to modern ORAM constructions.
2021
TCC
Efficient Perfectly Secure Computation with Optimal Resilience
Secure computation enables $n$ mutually distrustful parties to compute a function over their private inputs jointly. In 1988, Ben-Or, Goldwasser, and Wigderson (BGW) demonstrated that any function can be computed with perfect security in the presence of a malicious adversary corrupting at most $t< n/3$ parties. After more than 30 years, protocols with perfect malicious security, with round complexity proportional to the circuit's depth, still require sharing a total of $O(n^2)$ values per multiplication. In contrast, only $O(n)$ values need to be shared per multiplication to achieve semi-honest security. Indeed, sharing $\Omega(n)$ values for a single multiplication seems to be the natural barrier for polynomial secret-sharing-based multiplication. In this paper, we close this gap by constructing a new secure computation protocol with perfect, optimal resilience and malicious security that incurs sharing of only $O(n)$ values per multiplication, thus matching the semi-honest setting for protocols with round complexity that is proportional to the circuit depth. Our protocol requires a constant number of rounds per multiplication. Like BGW, it has an overall round complexity that is proportional only to the multiplicative depth of the circuit. Our improvement is obtained by a novel construction for {\em weak VSS for polynomials of degree-$2t$}, which incurs the same communication and round complexities as the state-of-the-art constructions for {\em VSS for polynomials of degree-$t$}. Our second contribution is a method for reducing the communication complexity for any depth-1 sub-circuit to be proportional only to the size of the input and output (rather than the size of the circuit). This implies protocols with \emph{sublinear communication complexity} (in the size of the circuit) for perfectly secure computation for important functions like matrix multiplication.
2021
JOFC
Tight Tradeoffs in Searchable Symmetric Encryption
Gilad Asharov Gil Segev Ido Shahaf
A searchable symmetric encryption (SSE) scheme enables a client to store data on an untrusted server while supporting keyword searches in a secure manner. Recent experiments have indicated that the practical relevance of such schemes heavily relies on the tradeoff between their space overhead, locality (the number of non-contiguous memory locations that the server accesses with each query), and read efficiency (the ratio between the number of bits the server reads with each query and the actual size of the answer). These experiments motivated Cash and Tessaro (EUROCRYPT '14) and Asharov et al. (STOC '16) to construct SSE schemes offering various such tradeoffs and to prove lower bounds for natural SSE frameworks. Unfortunately, the best-possible tradeoff has not been identified, and there are substantial gaps between the existing schemes and lower bounds, indicating that a better understanding of SSE is needed. We establish tight bounds on the tradeoff between the space overhead, locality and read efficiency of SSE schemes within two general frameworks that capture the memory access pattern underlying all existing schemes. First, we introduce the “pad-and-split” framework, refining that of Cash and Tessaro while still capturing the same existing schemes. Within our framework we significantly strengthen their lower bound, proving that any scheme with locality $L$ must use space $\Omega(N \log N / \log L)$ for databases of size $N$. This is a tight lower bound, matching the tradeoff provided by the scheme of Demertzis and Papamanthou (SIGMOD '17), which is captured by our pad-and-split framework. Then, within the “statistical-independence” framework of Asharov et al. we show that their lower bound is essentially tight: We construct a scheme whose tradeoff matches their lower bound within an additive $O(\log\log\log N)$ factor in its read efficiency, once again improving upon the existing schemes. Our scheme offers optimal space and locality, and nearly optimal read efficiency that depends on the frequency of the queried keywords: For a keyword that is associated with $n = N^{1-\epsilon(n)}$ document identifiers, the read efficiency is $\omega(1) \cdot \epsilon(n)^{-1} + O(\log\log\log N)$ when retrieving its identifiers (where the $\omega(1)$ term may be arbitrarily small, and $\omega(1) \cdot \epsilon(n)^{-1}$ is the lower bound proved by Asharov et al.). In particular, for any keyword that is associated with at most $N^{1 - 1/o(\log\log\log N)}$ document identifiers (i.e., for any keyword that is not exceptionally common), we provide read efficiency $O(\log\log\log N)$ when retrieving its identifiers.
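To unpack the frequency-dependent bound, here is a small worked computation; the database size $N = 2^{30}$ and the sample frequencies are illustrative assumptions, not values from the paper:

```python
import math

N = 2**30  # illustrative database size
for n in (2**10, 2**20, 2**28):      # sample keyword frequencies
    eps = 1 - math.log(n, N)         # n = N^(1-eps)  =>  eps = 1 - log_N(n)
    print(f"n = 2^{round(math.log2(n))}: eps = {eps:.3f}, 1/eps = {1/eps:.1f}")
```

Rare keywords have $\epsilon(n)$ close to 1, so the $\omega(1) \cdot \epsilon(n)^{-1}$ term stays small, while very common keywords drive $\epsilon(n)^{-1}$, and hence the necessary read efficiency, up.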
2020
EUROCRYPT
OptORAMa: Optimal Oblivious RAM
Oblivious RAM (ORAM), first introduced in the ground-breaking work of Goldreich and Ostrovsky (STOC '87 and J. ACM '96), is a technique for provably obfuscating programs' access patterns, such that the access patterns leak no information about the programs' secret inputs. To compile a general program to an oblivious counterpart, it is well-known that $\Omega(\log N)$ amortized blowup is necessary, where $N$ is the size of the logical memory. This was shown in Goldreich and Ostrovsky's original ORAM work for statistical security and in a somewhat restricted model (the so-called \emph{balls-and-bins} model), and recently by Larsen and Nielsen (CRYPTO '18) for computational security. A long-standing open question is whether there exists an optimal ORAM construction that matches the aforementioned logarithmic lower bounds (without making large memory word assumptions, and assuming a constant number of CPU registers). In this paper, we resolve this problem and present the first secure ORAM with $O(\log N)$ amortized blowup, assuming one-way functions. Our result is inspired by and non-trivially improves on the recent beautiful work of Patel et al. (FOCS '18), who gave a construction with $O(\log N\cdot \log\log N)$ amortized blowup, assuming one-way functions. One of our building blocks of independent interest is a linear-time deterministic oblivious algorithm for tight compaction: Given an array of $n$ elements where some elements are marked, we permute the elements in the array so that all marked elements end up in the front of the array. Our $O(n)$ algorithm improves the previously best known deterministic or randomized algorithms, whose running time is $O(n \cdot\log n)$ or $O(n \cdot\log \log n)$, respectively.
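As a point of reference for the compaction building block, this naive sketch shows only the functionality: it runs in linear time but its memory accesses depend on the marks, which is precisely what the paper's deterministic oblivious $O(n)$ algorithm must avoid:

```python
def tight_compact(arr: list, marked: list[bool]) -> list:
    """Move all marked elements to the front (non-oblivious reference)."""
    front = [x for x, m in zip(arr, marked) if m]
    back = [x for x, m in zip(arr, marked) if not m]
    return front + back

assert tight_compact([5, 1, 7, 3], [False, True, False, True]) == [1, 3, 5, 7]
```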
2019
EUROCRYPT
Locality-Preserving Oblivious RAM
Oblivious RAMs, introduced by Goldreich and Ostrovsky [JACM'96], compile any RAM program into one that is “memory oblivious”, i.e., the access pattern to the memory is independent of the input. All previous ORAM schemes, however, completely break the locality of data accesses (for instance, by shuffling the data to pseudorandom positions in memory). In this work, we initiate the study of locality-preserving ORAMs: ORAMs that preserve locality of the accessed memory regions, while leaking only the lengths of contiguous memory regions accessed. Our main results demonstrate the existence of a locality-preserving ORAM with poly-logarithmic overhead both in terms of bandwidth and locality. We also study the tradeoff between locality, bandwidth and leakage, and show that any scheme that preserves locality and does not leak the lengths of the contiguous memory regions accessed suffers from prohibitive bandwidth. To the best of our knowledge, before our work, the only works combining locality and obliviousness were for symmetric searchable encryption [e.g., Cash and Tessaro (EUROCRYPT'14), Asharov et al. (STOC'16)]. Symmetric searchable encryption ensures obliviousness if each keyword is searched only once, whereas ORAM provides obliviousness to any input program. Thus, our work generalizes that line of work to the much more challenging task of preserving locality in ORAMs.
2018
JOFC
2018
CRYPTO
On the Complexity of Compressing Obfuscation
Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct. In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple-to-describe primitives that is known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (PKC 2016) and Bitansky et al. (TCC 2016) by allowing for a broader regime of parameters. We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation. First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives. Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions. Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.
2018
CRYPTO
Tight Tradeoffs in Searchable Symmetric Encryption
Gilad Asharov Gil Segev Ido Shahaf
A searchable symmetric encryption (SSE) scheme enables a client to store data on an untrusted server while supporting keyword searches in a secure manner. Recent experiments have indicated that the practical relevance of such schemes heavily relies on the tradeoff between their space overhead, locality (the number of non-contiguous memory locations that the server accesses with each query), and read efficiency (the ratio between the number of bits the server reads with each query and the actual size of the answer). These experiments motivated Cash and Tessaro (EUROCRYPT '14) and Asharov et al. (STOC '16) to construct SSE schemes offering various such tradeoffs, and to prove lower bounds for natural SSE frameworks. Unfortunately, the best-possible tradeoff has not been identified, and there are substantial gaps between the existing schemes and lower bounds, indicating that a better understanding of SSE is needed. We establish tight bounds on the tradeoff between the space overhead, locality and read efficiency of SSE schemes within two general frameworks that capture the memory access pattern underlying all existing schemes. First, we introduce the “pad-and-split” framework, refining that of Cash and Tessaro while still capturing the same existing schemes. Within our framework we significantly strengthen their lower bound, proving that any scheme with locality $L$ must use space $\Omega(N \log N / \log L)$ for databases of size $N$. This is a tight lower bound, matching the tradeoff provided by the scheme of Demertzis and Papamanthou (SIGMOD '17), which is captured by our pad-and-split framework. Then, within the “statistical-independence” framework of Asharov et al. we show that their lower bound is essentially tight: We construct a scheme whose tradeoff matches their lower bound within an additive $O(\log\log\log N)$ factor in its read efficiency, once again improving upon the existing schemes. Our scheme offers optimal space and locality, and nearly-optimal read efficiency that depends on the frequency of the queried keywords: For a keyword that is associated with $n = N^{1-\epsilon(n)}$ document identifiers, the read efficiency is $\omega(1) \cdot \epsilon(n)^{-1} + O(\log\log\log N)$ when retrieving its identifiers (where the $\omega(1)$ term may be arbitrarily small, and $\omega(1) \cdot \epsilon(n)^{-1}$ is the lower bound proved by Asharov et al.). In particular, for any keyword that is associated with at most $N^{1 - 1/o(\log\log\log N)}$ document identifiers (i.e., for any keyword that is not exceptionally common), we provide read efficiency $O(\log\log\log N)$ when retrieving its identifiers.
2017
JOFC
2017
JOFC
2016
TCC
2016
JOFC
2015
TCC
2015
EUROCRYPT
2014
TCC
2013
TCC
2013
ASIACRYPT
2012
EUROCRYPT
2012
ASIACRYPT
2011
JOFC
2011
CRYPTO
2011
EUROCRYPT
2009
CRYPTO

Program Committees

Crypto 2024
Eurocrypt 2023
TCC 2022
TCC 2021
PKC 2020
TCC 2019
TCC 2018
Eurocrypt 2017