International Association for Cryptologic Research

CryptoDB

Gil Segev

Publications

Year
Venue
Title
2023
TCC
Rogue-Instance Security for Batch Knowledge Proofs
We propose a new notion of knowledge soundness, denoted \emph{rogue-instance security}, for interactive and non-interactive \emph{batch} knowledge proofs. Our notion, inspired by the standard notion of rogue-key security for multi-signature schemes, considers a setting in which a malicious prover is provided with an honestly-generated instance $x_1$, and may then be able to maliciously generate related ``rogue'' instances $x_2,\ldots,x_k$ for convincing a verifier in a batch knowledge proof of corresponding witnesses $w_1,\ldots,w_k$ for all $k$ instances -- without actually having knowledge of the witness $w_1$ corresponding to the honestly-generated instance. This setting provides a powerful security guarantee for batch versions of a wide variety of practically-relevant protocols, such as Schnorr's protocol and similar ones.

We present a highly-efficient generic construction of a batch proof-of-knowledge applicable to any \emph{algebraic} Sigma protocol. The algebraic property refers to a homomorphic structure of the underlying group and is satisfied by Schnorr's protocol, among others. We provide an almost tight security analysis for our generic batch protocol, which significantly improves upon the previously known security bounds even for the specific case of the batch Schnorr protocol. We extend our results beyond algebraic Sigma protocols: We analyze the rogue-instance security of a general batch protocol with plus-one special soundness (a generalization of standard special soundness) and achieve improved security bounds in the generic case.

Our results use a particular type of \emph{high-moment} assumptions introduced by Rotem and Segev (CRYPTO 2021). These assumptions consider the hardness of a relation against algorithms with bounded \emph{expected} running time. Although Rotem and Segev introduced these assumptions, they did not provide evidence to support their hardness. To substantiate and validate the high-moment assumptions, we present a new framework for assessing the concrete hardness of cryptographic problems against oracle algorithms with bounded expected runtime. Our framework covers generic models, including the generic group model, random oracle model, and more. Utilizing our framework, we achieve the first hardness result for these high-moment assumptions. In particular, we establish the second-moment hardness of the discrete-logarithm problem against expected-time algorithms in the generic group model.
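To make the batching concrete, here is a minimal Python sketch of one standard way to batch Schnorr proofs, aggregating all responses via powers of a single Fiat-Shamir challenge. It illustrates the general shape of batch Sigma protocols, not necessarily the paper's exact construction, and the group parameters are toy-sized.

```python
import secrets
import hashlib

# Toy Schnorr group: p = 2q + 1, g generates the order-q subgroup.
# Toy-sized parameters for illustration only; never use in practice.
p, q, g = 2039, 1019, 4

def keygen():
    w = secrets.randbelow(q)                 # witness (discrete log)
    return w, pow(g, w, p)                   # instance x = g^w

def challenge(a, instances):
    h = hashlib.sha256(repr((a, tuple(instances))).encode()).digest()
    return int.from_bytes(h, "big") % q

def batch_prove(witnesses, instances):
    r = secrets.randbelow(q)
    a = pow(g, r, p)                         # single commitment for all instances
    e = challenge(a, instances)              # Fiat-Shamir challenge
    # one response aggregates all witnesses via powers of the challenge
    z = (r + sum(pow(e, i + 1, q) * w for i, w in enumerate(witnesses))) % q
    return a, z

def batch_verify(instances, a, z):
    e = challenge(a, instances)
    rhs = a
    for i, x in enumerate(instances):
        rhs = rhs * pow(x, pow(e, i + 1, q), p) % p
    return pow(g, z, p) == rhs               # g^z =? a * prod_i x_i^(e^i)

pairs = [keygen() for _ in range(3)]
ws, xs = [w for w, _ in pairs], [x for _, x in pairs]
a, z = batch_prove(ws, xs)
assert batch_verify(xs, a, z)
```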
2023
JOFC
Non-malleable Vector Commitments via Local Equivocability
Lior Rotem, Gil Segev
Vector commitments (VCs), enabling one to commit to a vector and locally reveal any of its entries, play a key role in a variety of both classic and recently evolving applications. However, security notions for VCs have so far focused on passive attacks, and non-malleability notions considering active attacks have not been explored. Moreover, existing frameworks that may capture the non-malleability of VCs seem either too weak (non-malleable non-interactive commitments that do not account for the security implications of local openings) or too strong (non-malleable zero-knowledge sets that support both membership and non-membership proofs).

We put forward a rigorous framework capturing the non-malleability of VCs, striking a careful balance between the existing weaker and stronger frameworks: We strengthen the framework of non-malleable non-interactive commitments by considering attackers that may be exposed to local openings, and we relax the framework of non-malleable zero-knowledge sets by focusing on membership proofs. In addition, we strengthen both frameworks by supporting (inherently private) updates to entries of committed vectors, and discuss the benefits of non-malleable VCs in the context of both UTXO-based and account-based stateless blockchains, and in the context of simultaneous multi-round auctions (which have been adopted by the US Federal Communications Commission as the standard auction format for selling spectrum ranges).

Within our framework, we present a direct approach for constructing non-malleable VCs whose efficiency essentially matches that of the existing standard VCs. Specifically, we show that any VC can be transformed into a non-malleable one, relying on a new primitive that we put forth. Our new primitive, locally equivocable commitments with all-but-one binding, is evidently both conceptually and technically simpler compared to multi-trapdoor mercurial trapdoor commitments (the main building block underlying existing non-malleable zero-knowledge sets), and admits more efficient instantiations based on the same number-theoretic assumptions.
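For readers unfamiliar with the VC interface, the following Python sketch shows a minimal Merkle-tree vector commitment with commit, local opening, and verification. It only illustrates the interface the paper builds on; the paper's non-malleable construction is a transformation applied on top of such standard VCs.

```python
import hashlib

def H(*parts):
    return hashlib.sha256(b"|".join(parts)).digest()

def commit(vector):                     # vector: list of bytes, length a power of two
    level = [H(b"leaf", v) for v in vector]
    tree = [level]
    while len(level) > 1:
        level = [H(b"node", level[i], level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree[-1][0], tree            # (commitment, opening state)

def open_at(tree, i):                   # local opening: one sibling per level
    proof = []
    for level in tree[:-1]:
        proof.append(level[i ^ 1])
        i //= 2
    return proof

def verify(com, i, value, proof):
    h = H(b"leaf", value)
    for sibling in proof:
        h = H(b"node", h, sibling) if i % 2 == 0 else H(b"node", sibling, h)
        i //= 2
    return h == com

vec = [b"a", b"b", b"c", b"d"]
com, tree = commit(vec)
assert verify(com, 2, b"c", open_at(tree, 2))
```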
2021
CRYPTO
Tighter Security for Schnorr Identification and Signatures: A High-Moment Forking Lemma for $\Sigma$-Protocols
Lior Rotem, Gil Segev
The Schnorr identification and signature schemes have been amongst the most influential cryptographic protocols of the past three decades. Unfortunately, although the best-known attacks on these two schemes are via discrete-logarithm computation, the known approaches for basing their security on the hardness of the discrete logarithm problem encounter the ``square-root barrier''. In particular, in any group of order $p$ where Shoup's generic hardness result for the discrete logarithm problem is believed to hold (and is thus used for setting concrete security parameters), the best-known $t$-time attacks on the Schnorr identification and signature schemes have success probability $t^2/p$, whereas existing proofs of security only rule out attacks with success probabilities $(t^2/p)^{1/2}$ and $(q_{\mathsf{Hash}} \cdot t^2/p)^{1/2}$, respectively, where $q_{\mathsf{Hash}}$ denotes the number of random-oracle queries issued by the attacker.

We establish tighter security guarantees for identification and signature schemes which result from $\Sigma$-protocols with special soundness based on the hardness of their underlying relation, and in particular for Schnorr's schemes based on the hardness of the discrete logarithm problem. We circumvent the square-root barrier by introducing a high-moment generalization of the classic forking lemma, relying on the assumption that the underlying relation is ``$d$-moment hard'': The success probability of any algorithm in the task of producing a witness for a random instance is dominated by the $d$-th moment of the algorithm's running time.

In the concrete context of the discrete logarithm problem, Shoup's original proof already shows that the discrete logarithm problem is $2$-moment hard in the generic-group model, and thus our assumption can be viewed as a highly-plausible strengthening of the discrete logarithm assumption in any group where no better-than-generic algorithms are currently known. Applying our high-moment forking lemma in this context shows that, assuming the $2$-moment hardness of the discrete logarithm problem, any $t$-time attacker breaks the security of the Schnorr identification and signature schemes with probabilities at most $(t^2/p)^{2/3}$ and $(q_{\mathsf{Hash}} \cdot t^2/p)^{2/3}$, respectively.
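As a quick sense of scale for these exponents, the following sketch compares the bounds for an illustrative group of order $p \approx 2^{256}$ and a $t = 2^{64}$-time attacker (the numbers are assumptions chosen purely for illustration):

```python
# Compare the success-probability bounds discussed above (in bits),
# for illustrative parameters p ~ 2^256 and t = 2^64.
log2_p, log2_t = 256, 64
log2_ratio = 2 * log2_t - log2_p                      # log2(t^2 / p) = -128

print(f"best-known attack:        2^{log2_ratio}")     # t^2/p         = 2^-128
print(f"previous provable bound:  2^{log2_ratio // 2}")  # (t^2/p)^(1/2) = 2^-64
print(f"high-moment bound:        2^{2 * log2_ratio / 3:.1f}")  # (t^2/p)^(2/3) ~ 2^-85.3
```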
2021
TCC
Non-Malleable Vector Commitments via Local Equivocability
Lior Rotem, Gil Segev
Vector commitments (VCs), enabling one to commit to a vector and locally reveal any of its entries, play a key role in a variety of both classic and recently-evolving applications. However, security notions for VCs have so far focused on passive attacks, and non-malleability notions considering active attacks have not been explored. Moreover, existing frameworks that may capture the non-malleability of VCs seem either too weak (non-malleable non-interactive commitments that do not account for the security implications of local openings) or too strong (non-malleable zero-knowledge sets that support both membership and non-membership proofs).

We put forward a rigorous framework capturing the non-malleability of VCs, striking a careful balance between the existing weaker and stronger frameworks: We strengthen the framework of non-malleable non-interactive commitments by considering attackers that may be exposed to local openings, and we relax the framework of non-malleable zero-knowledge sets by focusing on membership proofs. In addition, we strengthen both frameworks by supporting (inherently-private) updates to entries of committed vectors, and discuss the benefits of non-malleable VCs in the context of both UTXO-based and account-based stateless blockchains, and in the context of simultaneous multi-round auctions (which have been adopted by the US Federal Communications Commission as the standard auction format for selling spectrum ranges).

Within our framework we present a direct approach for constructing non-malleable VCs whose efficiency essentially matches that of the existing standard VCs. Specifically, we show that any VC can be transformed into a non-malleable one, relying on a new primitive that we put forth. Our new primitive, locally-equivocable commitments with all-but-one binding, is evidently both conceptually and technically simpler compared to multi-trapdoor mercurial trapdoor commitments (the main building block underlying existing non-malleable zero-knowledge sets), and admits more efficient instantiations based on the same number-theoretic assumptions.
2021
JOFC
Injective Trapdoor Functions via Derandomization: How Strong is Rudich’s Black-Box Barrier?
Lior Rotem, Gil Segev
We present a cryptographic primitive $\mathcal{P}$ satisfying the following properties:

Rudich’s seminal impossibility result (PhD thesis ’88) shows that $\mathcal{P}$ cannot be used in a black-box manner to construct an injective one-way function.

$\mathcal{P}$ can be used in a non-black-box manner to construct an injective one-way function assuming the existence of a hitting-set generator that fools deterministic circuits (such a generator is known to exist based on the worst-case assumption that $\text{E} = \text{DTIME}(2^{O(n)})$ has a function of deterministic circuit complexity $2^{\Omega(n)}$). The non-black-box aspect of our construction only requires a bound on the size of $\mathcal{P}$’s implementation.

Augmenting $\mathcal{P}$ with a trapdoor algorithm enables a non-black-box construction of an injective trapdoor function (once again, assuming the existence of a hitting-set generator that fools deterministic circuits), while Rudich’s impossibility result still holds.

The primitive $\mathcal{P}$ and its augmented variant can be constructed based on any injective one-way function and on any injective trapdoor function, respectively, and they are thus unconditionally essential for the existence of such functions. Moreover, $\mathcal{P}$ can also be constructed based on various known primitives that are secure against related-key attacks (e.g., pseudorandom functions), thus enabling us to base the strong structural guarantees of injective one-way functions on the strong security guarantees of such primitives.

Our application of derandomization techniques is inspired mainly by the work of Barak, Ong and Vadhan (CRYPTO ’03), which on one hand relies on any one-way function, but on the other hand only results in a non-interactive perfectly binding commitment scheme (offering significantly weaker structural guarantees compared to injective one-way functions) and does not seem to enable an extension to public-key primitives.

The key observation underlying our approach is that Rudich’s impossibility result applies not only to one-way functions as the underlying primitive, but in fact to a variety of “unstructured” primitives. We put forward a condition for identifying such primitives, and then subtly tailor the properties of our primitives such that they are both sufficiently unstructured to satisfy this condition, and sufficiently structured to yield injective one-way and trapdoor functions. This circumvents the basic approach underlying Rudich’s long-standing evidence for the difficulty of constructing injective one-way functions (and, in particular, injective trapdoor functions) based on seemingly weaker or unstructured assumptions.
2021
JOFC
Tight Tradeoffs in Searchable Symmetric Encryption
A searchable symmetric encryption (SSE) scheme enables a client to store data on an untrusted server while supporting keyword searches in a secure manner. Recent experiments have indicated that the practical relevance of such schemes heavily relies on the tradeoff between their space overhead, locality (the number of non-contiguous memory locations that the server accesses with each query), and read efficiency (the ratio between the number of bits the server reads with each query and the actual size of the answer). These experiments motivated Cash and Tessaro (EUROCRYPT ’14) and Asharov et al. (STOC ’16) to construct SSE schemes offering various such tradeoffs and to prove lower bounds for natural SSE frameworks. Unfortunately, the best-possible tradeoff has not been identified, and there are substantial gaps between the existing schemes and lower bounds, indicating that a better understanding of SSE is needed.

We establish tight bounds on the tradeoff between the space overhead, locality and read efficiency of SSE schemes within two general frameworks that capture the memory access pattern underlying all existing schemes. First, we introduce the “pad-and-split” framework, refining that of Cash and Tessaro while still capturing the same existing schemes. Within our framework we significantly strengthen their lower bound, proving that any scheme with locality $L$ must use space $\Omega(N \log N / \log L)$ for databases of size $N$. This is a tight lower bound, matching the tradeoff provided by the scheme of Demertzis and Papamanthou (SIGMOD ’17), which is captured by our pad-and-split framework.

Then, within the “statistical-independence” framework of Asharov et al. we show that their lower bound is essentially tight: We construct a scheme whose tradeoff matches their lower bound within an additive $O(\log \log \log N)$ factor in its read efficiency, once again improving upon the existing schemes. Our scheme offers optimal space and locality, and nearly optimal read efficiency that depends on the frequency of the queried keywords: For a keyword that is associated with $n = N^{1 - \epsilon(n)}$ document identifiers, the read efficiency is $\omega(1) \cdot \epsilon(n)^{-1} + O(\log \log \log N)$ when retrieving its identifiers (where the $\omega(1)$ term may be arbitrarily small, and $\omega(1) \cdot \epsilon(n)^{-1}$ is the lower bound proved by Asharov et al.). In particular, for any keyword that is associated with at most $N^{1 - 1/o(\log \log \log N)}$ document identifiers (i.e., for any keyword that is not exceptionally common), we provide read efficiency $O(\log \log \log N)$ when retrieving its identifiers.
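The space/locality/read-efficiency tradeoff is easiest to see in a toy memory layout. The following Python sketch illustrates the padding idea behind the "pad-and-split" framework: each keyword's list is padded to the next power of two and stored contiguously, so a query reads one contiguous region (locality 1) at the price of padded space. This is a sketch of the memory-layout concept only, not an actual encrypted scheme.

```python
def next_pow2(n):
    return 1 << (n - 1).bit_length()

def build_index(db):                 # db: {keyword: [doc ids]}
    memory, pointers = [], {}
    for kw, ids in db.items():
        padded = ids + [None] * (next_pow2(len(ids)) - len(ids))
        pointers[kw] = (len(memory), len(padded))
        memory.extend(padded)        # one contiguous region per keyword
    return memory, pointers

def query(memory, pointers, kw):
    start, length = pointers[kw]
    block = memory[start:start + length]          # a single contiguous read
    return [x for x in block if x is not None]

db = {"crypto": [1, 2, 3], "sse": [4]}
memory, ptrs = build_index(db)
assert query(memory, ptrs, "crypto") == [1, 2, 3]
# read efficiency here: |block| / |answer| < 2 (padding to a power of two)
```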
2021
JOFC
Can PPAD Hardness be Based on Standard Cryptographic Assumptions?
We consider the question of whether PPAD hardness can be based on standard cryptographic assumptions, such as the existence of one-way functions or public-key encryption. This question is particularly well-motivated in light of new devastating attacks on obfuscation candidates and their underlying building blocks, which are currently the only known source for PPAD hardness. Central in the study of obfuscation-based PPAD hardness is the sink-of-verifiable-line (SVL) problem, an intermediate step in constructing instances of the PPAD-complete problem source-or-sink. Within the framework of black-box reductions, we prove the following results:

(i) Average-case PPAD hardness (and even SVL hardness) does not imply any form of cryptographic hardness (not even one-way functions). Moreover, even when assuming the existence of one-way functions, average-case PPAD hardness (and, again, even SVL hardness) does not imply any public-key primitive. Thus, strong cryptographic assumptions (such as obfuscation-related ones) are not essential for average-case PPAD hardness.

(ii) Average-case SVL hardness cannot be based either on standard cryptographic assumptions or on average-case PPAD hardness. In particular, average-case SVL hardness is not essential for average-case PPAD hardness.

(iii) Any attempt for basing the average-case hardness of the PPAD-complete problem source-or-sink on standard cryptographic assumptions must result in instances with a nearly exponential number of solutions. This stands in striking contrast to the obfuscation-based approach, which results in instances having a unique solution.

Taken together, our results imply that it may still be possible to base PPAD hardness on standard cryptographic assumptions, but any such black-box attempt must significantly deviate from the obfuscation-based approach: It cannot go through the SVL problem, and it must result in source-or-sink instances with a nearly exponential number of solutions.
2020
EUROCRYPT
Generic-Group Delay Functions Require Hidden-Order Groups
Despite the fundamental importance of delay functions, underlying both the classic notion of a time-lock puzzle and the more recent notion of a verifiable delay function, the only known delay function that offers both sufficient structure for realizing these two notions and a realistic level of practicality is the ``iterated squaring'' construction of Rivest, Shamir and Wagner. This construction, however, is based on rather strong assumptions in groups of hidden orders, such as the RSA group (which requires a trusted setup) or the class group of an imaginary quadratic number field (which is still somewhat insufficiently explored from the cryptographic perspective). For more than two decades, the challenge of constructing delay functions in groups of known orders, admitting a variety of well-studied instantiations, has eluded the cryptography community.

In this work we prove that there are no constructions of generic-group delay functions in cyclic groups of known orders: We show that for any delay function that does not exploit any particular property of the representation of the underlying group, there exists an attacker that completely breaks the function's sequentiality when given the group's order. As any time-lock puzzle and verifiable delay function give rise to a delay function, our result holds for these two notions as well, and explains the lack of success in resolving the above-mentioned long-standing challenge. Moreover, our result holds even if the underlying group is equipped with a $d$-linear map, for any constant $d \geq 2$ (and even for super-constant values of $d$ under certain conditions).
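The ``iterated squaring'' construction referenced above computes $y = x^{2^T} \bmod N$ for an RSA modulus $N$. A minimal Python sketch (with toy parameters chosen only for illustration) shows both the sequential evaluation and the factoring-based shortcut that a hidden-order group is meant to deny to everyone except a trusted party:

```python
def iterated_squaring(x, T, N):
    y = x % N
    for _ in range(T):               # T sequential modular squarings
        y = y * y % N
    return y

def shortcut(x, T, N, phi):          # possible only given N's factorization:
    return pow(x, pow(2, T, phi), N) # reduce the exponent 2^T modulo phi(N)

p, q = 1009, 1013                    # toy primes; never use in practice
N, phi = p * q, (p - 1) * (q - 1)
assert iterated_squaring(7, 1000, N) == shortcut(7, 1000, N, phi)
```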
2020
CRYPTO
Generically Speeding-Up Repeated Squaring is Equivalent to Factoring: Sharp Thresholds for All Generic-Ring Delay Functions
Lior Rotem, Gil Segev
Despite the fundamental importance of delay functions, repeated squaring in RSA groups (Rivest, Shamir and Wagner '96) is the only candidate offering both a useful structure and a realistic level of practicality. Somewhat unsatisfyingly, its sequentiality is provided directly by assumption (i.e., the function is assumed to be a delay function).

We prove sharp thresholds on the sequentiality of all generic-ring delay functions relative to an RSA modulus based on the hardness of factoring in the standard model. In particular, we show that generically speeding-up repeated squaring (even with a preprocessing stage and any polynomial number of parallel processors) is equivalent to factoring. More generally, based on the (essential) hardness of factoring, we prove that any generic-ring function is in fact a delay function, admitting a sharp sequentiality threshold that is determined by our notion of sequentiality depth. Moreover, we show that generic-ring functions admit not only sharp sequentiality thresholds, but also sharp pseudorandomness thresholds.
2020
TCC
Accumulators in (and Beyond) Generic Groups: Non-Trivial Batch Verification Requires Interaction
Gili Schul-Ganz, Gil Segev
We prove a tight lower bound on the number of group operations required for batch verification by any generic-group accumulator that stores a less-than-trivial amount of information. Specifically, we show that $\Omega(t \cdot (\lambda / \log \lambda))$ group operations are required for the batch verification of any subset of $t \geq 1$ elements, where $\lambda \in \mathbb{N}$ is the security parameter, thus ruling out non-trivial batch verification in the standard non-interactive manner.

Our lower bound applies already to the most basic form of accumulators (i.e., static accumulators that support membership proofs), and holds both for known-order (and even multilinear) groups and for unknown-order groups, where it matches the asymptotic performance of the known bilinear and RSA accumulators, respectively. In addition, it complements the techniques underlying the generic-group accumulators of Boneh, Bünz and Fisch (CRYPTO '19) and Thakur (ePrint '19) by justifying their application of the Fiat-Shamir heuristic for transforming their interactive batch-verification protocols into non-interactive procedures.

Moreover, motivated by a fundamental challenge introduced by Aggarwal and Maurer (EUROCRYPT '09), we propose an extension of the generic-group model that enables us to capture a bounded amount of arbitrary non-generic information (e.g., least-significant bits or Jacobi symbols that are hard to compute generically but are easy to compute non-generically). We prove our lower bound within this extended model, which may be of independent interest for strengthening the implications of impossibility results in idealized models.
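For context, here is a minimal Python sketch of an RSA accumulator with batch membership verification, the non-interactive operation the lower bound concerns: the batch witness for a subset is checked with a single exponentiation by the product of the subset's elements. The parameters are toy-sized assumptions for illustration; a real accumulator uses an unknown-order modulus and maps arbitrary elements to prime representatives.

```python
from math import prod

p, q = 1009, 1013                    # toy RSA modulus; never use in practice
N, g = p * q, 3

elements = [5, 7, 11, 13]            # accumulated set (distinct primes)
acc = pow(g, prod(elements), N)      # accumulator value g^(prod of elements)

def batch_witness(subset):           # witness = g^(product of the other elements)
    rest = [e for e in elements if e not in subset]
    return pow(g, prod(rest), N)

def batch_verify(acc, subset, witness):
    return pow(witness, prod(subset), N) == acc

w = batch_witness([5, 11])
assert batch_verify(acc, [5, 11], w)
```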
2020
TCC
Algebraic Distinguishers: From Discrete Logarithms to Decisional Uber Assumptions
Lior Rotem, Gil Segev
The algebraic group model, introduced by Fuchsbauer, Kiltz and Loss (CRYPTO '18), is a substantial relaxation of the generic group model capturing algorithms that may exploit the representation of the underlying group. This idealized yet realistic model was shown useful for reasoning about cryptographic assumptions and security properties defined via computational problems. However, it does not generally capture assumptions and properties defined via decisional problems. As such problems play a key role in the foundations and applications of cryptography, this leaves a significant gap between the restrictive generic group model and the standard model.

We put forward the notion of algebraic distinguishers, strengthening the algebraic group model by enabling it to capture decisional problems. Within our framework we then reveal new insights on the algebraic interplay between a wide variety of decisional assumptions. These include the decisional Diffie-Hellman assumption, the family of Linear assumptions in multilinear groups, and the family of Uber assumptions in bilinear groups. Our main technical results establish that, from an algebraic perspective, these decisional assumptions are in fact all polynomially equivalent to either the most basic discrete logarithm assumption or to its higher-order variant, the $q$-discrete logarithm assumption. On the one hand, these results increase the confidence in these strong decisional assumptions, while on the other hand, they make it possible to direct cryptanalytic efforts towards either extracting discrete logarithms or significantly deviating from standard algebraic techniques.
2019
JOFC
From Minicrypt to Obfustopia via Private-Key Functional Encryption
Ilan Komargodski, Gil Segev
Private-key functional encryption enables fine-grained access to symmetrically encrypted data. Although private-key functional encryption (supporting an unbounded number of keys and ciphertexts) seems significantly weaker than its public-key variant, its known realizations all rely on public-key functional encryption. At the same time, however, up until recently it was not known to imply any public-key primitive, demonstrating our poor understanding of this primitive.

Bitansky et al. (TCC 2016-B) showed that sub-exponentially secure private-key functional encryption bridges from nearly exponential security in Minicrypt to slightly super-polynomial security in Cryptomania, and from sub-exponential security in Cryptomania to Obfustopia. Specifically, given any sub-exponentially secure private-key functional encryption scheme and a nearly exponentially secure one-way function, they constructed a public-key encryption scheme with slightly super-polynomial security. Assuming, in addition, a sub-exponentially secure public-key encryption scheme, they then constructed an indistinguishability obfuscator (or a public-key functional encryption scheme if the given building blocks are polynomially secure).

We show that quasi-polynomially secure private-key functional encryption bridges from sub-exponential security in Minicrypt all the way to Cryptomania. First, given any quasi-polynomially secure private-key functional encryption scheme, we construct an indistinguishability obfuscator for circuits with inputs of poly-logarithmic length. Then, we observe that such an obfuscator can be used to instantiate many natural applications of indistinguishability obfuscation. Specifically, relying on sub-exponentially secure one-way functions, we show that quasi-polynomially secure private-key functional encryption implies not just public-key encryption but leads all the way to public-key functional encryption for circuits with inputs of poly-logarithmic length. Moreover, relying on sub-exponentially secure injective one-way functions, we show that quasi-polynomially secure private-key functional encryption implies a hard-on-average distribution over instances of a PPAD-complete problem.

Underlying our constructions is a new transformation from single-input functional encryption to multi-input functional encryption in the private-key setting. The previously known such transformation (Brakerski et al., J. Cryptol. 2018) required a sub-exponentially secure single-input scheme, and obtained a scheme supporting only a slightly super-constant number of inputs. Our transformation both relaxes the underlying assumption and supports more inputs: Given any quasi-polynomially secure single-input scheme, we obtain a scheme supporting a poly-logarithmic number of inputs.
2018
JOFC
2018
JOFC
2018
JOFC
2018
JOFC
2018
JOFC
2018
EUROCRYPT
2018
CRYPTO
Tight Tradeoffs in Searchable Symmetric Encryption
A searchable symmetric encryption (SSE) scheme enables a client to store data on an untrusted server while supporting keyword searches in a secure manner. Recent experiments have indicated that the practical relevance of such schemes heavily relies on the tradeoff between their space overhead, locality (the number of non-contiguous memory locations that the server accesses with each query), and read efficiency (the ratio between the number of bits the server reads with each query and the actual size of the answer). These experiments motivated Cash and Tessaro (EUROCRYPT ’14) and Asharov et al. (STOC ’16) to construct SSE schemes offering various such tradeoffs, and to prove lower bounds for natural SSE frameworks. Unfortunately, the best-possible tradeoff has not been identified, and there are substantial gaps between the existing schemes and lower bounds, indicating that a better understanding of SSE is needed.

We establish tight bounds on the tradeoff between the space overhead, locality and read efficiency of SSE schemes within two general frameworks that capture the memory access pattern underlying all existing schemes. First, we introduce the “pad-and-split” framework, refining that of Cash and Tessaro while still capturing the same existing schemes. Within our framework we significantly strengthen their lower bound, proving that any scheme with locality $L$ must use space $\Omega(N \log N / \log L)$ for databases of size $N$. This is a tight lower bound, matching the tradeoff provided by the scheme of Demertzis and Papamanthou (SIGMOD ’17) which is captured by our pad-and-split framework.

Then, within the “statistical-independence” framework of Asharov et al. we show that their lower bound is essentially tight: We construct a scheme whose tradeoff matches their lower bound within an additive $O(\log \log \log N)$ factor in its read efficiency, once again improving upon the existing schemes. Our scheme offers optimal space and locality, and nearly-optimal read efficiency that depends on the frequency of the queried keywords: For a keyword that is associated with $n = N^{1 - \epsilon(n)}$ document identifiers, the read efficiency is $\omega(1) \cdot \epsilon(n)^{-1} + O(\log \log \log N)$ when retrieving its identifiers (where the $\omega(1)$ term may be arbitrarily small, and $\omega(1) \cdot \epsilon(n)^{-1}$ is the lower bound proved by Asharov et al.). In particular, for any keyword that is associated with at most $N^{1 - 1/o(\log \log \log N)}$ document identifiers (i.e., for any keyword that is not exceptionally common), we provide read efficiency $O(\log \log \log N)$ when retrieving its identifiers.
2018
CRYPTO
Out-of-Band Authentication in Group Messaging: Computational, Statistical, Optimal
Lior Rotem, Gil Segev
Extensive efforts are currently put into securing messaging platforms, where a key challenge is that of protecting against man-in-the-middle attacks when setting up secure end-to-end channels. The vast majority of these efforts, however, have so far focused on securing user-to-user messaging, and recent attacks indicate that the security of group messaging is still quite fragile.

We initiate the study of out-of-band authentication in the group setting, extending the user-to-user setting where messaging platforms (e.g., Telegram and WhatsApp) protect against man-in-the-middle attacks by assuming that users have access to an external channel for authenticating one short value (e.g., two users who recognize each other’s voice can compare a short value). Inspired by the frameworks of Vaudenay (CRYPTO ’05) and Naor et al. (CRYPTO ’06) in the user-to-user setting, we assume that users communicate over a completely-insecure channel, and that a group administrator can out-of-band authenticate one short message to all users. An adversary may read, remove, or delay this message (for all or for some of the users), but cannot undetectably modify it.

Within our framework we establish tight bounds on the tradeoff between the adversary’s success probability and the length of the out-of-band authenticated message (which is a crucial bottleneck given that the out-of-band channel is of low bandwidth). We consider both computationally-secure and statistically-secure protocols, and for each flavor of security we construct an authentication protocol and prove a lower bound showing that our protocol achieves essentially the best possible tradeoff.

In particular, considering groups that consist of an administrator and $k$ additional users, for statistically-secure protocols we show that at least $(k+1) \cdot (\log(1/\epsilon) - \Theta(1))$ bits must be out-of-band authenticated, whereas for computationally-secure ones $\log(1/\epsilon) + \log k$ bits suffice, where $\epsilon$ is the adversary’s success probability. Moreover, instantiating our computationally-secure protocol in the random-oracle model yields an efficient and practically-relevant protocol (which, alternatively, can also be based on any one-way function in the standard model).
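To put these tradeoffs into numbers, here is a small Python computation for an illustrative group of $k = 9$ additional users and forgery probability $\epsilon = 2^{-30}$; treating the $\Theta(1)$ term as 1 is an assumption made purely for illustration:

```python
import math

k, eps = 9, 2**-30
# (k+1) * (log(1/eps) - Theta(1)), taking the Theta(1) term as 1:
statistical = (k + 1) * (math.log2(1 / eps) - 1)
# log(1/eps) + log k:
computational = math.log2(1 / eps) + math.log2(k)

print(f"statistically-secure:   about {statistical:.0f} bits")    # ~290
print(f"computationally-secure: about {computational:.1f} bits")  # ~33.2
```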
2018
TCC
Injective Trapdoor Functions via Derandomization: How Strong is Rudich’s Black-Box Barrier?
Lior Rotem, Gil Segev
We present a cryptographic primitive $\mathcal{P}$ satisfying the following properties:

Rudich’s seminal impossibility result (PhD thesis ’88) shows that $\mathcal{P}$ cannot be used in a black-box manner to construct an injective one-way function.

$\mathcal{P}$ can be used in a non-black-box manner to construct an injective one-way function assuming the existence of a hitting-set generator that fools deterministic circuits (such a generator is known to exist based on the worst-case assumption that $\text{E} = \text{DTIME}(2^{O(n)})$ has a function of deterministic circuit complexity $2^{\Omega(n)}$).

Augmenting $\mathcal{P}$ with a trapdoor algorithm enables a non-black-box construction of an injective trapdoor function (once again, assuming the existence of a hitting-set generator that fools deterministic circuits), while Rudich’s impossibility result still holds.

The primitive $\mathcal{P}$ and its augmented variant can be constructed based on any injective one-way function and on any injective trapdoor function, respectively, and they are thus unconditionally essential for the existence of such functions. Moreover, $\mathcal{P}$ can also be constructed based on various known primitives that are secure against related-key attacks, thus enabling us to base the strong structural guarantees of injective one-way functions on the strong security guarantees of such primitives.

Our application of derandomization techniques is inspired mainly by the work of Barak, Ong and Vadhan (CRYPTO ’03), which on one hand relies on any one-way function, but on the other hand only results in a non-interactive perfectly-binding commitment scheme (offering significantly weaker structural guarantees compared to injective one-way functions), and does not seem to enable an extension to public-key primitives.

The key observation underlying our approach is that Rudich’s impossibility result applies not only to one-way functions as the underlying primitive, but in fact to a variety of “unstructured” primitives. We put forward a condition for identifying such primitives, and then subtly tailor the properties of our primitives such that they are both sufficiently unstructured to satisfy this condition, and sufficiently structured to yield injective one-way and trapdoor functions. This circumvents the basic approach underlying Rudich’s long-standing evidence for the difficulty of constructing injective one-way functions (and, in particular, injective trapdoor functions) based on seemingly weaker or unstructured assumptions.
2018
TCC
The Security of Lazy Users in Out-of-Band Authentication
Moni Naor, Lior Rotem, Gil Segev
Faced with the threats posed by man-in-the-middle attacks, messaging platforms rely on “out-of-band” authentication, assuming that users have access to an external channel for authenticating one short value. For example, assuming that users recognizing each other’s voice can authenticate a short value, Telegram and WhatsApp ask their users to compare 288-bit and 200-bit values, respectively. The existing protocols, however, do not take into account the plausible behavior of users who may be “lazy” and only compare parts of these values (rather than their entirety).

Motivated by such a security-critical user behavior, we study the security of lazy users in out-of-band authentication. We start by showing that both the protocol implemented by WhatsApp and the statistically-optimal protocol of Naor, Segev and Smith (CRYPTO ’06) are completely vulnerable to man-in-the-middle attacks when the users consider only a half of the out-of-band authenticated value. In this light, we put forward a framework that captures the behavior and security of lazy users. Our notions of security consider both statistical security and computational security, and for each flavor we derive a lower bound on the tradeoff between the number of positions that are considered by the lazy users and the adversary’s forgery probability.

Within our framework we then provide two authentication protocols. First, in the statistical setting, we present a transformation that converts any out-of-band authentication protocol into one that is secure even when executed by lazy users. Instantiating our transformation with a new refinement of the protocol of Naor et al. results in a protocol whose tradeoff essentially matches our lower bound in the statistical setting. Then, in the computational setting, we show that the computationally-optimal protocol of Vaudenay (CRYPTO ’05) is secure even when executed by lazy users, and its tradeoff matches our lower bound in the computational setting.
2018
TCC
Ciphertext Expansion in Limited-Leakage Order-Preserving Encryption: A Tight Computational Lower Bound
Gil Segev, Ido Shahaf
Order-preserving encryption emerged as a key ingredient underlying the security of practical database management systems. Boldyreva et al. (EUROCRYPT ’09) initiated the study of its security by introducing two natural notions of security. They proved that their first notion, a “best-possible” relaxation of semantic security allowing ciphertexts to reveal the ordering of their corresponding plaintexts, is not realizable. Later on, Boldyreva et al. (CRYPTO ’11) proved that any scheme satisfying their second notion, indistinguishability from a random order-preserving function, leaks about half of the bits of a random plaintext.

This unsettling state of affairs was recently changed by Chenette et al. (FSE ’16), who relaxed the above “best-possible” notion and constructed a scheme satisfying it based on any pseudorandom function. In addition to revealing the ordering of any two encrypted plaintexts, ciphertexts in their scheme reveal only the position of the most significant bit on which the plaintexts differ. A significant drawback of their scheme, however, is its substantial ciphertext expansion: Encrypting plaintexts of length $m$ bits results in ciphertexts of length $m \cdot \ell$ bits, where $\ell$ determines the level of security (e.g., $\ell = 80$ in practice).

In this work we prove a lower bound on the ciphertext expansion of any order-preserving encryption scheme satisfying the “limited-leakage” notion of Chenette et al. with respect to non-uniform polynomial-time adversaries, matching the ciphertext expansion of their scheme up to lower-order terms. This improves a recent result of Cash and Zhang (TCC ’18), who proved such a lower bound for schemes satisfying this notion with respect to computationally-unbounded adversaries (capturing, for example, schemes whose security can be proved in the random-oracle model without relying on cryptographic assumptions). Our lower bound applies, in particular, to schemes whose security is proved in the standard model.
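To make the $m \cdot \ell$ expansion concrete, here is a hedged Python sketch in the spirit of limited-leakage order-preserving encryption: each plaintext bit contributes one $\ell$-bit block derived from a PRF applied to the preceding prefix, so comparing ciphertexts numerically resolves at the first differing bit and reveals only its position. This is a simplified illustration of the flavor of such schemes, not the exact construction of Chenette et al.

```python
import hmac, hashlib

ELL = 16                                    # block length ell (toy; e.g., 80 in practice)

def prf(key, prefix):
    mac = hmac.new(key, prefix.encode(), hashlib.sha256).digest()
    return int.from_bytes(mac, "big") % (2**ELL - 1)   # leave room for the +bit

def encrypt(key, bits):                     # bits: plaintext as a string like "1011"
    ct = 0
    for i, b in enumerate(bits):
        block = prf(key, bits[:i]) + int(b)            # PRF(prefix) + bit < 2^ELL
        ct = (ct << ELL) | block            # m blocks of ELL bits: length m * ELL
    return ct

key = b"toy-key"
assert encrypt(key, "0110") < encrypt(key, "0111")     # order is preserved
assert encrypt(key, "0011") < encrypt(key, "0100")
```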
2017
EUROCRYPT
2017
TCC
2017
TCC
2016
EUROCRYPT
2016
TCC
2016
JOFC
2015
TCC
2015
TCC
2015
CRYPTO
2014
EUROCRYPT
2014
JOFC
2013
CRYPTO
2013
CRYPTO
2013
ASIACRYPT
2013
EUROCRYPT
2013
JOFC
Fully Leakage-Resilient Signatures
A signature scheme is fully leakage resilient (Katz and Vaikuntanathan, ASIACRYPT ’09) if it is existentially unforgeable under an adaptive chosen-message attack even in a setting where an adversary may obtain bounded (yet arbitrary) leakage information on all intermediate values that are used throughout the lifetime of the system. This is a strong and meaningful notion of security that captures a wide range of side-channel attacks.

One of the main challenges in constructing fully leakage-resilient signature schemes is dealing with leakage that may depend on the random bits used by the signing algorithm, and constructions of such schemes are known only in the random-oracle model. Moreover, even in the random-oracle model, known schemes are only resilient to leakage of less than half the length of their signing key.

In this paper we construct the first fully leakage-resilient signature schemes without random oracles. We present a scheme that is resilient to any leakage of length $(1 - o(1)) \cdot L$ bits, where $L$ is the length of the signing key. Our approach relies on generic cryptographic primitives, and at the same time admits rather efficient instantiations based on specific number-theoretic assumptions. In addition, we show that our approach extends to the continual-leakage model, recently introduced by Dodis, Haralambiev, Lopez-Alt and Wichs (FOCS ’10), and by Brakerski, Tauman Kalai, Katz and Vaikuntanathan (FOCS ’10). In this model the signing key is allowed to be refreshed, while its corresponding verification key remains fixed, and the amount of leakage is assumed to be bounded only in between any two successive key refreshes.
2012
TCC
2012
EUROCRYPT
2011
TCC
2011
CRYPTO
2011
EUROCRYPT
2010
TCC
2010
PKC
2010
EUROCRYPT
2009
TCC
2009
TCC
2009
ASIACRYPT
2009
CRYPTO
2008
TCC
2008
EUROCRYPT
2006
CRYPTO

Program Committees

TCC 2021
Eurocrypt 2019
PKC 2018
Crypto 2017
PKC 2017
TCC 2016
Crypto 2014
PKC 2012
TCC 2011