Papers from Journal of Cryptology 2021

2021
JOFC
A Bad Day to Die Hard: Correcting the Dieharder Battery
We analyze the Dieharder statistical randomness tests with respect to the accuracy and correct interpretation of their results. We ran all tests, processed 8 TB of quantum-generated data, and obtained the null distributions of first-level and second-level p-values, inspecting whether these p-values are uniformly distributed. The analysis showed that more than half (out of 110) of the Dieharder atomic tests (tests with a particular setting) produce null distributions of p-values that are biased away from the expected uniform one. Additional analysis showed that the key Kolmogorov–Smirnov (KS) test is itself also biased. This increases the probability of false positives (in the right tail) for all Dieharder tests, as KS is used to post-process their results. Moreover, 12 tests (22 atomic tests) produce results significantly biased from the null distribution of the KS test, which may suggest problems in the implementation of these tests.
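A minimal sketch of the two-level testing methodology described above, assuming NumPy and SciPy are available. The `first_level_test` below is a toy monobit test standing in for a Dieharder atomic test; the point is only to show how first-level p-values are collected and their uniformity checked with a second-level KS test, not to reproduce the paper's experiments.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def first_level_test(bits: np.ndarray) -> float:
    """Toy 'monobit' test: two-sided p-value for the bias of a 0/1 sample."""
    n = len(bits)
    s = (2 * bits.sum() - n) / np.sqrt(n)   # approximately N(0,1) under H0
    return 2 * stats.norm.sf(abs(s))

# Under the null hypothesis (truly random input), first-level p-values
# should be close to uniform on [0,1].
pvals = np.array([first_level_test(rng.integers(0, 2, 10_000))
                  for _ in range(1_000)])

# Second level: KS test of the empirical p-value distribution against U(0,1).
ks_stat, second_level_p = stats.kstest(pvals, "uniform")
print(f"KS statistic = {ks_stat:.4f}, second-level p-value = {second_level_p:.4f}")
# A consistently tiny second-level p-value on good random data would signal a
# biased null distribution -- the kind of defect the paper reports for more
# than half of the Dieharder atomic tests.
```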
2021
JOFC
A Cryptographic Analysis of the TLS 1.3 Handshake Protocol
We analyze the handshake protocol of the Transport Layer Security (TLS) protocol, version 1.3. We address both the full TLS 1.3 handshake (the one round-trip time mode, with signatures for authentication and (elliptic-curve) Diffie–Hellman ephemeral ((EC)DHE) key exchange) and the abbreviated resumption/"PSK" mode, which uses a pre-shared key for authentication (with optional (EC)DHE key exchange and zero round-trip time key establishment). Our analysis in the reductionist security framework uses a multi-stage key exchange security model, where each of the many session keys derived in a single TLS 1.3 handshake is tagged with various properties (such as unauthenticated versus unilaterally authenticated versus mutually authenticated, whether it is intended to provide forward security, how it is used in the protocol, and whether the key is protected against replay attacks). We show that these TLS 1.3 handshake protocol modes establish session keys with their desired security properties under standard cryptographic assumptions.
2021
JOFC
A Formal Analysis of Prefetching in Profiled Cache-Timing Attacks on Block Ciphers
Formally bounding side-channel leakage is important to bridge the gap between theory and practice in cryptography. However, bounding side-channel leakage is difficult because a cryptosystem can leak from several sources, and the amount of leakage from a source may vary depending on the implementation of the cipher and the form of attack. To formally analyze the security of a cryptosystem, it is therefore essential to consider each source of leakage independently. This paper considers data prefetching, which is used in most modern cache memories to reduce the miss penalty. We build a framework that helps computer architects theoretically gauge the impact of a data prefetcher in time-driven cache attacks early in the design phase. The framework computes leakage due to the prefetcher using a metric based on the Kullback–Leibler transformation. We use the framework to analyze two commonly used prefetching algorithms, namely sequential and arbitrary-stride prefetching, which form the basis of several other prefetching algorithms. We also demonstrate the framework's use by designing a new prefetching algorithm, called the even–odd prefetcher, that exhibits no leakage in time-driven cache attacks.
2021
JOFC
Actively Secure Setup for SPDZ
We present the first actively secure, practical protocol to generate the distributed secret keys needed in the SPDZ offline protocol. As an added bonus, our protocol ensures that the resulting distribution of the public and secret keys is such that the associated SHE 'noise' analysis is the same as if the distributed keys had been generated by a trusted setup. We implemented the presented protocol for distributed BGV key generation within the SCALE-MAMBA framework. Our protocol makes use of a new method for creating doubly (or even more) authenticated bits in different MPC engines, which has applications in other areas of MPC-based secure computation. We were able to generate keys for two parties and a plaintext size of 64 bits in around 5 minutes, and in a little more than 18 minutes for a 128-bit prime.
2021
JOFC
Adaptively Secure Distributed PRFs from LWE
In distributed pseudorandom functions (DPRFs), a PRF secret key SK is secret-shared among N servers so that each server can locally compute a partial evaluation of the PRF on some input X. A combiner that collects t partial evaluations can then reconstruct the evaluation F(SK, X) of the PRF under the initial secret key. So far, all non-interactive constructions in the standard model are based on lattice assumptions. One caveat is that they are only known to be secure in the static corruption setting, where the adversary chooses the servers to corrupt at the very beginning of the game, before any evaluation query. In this work, we construct the first fully non-interactive adaptively secure DPRF in the standard model. Our construction is proved secure under the LWE assumption against adversaries that may adaptively decide which servers they want to corrupt. We also extend our construction in order to achieve robustness against malicious adversaries.
2021
JOFC
Ascon v1.2: Lightweight Authenticated Encryption and Hashing
Authenticated encryption satisfies the basic need for authenticity and confidentiality in our information infrastructure. In this paper, we provide the specification of Ascon-128 and Ascon-128a. Both authenticated encryption algorithms provide efficient authenticated encryption on resource-constrained devices and on high-end CPUs. Furthermore, they have been selected as the "primary choice" for lightweight authenticated encryption in the final portfolio of the CAESAR competition. In addition, we specify the hash function Ascon-Hash and the extendable output function Ascon-Xof. Moreover, we complement the specification by providing a detailed overview of existing cryptanalysis and implementation results.
2021
JOFC
Bloom Filter Encryption and Applications to Efficient Forward-Secret 0-RTT Key Exchange
Forward secrecy is considered an essential design goal of modern key establishment (KE) protocols, such as TLS 1.3. Furthermore, efficiency considerations such as zero round-trip time (0-RTT), where a client is able to send cryptographically protected payload data along with the very first KE message, are motivated by the practical demand for secure low-latency communication. For a long time, it was unclear whether protocols that simultaneously achieve 0-RTT and full forward secrecy exist. Only recently, the first forward-secret 0-RTT protocol was described by Günther et al. (Eurocrypt 2017). It is based on puncturable encryption: forward secrecy is achieved by "puncturing" the secret key after each decryption operation, such that a given ciphertext can only be decrypted once (cf. also Green and Miers, S&P 2015). Unfortunately, their scheme is completely impractical, since one puncturing operation takes between 30 seconds and several minutes for reasonable security and deployment parameters; this solution is thus only a first feasibility result and not efficient enough to be deployed in practice. In this paper, we introduce a new primitive that we term Bloom filter encryption (BFE), which is derived from the probabilistic Bloom filter data structure. We describe different constructions of BFE schemes and show how these yield new puncturable encryption mechanisms with extremely efficient puncturing. Most importantly, a puncturing operation involves only a small number of very efficient computations, plus the deletion of certain parts of the secret key, which outperforms previous constructions by orders of magnitude. This gives rise to the first forward-secret 0-RTT protocols that are efficient enough to be deployed in practice. We believe that BFE will find applications beyond forward-secret 0-RTT protocols.
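To illustrate the puncturing idea, here is a toy sketch with no actual encryption: the secret key holds one component per Bloom filter slot (in the real scheme these would be IBE-style keys, one per slot identity), and puncturing on a ciphertext tag deletes the components at the tag's k hash positions. The class and slot layout below are invented for the example.

```python
import hashlib

class BloomFilterKey:
    def __init__(self, m: int = 2**10, k: int = 3):
        self.m, self.k = m, k
        # placeholder key components, one per Bloom filter slot
        self.sk = {i: f"sk[{i}]" for i in range(m)}

    def _positions(self, tag: bytes):
        """The k Bloom filter positions of a ciphertext tag."""
        return [int.from_bytes(hashlib.sha256(bytes([j]) + tag).digest(), "big")
                % self.m for j in range(self.k)]

    def puncture(self, tag: bytes) -> None:
        # forward security: delete every slot covering this tag
        for pos in self._positions(tag):
            self.sk.pop(pos, None)

    def can_decrypt(self, tag: bytes) -> bool:
        # decryption needs at least one surviving component for the tag
        return any(pos in self.sk for pos in self._positions(tag))

key = BloomFilterKey()
c1, c2 = b"ciphertext-tag-1", b"ciphertext-tag-2"
key.puncture(c1)
assert not key.can_decrypt(c1)                       # c1 is now undecryptable
print("c2 still decryptable:", key.can_decrypt(c2))  # True w.h.p. (Bloom false positives aside)
```

Puncturing here is just k hash evaluations and k deletions, matching the abstract's description of a small number of very efficient computations plus deletion of parts of the secret key.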
2021
JOFC
Bootstrapping for HElib
Gentry's bootstrapping technique is still the only known method of obtaining fully homomorphic encryption where the system's parameters do not depend on the complexity of the evaluated functions. Bootstrapping involves a recryption procedure where the scheme's decryption algorithm is evaluated homomorphically. Prior to this work, there were very few implementations of recryption and fewer still that could handle "packed ciphertexts" that encrypt vectors of elements. In the current work, we report on an implementation of recryption of fully packed ciphertexts using the HElib library for somewhat homomorphic encryption. This implementation required extending previous recryption algorithms from the literature, as well as many aspects of the HElib library. Our implementation supports bootstrapping of packed ciphertexts over many extension fields/rings. One example that we tested involves ciphertexts that encrypt vectors of 1024 elements from GF(2^16). In that setting, the recryption procedure takes under 3 minutes (at security level ≈ 80) on a single core and allows a multiplicative depth-11 computation before the next recryption is needed. This report updates the results that we reported in Eurocrypt 2015 in several ways. Most importantly, it includes a much more robust method for deriving the parameters, ensuring that recryption errors occur only with negligible probability. Many aspects of this analysis are proved, and for the few well-specified heuristics that we made, we report on thorough experimentation to validate them. The procedure that we describe here is also significantly more efficient than in the previous version, incorporating many optimizations that were reported elsewhere (such as more efficient linear transformations) and adding a few new ones. Finally, our implementation now also incorporates Chen and Han's techniques from Eurocrypt 2018 for more efficient digit extraction (for some parameters), as well as for "thin bootstrapping" when the ciphertext is only sparsely packed.
2021
JOFC
Can PPAD Hardness be Based on Standard Cryptographic Assumptions?
We consider the question of whether PPAD hardness can be based on standard cryptographic assumptions, such as the existence of one-way functions or public-key encryption. This question is particularly well motivated in light of new devastating attacks on obfuscation candidates and their underlying building blocks, which are currently the only known source of PPAD hardness. Central to the study of obfuscation-based PPAD hardness is the sink-of-verifiable-line (SVL) problem, an intermediate step in constructing instances of the PPAD-complete problem source-or-sink. Within the framework of black-box reductions, we prove the following results: (i) average-case PPAD hardness (and even SVL hardness) does not imply any form of cryptographic hardness (not even one-way functions). Moreover, even when assuming the existence of one-way functions, average-case PPAD hardness (and, again, even SVL hardness) does not imply any public-key primitive. Thus, strong cryptographic assumptions (such as obfuscation-related ones) are not essential for average-case PPAD hardness. (ii) Average-case SVL hardness cannot be based either on standard cryptographic assumptions or on average-case PPAD hardness. In particular, average-case SVL hardness is not essential for average-case PPAD hardness. (iii) Any attempt to base the average-case hardness of the PPAD-complete problem source-or-sink on standard cryptographic assumptions must result in instances with a nearly exponential number of solutions. This stands in striking contrast to the obfuscation-based approach, which results in instances having a unique solution. Taken together, our results imply that it may still be possible to base PPAD hardness on standard cryptographic assumptions, but any such black-box attempt must significantly deviate from the obfuscation-based approach: it cannot go through the SVL problem, and it must result in source-or-sink instances with a nearly exponential number of solutions.
2021
JOFC
Compact Designated Verifier NIZKs from the CDH Assumption Without Pairings
In a non-interactive zero-knowledge (NIZK) proof, a prover can non-interactively convince a verifier of a statement without revealing any additional information. A useful relaxation of NIZK is a designated verifier NIZK (DV-NIZK) proof, where proofs are verifiable only by a designated party in possession of a verification key. A crucial security requirement of DV-NIZKs is unbounded soundness, which guarantees soundness even if the verification key is reused for multiple statements. Most known DV-NIZKs (except standard NIZKs) for NP do not have unbounded soundness. Existing DV-NIZKs for NP satisfying unbounded soundness are based on assumptions which are already known to imply standard NIZKs. In particular, it is an open problem to construct (DV-)NIZKs from weak pairing-free group assumptions such as decisional Diffie–Hellman (DDH). As a further matter, all constructions of (DV-)NIZKs from DH-type assumptions (regardless of whether they are over pairing-free or pairing groups) require the proof size to have a multiplicative overhead |C| · poly(κ), where |C| is the size of the circuit that computes the NP relation. In this work, we make progress on constructing DV-NIZKs from DH-type assumptions that are not known to imply standard NIZKs. Our results are summarized as follows: (1) DV-NIZKs for NP from the computational DH assumption over pairing-free groups. This is the first construction of such NIZKs on pairing-free groups and resolves the open problem posed by Kim and Wu (CRYPTO'18). (2) DV-NIZKs for NP with proof size |C| + poly(κ) from the computational DH assumption over specific pairing-free groups. This is the first DV-NIZK that achieves a compact proof from a standard DH-type assumption. Moreover, if we further assume the NP relation to be computable in NC¹ and assume the hardness of a (non-static) falsifiable DH-type assumption over specific pairing-free groups, the proof size can be made as small as |w| + poly(κ).
2021
JOFC
Decomposable Obfuscation: A Framework for Building Applications of Obfuscation from Polynomial Hardness
There is some evidence that indistinguishability obfuscation (iO) requires either exponentially many assumptions or (sub)exponentially hard assumptions, and indeed, all known ways of building obfuscation suffer from one of these two limitations. As such, any application built from iO suffers from them as well. However, for most applications, such limitations do not appear to be inherent to the application, just to the approach via iO. Indeed, several recent works have shown how to base applications of iO instead on functional encryption (FE), which can in turn be based on the polynomial hardness of just a few assumptions. However, these constructions are quite complicated and recycle a lot of similar techniques. In this work, we unify the results of previous works in the form of a weakened notion of obfuscation, called decomposable obfuscation. We show (1) how to build decomposable obfuscation from functional encryption and (2) how to build a variety of applications from decomposable obfuscation, including all of the applications already known from FE. The construction in (1) hides most of the difficult techniques of the prior work, whereas the constructions in (2) are much closer to the comparatively simple constructions from iO. As such, decomposable obfuscation represents a convenient new platform for obtaining more applications from polynomial hardness.
2021
JOFC
Fast Secure Two-Party ECDSA Signing
ECDSA is a standard digital signature scheme that is widely used in TLS, Bitcoin, and elsewhere. Unlike other schemes, such as RSA and Schnorr signatures, it is particularly hard to construct efficient threshold signature protocols for ECDSA (and DSA). As a result, the best protocols known today for secure distributed ECDSA require running heavy zero-knowledge proofs and computing many large-modulus exponentiations for every signing operation. In this paper, we consider the specific case of two parties (and thus no honest majority) and construct a protocol that is approximately two orders of magnitude faster than the previous best. Concretely, our protocol achieves good performance, with a single signing operation for curve P-256 taking approximately 37 ms between two standard machine types in Azure (utilizing a single core only). Our protocol is proven secure for sequential composition under standard assumptions using a game-based definition. In addition, we prove security by simulation under a plausible yet non-standard assumption regarding Paillier. We show that partial concurrency (where if one execution aborts, then all need to abort) can also be achieved.
2021
JOFC
Fine-Grained Cryptography Revisited
Fine-grained cryptographic primitives are secure against adversaries with bounded resources and can be computed by honest users with fewer resources than the adversaries. In this paper, we revisit the results of Degwekar, Vaikuntanathan, and Vasudevan (CRYPTO 2016) on fine-grained cryptography and show constructions of three key fundamental fine-grained cryptographic primitives: one-way permutation families, hash proof systems (which in turn imply a public-key encryption scheme secure against chosen-ciphertext attacks), and trapdoor one-way functions. All of our constructions are computable in NC¹ and secure against (non-uniform) NC¹ circuits under the widely believed worst-case assumption NC¹ ⊊ ⊕L/poly.
2021
JOFC
From Fairness to Full Security in Multiparty Computation
In the setting of secure multiparty computation (MPC), a set of mutually distrusting parties wish to jointly compute a function while guaranteeing the privacy of their inputs and the correctness of the output. An MPC protocol is called fully secure if no adversary can prevent the honest parties from obtaining their outputs. A protocol is called fair if the adversary can prematurely abort the computation, but only before learning any new information. We present efficient transformations from fair computations to fully secure computations, assuming a constant fraction of honest parties (e.g., 1% of the parties are honest). Compared to previous transformations that require linearly many (in the number of parties) invocations of the fair computation, our transformations require super-logarithmically, and sometimes even super-constantly, many such invocations. The main idea is to delegate the computation to random committees that invoke the fair computation. Apart from the benefit of uplifting security, the reduction in the number of parties is also useful, since only committee members are required to work, whereas the remaining parties simply "listen" to the computation over a broadcast channel. One application of these transformations is a new δ-bias coin-flipping protocol whose round complexity has a super-logarithmic dependency on the number of parties, improving over the linear-dependency protocol of Beimel, Omri, and Orlov (CRYPTO 2010). A second application is a new fully secure protocol for computing the Boolean OR function, with a super-constant round complexity, improving over the protocol of Gordon and Katz (TCC 2009), whose round complexity is linear in the number of parties. Finally, we show that our positive results are in a sense optimal by proving that, for some functionalities, a super-constant number of (sequential) invocations of the fair computation is necessary for computing the functionality in a fully secure manner.
2021
JOFC
High-Performance Multi-party Computation for Binary Circuits Based on Oblivious Transfer
We present a unified view of the two-party and multi-party computation protocols based on oblivious transfer first outlined in Nielsen et al. (CRYPTO 2012) and Larraia et al. (CRYPTO 2014). We present a number of modifications and improvements to these earlier presentations, as well as full proofs of the entire protocol. Improvements include a unified pre-processing and online MAC methodology, mechanisms to pass between different MAC variants, and a fix for a minor bug in the protocol of Larraia et al. relating to a selective failure attack. We also fix a minor bug in Nielsen et al. resulting from using Jensen's inequality in the wrong direction in an analysis.
2021
JOFC
Injective Trapdoor Functions via Derandomization: How Strong is Rudich’s Black-Box Barrier?
We present a cryptographic primitive P satisfying the following properties: Rudich's seminal impossibility result (PhD thesis, '88) shows that P cannot be used in a black-box manner to construct an injective one-way function. P can be used in a non-black-box manner to construct an injective one-way function assuming the existence of a hitting-set generator that fools deterministic circuits (such a generator is known to exist based on the worst-case assumption that E = DTIME(2^{O(n)}) has a function of deterministic circuit complexity 2^{Ω(n)}). The non-black-box aspect of our construction only requires a bound on the size of P's implementation. Augmenting P with a trapdoor algorithm enables a non-black-box construction of an injective trapdoor function (once again, assuming the existence of a hitting-set generator that fools deterministic circuits), while Rudich's impossibility result still holds. The primitive P and its augmented variant can be constructed based on any injective one-way function and on any injective trapdoor function, respectively, and they are thus unconditionally essential for the existence of such functions. Moreover, P can also be constructed based on various known primitives that are secure against related-key attacks (e.g., pseudorandom functions), thus enabling us to base the strong structural guarantees of injective one-way functions on the strong security guarantees of such primitives. Our application of derandomization techniques is inspired mainly by the work of Barak, Ong and Vadhan (CRYPTO '03), which on the one hand relies on any one-way function, but on the other hand only results in a non-interactive perfectly binding commitment scheme (offering significantly weaker structural guarantees compared to injective one-way functions) and does not seem to extend to public-key primitives. The key observation underlying our approach is that Rudich's impossibility result applies not only to one-way functions as the underlying primitive, but in fact to a variety of "unstructured" primitives. We put forward a condition for identifying such primitives, and then subtly tailor the properties of our primitive so that it is both sufficiently unstructured to satisfy this condition and sufficiently structured to yield injective one-way and trapdoor functions. This circumvents the basic approach underlying Rudich's long-standing evidence for the difficulty of constructing injective one-way functions (and, in particular, injective trapdoor functions) based on seemingly weaker or unstructured assumptions.
2021
JOFC
Internal Symmetries and Linear Properties: Full-permutation Distinguishers and Improved Collisions on Gimli
Gimli is a family of cryptographic primitives (both a hash function and an AEAD scheme) that has been selected for the second round of the NIST competition for standardizing new lightweight designs. The candidate is based on the Gimli permutation, which was presented at CHES 2017. In this paper, we study the security of both the permutation and the constructions that are based on it. We exploit the slow diffusion in Gimli and its internal symmetries to build, for the first time, a distinguisher on the full permutation of complexity 2^64. We also provide a practical distinguisher, which has been implemented, on 23 out of the full 24 rounds of Gimli. Next, we give (full state) collision and semi-free-start collision attacks on Gimli-Hash, reaching up to 12 and 18 rounds, respectively. On the practical side, we compute a collision on 8-round Gimli-Hash. In the quantum setting, these attacks reach 2 more rounds. Finally, we perform the first study of linear trails in Gimli, and we find a linear distinguisher on the full permutation.
2021
JOFC
Is There an Oblivious RAM Lower Bound for Online Reads?
Oblivious RAM (ORAM), introduced by Goldreich (STOC 1987) and Ostrovsky (STOC 1990), can be used to read and write to memory in a way that hides which locations are being accessed. The best known ORAM schemes have an O(log n) overhead per access, where n is the data size. The work of Goldreich and Ostrovsky (JACM 1996) gave a lower bound, showing that this is optimal for ORAM schemes that operate in a "balls and bins" model, where memory blocks can only be shuffled between different locations but not manipulated otherwise (and the server is used solely as remote storage). The lower bound even extends to weaker settings such as offline ORAM, where all of the accesses to be performed need to be specified ahead of time, and read-only ORAM, which only allows reads but not writes. But can we get lower bounds for general ORAM, beyond "balls and bins"? The work of Boyle and Naor (ITCS 2016) shows that this is unlikely in the offline setting. In particular, they construct an offline ORAM with o(log n) overhead assuming the existence of small sorting circuits. Although we do not have instantiations of the latter, ruling them out would require proving new circuit lower bounds. On the other hand, the recent work of Larsen and Nielsen (CRYPTO 2018) shows that there indeed is an Ω(log n) lower bound for general online ORAM. This still leaves the question open for online read-only ORAM, or for read/write ORAM where we want very small overhead for the read operations. In this work, we show that a lower bound in these settings is also unlikely. In particular, our main result is a construction of online ORAM, in which the server is used solely as remote storage, where reads (but not writes) have an o(log n) overhead, assuming the existence of small sorting circuits as well as very good locally decodable codes (LDCs). Although we do not have instantiations of either of these with the required parameters, ruling them out is beyond current lower bounds.
2021
JOFC
Limits on the Efficiency of (Ring) LWE-Based Non-interactive Key Exchange
LWE-based key-exchange protocols lie at the heart of post-quantum public-key cryptography. However, all existing protocols lack either the non-interactive nature of Diffie–Hellman key exchange or a polynomial LWE modulus, resulting in unwanted efficiency overhead. We study the possibility of designing non-interactive LWE-based protocols with a polynomial LWE modulus. To this end, we identify and formalize simple non-interactive, polynomial-LWE-modulus variants of the existing protocols, where Alice and Bob simultaneously exchange one or more (ring) LWE samples with a polynomial LWE modulus and then run individual key reconciliation functions to obtain the shared key. We point out central barriers and show that such non-interactive key-exchange protocols are impossible in either of the following cases: (1) the reconciliation functions first compute the inner product of the received LWE sample with their private LWE secret; this impossibility is information-theoretic. (2) One of the reconciliation functions does not depend on the error of the transmitted LWE sample; this impossibility assumes the hardness of LWE. We show that progress toward either a polynomial-LWE-modulus NIKE construction or a general impossibility result would have implications for the current understanding of lattice-based cryptographic constructions. Overall, our results show possibilities and challenges in designing simple (ring) LWE-based non-interactive key-exchange protocols.
2021
JOFC
Match Me if You Can: Matchmaking Encryption and Its Applications
We introduce a new form of encryption that we name matchmaking encryption (ME). Using ME, sender S and receiver R (each with its own attributes) can both specify policies the other party must satisfy in order for the message to be revealed. The main security guarantee is that of privacy-preserving policy matching: During decryption, nothing is leaked beyond the fact that a match occurred/did not occur. ME opens up new ways of secretly communicating and enables several new applications where both participants can specify fine-grained access policies to encrypted data. For instance, in social matchmaking, S can encrypt a file containing his/her personal details and specify a policy so that the file can be decrypted only by his/her ideal partner. On the other end, a receiver R will be able to decrypt the file only if S corresponds to his/her ideal partner defined through a policy. On the theoretical side, we define security for ME, as well as provide generic frameworks for constructing ME from functional encryption. These constructions need to face the technical challenge of simultaneously checking the policies chosen by S and R, to avoid any leakage. On the practical side, we construct an efficient identity-based scheme for equality policies, with provable security in the random oracle model under the standard BDH assumption. We implement and evaluate our scheme and provide experimental evidence that our construction is practical. We also apply identity-based ME to a concrete use case, in particular for creating an anonymous bulletin board over a Tor network.
2021
JOFC
Modeling for Three-Subset Division Property without Unknown Subset
The division property is a generic tool to search for integral distinguishers, and automatic tools such as MILP or SAT/SMT solvers allow us to evaluate its propagation efficiently. In the application to stream ciphers, it enables us to estimate the security against cube attacks theoretically, and it has led to the best key-recovery attacks against well-known stream ciphers. However, it was reported that some of the key-recovery attacks based on the division property degenerate to distinguishing attacks due to the inaccuracy of the division property. The three-subset division property (without unknown subset) is a promising method to solve this inaccuracy problem, and a new algorithm using automatic tools for the three-subset division property was recently proposed at ASIACRYPT 2019. In this paper, we first show that this state-of-the-art algorithm is not always efficient and that it cannot improve the existing key-recovery attacks. We then focus on the three-subset division property without unknown subset and propose another new efficient algorithm using automatic tools. Our algorithm is more efficient than existing algorithms, and it can improve existing key-recovery attacks. In the application to Trivium, we show an 842-round key-recovery attack. We also show that an 855-round key-recovery attack, which was proposed at CRYPTO 2018, has a critical flaw and does not work. As a result, our 842-round attack becomes the best key-recovery attack. In the application to Grain-128AEAD, we show that the known 184-round key-recovery attack degenerates to a distinguishing attack. We then improve the distinguishing attacks up to 189 rounds and also show the best key-recovery attack, against 190 rounds. In the application to ACORN, we prove that the 772-round key-recovery attack from ISC 2019 is in fact a constant-sum distinguisher. We then give new key-recovery attacks on 773-, 774-, and 775-round ACORN. We verify the current best key-recovery attack on 892-round Kreyvium and recover the exact superpoly. We further propose a new attack reaching 893 rounds.
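For readers unfamiliar with the cube-attack setting that these division-property tools reason about, here is a toy sketch (a made-up three-variable "cipher," not Trivium or Grain): XOR-summing the output bit over a cube of public variables isolates the superpoly, a simple function of the key bits that the attacker can then evaluate and invert.

```python
from itertools import product

def f(v, k):
    # toy output bit: v = public (IV) bits, k = secret key bits
    return (v[0] & v[1] & k[0]) ^ (v[0] & v[1] & v[2]) ^ (v[2] & k[1]) ^ (k[0] & k[1])

def cube_sum(cube_idx, fixed_v, k):
    """XOR of f over all assignments to the cube variables."""
    total = 0
    for bits in product([0, 1], repeat=len(cube_idx)):
        v = list(fixed_v)
        for i, b in zip(cube_idx, bits):
            v[i] = b
        total ^= f(v, k)
    return total

# Cube {v0, v1} with v2 fixed to 0: the cube sum equals the superpoly
# k0 XOR v2 = k0, so the sum leaks one key bit directly.
for k in product([0, 1], repeat=2):
    assert cube_sum([0, 1], [0, 0, 0], k) == k[0]
print("cube {v0, v1} has superpoly k0")
```

The hard part, which the division property automates, is determining for a real cipher which cubes have simple (or merely non-constant) superpolys; a wrongly modeled superpoly is exactly how a key-recovery attack degenerates into a distinguisher.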
2021
JOFC
Obfuscating Circuits Via Composite-Order Graded Encoding
We present a candidate obfuscator based on composite-order graded encoding schemes (GES), which are a generalization of multilinear maps. Our obfuscator operates on circuits directly, without converting them into formulas or branching programs as was done in previous solutions. As a result, the time and size complexity of the obfuscated program, measured by the number of GES elements, is directly proportional to the circuit complexity of the program being obfuscated. This improves upon previous constructions, whose complexity was related to the formula or branching program size. Known instantiations of graded encoding schemes allow us to obfuscate circuit classes of polynomial degree, which include, for example, families of circuits of logarithmic depth. We prove that our obfuscator is secure against a class of generic algebraic attacks, formulated in a generic graded encoding model. We further consider a more robust model, which provides more power to the adversary, and extend our results to this setting as well. As a secondary contribution, we define a new, simple notion of algebraic security (which was implicit in previous works) and show that it captures standard security relative to an ideal GES oracle.
2021
JOFC
On Abelian and Homomorphic Secret Sharing Schemes
Homomorphic (resp. abelian) secret sharing is a generalization of ubiquitous linear secret sharing in which the secret value and the shares are taken from finite (resp. abelian) groups instead of vector spaces over a finite field. Homomorphic secret sharing was first defined by Benaloh and, later in the early nineties, Frankel and Desmedt presented some relevant results. Except for a few other related topics such as black-box secret sharing and secret sharing over rings, the subject has remained dormant for about three decades. The study of homomorphic secret sharing is resumed in this paper and three main results are presented: (1) mixed-linear schemes, a subclass of abelian schemes to be introduced in this paper, are more powerful than linear schemes in terms of the best achievable information ratio (the claim is proved for the port of a well-known almost entropic matroid), (2) the information ratios of dual access structures are equal for the class of abelian schemes and (3) every ideal homomorphic scheme can be transformed into an ideal linear scheme with the same access structure.
2021
JOFC
On Subversion-Resistant SNARKs
While NIZK arguments in the CRS model are widely studied, the question of what happens when the CRS is subverted has received little attention. In ASIACRYPT 2016, Bellare, Fuchsbauer, and Scafuro showed the first negative and positive results, proving also that it is impossible to achieve subversion soundness and (even non-subversion) zero knowledge at the same time. On the positive side, they constructed a sound and subversion-zero knowledge (Sub-ZK) non-succinct NIZK argument for NP. We consider the practically very relevant case of zk-SNARKs. We make Groth’s zk-SNARK for Circuit-SAT from EUROCRYPT 2016 computationally knowledge-sound and perfectly composable Sub-ZK with minimal changes. We only require the CRS trapdoor to be extractable and the CRS to be publicly verifiable. To achieve the latter, we add some new elements to the CRS and construct an efficient CRS verification algorithm. We also provide a definitional framework for knowledge-sound and Sub-ZK SNARKs.
2021
JOFC
On the Exact Round Complexity of Secure Three-Party Computation
We settle the exact round complexity of three-party computation (3PC) in the honest-majority setting, for a range of security notions: selective abort, unanimous abort, fairness, and guaranteed output delivery. It is folklore that guaranteed output delivery implies fairness, which implies unanimous abort, which in turn implies selective abort. We focus on computational security and consider two network settings: pairwise-private channels without and with a broadcast channel. In the minimal setting of pairwise-private channels, 3PC with selective abort is known to be feasible in just two rounds, while guaranteed output delivery is infeasible to achieve irrespective of the number of rounds. Settling the quest for the exact round complexity of 3PC in this setting, we show that three rounds are necessary and sufficient for unanimous abort and fairness. Extending our study to the setting with an additional broadcast channel, we show that while unanimous abort is achievable in just two rounds, three rounds are necessary and sufficient for fairness and guaranteed output delivery. Our lower bound results extend to any number of parties in the honest-majority setting and imply the tightness of several known constructions. While our lower bounds extend to the common reference string (CRS) model, all our upper bounds are in the plain model. The fundamental concept of garbled circuits underlies all our upper bounds; concretely, our constructions involve transmitting and evaluating only a constant number of garbled circuits. Assumption-wise, our constructions rely on injective (one-to-one) one-way functions.
2021
JOFC
On the Local Leakage Resilience of Linear Secret Sharing Schemes
We consider the following basic question: to what extent are standard secret sharing schemes and protocols for secure multiparty computation that build on them resilient to leakage? We focus on a simple local leakage model, where the adversary can apply an arbitrary function of a bounded output length to the secret state of each party, but cannot otherwise learn joint information about the states. We show that additive secret sharing schemes and high-threshold instances of Shamir’s secret sharing scheme are secure under local leakage attacks when the underlying field is of a large prime order and the number of parties is sufficiently large. This should be contrasted with the fact that any linear secret sharing scheme over a small characteristic field is clearly insecure under local leakage attacks, regardless of the number of parties. Our results are obtained via tools from Fourier analysis and additive combinatorics. We present two types of applications of the above results and techniques. As a positive application, we show that the “GMW protocol” for honest-but-curious parties, when implemented using shared products of random field elements (so-called “Beaver Triples”), is resilient in the local leakage model for sufficiently many parties and over certain fields. This holds even when the adversary has full access to a constant fraction of the views. As a negative application, we rule out multiparty variants of the share conversion scheme used in the 2-party homomorphic secret sharing scheme of Boyle et al. (in: Crypto, 2016).
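The negative claim for small-characteristic fields is easy to see concretely. The following sketch (ours, for illustration) shows that for XOR-based additive sharing, the legal local leakage of a single bit per share already reveals a bit of the secret; the paper's positive results are exactly about why this phenomenon disappears for additive and high-threshold Shamir sharing over large prime fields.

```python
import secrets

def share_xor(secret: int, n: int, bits: int = 32):
    """Additive (XOR) secret sharing over GF(2)^bits."""
    shares = [secrets.randbits(bits) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last ^= s
    return shares + [last]

secret = 0xDEADBEEF
shares = share_xor(secret, n=10)

# Local leakage: each party independently leaks the low bit of its own share.
leaked = [s & 1 for s in shares]

# XORing the leaked bits recovers the low bit of the secret exactly.
bit = 0
for b in leaked:
    bit ^= b
assert bit == (secret & 1)
print("low bit of the secret, recovered from 1-bit local leakage:", bit)
```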
2021
JOFC
On the Tight Security of TLS 1.3: Theoretically Sound Cryptographic Parameters for Real-World Deployments
We consider the theoretically sound selection of cryptographic parameters, such as the size of algebraic groups or RSA keys, for TLS 1.3 in practice. While prior works gave security proofs for TLS 1.3, their security loss is quadratic in the total number of sessions across all users, which due to the pervasive use of TLS is huge. Therefore, in order to deploy TLS 1.3 in a theoretically sound way, it would be necessary to compensate for this loss with unreasonably large parameters that would be infeasible for practical use at large scale. Hence, while these previous works show that in principle the design of TLS 1.3 is secure in an asymptotic sense, they do not yet provide any useful concrete security guarantees for real-world parameters used in practice. In this work, we provide a new security proof for the cryptographic core of TLS 1.3 in the random oracle model, which reduces the security of TLS 1.3 tightly (that is, with constant security loss) to the (multi-user) security of its building blocks. For some building blocks, such as the symmetric record layer encryption scheme, we can then rely on prior work to establish tight security. For others, such as the RSA-PSS digital signature scheme currently used in TLS 1.3, we obtain at least a linear loss in the number of users, independent of the number of sessions, which is much easier to compensate for with reasonable parameters. Our work also shows that by replacing the RSA-PSS scheme with a tightly secure scheme (e.g., in a future TLS version), one can obtain the first fully tightly secure TLS protocol. Our results enable a theoretically sound selection of parameters for TLS 1.3, even in large-scale settings with many users and sessions per user.
2021
JOFC
Placing Conditional Disclosure of Secrets in the Communication Complexity Universe
In the conditional disclosure of secrets (CDS) problem (Gertner et al., J. Comput. Syst. Sci., 2000), Alice and Bob, who hold n-bit inputs x and y respectively, wish to release a common secret z to Carol, who knows both x and y, if and only if the input (x, y) satisfies some predefined predicate f. Alice and Bob are allowed to send a single message to Carol, which may depend on their inputs and some shared randomness, and the goal is to minimize the communication complexity while providing information-theoretic security. Despite the growing interest in this model, very few lower bounds are known. In this paper, we relate the CDS complexity of a predicate f to its communication complexity under various communication games. For several basic predicates our results yield tight, or almost tight, lower bounds of Ω(n) or Ω(n^{1−ε}), providing an exponential improvement over previous logarithmic lower bounds. We also define new communication complexity classes that correspond to different variants of the CDS model and study the relations between them and their complements. Notably, we show that allowing for imperfect correctness can significantly reduce communication, a seemingly new phenomenon in the context of information-theoretic cryptography. Finally, our results show that proving explicit super-logarithmic lower bounds for imperfect CDS protocols is a necessary step towards proving explicit lower bounds against the communication complexity class AM^cc, or even AM^cc ∩ co-AM^cc, a well-known open problem in the theory of communication complexity. Thus, imperfect CDS forms a new minimal class that is placed just beyond the boundaries of the "civilized" part of the communication complexity world for which explicit lower bounds are known.
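To make the CDS model concrete, here is the folklore one-message CDS protocol for the equality predicate over a prime field (a textbook toy, not a construction from the paper). Alice and Bob share randomness (r, w); each sends a single message, and Carol recovers z exactly when x = y.

```python
import secrets

P = 2**61 - 1  # a prime modulus for the toy protocol

def cds_equality(x: int, y: int, z: int):
    """One message from each party, built from shared randomness (r, w)."""
    r, w = secrets.randbelow(P), secrets.randbelow(P)
    m_alice = (z + w + r * x) % P
    m_bob = (w + r * y) % P
    return m_alice, m_bob

z = 42  # the secret
m_a, m_b = cds_equality(1234, 1234, z)
assert (m_a - m_b) % P == z     # x == y: Carol recovers z

m_a, m_b = cds_equality(1234, 9999, z)
# x != y: z is masked by r*(x - y), which is uniform in Z_P because P is
# prime and x - y is non-zero, so Carol learns nothing about z.
print("masked value:", (m_a - m_b) % P)
```

Each message is a single field element; the paper's lower bounds concern how small such messages can in general be made for harder predicates.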
2021
JOFC
Quantum Lightning Never Strikes the Same State Twice. Or: Quantum Money from Cryptographic Assumptions
Public-key quantum money can be seen as a version of the quantum no-cloning theorem that holds even when the quantum states can be verified by the adversary. In this work, we investigate quantum lightning, a formalization of "collision-free quantum money" defined by Lutomirski et al. [ICS'10], where no-cloning holds even when the adversary herself generates the quantum state to be cloned. We study quantum money and quantum lightning, showing the following results: We demonstrate the usefulness of quantum lightning beyond quantum money by showing several potential applications, ranging from generating random strings with a proof of entropy to a completely decentralized cryptocurrency without a blockchain, where transactions are instantaneous and local. We give win-win results for quantum money/lightning, showing that either signatures/hash functions/commitment schemes meet very strong recently proposed notions of security, or they yield quantum money or lightning. Given the difficulty of constructing public-key quantum money, this suggests that natural schemes do attain strong security guarantees. We show that instantiating the quantum money scheme of Aaronson and Christiano [STOC'12] with indistinguishability obfuscation that is secure against quantum computers yields a secure quantum money scheme. This construction can be seen as an instance of our win-win result for signatures, giving the first separation between two security notions for signatures from the literature. Finally, we give a plausible construction for quantum lightning, which we prove secure under an assumption related to the multi-collision resistance of degree-2 hash functions. Our construction is inspired by our win-win result for hash functions and yields the first plausible standard-model instantiation of a non-collapsing collision-resistant hash function. This improves a result of Unruh [Eurocrypt'16], which is relative to a quantum oracle. Thus, we provide the first constructions of public-key quantum money from several cryptographic assumptions. Along the way, we develop several new techniques, including a new precise variant of the no-cloning theorem.
2021
JOFC
Round-Optimal Secure Multi-party Computation
Secure multi-party computation (MPC) is a central cryptographic task that allows a set of mutually distrustful parties to jointly compute some function of their private inputs, where security should hold in the presence of an active (i.e., malicious) adversary that can corrupt any number of parties. Despite extensive research, the precise round complexity of this "standard-bearer" cryptographic primitive under polynomial-time hardness assumptions was unknown. Recently, Garg, Mukherjee, Pandey, and Polychroniadou (EUROCRYPT 2016) demonstrated that the round complexity of any MPC protocol relying on black-box proofs of security in the plain model must be at least four. Following this work, Ananth, Choudhuri, and Jain (CRYPTO 2017) and, independently, Brakerski, Halevi, and Polychroniadou (TCC 2017) made progress towards solving this question and constructed four-round protocols based on the DDH and LWE assumptions, respectively, albeit with super-polynomial hardness. More recently, Ciampi, Ostrovsky, Siniscalchi, and Visconti (TCC 2017) closed the gap for two-party protocols by constructing a four-round protocol from polynomial-time assumptions, concretely, trapdoor permutations. In another work, Ciampi, Ostrovsky, Siniscalchi, and Visconti (TCC 2017) showed how to design a four-round multi-party protocol for the specific case of multi-party coin-tossing based on one-way functions. In this work, we resolve this question by designing a four-round actively secure multi-party (two or more parties) protocol for general functionalities under standard polynomial-time hardness assumptions with a black-box proof of security, specifically, under the LWE, DDH, QR, and DCR assumptions.
2021
JOFC
Round-Preserving Parallel Composition of Probabilistic-Termination Cryptographic Protocols
An important benchmark for multi-party computation (MPC) protocols is their round complexity. For several important MPC tasks, such as broadcast, (tight) lower bounds on the round complexity are known. However, some of these lower bounds can be circumvented when the termination round of every party is not a priori known, and simultaneous termination is not guaranteed. Protocols with this property are called probabilistic-termination (PT) protocols. Running PT protocols in parallel affects the round complexity of the resulting protocol in somewhat unexpected ways. For instance, an execution of m protocols with constant expected round complexity might take O(log m) rounds to complete. In a seminal work, Ben-Or and El-Yaniv (Distributed Computing '03) developed a technique for the parallel execution of arbitrarily many broadcast protocols while preserving expected round complexity. More recently, Cohen et al. (CRYPTO '16) devised a framework for the universal composition of PT protocols and provided the first composable parallel-broadcast protocol with a simulation-based proof. These constructions crucially rely on the fact that broadcast is "privacy-free," and they do not generalize to arbitrary protocols in a straightforward way. This raises the question of whether it is possible to execute arbitrary PT protocols in parallel without increasing the round complexity. In this paper we tackle this question and provide both feasibility and infeasibility results. We construct a round-preserving protocol compiler, tolerating any dishonest minority of actively corrupted parties, that compiles arbitrary protocols into a protocol realizing their parallel composition while having black-box access to the underlying protocols. Furthermore, we prove that the same cannot be achieved, using known techniques, given only black-box access to the functionalities realized by the protocols, unless merely security against semi-honest corruptions is required, for which case we provide a protocol. To prove our results, we utilize the language and results of Cohen et al., which we extend to capture parallel composition and reactive functionalities, and to handle the case of an honest majority.
2021
JOFC
Secure Communication Channel Establishment: TLS 1.3 (over TCP Fast Open) versus QUIC
Secure channel establishment protocols such as Transport Layer Security (TLS) are some of the most important cryptographic protocols, enabling the encryption of Internet traffic. Reducing latency (the number of interactions between parties before encrypted data can be transmitted) in such protocols has become an important design goal to improve user experience. The most important protocols addressing this goal are TLS 1.3, the latest TLS version standardized in 2018 to replace the widely deployed TLS 1.2, and Quick UDP Internet Connections (QUIC), a secure transport protocol from Google that is implemented in the Chrome browser. There have been a number of formal security analyses of TLS 1.3 and QUIC, but their security, when layered with their underlying transport protocols, cannot be easily compared. Our work is the first to thoroughly compare the security and availability properties of these protocols. Toward this goal, we develop novel security models that permit "layered" security analysis. In addition to the standard goals of server authentication and data confidentiality and integrity, we consider the goals of IP spoofing prevention, key exchange packet integrity, secure channel header integrity, and reset authentication, which capture a range of practical threats not usually taken into account by existing security models that focus mainly on the cryptographic cores of the protocols. Equipped with our new models, we provide a detailed comparison of three low-latency layered protocols: TLS 1.3 over TCP Fast Open (TFO), QUIC over UDP, and QUIC[TLS] (a new design for QUIC that uses TLS 1.3 key exchange) over UDP. In particular, we show that TFO's cookie mechanism does provably achieve the security goal of IP spoofing prevention. Additionally, we find several new availability attacks that manipulate the early key exchange packets without being detected by the communicating parties. By including packet-level attacks in our analysis, our results shed light on how the reliability, flow control, and congestion control of the above layered protocols compare in adversarial settings. We hope that our models will help protocol designers in their future protocol analyses and that our results will help practitioners better understand the advantages and limitations of secure channel establishment protocols.
2021
JOFC
Selfie: reflections on TLS 1.3 with PSK
TLS 1.3 allows two parties to establish a shared session key from an out-of-band agreed pre-shared key (PSK). The PSK is used to mutually authenticate the parties, under the assumption that it is not shared with others. This allows the parties to skip the certificate verification steps, saving bandwidth, communication rounds, and latency. In this paper, we identify a vulnerability in this specific TLS 1.3 option by showing a new "reflection attack" that we call "Selfie." The attack exploits the fact that TLS does not mandate explicit authentication of the server and the client, and leverages this to break the protocol's mutual authentication property. We explain the root cause of this TLS 1.3 vulnerability, provide a fully detailed demonstration of a Selfie attack using the TLS implementation of OpenSSL, and propose a mitigation. The Selfie attack is the first attack on TLS 1.3 since its official release in 2018. It is surprising because it uncovers an interesting gap in the existing TLS 1.3 models that the security proofs rely on. We explain the gap in these models' assumptions and show how it affects the proofs in this case.
2021
JOFC
Session Resumption Protocols and Efficient Forward Security for TLS 1.3 0-RTT
The TLS 1.3 0-RTT mode enables a client reconnecting to a server to send encrypted application-layer data in "0-RTT" ("zero round-trip time"), without the need for a prior interactive handshake. This fundamentally requires the server to reconstruct the previous session's encryption secrets upon receipt of the client's first message. The standard techniques to achieve this are session caches or, alternatively, session tickets. The former provides forward security and resistance against replay attacks, but requires a large amount of server-side storage. The latter requires negligible storage, but provides no forward security and is known to be vulnerable to replay attacks. In this paper, we first formally define session resumption protocols as an abstract perspective on mechanisms like session caches and session tickets. We give a new generic construction that provably provides forward security and replay resilience, based on puncturable pseudorandom functions (PPRFs). We show that our construction can immediately be used in TLS 1.3 0-RTT and deployed unilaterally by servers, without requiring any changes to clients or the protocol. To this end, we present a generic composition of our new construction with TLS 1.3 and prove its security. This yields the first construction that achieves forward security for all messages, including the 0-RTT data. We then describe two new constructions of PPRFs, which are particularly suitable for forward-secure and replay-resilient session resumption in TLS 1.3. The first construction is based on the strong RSA assumption. Compared to standard session caches, for "128-bit security" it reduces the required server storage by a factor of almost 20, when instantiated such that key derivation and puncturing together are cheaper on average than one full exponentiation in an RSA group. Hence, a 1 GB session cache can be replaced with only about 51 MB of storage, which significantly reduces the amount of secure memory required. For larger security parameters, or in exchange for more expensive computations, even larger storage reductions are achieved. The second construction combines a standard binary-tree PPRF with a new "domain extension" technique. For a reasonable choice of parameters, this reduces the required storage by a factor of up to 5 compared to a standard session cache. It employs only symmetric cryptography, is suitable for high-traffic scenarios, and can serve thousands of tickets per second.
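The binary-tree PPRF underlying the second construction is easy to sketch via the classic GGM tree; the version below is an illustration with SHA-256 standing in for the length-doubling PRG, not the paper's optimized construction with domain extension. Puncturing a leaf keeps only the sibling seeds along its path (the "co-path"), so every other leaf stays computable while the punctured one becomes irrecoverable, which is what gives forward security for an already-used ticket.

```python
import hashlib

DEPTH = 16  # PRF domain {0,1}^16

def prg(seed: bytes, bit: int) -> bytes:
    """Half of a length-doubling PRG (SHA-256 as a stand-in)."""
    return hashlib.sha256(bytes([bit]) + seed).digest()

def evaluate(seed: bytes, x: int) -> bytes:
    node = seed
    for i in reversed(range(DEPTH)):        # walk bits from MSB to LSB
        node = prg(node, (x >> i) & 1)
    return node

def puncture(seed: bytes, x: int) -> dict:
    """Punctured key: the sibling seed at every level of the path to x."""
    copath, node = {}, seed
    for i in reversed(range(DEPTH)):
        bit = (x >> i) & 1
        copath[(i, 1 - bit)] = prg(node, 1 - bit)
        node = prg(node, bit)
    return copath

def evaluate_punctured(copath: dict, x_punct: int, x: int) -> bytes:
    assert x != x_punct, "punctured value is irrecoverable"
    for i in reversed(range(DEPTH)):
        if (x >> i) & 1 != (x_punct >> i) & 1:   # first level of divergence
            node = copath[(i, (x >> i) & 1)]
            for j in reversed(range(i)):
                node = prg(node, (x >> j) & 1)
            return node

master = b"\x00" * 32
key = puncture(master, 0xBEEF)           # ticket 0xBEEF has been redeemed
assert evaluate_punctured(key, 0xBEEF, 0x0001) == evaluate(master, 0x0001)
```

A punctured key after p punctures holds on the order of p · DEPTH seeds, which is the kind of storage cost the paper's domain-extension technique then reduces.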
2021
JOFC
Simple and Generic Constructions of Succinct Functional Encryption
We propose simple generic constructions of succinct functional encryption. Our key tool is a strong exponentially-efficient indistinguishability obfuscator (SXIO), which is the same as an indistinguishability obfuscator (IO) except that the size of an obfuscated circuit and the running time of the obfuscator are slightly smaller than those of a brute-force canonicalizer, which outputs the entire truth table of the circuit to be obfuscated. The "compression factor" of an SXIO indicates how much the SXIO compresses relative to the brute-force canonicalizer. In this study, we propose a significantly simple framework to construct succinct functional encryption via SXIO and show that SXIO is powerful enough to achieve cutting-edge cryptography. In particular, we propose the following constructions: single-key weakly succinct secret-key functional encryption (SKFE) from SXIO (even with a bad compression factor) and one-way functions; single-key weakly succinct public-key functional encryption (PKFE) from SXIO with a good compression factor and public-key encryption; and single-key weakly succinct PKFE from SXIO (even with a bad compression factor) and identity-based encryption. Our new framework has side benefits. Our constructions do not rely on any number-theoretic or lattice assumptions such as the decisional Diffie–Hellman and learning-with-errors assumptions. Moreover, all security reductions incur only a polynomial security loss. Known constructions of weakly succinct SKFE or PKFE from SXIO with polynomial security loss rely on number-theoretic or lattice assumptions. As corollaries of our results, relationships among SXIO, a few variants of SKFE, and a variant of randomized encoding are discovered.
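The brute-force canonicalizer baseline is worth seeing concretely. In the toy sketch below (ours, for illustration), "obfuscation" is just publishing the full truth table, which perfectly hides the circuit's structure but has size 2^n; an SXIO is required to beat this, with output size and obfuscation time around 2^{γn} · poly(n) for some compression factor γ < 1 (the abstract's "slightly smaller" than brute force).

```python
def brute_force_canonicalize(circuit, n_inputs: int):
    """'Obfuscate' by publishing the full truth table: 2^n entries."""
    return [circuit(x) for x in range(2 ** n_inputs)]

def evaluate(table, x: int):
    return table[x]

table = brute_force_canonicalize(lambda x: (37 * x + 5) % 2, n_inputs=10)
assert evaluate(table, 3) == (37 * 3 + 5) % 2
print("truth-table obfuscation size:", len(table), "entries")  # 2^10
```

The truth table reveals nothing beyond input/output behavior (it is a perfect canonicalizer), so the entire challenge of SXIO is achieving comparable hiding while staying noticeably below 2^n in size and time.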
2021
JOFC
The Deoxys AEAD Family
We present the Deoxys family of authenticated encryption schemes, which consists of Deoxys-I and Deoxys-II. Both are nonce-based authenticated encryption schemes with associated data and have either 128- or 256-bit keys. Deoxys-I is similar to OCB: it is single-pass but insecure when nonces are repeated; in contrast, Deoxys-II is nonce-misuse resistant. Deoxys-II was selected as the first choice for the defense-in-depth category in the final portfolio of the CAESAR competition. Deoxys uses a new family of tweakable block ciphers as its internal primitive, Deoxys-TBC, which follows the TWEAKEY framework (Jean, Nikolić, and Peyrin, ASIACRYPT 2014) and relies on the AES round function. Our benchmarks indicate that Deoxys does not sacrifice efficiency for security and performs very well both in software (e.g., Deoxys-I efficiency is similar to AES-GCM) and in hardware.
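To fix ideas, a tweakable block cipher exposes an extra public tweak input, E: K × T × {0,1}^n → {0,1}^n. The toy Python below illustrates only this interface and the TWEAKEY intuition of feeding key and tweak through a single schedule; it is emphatically not Deoxys-TBC (hashing (key, tweak) into a fresh AES key is a placeholder, and rekeying AES per block would be impractical).

```python
# Toy tweakable block cipher (TBC) interface -- NOT Deoxys-TBC.
# The SHA-256 "tweakey schedule" is a stand-in for illustration only.
import hashlib
from Crypto.Cipher import AES  # pip install pycryptodome

def tbc_encrypt(key: bytes, tweak: bytes, block: bytes) -> bytes:
    assert len(block) == 16
    subkey = hashlib.sha256(key + b"|" + tweak).digest()[:16]
    return AES.new(subkey, AES.MODE_ECB).encrypt(block)

# In an AE mode, the tweak encodes block index and domain separation, so each
# position gets an (effectively) independent permutation under one master key:
assert tbc_encrypt(b"k" * 16, b"block-0", b"m" * 16) != \
       tbc_encrypt(b"k" * 16, b"block-1", b"m" * 16)
```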
2021
JOFC
The Design and Evolution of OCB
We describe OCB3, the final version of OCB, a blockcipher mode for authenticated encryption (AE). We prove the construction secure, up to the birthday bound, assuming its underlying blockcipher is secure as a strong-PRP. We study the scheme’s software performance, comparing its speed, on multiple platforms, to a variety of other AE schemes. We reflect on the history and development of the mode.
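OCB3 is the version standardized in RFC 7253, and an implementation ships with PyCryptodome; a minimal encrypt/decrypt round trip (our usage illustration, not code from the paper) looks like this:

```python
# AES-OCB (RFC 7253, i.e., OCB3) round trip with PyCryptodome.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)
nonce = get_random_bytes(15)        # OCB nonces are at most 15 bytes
enc = AES.new(key, AES.MODE_OCB, nonce=nonce)
enc.update(b"associated data")      # authenticated but not encrypted
ciphertext, tag = enc.encrypt_and_digest(b"attack at dawn")

dec = AES.new(key, AES.MODE_OCB, nonce=nonce)
dec.update(b"associated data")
plaintext = dec.decrypt_and_verify(ciphertext, tag)  # ValueError on forgery
assert plaintext == b"attack at dawn"
```

Note that, as the abstract's single-pass framing implies, nonces must never repeat under the same key.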
2021
JOFC
The Number of Almost Perfect Nonlinear Functions Grows Exponentially
Almost perfect nonlinear (APN) functions play an important role in the design of block ciphers as they offer the strongest resistance against differential cryptanalysis. Despite more than 25 years of research, only a limited number of APN functions are known. In this paper, we show that a recent construction by Taniguchi provides at least $$\frac{\varphi (m)}{2}\left\lceil \frac{2^m+1}{3m} \right\rceil $$ inequivalent APN functions on the finite field with $${2^{2m}}$$ elements, where $$\varphi $$ denotes Euler’s totient function. This is a great improvement over previous results: for even m, the best known lower bound was $$\frac{\varphi (m)}{2}\left( \lfloor \frac{m}{4}\rfloor +1\right) $$; for odd m, no such lower bound was known at all. Moreover, we determine the automorphism group of Taniguchi’s APN functions.
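To see the improvement concretely, one can evaluate both bounds for small even m; this is just a worked-arithmetic check of the formulas stated above:

```python
# New bound phi(m)/2 * ceil((2^m + 1)/(3m)) vs. the previous even-m bound
# phi(m)/2 * (floor(m/4) + 1), in exact integer arithmetic.
from math import gcd

def phi(m: int) -> int:
    """Euler's totient function (naive count)."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

for m in (4, 6, 8, 10, 12, 14):
    new = phi(m) // 2 * ((2**m + 1 + 3*m - 1) // (3*m))  # ceiling division
    old = phi(m) // 2 * (m // 4 + 1)
    print(f"m={m:2d}: new >= {new:5d}, old >= {old}")
```

Already at m = 12 the new bound exceeds the old one by more than an order of magnitude, and it grows like $$2^m/m$$, hence the exponential growth in the title.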
2021
JOFC
Tight Tradeoffs in Searchable Symmetric Encryption
A searchable symmetric encryption (SSE) scheme enables a client to store data on an untrusted server while supporting keyword searches in a secure manner. Recent experiments have indicated that the practical relevance of such schemes heavily relies on the tradeoff between their space overhead, locality (the number of non-contiguous memory locations that the server accesses with each query), and read efficiency (the ratio between the number of bits the server reads with each query and the actual size of the answer). These experiments motivated Cash and Tessaro (EUROCRYPT ’14) and Asharov et al. (STOC ’16) to construct SSE schemes offering various such tradeoffs and to prove lower bounds for natural SSE frameworks. Unfortunately, the best-possible tradeoff has not been identified, and there are substantial gaps between the existing schemes and lower bounds, indicating that a better understanding of SSE is needed. We establish tight bounds on the tradeoff between the space overhead, locality and read efficiency of SSE schemes within two general frameworks that capture the memory access pattern underlying all existing schemes. First, we introduce the “pad-and-split” framework, refining that of Cash and Tessaro while still capturing the same existing schemes. Within our framework we significantly strengthen their lower bound, proving that any scheme with locality L must use space $$\Omega ( N \log N / \log L )$$ for databases of size N. This is a tight lower bound, matching the tradeoff provided by the scheme of Demertzis and Papamanthou (SIGMOD ’17), which is captured by our pad-and-split framework. Then, within the “statistical-independence” framework of Asharov et al. we show that their lower bound is essentially tight: We construct a scheme whose tradeoff matches their lower bound within an additive $$O(\log \log \log N)$$ factor in its read efficiency, once again improving upon the existing schemes. Our scheme offers optimal space and locality, and nearly optimal read efficiency that depends on the frequency of the queried keywords: For a keyword that is associated with $$n = N^{1 - \epsilon (n)}$$ document identifiers, the read efficiency is $$\omega (1) \cdot {\epsilon }(n)^{-1}+ O(\log \log \log N)$$ when retrieving its identifiers (where the $$\omega (1)$$ term may be arbitrarily small, and $$\omega (1) \cdot {\epsilon }(n)^{-1}$$ is the lower bound proved by Asharov et al.). In particular, for any keyword that is associated with at most $$N^{1 - 1/o(\log \log \log N)}$$ document identifiers (i.e., for any keyword that is not exceptionally common), we provide read efficiency $$O(\log \log \log N)$$ when retrieving its identifiers.
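For a quick sense of scale of the pad-and-split bound, here is a small evaluation of $$\Omega(N \log N / \log L)$$ at sample parameters (our illustration; constants are ignored and logs are base 2):

```python
# Plugging sample values into the pad-and-split space lower bound
# Omega(N log N / log L); constants ignored, logs base 2.
from math import log2

N = 2**30                      # database size (number of entries)
for L in (2, 64, 2**10, 2**15):
    bound = N * log2(N) / log2(L)
    print(f"L = 2^{int(log2(L)):2d}: space = Omega({bound:.2e})")
```

Since $$\log L$$ sits in the denominator, constant locality forces an $$\Omega(\log N)$$-factor space blow-up, while locality $$L = N^{\epsilon}$$ relaxes the bound to $$\Omega(N/\epsilon)$$.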
2021
JOFC
Tighter Security Proofs for GPV-IBE in the Quantum Random Oracle Model
In (STOC, 2008), Gentry, Peikert, and Vaikuntanathan proposed the first identity-based encryption (GPV-IBE) scheme based on a post-quantum assumption, namely the learning with errors assumption. Since their proof was given only in the random oracle model (ROM) rather than the quantum random oracle model (QROM), it remained unclear whether the scheme was truly post-quantum. In (CRYPTO, 2012), Zhandry developed new techniques for the QROM and proved security of GPV-IBE in the QROM, answering in the affirmative that GPV-IBE is indeed post-quantum. However, since Zhandry’s general technique incurred a large reduction loss, there was a wide gap between the concrete efficiency and security level provided by GPV-IBE in the ROM and in the QROM. Furthermore, regardless of being in the ROM or QROM, GPV-IBE is not known to have a tight reduction in the multi-challenge setting. Considering that a real-world adversary can obtain many ciphertexts, it is desirable to have a security proof that does not degrade with the number of challenge ciphertexts. In this paper, we provide a much tighter proof for GPV-IBE in the QROM in the single-challenge setting. In addition, we show that a slight variant of GPV-IBE has an almost tight reduction in the multi-challenge setting both in the ROM and the QROM, where the reduction loss is independent of the number of challenge ciphertexts. Our proof departs from the traditional partitioning technique and resembles the approach used in the public-key encryption scheme of Cramer and Shoup (CRYPTO, 1998). Our proof strategy allows the reduction algorithm to program the random oracle the same way for all identities, and it naturally fits the QROM setting, where an adversary may query a superposition of all identities in one random oracle query. Notably, our proofs are much simpler than Zhandry’s and conceptually much easier to follow for cryptographers not familiar with quantum computation. Although at a high level the techniques used for the single- and multi-challenge settings are similar, the technical details are quite different. For the multi-challenge setting, we rely on the Katz–Wang technique (CCS, 2003) to overcome some obstacles regarding the leftover hash lemma.
2021
JOFC
Toward Non-interactive Zero-Knowledge Proofs for NP from LWE
Non-interactive zero-knowledge ($$\mathsf {NIZK}$$) is a fundamental primitive that is widely used in the construction of cryptographic schemes and protocols. Our main result is a reduction from constructing $$\mathsf {NIZK}$$ proof systems for all of $$\mathbf {NP}$$ based on $$\mathsf {LWE}$$, to constructing a $$\mathsf {NIZK}$$ proof system for a particular computational problem on lattices, namely a decisional variant of the bounded distance decoding ($$\mathsf {BDD}$$) problem. That is, we show that assuming $$\mathsf {LWE}$$, every language $$L \in \mathbf {NP}$$ has a $$\mathsf {NIZK}$$ proof system if (and only if) the decisional $$\mathsf {BDD}$$ problem has a $$\mathsf {NIZK}$$ proof system. This (almost) confirms a conjecture of Peikert and Vaikuntanathan (CRYPTO, 2008). To construct our $$\mathsf {NIZK}$$ proof system, we introduce a new notion that we call prover-assisted oblivious ciphertext sampling ($$\mathsf {POCS}$$), which we believe to be of independent interest. This notion extends the idea of oblivious ciphertext sampling, which allows one to sample ciphertexts without knowing the underlying plaintext. Specifically, we augment the oblivious ciphertext sampler with access to an (untrusted) prover to help it accomplish this task. We show that the existence of encryption schemes with a $$\mathsf {POCS}$$ procedure, as well as some additional natural requirements, suffices for obtaining $$\mathsf {NIZK}$$ proofs for $$\mathbf {NP}$$. We further show that such encryption schemes can be instantiated based on $$\mathsf {LWE}$$, assuming the existence of a $$\mathsf {NIZK}$$ proof system for the decisional $$\mathsf {BDD}$$ problem.
2021
JOFC
Translating the Discrete Logarithm Problem on Jacobians of Genus 3 Hyperelliptic Curves with $(\ell ,\ell ,\ell )$-Isogenies
We give an algorithm to compute $$(\ell ,\ell ,\ell )$$-isogenies from the Jacobians of genus three hyperelliptic curves to the Jacobians of non-hyperelliptic curves over a finite field of characteristic different from 2 in time $$\tilde{O}(\ell ^3)$$, where $$\ell $$ is an odd prime which is coprime to the characteristic. An important application is to reduce the discrete logarithm problem in the Jacobian of a hyperelliptic curve to the corresponding problem in the Jacobian of a non-hyperelliptic curve.
2021
JOFC
Unconditionally Secure Computation Against Low-Complexity Leakage
We consider the problem of constructing leakage-resilient circuit compilers that are secure against global leakage functions with bounded output length. By global, we mean that the leakage can depend on all circuit wires and output a low-complexity function (represented as a multi-output Boolean circuit) applied on these wires. In this work, we design compilers both in the stateless (a.k.a. single-shot leakage) setting and the stateful (a.k.a. continuous leakage) setting that are unconditionally secure against $$\mathsf {AC}^0$$ leakage and similar low-complexity classes. In the stateless case, we show that the original private circuits construction of Ishai, Sahai, and Wagner (Crypto 2003) is actually secure against $$\mathsf {AC}^0$$ leakage. In the stateful case, we modify the construction of Rothblum (Crypto 2012), obtaining a simple construction with unconditional security. Prior works that designed leakage-resilient circuit compilers against $$\mathsf {AC}^0$$ leakage had to rely either on secure hardware components (Faust et al., Eurocrypt 2010, Miles–Viola, STOC 2013) or on (unproven) complexity-theoretic assumptions (Rothblum, Crypto 2012).
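The stateless result says the plain Ishai–Sahai–Wagner (ISW) compiler already resists $$\mathsf {AC}^0$$ leakage, and its two core gadgets are easy to sketch. The Python below is our illustrative rendering of the standard ISW encoding and multiplication gadget over GF(2); the share count is chosen arbitrarily, whereas a real compiler derives it from the leakage bound and compiles whole circuits:

```python
# Standard ISW (Crypto 2003) gadgets over GF(2): XOR-sharing, share-wise XOR,
# and the randomized AND gadget. Illustrative sketch, not a full compiler.
import secrets

def xor_all(shares):
    """Decode: XOR all shares together."""
    acc = 0
    for s in shares:
        acc ^= s
    return acc

def encode(bit: int, n: int) -> list:
    """Split a bit into n additive (XOR) shares."""
    shares = [secrets.randbits(1) for _ in range(n - 1)]
    return shares + [bit ^ xor_all(shares)]

def xor_gadget(a: list, b: list) -> list:
    """XOR gates are computed share-wise, with no fresh randomness."""
    return [x ^ y for x, y in zip(a, b)]

def and_gadget(a: list, b: list) -> list:
    """ISW multiplication: fresh randomness r[i][j] rerandomizes the
    cross terms a[i]b[j] so small sets of wires look uniform."""
    n = len(a)
    r = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r[i][j] = secrets.randbits(1)
            r[j][i] = r[i][j] ^ (a[i] & b[j]) ^ (a[j] & b[i])
    return [a[i] & b[i] ^ xor_all(r[i][j] for j in range(n) if j != i)
            for i in range(n)]

# Sanity check: the gadget's output shares decode to AND of the inputs.
for x in (0, 1):
    for y in (0, 1):
        assert xor_all(and_gadget(encode(x, 5), encode(y, 5))) == x & y
```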
2021
JOFC
Watermarking Cryptographic Functionalities from Standard Lattice Assumptions
A software watermarking scheme allows one to embed a “mark” into a program without significantly altering the behavior of the program. Moreover, it should be difficult to remove the watermark without destroying the functionality of the program. Recently, Cohen et al. (STOC 2016) and Boneh et al. (PKC 2017) showed how to watermark cryptographic functions such as pseudorandom functions (PRFs) using indistinguishability obfuscation. Notably, in their constructions, the watermark remains intact even against arbitrary removal strategies. A natural question is whether we can build watermarking schemes from standard assumptions that achieve this strong mark-unremovability property. We give the first construction of a watermarkable family of PRFs that satisfies this strong mark-unremovability property from standard lattice assumptions (namely, the learning with errors (LWE) and the one-dimensional short integer solution (SIS) problems). As part of our construction, we introduce a new cryptographic primitive called a translucent PRF. We then give a concrete construction of a translucent PRF family from standard lattice assumptions, which in turn yields a watermarkable family of PRFs from the same assumptions.