International Association for Cryptologic Research

CryptoDB

Recent videos of IACR talks

Year
Venue
Title
2020
ASIACRYPT
Twisted-PHS: Using the Product Formula to Solve Approx-SVP in Ideal Lattices 📺
Approx-SVP is a well-known hard problem on lattices, which asks to find short vectors on a given lattice, but its variant restricted to ideal lattices (which correspond to ideals of the ring of integers $\mathcal{O}_{K}$ of a number field $K$) is still not fully understood. For a long time, the best known algorithm to solve this problem on ideal lattices was the same as for arbitrary lattices. But recently, a series of works has suggested that solving this problem could be easier in ideal lattices than in arbitrary ones, in particular in the quantum setting. Our main contribution is to propose a new "twisted" version of the PHS algorithm (by Pellet-Mary, Hanrot and Stehlé, 2019), which we call Twisted-PHS. As a minor contribution, we also propose several improvements of the PHS algorithm. On the theoretical side, we prove that our Twisted-PHS algorithm performs at least as well as the original PHS algorithm. On the practical side though, we provide a full implementation of our algorithm which suggests that much better approximation factors are achieved, and that the given lattice bases are much more orthogonal than the ones used in PHS. To our knowledge, this is the first time this type of algorithm has been completely implemented and tested for fields of degrees up to 60.
2020
ASIACRYPT
Succinct Diophantine-Satisfiability Arguments 📺
A Diophantine equation is a multi-variate polynomial equation with integer coefficients, and it is satisfiable if it has a solution with all unknowns taking integer values. Davis, Putnam, Robinson and Matiyasevich showed that the general Diophantine satisfiability problem is undecidable (giving a negative answer to Hilbert's tenth problem), but it is nevertheless possible to argue knowledge of a solution in zero-knowledge if the prover knows one. We provide the first succinct honest-verifier zero-knowledge argument for the satisfiability of Diophantine equations with a communication complexity and a round complexity that grow logarithmically in the size of the polynomial equation. The security of our argument relies on standard assumptions on hidden-order groups. As the argument requires committing to integers, we introduce a new integer-commitment scheme that has much smaller parameters than Damgard and Fujisaki's scheme. We finally show how to succinctly argue knowledge of solutions to several NP-complete problems and cryptographic problems by encoding them as Diophantine equations.
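To make the last step concrete, here is a minimal sketch (not from the paper; all names are illustrative) of how an NP statement such as 3-SAT can be encoded as a single Diophantine equation: an assignment is Boolean and satisfying exactly when a sum of squares evaluates to zero, i.e., when the corresponding polynomial has an integer root.

```python
# A minimal sketch (not from the paper; all names illustrative) of encoding
# 3-SAT as a single Diophantine equation: the polynomial below has an
# integer root iff some Boolean assignment satisfies every clause.

def literal(assign, lit):
    """Value of a literal: index i means x_i, negative index means NOT x_i."""
    v = assign[abs(lit)]
    return v if lit > 0 else 1 - v

def diophantine_value(assign, clauses):
    """Sum of squares: zero iff `assign` is Boolean and satisfies all clauses."""
    total = 0
    for x in assign.values():
        total += (x * x - x) ** 2              # x in {0,1}  <=>  x^2 - x = 0
    for clause in clauses:
        prod = 1
        for lit in clause:
            prod *= 1 - literal(assign, lit)   # clause true <=> product is 0
        total += prod ** 2
    return total

clauses = [(1, 2), (-1, 3)]                    # (x1 OR x2) AND (NOT x1 OR x3)
satisfying = {1: 1, 2: 0, 3: 1}
falsifying = {1: 1, 2: 0, 3: 0}
assert diophantine_value(satisfying, clauses) == 0   # an integer root
assert diophantine_value(falsifying, clauses) != 0   # not a root
print("root of the Diophantine encoding:", satisfying)
```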
2020
ASIACRYPT
On the Adaptive Security of MACs and PRFs 📺
We consider the security of two of the most commonly used cryptographic primitives--message authentication codes (MACs) and pseudorandom functions (PRFs)--in a multi-user setting with adaptive corruption. While it is well known that any secure MAC or PRF is also multi-user secure under adaptive corruption, the trivial reduction induces a security loss that is linear in the number of users. Our main result shows that black-box reductions from "standard" assumptions cannot be used to provide a tight, or even a linear-preserving, security reduction for adaptive multi-user secure deterministic stateless MACs, and thus also PRFs. In other words, a security loss that grows with the number of users is necessary for any such black-box reduction.
2020
ASIACRYPT
Individual Simulations 📺
We develop an individual simulation technique that explicitly makes use of particular properties/structures of a given adversary's functionality. Using this simulation technique, we obtain the following results. 1. We construct the first protocols that break previous black-box barriers under the standard hardness of factoring, both of which are polynomial-time simulatable against all a-priori bounded polynomial-size distinguishers: a) a two-round selective opening secure commitment scheme; b) a three-round concurrent zero-knowledge and concurrent witness-hiding argument for NP in the bare public-key model. 2. We present a simpler two-round weak zero-knowledge and witness-hiding argument for NP in the plain model under the sub-exponential hardness of factoring. Our technique also yields a significantly simpler proof that existing distinguisher-dependent simulatable zero-knowledge protocols are also polynomial-time simulatable against all distinguishers of a-priori bounded polynomial size. The core conceptual idea underlying our individual simulation technique is the observation that nearly optimal extractors exist for all hard distributions: for any NP-instance(s) sampling algorithm, there exists a polynomial-size witness extractor (depending on the sampler's functionality) that almost outperforms any circuit of a-priori bounded polynomial size in terms of the success probability.
2020
ASIACRYPT
Circuit Amortization Friendly Encodings and their Application to Statistically Secure Multiparty Computation 📺
At CRYPTO 2018, Cascudo et al. introduced Reverse Multiplication Friendly Embeddings (RMFEs). These are a mechanism to compute $\delta$ parallel evaluations of the same arithmetic circuit over a field $\mathbb{F}_q$ at the cost of a single evaluation of that circuit in $\mathbb{F}_{q^d}$, where $\delta < d$. Due to this inequality, RMFEs are a useful tool when protocols require working over $\mathbb{F}_{q^d}$ but one is only interested in computing over $\mathbb{F}_q$. In this work we introduce Circuit Amortization Friendly Encodings (CAFEs), which generalize RMFEs while having concrete efficiency in mind. For a Galois ring $R = GR(2^k,d)$, CAFEs allow one to compute certain circuits over $\mathbb{Z}_{2^k}$ at the cost of a single secure multiplication in $R$. We present three CAFE instantiations, which we apply to the protocol for MPC over $\mathbb{Z}_{2^k}$ via Galois rings by Abspoel et al. (TCC 2019). Our protocols allow for efficient switching between the different CAFEs, as well as between computation over $GR(2^k,d)$ and $\mathbb{F}_{2^{d}}$ in a way that preserves the CAFE in both rings. This adaptability leads to efficiency gains for e.g. Machine Learning applications, which can be represented as highly parallel circuits over $\mathbb{Z}_{2^k}$ followed by bit-wise operations. From an implementation of our techniques, we estimate that an SVM can be evaluated on 250 images in parallel up to $7\times$ more efficiently using our techniques, compared to using the protocols from Abspoel et al. (TCC 2019).
2020
ASIACRYPT
Unbounded Dynamic Predicate Compositions in ABE from Standard Assumptions 📺
At Eurocrypt'19, Attrapadung presented several transformations that dynamically compose a set of attribute-based encryption (ABE) schemes for simpler predicates into a new ABE scheme for more expressive predicates. Due to the powerful unbounded and modular nature of his compositions, many new ABE schemes can be obtained in a systematic manner (including those that resolved some open problems at the time). However, his approach heavily relies on so-called $q$-type assumptions, which are not standard. Devising such powerful compositions from standard assumptions was left as an important open problem. In this paper, we present a new framework for constructing ABE schemes that allow unbounded and dynamic predicate compositions among them, and show that the adaptive security of these composed ABE schemes is preserved by relying only on the standard matrix Diffie-Hellman (MDDH) assumption. This resolves the open problem posed by Attrapadung. As for applications, we obtain various ABEs that are the first such instantiations of their kinds from standard assumptions. These include the following adaptively secure \emph{large-universe} ABEs for Boolean formulae under MDDH: - The first completely unbounded monotone key-policy (KP)/ciphertext-policy (CP) ABE. Previously, such an ABE was only recently proposed, and only in the KP and \emph{small-universe} flavor (Kowalczyk and Wee, Eurocrypt'19). - The first completely unbounded non-monotone KP/CP-ABE. In particular, our ABEs support a new type of non-monotonicity that subsumes the two previous types of non-monotonicity, namely, by Ostrovsky et al. (CCS'07) and by Okamoto and Takashima (CRYPTO'10). - The first non-monotone KP and CP-ABE with constant-size ciphertexts and secret keys, respectively. - The first monotone KP and CP-ABE with constant-size secret keys and ciphertexts, respectively. At the core of our framework lies a new \emph{partially symmetric} design of the core 1-key 1-ciphertext oracle component called Key Encoding Indistinguishability, which exploits the symmetry so as to obtain compositions.
2020
ASIACRYPT
Simpler Statistically Sender Private Oblivious Transfer from Ideals of Cyclotomic Integers 📺
We present a two-message oblivious transfer protocol achieving statistical sender privacy and computational receiver privacy based on the RLWE assumption for cyclotomic number fields. This work improves upon prior lattice-based statistically sender-private oblivious transfer protocols by reducing the total communication between parties by a factor of O(n log q) for the transfer of length-O(n) messages. Prior work of Brakerski and Dottling uses transference theorems to show that either a lattice or its dual must have short vectors, the existence of which guarantees lossy encryption for encodings with respect to that lattice, and therefore statistical sender privacy. In the case of ideal lattices from embeddings of cyclotomic integers, the existence of one short vector implies the existence of many, and therefore encryption with respect to either a lattice or its dual is guaranteed to "lose" more information about the message than can be ensured in the case of general lattices. This additional structure of ideals of cyclotomic integers allows for efficiency improvements beyond those that are typical when moving from the generic to the ideal lattice setting, resulting in smaller message sizes for sender and receiver, as well as a protocol that is simpler to describe and analyze.
2020
ASIACRYPT
Efficient and Round-Optimal Oblivious Transfer and Commitment with Adaptive Security 📺
We construct the most efficient two-round adaptively secure bit-OT in the Common Random String (CRS) model. The scheme is UC secure under the Decisional Diffie-Hellman (DDH) assumption. It incurs O(1) exponentiations and sends O(1) group elements, whereas the state of the art requires O(k^2) exponentiations and communicates poly(k) bits, where k is the computational security parameter. Along the way, we obtain several other efficient UC-secure OT protocols under DDH: - The most efficient two-round adaptive string-OT protocol yet, assuming a global programmable random oracle. Furthermore, the protocol can be made non-interactive in the simultaneous message setting, assuming random inputs for the sender. - The first two-round string-OT with amortized constant exponentiations and communication overhead which is secure in the global observable random oracle model. - The first two-round receiver-equivocal string-OT in the CRS model that incurs constant computation and communication overhead. We also obtain the first non-interactive adaptive string UC-commitment in the CRS model which incurs a sublinear communication overhead in the security parameter. Specifically, we commit to polylog(k) bits while communicating O(k) bits. Moreover, it is additively homomorphic. We can also extend our results to the single CRS model where multiple sessions share the same CRS. As a corollary, we obtain a two-round adaptively secure MPC protocol in this model.
2020
ASIACRYPT
Two-Pass Authenticated Key Exchange with Explicit Authentication and Tight Security 📺
We propose a generic construction of a 2-pass authenticated key exchange (AKE) scheme with explicit authentication from key encapsulation mechanism (KEM) and signature (SIG) schemes. We improve the security model due to Gjosteen and Jager [Crypto2018] to a stronger one. In the strong model, if a replayed message is accepted by some user, the authentication of AKE is broken. We define a new security notion named "IND-mCPA with adaptive reveals" for KEM. When the underlying KEM has such security and SIG has unforgeability with adaptive corruptions, our construction of AKE equipped with counters as states is secure in the strong model, and stateless AKE without counters is secure in the traditional model. We also present a KEM possessing tight "IND-mCPA security with adaptive reveals" from the Computational Diffie-Hellman assumption in the random oracle model. When the generic construction of AKE is instantiated with this KEM and the available SIG by Gjosteen and Jager [Crypto2018], we obtain the first practical 2-pass AKE with tight security and explicit authentication. In addition, the integration of the tightly IND-mCCA secure KEM (derived from PKE by Han et al. [Crypto2019]) and the tightly secure SIG by Bader et al. [TCC2015] results in the first tightly secure 2-pass AKE with explicit authentication in the standard model.
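To illustrate the generic KEM + SIG pattern (not the paper's concrete tightly secure components, and without its counter-based state), here is a toy 2-pass flow with a hashed Diffie-Hellman KEM and Schnorr signatures over a deliberately tiny group; all parameters and helper names are illustrative.

```python
# A toy sketch of the generic "2-pass AKE = KEM + SIG" pattern (illustrative
# stand-ins only: hashed-DH KEM, Schnorr signatures, tiny parameters; not
# the paper's tightly secure components or its counter-based state).
import hashlib, secrets

P, Q, G = 23, 11, 2                      # toy group: G has order Q = 11 mod P = 23

def H(*parts):
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# --- SIG: Schnorr signatures over the toy group ---
def sig_keygen():
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def sign(x, msg):
    r = secrets.randbelow(Q - 1) + 1
    R = pow(G, r, P)
    return R, (r + (H(R, msg) % Q) * x) % Q

def verify(y, msg, sig):
    R, s = sig
    return pow(G, s, P) == (R * pow(y, H(R, msg) % Q, P)) % P

# --- KEM: hashed Diffie-Hellman ---
def kem_keygen():
    a = secrets.randbelow(Q - 1) + 1
    return a, pow(G, a, P)

def encap(pk):
    b = secrets.randbelow(Q - 1) + 1
    return pow(G, b, P), H("kem", pow(pk, b, P))     # (ciphertext, key)

def decap(a, ct):
    return H("kem", pow(ct, a, P))

# --- the 2-pass AKE flow with explicit authentication ---
skA, pkA = sig_keygen()                  # long-term signature keys
skB, pkB = sig_keygen()

eskA, epkA = kem_keygen()                # pass 1: Alice -> Bob
m1 = (epkA, sign(skA, epkA))

assert verify(pkA, m1[0], m1[1])         # Bob explicitly authenticates Alice
ct, K_B = encap(m1[0])                   # pass 2: Bob -> Alice
m2 = (ct, sign(skB, H(m1[0], ct)))
key_B = H("session", K_B, m1[0], ct)

assert verify(pkB, H(m1[0], m2[0]), m2[1])   # Alice authenticates Bob
key_A = H("session", decap(eskA, m2[0]), m1[0], m2[0])
assert key_A == key_B
print("both sides derived the same session key")
```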
2020
ASIACRYPT
Succinct and Adaptively Secure ABE for Arithmetic Branching Programs from k-Lin 📺
We present succinct and adaptively secure attribute-based encryption (ABE) schemes for arithmetic branching programs, based on k-Lin in pairing groups. Our key-policy ABE scheme has ciphertexts of constant size, independent of the length of the attributes, and our ciphertext-policy ABE scheme has secret keys of constant size. Our schemes improve upon the recent succinct ABE schemes in [Tomida and Attrapadung, ePrint '20], which only handle Boolean formulae. All other prior succinct ABE schemes either achieve only selective security or rely on q-type assumptions. Our schemes are obtained through a general and modular approach that combines a public-key inner product functional encryption satisfying a new security notion called gradual simulation security and an information-theoretic randomized encoding scheme called an arithmetic key garbling scheme.
2020
ASIACRYPT
CCA Updatable Encryption Against Malicious Re-Encryption Attacks 📺
Updatable encryption (UE) is an attractive primitive, which allows the secret key of outsourced encrypted data to be updated to a fresh one periodically. Several elegant works exist studying various security properties. We notice several major issues in existing security models of (ciphertext-dependent) updatable encryption, in particular integrity and CCA security. The adversary in those models is only allowed to request the server to re-encrypt {\em honestly} generated ciphertexts, while in practice an attacker could try to inject arbitrary ciphertexts into the server as she wishes. Such malformed ciphertexts could be updated and leveraged by the adversary and cause serious security issues. In this paper, we fill the gap and strengthen the security definitions in multiple aspects: most importantly, our integrity and CCA security models remove the restriction in previous models and achieve standard notions of integrity and CCA security in the setting of updatable encryption. Along the way, we refine the security model to capture post-compromise security and enhance the re-encryption indistinguishability to the CCA style. Guided by the new models, we provide a novel construction, ReCrypt, which satisfies our strengthened security definitions. The technical building block of a homomorphic hash from a group may be of independent interest. We also study the relations among security notions; and, a bit surprisingly, the folklore result in authenticated encryption that IND-CPA plus ciphertext integrity implies IND-CCA security does {\em not} hold for ciphertext-dependent updatable encryption.
2020
ASIACRYPT
Efficient Fully Secure Computation via Distributed Zero-Knowledge Proofs 📺
Secure computation protocols enable mutually distrusting parties to compute a function of their private inputs while revealing nothing but the output. Protocols with {\em full security} (also known as {\em guaranteed output delivery}) in particular protect against denial-of-service attacks, guaranteeing that honest parties receive a correct output. This feature can be realized in the presence of an honest majority, and significant research effort has gone toward attaining full security with good asymptotic and concrete efficiency. We present an efficient protocol for {\em any constant} number of parties $n$, with full security against $t<n/2$ corrupted parties, that makes black-box use of a pseudorandom generator. Our protocol evaluates an arithmetic circuit $C$ over a finite ring $R$ (either a finite field or $R=\mathbb{Z}_{2^k}$) with communication complexity of $\frac{3t}{2t+1}S + o(S)$ $R$-elements per party, where $S$ is the number of multiplication gates in $C$ (namely, $<1.5$ elements per party per gate). This matches the best known protocols for the semi-honest model up to the sublinear additive term. For a small number of parties $n$, this improves over a recent protocol of Goyal {\em et al.} (Crypto 2020) by a constant factor for circuits over large fields, and by at least an $\Omega(\log n)$ factor for Boolean circuits or circuits over rings. Our protocol provides new methods for applying the distributed zero-knowledge proofs of Boneh {\em et al.} (Crypto 2019), which only require logarithmic communication, for compiling semi-honest protocols into fully secure ones in the more challenging case of $t>1$ corrupted parties. As a secondary contribution, we show that similar techniques can be used to compile the best known honest-majority protocols for an arbitrary (super-constant) number of semi-honest parties into ones that achieve {\em security with abort} against malicious parties, with sublinear additive cost. Our protocol relies on {\em replicated secret sharing} to minimize communication and simplify the mechanism for achieving full security; this results in computational cost that scales exponentially with $n$. Our main protocol builds on a new honest-majority protocol for verifying the correctness of multiplication triples by making general use of distributed zero-knowledge proofs.
While the protocol only achieves the weaker notion of {\em security with abort}, it applies to any linear secret-sharing scheme and provides a conceptually simpler, more general, and more efficient alternative to previous protocols from the literature. In particular, it can be combined with the Fiat-Shamir heuristic to simultaneously achieve logarithmic communication complexity and constant round complexity.
2020
ASIACRYPT
Practical Exact Proofs from Lattices: New Techniques to Exploit Fully-Splitting Rings 📺
We propose a lattice-based zero-knowledge proof system for exactly proving knowledge of a ternary solution $\vec{s} \in \{-1,0,1\}^n$ to a linear equation $A\vec{s}=\vec{u}$ over $\mathbb{Z}_q$, which improves upon the protocol by Bootle, Lyubashevsky and Seiler (CRYPTO 2019) by producing proofs that are shorter by a factor of $7.5$. At the core lies a technique that utilizes the module-homomorphic BDLOP commitment scheme (SCN 2018) over the fully splitting cyclotomic ring $\mathbb{Z}_q[X]/(X^d + 1)$ to prove scalar products with the NTT vector of a secret polynomial.
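The fully splitting structure that this technique exploits can be seen numerically: when $q \equiv 1 \pmod{2d}$, the ring $\mathbb{Z}_q[X]/(X^d + 1)$ splits into $d$ copies of $\mathbb{Z}_q$, and the NTT maps ring multiplication to pointwise multiplication of NTT vectors. A minimal sketch with toy parameters (d = 4, q = 17; not the paper's parameters):

```python
# A small numerical sketch of the "fully splitting" structure: for q = 17,
# d = 4 we have q = 1 (mod 2d), psi = 2 is a primitive 2d-th root of unity
# mod q, and the roots of X^4 + 1 are the odd powers of psi. Evaluating at
# these roots (the NTT) turns ring multiplication into pointwise products.

q, d, psi = 17, 4, 2
roots = [pow(psi, 2 * i + 1, q) for i in range(d)]   # psi^1, psi^3, psi^5, psi^7

def ntt(a):
    """Evaluate the polynomial a(X) at every root of X^d + 1."""
    return [sum(c * pow(r, j, q) for j, c in enumerate(a)) % q for r in roots]

def ring_mul(a, b):
    """Schoolbook multiplication in Z_q[X]/(X^d + 1), i.e. with X^d = -1."""
    res = [0] * d
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            sign = 1 if k < d else -1
            res[k % d] = (res[k % d] + sign * ai * bj) % q
    return res

a, b = [1, 2, 3, 4], [5, 6, 7, 8]
lhs = ntt(ring_mul(a, b))
rhs = [(x * y) % q for x, y in zip(ntt(a), ntt(b))]
assert lhs == rhs          # ring product <-> pointwise product of NTT vectors
print("NTT of a*b:", lhs)
```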
2020
ASIACRYPT
MOTIF: (Almost) Free Branching in GMW via Vector-Scalar Multiplication 📺
MPC functionalities are increasingly specified in high-level languages, where control-flow constructions such as conditional statements are extensively used. Today, concretely efficient MPC protocols are circuit-based and must evaluate all conditional branches at high cost to hide the taken branch. The Goldreich-Micali-Wigderson, or GMW, protocol is a foundational circuit-based technique that realizes MPC for p players and is secure against up to p-1 semi-honest corruptions. While GMW requires communication rounds proportional to the computed circuit's depth, it is effective in many natural settings. Our main contribution is MOTIF (Minimizing OTs for IFs), a novel GMW extension that evaluates conditional branches almost for free by amortizing Oblivious Transfers (OTs) across branches. That is, we simultaneously evaluate multiple independent AND gates, one gate from each mutually exclusive branch, by representing them as a single cheap vector-scalar multiplication (VS) gate. For 2PC with b branches, we simultaneously evaluate up to b AND gates using only two 1-out-of-2 OTs of b-bit secrets. This is a factor ~b improvement over the state-of-the-art 2b 1-out-of-2 OTs of 1-bit secrets. Our factor b improvement generalizes to the multiparty setting as well: b AND gates consume only p(p-1) 1-out-of-2 OTs of b-bit secrets. We implemented our approach and report its performance. For 2PC and a circuit with 16 branches, each comparing two length-65000 bitstrings, MOTIF outperforms standard GMW in terms of communication by ~9.4x. Total wall-clock time is improved by 4.1-9.2x depending on network settings. Our work is in the semi-honest model, tolerating all-but-one corruptions.
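A sketch of the core amortization (with a simulated, insecure OT purely for illustration): one 1-out-of-2 OT of b-bit strings suffices for one party to multiply its secret bit into the other party's b-bit vector, producing XOR shares of the product; MOTIF uses two such OTs, one per direction, to realize the cross terms of b simultaneous AND gates.

```python
# Sketch of the vector-scalar trick: with ONE 1-out-of-2 OT of b-bit
# strings, party P2 multiplies P1's secret bit s into its b-bit vector v,
# leaving the parties with XOR shares of s*v. The OT below is simulated
# in the clear purely for illustration.
import secrets

b = 8                                   # branches / vector length

def ot_1of2(m0, m1, choice):
    """Insecure stand-in for a 1-out-of-2 OT of b-bit strings."""
    return m1 if choice else m0

def vector_scalar_shares(s, v):
    """P1 holds bit s, P2 holds b-bit vector v; output XOR shares of s*v."""
    r = secrets.randbits(b)             # P2's fresh mask
    share_p2 = r
    share_p1 = ot_1of2(r, r ^ v, s)     # P1 learns r (s=0) or r^v (s=1)
    return share_p1, share_p2

s = 1
v = 0b10110010
p1, p2 = vector_scalar_shares(s, v)
assert p1 ^ p2 == (v if s else 0)       # shares reconstruct s*v
print(f"shares of s*v: {p1:08b} ^ {p2:08b} = {p1 ^ p2:08b}")
```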
2020
ASIACRYPT
Towards Classical Hardness of Module-LWE: The Linear Rank Case 📺
We prove that the module learning with errors (M-LWE) problem with arbitrary polynomial-sized modulus $p$ is \emph{classically} at least as hard as standard worst-case lattice problems, as long as the module rank $d$ is not smaller than the ring dimension $n$. Previous publications only showed the hardness under quantum reductions. We achieve this result in an analogous manner as in the case of the learning with errors (LWE) problem. First, we show the classical hardness of M-LWE with an exponential-sized modulus. In a second step, we prove the hardness of M-LWE using a binary secret. And finally, we provide a modulus reduction technique. The complete result applies to the class of power-of-two cyclotomic fields. However, several tools hold for more general classes of number fields and may be of independent interest.
2020
ASIACRYPT
Maliciously Secure Matrix Multiplication with Applications to Private Deep Learning 📺
Computing on data in a manner that preserves privacy is of growing importance. Multi-Party Computation (MPC) and Homomorphic Encryption (HE) are two cryptographic techniques for privacy-preserving computation. In this work, we develop efficient UC-secure multiparty protocols for matrix multiplications and two-dimensional convolutions. We build upon the SPDZ framework and integrate the state-of-the-art HE algorithms for matrix multiplication. Our protocol achieves communication cost linear in the input and output dimensions and not in the number of multiplication operations. We eliminate the "triple sacrifice" step of SPDZ to improve efficiency and simplify the zero-knowledge proofs. We implemented our protocols and benchmarked them against the SPDZ LowGear variant (Keller et al., Eurocrypt'18). For multiplying two square matrices of size 128, we reduce the communication cost from 1.54 GB to 12.46 MB, an improvement of over two orders of magnitude that only grows with larger matrix sizes. For evaluating all convolution layers of the ResNet-50 neural network, the communication cost is reduced from 5 TB to 41 GB.
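A sketch of why matrix-level protocols can beat gate-by-gate multiplication (using a classic Beaver-style matrix triple dealt by a trusted dealer, rather than the paper's HE-generated triples): the parties exchange only two masked matrices, so communication scales with the input/output dimensions rather than with the n^3 scalar products. All names are illustrative.

```python
# Matrix multiplication on additive secret shares via a matrix Beaver
# triple (U, V, W = U@V). The only traffic is the masked matrices
# D = A - U and E = B - V, independent of the number of scalar products.
# Here a trusted dealer hands out the triple, purely for illustration.
import random
MOD = 2**16

def rand_mat(n):
    return [[random.randrange(MOD) for _ in range(n)] for _ in range(n)]

def add(X, Y): return [[(x + y) % MOD for x, y in zip(r, s)] for r, s in zip(X, Y)]
def sub(X, Y): return [[(x - y) % MOD for x, y in zip(r, s)] for r, s in zip(X, Y)]
def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) % MOD
             for j in range(n)] for i in range(n)]

def share(X):
    X0 = rand_mat(len(X))
    return X0, sub(X, X0)

n = 3
A, B = rand_mat(n), rand_mat(n)
U, V = rand_mat(n), rand_mat(n)
W = mul(U, V)                        # dealer: matrix Beaver triple
A0, A1 = share(A); B0, B1 = share(B)
U0, U1 = share(U); V0, V1 = share(V); W0, W1 = share(W)

# each party opens its shares of D = A - U and E = B - V (the only traffic)
D = add(sub(A0, U0), sub(A1, U1))
E = add(sub(B0, V0), sub(B1, V1))

# A@B = D@E + D@V + U@E + W, computed locally on shares (D@E added once)
C0 = add(mul(D, E), add(mul(D, V0), add(mul(U0, E), W0)))
C1 = add(mul(D, V1), add(mul(U1, E), W1))
assert add(C0, C1) == mul(A, B)
print("shared matrix product verified")
```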
2020
ASIACRYPT
Finding Collisions in a Quantum World: Quantum Black-Box Separation of Collision-Resistance and One-Wayness 📺
Best Paper Award
Since the celebrated work of Impagliazzo and Rudich (STOC 1989), a number of black-box impossibility results have been established. However, these works only ruled out classical black-box reductions among cryptographic primitives. Therefore it may be possible to overcome these impossibility results by using quantum reductions. To exclude such a possibility, we have to extend these impossibility results to the quantum setting. In this paper, we study black-box impossibility in the quantum setting. We first formalize a quantum counterpart of fully-black-box reduction following the formalization by Reingold, Trevisan and Vadhan (TCC 2004). Then we prove that there is no quantum fully-black-box reduction from collision-resistant hash functions to one-way permutations (or even trapdoor permutations). We take both classical and quantum implementations of primitives into account. This is an extension to the quantum setting of the work of Simon (Eurocrypt 1998), who showed a similar result in the classical setting.
2020
ASIACRYPT
Estimating quantum speedups for lattice sieves 📺
Quantum variants of lattice sieve algorithms are routinely used to assess the security of lattice-based cryptographic constructions. In this work we provide a heuristic, non-asymptotic analysis of the cost of several algorithms for near neighbour search on high-dimensional spheres. These algorithms are key components of lattice sieves. We design quantum circuits for near neighbour search algorithms and provide software that numerically optimises algorithm parameters according to various cost metrics. Using this software we estimate the cost of classical and quantum near neighbour search on spheres. For the most performant near neighbour search algorithm that we analyse we find a small quantum speedup in dimensions of cryptanalytic interest. Achieving this speedup requires several optimistic physical and algorithmic assumptions.
2020
ASIACRYPT
B-SIDH: supersingular isogeny Diffie-Hellman using twisted torsion 📺
This paper explores a new way of instantiating isogeny-based cryptography in which parties can work in both the (p+1)-torsion of a set of supersingular curves and in the (p-1)-torsion corresponding to the set of their quadratic twists. Although the isomorphism between a given supersingular curve and its quadratic twist is not defined over GF(p^2) in general, restricting operations to the x-lines of both sets of twists allows all arithmetic to be carried out over GF(p^2) as usual. Furthermore, since supersingular twists always have the same GF(p^2)-rational j-invariant, the SIDH protocol remains unchanged when Alice and Bob are free to work in both sets of twists. This framework lifts the restrictions on the shapes of the underlying prime fields originally imposed by Jao and De Feo, and allows a range of new options for instantiating isogeny-based public key cryptography. These include alternatives that exploit Mersenne and Montgomery-friendly primes, as well as the possibility of halving the size of the primes in the Jao-De Feo construction at no known loss of asymptotic security. For a given target security level, the resulting public keys are smaller than the public keys of all of the key encapsulation schemes currently under consideration in the NIST post-quantum standardisation effort. The best known attacks against the instantiations proposed in this paper are the classical path-finding algorithm due to Delfs and Galbraith and its quantum adaptation due to Biasse, Jao and Sankar; these run in respective time O(p^(1/2)) and O(p^(1/4)), and are essentially memory-free. The upshot is that removing the big-O's and obtaining concrete security estimates is a matter of costing the circuits needed to implement the corresponding isogeny. In contrast to other post-quantum proposals, this makes the security analysis of B-SIDH rather straightforward. Searches for friendly parameters are used to find several primes that range from 237 to 256 bits, the conjectured security of which is comparable to the 434-bit prime used to target NIST level 1 security in the SIKE proposal. One noteworthy example is a 247-bit prime for which Alice's secret isogeny is 7901-smooth and Bob's secret isogeny is 7621-smooth.
2020
ASIACRYPT
Mind the Propagation of States: New Automatic Search Tool for Impossible Differentials and Impossible Polytopic Transitions 📺
Impossible differential cryptanalysis and impossible polytopic cryptanalysis are the most effective approaches for estimating the security of block ciphers. However, the previous automatic methods for searching their distinguishers, impossible differentials and impossible polytopic transitions, neither consider the impact of the key schedule in the single-key setting or the differential properties of large S-boxes, nor apply to block ciphers with variable rotations. Thus, unlike previous methods, which focus on the propagation of the difference or s-difference, we redefine impossible differentials and impossible (s+1)-polytopic transitions according to the propagation of state, which allows us to break through those limitations of the previous methods. Theoretically, we prove that traditional impossible differentials and impossible (s+1)-polytopic transitions are equivalent to part of our redefinitions, which have the advantage of a broader view. Technically, we renew the automatic search model and design an SAT-based tool to evaluate our redefined impossible differentials and impossible (s+1)-polytopic transitions efficiently. As a result, for GIFT64, we get 6-round impossible differentials which cannot be detected by any previous tool. For PRINTcipher, we propose the first modeling method for the key-dependent permutation and key-dependent S-box. For MISTY1, we derive 902 4-round impossible differentials by exploiting the differential property of S-boxes. For RC5, we present the first modeling method for the variable rotation and get 2.5-round impossible differentials for each version of it. More remarkably, our tool can be used to evaluate the security of a given cipher against impossible differentials, and we prove that there exist no 5-round impossible differentials with 1 input active word and 1 output active word for AES-128, even considering the relations of 3-round keys. Besides, we also get impossible (s+1)-polytopic transitions for PRINTcipher, GIFT64, PRESENT, and RC5, all of which can cover more rounds than their corresponding impossible differentials as far as we know.
2020
ASIACRYPT
Minimizing the Two-Round Tweakable Even-Mansour Cipher 📺
In CRYPTO 2015, Cogliati et al. proposed the one-round tweakable Even-Mansour (\textsf{1-TEM}) cipher, constructed from a single $n$-bit public permutation $\pi$ and a uniform and almost XOR-universal hash function \textsf{H} as $(k, t, x) \mapsto \textsf{H}_k(t) \oplus \pi(\textsf{H}_k(t) \oplus x)$, where $t$ is the tweak and $x$ is the $n$-bit message. The authors showed that its two-round extension, which we refer to as \textsf{2-TEM}, obtained by cascading $2$ independent instances of the construction, gives $2n/3$-bit security, and that $r$-round cascading gives $\frac{rn}{r+2}$-bit security. In ASIACRYPT 2015, Cogliati and Seurin showed that the four-round tweakable Even-Mansour cipher, which we refer to as \textsf{4-TEM}, constructed from four independent $n$-bit permutations $\pi_1, \pi_2, \pi_3, \pi_4$ and two independent $n$-bit keys $k_1, k_2$, defined as $$k_1 \oplus t \oplus \pi_4(k_2 \oplus t \oplus \pi_3(k_1 \oplus t \oplus \pi_2(k_2 \oplus t \oplus \pi_1(k_1 \oplus t \oplus x)))), $$ is secure up to $2^{2n/3}$ adversarial queries. In this paper, we show that if we replace the two independent permutations of \textsf{2-TEM} (Cogliati et al., CRYPTO 2015) with a single $n$-bit public permutation, the resulting construction still guarantees security up to $2^{2n/3}$ adversarial queries. Using the results derived therein, we also show that replacing the permutations $(\pi_4, \pi_3)$ with $(\pi_1, \pi_2)$ in the above equation preserves security up to $2^{2n/3}$ adversarial queries.
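A toy sketch of the construction shape studied here (8-bit block, $\pi$ a fixed random permutation, and $\textsf{H}_k(t) = k \cdot t$ in GF(2^8) as an illustrative XOR-universal hash; real instantiations need larger parameters and a stronger hash): both rounds reuse the same public permutation.

```python
# Toy 8-bit sketch: two-round tweakable Even-Mansour with a SINGLE public
# permutation pi (the construction shape analyzed in this paper). H_k(t)
# is instantiated as k*t in GF(2^8), an illustrative XOR-universal hash.
import random

def gf_mul(a, b, poly=0x11B):            # multiplication in GF(2^8)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

random.seed(1)
PI = list(range(256))
random.shuffle(PI)                       # a fixed public 8-bit permutation
PI_INV = [0] * 256
for i, p in enumerate(PI):
    PI_INV[p] = i

def tem2_enc(k1, k2, t, x):
    h1, h2 = gf_mul(k1, t), gf_mul(k2, t)
    w = PI[h1 ^ x]                       # round 1 uses pi ...
    return h2 ^ PI[h1 ^ h2 ^ w]          # ... and round 2 reuses the SAME pi

def tem2_dec(k1, k2, t, y):
    h1, h2 = gf_mul(k1, t), gf_mul(k2, t)
    w = PI_INV[h2 ^ y] ^ h2 ^ h1
    return PI_INV[w] ^ h1

k1, k2, t, x = 0x3A, 0xC4, 0x5E, 0x7F
y = tem2_enc(k1, k2, t, x)
assert tem2_dec(k1, k2, t, y) == x
print(f"2-TEM with one permutation: enc(0x{x:02X}) = 0x{y:02X}")
```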
2020
ASIACRYPT
Determining the Core Primitive for Optimally Secure Ratcheting 📺
After ratcheting attracted attention mostly due to practical real-world protocols, recently a line of work studied ratcheting as a primitive from a theoretic point of view. Literature in this line, pursuing the strongest security of ratcheting one can hope for, utilized strong yet inefficient key-updatable primitives based on hierarchical identity-based encryption (HIBE) for its constructions. As none of these works formally justified utilizing these building blocks, we answer the still-open question of the conditions under which their use is actually necessary. We revisit these strong notions of ratcheted key exchange (RKE), and propose a more realistic (slightly stronger) security definition. In this security definition, both exposure of participants' local secrets and attacks against executions' randomness are considered. While these two attacks were partially considered in previous work, we are the first to unify them cleanly in a natural game-based notion. Our definitions are based on the systematic RKE notion by Poettering and Rösler (CRYPTO 2018). Due to slight (but meaningful) changes to regard attacks against randomness, we are ultimately able to show that, in order to fulfill strong security for RKE, public key cryptography with (independently) updatable key pairs is a necessary building block. Surprisingly, this implication already holds for the simplest RKE variant. Hence, (1) we model optimally secure RKE under randomness manipulation to cover realistic attacks, (2) we (provably) extract the core primitive that is necessary to realize strongly secure RKE, and (3) our results indicate which relaxations in security allow for constructions that only rely on standard public key cryptography.
2020
ASIACRYPT
Beyond Birthday Bound Secure Fresh Rekeying: Application to Authenticated Encryption 📺
Fresh rekeying is a well-established method to protect a primitive or mode against side-channel attacks: an easy-to-protect but cryptographically not-so-involved function generates a subkey from the master key, and this subkey is then used for the block encryption of a single message or a few messages. It is an efficient way to achieve side-channel protection, but current solutions only achieve birthday-bound security in the block size of the cipher and thus halve its security (unless more involved primitives are employed). We present generalized solutions to parallel block cipher rekeying that, for the first time, achieve security beyond the birthday bound in the block size $n$. The first solution involves, next to the subkey generation, one multiplication and the core block cipher call, and achieves $2^{2n/3}$ security. The second solution makes two block cipher calls and achieves optimal $2^n$ security. Our third solution uses a slightly larger subkey generation function but requires no adaptations to the core encryption and also achieves optimal security. The construction seamlessly generalizes to permutation-based fresh rekeying. Central to our schemes is the observation that fresh rekeying and generic tweakable block cipher design are two closely related topics, and we can take lessons from the advanced results in the latter to improve our understanding and development of the former. We subsequently use these rekeying schemes in a constructive manner to deliver three authenticated encryption modes that achieve beyond-birthday-bound security and are easy to protect against side-channel attacks.
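The generic shape described in the opening sentence looks as follows (a minimal sketch with hash-based stand-ins: the "core cipher" is a toy 4-round Feistel and the subkey derivation is SHA-256 based; these are illustrative, not the paper's beyond-birthday-bound schemes):

```python
# Sketch of the generic fresh-rekeying shape: a subkey k* = g(k, r) is
# derived from the master key k and a fresh nonce r, and only the
# (unprotected) core cipher is keyed with k*. The toy Feistel "cipher"
# and hash-based g below are illustrative stand-ins only.
import hashlib, secrets

def prf(key, data):
    return hashlib.sha256(key + data).digest()

def subkey_gen(master_key, nonce):
    """g(k, r): the easy-to-protect subkey derivation."""
    return prf(master_key, b"rekey" + nonce)

def feistel_enc(subkey, block16):
    """Toy 16-byte block cipher: 4-round Feistel keyed by the subkey."""
    L, R = block16[:8], block16[8:]
    for i in range(4):
        f = prf(subkey, bytes([i]) + R)[:8]
        L, R = R, bytes(a ^ b for a, b in zip(L, f))
    return L + R

def feistel_dec(subkey, block16):
    L, R = block16[:8], block16[8:]
    for i in reversed(range(4)):
        f = prf(subkey, bytes([i]) + L)[:8]
        L, R = bytes(a ^ b for a, b in zip(R, f)), L
    return L + R

def rekeyed_encrypt(master_key, message16):
    nonce = secrets.token_bytes(16)               # fresh per message
    return nonce, feistel_enc(subkey_gen(master_key, nonce), message16)

def rekeyed_decrypt(master_key, nonce, ct16):
    return feistel_dec(subkey_gen(master_key, nonce), ct16)

k = secrets.token_bytes(16)
m = b"16-byte message!"
nonce, c = rekeyed_encrypt(k, m)
assert rekeyed_decrypt(k, nonce, c) == m
print("fresh-rekeyed encryption round-trips")
```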
2020
ASIACRYPT
The Direction of Updatable Encryption does not Matter Much 📺
Updatable encryption schemes allow for key rotation on ciphertexts. A client outsourcing storage of encrypted data to a cloud server can change its encryption key, and the cloud server can update the stored ciphertexts to the new key using only a token provided by the client. This paper addresses two open problems in updatable encryption: uni-directional vs. bi-directional updates, and post-quantum security. The main result of this paper is an analysis of the security notions based on uni- and bi-directional updates. Surprisingly, we prove that the uni- and bi-directional variants of each security notion are equivalent. The second result is a new and efficient updatable encryption scheme based on the Decisional Learning with Errors assumption, which gives us post-quantum security. Our scheme is bi-directional, but because of our main result, this is sufficient.
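To see what bi-directionality means concretely, here is an ElGamal-style sketch in a toy group (illustrative only; the paper's scheme is lattice-based, and real schemes also re-randomize ciphertexts on update): the update token for moving a ciphertext from key k to k' is the difference k' − k, and the same token, negated, moves ciphertexts back.

```python
# ElGamal-style sketch of a bi-directional update token (toy group only).
# Ciphertexts are (g^r, m * g^(r*k)); the token k' - k rotates them
# forward, and its negation rotates them back.
import secrets

P = 2**127 - 1                           # toy prime modulus (Mersenne prime)
G = 3
Q = P - 1                                # exponent arithmetic modulo P - 1

def enc(k, m):
    r = secrets.randbelow(Q)
    return pow(G, r, P), (m * pow(G, r * k % Q, P)) % P

def dec(k, ct):
    c1, c2 = ct
    return (c2 * pow(c1, Q - k % Q, P)) % P    # c2 * c1^(-k)

def token(k_old, k_new):
    return (k_new - k_old) % Q

def update(delta, ct):
    c1, c2 = ct
    return c1, (c2 * pow(c1, delta, P)) % P    # now decrypts under k + delta

k_old, k_new = secrets.randbelow(Q), secrets.randbelow(Q)
msg = 1234567890
delta = token(k_old, k_new)

ct = enc(k_old, msg)
ct_new = update(delta, ct)
assert dec(k_new, ct_new) == msg               # token rotates old -> new ...
ct_back = update((Q - delta) % Q, ct_new)
assert dec(k_old, ct_back) == msg              # ... and, negated, new -> old
print("the update token works in both directions")
```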
2020
ASIACRYPT
Incrementally Aggregatable Vector Commitments and Applications to Verifiable Decentralized Storage 📺
Vector commitments with subvector openings (SVC) [Lai-Malavolta, Boneh-Bunz-Fisch; CRYPTO'19] allow one to open a committed vector at a set of positions with an opening of size independent of both the vector's length and the number of opened positions. We continue the study of SVC with two goals in mind: improving their efficiency and making them more suitable for decentralized settings. We address both problems by proposing a new notion for VC that we call \emph{incremental aggregation}, which allows one to merge openings in a succinct way an \emph{unbounded} number of times. We show two applications of this property. The first one is immediate and is a method to generate openings in a distributed way. The second application is an algorithm for faster generation of openings via preprocessing. We then proceed to realize SVC with incremental aggregation. We provide two constructions in groups of unknown order that, similarly to that of Boneh et al. (which supports aggregating only once), have constant-size public parameters, commitments and openings. As an additional feature, for the first construction we propose efficient arguments of knowledge of subvector openings, which immediately yields a keyless proof of storage with compact proofs. Finally, we address a problem closely related to that of SVC: storing a file efficiently in completely decentralized networks. We introduce and construct \emph{verifiable decentralized storage} (VDS), a cryptographic primitive that allows one to check the integrity of a file stored by a network of nodes in a distributed and decentralized way. Our VDS constructions rely on our new vector commitment techniques.
2020
ASIACRYPT
Lattice-Based E-Cash, Revisited 📺
Electronic cash (e-cash) was introduced 40 years ago as the digital analogue of traditional cash. It allows users to withdraw electronic coins that can be spent anonymously with merchants. As advocated by Camenisch et al. (Eurocrypt 2005), it should be possible to store the withdrawn coins compactly (i.e., with logarithmic cost in the total number of coins), which has led to the notion of compact e-cash. Many solutions were proposed for this problem, but the security proofs of most of them were invalidated by a very recent paper by Bourse et al. (Asiacrypt 2019). The same paper describes a generic way of fixing existing constructions/proofs, but concrete instantiations of this patch are currently unknown in some settings. In particular, compact e-cash is no longer known to exist under quantum-safe assumptions. In this work, we resolve this problem by proposing the first secure compact e-cash system based on lattices following the result from Bourse et al. Contrary to the latter work, our construction is not only generic: we also describe two concrete instantiations. We depart from previous frameworks of e-cash systems by leveraging lossy trapdoor functions to construct our coins. The indistinguishability of lossy and injective keys allows us to avoid the very strong requirements on the involved pseudo-random functions that were necessary to instantiate the generic patch proposed by Bourse et al.
2020
ASIACRYPT
Privacy-Preserving Pattern Matching on Encrypted Data 📺
Pattern matching is one of the most fundamental and important paradigms in several application domains such as digital forensics, cyber threat intelligence, or genomic and medical data analysis. While it is a straightforward operation when performed on plaintext data, it becomes a challenging task when the privacy of both the analyzed data and the analysis patterns must be preserved. In this paper, we propose new provably correct, secure, and relatively efficient (compared to similar existing schemes) public- and private-key based constructions that allow arbitrary pattern matching over encrypted data while protecting both the data to be analyzed and the patterns to be matched. That is, except for the pattern provider (resp. the data owner), all other involved parties in the proposed constructions learn nothing about the patterns to be searched (resp. the data to be inspected). Compared to existing solutions, the constructions we propose have several interesting properties: (1) the size of the ciphertext is linear in the size of the plaintext and independent of the sizes and the number of the analysis patterns; (2) the sizes of the issued trapdoors are constant in the size of the data to be analyzed; and (3) the search complexity is linear in the size of the data to be inspected and constant in the sizes of the analysis patterns. The conducted evaluations show that our constructions drastically improve the performance of the most efficient state-of-the-art solution.
2020
ASIACRYPT
SiGamal: A supersingular isogeny-based PKE and its application to a PRF 📺
We propose two new supersingular isogeny-based public key encryptions: SiGamal and C-SiGamal. They were developed by giving an additional point of order $2^r$ to CSIDH. SiGamal is similar to ElGamal encryption, while C-SiGamal is a compressed version of SiGamal. We prove that SiGamal and C-SiGamal are IND-CPA secure without using hash functions, under a new assumption: the P-CSSDDH assumption. This assumption comes from the expectation that no efficient algorithm can distinguish between a random point and a point that is the image of a public point under a hidden isogeny. Next, we propose a Naor-Reingold type pseudorandom function (PRF) based on SiGamal. If the P-CSSDDH assumption and the CSSDDH$^*$ assumption, which guarantees the security of CSIDH that uses a prime $p$ in the setting of SiGamal, hold, then our proposed function is a pseudorandom function. Moreover, we estimate that the computational cost of the group actions needed to compute our proposed PRF is about $\sqrt{\frac{8T}{3\pi}}$ times that of the group actions in CSIDH, where $T$ is the Hamming weight of the input of the PRF. Finally, we experimented with group actions in SiGamal and C-SiGamal. The computational costs of group actions in SiGamal-512 with a $256$-bit plaintext message space were about $2.62$ times that of a group action in CSIDH-512.
2020
ASIACRYPT
Improving Speed and Security in Updatable Encryption Schemes 📺
Periodic key rotation is a common practice designed to limit the long-term power of cryptographic keys. Key rotation refers to the process of re-encrypting encrypted content under a fresh key, and overwriting the old ciphertext with the new one. When encrypted data is stored in the cloud, key rotation can be very costly: it may require downloading the entire encrypted content from the cloud, re-encrypting it on the client's machine, and uploading the new ciphertext back to the cloud. An updatable encryption scheme is a symmetric-key encryption scheme designed to support efficient key rotation in the cloud. The data owner sends a short update token to the cloud. This update token lets the cloud rotate the ciphertext from the old key to the new key, without learning any information about the plaintext. Recent work on updatable encryption has led to several security definitions and proposed constructions. However, existing constructions are not yet efficient enough for practical adoption, and the existing security definitions can be strengthened. In this work we make three contributions. First, we introduce stronger security definitions for updatable encryption (in the ciphertext-dependent setting) that capture desirable security properties not covered in prior work. Second, we construct two new updatable encryption schemes. The first construction relies only on symmetric cryptographic primitives, but only supports a bounded number of key rotations. The second construction supports a (nearly) unbounded number of updates, and is built from the Ring Learning with Errors (RLWE) assumption. Due to the complexities of using RLWE, this scheme achieves a slightly weaker notion of integrity compared to the first. Finally, we implement both constructions and compare their performance to prior work. Our RLWE-based construction is 200x faster than a prior proposal for an updatable encryption scheme based on the hardness of elliptic curve DDH. Our first construction, based entirely on symmetric primitives, has the highest encryption throughput, approaching the performance of AES, and the highest decryption throughput on ciphertexts that were re-encrypted fewer than fifty times. For ciphertexts re-encrypted over fifty times, the RLWE construction dominates it in decryption speed.
2020
ASIACRYPT
Non-Committing Encryption with Constant Ciphertext Expansion from Standard Assumptions 📺
Non-committing encryption (NCE), introduced by Canetti et al. (STOC '96), is a central tool to achieve multi-party computation protocols secure in the adaptive setting. Recently, Yoshida et al. (ASIACRYPT '19) proposed an NCE scheme based on the hardness of the DDH problem, which has ciphertext expansion $\mathcal{O}(\log\lambda)$ and public-key expansion $\mathcal{O}(\lambda^2)$. In this work, we improve their result and propose a methodology to construct an NCE scheme that achieves \emph{constant} ciphertext expansion. Our methodology can be instantiated from the DDH assumption and the LWE assumption. When instantiated from the LWE assumption, the public-key expansion is $\lambda\cdot\mathsf{poly}(\log\lambda)$. These are the first NCE schemes satisfying constant ciphertext expansion without using iO or common reference strings. Along the way, we define a weak notion of NCE, which satisfies only weak forms of correctness and security. We show how to amplify such a weak NCE scheme into a full-fledged one using wiretap codes with a new security property.
2020
ASIACRYPT
Efficient Homomorphic Comparison Methods with Optimal Complexity 📺
Comparison of two numbers is one of the most frequently used operations, but it has been a challenging task to compute the comparison function efficiently in homomorphic encryption (HE), which basically supports addition and multiplication. Recently, Cheon et al. (Asiacrypt 2019) introduced a new approximate representation of the comparison function with a rational function, and showed that this rational function can be evaluated by an iterative algorithm. Due to this iterative feature, their method achieves a logarithmic computational complexity compared to previous polynomial approximation methods; however, the computational complexity is still not optimal, and the algorithm is quite slow for large-bit inputs in HE implementations. In this work, we propose new comparison methods with \emph{optimal} asymptotic complexity based on \emph{composite polynomial} approximation. The main idea is to systematically design a constant-degree polynomial $f$ by identifying the \emph{core properties} that make a composite polynomial $f\circ f \circ \cdots \circ f$ get close to the sign function (equivalent to the comparison function) as the number of compositions increases. We additionally introduce an acceleration method applying a mixed polynomial composition $f\circ \cdots \circ f\circ g \circ \cdots \circ g$ for some other polynomial $g$ with different properties, instead of $f\circ f \circ \cdots \circ f$. Utilizing the devised polynomials $f$ and $g$, our new comparison algorithms only require $\Theta(\log(1/\epsilon)) + \Theta(\log\alpha)$ computational complexity to obtain an approximate comparison result of $a,b\in[0,1]$ satisfying $|a-b|\ge \epsilon$ within $2^{-\alpha}$ error. The asymptotic optimality results in substantial performance enhancement: our comparison algorithm on $16$-bit encrypted integers for $\alpha = 16$ takes $1.22$ milliseconds in amortized running time based on the approximate HE scheme HEAAN, which is $18$ times faster than the previous work.
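A plaintext numerical sketch of the composite-polynomial idea (no HE involved; $f(x) = (3x - x^3)/2$ is an illustrative degree-3 choice with the stated core properties, not necessarily the paper's optimized polynomial): iterating $f$ pushes every nonzero input toward $\pm 1$, so the composition approximates the sign function, and hence comparison.

```python
# Plaintext numerical sketch (no HE): iterating a fixed low-degree
# polynomial that maps [-1,1] into itself and pushes nonzero values toward
# +-1 approximates sign(x) after Theta(log(1/eps)) compositions.

def f(x):
    return (3 * x - x ** 3) / 2          # f(+-1) = +-1 and f'(0) = 1.5 > 1

def approx_sign(x, compositions):
    for _ in range(compositions):
        x = f(x)                         # evaluate f o f o ... o f
    return x

def approx_compare(a, b, compositions=20):
    """Roughly 1 if a > b and 0 if a < b, for a, b in [0, 1]."""
    return (approx_sign(a - b, compositions) + 1) / 2

print(approx_compare(0.70, 0.69))        # ~1.0: a > b, gap only 0.01
print(approx_compare(0.30, 0.31))        # ~0.0: a < b
for k in (5, 10, 20):                    # convergence toward sign(0.01) = 1
    print(k, "compositions:", approx_sign(0.01, k))
```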
2020
ASIACRYPT
Fuzzy Asymmetric Password-Authenticated Key Exchange 📺
Password-Authenticated Key Exchange (PAKE) lets users with passwords exchange a cryptographic key. There have been two variants of PAKE which make it more applicable to real-world scenarios: * Asymmetric PAKE (aPAKE), which aims at protecting a client's password even if the authentication server is untrusted, and * Fuzzy PAKE (fPAKE), which enables key agreement even if passwords of users are noisy, but "close enough". Supporting fuzzy password matches eases the use of higher-entropy passwords and enables using biometrics and environmental readings (both of which are naturally noisy). Until now, both variants of PAKE have been considered only in separation. In this paper, we consider both of them simultaneously. We introduce the notion of Fuzzy Asymmetric PAKE (fuzzy aPAKE), which protects against untrusted servers and supports noisy passwords. We formulate our new notion in the Universal Composability framework of Canetti (FOCS'01), which is the preferred model for password-based primitives. We then show that fuzzy aPAKE can be obtained from oblivious transfer and some variant of robust secret sharing (Cramer et al., EC'15). We achieve security against malicious parties while avoiding expensive tools such as non-interactive zero-knowledge proofs. Our construction is round-optimal, with message and password file sizes that are independent of the scheme's error tolerance.
2020
ASIACRYPT
Public-Key Generation with Verifiable Randomness 📺
We revisit the problem of proving that a user algorithm selected and correctly used a truly random seed in the generation of her cryptographic key. A first approach was proposed in 2002 by Juels and Guajardo for the validation of RSA secret keys. We present a new security model and general tools to efficiently prove that a private key was generated at random according to a prescribed process, without revealing any further information about the private key. We give a generic protocol for all key-generation algorithms based on probabilistic circuits and prove its security. We also propose a new protocol for factoring-based cryptography that we prove secure in the aforementioned model. The latter relies on a new efficient zero-knowledge argument for the double discrete logarithm problem that achieves an exponential improvement in communication complexity compared to the state of the art, and is of independent interest.
2020
ASIACRYPT
Side Channel Information Set Decoding using Iterative Chunking 📺
This paper presents an attack based on side-channel information and information set decoding (ISD) on the code-based Niederreiter cryptosystem and an evaluation of the practicality of the attack using an electromagnetic side channel. We start by directly adapting the timing side-channel plaintext-recovery attack by Shoufan et al. from 2010 to the constant-time implementation of the Niederreiter cryptosystem as used in the official FPGA-implementation of the NIST finalist “Classic McEliece”. We then enhance our attack using ISD and a new technique that we call iterative chunking to further significantly reduce the number of required side-channel measurements. We theoretically show that our attack improvements have a significant impact on reducing the number of required side-channel measurements. For example, for the 256-bit security parameter set kem/mceliece6960119 of “Classic McEliece”, we improve the basic attack that requires 5415 measurements to less than 562 measurements on average to mount a successful plaintext-recovery attack. Further reductions can be achieved at the price of increasing the cost of the ISD computations. We confirm our findings by practically mounting the attack on the official FPGA-implementation of “Classic McEliece” for all proposed parameter sets.
2020
ASIACRYPT
On the Exact Round Complexity of Best-of-both-Worlds Multi-party Computation 📺
The two traditional streams of multiparty computation (MPC) protocols consist of (a) protocols achieving guaranteed output delivery (GOD) or fairness in the honest-majority setting and (b) protocols achieving unanimous or selective abort in the dishonest-majority setting. The favorable presence of an honest majority amongst the participants is necessary to achieve the stronger notions of GOD or fairness. While constructions of each type abound in the literature, one class of protocols does not seem to withstand the threat model of the other. For instance, the honest-majority protocols do not guarantee privacy of the inputs of the honest parties in the face of a dishonest majority, and likewise the dishonest-majority protocols cannot achieve GOD and fairness while tolerating even a single corruption, let alone a dishonest minority. The promise of the unconventional yet much sought-after species of MPC, termed 'Best-of-Both-Worlds' (BoBW), is to offer the best possible security depending on the actual corruption scenario. This work nearly settles the exact round complexity of two classes of BoBW protocols, differing in the security achieved in the honest-majority setting, namely GOD and fairness respectively, under the assumption of no setup (plain model), public setup (CRS) and private setup (CRS + PKI or simply PKI). The former class necessarily requires the number of parties to be strictly more than the sum of the bounds of corruptions in the honest-majority and dishonest-majority settings for a feasible solution to exist. Demoting the goal to the second-best attainable security in the honest-majority setting, the latter class needs no such restriction. Assuming a network with pairwise private channels and a broadcast channel, we show that 5 and 3 rounds are necessary and sufficient for the class of BoBW MPC with fairness under the assumption of 'no setup' and 'public and private setup' respectively. For the class of BoBW MPC with GOD, we show necessity and sufficiency of 3 rounds for the public setup case and 2 rounds for the private setup case. In the no-setup setting, we show the sufficiency of 5 rounds, while the known lower bound is 4. All our upper bounds are based on polynomial-time assumptions and assume black-box simulation. With distinct feasibility conditions, the classes differ in terms of the round requirement. The bounds are in some cases different and, on a positive note, at most one more compared to the maximum of the needs of the honest-majority and dishonest-majority settings. Our results remain unaffected when security with abort and fairness are upgraded to their identifiable counterparts.
2020
ASIACRYPT
Improved Security Analysis for Nonce-based Enhanced Hash-then-Mask MACs 📺
In this paper, we prove that the nonce-based enhanced hash-then-mask MAC (nEHtM) is secure up to 2^{3n/4} MAC queries and 2^n verification queries (ignoring logarithmic factors) as long as the number of faulty queries \mu is below 2^{3n/8}, significantly improving the previous bound by Dutta et al. Even when \mu goes beyond 2^{3n/8}, nEHtM enjoys graceful degradation of security. The second result is to prove the security of PRF-based nEHtM: when nEHtM is based on an n-to-s bit random function for a fixed size s such that 1 <= s <= n, it is proved to be secure up to any number of MAC queries and 2^s verification queries, if (1) s = n and \mu < 2^{n/2}, or (2) n/2 < s < n and \mu < max{2^{s/2}, 2^{n-s}}, or (3) s <= n/2 and \mu < 2^{n/2}. This result leads to a security proof for truncated nEHtM, which returns only s bits of the original tag, since a truncated permutation can be seen as a pseudorandom function. In particular, when s <= 2n/3, the truncated nEHtM is secure up to 2^{n - s/2} MAC queries and 2^s verification queries as long as \mu < min{2^{n/2}, 2^{n-s}}. For example, when s = n/2 (resp. s = n/4), the truncated nEHtM is secure up to 2^{3n/4} (resp. 2^{7n/8}) MAC queries. So truncation might provide better provable security than the original nEHtM with respect to the number of MAC queries.
2020
ASIACRYPT
Simulation-Sound Arguments for LWE and Applications to KDM-CCA2 Security 📺
The Naor-Yung paradigm is a well-known technique that constructs IND-CCA2-secure encryption schemes by means of non-interactive zero-knowledge proofs satisfying a notion of simulation-soundness. Until recently, it was an open problem to instantiate it under the sole Learning-With-Errors (LWE) assumption without relying on random oracles. While the recent results of Canetti et al. (STOC'19) and Peikert-Shiehian (Crypto'19) provide a solution to this problem by applying the Fiat-Shamir transform in the standard model, the resulting constructions are extremely inefficient as they proceed via a reduction to an NP-complete problem. In this paper, we give a direct, non-generic method for instantiating Naor-Yung under the LWE assumption outside the random oracle model. Specifically, we give a direct construction of an unbounded simulation-sound NIZK argument system which, for carefully chosen parameters, makes it possible to express the equality of plaintexts encrypted under different keys in Regev's cryptosystem. We also give a variant of our argument that provides tight security. As an application, we obtain an LWE-based public-key encryption scheme for which we can prove (tight) key-dependent message security under chosen-ciphertext attacks in the standard model.
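For readers unfamiliar with the paradigm, the toy sketch below shows the Naor-Yung shape: encrypt the message under two independent ElGamal keys and attach a Fiat-Shamir proof that both ciphertexts hide the same plaintext. Everything here (the tiny group, the hash-based challenge, the helper names) is an illustrative assumption; in particular, full simulation-soundness, which the paper achieves directly from LWE, requires strictly more than this plain equality proof.

    import hashlib, secrets

    # Toy subgroup of order q = 11 inside Z_23^*; far too small to be secure.
    p, q, g = 23, 11, 4

    def H(*ints):                             # Fiat-Shamir challenge
        data = b"|".join(str(i).encode() for i in ints)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def keygen():
        x = secrets.randbelow(q - 1) + 1
        return x, pow(g, x, p)

    def encrypt_ny(h1, h2, m):
        # Encrypt m (a subgroup element) under both public keys.
        r1, r2 = (secrets.randbelow(q - 1) + 1 for _ in range(2))
        A1, B1 = pow(g, r1, p), m * pow(h1, r1, p) % p
        A2, B2 = pow(g, r2, p), m * pow(h2, r2, p) % p
        # Sigma-protocol proving A1 = g^r1, A2 = g^r2 and
        # B1/B2 = h1^r1 * h2^(-r2), i.e., equal plaintexts.
        s1, s2 = (secrets.randbelow(q) for _ in range(2))
        T1, T2 = pow(g, s1, p), pow(g, s2, p)
        T3 = pow(h1, s1, p) * pow(pow(h2, s2, p), p - 2, p) % p
        e = H(A1, B1, A2, B2, T1, T2, T3)
        z1, z2 = (s1 + e * r1) % q, (s2 + e * r2) % q
        return (A1, B1, A2, B2, (T1, T2, T3, z1, z2))

    def verify(h1, h2, ct):
        A1, B1, A2, B2, (T1, T2, T3, z1, z2) = ct
        e = H(A1, B1, A2, B2, T1, T2, T3)
        ratio = B1 * pow(B2, p - 2, p) % p
        ok1 = pow(g, z1, p) == T1 * pow(A1, e, p) % p
        ok2 = pow(g, z2, p) == T2 * pow(A2, e, p) % p
        lhs = pow(h1, z1, p) * pow(pow(h2, z2, p), p - 2, p) % p
        return ok1 and ok2 and lhs == T3 * pow(ratio, e, p) % p

    def decrypt(x1, ct, h1, h2):
        assert verify(h1, h2, ct)             # reject if the proof fails
        A1, B1 = ct[0], ct[1]
        return B1 * pow(pow(A1, x1, p), p - 2, p) % p

    x1, h1 = keygen(); x2, h2 = keygen()
    m = pow(g, 7, p)                          # encode the message in the subgroup
    assert decrypt(x1, encrypt_ny(h1, h2, m), h1, h2) == m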
2020
ASIACRYPT
MPC with Synchronous Security and Asynchronous Responsiveness 📺
Two paradigms for secure MPC are synchronous and asynchronous protocols. While synchronous protocols tolerate more corruptions and allow every party to give its input, they are very slow because their speed depends on the conservatively assumed worst-case delay $\Delta$ of the network. In contrast, asynchronous protocols allow parties to obtain output as fast as the actual network allows, a property called \emph{responsiveness}, but unavoidably have lower resilience, and parties with slow network connections cannot give input. It is natural to wonder whether it is possible to leverage synchronous MPC protocols to achieve responsiveness, hence obtaining the advantages of both paradigms: full security with responsiveness up to t corruptions, and 'extended' security (full security or security with unanimous abort) with no responsiveness up to a larger threshold T of corruptions. We settle the question by providing matching feasibility and impossibility results:
- For the case of unanimous abort as extended security, there is an MPC protocol if and only if T + 2t < n.
- For the case of full security as extended security, there is an MPC protocol if and only if T < n/2 and T + 2t < n.
In particular, setting t = n/4 makes it possible to achieve fully secure MPC for honest majority, which in addition benefits from substantial responsiveness.
2020
ASIACRYPT
A New Decryption Failure Attack against HQC 📺
HQC is an IND-CCA2 KEM competing for standardization in NIST's post-quantum cryptography project, where it has advanced to the second round. It is a code-based scheme in the class of public-key encryptions, with given sets of parameters spanning NIST security strengths 1, 3 and 5, corresponding to 128, 192 and 256 bits of classical security. In this paper we present an attack recovering the secret key of an HQC instance named hqc-256-1. The attack requires a single precomputation performed once and then never again. The online attack on an HQC instance then submits about $2^{64}$ special ciphertexts for decryption (obtained from the precomputation), and a phase of analysis studies the subset of ciphertexts that are not correctly decrypted. In this phase, the secret key of the HQC instance is determined. The overall complexity is estimated to be \(2^{246}\) if the attacker balances the costs of precomputation and post-processing, thereby claiming a successful attack on hqc-256-1 in the NIST setting. If we allow the precomputation cost to be $2^{254}$, which is below exhaustive key search on a 256-bit secret key, the computational complexity of the later parts can be no more than $2^{64}$. This is a setting relevant to practical security since the large precomputation needs to be done only once. Also, we note that the complexity of the precomputation can be lower if the online attack is allowed to submit more than $2^{64}$ ciphertexts for decryption.
2020
ASIACRYPT
Possibility and Impossibility Results for Receiver Selective Opening Secure PKE in the Multi-Challenge Setting 📺
Public key encryption (PKE) schemes are usually deployed in an open system with numerous users. In practice, it is common that some users are corrupted. A PKE scheme is said to be receiver selective opening (RSO) secure if it can still protect messages transmitted to uncorrupted receivers after the adversary corrupts some receivers and learns their secret keys. This is usually defined by requiring the existence of a simulator that can simulate the view of the adversary given only the opened messages. Existing works construct RSO secure PKE schemes in a single-challenge setting, where the adversary can only obtain one challenge ciphertext for each public key. However, in practice, it is preferable to have a PKE scheme with RSO security in the multi-challenge setting, where public keys can be used to encrypt multiple messages. In this work, we explore the possibility for achieving PKE schemes with receiver selective opening security in the multi-challenge setting. Our contributions are threefold. First, we demonstrate that PKE schemes with RSO security in the single-challenge setting are not necessarily RSO secure in the multi-challenge setting. Then, we show that it is impossible to achieve RSO security for PKE schemes if the number of challenge ciphertexts under each public key is a priori unbounded. In particular, we prove that no PKE scheme can be RSO secure in the $k$-challenge setting (i.e., the adversary can obtain $k$ challenge ciphertexts for each public key) if its secret key contains less than $k$ bits. On the positive side, we give a concrete construction of PKE scheme with RSO security in the $k$-challenge setting, where the ratio of the secret key length to $k$ approaches the lower bound 1.
2020
ASIACRYPT
Cryptography from One-Way Communication: On Completeness of Finite Channels 📺
Garg et al. (Crypto 2015) initiated the study of cryptographic protocols over noisy channels in the non-interactive setting, namely when only one party speaks. A major question left open by this work is the completeness of {\em finite} channels, whose input and output alphabets do not grow with the desired level of security. In this work, we address this question by obtaining the following results: Completeness of Bit-ROT with Inverse Polynomial Error: We show that bit-ROT (i.e., Randomized Oblivious Transfer channel, where each of the two messages is a single bit) can be used to realize general randomized functionalities with inverse polynomial error. Towards this, we provide a construction of string-ROT from bit-ROT with inverse polynomial error. No Finite Channel is Complete with Negligible Error: To complement the above, we show that {\it no} finite channel can be used to realize string-ROT with negligible error, implying that the inverse polynomial error in the completeness of bit-ROT is inherent. This holds even with semi-honest parties and for computational security, and is contrasted with the (negligible-error) completeness of string-ROT shown by Garg et al. Characterization of Finite Channels Enabling Zero-Knowledge Proofs: An important instance of secure computation is zero-knowledge proofs. Noisy channels can potentially be used to realize truly non-interactive zero-knowledge proofs, without trusted common randomness, and with non-transferability and deniability features that cannot be realized in the plain model. Garg et al. obtain such zero-knowledge proofs from the binary erasure channel (BEC) and the binary symmetric channel (BSC). We complete the picture by showing that in fact any non-trivial channel suffices.
2020
ASIACRYPT
Tight Security Analysis of 3-Round Key-Alternating Cipher with A Single Permutation 📺
The tight security bound of the KAC (Key-Alternating Cipher) construction whose round permutations are independent from each other has been well studied. A natural question is how the security bound changes when we use fewer permutations in a KAC construction. In CRYPTO 2014, Chen et al. proved that 2-round KAC with a single permutation (2KACSP) has the same security level as the classic one (i.e., 2-round KAC). But we still know little about the security bound of incompletely-independent KAC constructions with more than 2 rounds. In this paper, we show that a similar result also holds for the 3-round case. More concretely, we prove that 3-round KAC with a single permutation (3KACSP) is secure up to $\varTheta(2^{\frac{3n}{4}})$ queries, which also caps the security of 3-round KAC. To avoid the cumbersome graphical illustration used in Chen et al.'s work, a new representation is introduced to characterize the underlying combinatorial problem. Benefiting from it, we can handle the knotty dependence in a modular way, and also show a plausible way to study the security of $r$KACSP. Technically, we abstract a type of problem capturing the intrinsic randomness of the $r$KACSP construction, and then propose a high-level framework to handle such problems. Furthermore, our proof techniques show some evidence that for any $r$, $r$KACSP has the same security level as the classic $r$-round KAC in the random permutation model.
2020
ASIACRYPT
Quantum Collision Attacks on AES-like Hashing with Low Quantum Random Access Memories 📺
At EUROCRYPT 2020, Hosoyamada and Sasaki proposed the first dedicated quantum attack on hash functions -- a quantum version of the rebound attack exploiting differentials whose probabilities are too low to be useful in the classical setting. This work opens up a new perspective toward the security of hash functions against quantum attacks. In particular, it tells us that the search for differentials should not stop at the classical birthday bound. Despite these interesting and promising implications, the concrete attacks described by Hosoyamada and Sasaki make use of large quantum random access memories (qRAMs), a resource whose availability in the foreseeable future is controversial even in the quantum computation community. Without large qRAMs, these attacks incur significant increases in time complexities. In this work, we reduce or even avoid the use of qRAMs by performing a quantum rebound attack based on differentials with non-full-active super S-boxes. Along the way, an MILP-based method is proposed to systematically explore the search space of useful truncated differentials with respect to rebound attacks. As a result, we obtain improved attacks on AES-MMO, AES-MP, and the first classical collision attacks on 4- and 5-round Grøstl-512. Interestingly, the use of non-full-active super S-box differentials in the analysis of AES-MMO gives rise to new difficulties in collecting enough starting points. To overcome this issue, we consider attacks involving two message blocks to gain more degrees of freedom, and we successfully compress the qRAM demand of the collision attacks on AES-MMO and AES-MP (EUROCRYPT 2020) from $2^{48}$ to a range from $2^{16}$ to $0$, while still maintaining a comparable time complexity. To the best of our knowledge, these are the first dedicated quantum attacks on hash functions that slightly outperform Chailloux, Naya-Plasencia, and Schrottenloher's generic quantum collision attack (ASIACRYPT 2017) in a model where large qRAMs are not available. This work demonstrates again how a clever combination of classical cryptanalytic techniques and quantum computation leads to improved attacks, and shows that the direction pointed out by Hosoyamada and Sasaki deserves further investigation.
2020
ASIACRYPT
An Algebraic Formulation of the Division Property: Revisiting Degree Evaluations, Cube Attacks, and Key-Independent Sums 📺
Since it was proposed in 2015 as a generalization of integral properties, the division property has evolved into a powerful tool for probing the structures of Boolean functions whose algebraic normal forms are not available. We capture the most essential elements for the detection of division properties from a pure algebraic perspective, proposing a technique named \emph{monomial prediction}, which can be employed to determine the presence or absence of a monomial in the product of the coordinate functions of a vectorial Boolean function $\boldsymbol{f}$ by counting the number of so-called \emph{monomial trails} across a sequence of simpler functions whose composition is $\boldsymbol{f}$. Under the framework of monomial prediction, we formally prove that most algorithms for detecting division properties in the previous literature raise no false alarms but may miss some properties. We also establish the equivalence between monomial prediction and the three-subset bit-based division property without unknown subset presented at EUROCRYPT 2020, and show that these two techniques are perfectly accurate. This algebraic formulation gives more insights into division properties and inspires new search strategies. With monomial prediction, we obtain the \emph{exact} algebraic degrees of Trivium up to 834 rounds for the first time. In the context of cube attacks, we are able to explore a larger search space in limited time and recover the exact algebraic normal forms of complex superpolies with the help of a divide-and-conquer strategy. As a result, we identify more cubes with smaller dimensions, leading to improvements of some near-optimal attacks against 840-, 841- and 842-round Trivium.
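As background for the monomial-prediction viewpoint, the sketch below (a brute-force baseline, not the paper's trail-counting technique) recovers the algebraic normal form of a small Boolean function with the standard Möbius transform and checks whether a given monomial is present:

    # Brute-force ANF via the Moebius transform; bit i of index j is the value of x_i.
    def anf(truth_table, n):
        a = list(truth_table)
        for i in range(n):
            step = 1 << i
            for j in range(1 << n):
                if j & step:
                    a[j] ^= a[j ^ step]
        return a  # a[u] = coefficient of the monomial prod_{i : bit i of u} x_i

    # Example: f(x0, x1, x2) = x0*x1 XOR x2 on 3 variables.
    n = 3
    tt = [((j & 1) & (j >> 1 & 1)) ^ (j >> 2 & 1) for j in range(1 << n)]
    coeffs = anf(tt, n)
    assert coeffs[0b011] == 1 and coeffs[0b100] == 1   # x0*x1 and x2 present
    assert sum(coeffs) == 2                            # no other monomials
    # Monomial prediction answers exactly this presence/absence question for
    # functions whose truth tables are far too large to enumerate.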
2020
ASIACRYPT
CCA-Secure (Puncturable) KEMs from Encryption With Non-Negligible Decryption Errors 📺
Public-key encryption (PKE) schemes or key-encapsulation mechanisms (KEMs) are fundamental cryptographic building blocks to realize secure communication protocols. There are several known transformations that generically turn weakly secure schemes into strongly (i.e., IND-CCA) secure ones. While most of these transformations require the weakly secure scheme to provide perfect correctness, Hofheinz, Hövelmanns, and Kiltz (HHK) (TCC 2017) have recently shown that variants of the Fujisaki-Okamoto (FO) transform can work with schemes that have negligible correctness error in the (quantum) random oracle model (QROM). Many recent schemes in the NIST post-quantum competition (PQC) use variants of these transformations. Some of their CPA-secure versions even have a non-negligible correctness error and so the techniques of HHK cannot be applied. In this work, we study the setting of generically transforming PKE schemes with potentially large, i.e., non-negligible, correctness error to ones having negligible correctness error. While there have been previous treatments in an asymptotic setting by Dwork et al. (EUROCRYPT 2004), our goal is to come up with practically efficient compilers in a concrete setting and apply them in two different contexts: firstly, we show how to generically transform weakly secure deterministic or randomized PKEs into CCA-secure KEMs in the (Q)ROM using variants of HHK. This applies to essentially all candidates to the NIST PQC based on lattices and codes with non-negligible error, for which we provide an extensive analysis. We thereby show that it improves some of the code-based candidates. Secondly, we study puncturable KEMs in terms of the Bloom Filter KEM (BFKEM) proposed by Derler et al. (EUROCRYPT 2018) which inherently have a non-negligible correctness error. BFKEMs are a building block to construct fully forward-secret zero round-trip time (0-RTT) key-exchange protocols. In particular, we show how to achieve the first post-quantum secure BFKEM generically from lattices and codes by applying our techniques to identity-based encryption (IBE) schemes with (non-)negligible correctness error.
2020
ASIACRYPT
A Bit-Vector Differential Model for the Modular Addition by a Constant 📺
ARX algorithms are a class of symmetric-key algorithms constructed by Addition, Rotation, and XOR, which achieve the best software performance on low-end microcontrollers. To evaluate the resistance of an ARX cipher against differential cryptanalysis and its variants, recent automated methods employ constraint-satisfaction solvers, such as SMT solvers, to search for optimal characteristics. The main difficulty in formulating this search as a constraint-satisfaction problem is obtaining the differential models of the non-linear operations, that is, the constraints describing the differential probability of each non-linear operation of the cipher. While an efficient bit-vector differential model was obtained for the modular addition with two variable inputs, no differential model for the modular addition by a constant had been proposed so far, preventing ARX ciphers including this operation from being evaluated with automated methods. In this paper, we present the first bit-vector differential model for the n-bit modular addition by a constant input. Our model contains O(log2(n)) basic bit-vector constraints and describes the binary logarithm of the differential probability. We also present an SMT-based automated method to search for differential characteristics of ARX ciphers, including constant additions, and we provide an open-source tool ArxPy to find ARX differential characteristics in a fully automated way. To provide some examples, we have searched for related-key differential characteristics of TEA, XTEA, HIGHT, and LEA, obtaining better results than previous works. Our differential model and our automated tool allow cipher designers to select the best constant inputs for modular additions and cryptanalysts to evaluate the resistance of ARX ciphers against differential attacks.
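For context, the known model for two-variable-input addition mentioned above is the classical Lipmaa-Moriai formula; a minimal sketch with a brute-force cross-check on 4-bit words follows. The constant-addition model introduced in this paper is more involved and is not reproduced here.

    # Lipmaa-Moriai: XOR-differential probability of addition mod 2^n
    # with two variable inputs (the previously known model cited above).
    def eq(a, b, c, n):
        return ~(a ^ b) & ~(a ^ c) & ((1 << n) - 1)   # bits where a, b, c agree

    def xdp_add(alpha, beta, gamma, n):
        mask = (1 << n) - 1
        bad = eq(alpha << 1 & mask, beta << 1 & mask, gamma << 1 & mask, n) \
              & (alpha ^ beta ^ gamma ^ (beta << 1)) & mask
        if bad:
            return 0.0
        weight = bin(~eq(alpha, beta, gamma, n) & (mask >> 1)).count("1")
        return 2.0 ** -weight

    def xdp_add_bruteforce(alpha, beta, gamma, n):
        mask, hits = (1 << n) - 1, 0
        for x in range(1 << n):
            for y in range(1 << n):
                if ((x ^ alpha) + (y ^ beta) ^ (x + y)) & mask == gamma:
                    hits += 1
        return hits / 4 ** n

    for a, b, c in [(1, 2, 3), (1, 1, 0), (3, 5, 6), (8, 8, 0)]:
        assert xdp_add(a, b, c, 4) == xdp_add_bruteforce(a, b, c, 4)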
2020
ASIACRYPT
MoniPoly---An Expressive $q$-SDH-Based Anonymous Attribute-Based Credential System 📺
Modern attribute-based anonymous credential (ABC) systems benefit from special encodings that yield expressive and highly efficient show proofs on logical statements. The technique was first proposed by Camenisch and Groß, who constructed an SRSA-based ABC system with prime-encoded attributes that offers efficient \textsf{AND}, \textsf{OR} and \textsf{NOT} proofs. While other ABC frameworks have adopted constructions in the same vein, the Camenisch-Groß ABC has been the most expressive and asymptotically most efficient proof system to date, even if it was constrained by the requirement of a trusted message-space setup and an inherent restriction to finite-set attributes encoded as primes. In this paper, combining a new set commitment scheme and an SDH-based signature scheme, we present a provably secure ABC system that supports show proofs for complex statements. This construction is not only more expressive than existing approaches, but it is also highly efficient under unrestricted attribute space due to its ECC protocols only requiring a constant number of bilinear pairings by the verifier; none by the prover. Furthermore, we introduce strong security models for impersonation and unlinkability under adaptive active and concurrent attacks to allow for the expressiveness of our ABC as well as for a systematic comparison to existing schemes. Given this foundation, we are the first to comprehensively formally prove the security of an ABC with expressive show proofs. Specifically, we prove the security against impersonation under the $q$-(co-)SDH assumption with a tight reduction. Besides the set commitment scheme, which may be of independent interest, our security models can serve as a foundation for the design of future ABC systems.
2020
ASIACRYPT
Collusion Resistant Trace-and-Revoke for Arbitrary Identities from Standard Assumptions 📺
A traitor tracing scheme is a multi-user public-key encryption scheme where each user in the system holds a decryption key that is associated with the user's identity. Using the public key, a content distributor can encrypt a message to all of the users in the system. At the same time, if a malicious group of users combine their respective decryption keys to build a "pirate decoder," there is an efficient tracing algorithm that the content distributor can use to identify at least one of the keys used to construct the decoder. A trace-and-revoke scheme is an extension of a standard traitor tracing scheme where there is an additional key-revocation mechanism that the content distributor can use to disable the decryption capabilities of compromised keys. Namely, during encryption, the content distributor can encrypt a message with respect to a list of revoked users such that only non-revoked users can decrypt the resulting ciphertext. Trace-and-revoke schemes are challenging to construct. Existing constructions from standard assumptions can only tolerate bounded collusions (i.e., there is an a priori bound on the number of keys an adversary obtains), have system parameters that scale exponentially in the bit-length of the identities, or satisfy weaker notions of traceability that are vulnerable to certain types of "pirate evolution" attacks. In this work, we provide the first construction of a trace-and-revoke scheme that is fully collusion resistant and capable of supporting arbitrary identities (i.e., the identities can be drawn from an exponential-size space). Our scheme supports public encryption and secret tracing, and can be based on the sub-exponential hardness of the LWE problem (with a super-polynomial modulus-to-noise ratio). The ciphertext size in our construction scales logarithmically in the size of the identity space and linearly in the size of the revocation list. Our scheme leverages techniques from both combinatorial and algebraic constructions for traitor tracing.
2020
ASIACRYPT
An Algebraic Attack on Ciphers with Low-Degree Round Functions: Application to Full MiMC 📺
Algebraically simple PRFs, ciphers, or cryptographic hash functions are becoming increasingly popular, for example due to their attractive properties for MPC and new proof systems (SNARKs, STARKs, among many others). In this paper, we focus on the algebraically simple construction MiMC, which became an attractive cryptanalytic target due to its simplicity, but also due to its use as a baseline in a competition for more recent algorithms exploring this design space. For the first time, we are able to describe key-recovery attacks on all full-round versions of MiMC over GF(2^n), requiring half the code book. In the chosen-ciphertext scenario, recovering the key from this data for the n-bit full version of MiMC takes the equivalent of less than 2^(n - log_2(n) + 1) calls to MiMC and negligible amounts of memory. The attack procedure is a generalization of higher-order differential cryptanalysis, and it is based on two main ingredients. First, we present a higher-order distinguisher which exploits the fact that the algebraic degree of MiMC grows significantly slower than originally believed. Secondly, we describe an approach to turn this distinguisher into a key-recovery attack without guessing the full subkey. Finally, we show that approximately ceil(log_3(2 * R)) more rounds (where R = ceil(n * log_3(2)) is the current number of rounds of MiMC-n/n) can be necessary and sufficient to restore the security against the key-recovery attack presented here. The attack has been practically verified on toy versions of MiMC. Note that our attack does not affect the security of MiMC over prime fields.
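A quick sanity check of the round-count arithmetic above (a sketch; the choice n = 129 matches the MiMC-129/129 instance, and the complexity expression is the one stated in the abstract):

    import math

    n = 129                                  # block/key size of MiMC-n/n
    R = math.ceil(n * math.log(2, 3))        # current round count: ceil(n * log_3(2))
    extra = math.ceil(math.log(2 * R, 3))    # extra rounds suggested above
    attack_exp = n - math.log2(n) + 1        # attack cost ~ 2^(n - log2(n) + 1) calls
    print(R, extra, round(attack_exp))       # -> 82 5 123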
2020
ASIACRYPT
Adaptively Secure Inner Product Encryption from LWE 📺
Attribute-based encryption (ABE) is an advanced form of encryption scheme allowing for access policies to be embedded within the secret keys and ciphertexts. By now, we have ABEs supporting numerous types of policies based on hardness assumptions over bilinear maps and lattices. However, one of the distinguishing differences between ABEs based on these two breeds of assumptions is that the former can achieve adaptive security for quite expressive policies (e.g., inner-products, boolean formula) while the latter cannot. Recently, two adaptively secure lattice-based ABEs have appeared and changed the state of affairs: a non-zero inner-product (NIPE) encryption by Katsumata and Yamada (PKC'19) and an ABE for t-CNF policies by Tsabary (CRYPTO'19). However, the policies supported by these ABEs are still quite limited and do not embrace the more interesting policies that have been studied in the literature. Notably, constructing an adaptively secure inner-product encryption (IPE) based on lattices still remains open. In this work, we propose the first adaptively secure IPE based on the learning with errors (LWE) assumption with sub-exponential modulus size (without resorting to complexity leveraging). Concretely, our IPE supports inner-products over the integers Z with polynomial sized entries and satisfies adaptively weakly-attribute-hiding security. We also show how to convert such an IPE to an IPE supporting inner-products over Z_p for a polynomial-sized p and a fuzzy identity-based encryption (FIBE) for small and large universes. Our result builds on the ideas presented in Tsabary (CRYPTO'19), which uses constrained pseudorandom functions (CPRF) in a semi-generic way to achieve adaptively secure ABEs, and the recent lattice-based adaptively secure CPRF for inner-products by Davidson et al. (CRYPTO'20). Our main observation is realizing how to weaken the conforming CPRF property introduced in Tsabary (CRYPTO'19) by taking advantage of the specific linearity property enjoyed by the lattice evaluation algorithms by Boneh et al. (EUROCRYPT'14).
2020
ASIACRYPT
Scalable Ciphertext Compression Techniques for Post-Quantum KEMs and their Applications 📺
A multi-recipient key encapsulation mechanism, or mKEM, provides a scalable solution to securely communicating to a large group, and offers savings in both bandwidth and computational cost compared to the trivial solution of communicating with each member individually. All prior works on mKEM are limited to classical assumptions and, although some generic constructions are known, they all require specific properties that are not shared by most post-quantum schemes. In this work, we first provide a simple and efficient generic construction of mKEM that can be instantiated from versatile assumptions, including post-quantum ones. We then study these mKEM instantiations at a practical level using 8 post-quantum KEMs (which are lattice and isogeny-based NIST candidates), and CSIDH, and show that compared to the trivial solution, our mKEM offers savings of at least one order of magnitude in the bandwidth, and makes encryption time shorter by a factor ranging from 1.92 to 35. Additionally, we show that by combining mKEM with the TreeKEM protocol used by MLS – an IETF draft for secure group messaging – we obtain significant bandwidth savings.
2020
ASIACRYPT
SILVER - Statistical Independence and Leakage Verification 📺
Implementing cryptographic functions securely in the presence of physical adversaries is still a challenge, although a lion's share of research in the physical-security domain has been put into the development of countermeasures. Among several protection schemes, masking has absorbed the most research attention in both the academic and industrial communities, due to its theoretical foundation allowing to provide proofs of, or model, the achieved security level. In return, masking schemes are difficult to implement, as the implementation process is often manual, complex, and error-prone. This motivated the need for formal verification tools that allow designers and engineers to analyze and verify designs before manufacturing. In this work, we present a new framework to analyze and verify masked implementations against various security notions using different security models as reference. In particular, our framework, which directly processes the gate-level netlist resulting from hardware synthesis, relies on Reduced Ordered Binary Decision Diagrams (ROBDDs) and the concept of statistical independence of probability distributions. Compared to existing tools, our framework stands out for its simplicity, accuracy, and functionality, while still having reasonable efficiency for many applications and common use-cases.
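To illustrate the kind of gate-level object such verification tools consume, here is a minimal first-order Boolean masking sketch (an ISW-style AND gadget; the gadget itself is standard, but its use here is only an illustration of what a netlist-level checker must reason about, not part of this paper):

    import secrets

    def share(x):                    # split bit x into two Boolean shares
        x0 = secrets.randbits(1)
        return x0, x ^ x0

    def masked_and(x0, x1, y0, y1):  # first-order ISW-style AND gadget
        r = secrets.randbits(1)      # fresh randomness
        z0 = (x0 & y0) ^ r
        # The parenthesization (evaluation order) matters for security:
        z1 = (((x1 & y1) ^ r) ^ (x0 & y1)) ^ (x1 & y0)
        return z0, z1                # z0 ^ z1 == (x0 ^ x1) & (y0 ^ y1)

    for x in (0, 1):
        for y in (0, 1):
            z0, z1 = masked_and(*share(x), *share(y))
            assert z0 ^ z1 == x & y
    # A tool like SILVER checks that the distribution of every internal wire
    # is statistically independent of the unmasked secrets x and y.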
2020
ASIACRYPT
Lower Bounds on the Degree of Block Ciphers 📺
So far, only methods for estimating an upper bound on the algebraic degree of block ciphers have been known, which does not help a designer guarantee security. In this paper we provide meaningful lower bounds on the algebraic degree of modern block ciphers.
2020
ASIACRYPT
Improvements of Algebraic Attacks for solving the Rank Decoding and MinRank problems 📺
In this paper, we show how to significantly improve algebraic techniques for solving the MinRank problem, which is ubiquitous in multivariate and rank-metric code-based cryptography. In the case of the structured MinRank instances arising in the latter, we build upon a recent breakthrough in Bardet et al. (EUROCRYPT 2020) showing that algebraic attacks outperform the combinatorial ones that were considered state of the art up until now. Through a slight modification of this approach, we completely avoid Gröbner basis computations for certain parameters and are left only with solving linear systems. This not only substantially improves the complexity, but also gives a convincing argument as to why algebraic techniques work in this case. When used against the second round NIST-PQC candidates ROLLO-I-128/192/256, our new attack has bit complexity respectively 71, 87, and 151, to be compared to 117, 144, and 197 as obtained in Bardet et al. (EUROCRYPT 2020). The linear systems arise from the nullity of the maximal minors of a certain matrix associated to the algebraic modeling. We also use a similar approach to improve the algebraic MinRank solvers for the usual MinRank problem. When applied against the second round NIST-PQC candidates GeMSS and Rainbow, our attack has a complexity that is very close to or even slightly better than those of the best known attacks so far. Note that these latter attacks did not rely on MinRank techniques, since the MinRank approach used to give complexities that were far away from classical security levels.
2020
ASIACRYPT
Succinct Functional Commitment for a Large Class of Arithmetic Circuits 📺
A succinct functional commitment (SFC) scheme for a circuit class $\mathbf{CC}$ enables, for any circuit $\mathcal{C}\in \mathbf{CC}$, the committer to first succinctly commit to a vector $\vec{\alpha}$, and later succinctly open the commitment to $\mathcal{C} (\vec{\alpha}, \vec{\beta})$, where the verifier chooses $\vec{\beta}$ at the time of opening. Unfortunately, SFC schemes are known only for severely limited function classes like the class of inner products. By making non-black-box use of SNARK-construction techniques, we propose an SFC scheme for the large class of semi-sparse polynomials. The new SFC scheme can be used to, say, efficiently (1) implement sparse polynomials, and (2) aggregate various interesting SFC (e.g., vector commitment and polynomial commitment) schemes. The new scheme is evaluation-binding under a new instantiation of the computational uber-assumption. We provide a thorough analysis of the new assumption.
2020
ASIACRYPT
Secure MPC: Laziness Leads to GOD 📺
Motivated by what we call "honest but lazy" parties in the context of secure multiparty computation, we revisit the notion of multi-key FHE schemes (MFHE). In MFHE, any message encrypted using a public key pk_i can be "expanded" so that the resulting ciphertext is encrypted with respect to a set of public keys (pk_1,..,pk_n). Such expanded ciphertexts can be homomorphically evaluated with respect to any circuit to generate a ciphertext ct. Then, this ciphertext ct can be partially decrypted using a secret key sk_i (corresponding to the public key pk_i) to produce a partial decryption p_i. Finally, these partial decryptions {p_{i}}_{i in [n]} can be combined to recover the output. However, this definition of MFHE works only for n-out-of-n access structures and, thus, each node in the system is a point of failure. In the context of "honest but lazy" parties, it is necessary to be able to decrypt even when only given a subset of partial decryptions (say t out of n). In order to solve this problem, we introduce a new notion of multi-key FHE designed to handle arbitrary access patterns that can reconstruct the output. We call it a threshold multi-key FHE scheme (TMFHE). Our main contributions are the following: * We formally define and construct TMFHE for any access structure given by a monotone boolean formula, assuming LWE. * We construct the first simulation-extractable multi-string NIZK from polynomially hard LWE. * We use TMFHE and our multi-string NIZK to obtain the first round-optimal (three round) MPC protocol in the plain model with guaranteed output delivery secure against malicious adversaries or, more generally, mixed adversaries (which supports "honest but lazy" parties), assuming LWE. * Our MPC protocols simultaneously achieve security against the maximum number of corruptions under which guaranteed output delivery is achievable, depth-proportional communication complexity, and reusability.
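The t-out-of-n reconstruction at the heart of the threshold notion is classical Shamir secret sharing; a minimal field-level sketch follows (illustrative only: TMFHE combines LWE partial decryptions, not field shares):

    import secrets

    P = 2 ** 127 - 1   # a Mersenne prime field; illustrative choice

    def shamir_share(secret, t, n):
        # Degree-(t-1) polynomial with constant term = secret, evaluated at 1..n.
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
        return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
                for i in range(1, n + 1)]

    def shamir_reconstruct(shares):   # Lagrange interpolation at 0
        total = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (P - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    shares = shamir_share(secret=42, t=3, n=5)
    assert shamir_reconstruct(shares[:3]) == 42     # any 3 of 5 shares suffice
    assert shamir_reconstruct(shares[1:4]) == 42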
2020
ASIACRYPT
Cryptographic Group Actions and Applications 📺
Isogeny-based assumptions have emerged as a viable option for quantum-secure cryptography. Recent works have shown how to build efficient (public-key) primitives from isogeny-based assumptions such as CSIDH and CSI-FiSh. However, in its present form, the landscape of isogenies does not seem very amenable to realizing new cryptographic applications. Isogeny-based assumptions often have unique efficiency and security properties, which makes building new cryptographic applications from them a potentially tedious and time-consuming task. In this work, we propose a new framework based on group actions that enables the easy usage of a variety of isogeny-based assumptions. Our framework generalizes the works of Brassard and Yung (Crypto'90) and Couveignes (Eprint'06). We provide new definitions for group actions endowed with natural hardness assumptions that model isogeny-based constructions amenable to group actions such as CSIDH and CSI-FiSh. We demonstrate the utility of our new framework by leveraging it to construct several primitives that were not previously known from isogeny-based assumptions. These include smooth projective hashing, dual-mode PKE, two-message statistically sender-private OT, and Naor-Reingold style PRF. These primitives are useful building blocks for a wide range of cryptographic applications. We introduce a new assumption over group actions called Linear Hidden Shift (LHS) assumption. We then present some discussions on the security of the LHS assumption and we show that it implies symmetric KDM-secure encryption, which in turn enables many other primitives that were not previously known from isogeny-based assumptions.
2020
ASIACRYPT
Security Reductions for White-Box Key-Storage in Mobile Payments 📺
The goal of white-box cryptography is to provide security even when the cryptographic implementation is executed in adversarially controlled environments. White-box implementations nowadays appear in commercial products such as mobile payment applications, e.g., those certified by Mastercard. Interestingly, there, white-box cryptography is championed as a tool for secure storage of payment tokens, and importantly, the white-boxed storage functionality is bound to a hardware functionality to prevent code-lifting attacks. In this paper, we show that the approach of using hardware-binding and obfuscation for secure storage is conceptually sound. Following security specifications by Mastercard and also EMVCo, we first define security for a white-box key derivation function (WKDF) that is bound to a hardware functionality. WKDFs with hardware-binding model a secure storage functionality, as the WKDFs in turn can be used to derive encryption keys for secure storage. We then provide a proof-of-concept construction of WKDFs based on pseudorandom functions (PRF) and obfuscation. To show that our use of cryptographic primitives is sound, we perform a cryptographic analysis and reduce the security of our WKDF to the cryptographic assumptions of indistinguishability obfuscation and PRF-security. The hardware functionality that our WKDF is bound to is a PRF-like functionality. Obfuscation helps us to hide the secret key used for the verification, essentially emulating a signature functionality as is provided by the Android key store. We rigorously define the required security properties of a hardware-bound white-box payment application (WPAY) for generating and encrypting valid payment requests. We construct a WPAY, which uses a WKDF as a secure building block. We thereby show that a WKDF can be securely combined with any secure symmetric encryption scheme, including those based on standard ciphers such as AES.
2020
ASIACRYPT
Cryptanalysis of Masked Ciphers: A not so Random Idea 📺
A new approach to the security analysis of hardware-oriented masked ciphers against second-order side-channel attacks is developed. By relying on techniques from symmetric-key cryptanalysis, concrete security bounds are obtained in a variant of the probing model that allows the adversary to make only a bounded, but possibly very large, number of measurements. Specifically, it is formally shown how a bounded-query variant of robust probing security can be reduced to the linear cryptanalysis of masked ciphers. As a result, the compositional issues of higher-order threshold implementations can be overcome without relying on fresh randomness. From a practical point of view, the aforementioned approach makes it possible to transfer many of the desirable properties of first-order threshold implementations, such as their low randomness usage, to the second-order setting. For example, a straightforward application to the block cipher LED results in a masking using less than 700 random bits including the initial sharing. In addition, the cryptanalytic approach introduced in this paper provides additional insight into the design of masked ciphers and allows for a quantifiable trade-off between security and performance.
2020
ASIACRYPT
A Combinatorial Approach to Quantum Random Functions 📺
Quantum pseudorandom functions (QPRFs) extend the classical security of a PRF by allowing the adversary to issue queries on input superpositions. Zhandry [Zhandry, FOCS 2012] showed a separation between the two notions and proved that common construction paradigms are also quantum secure, albeit with a new ad-hoc analysis. In this work, we revisit the question of constructing QPRFs and propose a new method starting from small-domain (classical) PRFs: At the heart of our approach is a new domain-extension technique based on bipartite expanders. Interestingly, our analysis is almost entirely classical. As a corollary of our main theorem, we obtain the first (approximate) key-homomorphic quantum PRF based on the quantum intractability of the learning with errors problem.
2020
ASIACRYPT
Calamari and Falafl: Logarithmic (Linkable) Ring Signatures from Isogenies and Lattices 📺
We construct efficient ring signatures (RS) from isogeny and lattice assumptions. Our ring signatures are based on a logarithmic OR proof for group actions. We instantiate this group action by either the CSIDH group action or an MLWE-based group action to obtain our isogeny-based or lattice-based RS scheme, respectively. Even though the OR proof has a binary challenge space and therefore requires a number of repetitions which is linear in the security parameter, the sizes of our ring signatures are small and scale better with the ring size N than previously known post-quantum ring signatures. We also construct linkable ring signatures (LRS) that are almost as efficient as the non-linkable variants. The isogeny-based scheme produces signatures whose size is an order of magnitude smaller than all previously known logarithmic post-quantum ring signatures, but it is relatively slow (e.g. 5.5 KB signatures and 79 s signing time for rings with 8 members). In comparison, the lattice-based construction is much faster, but has larger signatures (e.g. 30 KB signatures and 90 ms signing time for the same ring size). For small ring sizes our lattice-based ring signatures are slightly larger than state-of-the-art schemes, but they are smaller for ring sizes N larger than approximately 1024.
2020
ASIACRYPT
Crowd Verifiable Zero-Knowledge and End-to-end Verifiable Multiparty Computation 📺
Auditing a secure multiparty computation (MPC) protocol entails the validation of the protocol transcript by a third party that is otherwise untrusted. In this work we introduce the concept of end-to-end verifiable MPC (VMPC), that requires the validation to provide a correctness guarantee even in the setting that all servers, trusted setup primitives and all the client systems utilized by the input-providing users of the MPC protocol are subverted by an adversary. To instantiate VMPC, we introduce a new concept in the setting of zero-knowledge protocols that we term crowd verifiable zero-knowledge (CVZK). A CVZK protocol enables a prover to convince a set of verifiers about a certain statement, even though each one individually contributes a small amount of entropy for verification and some of them are adversarially controlled. Given CVZK, we present a VMPC protocol that is based on discrete-logarithm related assumptions. At the high level of adversity that VMPC is meant to withstand, it is infeasible to ensure perfect correctness, thus we investigate the classes of functions and verifiability relations that are feasible in our framework, and present a number of possible applications the underlying functions of which can be implemented via VMPC.
2020
ASIACRYPT
Towards Closing The Security Gap of Tweak-aNd-Tweak (TNT) 📺
Tweakable block ciphers (TBCs) have been established as a valuable replacement for many applications of classical block ciphers. While several dedicated TBCs have been proposed in the previous years, generic constructions that build a TBC from a classical block cipher are still highly useful, for example, to reuse an existing implementation. However, most generic constructions need an additional call to either the block cipher or a universal hash function to process the tweak, which limited their efficiency. To address this deficit, Bao et al. proposed Tweak-aNd-Tweak (TNT) at EUROCRYPT'20. Their construction chains three calls to independent keyed permutations and adds the unmodified tweak to the state in between the calls. They further suggested an efficient instantiation TNT-AES that was based on round-reduced AES for each of the permutations. Their work could prove 2n/3-bit security for their construction, where n is the block size in bits. Though, in the absence of an upper bound, their analysis had to consider all possible attack vectors with up to 2^n time, data, and memory. Still, closing the gap between both bounds remained a highly interesting research question. In this work, we show that a variant of Mennink's distinguisher on CLRW2 with O(sqrt{n} 2^{3n/4}) data and O(2^{3n/2}) time from TCC'18 also applies to TNT. We reduce its time complexity to O(sqrt{n} 2^{3n/4}), show the existence of a second similar distinguisher, and demonstrate how to transform the distinguisher to a key-recovery attack on TNT-AES[5,*,*] from an impossible differential. From a constructive point of view, we adapt the rigorous STPRP analysis of CLRW2 by Jha and Nandi to show O(2^{3n/4}) TPRP security for TNT. Thus, we move towards closing the gap between the previous proof and attacks for TNT as well as its proposed instance.
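The construction itself is easy to state in code. Below is a toy sketch following the description above (three independent keyed permutations with the unmodified tweak XORed into the state between the calls); the table-based permutations on 16-bit blocks are illustrative stand-ins for the block-cipher calls, and TNT-AES instantiates them with round-reduced AES:

    import random

    N_BITS = 16                       # toy block size; TNT-AES uses n = 128

    def keyed_perm(key):              # stand-in for an independently keyed cipher
        table = list(range(1 << N_BITS))
        random.Random(key).shuffle(table)
        return table

    P1, P2, P3 = (keyed_perm(k) for k in (b"k1", b"k2", b"k3"))

    def tnt_encrypt(tweak, m):
        x = P1[m]
        x ^= tweak                    # tweak XORed between the first two calls
        x = P2[x]
        x ^= tweak                    # and again between the last two calls
        return P3[x]

    print(hex(tnt_encrypt(0x1234, 42)), hex(tnt_encrypt(0x5678, 42)))
    # Same message under two tweaks: behaves like a family of permutations.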
2020
ASIACRYPT
Improved Classical and Quantum Algorithms for Subset-Sum 📺
We present new classical and quantum algorithms for solving random subset-sum instances. First, we improve over the Becker-Coron-Joux algorithm (EUROCRYPT 2011) from $\widetilde{O}(2^{0.291 n})$ down to $\widetilde{O}(2^{0.283 n})$, using more general representations with values in $\{0,1,-1,2\}$. Next, we improve the state of the art of quantum algorithms for this problem in several directions. By combining the Howgrave-Graham-Joux algorithm (EUROCRYPT 2010) and quantum search, we devise an algorithm with asymptotic cost $\widetilde{O}(2^{0.236 n})$, lower than the cost of the quantum walk based on the same classical algorithm proposed by Bernstein, Jeffery, Lange and Meurer (PQCRYPTO 2013). This algorithm has the advantage of using \emph{classical} memory with quantum random access, while the previously known algorithms used the quantum walk framework, and required \emph{quantum} memory with quantum random access. We also propose new quantum walks for subset-sum, performing better than the previous best time complexity of $\widetilde{O}(2^{0.226 n})$ given by Helm and May (TQC 2018). We combine our new techniques to reach a time $\widetilde{O}(2^{0.216 n})$. This time is dependent on a heuristic on quantum walk updates, formalized by Helm and May, that is also required by the previous algorithms. We show how to partially overcome this heuristic, and we obtain an algorithm with quantum time $\widetilde{O}(2^{0.218 n})$ requiring only the standard classical subset-sum heuristics.
2020
ASIACRYPT
KVaC: Key-Value Commitments for Blockchains and Beyond 📺
As blockchains grow in size, validating new transactions becomes more and more resource intensive. To deal with this, there is a need to discover compact encodings of the (effective) state of a blockchain --- an encoding that allows for efficient proofs of membership and updates. In the case of account-based cryptocurrencies, the state can be represented by a key-value map, where keys are the account addresses and values consist of account balance, nonce, etc. We propose a new commitment scheme for key-value maps whose size does not grow with the number of keys, yet proofs of membership are of constant-size. In fact, both the encoding and the proofs consist of just two and three group elements respectively (in groups of unknown order like class groups). Verifying and updating proofs involves just a few group exponentiations. Additive updates to key values enjoy the same level of efficiency too. Key-value commitments can be used to build dynamic accumulators and vector commitments, which find applications in group signatures, anonymous credentials, verifiable databases, interactive oracle proofs, etc. Using our new key-value commitment, we provide the most efficient constructions of (sub)vector commitments to date.
2020
ASIACRYPT
Radical Isogenies 📺
This paper introduces a new approach to computing isogenies called ``radical isogenies'' and a corresponding method to compute chains of $N$-isogenies that is very efficient for small $N$. The method is fully deterministic and completely avoids generating $N$-torsion points. It is based on explicit formulae for the coordinates of an $N$-torsion point $P'$ on the codomain of a cyclic $N$-isogeny $\varphi : E \to E'$, such that composing $\varphi$ with $E' \to E' / \langle P' \rangle$ yields a cyclic $N^2$-isogeny. These formulae are simple algebraic expressions in the coefficients of $E$, the coordinates of a generator $P$ of $\ker \varphi$, and an $N$th root $\sqrt[N]{\rho}$, where the radicand $\rho$ itself is given by an easily computable algebraic expression in the coefficients of $E$ and the coordinates of $P$. The formulae can be iterated and are particularly useful when computing chains of $N$-isogenies over a finite field $\mathbb{F}_q$ with $\gcd(q-1, N) = 1$, where taking an $N$th root is a simple exponentiation. Compared to the state-of-the-art, our method results in an order of magnitude speed-up for $N \leq 13$; for larger $N$, the advantage disappears due to the increasing complexity of the formulae. When applied to CSIDH, we obtain a speed-up of about $19\%$ over the implementation by Bernstein, De Feo, Leroux and Smith for the CSURF-512 parameters.
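The key computational convenience exploited here is elementary: when $\gcd(q-1, N) = 1$, the map $x \mapsto x^N$ permutes $\mathbb{F}_q^*$, so an $N$th root is a single exponentiation. A minimal sketch (toy prime, Python 3.8+ for the modular inverse):

    q, N = 10007, 5                   # toy prime field; N prime, gcd(q - 1, N) = 1
    assert (q - 1) % N != 0           # for prime N this is exactly gcd(q-1, N) = 1

    e = pow(N, -1, q - 1)             # N^(-1) mod (q - 1)
    rho = 1234                        # the radicand
    root = pow(rho, e, q)             # the unique Nth root of rho in F_q^*
    assert pow(root, N, q) == rho     # (rho^e)^N = rho^(e*N mod (q-1)) = rho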
2020
ASIACRYPT
Subvert KEM to Break DEM: Practical Algorithm-Substitution Attacks on Public-Key Encryption 📺
Motivated by the widespread concern about mass surveillance of encrypted communications, Bellare \textit{et al.} introduced at CRYPTO 2014 the notion of Algorithm-Substitution Attack (ASA) where the legitimate encryption algorithm is replaced by a subverted one that aims to undetectably exfiltrate the secret key via ciphertexts. Practically implementable ASAs on various cryptographic primitives (Bellare \textit{et al.}, CRYPTO'14 \& CCS'15; Ateniese \textit{et al.}, CCS'15; Berndt and Liśkiewicz, CCS'17) have been constructed and analyzed, leaking the secret key successfully. Nevertheless, in spite of much current attention, the practical impact of ASAs (formulated originally for symmetric key cryptography) on public-key (PKE) encryption operations remains unclear, primarily since the encryption operation of PKE does not involve the secret key and previously known ASAs become relatively inefficient for leaking the plaintext due to the logarithmic upper bound of exfiltration rate (Berndt and Liśkiewicz, CCS'17). In this work, we formulate a practical ASA on PKE encryption algorithm which, perhaps surprisingly, turns out to be much more efficient and robust than existing ones, showing that ASAs on PKE schemes are far more dangerous than previously believed. We mainly target PKE of hybrid encryption which is the most prevalent way to employ PKE in the literature and in practical systems. The main strategy of our ASA is to subvert the underlying key encapsulation mechanism (KEM) so that the session key encapsulated could be efficiently extracted, which, in turn, breaks the data encapsulation mechanism (DEM) enabling us to learn the plaintext itself. Concretely, our non-black-box attack enables recovering the plaintext from only two successive ciphertexts and minimally depends on a short state of previous internal randomness. A widely used class of KEMs is shown to be subvertible by our powerful attack. Our attack relies on a novel identification and formalization of specific non-black-box yet general enough properties that yield practical ASAs on KEMs. More broadly, this may shed some light on exploring the structural weakness of other composed cryptographic primitives, which may make them susceptible to more dangerous ASAs that surpass the logarithmic upper bound of exfiltration rate on universal ASAs.
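To fix the interface the attack targets, here is a bare-bones hybrid (KEM/DEM) sketch with a toy hashed-DH KEM and a hash-stream DEM; all parameters and helper names are illustrative and insecure. The ASA described above tampers with the encapsulation step so that the subverter can recover the session key k from ciphertexts, after which the DEM layer offers no protection:

    import hashlib, secrets

    p, g = 2 ** 127 - 1, 3            # toy group parameters (not secure!)

    def kem_keygen():
        sk = secrets.randbelow(p - 2) + 1
        return sk, pow(g, sk, p)

    def kem_encap(pk):                # the step an ASA would subvert
        r = secrets.randbelow(p - 2) + 1
        k = hashlib.sha256(str(pow(pk, r, p)).encode()).digest()
        return pow(g, r, p), k        # (KEM ciphertext, session key)

    def kem_decap(sk, c_kem):
        return hashlib.sha256(str(pow(c_kem, sk, p)).encode()).digest()

    def dem(k, msg):                  # hash-counter stream cipher as the DEM
        stream = b"".join(hashlib.sha256(k + i.to_bytes(4, "big")).digest()
                          for i in range(len(msg) // 32 + 1))
        return bytes(a ^ b for a, b in zip(msg, stream))

    sk, pk = kem_keygen()
    c_kem, k = kem_encap(pk)
    c_dem = dem(k, b"attack at dawn")
    assert dem(kem_decap(sk, c_kem), c_dem) == b"attack at dawn"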
2020
ASIACRYPT
Inner-Product Functional Encryption with Fine-Grained Access Control 📺
We construct new functional encryption schemes that combine the access control functionality of attribute-based encryption with the possibility of performing linear operations on the encrypted data. While such a primitive could be easily realized from fully fledged functional encryption schemes, what makes our result interesting is the fact that our schemes simultaneously achieve all the following properties. They are public-key, efficient and can be proved secure under standard and well established assumptions (such as LWE or pairings). Furthermore, security is guaranteed in the setting where adversaries are allowed to get functional keys that decrypt the challenge ciphertext. Our first results are two functional encryption schemes for the family of functions that allow users to embed policies (expressed by monotone span programs) in the encrypted data, so that one can generate functional keys to compute weighted sums on the latter. Both schemes are pairing-based and quite generic: they combine the ALS functional encryption scheme for inner products from Crypto 2016 with any attribute-based encryption schemes relying on the dual-system encryption methodology. As an additional bonus, they yield simple and elegant multi-input extensions essentially for free, thereby broadening the set of applications for such schemes. Multi-input is a particularly desirable feature in our setting, since it gives a finer access control over the encrypted data, by allowing users to associate different access policies to different parts of the encrypted data. Our second result builds identity-based functional encryption for inner products from lattices. This is achieved by carefully combining existing IBE schemes from lattices with adapted, LWE-based, variants of ALS. We point out intrinsic technical bottlenecks to obtain richer forms of access control from lattices. From a conceptual point of view, all our results can be seen as further evidence that more expressive forms of functional encryption can be realized under standard assumptions and with little computational overhead.
2020
ASIACRYPT
Unbounded HIBE with Tight Security 📺
We construct the first unbounded hierarchical identity-based encryption (HIBE) scheme with tight security reductions based on standard assumptions. Our main technical contribution is a novel proof strategy that allows us to tightly randomize user secret keys for identities with arbitrary hierarchy depths using low entropy hidden in a small and hierarchy-independent master public key. The notion of unbounded HIBE is proposed by Lewko and Waters (Eurocrypt 2011). In contrast to most HIBE schemes, an unbounded scheme does not require any maximum depth to be specified in the setup phase, and user secret keys or ciphertexts can be generated for identities of arbitrary depths with hierarchy-independent system parameters. While all the previous unbounded HIBE schemes have security loss that grows at least linearly in the number of user secret key queries, the security loss of our scheme is only dependent on the security parameter, even in the multi-challenge setting, where an adversary can ask for multiple challenge ciphertexts. We prove the adaptive security of our scheme based on the Matrix Decisional Diffie-Hellman assumption in prime-order pairing groups, which generalizes a family of standard Diffie-Hellman assumptions such as k-Linear.
2020
ASIACRYPT
Asymptotically Good Multiplicative LSSS over Galois Rings and Applications to MPC over Z/p^k Z 📺
We study information-theoretic multiparty computation (MPC) protocols over rings Z/p^k Z that have good asymptotic communication complexity for a large number of players. An important ingredient for such protocols is arithmetic secret sharing, i.e., linear secret-sharing schemes with multiplicative properties. The standard way to obtain these over fields is with a family of linear codes C, such that C, $C^\perp$ and C^2 are asymptotically good (strongly multiplicative). For our purposes here it suffices if the square code C^2 is not the whole space, i.e., has codimension at least 1 (multiplicative). Our approach is to lift such a family of codes defined over a finite field F to a Galois ring, which is a local ring that has F as its residue field and that contains Z/p^k Z as a subring, and thus enables arithmetic that is compatible with both structures. Although arbitrary lifts preserve the distance and dual distance of a code, as we demonstrate with a counterexample, the multiplicative property is not preserved. We work around this issue by showing a dedicated lift that preserves \emph{self-orthogonality} (as well as distance and dual distance), for p > 2. Self-orthogonal codes are multiplicative, therefore we can use existing results of asymptotically good self-dual codes over fields to obtain arithmetic secret sharing over Galois rings. For p = 2 we obtain multiplicativity by using existing techniques of secret-sharing using both C and $C^\perp$, incurring a constant overhead. As a result, we obtain asymptotically good arithmetic secret-sharing schemes over Galois rings. With these schemes in hand, we extend existing field-based MPC protocols to obtain MPC over Z/p^k Z, in the setting of a submaximal adversary corrupting less than a fraction 1/2 - \varepsilon of the players, where \varepsilon > 0 is arbitrarily small. We consider 3 different corruption models, and obtain O(n) bits communicated per multiplication for both passive security and active security with abort. For full security with guaranteed output delivery we use a preprocessing model and get O(n) bits per multiplication in the online phase and O(n log n) bits per multiplication in the offline phase. Thus, we obtain true linear bit complexities, without the common assumption that the ring size depends on the number of players.
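A concrete feel for the algebraic object: the Galois ring GR(p^k, d) is Z_{p^k}[x]/(f) for a monic degree-d polynomial f that is irreducible mod p; it has residue field F_{p^d} and contains Z/p^kZ as its constants. A minimal multiplication sketch for the illustrative toy GR(2^3, 2) = Z_8[x]/(x^2 + x + 1):

    PK = 8                     # p^k = 2^3, so coefficients live in Z/8Z
    F = [1, 1, 1]              # f(x) = 1 + x + x^2, monic and irreducible mod 2

    def gr_mul(a, b):          # a, b: coefficient lists, lowest degree first
        d = len(F) - 1
        prod = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                prod[i + j] = (prod[i + j] + ai * bj) % PK
        for i in range(len(prod) - 1, d - 1, -1):   # reduce modulo f
            c, prod[i] = prod[i], 0
            for j in range(d):
                prod[i - d + j] = (prod[i - d + j] - c * F[j]) % PK
        return prod[:d]

    # x * x = x^2 = -(1 + x) = 7 + 7x in Z_8[x]/(x^2 + x + 1)
    assert gr_mul([0, 1], [0, 1]) == [7, 7]
    # The constants {0,...,7} form the subring Z/8Z; reducing coefficients
    # mod 2 lands in the residue field F_4.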
2020
ASIACRYPT
New results on Gimli: full-permutation distinguishers and improved collisions 📺
Best Paper Award
Gimli is a family of cryptographic primitives (both a hash function and an AEAD scheme) that has been selected for the second round of the NIST competition for standardizing new lightweight designs. The candidate Gimli is built around the Gimli permutation, which was presented at CHES 2017. In this paper, we study the security of both the permutation and the constructions that are based on it. We exploit the slow diffusion in Gimli and its internal symmetries to build, for the first time, a distinguisher on the full permutation with complexity $2^{64}$. We also provide a practical distinguisher on 23 out of the full 24 rounds of Gimli, which we have implemented. Next, we give (full state) collision and semi-free-start collision attacks on Gimli-Hash, reaching up to 12 and 18 rounds respectively. On the practical side, we compute a collision on 8-round Gimli-Hash. In the quantum setting, these attacks reach 2 more rounds. Finally, we perform the first study of linear trails in the permutation, and we propose differential-linear attacks that reach up to 17 rounds of Gimli.
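For readers who want to see where the slow diffusion comes from, here is a Python sketch of the Gimli permutation following the CHES 2017 reference description (384-bit state as three rows of four 32-bit words): the SP-box acts on each column independently, and columns communicate only through the word swaps on the first row every other round. Treat this as an illustrative transcription rather than a normative specification.

    # Sketch of the Gimli permutation: 12 32-bit words, 3 rows x 4 columns.
    MASK = 0xFFFFFFFF

    def rotl(x, n):
        return ((x << n) | (x >> (32 - n))) & MASK if n else x

    def gimli(s):
        s = list(s)
        for rnd in range(24, 0, -1):
            for col in range(4):                     # column-local SP-box
                x = rotl(s[col], 24)
                y = rotl(s[4 + col], 9)
                z = s[8 + col]
                s[8 + col] = (x ^ (z << 1) ^ ((y & z) << 2)) & MASK
                s[4 + col] = (y ^ x ^ ((x | z) << 1)) & MASK
                s[col]     = (z ^ y ^ ((x & y) << 3)) & MASK
            if rnd % 4 == 0:                         # small swap + constant
                s[0], s[1], s[2], s[3] = s[1], s[0], s[3], s[2]
                s[0] ^= 0x9E377900 | rnd
            elif rnd % 4 == 2:                       # big swap
                s[0], s[1], s[2], s[3] = s[2], s[3], s[0], s[1]
        return s

    print(["%08x" % w for w in gimli(range(12))])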
2020
ASIACRYPT
How to Build Optimally Secure PRFs Using Block Ciphers 📺
In EUROCRYPT '96, Aiello and Venkatesan proposed two candidates for $ 2n $-bit to $ 2n $-bit pseudorandom functions (PRFs), called Benes and modified Benes (or mBenes), based on $ n $-bit to $ n $-bit PRFs. While Benes is known to be secure up to $ 2^n $ queries (Patarin, AFRICACRYPT '08), the security of mBenes has only been proved up to $ 2^{n(1-\epsilon)} $ queries for all $ \epsilon > 0 $ by Patarin and Montreuil in ICISC '05. In this work, we show that the composition of a $ 2n $-bit hash function with mBenes is a secure variable input length (VIL) PRF up to $ 2^{n-2} $ queries (given appropriate hash function bounds). We extend our analysis with block ciphers as the underlying primitive and obtain two optimally secure VIL PRFs using block ciphers. The first of these candidates requires $ 6 $ calls to the block cipher. The second candidate requires just $ 4 $ calls to the block cipher, but here the proof is based on Patarin's mirror theory. Further, we instantiate the hash function with a PMAC+/LightMAC+-like hash to get six candidates for deterministic message authentication codes with optimal security.
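For context, the Benes construction itself fits in a few lines: two "butterfly" layers turn eight n-bit-to-n-bit PRFs into one 2n-bit-to-2n-bit PRF. The sketch below uses truncated HMAC-SHA256 as a stand-in for the underlying PRFs; the paper's hash-then-mBenes VIL constructions and block-cipher instantiations are not reproduced here.

    # Sketch of the Benes construction: a 2n-bit PRF from eight n-bit PRFs
    # f1..f8 arranged as two butterfly layers. HMAC-SHA256 truncated to
    # n bits stands in for the underlying PRFs.
    import hmac, hashlib

    N = 16  # n/8 bytes, i.e. n = 128

    def prf(key_label, x):
        return hmac.new(key_label, x, hashlib.sha256).digest()[:N]

    def xor(a, b):
        return bytes(u ^ v for u, v in zip(a, b))

    def benes(key, left, right):
        f = lambda i, x: prf(key + bytes([i]), x)  # derive f1..f8 from key
        u = xor(f(1, left), f(2, right))           # first butterfly
        v = xor(f(3, left), f(4, right))
        out_l = xor(f(5, u), f(6, v))              # second butterfly
        out_r = xor(f(7, u), f(8, v))
        return out_l, out_r

    L, R = b"\x00" * N, b"\x01" * N
    print(benes(b"toy-key", L, R))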
2020
ASIACRYPT
Security Limitations of Classical-Client Delegated Quantum Computing 📺
Secure delegated quantum computing allows a computationally weak client to outsource an arbitrary quantum computation to an untrusted quantum server in a privacy-preserving manner. One of the promising candidates to achieve classical delegation of quantum computation is classical-client remote state preparation ($\sf{RSP}_{CC}$), where a client remotely prepares a quantum state using a classical channel. However, the privacy loss incurred by employing $\sf{RSP}_{CC}$ as a sub-module is unclear. In this work, we investigate this question using the Constructive Cryptography framework by Maurer and Renner (ICS'11). We first identify the goal of $\sf{RSP}_{CC}$ as the construction of ideal $\sf{RSP}$ resources from classical channels and then reveal the security limitations of using $\sf{RSP}_{CC}$. First, we uncover a fundamental relationship between constructing ideal $\sf{RSP}$ resources (from classical channels) and the task of cloning quantum states. Any classically constructed ideal $\sf{RSP}$ resource must leak to the server the full classical description (possibly in an encoded form) of the generated quantum state, even if we target computational security only. As a consequence, we find that the realization of common $\sf{RSP}$ resources, without weakening their guarantees drastically, is impossible due to the no-cloning theorem. Second, the above result does not rule out that a specific $\sf{RSP}_{CC}$ protocol can replace the quantum channel at least in some contexts, such as the Universal Blind Quantum Computing ($\sf{UBQC}$) protocol of Broadbent et al. (FOCS '09). However, we show that the resulting UBQC protocol cannot maintain its proven composable security as soon as $\sf{RSP}_{CC}$ is used as a subroutine. Third, we show that replacing the quantum channel of the above $\sf{UBQC}$ protocol by the $\sf{RSP}_{CC}$ protocol QFactory of Cojocaru et al. (Asiacrypt '19) preserves the weaker, game-based, security of $\sf{UBQC}$.
2020
ASIACRYPT
Packed Multiplication: How to Amortize the Cost of Side-channel Masking? 📺
Higher-order masking countermeasures provide strong provable security against side-channel attacks at the cost of incurring significant overheads, which largely hinders their applicability. Previous works towards remedying this cost mostly concentrated on ``local'' calculations, i.e., optimizing the cost of computation units such as a single AND gate or a field multiplication. This paper explores a complementary ``global'' approach, i.e., considering multiple operations in the masked domain as a batch and reducing randomness and computational cost via amortization. In particular, we focus on the amortization of $\ell$ parallel field multiplications for appropriate integer $\ell > 1$, and design a kit named {\it packed multiplication} for implementing such a batch. For $\ell+d\leq2^m$, when $\ell$ parallel multiplications over $\mathbb{F}_{2^{m}}$ with $d$-th order probing security are implemented, packed multiplication consumes $d^2+2\ell d + \ell$ bilinear multiplications and $2d^2 + d(d+1)/2$ random field variables, outperforming the state-of-the-art results with $O(\ell d^2)$ multiplications and $\ell \left \lfloor d^2/4\right \rfloor + \ell d$ randomness. To prove $d$-probing security for packed multiplications, we introduce some weaker security notions for multiple-inputs-multiple-outputs gadgets and use them as intermediate steps, which may be of independent interest. As parallel field multiplications exist almost everywhere in symmetric cryptography, lifting optimizations from ``local'' to ``global'' substantially enlarges the space of improvements. To demonstrate, we showcase the method on the AES Subbytes step, GCM and TET (a popular disk encryption). Notably, when $d=8$, our implementation of AES Subbytes on the ARM Cortex-M architecture achieves a gain of up to $33\%$ in total speed and saves up to $68\%$ of the random bits compared with the state-of-the-art bitsliced implementation reported at ASIACRYPT 2018.
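To see what is being amortized, recall the classical ISW multiplication gadget (Ishai, Sahai, and Wagner, CRYPTO 2003), which protects a single multiplication at order d using d(d+1)/2 fresh random elements and (d+1)^2 bilinear multiplications; packed multiplication batches $\ell$ such instances to share these costs. A minimal Python sketch of plain ISW over GF(2), not the paper's packed gadget:

    # ISW multiplication over GF(2): computes shares of a AND b from
    # d-th order Boolean sharings, using d(d+1)/2 fresh random bits and
    # (d+1)^2 bilinear multiplications -- the per-multiplication cost
    # that packed multiplication amortizes across a batch.
    import secrets

    def share(bit, d):
        sh = [secrets.randbits(1) for _ in range(d)]
        last = bit
        for s in sh:
            last ^= s
        return sh + [last]          # d+1 shares XOR to bit

    def unshare(sh):
        out = 0
        for s in sh:
            out ^= s
        return out

    def isw_and(a_sh, b_sh):
        n = len(a_sh)               # n = d + 1
        r = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                r[i][j] = secrets.randbits(1)
                r[j][i] = (r[i][j] ^ (a_sh[i] & b_sh[j])) ^ (a_sh[j] & b_sh[i])
        c_sh = []
        for i in range(n):
            ci = a_sh[i] & b_sh[i]
            for j in range(n):
                if j != i:
                    ci ^= r[i][j]
            c_sh.append(ci)
        return c_sh

    d = 3
    c_sh = isw_and(share(1, d), share(1, d))
    assert unshare(c_sh) == 1       # 1 AND 1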
2020
ASIACRYPT
Catalic: Delegated PSI Cardinality with Applications to Contact Tracing 📺
Private Set Intersection Cardinality (PSI-CA) allows two parties, each holding a set of items, to learn the size of the intersection of those sets without revealing any additional information. To the best of our knowledge, this work presents the first protocol that allows one of the parties to delegate PSI-CA computation to untrusted servers. At the heart of our delegated PSI-CA protocol is a new oblivious distributed key PRF (Odk-PRF) abstraction, which may be of independent interest. We explore in detail how to use our delegated PSI-CA protocol to perform privacy-preserving contact tracing. It has been estimated that a significant percentage of a given population would need to use a contact tracing app to stop a disease’s spread. Prior privacy-preserving contact tracing systems, however, impose heavy bandwidth or computational demands on client devices. These demands present an economic disincentive to participate for end users who may be billed per MB by their mobile data plan or for users who want to save battery life. We propose Catalic (ContAct TrAcing for LIghtweight Clients), a new contact tracing system that minimizes bandwidth cost and computation workload on client devices. By applying our new delegated PSI-CA protocol, Catalic shifts most of the client-side computation of contact tracing to untrusted servers, and potentially saves each user hundreds of megabytes of mobile data per day while preserving privacy.
2020
ASIACRYPT
Secret-Shared Shuffle 📺
Generating additive secret shares of a shuffled dataset - such that neither party knows the order in which it is permuted - is a fundamental building block in many protocols, such as secure collaborative filtering, oblivious sorting, and secure function evaluation on set intersection. Traditional approaches to this problem either involve expensive public-key-based crypto or use symmetric crypto on permutation networks. While public-key-based solutions are bandwidth efficient, they are computation-heavy. On the other hand, constructions based on permutation networks are communication-bound, especially when the dataset contains large elements, e.g., feature vectors in an ML context. We design a new 2-party protocol for this task of computing secret shares of shuffled data, which we refer to as secret-shared shuffle. Our protocol is secure against a static semi-honest adversary. At the heart of our approach is a new primitive we define (which we call ``Share Translation'') that generates two sets of pseudorandom values ``correlated via the permutation''. This allows us to reduce the problem of shuffling the dataset to the problem of shuffling pseudorandom values, which enables optimizations both in computation and communication. We then design a Share Translation protocol based on oblivious transfer and puncturable PRFs. Our final protocol for secret-shared shuffle uses lightweight operations like XOR and PRGs, and in particular does not use public-key operations besides the base OTs. As a result, our protocol is concretely more efficient than the existing solutions. In particular, we are two to three orders of magnitude faster than the public-key-based approach and one order of magnitude faster than the best known symmetric-key approach when the elements are moderately large.
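The Share Translation correlation can be illustrated with a hypothetical trusted dealer standing in for the OT- and puncturable-PRF-based generation described above: the data holder receives random mask vectors (a, b), the permuting party receives delta = pi(a) XOR b, and a single message of masked data then leaves the two parties holding XOR shares of the permuted dataset.

    # Toy instantiation of the Share Translation correlation with a
    # trusted dealer. P1 holds a secret permutation pi; P2 holds data x.
    import secrets

    n, elem_bytes = 5, 4

    def rand_vec():
        return [secrets.token_bytes(elem_bytes) for _ in range(n)]

    def xor(u, v):
        return bytes(a ^ b for a, b in zip(u, v))

    def permute(pi, v):
        return [v[pi[i]] for i in range(n)]

    pi = [2, 0, 4, 1, 3]                                  # P1's secret
    x = [i.to_bytes(elem_bytes, "big") for i in range(n)]  # P2's data

    # Dealer: P2 receives (a, b); P1 receives delta = pi(a) XOR b.
    a, b = rand_vec(), rand_vec()
    delta = [xor(pa, bb) for pa, bb in zip(permute(pi, a), b)]

    # Online phase: one message of masked data, no public-key operations.
    masked = [xor(xi, ai) for xi, ai in zip(x, a)]         # P2 -> P1
    share1 = [xor(mi, di) for mi, di in zip(permute(pi, masked), delta)]
    share2 = b                                             # P2's share

    # The XOR of the two share vectors is the permuted dataset pi(x).
    recon = [xor(s1, s2) for s1, s2 in zip(share1, share2)]
    assert recon == permute(pi, x)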
2020
ASIACRYPT
Multi-Client Oblivious RAM with Poly-Logarithmic Communication 📺
Oblivious RAM enables oblivious access to memory in the single-client setting, which may not be the best fit in the network setting. Multi-client oblivious RAM (MCORAM) considers a collaborative but untrusted environment, where a database owner selectively grants read access and write access to different entries of a confidential database to multiple clients. Their access pattern must remain oblivious not only to the server but also to fellow clients. This upgrade rules out many techniques for constructing ORAM, forcing us to pursue new techniques. MCORAM not only provides an alternative solution to private anonymous data access (Eurocrypt 2019) but also serves as a promising building block for equipping oblivious file systems with access control and extending other advanced cryptosystems to the multi-client setting. Despite being a powerful object, the current state-of-the-art is unsatisfactory: The only existing scheme requires $O(\sqrt n)$ communication and client computation for a database of size $n$. Whether it is possible to reduce these complexities to $\mathsf{polylog}(n)$, thereby matching the upper bounds for ORAM, is an open problem, i.e., can we enjoy access control and client-obliviousness under the same bounds? Our first result answers the above question affirmatively by giving a construction from fully homomorphic encryption (FHE). Our main technical innovation is a new technique for cross-key trial evaluation of ciphertexts. We also consider the same question in the setting with $N$ non-colluding servers, out of which at most $t$ of them can be corrupt. We build multi-server MCORAM from distributed point functions (DPF), and propose new constructions of DPF via a virtualization technique with bootstrapping, assuming the existence of homomorphic secret sharing and pseudorandom generators in NC0, which are not known to imply FHE.
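The multi-server reading pattern can be previewed with a deliberately naive two-server distributed point function: the client shares the point function e_i as two vectors that look individually random but XOR to 1 exactly at position i. (Real DPFs achieve short keys; this toy sharing is linear in the database size.)

    # Toy 2-server read via a naive distributed point function.
    import secrets

    db = [secrets.token_bytes(4) for _ in range(8)]   # replicated database
    i = 5                                             # index to read

    k0 = [secrets.randbits(1) for _ in range(len(db))]
    k1 = [b ^ (1 if j == i else 0) for j, b in enumerate(k0)]

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def server_answer(key):
        acc = b"\x00" * 4
        for sel, rec in zip(key, db):
            if sel:
                acc = xor_bytes(acc, rec)
        return acc

    # Each key alone is uniformly random, so neither server learns i.
    result = xor_bytes(server_answer(k0), server_answer(k1))
    assert result == db[i]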
2020
ASIACRYPT
SQISign: Compact Post-Quantum signatures from Quaternions and Isogenies 📺
Best Paper Award
We introduce a new signature scheme, \emph{SQISign} (for \emph{Short Quaternion and Isogeny Signature}), from isogeny graphs of supersingular elliptic curves. The signature scheme is derived from a new one-round, high-soundness, interactive identification protocol. Targeting the post-quantum NIST-1 level of security, our implementation results in signatures of $204$ bytes, secret keys of $16$ bytes and public keys of $64$ bytes. In particular, the signature and public key sizes combined are an order of magnitude smaller than all other post-quantum signature schemes. On a modern workstation, our implementation in C takes 0.6s for key generation, 2.5s for signing, and 50ms for verification. While the soundness of the identification protocol follows from classical assumptions, the zero-knowledge property relies on the second main contribution of this paper. We introduce a new algorithm to find an isogeny path connecting two given supersingular elliptic curves of known endomorphism rings. A previous algorithm to solve this problem, due to Kohel, Lauter, Petit and Tignol, systematically reveals paths from the input curves to a `special' curve. This leakage would break the zero-knowledge property of the protocol. Our algorithm does not directly reveal such a path, and subject to a new computational assumption, we prove that the resulting identification protocol is zero-knowledge.
2020
ASIACRYPT
Post-Quantum Verification of Fujisaki-Okamoto 📺
We present a computer-verified formalization of the post-quantum security proof of the Fujisaki-Okamoto transform (as analyzed by Hövelmanns, Kiltz, Schäge, and Unruh, PKC 2020). The formalization is done in quantum relational Hoare logic and checked in the qrhl-tool (Unruh, POPL 2019).
2020
ASIACRYPT
Circular Security Is Complete for KDM Security 📺
Circular security is the most elementary form of key-dependent message (KDM) security, which allows us to securely encrypt only a copy of secret key bits. In this work, we show that circular security is complete for KDM security in the sense that an encryption scheme satisfying this security notion can be transformed into one satisfying KDM security with respect to all functions computable by a-priori bounded-size circuits (bounded-KDM security). This result holds in the presence of any number of keys and in any combination of the secret-key/public-key and CPA/CCA settings. Such a completeness result was previously shown by Applebaum (EUROCRYPT 2011) for KDM security with respect to projection functions (projection-KDM security), which allows us to securely encrypt both a copy and a negation of secret key bits. Besides amplifying the strength of KDM security, our transformation can in fact start from an encryption scheme satisfying circular security against CPA attacks and results in one satisfying bounded-KDM security against CCA attacks. This result improves the recent result by Kitagawa and Matsuda (TCC 2019) showing a CPA-to-CCA transformation for KDM-secure public-key encryption schemes.
2020
ASIACRYPT
Towards Efficiency-Preserving Round Compression in MPC: Do fewer rounds mean more computation? 📺
Reducing the rounds of interaction in secure multiparty computation (MPC) protocols has been the topic of study of many works. One popular approach to reduce rounds is to construct {\em round compression compilers}. A round compression compiler is one that takes a highly interactive protocol and transforms it into a protocol with far fewer rounds. The design of round compression compilers has traditionally focused on preserving the security properties of the underlying protocol and in particular, not much attention has been given towards preserving their computational and communication efficiency. Indeed, the recent round compression compilers that yield round-optimal MPC protocols incur large computational and communication overhead. In this work, we initiate the study of {\em efficiency-preserving} round compression compilers, i.e., compilers that translate the efficiency benefits of the underlying highly interactive protocols to the fewer-round setting. Focusing on the honest majority setting (with near-optimal corruption threshold $\frac{1}{2} - \varepsilon$, for any $\varepsilon > 0$), we devise a new compiler that yields two-round (i.e., round-optimal) semi-honest MPC with similar communication efficiency as the underlying (arbitrary round) protocol. By applying our compiler on the most efficient known MPC protocols, we obtain a two-round semi-honest protocol based on one-way functions, with total communication (and per-party computation) cost $\widetilde{O}(s+n^4)$ -- a significant improvement over prior two-round protocols with cost $\widetilde{O}(n^\tau s+n^{\tau+1}d)$, where $\tau\geq 2$, $s$ is the size of the circuit computing the function and $d$ the corresponding depth. Our result can also be extended to handle malicious adversaries, either using stronger assumptions in the public key infrastructure (PKI) model, or in the plain model using an extra round. An artifact of our approach is that the resultant protocol is ``unbalanced'' in the amount of computation performed by different parties. We give evidence that this is {\em necessary} in our setting. Our impossibility result makes novel use of the ``MPC-in-the-head'' paradigm which has typically been used to demonstrate feasibility results.
2020
ASIACRYPT
Quantum Circuit Implementations of AES with Fewer Qubits 📺
We propose some quantum circuit implementations of AES with the following improvements. Firstly, we propose some quantum circuits of the AES S-box and S-box$^{-1}$, which require fewer qubits than prior work. Secondly, we reduce the number of qubits in the zig-zag method by introducing the S-box$^{-1}$ operation in our quantum circuits of AES. Thirdly, we present a method to reduce the number of qubits in the key schedule of AES. While the previous quantum circuits of AES-128, AES-192, and AES-256 need at least 864, 896, and 1232 qubits respectively, our quantum circuit implementations of AES-128, AES-192, and AES-256 only require 512, 640, and 768 qubits respectively, reducing the number of qubits by more than 30\%.
2020
ASIACRYPT
ALBATROSS: publicly AttestabLe BATched Randomness based On Secret Sharing 📺
In this paper we present ALBATROSS, a family of multiparty randomness generation protocols with guaranteed output delivery and public verification that allow trading off corruption tolerance for a much improved amortized computational complexity. Our basic stand-alone protocol is based on publicly verifiable secret sharing (PVSS) and is secure in the random oracle model under the decisional Diffie-Hellman (DDH) hardness assumption. We also address the important issue of constructing Universally Composable randomness beacons, showing two UC versions of ALBATROSS: one based on simple UC NIZKs and another one based on novel efficient ``designated verifier'' homomorphic commitments. Interestingly, this latter version can be instantiated from a global random oracle under the weaker Computational Diffie-Hellman (CDH) assumption. An execution of ALBATROSS with $n$ parties, out of which up to $t=(1/2-\epsilon)\cdot n$ are corrupt for a constant $\epsilon>0$, generates $\Theta(n^2)$ uniformly random values, requiring in the worst case an amortized cost per party of $\Theta(\log n)$ exponentiations per random value. We significantly improve on the SCRAPE protocol (Cascudo and David, ACNS 17), which required $\Theta(n^2)$ exponentiations per party to generate one uniformly random value. This is mainly achieved via two techniques: first, the use of packed Shamir secret sharing for the PVSS; second, the use of linear $t$-resilient functions (computed via a Fast Fourier Transform-based algorithm) to improve the randomness extraction.
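The first technique is easy to sketch in isolation: packed Shamir secret sharing hides l secrets on a single polynomial of degree t + l - 1, evaluated at l fixed points, so each sharing transports l random values rather than one. A toy Python version with hypothetical parameters:

    # Packed Shamir sharing: one degree-(t+l-1) polynomial hides l
    # secrets at points -1..-l; the n shares are its values at 1..n.
    import random

    P = 2**31 - 1                 # prime field F_P
    n, t, l = 10, 3, 4            # n shares, privacy t, l packed secrets

    def interpolate(points, x):
        # Lagrange interpolation at x from (xi, yi) pairs over F_P.
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * ((x - xj) % P) % P
                    den = den * ((xi - xj) % P) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    def pack_share(secrets_vec):
        # Fix the polynomial by its l secret values and t random values,
        # then evaluate it at the n share points.
        pts = [(-(i + 1) % P, s) for i, s in enumerate(secrets_vec)]
        pts += [(n + 1 + i, random.randrange(P)) for i in range(t)]
        return [interpolate(pts, x) for x in range(1, n + 1)]

    secrets_vec = [11, 22, 33, 44]
    shares = pack_share(secrets_vec)
    # Any t + l = 7 shares determine the polynomial, hence all l secrets;
    # any t shares reveal nothing about them.
    subset = list(zip(range(1, n + 1), shares))[:t + l]
    print([interpolate(subset, -(i + 1) % P) for i in range(l)])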
2020
ASIACRYPT
Oblivious Pseudorandom Functions from Isogenies 📺
An oblivious PRF, or OPRF, is a protocol between a client and a server, where the server has a key $k$ for a secure pseudorandom function $F$, and the client has an input $x$ for the function. At the end of the protocol the client learns $F(k,x)$, and nothing else, and the server learns nothing. An OPRF is verifiable if the client is convinced that the server has evaluated the PRF correctly with respect to a prior commitment to $k$. OPRFs and verifiable OPRFs have numerous applications, such as private-set-intersection protocols, password-based key-exchange protocols, and defense against denial-of-service attacks. Existing OPRF constructions use RSA-, Diffie-Hellman-, and lattice-type assumptions. The first two are not post-quantum secure. In this paper we construct OPRFs and verifiable OPRFs from isogenies. Our main construction uses isogenies of supersingular elliptic curves over $\mathbb{F}_{p^2}$ and tries to adapt the Diffie-Hellman OPRF to that setting. However, a recent attack on supersingular-isogeny systems due to Galbraith et al. [ASIACRYPT 2016] makes this approach difficult to secure. To overcome this attack, and to validate the server's response, we develop two new zero-knowledge protocols that convince each party that its peer has sent valid messages. With these protocols in place, we obtain an OPRF in the SIDH setting and prove its security in the UC framework. Our second construction is an adaptation of the Naor-Reingold PRF to commutative group actions. Combining it with recent constructions of oblivious transfer from isogenies, we obtain an OPRF in the CSIDH setting.
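The Diffie-Hellman OPRF that the main construction adapts (often called 2HashDH) is short enough to sketch: F(k, x) = H2(x, H1(x)^k), with the client blinding H1(x) by a random exponent, the server exponentiating by k, and the client unblinding. Toy, insecure group parameters below; in the paper the exponentiations become isogeny walks and the consistency checks become zero-knowledge protocols.

    # Sketch of the Diffie-Hellman OPRF ("2HashDH") over a toy Schnorr
    # group. Parameters are far too small to be secure.
    import hashlib, secrets

    p, q, g = 1019, 509, 4            # safe prime p = 2q + 1, g of order q

    def h1(x):                        # hash into the order-q subgroup
        e = int.from_bytes(hashlib.sha256(b"H1" + x).digest(), "big") % q
        return pow(g, e, p)

    def h2(x, elt):                   # final output hash
        return hashlib.sha256(b"H2" + x + elt.to_bytes(2, "big")).hexdigest()

    def oprf(k, x):                   # the function F(k, x) being computed
        return h2(x, pow(h1(x), k, p))

    k = secrets.randbelow(q - 1) + 1  # server's PRF key
    x = b"client input"

    r = secrets.randbelow(q - 1) + 1  # client's blinding exponent
    blinded = pow(h1(x), r, p)        # client -> server: H1(x)^r
    answer = pow(blinded, k, p)       # server -> client: H1(x)^{rk}
    unblinded = pow(answer, pow(r, -1, q), p)   # client: H1(x)^k
    assert h2(x, unblinded) == oprf(k, x)       # client learns F(k, x) only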
2020
ASIACRYPT
Non-Interactive Composition of Sigma-Protocols via Share-then-Hash 📺
Proofs of partial knowledge demonstrate the possession of certain subsets of witnesses for a given collection of statements $x_1,\dots,x_n$. Cramer, Damgård, and Schoenmakers (CDS) built proofs of partial knowledge, given "atomic" protocols for individual statements $x_i$, by having the prover randomly secret share the verifier's challenge and using the shares as challenges for the atomic protocols. This simple and highly-influential transformation has been used in numerous applications, ranging from anonymous credentials to ring signatures. We consider what happens if, instead of using the shares directly as challenges, the prover first hashes them. We show that this elementary enhancement can result in significant benefits: - the proof contains a {\em single} atomic transcript per statement $x_i$, - it suffices that the atomic protocols are $k$-special sound for $k \geq 2$, - when compiled using the Fiat-Shamir heuristic, the protocol retains its soundness in the {\em non-programmable} random oracle model. None of the above features is satisfied by the CDS transformation.
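To recall the transformation being modified, here is the CDS OR-composition for two Schnorr statements: the challenge is additively shared as c = c0 + c1 mod q and the branch without a witness is simulated. Share-then-hash would use hashes of the shares as the atomic challenges instead of the shares themselves. Toy parameters throughout.

    # CDS OR-proof for statements y_i = g^{w_i}: the prover knows w for
    # branch 0 only, simulates branch 1, and shares the challenge.
    import hashlib, secrets

    p, q, g = 1019, 509, 4            # toy Schnorr group (insecure sizes)

    w = 123                           # witness for branch 0
    y = [pow(g, w, p), pow(g, 777, p)]  # branch 1's witness is not used

    def H(*vals):
        data = b"|".join(str(v).encode() for v in vals)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    # Prover: simulate branch 1, run branch 0 honestly.
    c1, z1 = secrets.randbelow(q), secrets.randbelow(q)
    A1 = pow(g, z1, p) * pow(y[1], q - c1, p) % p   # g^z1 * y1^(-c1)
    r = secrets.randbelow(q)
    A0 = pow(g, r, p)

    c = H(y[0], y[1], A0, A1)         # Fiat-Shamir challenge
    c0 = (c - c1) % q                 # the other share of the challenge
    z0 = (r + c0 * w) % q

    # Verifier: shares must recombine to c and both transcripts verify.
    assert (c0 + c1) % q == H(y[0], y[1], A0, A1)
    assert pow(g, z0, p) == A0 * pow(y[0], c0, p) % p
    assert pow(g, z1, p) == A1 * pow(y[1], c1, p) % p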
2020
CRYPTO
Compressed Sigma-Protocol Theory and Practical Application to Plug & Play Secure Algorithmics 📺
Sigma-Protocols provide a well-understood basis for secure algorithmics. Recently, Bulletproofs (Bootle et al., EUROCRYPT 2016, and Bünz et al., S&P 2018) have been proposed as a drop-in replacement for zero-knowledge (ZK) proofs for arithmetic circuits, achieving logarithmic communication instead of linear. Their pivot is an ingenious, logarithmic-size proof of knowledge BP for certain quadratic relations. However, reducing ZK for general relations to it forces a somewhat cumbersome ``reinvention'' of cryptographic protocol theory. We take a rather different viewpoint and reconcile Bulletproofs with Sigma-Protocol Theory such that (a) simpler circuit ZK is developed within established theory, while (b) achieving exactly the same logarithmic communication. The natural key here is linearization. First, we repurpose BPs as a blackbox compression mechanism for standard Sigma-Protocols handling ZK proofs of general linear relations (on compactly committed secret vectors); our pivot. Second, we reduce the case of general nonlinear relations to blackbox applications of our pivot via a novel variation on arithmetic secret sharing based techniques for Sigma-Protocols (Cramer et al., ICITS 2012). Orthogonally, we enhance versatility by enabling scenarios not previously addressed, e.g., when a secret input is dispersed across several commitments. Standard implementation platforms leading to logarithmic communication follow from a Discrete-Log assumption or a generalized Strong-RSA assumption. Also, under a Knowledge-of-Exponent Assumption (KEA) communication drops to constant, as in ZK-SNARKs. All in all, our theory should more generally be useful for modular (``plug & play'') design of practical cryptographic protocols; this is further evidenced by our separate work (2020) on proofs of partial knowledge.
2020
CRYPTO
New Techniques for Traitor Tracing: Size N^{1/3} and More from Pairings 📺
The best existing traitor tracing scheme from pairings achieves $O(\sqrt{N})$-sized parameters, a bound that has stood since 2006. This intuitively seems consistent with the fact that pairings allow for degree-2 computations, yielding a quadratic compression. In this work, we show that this intuition is false by building a traitor tracing scheme from pairings with $O(\sqrt[3]{N})$-sized parameters. We obtain our scheme by developing a number of new traitor tracing techniques offering various trade-offs that were not possible before, giving the first significant parameter improvements in pairings-based traitor tracing in over a decade.
2020
CRYPTO
New Techniques for Zero-Knowledge: Leveraging Inefficient Provers to Reduce Assumptions, Interaction, and Trust 📺
We present a transformation from NIZK with inefficient provers in the uniform random string (URS) model to ZAPs (two message witness indistinguishable proofs) with inefficient provers. While such a transformation was known for the case where the prover is efficient, the security proof breaks down if the prover is inefficient. Our transformation is obtained via new applications of Nisan-Wigderson designs, a combinatorial object originally introduced in the derandomization literature. We observe that our transformation is applicable both in the setting of super-polynomial provers/poly-time adversaries, as well as a new fine-grained setting, where the prover is polynomial time and the verifier/simulator/zero knowledge distinguisher are in a lower complexity class, such as $\mathsf{NC}^1$. We also present $\mathsf{NC}^1$-fine-grained NIZK in the URS model for all of NP from the worst-case assumption $\oplus L/\mathsf{poly} \not\subseteq \mathsf{NC}^1$. Our techniques yield the following applications: --ZAPs for $\mathsf{AM}$ from Minicrypt assumptions (with super-polynomial time provers), --$\mathsf{NC}^1$-fine-grained ZAPs for $\mathsf{NP}$ from worst-case assumptions, --Protocols achieving an ``offline'' notion of NIZK (oNIZK) in the standard (no-CRS) model with uniform soundness in both the super-polynomial setting (from Minicrypt assumptions) and the $\mathsf{NC}^1$-fine-grained setting (from worst-case assumptions). The oNIZK notion is sufficient for use in indistinguishability-based proofs.
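The combinatorial object doing the work can be built directly: a Nisan-Wigderson design is a family of size-q subsets of a small ground set with pairwise intersections below d, and the standard construction takes, for each polynomial of degree < d over F_q, the points on its graph. A sketch with hypothetical parameters:

    # Nisan-Wigderson design: sets S_f = {(a, f(a)) : a in F_q} over the
    # ground set F_q x F_q, one per polynomial f of degree < d. Distinct
    # polynomials of degree < d agree on < d points, so any two sets
    # intersect in fewer than d elements.
    from itertools import product

    q, d = 5, 2                       # field size (prime), degree bound

    def poly_eval(coeffs, a):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * a + c) % q
        return acc

    designs = []
    for coeffs in product(range(q), repeat=d):     # q^d sets of size q
        designs.append({(a, poly_eval(coeffs, a)) for a in range(q)})

    # Check the design property: |S_i| = q and |S_i & S_j| < d for i != j.
    assert all(len(s) == q for s in designs)
    assert all(len(si & sj) < d
               for i, si in enumerate(designs)
               for sj in designs[i + 1:])
    print(f"{len(designs)} sets of size {q}, pairwise intersections < {d}")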
2020
CRYPTO
On Succinct Arguments and Witness Encryption from Groups 📺
Succinct non-interactive arguments (SNARGs) enable proofs of NP statements with very low communication. Recently, there has been significant work in both theory and practice on constructing SNARGs with very short proofs. Currently, the state-of-the-art in succinctness is due to Groth (Eurocrypt 2016) who constructed a SNARG from bilinear maps where the proof consists of just 3 group elements. In this work, we first construct a concretely-efficient designated-verifier (preprocessing) SNARG with inverse polynomial soundness, where the proof consists of just 2 group elements in a standard (generic) group. This leads to a 50% reduction in concrete proof size compared to Groth's construction. We follow the approach of Bitansky et al. (TCC 2013) who describe a compiler from linear PCPs to SNARGs in the preprocessing model. Our improvement is based on a new linear PCP packing technique that allows us to construct 1-query linear PCPs which can then be compiled into a SNARG (using ElGamal encryption over a generic group). An appealing feature of our new SNARG is that the verifier can precompute a statement-independent lookup table in an offline phase; verifying proofs then only requires 2 exponentiations and a single table lookup. This makes our new designated-verifier SNARG appealing in settings that demand fast verification and minimal communication. We then turn to the question of constructing arguments where the proof consists of a single group element. Here, we first show that any (possibly interactive) argument for a language L where the verification algorithm is "generic" (i.e., only performs generic group operations) and the proof consists of a single group element, implies a witness encryption scheme for L. We then show that under a yet-unproven, but highly plausible, hypothesis on the hardness of approximating the minimal distance of linear codes, we can construct a 2-message laconic argument for NP where the proof consists of a single group element. Under the same hypothesis, we obtain a witness encryption scheme for NP in the generic group model. Along the way, we show that under a conceptually-similar but proven hardness of approximation result, there is a 2-message laconic argument for NP with negligible soundness error where the prover's message consists of just 2 group elements. In both settings, we obtain laconic arguments (and linear PCPs) with linear decision procedures. Our constructions circumvent a previous lower bound by Groth on such argument systems with linear decision procedures by relying on imperfect completeness. Namely, our constructions have vanishing but not negligible completeness error, while the lower bound of Groth implicitly assumes negligible completeness error of the underlying argument. Our techniques thus highlight new avenues for designing linear PCPs, succinct arguments, and witness encryption schemes.
2020
CRYPTO
Cryptanalytic Extraction of Neural Network Models 📺
We argue that the machine learning problem of model extraction is actually a cryptanalytic problem in disguise, and should be studied as such. Given oracle access to a neural network, we introduce a differential attack that can efficiently steal the parameters of the remote model up to floating point precision. Our attack relies on the fact that ReLU neural networks are piecewise linear functions, and thus queries at the critical points reveal information about the model parameters. We evaluate our attack on multiple neural network models and extract models that are 2^20 times more precise and require 100x fewer queries than prior work. For example, we extract a 100,000 parameter neural network trained on the MNIST digit recognition task with 2^21.5 queries in under an hour, such that the extracted model agrees with the oracle on all inputs up to a worst-case error of 2^-25, or a model with 4,000 parameters in 2^18.5 queries with worst-case error of 2^-40.4. Code is available at https://github.com/google-research/cryptanalytic-model-extraction.
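The attack's starting observation is easy to reproduce on a toy network: restricted to any line in input space, a ReLU network is piecewise linear, and the slope jump at a critical point of neuron i equals a_i * |<w_i, u>| for the line direction u, so black-box queries around kinks leak parameter products. A sketch of locating one critical point (not the paper's full extraction pipeline):

    # Locate a ReLU critical point of a toy network along a random line
    # using only oracle queries, and read off the slope jump it causes.
    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(size=(4, 3))       # hidden weights (secret)
    b = rng.normal(size=4)            # hidden biases  (secret)
    a = rng.normal(size=4)            # output weights (secret)

    def oracle(x):                    # black-box query access only
        return a @ np.maximum(W @ x + b, 0.0)

    x0, u = rng.normal(size=3), rng.normal(size=3)
    f = lambda t: oracle(x0 + t * u)

    def slope(t, eps=1e-9):           # finite-difference slope on the line
        return (f(t + eps) - f(t)) / eps

    lo, hi = -5.0, 5.0
    assert abs(slope(lo) - slope(hi)) > 1e-6   # a kink lies inside
    while hi - lo > 1e-6:                      # bisect on slope change
        mid = (lo + hi) / 2
        if abs(slope(lo) - slope(mid)) > 1e-6:
            hi = mid
        else:
            lo = mid

    jump = slope(hi + 1e-4) - slope(lo - 1e-4)
    # The jump should match one entry of a_i * |<w_i, u>|.
    print("slope jump at recovered critical point:", jump)
    print("candidate values a_i * |<w_i, u>|:", a * np.abs(W @ u))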
2020
CRYPTO
Dynamic Decentralized Functional Encryption 📺
We introduce Dynamic Decentralized Functional Encryption (DDFE), a generalization of Functional Encryption which allows multiple users to join the system dynamically, without relying on a trusted third party or on expensive and interactive Multi-Party Computation protocols. This notion subsumes existing multi-user extensions of Functional Encryption, such as Multi-Input, Multi-Client, and Ad Hoc Multi-Input Functional Encryption. We define and construct schemes for various functionalities which serve as building blocks for later primitives and may be useful in their own right, such as a scheme for dynamically computing sums in any Abelian group. These constructions build upon simple primitives in a modular way, and have instantiations from well-studied assumptions, such as DDH or LWE. Our constructions culminate in an Inner-Product scheme for computing weighted sums on aggregated encrypted data, from standard assumptions in prime-order groups in the Random Oracle Model.
2020
CRYPTO
Slide Reduction, Revisited—Filling the Gaps in SVP Approximation 📺
We show how to generalize Gama and Nguyen's slide reduction algorithm [STOC '08] for solving the approximate Shortest Vector Problem over lattices (SVP) to allow for arbitrary block sizes, rather than just block sizes that divide the rank n of the lattice. This leads to significantly better running times for most approximation factors. We accomplish this by combining slide reduction with the DBKZ algorithm of Micciancio and Walter [Eurocrypt '16]. We also show a different algorithm that works when the block size is quite large---at least half the total rank. This yields the first non-trivial algorithm for sublinear approximation factors. Together with some additional optimizations, these results yield significantly faster provably correct algorithms for \delta-approximate SVP for all approximation factors n^{1/2+\epsilon} \leq \delta \leq n^{O(1)}, which is the regime most relevant for cryptography. For the specific values of \delta = n^{1-\epsilon} and \delta = n^{2-\epsilon}, we improve the exponent in the running time by a factor of 2 and a factor of 1.5 respectively.
2020
CRYPTO
Practical Product Proofs for Lattice Commitments 📺
We construct a practical lattice-based zero-knowledge argument for proving multiplicative relations between committed values. The underlying commitment scheme that we use is the currently most efficient one of Baum et al. (SCN 2018), and the size of our multiplicative proof is only slightly larger than that of the one for just proving knowledge of the committed values. We additionally improve on the results of Lyubashevsky and Seiler (Eurocrypt 2018) to show that the above-mentioned techniques can work over rings $\mathbb{Z}_q[X]/(X^d+1)$ where $X^d+1$ splits into low-degree factors, which is a property necessary for many applications. In particular, we use Fourier analysis to show that the NTT coefficients of random small-norm challenges are not concentrated on any particular value.
2020
CRYPTO
Anonymous Tokens with Private Metadata Bit 📺
We present a cryptographic construction for anonymous tokens with a private metadata bit, called PMBTokens. This primitive enables an issuer to provide a user with a lightweight, single-use anonymous trust token that can embed a single private bit, which is accessible only to the party who holds the secret authority key and is private with respect to anyone else. Our construction generalizes and extends the functionality of Privacy Pass (PETS '18) with this private metadata bit capability. It provides unforgeability, unlinkability, and privacy of the metadata bit, based on the DDH and CTDH assumptions in the random oracle model. Both Privacy Pass and PMBTokens rely on non-interactive zero-knowledge proofs (NIZKs). We present new techniques to remove the need for NIZKs, while still achieving unlinkability. We implement our constructions and we report their efficiency costs.
2020
CRYPTO
Round-optimal Black-box Commit-and-prove with Succinct Communication 📺
We give a four-round black-box construction of a commit-and-prove protocol with succinct communication. Our construction is witness indistinguishable (WI) and has constant soundness error, and it can be upgraded into one that is zero-knowledge (ZK) and has negligible soundness error by relying on a round-preserving transformation of Khurana et al. (TCC 2018). Our construction is obtained by combining the MPC-in-the-head technique of Ishai et al. (SICOMP 2009) with the two-round succinct argument of Kalai et al. (STOC 2014), and the main technical novelty lies in the analysis of the soundness---we show that, although the succinct argument of Kalai et al. does not necessarily provide soundness for NP statements, it can be used in the MPC-in-the-head technique for proving the consistency of committed MPC views. Our construction is based on sub-exponentially hard collision-resistant hash functions, two-round PIRs, and two-round OTs.
2020
CRYPTO
Black-Box Transformations from Passive to Covert Security with Public Verifiability 📺
In the context of secure computation, protocols with security against covert adversaries ensure that any misbehavior by malicious parties will be detected by the honest parties with some constant probability. As such, these protocols provide better security guarantees than passively secure protocols and, moreover, are easier to construct than protocols with full security against active adversaries. Protocols that, upon detecting a cheating attempt, allow the honest parties to compute a certificate that enables third parties to verify whether an accused party misbehaved or not are called publicly verifiable. In this work, we present the first generic compilers for constructing two-party protocols with covert security and public verifiability from protocols with passive security. We present two separate compilers, which are both fully black-box in the underlying protocols they use. Both of them only incur a constant multiplicative factor in terms of bandwidth overhead and a constant additive factor in terms of round complexity on top of the passively secure protocols they use. The first compiler applies to all two-party protocols that have no private inputs. This class of protocols covers the important class of preprocessing protocols that are used to set up correlated randomness among parties. We use our compiler to obtain the first secret-sharing based two-party protocol with covert security and public verifiability. Notably, the produced protocol achieves public verifiability essentially for free when compared with the best known previous solutions based on secret-sharing that did not provide public verifiability. Our second compiler constructs protocols with covert security and public verifiability for arbitrary functionalities from passively secure protocols. It uses our first compiler to perform a setup phase, which is independent of the parties' inputs as well as the protocol they would like to execute. Finally, we show how to extend our techniques to obtain multiparty computation protocols with covert security and public verifiability against arbitrary constant fractions of corruptions.
2020
CRYPTO
Black-box use of One-way Functions is Useless for Optimal Fair Coin-Tossing 📺
A two-party fair coin-tossing protocol guarantees output delivery to the honest party even when the other party aborts during the protocol execution. Cleve (STOC 1986) demonstrated that a computationally bounded fail-stop adversary could alter the output distribution of the honest party by (roughly) $1/r$ (in the statistical distance) in an $r$-message coin-tossing protocol. An optimal fair coin-tossing protocol ensures that no adversary can alter the output distribution beyond $1/r$. In a seminal result, Moran, Naor, and Segev (TCC 2009) constructed the first optimal fair coin-tossing protocol using (unfair) oblivious transfer protocols. Whether the existence of oblivious transfer protocols is a necessary hardness of computation assumption for optimal fair coin-tossing remains among the most fundamental open problems in theoretical cryptography. The results of Impagliazzo and Luby (FOCS 1989) and Cleve and Impagliazzo (1993) prove that optimal fair coin-tossing implies the necessity of one-way functions' existence; a significantly weaker hardness of computation assumption compared to the existence of secure oblivious transfer protocols. However, the sufficiency of the existence of one-way functions is not known. Towards this research endeavor, our work proves a black-box separation of optimal fair coin-tossing from the existence of one-way functions. That is, the black-box use of one-way functions cannot enable optimal fair coin-tossing. Following the standard Impagliazzo and Rudich (STOC 1989) approach of proving black-box separations, our work considers any $r$-message fair coin-tossing protocol in the random oracle model where the parties have unbounded computational power. We demonstrate a fail-stop attack strategy for one of the parties to alter the honest party's output distribution by $1/\sqrt r$ by making polynomially-many additional queries to the random oracle. As a consequence, our result proves that the $r$-message coin-tossing protocol of Blum (COMPCON 1982) and Cleve (STOC 1986), which uses one-way functions in a black-box manner, is the best possible protocol because an adversary cannot change the honest party's output distribution by more than $1/\sqrt r$. Several previous works, for example, Dachman-Soled, Lindell, Mahmoody, and Malkin (TCC 2011), Haitner, Omri, and Zarosim (TCC 2013), and Dachman-Soled, Mahmoody, and Malkin (TCC 2014), made partial progress on proving this black-box separation assuming some restrictions on the coin-tossing protocol. Our work diverges significantly from these previous approaches to prove this black-box separation in its full generality. The starting point is the recently introduced potential-based inductive proof techniques for demonstrating large gaps in martingales in the information-theoretic plain model. Our technical contribution lies in identifying a global invariant of communication protocols in the random oracle model that enables the extension of this technique to the random oracle model.
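Blum's protocol, the baseline this result shows to be optimal, is a one-iteration commit-then-reveal coin toss; the sketch below also simulates the fail-stop attack in which the committer aborts whenever she dislikes the outcome, forcing the honest party's default output. Hash-based commitments stand in for a generic one-way-function-based commitment.

    # Blum's coin toss (commit, respond, open) and its fail-stop bias.
    import hashlib, secrets

    def commit(bit):
        r = secrets.token_bytes(16)
        return hashlib.sha256(r + bytes([bit])).hexdigest(), r

    def open_ok(com, r, bit):
        return hashlib.sha256(r + bytes([bit])).hexdigest() == com

    def blum_toss(alice_aborts_on=None, default=0):
        a = secrets.randbits(1)
        com, r = commit(a)                 # Alice -> Bob
        b = secrets.randbits(1)            # Bob -> Alice
        out = a ^ b
        if alice_aborts_on is not None and out == alice_aborts_on:
            return default                 # Bob must output a default bit
        assert open_ok(com, r, a)          # otherwise Alice opens honestly
        return out

    honest = sum(blum_toss() for _ in range(10000)) / 10000
    cheat = sum(blum_toss(alice_aborts_on=1) for _ in range(10000)) / 10000
    print(f"honest Pr[out=1] ~ {honest:.2f}, fail-stop Alice ~ {cheat:.2f}")

Running r such iterations and taking the majority limits the damage of any single abort to roughly 1/\sqrt r, matching the bound discussed above.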
2020
CRYPTO
Order-Fairness for Byzantine Consensus 📺
Decades of research in both cryptography and distributed systems have extensively studied the problem of state machine replication, also known as Byzantine consensus. A consensus protocol must usually satisfy two properties: {\em consistency} and {\em liveness}. These properties ensure that honest participating nodes agree on the same log and dictate when fresh transactions get added. They fail, however, to protect against adversarial manipulation of the actual {\em ordering} of transactions in the log. Indeed, in leader-based protocols (almost all protocols used today), malicious leaders can directly choose the final transaction ordering. To rectify this problem, we propose a third consensus property: {\em transaction order-fairness}. We initiate the first formal investigation of order-fairness and explain its fundamental importance. We also provide several natural definitions for order-fairness and analyze the assumptions necessary to realize them. Finally, we propose a new class of consensus protocols called Aequitas. Aequitas protocols are the first to achieve order-fairness in addition to consistency and liveness. They can be realized in a black-box way using existing broadcast and agreement primitives (or indeed using any consensus protocol), and work in both synchronous and asynchronous network models.
2020
CRYPTO
Universally Composable Relaxed Password Authenticated Key Exchange 📺
Protocols for password authenticated key exchange (PAKE) allow two parties who share only a weak password to agree on a cryptographic key. We revisit the notion of PAKE in the universal composability (UC) framework, and propose a relaxation of the PAKE functionality of Canetti et al. that we call lazy-extraction PAKE (lePAKE). Our relaxation allows the ideal-world adversary to postpone its password guess until after a session is complete. We argue that this relaxed notion still provides meaningful security in the password-only setting. As our main result, we show that several PAKE protocols that were previously only proven secure with respect to a ``game-based'' definition of security can be shown to UC-realize the lePAKE functionality in the random-oracle model. These include SPEKE, SPAKE2, and TBPEKE, the most efficient PAKE schemes currently known.
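Among the schemes listed, SPAKE2 is the simplest to sketch: each side masks its Diffie-Hellman message with a password-dependent power of a fixed public element (M or N), and the derived keys agree exactly when the passwords match. Toy, insecure parameters below; the real protocol hashes the full transcript into the session key and uses elements M, N with unknown discrete logarithms.

    # Sketch of SPAKE2 over a toy Schnorr group: X = g^x * M^pw,
    # Y = g^y * N^pw; stripping the peer's mask yields g^{xy}.
    import hashlib, secrets

    p, q, g = 1019, 509, 4              # toy group (insecure sizes)
    M, N = pow(g, 3, p), pow(g, 5, p)   # fixed elements; in practice their
                                        # discrete logs must be unknown

    def to_exp(pw):                     # map password to an exponent mod q
        return int.from_bytes(hashlib.sha256(pw).digest(), "big") % q

    def spake2_key(my_secret, peer_msg, peer_mask_base, pw):
        e = to_exp(pw)
        unmasked = peer_msg * pow(peer_mask_base, (q - e) % q, p) % p
        return pow(unmasked, my_secret, p)   # g^{xy} if passwords match

    pw = b"correct horse"
    x, y = secrets.randbelow(q), secrets.randbelow(q)
    X = pow(g, x, p) * pow(M, to_exp(pw), p) % p     # Alice -> Bob
    Y = pow(g, y, p) * pow(N, to_exp(pw), p) % p     # Bob -> Alice

    k_alice = spake2_key(x, Y, N, pw)
    k_bob   = spake2_key(y, X, M, pw)
    print("keys match:", k_alice == k_bob)
    # A wrong password yields a different key (with overwhelming
    # probability at real parameter sizes; this toy group is tiny).
    print("wrong pw matches:", spake2_key(x, Y, N, b"wrong") == k_alice)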
2020
CRYPTO
Incompressible Encodings 📺
An incompressible encoding can probabilistically encode some data $m$ into a codeword $c$, which is not much larger. Anyone can decode $c$ to recover the original data $m$. However, $c$ cannot be efficiently compressed, even if the original data $m$ is given to the decompression procedure for free. In other words, $c$ is a representation of $m$, yet is computationally incompressible even given $m$. An incompressible encoding is composable if many encodings cannot be simultaneously compressed into anything sufficiently smaller than their concatenation. A recent work of Damgard, Ganesh and Orlandi (CRYPTO '19) defined a variant of incompressible encodings and gave an application to ``proofs of replicated storage''. They constructed incompressible encodings in an ideal permutation model over a structured domain, but it was left open whether they can be constructed under standard assumptions, or even in the more basic random-oracle model. In this work, we give new constructions, negative results and applications of incompressible encodings: * We construct incompressible encodings in the common random string (CRS) model under the Decisional Composite Residuosity (DCR) or Learning with Errors (LWE) assumptions. However, the construction has several drawbacks: (1) it is not composable, (2) it only achieves selective security, and (3) the CRS is as long as the data $m$. * We leverage the above construction to also get a scheme in the random-oracle model, under the same assumptions, that avoids all of the above drawbacks. Furthermore, it is significantly more efficient than the prior ideal-model construction. * We give black-box separations, showing that incompressible encodings in the plain model cannot be proven secure under any standard hardness assumption, and incompressible encodings in the CRS model must inherently suffer from all of the drawbacks above. * We give a new application to ``big-key cryptography in the bounded-retrieval model'', where secret keys are made intentionally huge to make them hard to exfiltrate. Using incompressible encodings, we can get all the security benefits of a big key without wasting storage space, by having the key encode useful data.
2020
CRYPTO
Guaranteed Output Delivery Comes Free in Honest Majority MPC 📺
We study the communication complexity of unconditionally secure MPC with guaranteed output delivery over point-to-point channels for corruption threshold t < n/2, assuming the existence of a public broadcast channel. We ask the question: “is it possible to construct MPC in this setting such that the communication complexity per multiplication gate is linear in the number of parties?” While a number of works have focused on reducing the communication complexity in this setting, the answer to the above question has remained elusive until now. We also focus on the concrete communication complexity of evaluating each multiplication gate. We resolve the above question in the affirmative by providing an MPC with communication complexity O(Cn\phi) bits (ignoring fixed terms which are independent of the circuit) where \phi is the length of an element in the field, C is the size of the (arithmetic) circuit, and n is the number of parties. This is the first construction where the asymptotic communication complexity matches the best-known semi-honest protocol. This represents a strict improvement over the previously best-known communication complexity of O(C(n\phi + \kappa) + D_Mn^2\kappa) bits, where \kappa is the security parameter and D_M is the multiplicative depth of the circuit. Furthermore, the concrete communication complexity per multiplication gate is 5.5 field elements per party in the best case and 7.5 field elements in the worst case when one or more corrupted parties have been identified. This also roughly matches the best-known semi-honest protocol, which requires 5.5 field elements per gate. The above also yields the first secure-with-abort MPC protocol with the same cost per multiplication gate as the best-known semi-honest protocol. Our main result is obtained by compiling the secure-with-abort MPC protocol into a fully secure one.
2020
CRYPTO
Cryptanalysis of The Lifted Unbalanced Oil Vinegar Signature Scheme 📺
In 2017, Ward Beullens et al. submitted Lifted Unbalanced Oil and Vinegar (LUOV), a signature scheme based on the famous multivariate public-key cryptosystem (MPKC) called Unbalanced Oil and Vinegar (UOV), to the NIST competition for post-quantum public-key scheme standardization. The defining feature of LUOV is that, though the public key P works over the extension field of degree r of F2, the coefficients of P come from F2. This is done to significantly reduce the size of P. The LUOV scheme is now in the second round of the NIST PQC standardization process. In this paper, we introduce a new attack on LUOV. It exploits the "lifted" structure of LUOV to reduce direct attacks on it to attacks over a subfield. We show that this reduces the complexity below the targeted security for the NIST post-quantum standardization competition.
2020
CRYPTO
Handling Adaptive Compromise for Practical Encryption Schemes 📺
Early Career Researcher Award
We provide a new definitional framework capturing the multi-user security of encryption schemes and pseudorandom functions in the face of adversaries that can adaptively compromise users' keys. We provide a sequence of results establishing the security of practical symmetric encryption schemes under adaptive compromise in the random oracle or ideal cipher model. The bulk of analysis complexity for adaptive compromise security is relegated to the analysis of lower-level primitives such as pseudorandom functions. We apply our framework to give proofs of security for the BurnBox system for privacy in the face of border searches and the in-use searchable symmetric encryption scheme due to Cash et al. In both cases, prior analyses had bugs that our framework helps avoid.
2020
CRYPTO
Delegation with Updatable Unambiguous Proofs and PPAD-Hardness 📺
In this work, we show the hardness of finding a Nash equilibrium, a PPAD-complete problem, based on the quasi-polynomial hardness of the decisional assumption on groups with bilinear maps introduced by Kalai, Paneth and Yang [STOC 2019]. Towards this goal, we construct an {\em unambiguous} and {\em updatable} delegation scheme under this assumption for deterministic computations running in super-polynomial time and polynomial space. This delegation scheme, which is of independent interest, is publicly verifiable and non-interactive in the common reference string (CRS) model. It is {\em unambiguous} meaning that it is hard to compute two different proofs for the same statement. It is {\em updatable} meaning that given a proof for the statement that a Turing machine $M$ reaches configuration $\mathsf{conf}_T$ in $T$ steps, one can {\em efficiently} generate a proof for the statement that $M$ reaches configuration $\mathsf{conf}_{T+1}$ in $T+1$ steps.
2020
CRYPTO
The Memory-Tightness of Authenticated Encryption 📺
This paper initiates the study of the provable security of authenticated encryption (AE) in the memory-bounded setting. Recent works -- Tessaro and Thiruvengadam (TCC '18), Jaeger and Tessaro (EUROCRYPT '19), and Dinur (EUROCRYPT '20) -- focus on confidentiality, and look at schemes for which trade-offs between the attacker's memory and its data complexity are inherent. Here, we ask whether these results and techniques can be lifted to the full AE setting, which additionally asks for integrity. We show both positive and negative results. On the positive side, we provide tight memory-sensitive bounds for the security of GCM and its generalization, CAU (Bellare and Tackmann, CRYPTO '16). Our bounds apply to a restricted case of AE security which abstracts the deployment within protocols like TLS, and rely on a new memory-tight reduction to corresponding restricted notions of confidentiality and integrity. In particular, our reduction uses an amount of memory which linearly depends on that of the given adversary, as opposed to only imposing a constant memory overhead as in earlier works (Auerbach et al, CRYPTO '17). On the negative side, we show that a large class of black-box reductions cannot generically lift confidentiality and integrity security to a joint definition of AE security in a memory-tight way.
2020
CRYPTO
Efficient Pseudorandom Correlation Generators from Ring-LPN 📺
Secure multiparty computation can often utilize a trusted source of correlated randomness to achieve better efficiency. A recent line of work, initiated by Boyle et al. (CCS 2018, Crypto 2019), showed how useful forms of correlated randomness can be generated using a cheap, one-time interaction, followed by only ``silent'' local computation. This is achieved via a \emph{pseudorandom correlation generator} (PCG), a deterministic function that stretches short correlated seeds into long instances of a target correlation. Previous works constructed concretely efficient PCGs for simple but useful correlations, including random oblivious transfer and vector-OLE, together with efficient protocols to distribute the PCG seed generation. Most of these constructions were based on variants of the Learning Parity with Noise (LPN) assumption. PCGs for other useful correlations had poor asymptotic and concrete efficiency. In this work, we design a new class of efficient PCGs based on different flavors of the {\em ring-LPN} assumption. Our new PCGs can generate OLE correlations, authenticated multiplication triples, matrix product correlations, and other types of useful correlations over large fields. These PCGs are more efficient by orders of magnitude than the previous constructions and can be used to improve the preprocessing phase of many existing MPC protocols.
2020
CRYPTO
Lattice-Based Blind Signatures, Revisited 📺
We observe that all previously known lattice-based blind signature schemes contain subtle flaws in their security proofs (e.g., Rückert, ASIACRYPT '08) or can be attacked (e.g., BLAZE by Alkadri et al., FC '20). Motivated by this, we revisit the problem of constructing blind signatures from standard lattice assumptions. We propose a new three-round lattice-based blind signature scheme whose security can be proved, in the random oracle model, from the standard SIS assumption. Our starting point is a modified version of the insecure three-round BLAZE scheme, which is itself based on Lyubashevsky's three-round identification scheme combined with a new aborting technique to reduce the correctness error. Our proof builds upon and extends the recent modular framework for blind signatures of Hauck, Kiltz, and Loss (EUROCRYPT '19). It also introduces several new techniques to overcome the additional challenges posed by the correctness error which is inherent to all lattice-based constructions. While our construction is mostly of theoretical interest, we believe it to be an important stepping stone for future works in this area.
2020
CRYPTO
Fully Deniable Interactive Encryption 📺
Deniable encryption (Canetti \emph{et al.}, Crypto 1996) enhances secret communication over public channels, providing the additional guarantee that the secrecy of communication is protected even if the parties are later coerced (or willingly bribed) to expose their entire internal states: plaintexts, keys and randomness. To date, constructions of deniable encryption --- and more generally, interactive deniable communication --- only address restricted cases where only {\em one} party is compromised (Sahai and Waters, STOC 2014). The main question --- whether deniable communication is at all possible if {\em both} parties are coerced at once --- has remained open. We resolve this question in the affirmative, presenting a communication protocol that is {\em fully deniable} under coercion of both parties. Our scheme has three rounds, assumes subexponentially secure indistinguishability obfuscation and one-way functions, and uses a short global reference string that is generated once at system set-up and suffices for an unbounded number of encryptions and decryptions. Of independent interest, we introduce a new notion called \emph{off-the-record deniability}, which protects parties even when their claimed internal states are inconsistent (a case not covered by prior definitions). Our scheme satisfies both standard deniability and off-the-record deniability.
2020
CRYPTO
Time-Space Tradeoffs and Short Collisions in Merkle-Damgård Hash Functions 📺
We study collision-finding against Merkle-Damgård hashing in the random-oracle model by adversaries with an arbitrary $S$-bit auxiliary advice input about the random oracle and $T$ queries. Recent work showed that such adversaries can find collisions (with respect to a random IV) with advantage $\Omega(ST^2/2^n)$, where $n$ is the output length, beating the birthday bound by a factor of $S$. These attacks were shown to be optimal. We observe that the collisions produced are very long, on the order of $T$ blocks, which would limit their practical relevance. We prove several results related to improving these attacks to find short collisions. We first exhibit a simple attack for finding $B$-block-long collisions achieving advantage $\tilde{\Omega}(STB/2^n)$. We then study whether this attack is optimal. We show that the prior technique based on the bit-fixing model (used for the $ST^2/2^n$ bound) provably cannot reach this bound, and towards a general result we prove there are qualitative jumps in the optimal attacks for finding length $1$, length $2$, and unbounded-length collisions. Namely, the optimal attacks achieve (up to logarithmic factors) order of $(S+T)/2^n$, $ST/2^n$ and $ST^2/2^n$ advantage. We also give an upper bound on the advantage of a restricted class of short-collision finding attacks via a new analysis on the growth of trees in random functional graphs that may be of independent interest.
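For reference, the object under attack is the plain Merkle-Damgård iteration: the message is absorbed block by block through a compression function h, so a pair of distinct B-block message prefixes reaching the same chaining value collides under any common suffix. A toy sketch with an intentionally tiny chaining value, so that a 1-block (B = 1) collision can be found by a birthday search:

    # Merkle-Damgård iteration over an n-bit compression function h;
    # truncated SHA-256 stands in for the random oracle, and n is kept
    # tiny so the birthday search below terminates quickly.
    import hashlib

    N = 4  # chaining value size in bytes (32 bits, deliberately small)

    def h(chaining, block):
        return hashlib.sha256(chaining + block).digest()[:N]

    def md_hash(iv, blocks):
        cv = iv
        for blk in blocks:
            cv = h(cv, blk)
        return cv

    iv = b"\x00" * N
    seen, i = {}, 0
    while True:                       # birthday search, ~2^16 attempts
        blk = i.to_bytes(8, "big")
        cv = md_hash(iv, [blk])
        if cv in seen:
            print("1-block collision:", seen[cv], "and", blk, "->", cv.hex())
            break
        seen[cv] = blk
        i += 1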
2020
CRYPTO
Lattice Reduction for Modules, or How to Reduce ModuleSVP to ModuleSVP 📺
We show how to generalize lattice reduction algorithms to module lattices. Specifically, we reduce $\gamma$-approximate ModuleSVP over module lattices with rank $k \geq 2$ to $\gamma'$-approximate ModuleSVP over module lattices with rank $2 \leq \beta \leq k$. To do so, we modify the celebrated slide-reduction algorithm of Gama and Nguyen to work with module filtrations, a high-dimensional generalization of the ($\mathbb{Z}$-)basis of a lattice. The particular value of $\gamma$ that we achieve depends on the underlying number field $K$, the order $R \subseteq \mathcal{O}_K$, and the embedding (as well as, of course, $k$ and $\beta$). However, for reasonable choices of these parameters, the approximation factor that we achieve is surprisingly close to the one achieved by ``plain'' lattice reduction algorithms, which require an arbitrary SVP oracle in the same dimension. In other words, we show that ModuleSVP oracles are nearly as useful as SVP oracles for solving approximate ModuleSVP in higher dimensions. Our result generalizes the recent independent result of Lee, Pellet-Mary, Stehl\'e, and Wallet, which works in the important special case when $\beta = 2$ and $R = \mathcal{O}_K$ is the ring of integers of $K$ under the canonical embedding. Indeed, at a high level our reduction can be thought of as a generalization of theirs in roughly the same way that block reduction generalizes LLL reduction.
2020
CRYPTO
Random Probing Security: Verification, Composition, Expansion and New Constructions 📺
Masking is among the most powerful countermeasures against side-channel attacks, and leakage models have been introduced to reason about the security of masked implementations in theory. So far, the most widely used leakage model is the probing model defined by Ishai, Sahai, and Wagner (CRYPTO 2003). While it is advantageously convenient for security proofs, it does not capture an adversary exploiting full leakage traces as, e.g., in horizontal attacks. Such attacks exploit the repeated manipulation of the same share to average out the noise and recover the underlying value. To capture a wider class of attacks, another model was introduced, referred to as the random probing model. Given a leakage parameter p, each wire of the circuit leaks its value with probability p. While this model much better reflects the physical reality of side channels, it requires more complex security proofs and does not yet come with practical constructions. In this paper, we define the first framework dedicated to the random probing model. We provide an automatic tool, called VRAPS, to quantify the random probing security of a circuit from its leakage probability. We also formalize a composition property for secure random probing gadgets and exhibit its relation to the strong non-interference (SNI) notion used in the context of probing security. We then revisit the expansion idea proposed by Ananth, Ishai, and Sahai (CRYPTO 2018) and introduce a compiler that builds a random probing secure circuit from small base gadgets achieving a random probing expandability property. We instantiate this compiler with small gadgets for which we verify the expected properties directly with our automatic tool. Our construction can tolerate a leakage probability up to $2^{-8}$, against $2^{-25}$ for the previous construction, with a better asymptotic complexity.
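The leakage model itself is straightforward to simulate: every wire leaks independently with probability p. The toy sketch below (a 2-share masked AND with names of our own choosing) only illustrates the model; it is neither VRAPS nor the paper's compiler.

```python
import random

P_LEAK = 2 ** -8  # leakage probability per wire

def leak(wires, p=P_LEAK):
    # Random probing model: every wire leaks its value with probability p.
    return {name: val for name, val in wires.items() if random.random() < p}

def masked_and(x, y):
    # Toy 2-share ISW-style AND of secret bits x and y; records every
    # intermediate wire so the leakage function can sample from them.
    x0 = random.getrandbits(1); x1 = x ^ x0
    y0 = random.getrandbits(1); y1 = y ^ y0
    r = random.getrandbits(1)
    z0 = (x0 & y0) ^ r ^ (x0 & y1)
    z1 = (x1 & y1) ^ r ^ (x1 & y0)
    wires = dict(x0=x0, x1=x1, y0=y0, y1=y1, r=r, z0=z0, z1=z1)
    assert (z0 ^ z1) == (x & y)  # shares recombine to the AND
    return wires

leaked = leak(masked_and(1, 0))
print("adversary observes:", leaked)  # usually empty for p = 2^-8
```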
2020
CRYPTO
Alzette: a 64-bit ARX-box (feat. CRAX and TRAX) 📺
S-boxes are the only source of non-linearity in many symmetric cryptographic primitives. While they are often defined as functions operating on a small space, some recent designs propose the use of much larger ones (e.g., 32 bits). In this context, an S-box is then defined as a subfunction whose cryptographic properties can be estimated precisely. In this paper, we present a 64-bit ARX-based S-box called Alzette, which can be evaluated in constant time using only 12 instructions on modern CPUs. Its parallel application can also leverage vector (SIMD) instructions. One iteration of Alzette has differential and linear properties comparable to those of the AES S-box, while two iterations are at least as secure as the AES super S-box. Since the state size is much larger than the typical 4 or 8 bits, the study of the relevant cryptographic properties of Alzette is not trivial. We further discuss how such wide S-boxes could be used to construct round functions of 64-, 128- and 256-bit (tweakable) block ciphers with good cryptographic properties that are guaranteed even in the related-tweak setting. We use these structures to design a very lightweight 64-bit block cipher (CRAX) which outperforms SPECK-64/128 for short messages on micro-controllers, and a 256-bit tweakable block cipher (TRAX) which can be used to obtain strong security guarantees against powerful adversaries (nonce misuse, quantum attacks).
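To make the instruction-count claim tangible, here is a Python transcription of one Alzette call; the rotation amounts and the example constant are quoted from the public Sparkle/Alzette specification from memory and should be checked against the paper before reuse.

```python
MASK = 0xFFFFFFFF

def rotr(v, n):
    # 32-bit right rotation.
    return ((v >> n) | (v << (32 - n))) & MASK

def alzette(x, y, c):
    # One Alzette instance on a 64-bit state (x, y); the rotation pairs
    # (31,24), (17,17), (0,31), (24,16) are quoted from the public
    # specification from memory -- verify against the paper before reuse.
    x = (x + rotr(y, 31)) & MASK; y ^= rotr(x, 24); x ^= c
    x = (x + rotr(y, 17)) & MASK; y ^= rotr(x, 17); x ^= c
    x = (x + y) & MASK;           y ^= rotr(x, 31); x ^= c
    x = (x + rotr(y, 24)) & MASK; y ^= rotr(x, 16); x ^= c
    return x, y

# Example constant (one of the published round constants, from memory).
print(tuple(hex(v) for v in alzette(0x01234567, 0x89ABCDEF, 0xB7E15162)))
```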
2020
CRYPTO
The MALICIOUS Framework: Embedding Backdoors into Tweakable Block Ciphers 📺
Inserting backdoors in encryption algorithms has long seemed like a very interesting, yet difficult problem. Most attempts have been unsuccessful for symmetric-key primitives so far, and it remains an open problem how to build such ciphers. In this work, we propose the MALICIOUS framework, a new method to build tweakable block ciphers that contain a hidden backdoor allowing the designer to retrieve the secret key. Our backdoor is differential in nature: a specific related-tweak differential path with high probability is hidden during the design phase of the cipher. We explain how any entity knowing the backdoor can use it to practically recover a user's secret key, and we also argue why, even knowing the presence of the backdoor and the workings of the cipher, an external user cannot retrieve the backdoor. We analyze the security of our construction in the classical black-box model and we show that retrieving the backdoor (the hidden high-probability differential path) is very difficult. We instantiate our framework by proposing the LowMC-M construction, a new family of tweakable block ciphers based on instances of the LowMC cipher, which allows such backdoor embedding. Generating LowMC-M instances is trivial and the LowMC-M family has basically the same efficiency as the LowMC instances it is based on.
2020
CRYPTO
Reverse Firewalls for Actively Secure MPCs 📺
Reverse firewalls were introduced at Eurocrypt 2015 by Mironov and Stephens-Davidowitz as a method for protecting cryptographic protocols against attacks on the devices of the honest parties. In a nutshell: a reverse firewall is placed outside of a device and its goal is to ``sanitize'' the messages sent by it, in such a way that a malicious device cannot leak its secrets to the outside world. It is typically assumed that the cryptographic devices are attacked in a ``functionality-preserving way'' (i.e.~informally speaking, the functionality of the protocol remains unchanged under these attacks). In their paper, Mironov and Stephens-Davidowitz construct a protocol for passively-secure two-party computation with firewalls, leaving the extension of this result to stronger models as an open question. In this paper, we address this problem by constructing a protocol for secure computation with firewalls that has two main advantages over the original protocol from Eurocrypt 2015. Firstly, it is a \emph{multi}party computation protocol (i.e.~it works for an arbitrary number $n$ of parties, not just for $2$). Secondly, it is secure in much stronger corruption settings, namely in the \emph{active corruption model}. More precisely: we consider an adversary that can fully corrupt up to $n-1$ parties, while the remaining parties are corrupted in a functionality-preserving way. Our core techniques are: malleable commitments and malleable non-interactive zero-knowledge, which in particular allow us to create a novel protocol for multiparty augmented coin-tossing into the well with reverse firewalls (based on a protocol of Lindell from Crypto 2001).
2020
CRYPTO
Fast reduction of algebraic lattices over cyclotomic fields 📺
We introduce a framework generalizing lattice reduction algorithms to module lattices in order to \emph{practically} and \emph{efficiently} solve the $\gamma$-Hermite Module-SVP problem over arbitrary cyclotomic fields. The core idea is to exploit the structure of the subfields to design a recursive reduction strategy in the tower of fields we are working in. Besides, we demonstrate how to leverage the symplectic geometry inherent to such fields to provide a significant speed-up of the reduction for rank-two modules. As a byproduct, we also generalize many previous number-theoretic algorithms to all cyclotomic fields and provide speedups for them, in particular for the rounding in the so-called Log-unit lattice. Quantitatively, we show that a module of rank 2 over a cyclotomic field of degree $n$ can be heuristically reduced within approximation factor $2^{\tilde{O}(n)}$ in time $\tilde{O}(n^2B)$, where $B$ is the bitlength of the entries. For $B$ large enough, this complexity shrinks to $\tilde{O}(n^{\log_2 3}B)$. This last result is particularly striking as it goes below the estimate of $n^2B$ swaps given by the classical analysis of the LLL algorithm using the decrease of the \emph{potential} of the basis. Moreover, the whole framework is fully parallelizable, and we provide a full implementation. We apply it to break multilinear cryptographic candidates on concretely proposed parameters. We were able to reduce matrices of dimension 4096 with 6675-bit integers in 4 days, which is more than a million times faster than previous state-of-the-art implementations. Finally, we demonstrate a quasicubic running time for the Gentry-Szydlo algorithm, which finds a generator given the relative norm and a basis of an ideal. This algorithm is important in cryptanalysis and requires efficient ideal multiplications and lattice reductions; as such, we can practically use it in dimension 1024.
2020
CRYPTO
Fiat-Shamir for Repeated Squaring with Applications to PPAD-Hardness and VDFs 📺
The Fiat-Shamir transform is a methodology for compiling a (public-coin) interactive proof system for a language $L$ into a {\em non-interactive} argument system for $L$. Proving security of the Fiat-Shamir transform in the standard model, especially in the context of \emph{succinct} arguments, is largely an unsolved problem. The work of Canetti et al. (STOC 2019) proved the security of the Fiat-Shamir transform applied to the Goldwasser-Kalai-Rothblum (STOC 2008) succinct interactive proof system under a very strong ``optimal learning with errors'' assumption. Achieving a similar result under standard assumptions remains an important open question. In this work, we consider the problem of compiling a different succinct interactive proof system: Pietrzak's proof system (ITCS 2019) for the iterated squaring problem. We construct a hash function family (with evaluation time roughly $2^{\lambda^{\epsilon}}$) that guarantees the soundness of Fiat-Shamir for this protocol assuming the sub-exponential ($2^{-n^{1-\epsilon}}$)-hardness of the $n$-dimensional learning with errors problem. (The latter follows from the worst-case $2^{n^{1-\epsilon}}$ hardness of lattice problems.) More generally, we extend the ``bad-challenge function'' methodology of Canetti et al. for proving the soundness of Fiat-Shamir to a class of protocols whose bad-challenge functions are {\em not} efficiently computable. As a corollary (following Choudhuri et al., ePrint 2019 and Ephraim et al., EUROCRYPT 2020), we construct hard-on-average problems in the complexity class $\mathbf{CLS}\subset \mathbf{PPAD}$ under the $2^{\lambda^{\epsilon}}$-hardness of the repeated squaring problem and the $2^{-n^{1-\epsilon}}$-hardness of the learning with errors problem. Under the additional assumption that the repeated squaring problem is ``inherently sequential'', we also obtain a Verifiable Delay Function (Boneh et al., EUROCRYPT 2018) in the standard model. Finally, we give additional PPAD-hardness and VDF instantiations demonstrating a broader tradeoff between the strength of the repeated squaring assumption and the strength of the lattice assumption.
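The statement being compiled is easy to state in code: $y = x^{2^T} \bmod N$, with Pietrzak's protocol halving $T$ each round and Fiat-Shamir replacing the verifier's challenge by a hash of the transcript. Below is a toy sketch with an insecure modulus and a hash-based challenge of our own choosing (not the paper's hash-function family); it checks one halving round for an honest prover.

```python
import hashlib

def iterated_square(x, T, N):
    # The delay statement: y = x^(2^T) mod N, conjectured to require
    # T sequential squarings without the factorization of N.
    y = x % N
    for _ in range(T):
        y = (y * y) % N
    return y

def fs_challenge(*vals, bits=128):
    # Fiat-Shamir: derive the challenge by hashing the transcript.
    h = hashlib.sha256(repr(vals).encode()).digest()
    return int.from_bytes(h, "big") >> (256 - bits)

N = 3233          # toy RSA modulus 61 * 53 (insecure; real N ~2048 bits)
x, T = 5, 1 << 10
y = iterated_square(x, T, N)
# The prover sends the midpoint u = x^(2^(T/2)); the hashed challenge r
# folds the two half-size claims into a single claim of half the length.
u = iterated_square(x, T // 2, N)
r = fs_challenge(x, y, u, T, N)
x_new = (pow(x, r, N) * u) % N
y_new = (pow(u, r, N) * y) % N
# Recursing on (x_new, y_new, T/2) halves the claim each round.
assert iterated_square(x_new, T // 2, N) == y_new
print("one halving round verified")
```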
2020
CRYPTO
Mode-Level vs. Implementation-Level Physical Security in Symmetric Cryptography: A Practical Guide Through the Leakage-Resistance Jungle 📺
Triggered by the increasing deployment of embedded cryptographic devices (e.g., for the IoT), the design of authentication, encryption and authenticated encryption schemes enabling improved security against side-channel attacks has become an important research direction. Over the last decade, a number of modes of operation have been proposed and analyzed under different abstractions. In this paper, we investigate the practical consequences of these findings. For this purpose, we first translate the physical assumptions of leakage-resistance proofs into minimum security requirements for implementers. Thanks to this (heuristic) translation, we observe that (i) security against physical attacks can be viewed as a tradeoff between mode-level and implementation-level protection mechanisms, and (ii) security requirements to guarantee confidentiality and integrity in the presence of leakage can be concretely different for the different parts of an implementation. We illustrate the first point by analyzing several modes of operation with gradually increased leakage-resistance. We illustrate the second point by exhibiting leveled implementations, where different parts of the investigated schemes have different security requirements against leakage, leading to performance improvements when high physical security is needed. We finally initiate a comparative discussion of the different solutions to instantiate the components of a leakage-resistant authenticated encryption scheme.
2020
CRYPTO
Non-Interactive Zero-Knowledge Arguments for QMA, with preprocessing 📺
We initiate the study of non-interactive zero-knowledge (NIZK) arguments for languages in QMA. Our first main result is the following: if Learning With Errors (LWE) is hard for quantum computers, then any language in QMA has an NIZK argument with preprocessing. The preprocessing in our argument system consists of (i) the generation of a CRS and (ii) a single (instance-independent) quantum message from verifier to prover. The instance-dependent phase of our argument system involves only a single classical message from prover to verifier. Importantly, verification in our protocol is entirely classical, and the verifier need not have quantum memory; its only quantum actions are in the preprocessing phase. Our second contribution is to extend the notion of a classical proof of knowledge to the quantum setting. We introduce the notions of arguments and proofs of quantum knowledge (AoQK/PoQK), and we show that our non-interactive argument system satisfies the definition of an AoQK. In particular, we explicitly construct an extractor which can recover a quantum witness from any prover which is successful in our protocol. Finally, we show that any language in QMA has an (interactive) proof of quantum knowledge.
2020
CRYPTO
Cryptanalysis of LEDAcrypt 📺
We report on the concrete cryptanalysis of LEDAcrypt, a 2nd Round candidate in NIST's Post-Quantum Cryptography standardization process and one of 17 encryption schemes that remain as candidates for near-term standardization. LEDAcrypt consists of a public-key encryption scheme built from the McEliece paradigm and a key-encapsulation mechanism (KEM) built from the Niederreiter paradigm, both using a quasi-cyclic low-density parity-check (QC-LDPC) code. In this work, we identify a large class of extremely weak keys and provide an algorithm to recover them. For example, we demonstrate how to recover $1$ in $2^{47.79}$ of LEDAcrypt's keys using only $2^{18.72}$ guesses at the 256-bit security level. This is a major, practical break of LEDAcrypt. Further, we demonstrate a continuum of progressively less weak keys (from extremely weak keys up to all keys) that can be recovered in substantially less work than previously known. This demonstrates that the weakness is fundamental to LEDAcrypt's design.
2020
CRYPTO
Security Analysis of NIST CTR-DRBG 📺
We study the security of CTR-DRBG, one of NIST's recommended Pseudorandom Number Generator (PRNG) designs. Recently, Woodage and Shumow (Eurocrypt '19), and then Cohney et al. (S&P '20), pointed out some potential vulnerabilities in both the NIST specification and common implementations of CTR-DRBG. While these researchers do suggest countermeasures, the security of the patched CTR-DRBG is still questionable. Our work fills this gap, proving that CTR-DRBG satisfies the robustness notion of Dodis et al. (CCS '13), the standard security goal for PRNGs.
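As a reminder of what is being proved secure: CTR-DRBG's generate path runs AES in counter mode over an internal state (K, V) and then refreshes that state from further keystream. The sketch below is a heavily simplified toy assuming the pycryptodome package; it omits the derivation function, reseeding, additional input, and the rest of the NIST bookkeeping.

```python
from Crypto.Cipher import AES  # pycryptodome (assumed available)

BLOCK = 16

def inc(v: bytes) -> bytes:
    # Increment the 128-bit counter V modulo 2^128.
    return ((int.from_bytes(v, "big") + 1) % (1 << 128)).to_bytes(BLOCK, "big")

class TinyCtrDrbg:
    """Toy CTR-DRBG core (AES-128): generate + update only."""

    def __init__(self, key: bytes, v: bytes):
        self.key, self.v = key, v

    def _blocks(self, n: int) -> bytes:
        # AES-CTR keystream from the current state.
        out, ecb = b"", AES.new(self.key, AES.MODE_ECB)
        while len(out) < n:
            self.v = inc(self.v)
            out += ecb.encrypt(self.v)
        return out[:n]

    def generate(self, nbytes: int) -> bytes:
        output = self._blocks(nbytes)
        # Update: refresh (K, V) from further keystream so that a later
        # state compromise does not reveal past outputs.
        new_state = self._blocks(32)
        self.key, self.v = new_state[:16], new_state[16:]
        return output

drbg = TinyCtrDrbg(key=b"\x00" * 16, v=b"\x00" * 16)
print(drbg.generate(32).hex())
```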
2020
CRYPTO
A Polynomial-Time Algorithm for Solving the Hidden Subset Sum Problem 📺
At Crypto '99, Nguyen and Stern described a lattice-based algorithm for solving the hidden subset sum problem, a variant of the classical subset sum problem where the n weights are also hidden. While the Nguyen-Stern algorithm works quite well in practice for moderate values of n, we argue that its complexity is actually exponential in n; namely, in the final step one must recover a very short basis of an n-dimensional lattice, which takes exponential time in n, as one must apply BKZ reduction with increasingly large block-sizes. In this paper, we describe a variant of the Nguyen-Stern algorithm that works in polynomial time. The first step is the same orthogonal lattice attack with LLL as in the original algorithm. In the second step, instead of applying BKZ, we use a multivariate technique that recovers the short lattice vectors and finally the hidden secrets in polynomial time. Our algorithm works quite well in practice, as we can reach n=250 in a few hours on a single PC.
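For concreteness, a hidden subset sum instance is easy to generate: the adversary sees only the modulus and the samples, while both the weights and the binary coefficients remain hidden. A toy generator with parameter choices of our own (not the paper's experimental settings):

```python
import secrets

def hidden_subset_sum_instance(n=10, m=40, bits=64):
    # n hidden weights alpha_j and m samples; each sample b_i is a random
    # 0/1 combination of the weights modulo q. The adversary gets only
    # (q, b); both alpha and the bit matrix x stay hidden.
    q = secrets.randbits(bits) | (1 << (bits - 1)) | 1
    alpha = [secrets.randbelow(q) for _ in range(n)]
    x = [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]
    b = [sum(a * xi for a, xi in zip(alpha, row)) % q for row in x]
    return q, b, (alpha, x)  # the last component is the hidden witness

q, b, witness = hidden_subset_sum_instance()
print(f"instance: q={q}, first samples={b[:3]}")
```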
2020
CRYPTO
A Tight Parallel Repetition Theorem for Partially Simulatable Interactive Arguments via Smooth KL-Divergence 📺
Hardness amplification is a central problem in the study of interactive protocols. While the "natural" parallel repetition transformation is known to reduce the soundness error of some special cases of interactive arguments: three-message protocols (Bellare, Impagliazzo, and Naor [FOCS '97]) and public-coin protocols (Hastad, Pass, Wikstrom, and Pietrzak [TCC '10], Chung and Lu [TCC '10] and Chung and Pass [TCC '15]), it fails to do so in the general case (the above Bellare et al.; also Pietrzak and Wikstrom [TCC '07]). The only known round-preserving approach that applies to all interactive arguments is Haitner's random-terminating transformation [SICOMP '13]; Haitner showed that the parallel repetition of the transformed protocol reduces the soundness error at a weak exponential rate: if the original $m$-round protocol has soundness error $1-\varepsilon$, then the $n$-fold parallel repetition of its random-terminating variant has soundness error $(1-\varepsilon)^{\varepsilon n/m^4}$ (omitting constant factors). Hastad et al. have generalized this result to the so-called partially simulatable interactive arguments. In this work we prove that parallel repetition of random-terminating arguments reduces the soundness error at a much stronger exponential rate: the soundness error of the $n$-fold parallel repetition is $(1-\varepsilon)^{n/m}$, only an $m$ factor from the optimal rate of $(1-\varepsilon)^n$ achievable in public-coin and three-message arguments. The result generalizes to partially simulatable arguments. This is achieved by presenting a tight bound on a relaxed variant of the KL-divergence between the distribution induced by our reduction and its ideal variant, a result whose scope extends beyond parallel repetition proofs. We prove the tightness of the above bound for random-terminating arguments by presenting a matching protocol.
2020
CRYPTO
Quantifying the Security Cost of Migrating Protocols to Practice 📺
We give a framework for relating the quantitative, concrete security of a ``reference'' protocol (say, one appearing in an academic paper) to that of some derived, ``real'' protocol (say, appearing in a cryptographic standard). It is based on the indifferentiability framework of Maurer, Renner, and Holenstein (MRH), whose application has been exclusively focused upon non-interactive cryptographic primitives, e.g., hash functions and Feistel networks. Our extension of MRH is supported by a clearly defined execution model and two composition lemmata, all formalized in a modern pseudocode language. Together, these allow for precise statements about game-based security properties of cryptographic objects (interactive or not) at various levels of abstraction. As a real-world application, we design and prove tight security bounds for a potential TLS 1.3 extension that integrates the SPAKE2 password-authenticated key exchange into the existing handshake. (This is a problem of current interest to the IETF.)
2020
CRYPTO
Cryptanalysis Results on Spook: Bringing Full-round Shadow-512 to the Light 📺
Spook is one of the 32 candidates that have made it to the second round of the NIST Lightweight Cryptography Standardization process, and is particularly interesting since it proposes differential side-channel resistance. In this paper, we present practical distinguishers for the full 6-step version of the underlying permutations of Spook, namely Shadow-512 and Shadow-384, solving challenges proposed by the designers on the permutation. We also propose practical forgeries with 4-step Shadow for the S1P mode of operation in the nonce misuse scenario, which is allowed by the CIML2 security game considered by the authors. All the results presented in this paper have been implemented.
2020
CRYPTO
Collusion Resistant Watermarkable PRFs from Standard Assumptions 📺
A software watermarking scheme can embed a message into a program without significantly changing its functionality. Moreover, any attempt to remove the embedded message from a marked program will substantially change the functionality of the program. Prior constructions of watermarking schemes focus on watermarking cryptographic functions, such as pseudorandom functions (PRFs), public-key encryption, etc. A natural security requirement for watermarking schemes is collusion resistance, where the adversary's goal is to remove the embedded messages given multiple marked versions of the same program. Currently, this strong security guarantee has been achieved by watermarking schemes for public-key cryptographic primitives from standard assumptions (Goyal et al., CRYPTO 2019) and by watermarking schemes for PRFs from indistinguishability obfuscation (Yang et al., ASIACRYPT 2019). However, no collusion resistant watermarking scheme for PRFs from standard assumptions was known. In this work, we solve this problem by presenting a generic construction that upgrades a watermarkable PRF without collusion resistance to a collusion resistant one. One appealing feature of our construction is that it preserves the security properties of the original scheme. For example, if the original scheme is secure with extraction queries, the new scheme is also secure with extraction queries. Moreover, the new scheme can achieve unforgeability even if the original scheme does not provide this security property. Instantiating our construction with existing watermarking schemes for PRFs, we obtain collusion resistant watermarkable PRFs from standard assumptions, offering various security properties.
2020
CRYPTO
Comparing the difficulty of factorization and discrete logarithm: a 240-digit experiment 📺
We report on two new records: the factorization of RSA-240, a 795-bit number, and a discrete logarithm computation over a 795-bit prime field. Previous records were the factorization of RSA-768 in 2009 and a 768-bit discrete logarithm computation in 2016. Our two computations at the 795-bit level were done using the same hardware and software, and show that computing a discrete logarithm is not much harder than a factorization of the same size. Moreover, thanks to algorithmic variants and well-chosen parameters, our computations were significantly less expensive than anticipated based on previous records. The last page of this paper also reports on the factorization of RSA-250.
2020
CRYPTO
Automatic Verification of Differential Characteristics: Application to Reduced Gimli 📺
Since Keccak was selected as the SHA-3 standard, more and more permutation-based primitives have been proposed. Unlike block ciphers, the underlying permutation of a permutation-based primitive has no round keys. Therefore, there is a higher risk for a differential characteristic of the underlying permutation to become incompatible when considering the dependency of difference transitions over different rounds. However, most of the MILP- or SAT-based models used to search for differential characteristics involve only the difference transitions and treat them as independent across rounds, which may cause an invalid characteristic to be found for the underlying permutation. To overcome this obstacle, we are motivated to design a model which automatically avoids such inconsistencies in the search for differential characteristics. Our technique is to involve both the difference transitions and the value transitions in the constructed model. This idea is inspired by the algorithm to find SHA-2 characteristics proposed by Mendel et al. at ASIACRYPT 2011, where the differential characteristic and the conforming message pair are searched simultaneously. As a first attempt, our new technique is applied to the Gimli permutation, which was proposed at CHES 2017. As a result, we reveal that some existing differential characteristics of reduced Gimli are indeed incompatible, one of which appears in the Gimli document itself. In addition, since only the permutation is analyzed in the Gimli document, we are led to carry out a comprehensive study covering the proposed hash scheme and the authenticated encryption (AE) scheme specified for Gimli, which has become a second round candidate of the NIST lightweight cryptography standardization process. For the hash scheme, a semi-free-start (SFS) collision attack can reach up to 8 rounds starting from an intermediate round. For the AE scheme, a state recovery attack is demonstrated to achieve up to 9 rounds. It should be emphasized that our analysis does not threaten the security of Gimli.
2020
CRYPTO
Faster Enumeration-based Lattice Reduction: Root Hermite Factor k^(1/(2k)) in Time k^(k/8 + o(k)) 📺
We give a lattice reduction algorithm that achieves root Hermite factor $k^{1/(2k)}$ in time $k^{k/8 + o(k)}$ and polynomial memory. This improves on the previously best known enumeration-based algorithms which achieve the same quality, but in time $k^{k/(2e) + o(k)}$. A cost of $k^{k/8 + o(k)}$ was previously mentioned as potentially achievable (Hanrot-Stehlé '10) or as a heuristic lower bound (Nguyen '10) for enumeration algorithms. We prove the complexity and quality of our algorithm under a heuristic assumption and provide empirical evidence from simulation and implementation experiments attesting to its performance for practical and cryptographic parameter sizes. Our work also suggests potential avenues for achieving costs below $k^{k/8 + o(k)}$ for the same root Hermite factor, based on the geometry of SDBKZ-reduced bases.
2020
CRYPTO
Rounding in the Rings 📺
In this work, we conduct a comprehensive study on establishing hardness reductions for (Module) Learning with Rounding over rings (RLWR). Towards this, we present an algebraic framework of LWR, inspired by a recent work of Peikert and Pepin (TCC '19). Then we show a search-to-decision reduction for Ring-LWR, generalizing a result in the plain LWR setting by Bogdanov et al. (TCC '15). Finally, we show a reduction from Ring-LWE to Module Ring-LWR (even for leaky secrets), generalizing the plain LWE to LWR reduction by Alwen et al. (Crypto '13). One of our central techniques is a new ring leftover hash lemma, which might be of independent interest.
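To make the rounding operation concrete: LWR replaces LWE's added noise with a deterministic rounding of the inner product from $\mathbb{Z}_q$ down to $\mathbb{Z}_p$. A minimal sketch of plain (non-ring) LWR sample generation with toy parameters of our own choosing (the paper's results concern the ring and module variants):

```python
import secrets

def lwr_sample(s, q=3329, p=8):
    # One LWR sample: a uniform vector a, and the inner product <a, s>
    # deterministically rounded from Z_q down to Z_p. The rounding plays
    # the role of LWE's random noise term.
    a = [secrets.randbelow(q) for _ in range(len(s))]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    b = (p * inner + q // 2) // q % p  # round(p/q * <a,s>) mod p
    return a, b

s = [secrets.randbelow(3329) for _ in range(8)]  # secret (toy dimension)
print(lwr_sample(s))
```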
2020
CRYPTO
Verifiable Registration-Based Encryption 📺
In recent work, Garg, Hajiabadi, Mahmoody, and Rahimi (TCC 2018) introduced a new encryption framework, which they referred to as Registration-Based Encryption (RBE). The central motivation behind RBE was to provide a novel methodology for solving the well-known key-escrow problem in Identity-Based Encryption (IBE) systems. Informally, in an RBE system there is no private-key generator, unlike IBE systems; instead, it is replaced with a public key accumulator. Every user in an RBE system samples its own public-secret key pair and sends the public key to the accumulator for registration. The key accumulator has no secret state and is only responsible for compressing all the registered user identity-key pairs into a short public commitment. Here the encryptor only requires the compressed parameters along with the target identity, whereas a decryptor requires supplementary key material along with the secret key associated with the registered public key. The initial construction by Garg et al. based on standard assumptions only provided weak efficiency properties. In a follow-up work by Garg, Hajiabadi, Mahmoody, Rahimi, and Sekar (PKC 2019), they gave an efficient RBE construction from standard assumptions. However, both these works considered the key accumulator to be honest, which might be too strong an assumption in real-world scenarios. In this work, we initiate a formal study of RBE systems with malicious key accumulators. To that end, we introduce a strengthening of the RBE framework which we call Verifiable RBE (VRBE). A VRBE system additionally gives the users an extra capability to obtain short proofs from the key accumulator proving correct (and unique) registration for every registered user as well as proving non-registration for any yet unregistered identity. We construct VRBE systems that provide succinct proofs of registration and non-registration from standard assumptions (such as CDH, Factoring, LWE). Our proof systems also naturally allow a much more efficient audit process which can be performed by any non-participating third party as well. A by-product of our approach is that we provide a more efficient RBE construction than that provided in the prior work of Garg et al. (PKC 2019). Lastly, we initiate a study on the extension of VRBE to a wider range of access and trust structures.
2020
CRYPTO
Indifferentiability for Public Key Cryptosystems 📺
We initiate the study of indifferentiability for public key encryption and other public key primitives. Our main results are definitions and constructions of public key cryptosystems that are indifferentiable from ideal cryptosystems, in the random oracle model. Cryptosystems include: 1) Public key encryption; 2) Digital signatures; 3) Non-interactive key agreement. Our schemes are based on relatively standard public key assumptions. By being indifferentiable from an ideal object, our schemes automatically satisfy a wide range of security properties, including any property representable as a single-stage game, and can be composed to operate in higher-level protocols.
2020
CRYPTO
Asymptotic complexities of discrete logarithm algorithms in pairing-relevant finite fields 📺
We study the discrete logarithm problem at the boundary case between small and medium characteristic finite fields, which is precisely the area where finite fields used in pairing-based cryptosystems live. In order to evaluate the security of pairing-based protocols, we thoroughly analyze the complexity of all the algorithms that coexist at this boundary case: the Quasi-Polynomial algorithms, the Number Field Sieve and its many variants, and the Function Field Sieve. We adapt the latter to the particular case where the extension degree is composite, and show how to lower the complexity by working in a shifted function field. This study ultimately allows us to give precise values for the characteristic asymptotically achieving the highest security level for pairings. Surprisingly enough, there exist special characteristics that are as secure as general ones.
2020
CRYPTO
The Measure-and-Reprogram Technique 2.0: Multi-Round Fiat-Shamir and More 📺
We revisit recent works by Don, Fehr, Majenz and Schaffner and by Liu and Zhandry on the security of the Fiat-Shamir transformation of sigma-protocols in the quantum random oracle model (QROM). Two natural questions that arise in this context are: (1) whether the results extend to the Fiat-Shamir transformation of {\em multi-round} interactive proofs, and (2) whether Don et al.'s $O(q^2)$ loss in security is optimal. Firstly, we answer question (1) in the affirmative. As a byproduct of solving a technical difficulty in proving this result, we slightly improve the result of Don et al., equipping it with a cleaner bound and an even simpler proof. We apply our result to digital signature schemes, showing that it can be used to prove strong security for schemes like MQDSS in the QROM. As another application, we prove QROM-security of a non-interactive OR proof by Liu, Wei and Wong. As for question (2), we show via a Grover-search based attack that Don et al.'s quadratic security loss for the Fiat-Shamir transformation of sigma-protocols is optimal up to a small constant factor. This extends to our new multi-round result, proving it tight up to a factor that depends on the number of rounds only, i.e., is constant for any constant-round interactive proof.
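As background, the transformation under study replaces the verifier's random challenge in a sigma-protocol with a hash of the prover's first message. Below is a minimal classical sketch using Schnorr's protocol over a toy group (insecure parameter sizes, chosen only for illustration; this is not one of the schemes analyzed in the paper):

```python
import hashlib
import secrets

# Toy Schnorr group: P is the Mersenne prime 2^127 - 1 (insecure sizes).
P = 2 ** 127 - 1
Q = P - 1          # exponents are reduced modulo the group order
G = 3

def fs_challenge(commitment: int, msg: bytes) -> int:
    # Fiat-Shamir: the challenge is a hash of the commitment and message.
    data = commitment.to_bytes(16, "big") + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def sign(sk: int, msg: bytes):
    # Sigma-protocol: commit a = g^r; challenge c = H(a, msg);
    # response z = r + c * sk (mod Q).
    r = secrets.randbelow(Q)
    a = pow(G, r, P)
    c = fs_challenge(a, msg)
    return a, (r + c * sk) % Q

def verify(pk: int, msg: bytes, sig) -> bool:
    a, z = sig
    c = fs_challenge(a, msg)
    return pow(G, z, P) == (a * pow(pk, c, P)) % P

sk = secrets.randbelow(Q)
pk = pow(G, sk, P)
print(verify(pk, b"multi-round Fiat-Shamir", sign(sk, b"multi-round Fiat-Shamir")))
```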
2020
CRYPTO
MPC with Friends and Foes 📺
Classical definitions for secure multiparty computation assume the existence of a single adversarial entity controlling the set of corrupted parties. Intuitively, the definition requires that the view of the adversary, corrupting $t$ parties, in a real-world execution can be simulated by an adversary in an ideal model, where parties interact only via a trusted party. No restrictions, however, are imposed on the view of honest parties in the protocol; thus, if honest parties obtain information about the private inputs of other honest parties, it is not counted as a violation of privacy. This is arguably undesirable in many situations that fall into the MPC framework. Nevertheless, there are secure protocols (e.g., the 2-round multiparty protocol of Ishai et al. [CRYPTO 2010] tolerating a single corrupted party) that instruct the honest parties to reveal their private inputs to all other honest parties (once the malicious party is somehow identified). In this paper, we put forth a new security notion, which we call FaF-security, extending the classical notion. In essence, $(t,h^*)$-FaF-security requires the view of a subset of up to $h^*$ honest parties to also be simulatable in the ideal model (in addition to the view of the malicious adversary, corrupting up to $t$ parties). This property should still hold even if the adversary leaks information to honest parties by sending them non-prescribed messages. We provide a thorough exploration of the new notion, investigating it in relation to a variety of existing security notions. We further investigate the feasibility of achieving FaF-security and show that every functionality can be computed with (computational) $(t,h^*)$-FaF full-security if and only if $2t + h^* < m$. Interestingly, the lower-bound result actually shows that even fair FaF-security is impossible in general when $2t + h^* \ge m$ (surprisingly, the view of the malicious attacker is not used as the trigger for the attack). We also investigate the optimal round complexity for $(t,h^*)$-FaF-secure protocols and give evidence that the leakage of private inputs of honest parties in the protocol of Ishai et al. [CRYPTO 2010] is inherent.
2020
CRYPTO
Better Concrete Security for Half-Gates Garbling (in the Multi-Instance Setting) 📺
We study the concrete security of high-performance implementations of half-gates garbling, which all rely on (hardware-accelerated) AES. We find that current instantiations using $k$-bit wire labels can be completely broken, in the sense that the circuit evaluator learns all the inputs of the circuit garbler, in time $O(2^k/C)$, where $C$ is the total number of (non-free) gates that are garbled, possibly across multiple independent executions. The attack can be applied to existing circuit-garbling libraries using $k = 80$ when $C \approx 10^9$, and would require 267 machine-months and cost about $3500 to implement on the Google Cloud Platform. Since the attack can be entirely parallelized, it could be carried out in about a month using roughly 250 machines. With this as our motivation, we seek a way to instantiate the hash function in the half-gates scheme so as to achieve better concrete security. We present a construction based on AES that achieves optimal security in the single-instance setting (when only a single circuit is garbled). We also show how to modify the half-gates scheme so that its concrete security does not degrade in the multi-instance setting. Our modified scheme is as efficient as prior work in networks with up to 2 Gbps bandwidth.
2020
CRYPTO
Always Have a Backup Plan: Fully Secure Synchronous MPC with Asynchronous Fallback 📺
Protocols for secure Multi-Party Computation (MPC) can be classified according to the underlying communication model. Two prominent communication models considered in the literature are the synchronous and asynchronous models, which considerably differ in terms of the achievable security guarantees. Synchronous MPC protocols can achieve the optimal corruption threshold $n/2$ and allow every party to give input, but become completely insecure when synchrony assumptions are violated. On the other hand, asynchronous MPC protocols remain secure under arbitrary network conditions, but can tolerate only $n/3$ corruptions and parties with slow connections unavoidably cannot give input. A natural question is whether there exists a protocol for MPC that can tolerate up to $t_s < n/2$ corruptions under a synchronous network and $t_a < n/3$ corruptions even when the network is asynchronous. We answer this question by showing tight feasibility and impossibility results. More specifically, we show that such a protocol exists if and only if $t_a + 2t_s < n$ and the number of inputs taken into account under an asynchronous network is at most $n-t_s$.
2020
CRYPTO
New Constructions of Hinting PRGs, OWFs with Encryption, and more 📺
Over the last few years, there has been a surge of new cryptographic results, including laconic oblivious transfer, (anonymous/hierarchical) identity-based encryption, trapdoor functions, chosen-ciphertext security transformations, and designated-verifier zero-knowledge proofs, due to a beautiful framework recently introduced in the works of Cho et al. (Crypto 2017) and Dottling and Garg (Crypto 2017). The primitive of one-way function with encryption (OWFE) and its relatives (chameleon encryption, one-time signatures with encryption, hinting PRGs, trapdoor hash encryption, batch encryption) has been a centerpiece in all these results. While there exist multiple realizations of OWFE (and its relatives) from a variety of assumptions such as CDH, Factoring, and LWE, all such constructions fall under the same general ``missing block'' framework. Although this framework has been instrumental in opening up a new pathway towards various cryptographic functionalities via the abstraction of OWFE (and its relatives), it has been accompanied by undesirable inefficiencies that might inhibit much wider adoption in many practical scenarios. Motivated by the surging importance of the OWFE abstraction (and its relatives), a natural question to ask is whether the existing approaches can be diversified, not only to obtain more constructions from different assumptions, but also to develop newer frameworks. We believe answering this question will eventually lead to important and previously unexplored performance trade-offs in the overarching applications of this novel cryptographic paradigm. In this work, we propose a new accumulation-style framework for building a new class of OWFE as well as hinting PRG constructions, with a special focus on achieving shorter ciphertext size and shorter public parameter size (respectively). Such performance improvements parlay into shorter parameters in their corresponding applications. Briefly, we explore the following performance trade-offs: (1) for OWFE, our constructions outperform prior ones in terms of ciphertext size as well as encryption time, but this comes at the cost of larger evaluation and setup times; (2) for hinting PRGs, our constructions provide a rather dramatic trade-off between evaluation time and parameter size, with our construction leading to significantly shorter public parameter size. The trade-off enabled by our hinting PRG construction also has interesting implications for the CPA-to-CCA transformations mentioned above. We also provide concrete performance measurements for our constructions and compare them with existing approaches. We believe highlighting such trade-offs will lead to wider adoption of these abstractions in a practical sense.
2020
CRYPTO
Two-Sided Malicious Security for Private Intersection-Sum with Cardinality 📺
Private intersection-sum with cardinality allows two parties, where each party holds a private set and one of the parties additionally holds a private integer value associated with each element in her set, to jointly compute the cardinality of the intersection of the two sets as well as the sum of the associated integer values for all the elements in the intersection, and nothing beyond that. We present a new construction for private intersection sum with cardinality that provides malicious security with abort and guarantees that both parties receive the output upon successful completion of the protocol. A central building block for our constructions is a primitive called shuffled distributed oblivious PRF (DOPRF), which is a PRF that offers oblivious evaluation using a secret key shared between two parties, and in addition to this allows obliviously permuting the PRF outputs of several parallel oblivious evaluations. We present the first construction for shuffled DOPRF with malicious security. We further present several new sigma proof protocols for relations across Pedersen commitments, ElGamal encryptions, and Camenisch-Shoup encryptions that we use in our main construction, for which we develop new batching techniques to reduce communication. We implement and evaluate the efficiency of our protocol and show that we can achieve communication cost that is only 4-5x greater than the most efficient semi-honest protocol. When measuring monetary cost of executing the protocol in the cloud, our protocol is 25x more expensive than the semi-honest protocol. Our construction also allows for different parameter regimes that enable trade-offs between communication and computation.
2020
CRYPTO
Private Set Intersection in the Internet Setting From Lightweight Oblivious PRF 📺
We present a new protocol for two-party private set intersection (PSI) with semi-honest security in the plain model and one-sided malicious security in the random oracle model. Our protocol achieves a better balance between computation and communication than existing PSI protocols. Specifically, our protocol is the fastest in networks with moderate bandwidth (e.g., 30 - 100 Mbps). Considering the monetary cost (proposed by Pinkas et al. in CRYPTO 2019) to run the protocol on a cloud computing service, our protocol also compares favorably. Underlying our PSI protocol is a new lightweight multi-point oblivious pseudorandom function (OPRF) protocol based on oblivious transfer (OT) extension. We believe this new protocol may be of independent interest.
2020
CRYPTO
Spartan: Efficient and general-purpose zkSNARKs without trusted setup 📺
This paper introduces Spartan, a new family of zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) for rank-1 constraint satisfiability (R1CS), an NP-complete language that generalizes arithmetic circuit satisfiability. A distinctive feature of Spartan is that it offers the first zkSNARKs without trusted setup (i.e., transparent zkSNARKs) for NP where verifying a proof incurs sub-linear costs, without requiring uniformity in the NP statement's structure. Furthermore, Spartan offers zkSNARKs with a time-optimal prover, a property that has remained elusive for nearly all zkSNARKs in the literature. To achieve these results, we introduce new techniques that we compose with the sum-check protocol, a seminal interactive proof protocol: (1) computation commitments, a primitive to create a succinct commitment to a description of a computation; this technique is crucial for a verifier to achieve sub-linear costs after investing a one-time, public computation to preprocess a given NP statement; (2) SPARK, a cryptographic compiler to transform any existing extractable polynomial commitment scheme for multilinear polynomials to one that efficiently handles sparse multilinear polynomials; this technique is critical for achieving a time-optimal prover; and (3) a compact encoding of an R1CS instance as a low-degree polynomial. The end result is a public-coin succinct interactive argument of knowledge for NP (which can be viewed as a succinct variant of the sum-check protocol); we transform it into a zkSNARK using prior techniques. By applying SPARK to different commitment schemes, we obtain several zkSNARKs where the verifier's costs and the proof size range from $O(\log^2 n)$ to $O(\sqrt{n})$ depending on the underlying commitment scheme ($n$ denotes the size of the NP statement). These schemes do not require a trusted setup except for one that requires a universal trusted setup. We implement Spartan as a library in about 8,000 lines of Rust. We use the library to build a transparent zkSNARK in the random oracle model where security holds under the discrete logarithm assumption. We experimentally evaluate it and compare with recent zkSNARKs for R1CS instance sizes up to $2^{20}$ constraints. Among schemes without trusted setup, Spartan offers the fastest prover with speedups of 36--$152\times$ depending on the baseline, produces proofs that are shorter by 1.2--$416\times$, and incurs the lowest verification times with speedups of 3.6--$1326\times$. When compared to the state-of-the-art zkSNARK with trusted setup, Spartan's prover is $2\times$ faster for arbitrary R1CS instances and $16\times$ faster for data-parallel workloads.
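For readers new to R1CS, the NP statement Spartan proves is concrete and small: an instance is a triple of matrices (A, B, C) over a finite field, and a vector z satisfies it when (Az) ∘ (Bz) = Cz entrywise. A toy satisfiability checker with names of our own choosing (this is the language being proved, not Spartan's argument system):

```python
def r1cs_satisfied(A, B, C, z, q):
    # Rank-1 constraint satisfiability: for every constraint row i,
    # <A_i, z> * <B_i, z> == <C_i, z> (mod q).
    def dot(row, vec):
        return sum(r * v for r, v in zip(row, vec)) % q
    return all(dot(a, z) * dot(b, z) % q == dot(c, z)
               for a, b, c in zip(A, B, C))

# Encode the single constraint x * x = y with z = (1, x, y).
q = 97
A = [[0, 1, 0]]  # selects x
B = [[0, 1, 0]]  # selects x
C = [[0, 0, 1]]  # selects y
print(r1cs_satisfied(A, B, C, z=[1, 5, 25], q=q))  # True:  5 * 5 == 25
print(r1cs_satisfied(A, B, C, z=[1, 5, 24], q=q))  # False: 5 * 5 != 24
```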
2020
CRYPTO
A non-PCP Approach to Succinct Quantum-Safe Zero-Knowledge 📺
Today's most compact zero-knowledge arguments are based on the hardness of the discrete logarithm problem and related classical assumptions. If one is interested in quantum-safe solutions, then all of the known techniques stem from the PCP-based framework of Kilian (STOC 92), which can be instantiated based on the hardness of any collision-resistant hash function. Both approaches produce asymptotically logarithmic-sized arguments but, by exploiting extra algebraic structure, the discrete logarithm arguments are a few orders of magnitude more compact in practice than the generic constructions. In this work, we present the first (poly)-logarithmic \emph{post-quantum} zero-knowledge arguments that deviate from the PCP approach. At the core of succinct zero-knowledge proofs are succinct commitment schemes (in which the commitment and the opening proof are sub-linear in the message size), and we propose two such constructions based on the hardness of the (Ring)-Short Integer Solution (Ring-SIS) problem, each having certain trade-offs. For commitments to $N$ secret values, the communication complexity of our first scheme is $\tilde{O}(N^{1/c})$ for any positive integer $c$, and $O(\log^2 N)$ for the second. Both of our protocols have somewhat large \emph{slack}, which in lattice constructions is the ratio of the norm of the extracted secrets to the norm of the secrets that the honest prover uses in the proof. The lower this factor, the smaller we can choose the practical parameters. For a fixed value of this factor, our $\tilde{O}(N^{1/c})$-argument actually achieves lower communication complexity. Both of these are a significant theoretical improvement over the previously best lattice construction by Bootle et al. (CRYPTO 2018), which gave $O(\sqrt{N})$-sized proofs.
2020
CRYPTO
Lower Bounds for Encrypted Multi-Maps and Searchable Encryption in the Leakage Cell Probe Model 📺
Encrypted multi-maps (EMMs) enable clients to outsource the storage of a multi-map to a potentially untrusted server while maintaining the ability to perform operations in a privacy-preserving manner. EMMs are an important primitive as they are an integral building block for many practical applications such as searchable encryption and encrypted databases. In this work, we formally examine the tradeoffs between privacy and efficiency for EMMs. Currently, all known dynamic EMMs with constant overhead reveal whether two operations are performed on the same key or not, a leakage we denote the global key-equality pattern. In our main result, we present strong evidence that the leakage of the global key-equality pattern is inherent for any dynamic EMM construction with $O(1)$ efficiency. In particular, we consider the slightly smaller leakage of the decoupled key-equality pattern, where leakage of key-equality between update and query operations is decoupled and the adversary only learns whether two operations of the same type are performed on the same key or not. We show that any EMM with at most decoupled key-equality pattern leakage incurs $\Omega(\log n)$ overhead in the leakage cell probe model. This is tight, as there exist ORAM-based constructions of EMMs with logarithmic slowdown that leak no more than the decoupled key-equality pattern (and, actually, much less). Furthermore, we present stronger lower bounds showing that encrypted multi-maps that leak at most the decoupled key-equality pattern but are able to perform one of either the update or query operations in the plaintext still require $\Omega(\log n)$ overhead. Finally, we extend our lower bounds to show that dynamic, response-hiding searchable encryption schemes must also incur $\Omega(\log n)$ overhead even when one of either the document updates or searches may be performed in the plaintext.
2020
CRYPTO
Generically Speeding-Up Repeated Squaring is Equivalent to Factoring: Sharp Thresholds for All Generic-Ring Delay Functions 📺
Despite the fundamental importance of delay functions, repeated squaring in RSA groups (Rivest, Shamir and Wagner '96) is the only candidate offering both a useful structure and a realistic level of practicality. Somewhat unsatisfyingly, its sequentiality is provided directly by assumption (i.e., the function is assumed to be a delay function). We prove sharp thresholds on the sequentiality of all generic-ring delay functions relative to an RSA modulus based on the hardness of factoring in the standard model. In particular, we show that generically speeding up repeated squaring (even with a preprocessing stage and any polynomial number of parallel processors) is equivalent to factoring. More generally, based on the (essential) hardness of factoring, we prove that any generic-ring function is in fact a delay function, admitting a sharp sequentiality threshold that is determined by our notion of sequentiality depth. Moreover, we show that generic-ring functions admit not only sharp sequentiality thresholds, but also sharp pseudorandomness thresholds.
2020
CRYPTO
Scalable Pseudorandom Quantum States 📺
Efficiently sampling a quantum state that is hard to distinguish from a truly random quantum state is an elementary task in quantum information theory that has both computational and physical uses. This is often referred to as a pseudorandom (quantum) state generator, or PRS generator for short. In existing constructions of PRS generators, security scales with the number of qubits in the states, i.e.\ the (statistical) security parameter for an $n$-qubit PRS is roughly $n$. Perhaps counter-intuitively, $n$-qubit PRS are not known to imply $k$-qubit PRS even for $k<n$. Therefore the question of \emph{scalability} for PRS was thus far open: is it possible to construct $n$-qubit PRS generators with security parameter $\lambda$ for all $n, \lambda$? Indeed, we believe that PRS with tiny (even constant) $n$ and large $\lambda$ can be quite useful. We resolve the problem in this work, showing that any quantum-secure one-way function implies scalable PRS. We follow the paradigm of first showing a \emph{statistically} secure construction when given oracle access to a random function, and then replacing the random function with a quantum-secure (classical) pseudorandom function to achieve computational security. However, our methods deviate significantly from prior works since scalable pseudorandom states require randomizing the amplitudes of the quantum state, and not just the phase as in all prior works. We show how to achieve this using Gaussian sampling.
2020
CRYPTO
Overcoming Impossibility Results in Composable Security using Interval-Wise Guarantees 📺
Composable security definitions, at times called simulation-based definitions, provide strong security guarantees that hold in any context. However, they are also met with some skepticism due to many impossibility results; goals such as commitments and zero-knowledge that are achievable in a stand-alone sense were shown to be unachievable composably (without a setup) since provably no efficient simulator exists. In particular, in the context of adaptive security, the so-called "simulator commitment problem" arises: once a party gets corrupted, an efficient simulator is unable to be consistent with its pre-corruption outputs. A natural question is whether such impossibility results are unavoidable or only artifacts of frameworks being too restrictive. In this work, we propose a novel type of composable security statement that evades the commitment problem. Our new type is able to express the composable guarantees of schemes that previously did not have a clear composable understanding. To this end, we leverage the concept of system specifications in the Constructive Cryptography framework, capturing the conjunction of several interval-wise guarantees, each specifying the guarantees between two events. We develop the required theory and present the corresponding new composition theorem. We present three applications of our theory. First, we show in the context of symmetric encryption with adaptive corruption how our notion naturally captures the expected confidentiality guarantee---the messages remain confidential until either party gets corrupted---and that it can be achieved by any standard semantically secure scheme (negating the need for non-committing encryption). Second, we present a composable formalization of (so far only known to be standalone secure) commitment protocols, which is instantiable without a trusted setup like a CRS. We show it to be sufficient for being used in coin tossing over the telephone, one of the early intuitive applications of commitments. Third, we reexamine a result by Hofheinz, Matt, and Maurer [Asiacrypt'15] implying that IND-ID-CPA security is not the right notion for identity-based encryption, unmasking this claim as an unnecessary framework artifact.
2020
CRYPTO
Adaptively Secure Constrained Pseudorandom Functions in the Standard Model 📺
Constrained pseudorandom functions (CPRFs) allow learning "constrained" PRF keys that can evaluate the PRF on a subset of the input space, or based on some predicate. First introduced by Boneh and Waters [AC'13], Kiayias et al. [CCS'13] and Boyle et al. [PKC'14], they have been shown to be a useful cryptographic primitive with many applications. These applications often require CPRFs to be adaptively secure, which allows the adversary to learn PRF values and constrained keys in an arbitrary order. However, there is no known construction of adaptively secure CPRFs based on a standard assumption in the standard model for any non-trivial class of predicates. Moreover, even if we rely on strong tools such as indistinguishability obfuscation (IO), the state-of-the-art construction of adaptively secure CPRFs in the standard model only supports the limited class of NC1 predicates. In this work, we develop new adaptively secure CPRFs for various predicates from different types of assumptions in the standard model. Our results are summarized below. - We construct adaptively secure and O(1)-collusion-resistant CPRFs for t-conjunctive normal form (t-CNF) predicates from one-way functions (OWFs), where t is a constant. Here, O(1)-collusion-resistance means that we can allow the adversary to obtain a constant number of constrained keys. Note that t-CNF includes bit-fixing predicates as a special case. - We construct adaptively secure and single-key CPRFs for inner-product predicates from the learning with errors (LWE) assumption. Here, single-key means that we only allow the adversary to learn one constrained key. Note that inner-product predicates include t-CNF predicates for a constant t as a special case. Thus, this construction supports a more expressive class of predicates than the first construction, though it loses collusion-resistance and relies on a stronger assumption. - We construct adaptively secure and O(1)-collusion-resistant CPRFs for all circuits from the LWE assumption and indistinguishability obfuscation (IO). The first and second constructions are the first CPRFs for any non-trivial class of predicates to achieve adaptive security without relying on the random oracle model or strong cryptographic assumptions. Moreover, the first construction is also the first to achieve any notion of collusion-resistance in this setting. In addition, we prove that the first and second constructions satisfy weak 1-key privacy, which roughly means that a constrained key does not reveal the corresponding constraint. The third construction is an improvement over previous adaptively secure CPRFs for less expressive predicates based on IO in the standard model.
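As background, the textbook example of a constrained PRF is the GGM tree with prefix constraints: the constrained key for a prefix is simply the tree node reached by that prefix, and it evaluates the PRF exactly on inputs extending that prefix. The sketch below illustrates this classic construction only; it is selectively (not adaptively) secure, which is precisely the gap the paper addresses.

```python
import hashlib
import hmac

def step(node: bytes, bit: str) -> bytes:
    # One GGM step: select one half of a length-doubling PRG, here
    # modeled by HMAC-SHA256 keyed with the current node.
    return hmac.new(node, bit.encode(), hashlib.sha256).digest()

def ggm_eval(key: bytes, x: str) -> bytes:
    # Full PRF evaluation: walk the GGM tree along the bits of x.
    node = key
    for bit in x:
        node = step(node, bit)
    return node

def constrain(key: bytes, prefix: str) -> bytes:
    # Constrained key for the set {x : x starts with prefix}.
    return ggm_eval(key, prefix)

def constrained_eval(ckey: bytes, prefix: str, x: str) -> bytes:
    assert x.startswith(prefix), "input outside the constrained set"
    return ggm_eval(ckey, x[len(prefix):])

master = b"\x00" * 32
ck = constrain(master, "01")
assert constrained_eval(ck, "01", "0110") == ggm_eval(master, "0110")
print("constrained evaluation matches the full PRF")
```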
2020
CRYPTO
Functional Encryption for Attribute-Weighted Sums from k-Lin 📺
We present functional encryption schemes for attribute-weighted sums, where encryption takes as input N attribute-value pairs (x_i,z_i), where x_i is public and z_i is private; secret keys are associated with arithmetic branching programs f, and decryption returns the weighted sum \sum_{i=1}^N f(x_i) z_i while leaking no additional information about the z_i's. Our main construction achieves (1) compact public parameters and key sizes that are independent of N, where the secret key can decrypt a ciphertext for any a-priori unbounded N; (2) short ciphertexts that grow with N and the size of z_i but not x_i; (3) simulation-based security against unbounded collusions; and (4) security under the standard k-linear assumption in prime-order bilinear groups.
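For intuition, the functionality itself is easy to state in code: a secret key for f reveals exactly the value computed below and nothing else about the private z_i. This hedged sketch computes only the plaintext functionality; the actual scheme realizes it over prime-order bilinear groups, and the example f is an arbitrary illustrative predicate.

```python
# The functionality the FE scheme computes: decryption with a key for f
# reveals only this attribute-weighted sum, nothing else about the z_i.
def attribute_weighted_sum(pairs, f):
    # pairs: list of (x_i, z_i); x_i public attribute, z_i private value
    return sum(f(x) * z for x, z in pairs)

# Example: f selects records with attribute x >= 18 (in the paper, f is
# an arithmetic branching program; here any Python function will do).
data = [(17, 100), (21, 250), (30, 40)]
print(attribute_weighted_sum(data, lambda x: 1 if x >= 18 else 0))  # 290
```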
2020
CRYPTO
Multiparty Generation of an RSA Modulus 📺
We present a new multiparty protocol for the distributed generation of biprime RSA moduli, with security against any subset of maliciously colluding parties assuming oblivious transfer and the hardness of factoring. Our protocol is highly modular, and its uppermost layer can be viewed as a template that generalizes the structure of prior works and leads to a simpler security proof. We introduce a combined sampling-and-sieving technique that eliminates both the inherent leakage in the approach of Frederiksen et al. (Crypto'18), and the dependence upon additively homomorphic encryption in the approach of Hazay et al. (JCrypt'19). We combine this technique with an efficient, privacy-free check to detect malicious behavior retroactively when a sampled candidate is not a biprime, and thereby overcome covert rejection-sampling attacks and achieve both asymptotic and concrete efficiency improvements over the previous state of the art.
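The control flow of sampling-and-sieving is easy to illustrate in the clear. The sketch below is single-party and insecure by design: the protocol in the paper performs the analogous sampling, cheap sieving, and expensive biprimality checking jointly on secret-shared candidates; all parameters here are toy values.

```python
# A single-party sketch of sample-and-sieve: draw candidate factors,
# cheaply reject candidates divisible by small primes, and only then
# run an expensive (bi)primality check.  Nothing here is distributed
# or secure; it only illustrates the control flow.
import secrets

SMALL_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def survives_sieve(p: int) -> bool:
    return all(p % q != 0 for q in SMALL_PRIMES)

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    # Standard Miller-Rabin test.
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def sample_biprime(bits: int = 64) -> int:
    while True:
        p = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        q = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if not (survives_sieve(p) and survives_sieve(q)):
            continue  # cheap rejection: the common case
        if is_probable_prime(p) and is_probable_prime(q):
            return p * q

print(sample_biprime(64).bit_length())
```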
2020
CRYPTO
A key-recovery timing attack on post-quantum primitives using the Fujisaki-Okamoto transformation and its application on FrodoKEM 📺
In the implementation of post-quantum primitives, it is well known that all computations that handle secret information need to be implemented to run in constant time. Using the Fujisaki-Okamoto transformation or any of its different variants, a CPA-secure primitive can be converted into an IND-CCA-secure KEM. In this paper we show that although the transformation does not handle secret information apart from calls to the CPA-secure primitive, it has to be implemented in constant time. Namely, if the ciphertext comparison step in the transformation leaks side-channel information, we can launch a key-recovery attack. Several schemes proposed in round 2 of the NIST post-quantum standardization project are susceptible to the proposed attack, and we develop and demonstrate the details of the attack on one of them, namely FrodoKEM. The attack is implemented against the reference implementation of FrodoKEM, which is claimed to be secure against all timing attacks. In our experiments, the attack code is able to extract the secret key for all security levels using about $2^{30}$ decapsulation calls.
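The attack surface is simple to picture. Below is a hedged sketch of the two comparison styles using Python's standard library; this is not FrodoKEM's actual code, only an illustration of the leak the paper exploits in the decapsulation's re-encryption check.

```python
# FO decapsulation re-encrypts and compares against the received
# ciphertext.  A naive byte-by-byte comparison returns early at the
# first mismatch, so its running time leaks *where* a crafted
# ciphertext first differs -- the side channel behind the attack.
import hmac

def leaky_compare(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):      # early exit: time depends on the data
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    return hmac.compare_digest(a, b)   # the standard constant-time fix
```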
2020
CRYPTO
Interactive Proofs for Social Graphs 📺
We consider interactive proofs for social graphs, where the verifier has only oracle access to the graph and can query for the $i^{th}$ neighbor of a vertex $v$, given $i$ and $v$. In this model, we construct a doubly-efficient public-coin two-message interactive protocol for estimating the size of the graph to within a multiplicative factor $\epsilon>0$. The verifier performs $\widetilde{O}(1/\epsilon^2 \cdot \tau \cdot \Delta)$ queries to the graph, where $\tau$ is the mixing time of the graph and $\Delta$ is its average degree. The prover runs in time quasi-linear in the number of nodes in the graph. Furthermore, we develop a framework for computing the average of essentially any (reasonable) function $f$ of vertices of the graph. Using this framework, we can estimate many health measures of social graphs, such as the clustering coefficient and the average degree, with the verifier performing only a small number of queries to the graph. Using the Fiat-Shamir paradigm, we are able to transform the above protocols into non-interactive arguments in the random oracle model. The result is that any social media company (Facebook, Twitter, etc.) can publish, once and for all, a short proof for the size or health of its social network. This proof can be publicly verified by any single user using a small number of queries to the graph.
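To fix ideas, here is a small sketch of the verifier's access model: the graph is available only through a neighbor oracle, and random walks (run for roughly the mixing time $\tau$) are the basic primitive. This illustrates only the query interface, not the protocol; the class and function names are illustrative.

```python
# The verifier's view of the graph: only (v, i) -> i-th neighbor
# queries.  The protocol builds its estimates on random walks through
# this oracle; below is just the interface and a walk.
import random

class NeighborOracle:
    def __init__(self, adj):            # adj: dict vertex -> neighbor list
        self._adj = adj
    def neighbor(self, v, i):
        return self._adj[v][i]
    def degree(self, v):                # in the model, recoverable with
        return len(self._adj[v])        # O(log n) neighbor queries

def random_walk(oracle, start, steps, rng=random):
    v = start
    for _ in range(steps):              # run for ~the mixing time tau
        v = oracle.neighbor(v, rng.randrange(oracle.degree(v)))
    return v

g = NeighborOracle({0: [1, 2], 1: [0, 2], 2: [0, 1]})
print(random_walk(g, 0, 10))
```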
2020
CRYPTO
NIZK from LPN and Trapdoor Hash via Approximate-Correlation Intractability 📺
We present new Non-Interactive Zero-Knowledge argument systems (NIZK), based on standard assumptions that were previously not known to imply it. In particular, we rely on the hardness of both the learning parity with noise (LPN) assumption, and the existence of trapdoor hash functions (TDH, defined by Döttling et al., Crypto 2019). TDH can be based on a number of standard assumptions, including DDH, QR, DCR, and LWE. We rely on the Correlation Intractability (CI) framework for converting \Sigma-protocols into NIZK, but deviate from prior works in considering CI for searchable relations where the search function has a probabilistic representation by a simple function class (linear or constant degree in our instantiations). Namely, there is a distribution over simple functions that computes each output bit of the search function with all but small (constant) probability. We present a new tool for proving CI for such function classes via a notion that we call Approximate-Correlation Intractability. This notion requires that CI holds even against approximations of a given function class. We show that approximate-correlation intractability for just constant degree functions suffices if the underlying \Sigma-protocol is implemented using an extractable commitment scheme with approximately low-degree extraction, and that such a commitment scheme can be constructed based on LPN. We then show how to construct approximate CI hash functions for this class from any suitable rate-1 TDH (with an enhanced correctness property that is satisfied by all existing constructions).
2020
CRYPTO
Random Self-reducibility of Ideal-SVP via Arakelov Random Walks 📺
Fixing a number field, the space of all ideal lattices, up to isometry, is naturally an Abelian group, called the *Arakelov class group*. This fact, well known to number theorists, has so far not been explicitly used in the literature on lattice-based cryptography. Remarkably, the Arakelov class group is a combination of two groups that have already led to significant cryptanalytic advances: the class group and the unit torus. In the present article, we show that the Arakelov class group has more to offer. We start with the development of a new versatile tool: we prove that, subject to the Riemann Hypothesis for Hecke L-functions, certain random walks on the Arakelov class group have a rapid mixing property. We then exploit this result to relate the average-case and the worst-case of the Shortest Vector Problem in ideal lattices. Our reduction appears particularly sharp: for Hermite-SVP in ideal lattices of certain cyclotomic number fields, it loses no more than a $\tilde O(\sqrt n)$ factor on the Hermite approximation factor. Furthermore, we suggest that this rapid-mixing theorem should find other applications in cryptography and in algorithmic number theory.
2020
CRYPTO
Nearly Optimal Robust Secret Sharing against Rushing Adversaries 📺
Robust secret sharing is a strengthening of standard secret sharing that allows the shared secret to be recovered even if some of the shares being used in the reconstruction have been adversarially modified. In this work, we study the setting where out of all the $n$ shares, the adversary is allowed to adaptively corrupt and modify up to $t$ shares, where $n = 2t+1$ (note that if the adversary is allowed to modify any more shares, then correct reconstruction would be impossible). Further, we deal with \emph{rushing} adversaries, meaning that the adversary is allowed to see the honest parties' shares before modifying its own shares. It is known that when $n = 2t+1$, to share a secret of length $m$ bits and recover it with error less than $2^{-\lambda}$, shares of size at least $m+\lambda$ bits are needed. Recently, Bishop, Pastro, Rajaraman, and Wichs (EUROCRYPT 2016) constructed a robust secret sharing scheme with shares of size $m + O(\lambda\cdot\mathrm{polylog}(n,m,\lambda))$ bits that is secure in this setting against non-rushing adversaries. Later, Fehr and Yuan (EUROCRYPT 2019) constructed a scheme that is secure against rushing adversaries, but has shares of size $m + O(\lambda\cdot n^{\epsilon}\cdot \mathrm{polylog}(n,m,\lambda))$ bits for an arbitrary constant $\epsilon > 0$. They also showed a variant of their construction with share size $m + O(\lambda\cdot\mathrm{polylog}(n,m,\lambda))$ bits, but with super-polynomial reconstruction time. We present a robust secret sharing scheme that is simultaneously close-to-optimal in all of these respects -- it is secure against rushing adversaries, has shares of size $m+O(\lambda \log{n} (\log{n}+\log{m}))$ bits, and has polynomial-time sharing and reconstruction. Central to our construction is a polynomial-time algorithm for a problem on semi-random graphs that arises naturally in the paradigm of local authentication of shares used by us and in the aforementioned work.
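For intuition about "local authentication of shares", here is a hedged sketch of the classic all-pairs baseline (in the style of Rabin and Ben-Or), where every party holds MAC keys for every other party's share; the paper's scheme obtains far smaller shares by authenticating only across a sparse random graph. All names and parameters below are illustrative.

```python
# Baseline idea: Shamir shares plus, for every pair (i, j), a MAC tag
# on party i's share under a key held by party j.  At reconstruction,
# a modified share is rejected unless it fools the honest verifiers'
# MACs.  This is the all-pairs baseline, not the paper's construction.
import hmac, hashlib, secrets

P = 2**127 - 1  # a Mersenne prime field

def shamir_share(secret, n, t):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def tag(key: bytes, share) -> bytes:
    x, y = share
    return hmac.new(key, f"{x},{y}".encode(), hashlib.sha256).digest()

n, t = 5, 2                              # n = 2t + 1
shares = shamir_share(1234, n, t)
keys = [[secrets.token_bytes(16) for _ in range(n)] for _ in range(n)]
# tags[i][j]: tag on party i's share under party j's key keys[j][i]
tags = [[tag(keys[j][i], shares[i]) for j in range(n)] for i in range(n)]
assert hmac.compare_digest(tags[0][1], tag(keys[1][0], shares[0]))
```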
2020
CRYPTO
Efficient Constant-Round MPC with Identifiable Abort and Public Verifiability 📺
Recent years have seen a tremendous growth in the interest in secure multiparty computation (MPC) and its applications. While much progress has been made concerning its efficiency, many current, state-of-the-art protocols are vulnerable to Denial of Service attacks, where a cheating party may prevent the honest parties from learning the output of the computation, whilst remaining anonymous. The security model of identifiable abort aims to prevent these attacks, by allowing honest parties to agree upon the identity of a cheating party, who can then be excluded in the future. Several existing MPC protocols offer security with identifiable abort against a dishonest majority of corrupted parties. However, all of these protocols have a round complexity that scales linearly with the depth of the circuit (and are therefore unsuitable for use in high-latency networks) or use cryptographic primitives or techniques that have a high computational overhead. In this work, we present the first efficient MPC protocols with identifiable abort in the dishonest majority setting, which run in a constant number of rounds and make only black-box use of cryptographic primitives. Our main construction is built from highly efficient primitives in a careful way to achieve identifiability at a low cost. In particular, we avoid the use of public-key operations outside of a setup phase, incurring a relatively low overhead on top of the fastest currently known constant-round MPC protocols based on garbled circuits. Our construction also avoids the use of adaptively secure primitives and heavy zero-knowledge machinery, which was inherent in previous works. In addition, we show how to upgrade our protocol to achieve public verifiability using a public bulletin board, allowing any external party to verify correctness of the computation or identify a cheating party.
2020
CRYPTO
Shorter Non-Interactive Zero-Knowledge Arguments and ZAPs for Algebraic Languages 📺
We put forth a new framework for building pairing-based non-interactive zero-knowledge (NIZK) arguments for a wide class of algebraic languages, which are an extension of linear languages, containing disjunctions of linear languages and more. Our approach differs from the Groth-Sahai methodology in that we rely on pairings to compile a Sigma-protocol into a NIZK. Our framework enjoys a number of interesting features:
- conceptual simplicity: parameters derive from the Sigma-protocol;
- proofs as short as those resulting from the Fiat-Shamir heuristic applied to the underlying Sigma-protocol;
- fully adaptive soundness and perfect zero-knowledge in the common random string model with a single random group element as CRS;
- simple and efficient two-round, public-coin, publicly verifiable perfect witness-indistinguishable (WI) arguments (ZAPs) in the plain model.
To our knowledge, this is the first construction of two-round statistically witness-indistinguishable arguments from pairing assumptions. Our proof system relies on a new (static, falsifiable) assumption over pairing groups which generalizes the standard kernel Diffie-Hellman assumption in a natural way and holds in the generic group model (GGM) and in the algebraic group model (AGM). Replacing Groth-Sahai NIZKs with our new proof system allows us to improve several important cryptographic primitives. In particular, we obtain the shortest tightly-secure structure-preserving signature scheme (a core component in anonymous credentials), the shortest tightly-secure quasi-adaptive NIZK with unbounded simulation soundness (which in turn implies the shortest tightly-mCCA-secure cryptosystem), and shorter ring signatures.
2020
CRYPTO
Non-Malleability against Polynomial Tampering 📺
We present the first explicit construction of a non-malleable code that can handle tampering functions that are bounded-degree polynomials. Prior to our work, this was only known for degree-1 polynomials (affine tampering functions), due to Chattopadhyay and Li (STOC 2017). As a direct corollary, we obtain an explicit non-malleable code that is secure against tampering by bounded-size arithmetic circuits. We show applications of our non-malleable code in constructing non-malleable secret sharing schemes that are robust against bounded-degree polynomial tampering. In fact our result is stronger: we can handle adversaries that can adaptively choose the polynomial tampering function based on initial leakage of a bounded number of shares. Our results are derived from explicit constructions of seedless non-malleable extractors that can handle bounded-degree polynomial tampering functions. Prior to our work, no such result was known even for degree-2 (quadratic) polynomials.
2020
CRYPTO
The Summation-Truncation Hybrid: Reusing Discarded Bits for Free 📺
A well-established PRP-to-PRF conversion design is truncation: one evaluates an $n$-bit pseudorandom permutation on a certain input and truncates the result to $a$ bits. The construction is known to achieve tight $2^{n-a/2}$ security. Truncation has gained popularity due to its appearance in the GCM-SIV key derivation function (ACM CCS 2015). This key derivation function makes four evaluations of AES, truncates the outputs to $n/2$ bits, and concatenates these to get a $2n$-bit subkey. In this work, we demonstrate that truncation is wasteful. In more detail, we present the Summation-Truncation Hybrid (STH). At a high level, the construction consists of two parallel evaluations of truncation, where the truncated $(n-a)$-bit chunks are not discarded but rather summed together and appended to the output. We prove that STH achieves a similar security level as truncation, and thus that the $n-a$ bits of extra output come for free. In the application to GCM-SIV, the current key derivation can be used to output $3n$ bits of random material, or it can be reduced to three primitive evaluations. Both changes come with no security loss.
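The construction is simple enough to sketch directly. In the hedged sketch below, the summation is modelled as XOR and the PRP is a toy Feistel network standing in for a real permutation such as AES; only the STH data flow (truncate twice, sum and append the discarded chunks) is the point.

```python
# Summation-Truncation Hybrid: two parallel PRP calls, keep the a-bit
# truncations as usual, and XOR the two (n-a)-bit remainders together
# instead of discarding them.  Toy 4-round Feistel PRP, n = 128.
import hashlib

N_BYTES = 16  # toy n = 128-bit permutation

def _F(key: bytes, rnd: int, half: bytes) -> bytes:
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:N_BYTES // 2]

def toy_prp(key: bytes, block: bytes) -> bytes:
    l, r = block[:8], block[8:]
    for rnd in range(4):
        l, r = r, bytes(a ^ b for a, b in zip(l, _F(key, rnd, r)))
    return l + r

def sth(key: bytes, u1: bytes, u2: bytes, a_bytes: int) -> bytes:
    y1, y2 = toy_prp(key, u1), toy_prp(key, u2)
    kept = y1[:a_bytes] + y2[:a_bytes]                 # plain truncation
    extra = bytes(x ^ y for x, y in zip(y1[a_bytes:], y2[a_bytes:]))
    return kept + extra          # the (n - a) extra bits come for free

out = sth(b"k" * 16, b"\x01" * 16, b"\x02" * 16, a_bytes=8)
print(len(out) * 8, "output bits from two 128-bit PRP calls")  # 192
```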
2020
CRYPTO
Security Analysis and Improvements for the IETF MLS Standard for Group Messaging 📺
Secure messaging (SM) protocols allow users to communicate securely over untrusted infrastructure. In contrast to most other secure communication protocols (such as TLS, SSH, or Wireguard), SM sessions may be long-lived (e.g., years) and highly asynchronous. In order to deal with likely state compromises of users during the lifetime of a session, SM protocols not only protect authenticity and privacy, but also guarantee forward secrecy (FS) and post-compromise security (PCS). The former ensures that messages sent and received before a state compromise remain secure, while the latter ensures that users can recover from state compromise as a consequence of normal protocol usage. SM has received considerable attention in the two-party case, where prior work has studied the well-known double-ratchet paradigm, in particular, and SM as a cryptographic primitive, in general. Unfortunately, this paradigm does not scale well to the problem of secure group messaging (SGM). In order to address the lack of satisfactory SGM protocols, the IETF has launched the message-layer security (MLS) working group, which aims to standardize an eponymous SGM protocol. In this work we analyze the TreeKEM protocol, which is at the core of the SGM protocol proposed by the MLS working group. On a positive note, we show that TreeKEM achieves PCS in isolation (and slightly more). However, we observe that the current version of TreeKEM does not provide an adequate form of FS. More precisely, our work proceeds by formally capturing the exact security of TreeKEM as a so-called continuous group key agreement (CGKA) protocol, which we believe to be a primitive of independent interest. To address the insecurity of TreeKEM, we propose a simple modification to TreeKEM inspired by recent work of Jost et al. (EUROCRYPT '19) and an idea due to Kohbrok (MLS Mailing List). We then show that the modified version of TreeKEM comes with almost no efficiency degradation but achieves optimal (according to the MLS specification) CGKA security, including FS and PCS. Our work also lays out how a CGKA protocol can be used to design a full SGM protocol.
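For intuition, the schematic core of a TreeKEM-style path update can be sketched as a hash chain from a fresh leaf secret up to a new root secret; each derived secret is also encrypted to the corresponding sibling subtree, which is omitted here. This is a schematic sketch under those simplifications, not the MLS wire format and not the paper's modified protocol.

```python
# Schematic TreeKEM path update: an updating member samples a fresh
# leaf secret and derives one secret per node on its path to the root,
# so the group converges on a root secret untainted by old state (PCS).
import hashlib, secrets

def derive_path_secrets(depth: int):
    s = secrets.token_bytes(32)            # fresh leaf secret
    path = [s]
    for _ in range(depth):                 # one derivation per tree level
        s = hashlib.sha256(b"path" + s).digest()
        path.append(s)
    return path                            # path[-1]: the new root secret

print(len(derive_path_secrets(4)), "secrets on the update path")
```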
2020
CRYPTO
LWE with Side Information: Attacks and Concrete Security Estimation 📺
We propose a framework for cryptanalysis of lattice-based schemes, when side information --in the form of "hints''-- about the secret and/or error is available. Our framework generalizes the so-called primal lattice reduction attack, and allows the progressive integration of hints before running a final lattice reduction step. Our techniques for integrating hints include sparsifying the lattice, projecting onto and intersecting with hyperplanes, and/or altering the distribution of the secret vector. Our main contribution is to propose a toolbox and a methodology to integrate such hints into lattice reduction attacks and to predict the performance of those lattice attacks with side information. While initially designed for side-channel information, our framework can also be used in other cases: exploiting decryption failures, or simply exploiting constraints imposed by certain schemes (LAC, Round5, NTRU) that were previously not known to (slightly) benefit from lattice attacks. We implement a Sage 9.0 toolkit to actually mount such attacks with hints when computationally feasible, and to predict their performance on larger instances. We provide several end-to-end application examples, such as an improvement of a single-trace attack on Frodo by Bos et al. (SAC 2018). Contrary to ad-hoc practical attacks exploiting side-channel leakage, our work is a generic way to estimate security loss even given very little side-channel information.
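As a taste of hint integration, one type of hint considered by such frameworks is a noiseless linear equation on the secret, <v, s> = l. On the distribution side, integrating it amounts to conditioning the Gaussian model of the secret on that equation, which is the standard conditional-Gaussian update below. This is a hedged sketch of that one step only; the full toolkit also transforms the lattice itself.

```python
# Conditioning the attacker's Gaussian model of the secret on one
# linear hint <v, s> = l (standard conditional-Gaussian formulas).
import numpy as np

def integrate_perfect_hint(mu, Sigma, v, l):
    """Condition s ~ N(mu, Sigma) on v.s = l; return updated (mu, Sigma)."""
    Sv = Sigma @ v
    denom = v @ Sv                        # scalar v^T Sigma v
    mu2 = mu + Sv * (l - v @ mu) / denom
    Sigma2 = Sigma - np.outer(Sv, Sv) / denom
    return mu2, Sigma2

# Toy: 3-dim secret, hint s_0 + s_2 = 1.
mu, Sigma = np.zeros(3), np.eye(3)
mu2, Sigma2 = integrate_perfect_hint(mu, Sigma, np.array([1.0, 0, 1.0]), 1.0)
print(mu2)                  # [0.5 0.  0.5]
print(Sigma2.diagonal())    # variances shrink: the hint was learned
```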
2020
CRYPTO
Non-Malleable Secret Sharing against Bounded Joint-Tampering Attacks in the Plain Model 📺
Secret sharing enables a dealer to split a secret into a set of shares, in such a way that certain authorized subsets of share holders can reconstruct the secret, whereas all unauthorized subsets cannot. Non-malleable secret sharing (Goyal and Kumar, STOC 2018) additionally requires that, even if the shares have been tampered with, the reconstructed secret is either the original or a completely unrelated one. In this work, we construct non-malleable secret sharing tolerating $p$-time {\em joint-tampering} attacks in the plain model (in the computational setting), where the latter means that, for any $p>0$ fixed {\em a priori}, the attacker can tamper with the same target secret sharing up to $p$ times. In particular, assuming one-to-one one-way functions, we obtain:
- A secret sharing scheme for threshold access structures which tolerates joint $p$-time tampering with subsets of the shares of maximal size ({\em i.e.}, matching the privacy threshold of the scheme). This holds in a model where the attacker commits to a partition of the shares into non-overlapping subsets, and keeps tampering jointly with the shares within such a partition (so-called {\em selective partitioning}).
- A secret sharing scheme for general access structures which tolerates joint $p$-time tampering with subsets of the shares of size $O(\sqrt{\log n})$, where $n$ is the number of parties. This holds in a stronger model where the attacker is allowed to adaptively change the partition within each tampering query, under the restriction that once a subset of the shares has been tampered with jointly, that subset is always either tampered jointly or not modified by other tampering queries (so-called {\em semi-adaptive partitioning}).
At the heart of our result for selective partitioning lies a new technique showing that every one-time {\em statistically} non-malleable secret sharing against joint tampering is in fact {\em leakage-resilient} non-malleable ({\em i.e.}, the attacker can leak jointly from the shares prior to tampering). We believe this may be of independent interest, and in fact we show it implies lower bounds on the share size and randomness complexity of statistically non-malleable secret sharing against {\em independent} tampering.
2020
CRYPTO
Stacked Garbling: Garbled Circuit Proportional to Longest Execution Path 📺
Secure two party computation (2PC) of arbitrary programs can be efficiently achieved using garbled circuits (GC). The bottleneck of GC efficiency is communication. It is widely believed that it is necessary to transmit the entire GC during 2PC, even for conditional branches that are not taken. This folklore belief is false. We propose a novel GC technique, stacked garbling, that eliminates the communication cost of inactive conditional branches. We extend the ideas of conditional GC evaluation explored in (Kolesnikov, Asiacrypt 18) and (Heath and Kolesnikov, Eurocrypt 20). Unlike these works, ours is for general 2PC where no player knows which conditional branch is taken. Our garbling scheme, Stack, requires communication proportional to the longest execution path rather than to the entire circuit. Stack is compatible with state-of-the-art techniques, such as free XOR and half-gates. Stack is a garbling scheme. As such, it can be plugged into a variety of existing protocols, and the resulting round complexity is the same as that of standard GC. The approach does incur a computation cost quadratic in the conditional branching factor, versus linear in standard schemes, but the tradeoff is beneficial for most programs: GC computation even on weak hardware is faster than GC transmission on fast channels. We implemented Stack in C++. Stack reduces communication cost by approximately the branching factor: for 16 branches, communication is reduced by 10.5x. In terms of wall-clock time for circuits with branching factor 16 over a 50 Mbps WAN on a laptop, Stack outperforms state-of-the-art half-gates-based 2PC by more than 4x.
2020
CRYPTO
Out of Oddity -- New Cryptanalytic Techniques against Symmetric Primitives Optimized for Integrity Proof Systems 📺
The security and performance of many integrity proof systems like SNARKs, STARKs and Bulletproofs highly depend on the underlying hash function. For this reason several new proposals have recently been developed. These primitives obviously require an in-depth security evaluation, especially since their implementation constraints have led to less standard design approaches. This work compares the security levels offered by two recent families of such primitives, namely GMiMC and HadesMiMC. We exhibit low-complexity distinguishers against the GMiMC and HadesMiMC permutations for most parameters proposed in recently launched public challenges for STARK-friendly hash functions. In the more concrete setting of the sponge construction corresponding to the practical use in the ZK-STARK protocol, we present a practical collision attack on a round-reduced version of GMiMC and a preimage attack on some instances of HadesMiMC. To achieve those results, we adapt and generalize several cryptographic techniques to fields of odd characteristic.
2020
CRYPTO
Fast and Secure Updatable Encryption 📺
Updatable encryption allows a client to outsource ciphertexts to some untrusted server and periodically rotate the encryption key. The server can update ciphertexts from an old key to a new key with the help of an update token, received from the client, which should not reveal anything about plaintexts to an adversary. We provide a new and highly efficient suite of updatable encryption schemes that we collectively call SHINE. In the variant designed for short messages, ciphertext generation consists of applying one permutation and one exponentiation (per message block), while updating ciphertexts requires just one exponentiation. Variants for longer messages provide much stronger security guarantees than prior work that has comparable efficiency. We present a new confidentiality notion for updatable encryption schemes that implies prior notions. We prove that SHINE is secure under our new confidentiality definition while also providing ciphertext integrity.
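The rotate-by-exponentiation idea at the heart of such schemes can be sketched in a few lines. The toy group and bare message encoding below are illustrative only; the real scheme first applies a permutation to the (nonce-prefixed) message, uses a proper prime-order group, and proves much stronger confidentiality and integrity properties.

```python
# Key rotation by exponentiation: the ciphertext is (a permutation of)
# the message raised to the epoch key, the token is the ratio of two
# epoch keys, so the untrusted server rotates ciphertexts with one
# exponentiation and never sees any key.  Toy group Z_p^*.
import math, secrets

P = 2**127 - 1                       # toy prime modulus
ORDER = P - 1                        # exponents live mod the group order

def keygen():
    while True:
        k = secrets.randbelow(ORDER - 2) + 2
        if math.gcd(k, ORDER) == 1:  # k must be invertible mod ORDER
            return k

def encrypt(k, m):                   # m stands in for pi(message||nonce)
    return pow(m, k, P)

def token(k_old, k_new):             # delta = k_new / k_old (mod ORDER)
    return (k_new * pow(k_old, -1, ORDER)) % ORDER

def update(c, delta):                # performed by the untrusted server
    return pow(c, delta, P)

k1, k2 = keygen(), keygen()
c1 = encrypt(k1, 123456789)
c2 = update(c1, token(k1, k2))
assert c2 == encrypt(k2, 123456789)  # rotated ciphertext matches fresh one
```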
2020
CRYPTO
Amplifying the Security of Functional Encryption, Unconditionally 📺
Security amplification is a fundamental problem in cryptography. In this work, we study security amplification for functional encryption (FE). We show two main results:
- For any constant epsilon in (0,1), we can amplify an epsilon-secure FE scheme for P/poly which is secure against all polynomial-sized adversaries to a fully secure FE scheme for P/poly, unconditionally.
- For any constant epsilon in (0,1), we can amplify an epsilon-secure FE scheme for P/poly which is secure against subexponential-sized adversaries to a subexponentially secure FE scheme for P/poly, unconditionally.
Furthermore, both of our amplification results preserve compactness of the underlying FE scheme. Previously, amplification results for FE were only known assuming subexponentially secure LWE. Along the way, we introduce a new form of homomorphic secret sharing called set homomorphic secret sharing that may be of independent interest. Additionally, we introduce a new technique, which allows one to argue security amplification of nested primitives, and prove a general theorem that can be used to analyze the security amplification of parallel repetitions.
2020
CRYPTO
A Classification of Computational Assumptions in the Algebraic Group Model 📺
We give a taxonomy of computational assumptions in the algebraic group model (AGM). We first analyze the Uber assumption family for bilinear groups defined by Boyen and then extend it in multiple ways to cover assumptions such as Gap Diffie-Hellman and the LRSW assumption. We show that in the AGM every member of these families reduces to the q-discrete logarithm (DL) problem, for some q that depends on the degrees of the polynomials defining the assumption. Using the meta-reduction technique, we then separate (q+1)-DL from q-DL, which thus yields a classification of all members of the extended Uber-assumption families. We finally show that there are strong assumptions, such as one-more DL, that provably fall outside our classification, as we prove that they cannot be reduced to q-DL even in the AGM.
2020
CRYPTO
Improved Primitives for MPC over Mixed Arithmetic-Binary Circuits 📺
This work introduces novel techniques to improve the translation between arithmetic and binary data types in multi-party computation. To this end, we introduce a new approach to performing these conversions, using what we call \emph{extended doubly-authenticated bits} (edaBits), which correspond to shared integers in the arithmetic domain whose bit decomposition is shared in the binary domain. These can be used to considerably increase the efficiency of non-linear operations such as truncation, secure comparison and bit-decomposition. Our edaBits are similar to the \emph{daBits} technique introduced by Rotaru et al.~(Indocrypt 2019). However, our main observations are that (1) applications that benefit from daBits can also benefit from edaBits in the same way, and (2) we can generate edaBits directly in a much more efficient way than computing them from a set of daBits. Technically, the second contribution is much more challenging, and involves a novel cut-and-choose technique that may be of independent interest, and requires taking advantage of natural tamper-resilient properties of binary circuits that occur in our construction to obtain the best level of efficiency. Finally, we show how our edaBits can be applied to efficiently implement various non-linear protocols of interest, and we thoroughly analyze their correctness for both signed and unsigned integers. The results of this work can be applied to any corruption threshold, although they seem best suited to dishonest-majority protocols such as SPDZ. We implement and benchmark our constructions, and experimentally verify that our techniques yield a substantial increase in efficiency. Our edaBits save in communication by a factor that lies between $2$ and $170$ for secure comparisons with respect to a purely arithmetic approach, and between $2$ and $60$ with respect to using daBits. Improvements in throughput per second are more subdued but still as high as a factor of $47$. We also apply our novel machinery to the tasks of biometric matching and convolutional neural networks, obtaining a noticeable improvement as well.
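Concretely, an edaBit is the same random value shared twice, once in each domain. The sketch below just produces the correlated object for two parties in the clear; generating it securely (via the paper's cut-and-choose) is the actual contribution, and the ring size here is an arbitrary toy choice.

```python
# An edaBit for two parties: a random r shared additively mod 2^k in
# the arithmetic domain, and bitwise (XOR shares per bit) in the
# binary domain.  Insecure generation in the clear, for illustration.
import secrets

K = 32                                # ring Z_{2^32}
MOD = 1 << K

def make_edabit():
    r = secrets.randbelow(MOD)
    # arithmetic shares: r = a0 + a1 (mod 2^k)
    a0 = secrets.randbelow(MOD)
    a1 = (r - a0) % MOD
    # binary shares of each bit: r_i = b0_i XOR b1_i
    bits = [(r >> i) & 1 for i in range(K)]
    b0 = [secrets.randbelow(2) for _ in range(K)]
    b1 = [bi ^ s for bi, s in zip(bits, b0)]
    return (a0, b0), (a1, b1)

p0, p1 = make_edabit()
r_arith = (p0[0] + p1[0]) % MOD
r_bin = sum((x ^ y) << i for i, (x, y) in enumerate(zip(p0[1], p1[1])))
assert r_arith == r_bin               # the same r in both domains
```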
2020
CRYPTO
Leakage-Resilient Key Exchange and Two-Seed Extractors 📺
Can Alice and Bob agree on a uniformly random secret key without having any truly secret randomness to begin with? Here we consider a setting where Eve can get partial leakage on the internal state of both Alice and Bob individually before the protocol starts. They then run a protocol using their states without any additional randomness and need to agree on a shared key that looks uniform to Eve, even after observing the leakage and the protocol transcript. We focus on non-interactive (one round) key exchange (NIKE), where Alice and Bob send one message each without waiting for one another. We first consider this problem in the symmetric-key setting, where the states of Alice and Bob include a shared secret as well as individual uniform randomness. However, since Eve gets leakage on these states, Alice and Bob need to perform privacy amplification to derive a fresh secret key from them. Prior solutions require Alice and Bob to sample fresh uniform randomness during the protocol, while in our setting all of their randomness was already part of their individual states a priori and was therefore subject to leakage. We show an information-theoretic solution to this problem using a novel primitive that we call a two-seed extractor, which we in turn construct by drawing a connection to communication-complexity lower-bounds in the number-on-forehead (NOF) model. We then turn to studying this problem in the public-key setting, where the states of Alice and Bob consist of independent uniform randomness. Unfortunately, we give a black-box separation showing that leakage-resilient NIKE in this setting cannot be proven secure via a black-box reduction under any game-based assumption when the leakage is super-logarithmic. This includes virtually all assumptions used in cryptography, and even very strong assumptions such as indistinguishability obfuscation (iO). Nevertheless, we also provide positive results that get around the above separation:
- We show that every key exchange protocol (e.g., Diffie-Hellman) is secure when the leakage amount is logarithmic, or potentially even greater if we assume sub-exponential security without leakage.
- We notice that the black-box separation does not extend to schemes in the common reference string (CRS) model, or to schemes with preprocessing, where Alice and Bob can individually pre-process their random coins to derive their secret state prior to leakage. We give a solution in the CRS model with preprocessing using bilinear maps. We also give solutions in just the CRS model alone (without preprocessing) or just with preprocessing (without a CRS), using iO and lossy functions.
2020
EUROCRYPT
Key-Homomorphic Pseudorandom Functions from LWE with Small Modulus 📺
Pseudorandom functions (PRFs) are fundamental objects in cryptography that play a central role in symmetric-key cryptography. Although PRFs can be constructed from one-way functions generically, these black-box constructions are usually inefficient and require deep circuits to evaluate compared to direct PRF constructions that rely on specific algebraic assumptions. From lattices, one can directly construct PRFs from the Learning with Errors (LWE) assumption (or its ring variant) using the result of Banerjee, Peikert, and Rosen (Eurocrypt 2012) and its subsequent works. However, all existing PRFs in this line of work rely on the hardness of the LWE problem where the associated modulus is super-polynomial in the security parameter. In this work, we provide two new PRF constructions from the LWE problem, each of which focuses on either minimizing the depth of its evaluation circuit or providing key-homomorphism, while relying on the hardness of the LWE problem with only a polynomial modulus. Along the way, we introduce a new variant of the LWE problem called the Learning with Rounding and Errors (LWRE) problem. We show that for certain settings of parameters, the LWRE problem is as hard as the LWE problem. We then show that the hardness of the LWRE problem naturally induces a pseudorandom synthesizer that can be used to construct a low-depth PRF. The techniques that we introduce to study the LWRE problem can then be used to derive variants of existing key-homomorphic PRFs whose security can be reduced from the hardness of the LWE problem with a much smaller modulus.
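The "almost key-homomorphic" behaviour that rounding gives is easy to demonstrate with the generic Banerjee-Peikert-Rosen template F_s(x) = round(A(x)·s mod q). The sketch below is that generic template with an ad hoc A(x) and toy parameters, not the paper's small-modulus construction.

```python
# Rounding template for key-homomorphic PRFs: F_s(x) rounds A(x).s
# from Z_q to Z_p.  Keys add: F_{s+t}(x) = F_s(x) + F_t(x) up to a
# +/-1 rounding error per entry ("almost key-homomorphic").
import hashlib
import numpy as np

Q, P_OUT, N = 2**16, 2**8, 32

def A(x: bytes) -> np.ndarray:
    # Public matrix derived from the input (a stand-in for the
    # carefully structured A(x) of real constructions).
    h = hashlib.sha256(x).digest()
    rng = np.random.default_rng(int.from_bytes(h, "big") % 2**63)
    return rng.integers(0, Q, size=(N, N), dtype=np.int64)

def F(s: np.ndarray, x: bytes) -> np.ndarray:
    return (A(x) @ s % Q) * P_OUT // Q    # rounding Z_q -> Z_p

rng = np.random.default_rng(0)
s, t = (rng.integers(0, Q, N) for _ in range(2))
err = (F((s + t) % Q, b"input") - F(s, b"input") - F(t, b"input")) % P_OUT
assert all(e in (0, 1) for e in err)      # off by at most 1 per entry
```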
2020
EUROCRYPT
Stacked Garbling for Disjunctive Zero-Knowledge Proofs 📺
Zero-knowledge (ZK) proofs (ZKP) have received wide attention, focusing on non-interactivity, short proof size, and fast verification time. We focus on the fastest total proof time, in particular for large Boolean circuits. Under this metric, Garbled Circuit (GC)-based ZKP (Jawurek et al., [JKO], CCS 2013) remained the state-of-the-art technique due to the low-constant linear scaling of computing the garbling. We improve GC-ZKP for proof statements with conditional clauses. Our communication is proportional to the longest branch rather than to the entire proof statement. This is most useful when the number m of branches is large, resulting in up to a factor-m improvement over JKO. In our proof-of-concept illustrative application, prover P demonstrates knowledge of a bug in a codebase consisting of any number of snippets of actual C code. Our computation cost is linear in the size of the codebase and communication is constant in the number of snippets. That is, we require only enough communication for the single largest snippet! Our conceptual contribution is stacked garbling for ZK, a privacy-free circuit garbling scheme that can be used with the JKO GC-ZKP protocol to construct more efficient ZKP. Given a Boolean circuit C and computational security parameter k, our garbling is L · k bits long, where L is the length of the longest execution path in C. All prior concretely efficient garbling schemes produce garblings of size |C| · k. The computational cost of our scheme is not increased over the prior state of the art. We implement our GC-ZKP and demonstrate significantly improved (m× over JKO) ZK performance for functions with branching factor m. Compared with recent ZKP systems (STARK, Libra, KKW, Ligero, Aurora, Bulletproofs), our scheme offers much better proof times for larger circuits (35-1000× or more, depending on circuit size and compared scheme). For our illustrative application, we consider four C code snippets, each of about 30-50 LOC; one snippet allows an invalid memory dereference. The entire proof takes 0.15 seconds and communication is 1.5 MB.
2020
EUROCRYPT
Lightweight Authenticated Encryption Mode Suitable for Threshold Implementation 📺
This paper proposes tweakable block cipher (TBC) based modes \textsf{PFB}\_\textsf{Plus} and \textsf{PFB}$\omega$ that are efficient in threshold implementations (TI). Let $t$ be the algebraic degree of a target function, e.g. $t=1$ (resp. $t>1$) for a linear (resp. non-linear) function. The $d$-th order TI encodes the internal state into $d t + 1$ shares. Hence, the area size increases proportionally to the number of shares. This implies that TBC-based modes can be smaller than block cipher (BC) based modes in TI, because a TBC requires an $s$-bit block to ensure $s$-bit security, e.g. \textsf{PFB} and \textsf{Romulus}, while a BC requires a $2s$-bit block. However, even with those TBC-based modes, the minimum we can reach is 3 shares of an $s$-bit state with $t=2$ and the first-order TI ($d=1$). Our first design \textsf{PFB}\_\textsf{Plus} aims to break the barrier of the $3s$-bit state in TI. The block size of the underlying TBC is $s/2$ bits and the output of the TBC is linearly expanded to $s$ bits. This expanded state requires only 2 shares in the first-order TI, which makes the total state size $2.5s$ bits. We also provide a rigorous security proof of \textsf{PFB}\_\textsf{Plus}. Our second design \textsf{PFB}$\omega$ further increases a parameter $\omega$: the ratio of the security level $s$ to the block size of the underlying TBC. We prove the security of \textsf{PFB}$\omega$ for any $\omega$ under some assumptions on the underlying TBC and on the parameters used to update the state. Next, we show a concrete instantiation of \textsf{PFB}\_\textsf{Plus} for 128-bit security. It requires a TBC with a 64-bit block, a 128-bit key and a 128-bit tweak, although no existing TBC supports this combination. We design a new TBC by extending \textsf{SKINNY} and provide a basic security evaluation. Finally, we give hardware benchmarks of \textsf{PFB}\_\textsf{Plus} in the first-order TI to show that the TI of \textsf{PFB}\_\textsf{Plus} is smaller than that of \textsf{PFB} by more than one thousand gates and is the smallest among schemes with 128-bit security.
2020
EUROCRYPT
On Instantiating the Algebraic Group Model from Falsifiable Assumptions 📺
We provide a standard-model implementation (of a relaxation) of the algebraic group model (AGM, [Fuchsbauer, Kiltz, Loss, CRYPTO 2018]). Specifically, we show that every algorithm that uses our group is algebraic, and hence "must know" a representation of its output group elements in terms of its input group elements. Here, "must know" means that a suitable extractor can extract such a representation efficiently. We stress that our implementation relies only on falsifiable assumptions in the standard model, and in particular does not use any knowledge assumptions. As a consequence, our group allows us to transport a number of results obtained in the AGM into the standard model, under falsifiable assumptions. For instance, we show that in our group, several Diffie-Hellman-like assumptions (including computational Diffie-Hellman) are equivalent to the discrete logarithm assumption. Furthermore, we show that our group allows us to prove the Schnorr signature scheme tightly secure in the random oracle model. Our construction relies on indistinguishability obfuscation, and hence should not be considered a practical group itself. However, our results show that the AGM is a realistic computational model (since it can be instantiated in the standard model), and that results obtained in the AGM are also possible with standard-model groups.
2020
EUROCRYPT
How to Extract Useful Randomness from Unreliable Sources 📺
For more than 30 years, cryptographers have been looking for public sources of uniform randomness in order to use them as a set-up to run appealing cryptographic protocols without relying on trusted third parties. Unfortunately, it is fair to say that the existence of physical phenomena producing public uniform randomness is far from reality. It is known that uniform randomness cannot be extracted from a single weak source. A well-studied way to overcome this is to consider several independent weak sources. However, this means we must trust the various sampling processes of weak randomness from physical processes. Motivated by the above state of affairs, this work considers a set-up where players can access multiple {\em potential} sources of weak randomness, several of which may be jointly corrupted by a computationally unbounded adversary. We introduce {\em SHELA} (Somewhere Honest Entropic Look Ahead) sources to model this situation. We show that there is no hope of extracting uniform randomness from a {\em SHELA} source. Instead, we focus on the task of {\em Somewhere-Extraction} (i.e., outputting several candidate strings, some of which are uniformly distributed -- yet we do not know which). We give explicit constructions of {\em Somewhere-Extractors} for {\em SHELA} sources with good parameters. Then, we present applications of the above somewhere-extractor where the public uniform randomness can be replaced by the output of such extraction from corruptible sources, greatly outperforming trivial solutions. The output of somewhere-extraction is also useful in other settings, such as a suitable source of random coins for many randomized algorithms. On another front, we comprehensively study the problem of {\em Somewhere-Extraction} from a {\em weak} source, resulting in a series of bounds. Our bounds highlight the fact that, in most regimes of parameters (including those relevant for applications), {\em SHELA} sources significantly outperform {\em weak} sources of comparable parameters, both in the process of {\em Somewhere-Extraction} and in the task of amplification of success probability in randomized algorithms. Moreover, the low quality of somewhere-extraction from weak sources excludes its use in various efficient applications.
2020
EUROCRYPT
Everybody’s a Target: Scalability in Public-Key Encryption 📺
For 1<=m<=n, we consider a natural m-out-of-n multi-instance scenario for a public-key encryption (PKE) scheme. An adversary, given n independent instances of PKE, wins if he breaks at least m out of the n instances. In this work, we are interested in the scaling factor of PKE schemes, SF, which measures how well the difficulty of breaking m out of the n instances scales in m. That is, a scaling factor SF=l indicates that breaking m out of n instances is at least l times more difficult than breaking one single instance. A PKE scheme with a small scaling factor hence provides an ideal target for mass surveillance. In fact, the Logjam attack (CCS 2015) implicitly exploited, among other things, an almost constant scaling factor of ElGamal over finite fields (with shared group parameters). For Hashed ElGamal over elliptic curves, we use the generic group model to describe how the scaling factor depends on the scheme's granularity. In low granularity, meaning each public key contains its own independent group parameter, the scheme has the optimal scaling factor SF=m; in medium and high granularity, meaning all public keys share the same group parameter, the scheme still has a reasonable scaling factor SF=sqrt(m). Our findings underline that instantiating ElGamal over elliptic curves should be preferred to finite fields in a multi-instance scenario. As our main technical contribution, we derive new generic-group lower bounds of Omega(sqrt(mp)) on the complexity of solving both the m-out-of-n Gap Discrete Logarithm and the m-out-of-n Gap Computational Diffie-Hellman problem over groups of prime order p, extending a recent result by Yun (EUROCRYPT 2015). We establish the lower bound by studying the hardness of a related computational problem which we call the search-by-hypersurface problem.
2020
EUROCRYPT
OptORAMa: Optimal Oblivious RAM 📺
Oblivious RAM (ORAM), first introduced in the ground-breaking work of Goldreich and Ostrovsky (STOC '87 and J. ACM '96), is a technique for provably obfuscating programs' access patterns, such that the access patterns leak no information about the programs' secret inputs. To compile a general program to an oblivious counterpart, it is well-known that $\Omega(\log N)$ amortized blowup is necessary, where $N$ is the size of the logical memory. This was shown in Goldreich and Ostrovsky's original ORAM work for statistical security and in a somewhat restricted model (the so-called \emph{balls-and-bins} model), and recently by Larsen and Nielsen (CRYPTO '18) for computational security. A long-standing open question is whether there exists an optimal ORAM construction that matches the aforementioned logarithmic lower bounds (without making large memory word assumptions, and assuming a constant number of CPU registers). In this paper, we resolve this problem and present the first secure ORAM with $O(\log N)$ amortized blowup, assuming one-way functions. Our result is inspired by and non-trivially improves on the recent beautiful work of Patel et al. (FOCS '18), who gave a construction with $O(\log N\cdot \log\log N)$ amortized blowup, assuming one-way functions. One of our building blocks, of independent interest, is a linear-time deterministic oblivious algorithm for tight compaction: given an array of $n$ elements where some elements are marked, we permute the elements in the array so that all marked elements end up in the front of the array. Our $O(n)$ algorithm improves the previously best known deterministic or randomized algorithms, whose running times are $O(n \cdot\log n)$ and $O(n \cdot\log \log n)$, respectively.
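To see what tight compaction asks for, here is a hedged sketch of the classic oblivious baseline such results improve on: sort by the mark bit with a bitonic network, whose sequence of compare-exchanges is fixed in advance and therefore reveals nothing about which elements are marked. This baseline costs $O(n \log^2 n)$ comparisons, versus the paper's deterministic $O(n)$ algorithm.

```python
# Oblivious tight compaction via a bitonic sorting network on the mark
# bit: the compare-exchange schedule is data-independent.  Baseline
# O(n log^2 n) approach, not the paper's linear-time algorithm.
def oblivious_compact(items, marked):
    """items: list of length 2^k; marked: parallel list of 0/1 flags."""
    a = [(1 - m, it) for m, it in zip(marked, items)]  # marked sort first
    n = len(a)
    k = 2
    while k <= n:                       # standard bitonic sorting network
        j = k // 2
        while j > 0:
            for i in range(n):
                l = i ^ j
                if l > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[l]) == ascending:
                        a[i], a[l] = a[l], a[i]
            j //= 2
        k *= 2
    return [it for _, it in a]

print(oblivious_compact(list("abcdefgh"), [0, 1, 0, 0, 1, 1, 0, 1]))
# the marked elements ('b','e','f','h') come out in front
```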
2020
EUROCRYPT
Improving Key-Recovery in Linear Attacks: Application to 28-round PRESENT 📺
Linear cryptanalysis is one of the most important tools for the security evaluation of symmetric primitives. Many improvements and refinements have been published since its introduction, and many applications to different ciphers have been found. Among these upgrades, Collard et al. proposed in 2007 an FFT-based acceleration of the key-recovery part of Algorithm 2 for last-round attacks. In this paper we present a generalized, matrix-based version of the previous algorithm which easily allows us to take into consideration an arbitrary number of key-recovery rounds. We also provide efficient variants that exploit key-schedule relations and that can be combined with multiple linear attacks. Using our algorithms we provide new cryptanalysis of PRESENT, including, to the best of our knowledge, the first attack on 28 rounds.
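The FFT trick is worth seeing in miniature: the per-key-guess counters form an XOR-convolution, which a fast Walsh-Hadamard transform evaluates for all $2^m$ guesses at once in $m \cdot 2^m$ operations instead of $2^{2m}$. The toy function and sizes below are illustrative, not PRESENT.

```python
# c(k) = sum_x t(x) * (-1)^{f(x xor k)} for every key guess k is an
# XOR-convolution, computable via the fast Walsh-Hadamard transform.
import random

def fwht(v):
    """Unnormalized fast Walsh-Hadamard transform."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), h * 2):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

m = 4
n = 1 << m
f = lambda x: bin(x & 0b1011).count("1") & 1          # toy Boolean function
t = [random.randint(0, 100) for _ in range(n)]        # data counters t(x)
sign = [(-1) ** f(x) for x in range(n)]

# naive: 2^(2m) work
naive = [sum(t[x] * sign[x ^ k] for x in range(n)) for k in range(n)]
# FWHT: m * 2^m work (inverse = forward transform divided by n)
tc, sc = fwht(t), fwht(sign)
fast = [v // n for v in fwht([a * b for a, b in zip(tc, sc)])]
assert naive == fast
```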
2020
EUROCRYPT
(One) failure is not an option: Bootstrapping the search for failures in lattice-based encryption schemes 📺
Lattice-based encryption schemes are often subject to the possibility of decryption failures, in which valid encryptions are decrypted incorrectly. Such failures, in large number, leak information about the secret key, enabling an attack strategy alternative to pure lattice reduction. Extending the "failure boosting" technique of D'Anvers et al. in PKC 2019, we propose an approach that we call "directional failure boosting" that uses previously found "failing ciphertexts" to accelerate the search for new ones. We analyse in detail the case where the lattice is defined over polynomial ring modules quotiented by $\langle X^N + 1 \rangle$ and demonstrate it on a simple Mod-LWE-based scheme parametrized à la Kyber768/Saber. We show that, using our technique, for a given secret key (single-target setting), the cost of searching for additional failing ciphertexts after one or more have already been found can be sped up dramatically. We thus demonstrate that, in this single-target model, these schemes should be designed so that it is hard to even obtain one decryption failure. Moreover, in a wider security model where there are many target secret keys (multi-target setting), our attack greatly improves over the state of the art.
2020
EUROCRYPT
Which Languages Have 4-Round Fully Black-Box Zero-Knowledge Arguments from One-Way Functions? 📺
We prove that if a language $\mathcal{L}$ has a 4-round fully black-box zero-knowledge argument with negligible soundness based on one-way functions, then $\overline{\mathcal{L}} \in \mathsf{MA}$. Since $\mathsf{coNP} \subseteq \mathsf{MA}$ implies that the polynomial hierarchy collapses, our result implies that $\mathsf{NP}$-complete languages are unlikely to have 4-round fully black-box zero-knowledge arguments based on one-way functions. In TCC 2018, Hazay and Venkitasubramaniam, and Khurana, Ostrovsky, and Srinivasan demonstrated 4-round fully black-box zero-knowledge arguments for all languages in $\mathsf{NP}$ based on injective one-way functions. Their results also imply a 5-round protocol based on one-way functions. In essence, our result resolves the round complexity of fully black-box zero-knowledge arguments based on one-way functions.
2020
EUROCRYPT
Modeling for Three-Subset Division Property without Unknown Subset -- Improved Cube Attacks against Trivium and Grain-128AEAD 📺
The division property is a generic tool to search for integral distinguishers, and automatic tools such as MILP or SAT/SMT allow us to evaluate the propagation efficiently. In the application to stream ciphers, it enables us to estimate the security of cube attacks theoretically, and it leads to the best key-recovery attacks against well-known stream ciphers. However, it was reported that some of the key-recovery attacks based on the division property degenerate to distinguishing attacks due to the inaccuracy of the division property. The three-subset division property (without unknown subset) is a promising method to solve this inaccuracy problem, and a new algorithm using automatic tools for the three-subset division property was recently proposed at Asiacrypt 2019. In this paper, we first show that this state-of-the-art algorithm is not always efficient and that it cannot improve the existing key-recovery attacks. Then, we focus on the feature of the three-subset division property without unknown subset and propose another new efficient algorithm using automatic tools. Our algorithm is more efficient than existing algorithms, and it can improve existing key-recovery attacks. In the application to Trivium, we show an 841-round key-recovery attack. We also show that an 855-round key-recovery attack, which was proposed at CRYPTO 2018, has a critical flaw and does not work. As a result, our 841-round attack becomes the best key-recovery attack. In the application to Grain-128AEAD, we show that the known 184-round key-recovery attack degenerates to a distinguishing attack. The distinguishing attacks are then improved up to 189 rounds, and we also show the best key-recovery attack against 190 rounds.
2020
EUROCRYPT
Indistinguishability Obfuscation Without Maps: Attacks and Fixes for Noisy Linear FE 📺
Candidates of Indistinguishability Obfuscation (iO) can be categorized as ``direct'' or ``bootstrapping based''. Direct constructions rely on high degree multilinear maps [GGH13,GGHRSW13] and provide heuristic guarantees, while bootstrapping based constructions [LV16,Lin17,LT17,AJLMS19,Agr19,JLMS19] rely, in the best case, on bilinear maps as well as new variants of the Learning With Errors (LWE) assumption and pseudorandom generators. Recent times have seen exciting progress in the construction of indistinguishability obfuscation (iO) from bilinear maps (along with other assumptions) [LT17,AJLMS19,JLMS19,Agr19]. As a notable exception, a recent work by Agrawal [Agr19] provided a construction for iO without using any maps. This work identified a new primitive, called Noisy Linear Functional Encryption (NLinFE) that provably suffices for iO and gave a direct construction of NLinFE from new assumptions on lattices. While a preliminary cryptanalysis for the new assumptions was provided in the original work, the author admitted the necessity of performing significantly more cryptanalysis before faith could be placed in the security of the scheme. Moreover, the author did not suggest concrete parameters for the construction. In this work, we fill this gap by undertaking the task of thorough cryptanalytic study of NLinFE. We design two attacks that let the adversary completely break the security of the scheme. Our attacks are completely new and unrelated to attacks that were hitherto used to break other candidates of iO. To achieve this, we develop new cryptanalytic techniques which (we hope) will inform future designs of the primitive of NLinFE. From the knowledge gained by our cryptanalytic study, we suggest modifications to the scheme. We provide a new scheme which overcomes the vulnerabilities identified before. We also provide a thorough analysis of all the security aspects of this scheme and argue why plausible attacks do not work. We additionally provide concrete parameters with which the scheme may be instantiated. We believe the security of NLinFE stands on significantly firmer footing as a result of this work.
2020
EUROCRYPT
Low Error Efficient Computational Extractors in the CRS Model 📺
In recent years, there has been exciting progress on building two-source extractors for sources with low min-entropy. Unfortunately, all known explicit constructions of two-source extractors in the low entropy regime suffer from non-negligible error, and building such extractors with negligible error remains an open problem. We investigate this problem in the computational setting, and obtain the following results. We construct an explicit 2-source extractor, and even an explicit non-malleable extractor, with negligible error, for sources with low min-entropy, under computational assumptions in the Common Random String (CRS) model. More specifically, we assume that a CRS is generated once and for all, and allow the min-entropy sources to depend on the CRS. We obtain our constructions by using the following transformations.
- Building on the technique of [BHK11], we show a general transformation for converting any computational 2-source extractor (in the CRS model) into a computational non-malleable extractor (in the CRS model), for sources with similar min-entropy. We emphasize that the resulting computational non-malleable extractor is resilient to arbitrarily many tampering attacks (a property that is impossible to achieve information theoretically). This may be of independent interest. This transformation uses cryptography, and in particular relies on the sub-exponential hardness of the Decisional Diffie-Hellman (DDH) assumption.
- Next, using the blueprint of [BACD+17], we give a transformation converting our computational non-malleable extractor (in the CRS model) into a computational 2-source extractor for sources with low min-entropy (in the CRS model). Our 2-source extractor works for unbalanced sources: specifically, we require one of the sources to be larger than a specific polynomial in the other. This transformation does not incur any additional assumptions.
Our analysis makes a novel use of the leakage lemma of Gentry and Wichs [GW11].
2020
EUROCRYPT
The Price of Active Security in Cryptographic Protocols 📺
We construct the first actively-secure Multi-Party Computation (MPC) protocols with an \emph{arbitrary} number of parties in the dishonest majority setting, for an \emph{arbitrary} field $\mathbb{F}$ with \emph{constant communication overhead} over the ``passive-GMW'' protocol (Goldreich, Micali and Wigderson, STOC '87). Our protocols rely on passive implementations of Oblivious Transfer (OT) in the boolean setting and Oblivious Linear function Evaluation (OLE) in the arithmetic setting. Previously, such protocols were only known over sufficiently large fields (Genkin et al. STOC '14) or for a constant number of parties (Ishai et al. CRYPTO '08). Conceptually, our protocols are obtained via a new compiler from a passively-secure protocol for a distributed multiplication functionality $\mathcal{F}_{\mathsf{mult}}$ to an actively-secure protocol for general functionalities. Roughly, $\mathcal{F}_{\mathsf{mult}}$ is parameterized by a linear secret sharing scheme $\mathcal{S}$, where it takes $\mathcal{S}$-shares of two secrets and returns $\mathcal{S}$-shares of their product. We show that our compilation is concretely efficient for sufficiently large fields, resulting in an overhead of 2 when securely computing natural circuits. Our compiler has two additional benefits: (1) it can rely on \emph{any} passive implementation of $\mathcal{F}_{\mathsf{mult}}$, which, besides the standard implementation based on OT (for boolean) and OLE (for arithmetic), allows us to rely on implementations based on threshold cryptosystems (Cramer et al. Eurocrypt '01); and (2) it can rely on weaker-than-passive (i.e., imperfect/leaky) implementations, which in some parameter regimes yield actively-secure protocols with overhead less than 2. Instantiating this compiler with an ``honest-majority'' implementation of $\mathcal{F}_{\mathsf{mult}}$, we obtain the first honest-majority protocol with optimal corruption threshold for boolean circuits with constant communication overhead over the best passive protocol (Damgård and Nielsen, CRYPTO '07).
2020
EUROCRYPT
Transparent SNARKs from DARK Compilers 📺
We construct a new polynomial commitment scheme for univariate and multivariate polynomials over finite fields, with public-coin evaluation proofs that have logarithmic communication and verification cost in the number of coefficients of the polynomial. The underlying technique is a Diophantine Argument of Knowledge (DARK), leveraging integer representations of polynomials and groups of unknown order. Security is shown from the strong RSA and the adaptive root assumptions. Moreover, the scheme does not require a trusted setup if instantiated with class groups. We apply this new cryptographic compiler to a restricted class of algebraic linear IOPs in order to obtain doubly-efficient public-coin IPs with succinct communication and witness-extended emulation for any NP relation. Allowing for linear preprocessing, the online verifier's work is logarithmic in the circuit complexity of the relation. Concretely, we obtain quasi-linear prover time when compiling the IOP employed in Sonic (MBKM, CCS 19). Applying the Fiat-Shamir transform in the random oracle model results in a SNARK system with quasi-linear preprocessing, quasi-linear (online) prover time, logarithmic proof size, and logarithmic (online) verification time for arbitrary circuits. The SNARK is also concretely efficient with 8.4KB proofs and 75ms verification time for circuits with 1 million gates. Most importantly, this SNARK is transparent: it does not require a trusted setup. We also obtain zk-SNARKs by applying a variant of our polynomial commitment scheme that is hiding and offers zero-knowledge evaluation proofs. This construction is the first transparent zk-SNARK that has both a practical prover time as well as strictly logarithmic proof size and verification time. We call our system Supersonic.
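The integer encoding that underlies the DARK technique can be illustrated in a few lines: a polynomial with bounded coefficients is identified with its evaluation at a large integer $q$, and the coefficients are recovered as base-$q$ digits. The sketch below shows only this encoding step, under the simplifying assumption of non-negative coefficients smaller than $q$; the actual scheme commits to the resulting integer in a group of unknown order, which is omitted here.

```python
# Sketch of the integer encoding behind DARK-style commitments: a polynomial
# with coefficients in [0, q) is identified with its evaluation at the
# integer q, and the coefficients come back as base-q digits. The group of
# unknown order in which this integer is actually committed is omitted.

def encode(coeffs, q):
    """Horner evaluation of sum(coeffs[i] * q**i) -- a single integer."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * q + c
    return acc

def decode(n, q, num_coeffs):
    """Recover the coefficients as the base-q digits of n."""
    coeffs = []
    for _ in range(num_coeffs):
        n, c = divmod(n, q)
        coeffs.append(c)
    return coeffs

coeffs = [3, 1, 4, 1, 5]   # p(X) = 3 + X + 4X^2 + X^3 + 5X^4
q = 2**64                  # must exceed every coefficient
assert decode(encode(coeffs, q), q, len(coeffs)) == coeffs
```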
2020
EUROCRYPT
TNT: How to Tweak a Block Cipher 📺
In this paper, we propose the Tweak-aNd-Tweak (TNT for short) mode, which builds a tweakable block cipher from three independent block ciphers. TNT handles the tweak input by simply XOR-ing the unmodified tweak into the internal state of the block cipher twice. Due to its simplicity, TNT can also be viewed as a way of turning a block cipher into a tweakable block cipher by dividing the block cipher into three chunks and adding the tweak at the two cutting points only. TNT is proven to achieve beyond-birthday-bound security of $2^{2n/3}$, under the assumption that the three chunks are independent secure $n$-bit SPRPs. It brings minimal overhead to both software and hardware implementations. To demonstrate this, an instantiation named TNT-AES, with 6, 6, 6 rounds of AES as the underlying block ciphers, is proposed. Besides the inherent proven security bound and the tweak-independent rekeying feature of the TNT mode, the performance of TNT-AES is comparable with all existing TBCs designed through modular methods.
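The TNT composition itself fits in a few lines. In the sketch below, the three independent block ciphers are replaced by a toy 64-bit hash-based Feistel PRP (an invented stand-in, not AES and not part of the paper), so the code illustrates only the mode: encrypt, XOR the tweak, encrypt, XOR the tweak, encrypt.

```python
import hashlib

# Toy sketch of the TNT composition: three independently keyed PRPs with
# the tweak XORed into the state at the two cut points. The 64-bit Feistel
# below is a stand-in for the secure n-bit SPRPs assumed by the proof; it
# is for illustration only, not the proposed TNT-AES instantiation.

def feistel_prp(key: bytes, block: int, rounds: int = 8) -> int:
    """A 64-bit toy PRP: balanced Feistel with a SHA-256 round function."""
    left, right = block >> 32, block & 0xFFFFFFFF
    for i in range(rounds):
        f = int.from_bytes(
            hashlib.sha256(key + bytes([i]) + right.to_bytes(4, "big")).digest()[:4],
            "big")
        left, right = right, left ^ f
    return (left << 32) | right

def tnt_encrypt(k1: bytes, k2: bytes, k3: bytes, tweak: int, msg: int) -> int:
    x = feistel_prp(k1, msg)
    x = feistel_prp(k2, x ^ tweak)   # first tweak injection
    x = feistel_prp(k3, x ^ tweak)   # second tweak injection
    return x

ct = tnt_encrypt(b"k1", b"k2", b"k3", tweak=0xDEADBEEF, msg=0x0123456789ABCDEF)
print(hex(ct))
```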
2020
EUROCRYPT
Quantum Security Analysis of CSIDH 📺
CSIDH is a recent proposal for post-quantum non-interactive key-exchange, based on supersingular elliptic curve isogenies. It is similar in design to a previous scheme by Couveignes, Rostovtsev and Stolbunov, but aims at an improved balance between efficiency and security. In the proposal, the authors suggest concrete parameters in order to meet some desired levels of quantum security. These parameters are based on the hardness of recovering a hidden isogeny between two elliptic curves, using a quantum subexponential algorithm of Childs, Jao and Soukharev. This algorithm combines two building blocks: first, a quantum algorithm for recovering a hidden shift in a commutative group. Second, a computation in superposition of all isogenies originating from a given curve, which the algorithm calls as a black box. In this paper, we give a comprehensive security analysis of CSIDH. Our first step is to revisit three quantum algorithms for the abelian hidden shift problem from the perspective of non-asymptotic cost, with trade-offs between their quantum and classical complexities. Second, we complete the non-asymptotic study of the black box in the hidden shift algorithm. We give a quantum procedure that evaluates CSIDH-512 using less than 40,000 logical qubits. This allows us to show that the parameters proposed by the authors of CSIDH do not meet their expected quantum security.
2020
EUROCRYPT
Continuous Verifiable Delay Functions 📺
We introduce the notion of a continuous verifiable delay function (cVDF): a function $g$ which is (a) iteratively sequential---meaning that evaluating the iteration $g^{(t)}$ of $g$ (on a random input) takes time roughly $t$ times the time to evaluate $g$, even with many parallel processors, and (b) (iteratively) verifiable---the output of $g^{(t)}$ can be efficiently verified (in time that is essentially independent of $t$). In other words, the iterated function $g^{(t)}$ is a verifiable delay function (VDF) (Boneh et al., CRYPTO '18), having the property that intermediate steps of the computation (i.e., $g^{(t')}$ for $t'<t$) are publicly and continuously verifiable. We demonstrate that cVDFs have intriguing applications: (a) they can be used to construct public randomness beacons that only require an initial random seed (and no further unpredictable sources of randomness), (b) they enable outsourceable VDFs where any part of the VDF computation can be verifiably outsourced, and (c) they have deep complexity-theoretic consequences: in particular, they imply the existence of depth-robust moderately-hard Nash equilibrium problem instances, i.e., instances that can be solved in polynomial time yet require a high sequential running time. Our main result is the construction of a cVDF based on the repeated squaring assumption and the soundness of the Fiat-Shamir (FS) heuristic for constant-round proofs. We highlight that when viewed as a (plain) VDF, our construction requires a weaker FS assumption than previous ones (earlier constructions require the FS heuristic for either super-logarithmic round proofs, or for arguments).
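A minimal sketch of the iteratively sequential base function in the repeated-squaring instantiation: one step of $g$ is a modular squaring, so $g^{(t)}(x)=x^{2^t} \bmod N$, and each step requires the previous output. The toy modulus below is far too small for security, and the entire continuous-verifiability machinery (Fiat-Shamir proofs for intermediate steps) is omitted.

```python
# Sketch of the iteratively sequential function underlying the cVDF
# candidate: one step of g is squaring modulo an RSA-type integer N, so
# g^(t)(x) = x^(2^t) mod N. Without the factorization of N, no evaluation
# substantially faster than t sequential squarings is known. The
# verifiability machinery is omitted entirely.

def g(x: int, N: int) -> int:
    """One sequential step: squaring modulo N."""
    return (x * x) % N

def iterate(x: int, t: int, N: int) -> int:
    for _ in range(t):            # each step needs the previous output
        x = g(x, N)
    return x

# Toy modulus: product of two small primes (a real N must be ~2048 bits
# with unknown factorization).
p, q = 10007, 10009
N = p * q
print(iterate(5, 1000, N))
```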
2020
EUROCRYPT
Generic-Group Delay Functions Require Hidden-Order Groups 📺
Despite the fundamental importance of delay functions, underlying both the classic notion of a time-lock puzzle and the more recent notion of a verifiable delay function, the only known delay function that offers both sufficient structure for realizing these two notions and a realistic level of practicality is the ``iterated squaring'' construction of Rivest, Shamir and Wagner. This construction, however, is based on rather strong assumptions in groups of hidden orders, such as the RSA group (which requires a trusted setup) or the class group of an imaginary quadratic number field (which is still somewhat insufficiently explored from the cryptographic perspective). For more than two decades, the challenge of constructing delay functions in groups of known orders, admitting a variety of well-studied instantiations, has eluded the cryptography community. In this work we prove that there are no constructions of generic-group delay functions in cyclic groups of known orders: We show that for any delay function that does not exploit any particular property of the representation of the underlying group, there exists an attacker that completely breaks the function's sequentiality when given the group's order. As any time-lock puzzle and verifiable delay function give rise to a delay function, our result holds for these two notions as well, and explains the lack of success in resolving the above-mentioned long-standing challenge. Moreover, our result holds even if the underlying group is equipped with a $d$-linear map, for any constant $d \geq 2$ (and even for super-constant values of $d$ under certain conditions).
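The basic obstruction the paper formalizes can be seen in one line for the iterated-squaring candidate: once the group order is known, $t$ sequential squarings collapse to a single fast exponentiation, because the exponent $2^t$ can be reduced modulo the order first. The toy demo below shows just this shortcut; the paper's result is the much stronger statement that every generic-group delay function in a known-order group breaks this way.

```python
# Why known-order groups are problematic for delay functions, in one line:
# with the order known, t sequential squarings collapse to one fast
# exponentiation, since 2^t can be reduced modulo the group order first.

p = 2**127 - 1                  # toy prime; the group Z_p^* has order p - 1
order = p - 1
g, t = 7, 10**6

slow = g
for _ in range(t):              # t inherently sequential squarings
    slow = (slow * slow) % p

fast = pow(g, pow(2, t, order), p)   # two modular exponentiations, no loop
assert slow == fast
```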
2020
EUROCRYPT
Fractal: Post-Quantum and Transparent Recursive Proofs from Holography 📺
We present a new methodology to efficiently realize recursive composition of succinct non-interactive arguments of knowledge (SNARKs). Prior to this work, the only known methodology relied on pairing-based SNARKs instantiated on cycles of pairing-friendly elliptic curves, an expensive algebraic object. Our methodology does not rely on any special algebraic objects and, moreover, achieves new desirable properties: it is post-quantum and it is transparent (the setup is public coin). We exploit the fact that recursive composition is simpler for SNARKs with preprocessing, and the core of our work is obtaining a preprocessing zkSNARK for rank-1 constraint satisfiability (R1CS) that is post-quantum and transparent. We obtain the latter by establishing a connection between holography and preprocessing in the random oracle model, and then constructing a holographic proof for R1CS. We experimentally validate our methodology, demonstrating feasibility in practice.
2020
EUROCRYPT
Finding Hash Collisions with Quantum Computers by Using Differential Trails with Smaller Probability than Birthday Bound 📺
In this paper we spotlight dedicated quantum collision attacks on concrete hash functions, which have not received much attention so far. In the classical setting, the generic complexity to find collisions of an $n$-bit hash function is $O(2^{n/2})$, thus classical collision attacks based on differential cryptanalysis, such as rebound attacks, build differential trails with probability higher than $2^{-n/2}$. By the same analogy, generic quantum algorithms such as the BHT algorithm find collisions with complexity $O(2^{n/3})$. With quantum algorithms, a pair of messages satisfying a differential trail with probability $p$ can be generated with complexity $p^{-1/2}$. Hence, some differential trails with probability up to $2^{-2n/3}$ that cannot be exploited in the classical setting may be exploited to mount a collision attack in the quantum setting. In particular, the number of attacked rounds may increase. In this paper, we attack two international hash function standards: AES-MMO and Whirlpool. For AES-MMO, we present a $7$-round differential trail with probability $2^{-80}$ and use it to find collisions with a quantum version of the rebound attack, while only $6$ rounds can be attacked in the classical setting. For Whirlpool, we mount a collision attack based on a $6$-round differential trail from a classical rebound distinguisher with a complexity higher than the birthday bound; this improves the best classical attack, which reaches $5$ rounds, by one round. We also show that those trails are optimal in our approach. Our results have two important implications. First, there seems to exist a common belief that classically secure hash functions will remain secure against quantum adversaries. Indeed, several second-round candidates in the NIST post-quantum competition use existing hash functions, such as SHA-3, as quantum-secure ones. Our results disprove this common belief. Second, our observation suggests that differential trail search should not stop at probability $2^{-n/2}$ but should consider trails with probability up to $2^{-2n/3}$. Hence, previous differential trail search activities deserve to be revisited.
2020
EUROCRYPT
Signatures from Sequential-OR Proofs 📺
OR-proofs enable a prover to show that it knows the witness for one of many statements, or that one out of many statements is true. OR-proofs are a remarkably versatile tool, used to strengthen security properties, design group and ring signature schemes, and achieve tight security. The common technique to build OR-proofs is based on an approach introduced by Cramer, Damgaard, and Schoenmakers (CRYPTO'94), where the prover splits the verifier's challenge into random shares and computes proofs for each statement in parallel. In this work we study a different, less investigated OR-proof technique, highlighted by Abe, Ohkubo, and Suzuki (ASIACRYPT'02). The difference is that the prover now computes the individual proofs sequentially. We show that such sequential OR-proofs yield signature schemes which can be proved secure in the non-programmable random oracle model. We complement this positive result with a black-box impossibility proof, showing that the same is unlikely to be the case for signatures derived from traditional OR-proofs. We finally argue that sequential-OR signature schemes can be proved secure in the quantum random oracle model, albeit with very loose bounds and by programming the random oracle.
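For concreteness, here is a toy sketch of a sequential-OR signature for two Schnorr statements in the Abe-Ohkubo-Suzuki style, where the signer knows only one of the two discrete logarithms and each challenge is derived from the previous commitment (rather than splitting one challenge across branches as in parallel OR). The group parameters and hash-to-challenge encoding are illustrative choices, not a hardened implementation.

```python
import hashlib, secrets

# Toy sequential-OR signature for X1 = g^x1, X2 = g^x2 where only x1 is
# known: the branch-2 challenge is the hash of the branch-1 commitment and
# vice versa, chaining the two proofs sequentially. Schoolbook group Z_p^*;
# illustration only.

p = 2**255 - 19                 # toy prime; exponents live modulo p - 1
g = 5

def H(*vals) -> int:
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1)

def sign(msg, x1, X1, X2):
    r1 = secrets.randbelow(p - 1)
    R1 = pow(g, r1, p)                            # real commitment, branch 1
    c2 = H(msg, R1)                               # challenge for branch 2
    s2 = secrets.randbelow(p - 1)                 # simulate branch 2
    R2 = pow(g, s2, p) * pow(X2, -c2 % (p - 1), p) % p
    c1 = H(msg, R2)                               # challenge for branch 1
    s1 = (r1 + c1 * x1) % (p - 1)                 # real response
    return (c1, s1, s2)

def verify(msg, sig, X1, X2):
    c1, s1, s2 = sig
    R1 = pow(g, s1, p) * pow(X1, -c1 % (p - 1), p) % p
    c2 = H(msg, R1)
    R2 = pow(g, s2, p) * pow(X2, -c2 % (p - 1), p) % p
    return H(msg, R2) == c1

x1 = secrets.randbelow(p - 1)
X1 = pow(g, x1, p)
X2 = pow(g, secrets.randbelow(p - 1), p)          # signer need not know x2
sig = sign(b"hello", x1, X1, X2)
assert verify(b"hello", sig, X1, X2)
```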
2020
EUROCRYPT
Extracting Randomness from Extractor-Dependent Sources 📺
We revisit the well-studied problem of extracting nearly uniform randomness from an arbitrary source of sufficient min-entropy. Strong seeded extractors solve this problem by relying on a public random seed, which is unknown to the source. Here, we consider a setting where the seed is reused over time and the source may depend on prior calls to the extractor with the same seed. Can we still extract nearly uniform randomness? In more detail, we assume the seed is chosen randomly, but the source can make arbitrary oracle queries to the extractor with the given seed before outputting a sample. We require that the sample has entropy and differs from any of the previously queried values. The extracted output should look uniform even to a distinguisher that gets the seed. We consider two variants of the problem, depending on whether the source only outputs the sample, or whether it can also output some correlated public auxiliary information that preserves the sample's entropy. Our results are: * Without Auxiliary Information: We show that every pseudo-random function (PRF) with a sufficiently high security level is a good extractor in this setting, even if the distinguisher is computationally unbounded. We further show that the source necessarily needs to be computationally bounded and that such extractors imply one-way functions. * With Auxiliary Information: We construct secure extractors in this setting, as long as both the source and the distinguisher are computationally bounded. We give several constructions based on different intermediate primitives, yielding instantiations based on the DDH, DLIN, LWE or DCR assumptions. On the negative side, we show that one cannot prove security against computationally unbounded distinguishers in this setting under any standard assumption via a black-box reduction. Furthermore, even when restricting to computationally bounded distinguishers, we show that there exist PRFs that are insecure as extractors in this setting and that a large class of constructions cannot be proven secure via a black-box reduction from standard assumptions.
2020
EUROCRYPT
Measure-Rewind-Measure: Tighter Quantum Random Oracle Model Proofs for One-Way to Hiding and CCA Security 📺
We introduce a new technique called `Measure-Rewind-Measure' (MRM) to achieve tighter security proofs in the quantum random oracle model (QROM). We first apply our MRM technique to derive a new security proof for a variant of the `double-sided' quantum One-Way to Hiding Lemma (O2H) of Bindel et al. [TCC 2019] which, for the first time, avoids the square-root advantage loss in the security proof. In particular, it bypasses a previous `impossibility result' of Jiang, Zhang and Ma [IACR eprint 2019]. We then apply our new O2H Lemma to give a new tighter security proof for the Fujisaki-Okamoto transform for constructing a strong (IND-CCA) Key Encapsulation Mechanism (KEM) from a weak (IND-CPA) public-key encryption scheme satisfying a mild injectivity assumption.
2020
EUROCRYPT
Friet: an Authenticated Encryption Scheme with Built-in Fault Detection 📺
In this work we present a duplex-based authenticated encryption scheme Friet based on a new permutation called Friet-P. We designed Friet-P with a novel approach for cryptographic permutations and block ciphers that takes fault-attack resistance into account and that we introduce in this paper. In this method, we build a permutation $f_C$ to be embedded in a larger one $f$. First, we define $f$ as a sequence of steps that all abide by a chosen error-correcting code $C$, i.e., that map $C$-codewords to $C$-codewords. Then, we embed $f_C$ in $f$ by first encoding its input to an element of $C$, applying $f$ and then decoding back from $C$. This last step detects a fault when the output of $f$ is not in $C$. We motivate the design of the permutation we use in Friet and report on its performance in software and hardware. We evaluate the fault-detection capabilities of the software and simulated hardware implementations with attacks. Finally, we perform a leakage evaluation. Our code is available at https://github.com/thisimon/Friet.git.
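The encode-apply-decode idea is easy to see on a toy example: extend a 3-limb state with a parity limb, make every step preserve the parity relation, and flag a fault whenever the relation is violated at the end. The rotation amounts and step function below are invented for illustration; they are not the actual Friet-P steps.

```python
# Toy sketch of the code-embedding idea: the code C is the parity code
# {(a, b, c, d) : d = a ^ b ^ c}. Every step below preserves the invariant
# a ^ b ^ c ^ d == 0, so any single-limb fault during the computation is
# caught by the final codeword check. Invented step function, not Friet-P.

MASK = (1 << 32) - 1
def rotl(x, r): return ((x << r) | (x >> (32 - r))) & MASK

def step(state):
    a, b, c, d = state
    t = rotl(b, 7)
    u = rotl(a, 11)
    # XORing the same value into exactly two limbs keeps the parity intact.
    return (a ^ t, b, c ^ u, d ^ t ^ u)

def permute(a, b, c, rounds=16):
    state = (a, b, c, a ^ b ^ c)          # encode into the code C
    for _ in range(rounds):
        state = step(state)
    a, b, c, d = state
    if a ^ b ^ c ^ d:                      # decode: codeword check
        raise RuntimeError("fault detected")
    return a, b, c

print(permute(0x01234567, 0x89ABCDEF, 0x13579BDF))
```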
2020
EUROCRYPT
New Slide Attacks on Almost Self-Similar Ciphers 📺
The slide attack is a powerful cryptanalytic tool which has the unusual property that it can break iterated block ciphers with a complexity that does not depend on their number of rounds. However, it requires complete self-similarity in the sense that all the rounds must be identical. While this can be the case in Feistel structures, this rarely happens in SP networks since the last round must end with an additional post-whitening subkey. In addition, in many SP networks the final round has additional asymmetries - for example, in AES the last round omits the MixColumns operation. Such asymmetry in the last round can make it difficult to utilize most of the advanced tools which were developed for slide attacks, such as deriving from one slid pair additional slid pairs by repeatedly re-encrypting their ciphertexts. Consequently, almost all the successful applications of slide attacks against real cryptosystems (e.g., FF3, GOST, SHACAL-1, etc.) have targeted Feistel structures rather than SP networks. In this paper we overcome this last-round problem by developing four new types of slide attacks. We demonstrate their power by applying them to many types of AES-like structures (with and without linear mixing in the last round, with known or secret S-boxes, with periodicity of 1, 2, and 3 in their subkeys, etc.). In most of these cases, the time complexity of our attack is close to $2^{n/2}$, the smallest possible complexity for most slide attacks. Our new slide attacks have several unique properties: The first uses slid sets in which each plaintext from the first set forms a slid pair with some plaintext from the second set, but without knowing the exact correspondence. The second makes it possible to create from several slid pairs an exponential number of new slid pairs which form a hypercube spanned by the given pairs. The third has the unusual property that it is always successful, and the fourth can use known messages instead of chosen messages, with only slightly higher time complexity.
2020
EUROCRYPT
Formalizing Data Deletion in the Context of the Right to be Forgotten 📺
The right of an individual to request the deletion of their personal data by an entity that might be storing it -- referred to as \emph{the right to be forgotten} -- has been explicitly recognized, legislated, and exercised in several jurisdictions across the world, including the European Union, Argentina, and California. However, much of the discussion surrounding this right offers only an intuitive notion of what it means for it to be fulfilled -- of what it means for such personal data to be deleted. In this work, we provide a formal definitional framework for the right to be forgotten using tools and paradigms from cryptography. In particular, we provide a precise definition of what could be (or should be) expected from an entity that collects individuals' data when a request is made of it to delete some of this data. Our framework captures most, though not all, relevant aspects of typical systems involved in data processing. While it cannot be viewed as expressing the statements of current laws (especially since these are rather vague in this respect), our work offers technically precise definitions that represent possibilities for what the law could reasonably expect, and alternatives for what future versions of the law could explicitly require. Finally, with the goal of demonstrating the applicability of our framework and definitions, we consider various natural and simple scenarios where the right to be forgotten comes up. For each of these scenarios, we highlight the pitfalls that arise even in genuine attempts at implementing systems offering deletion guarantees, and also describe technological solutions that provably satisfy our definitions. These solutions bring together techniques built by various communities.
2020
EUROCRYPT
PSI from PaXoS: Fast, Malicious Private Set Intersection 📺
We present a 2-party private set intersection (PSI) protocol which provides security against malicious participants, yet is almost as fast as the fastest known semi-honest PSI protocol of Kolesnikov et al. (CCS 2016). Our protocol is based on a new approach for two-party PSI, which can be instantiated to provide security against either malicious or semi-honest adversaries. The protocol is unique in that the only difference between the semi-honest and malicious versions is an instantiation with different parameters for a linear error-correction code. It is also the first PSI protocol which is concretely efficient while having linear communication and security against malicious adversaries, while running in the OT-hybrid model (assuming a non-programmable random oracle). State of the art semi-honest PSI protocols take advantage of cuckoo hashing, but it has proven a challenge to use cuckoo hashing for malicious security. Our protocol is the first to use cuckoo hashing for malicious-secure PSI. We do so via a new data structure, called a probe-and-XOR of strings (PaXoS), which may be of independent interest. This abstraction captures important properties of previous data structures, most notably garbled Bloom filters. While an encoding by a garbled Bloom filter is larger by a factor of $\Omega(\lambda)$ than the original data, we describe a significantly improved PaXoS based on cuckoo hashing that achieves constant rate while being no worse in other relevant efficiency measures.
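A toy 2-probe probe-and-XOR-of-strings structure can be sketched directly: each key hashes to two table positions, decoding XORs the two entries, and encoding solves the resulting XOR constraints by peeling positions touched by a single remaining key. Real PaXoS adds structure so that encoding succeeds at constant rate with overwhelming probability; this toy simply oversizes the table and may fail on unlucky instances.

```python
import hashlib, os
from collections import defaultdict

# Toy 2-probe PaXoS: Decode(x) = D[h1(x)] XOR D[h2(x)]. Encoding peels
# positions touched by a single remaining key, then assigns entries in
# reverse peel order. Not the paper's constant-rate construction.

def positions(key: bytes, m: int):
    h = hashlib.sha256(key).digest()
    i, j = int.from_bytes(h[:8], "big") % m, int.from_bytes(h[8:16], "big") % m
    return (i, (j + 1) % m) if i == j else (i, j)

def encode(pairs, m, width=16):
    D = [bytes(width) for _ in range(m)]
    touching = defaultdict(set)          # position -> keys probing it
    for k, (key, _) in enumerate(pairs):
        for pos in positions(key, m):
            touching[pos].add(k)
    order = []
    ready = [p for p in touching if len(touching[p]) == 1]
    while ready:
        pos = ready.pop()
        if len(touching[pos]) != 1:
            continue
        (k,) = touching[pos]
        order.append((k, pos))           # k will be satisfied via pos
        for q in positions(pairs[k][0], m):
            touching[q].discard(k)
            if len(touching[q]) == 1:
                ready.append(q)
    if any(touching[p] for p in touching):
        raise RuntimeError("peeling failed; retry with a fresh hash/table")
    for k, pos in reversed(order):       # assign in reverse peel order
        key, value = pairs[k]
        i, j = positions(key, m)
        other = D[j] if pos == i else D[i]
        D[pos] = bytes(a ^ b for a, b in zip(value, other))
    return D

def decode(D, key: bytes):
    i, j = positions(key, len(D))
    return bytes(a ^ b for a, b in zip(D[i], D[j]))

pairs = [(f"key{t}".encode(), os.urandom(16)) for t in range(50)]
D = encode(pairs, 400)
assert all(decode(D, k) == v for k, v in pairs)
```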
2020
EUROCRYPT
Efficient simulation of random states and random unitaries 📺
We consider the problem of efficiently simulating random quantum states and random unitary operators, in a manner which is convincing to unbounded adversaries with black-box oracle access. This problem has previously only been considered for restricted adversaries. Against adversaries with an a priori bound on the number of queries, it is well-known that t-designs suffice. Against polynomial-time adversaries, one can use pseudorandom states (PRS) and pseudorandom unitaries (PRU), as defined in a recent work of Ji, Liu, and Song; unfortunately, no provably secure construction is known for PRUs. In our setting, we are concerned with unbounded adversaries. Nonetheless, we are able to give stateful quantum algorithms which simulate the ideal object in both settings of interest. In the case of Haar-random states, our simulator is polynomial-time, has negligible error, and can also simulate verification and reflection through the simulated state. This yields an immediate application to quantum money: a money scheme which is information-theoretically unforgeable and untraceable. In the case of Haar-random unitaries, our simulator takes polynomial space, but simulates both forward and inverse access with zero error. These results can be seen as the first significant steps in developing a theory of lazy sampling for random quantum objects.
2020
EUROCRYPT
Compact Adaptively Secure ABE from k-Lin: Beyond NC1 and towards NL 📺
We present a new general framework for constructing compact and adaptively secure attribute-based encryption (ABE) schemes from k-Lin in asymmetric bilinear pairing groups. Previously, the only construction [Kowalczyk and Wee, Eurocrypt '19] that simultaneously achieves compactness and adaptive security from static assumptions supports policies represented by Boolean formulae. Our framework enables supporting more expressive policies represented by arithmetic branching programs. Our framework extends to ABE for policies represented by uniform models of computation such as Turing machines. Such policies enjoy the feature of being applicable to attributes of arbitrary lengths. We obtain the first compact adaptively secure ABE for deterministic and non-deterministic finite automata (DFA and NFA) from k-Lin, previously unknown from any static assumptions. Beyond finite automata, we obtain the first ABE for large classes of uniform computation, captured by deterministic and non-deterministic logspace Turing machines (the complexity classes L and NL) based on k-Lin. Our ABE scheme has compact secret keys of size linear in the description size of the Turing machine M. The ciphertext size grows linearly in the input length, but also linearly in the time complexity, and exponentially in the space complexity. Irrespective of compactness, we stress that our scheme is the first that supports large classes of Turing machines based solely on standard assumptions. In comparison, previous ABE for general Turing machines all rely on strong primitives related to indistinguishability obfuscation.
2020
EUROCRYPT
Evolving Ramp Secret Sharing with a Small Gap 📺
Evolving secret-sharing schemes, introduced by Komargodski, Naor, and Yogev (TCC 2016b), are secret-sharing schemes in which there is no a priori upper bound on the number of parties that will participate. The parties arrive one by one and when a party arrives the dealer gives it a share; the dealer cannot update this share when other parties arrive. Motivated by the fact that when the number of parties is known, ramp secret-sharing schemes are more efficient than threshold secret-sharing schemes, we study evolving ramp secret-sharing schemes. Specifically, we study evolving $(b(j),g(j))$-ramp secret-sharing schemes, where $g,b: \mathbb{N}\to \mathbb{N}$ are non-decreasing functions. In such schemes, any set of parties that for some $j$ contains $g(j)$ parties from the first $j$ parties that arrive can reconstruct the secret, and any set such that for every $j$ contains fewer than $b(j)$ parties from the first $j$ parties that arrive cannot learn any information about the secret. We focus on the case that the gap is small, namely $g(j)-b(j)=j^{\beta}$ for $0<\beta<1$. We show that there is an evolving ramp secret-sharing scheme with gap $t^{\beta}$, in which the share size of the $j$-th party is $\tilde{O}(j^{4-\frac{1}{\log^2 {1/\beta}}})$. Furthermore, we show that our construction results in a much better share size for fixed values of $\beta$, i.e., there is an evolving ramp secret-sharing scheme with gap $\sqrt{j}$, in which the share size of the $j$-th party is $\tilde{O}(j)$. Our construction should be compared to the best known evolving $g(j)$-threshold secret-sharing schemes (i.e., when $b(j)=g(j)-1$), in which the share size of the $j$-th party is $\tilde{O}(j^4)$. Thus, our construction offers a significant improvement for every constant $\beta$, showing that allowing a gap between the sizes of the authorized and unauthorized sets can reduce the share size. In addition, we present an evolving $(k/2,k)$-ramp secret-sharing scheme for a constant $k$ (which can be very big), where any set of parties of size at least $k$ can reconstruct the secret and any set of parties of size at most $k/2$ cannot learn any information about the secret. The share size of the $j$-th party in our construction is $O(\log k\log j)$. This is an improvement over the best known evolving $k$-threshold secret-sharing schemes, in which the share size of the $j$-th party is $O(k\log j)$.
2020
EUROCRYPT
Quantum-access-secure message authentication via blind-unforgeability 📺
Formulating and designing authentication of classical messages in the presence of adversaries with quantum query access has been a challenge, as the familiar classical notions of unforgeability do not directly translate into meaningful notions in the quantum setting. A particular difficulty is how to fairly capture the notion of ``predicting an unqueried value'' when the adversary can query in quantum superposition. We propose a natural definition of unforgeability against quantum adversaries called blind unforgeability. This notion defines a function to be predictable if there exists an adversary who can use "partially blinded" oracle access to predict values in the blinded region. We support the proposal with a number of technical results. We begin by establishing that the notion coincides with EUF-CMA in the classical setting and go on to demonstrate that the notion is satisfied by a number of simple guiding examples, such as random functions and quantum-query-secure pseudorandom functions. We then show the suitability of blind unforgeability for supporting canonical constructions and reductions. We prove that the "hash-and-MAC" paradigm and the Lamport one-time digital signature scheme are indeed unforgeable according to the definition. In this setting, we additionally define and study a new variety of quantum-secure hash functions called Bernoulli-preserving. Finally, we demonstrate that blind unforgeability is strictly stronger than a previous definition of Boneh and Zhandry [EUROCRYPT '13, CRYPTO '13] and resolve an open problem concerning this previous definition by constructing an explicit function family which is forgeable yet satisfies the definition.
2020
EUROCRYPT
He Gives C-Sieves on the CSIDH 📺
Recently, Castryck, Lange, Martindale, Panny, and Renes proposed \emph{CSIDH} (pronounced ``sea-side'') as a candidate post-quantum ``commutative group action.'' It has attracted much attention and interest, in part because it enables noninteractive Diffie--Hellman-like key exchange with quite small communication. Subsequently, CSIDH has also been used as a foundation for digital signatures. In 2003--04, Kuperberg and then Regev gave asymptotically subexponential quantum algorithms for ``hidden shift'' problems, which can be used to recover the CSIDH secret key from a public key. In late 2011, Kuperberg gave a follow-up quantum algorithm called the \emph{collimation sieve} (``c-sieve'' for short), which improves the prior ones, in particular by using exponentially less quantum memory and offering more parameter tradeoffs. While recent works have analyzed the concrete cost of the original algorithms (and variants) against CSIDH, nothing of this nature was previously available for the c-sieve. This work fills that gap. Specifically, we generalize Kuperberg's collimation sieve to work for arbitrary finite cyclic groups, provide some practical efficiency improvements, give a classical (i.e., non-quantum) simulator, run experiments for a wide range of parameters up to the actual CSIDH-512 group order, and concretely quantify the complexity of the c-sieve against CSIDH. Our main conclusion is that the proposed CSIDH parameters provide relatively little quantum security beyond what is given by the cost of quantumly evaluating the CSIDH group action itself (on a uniform superposition). For example, the cost of CSIDH-512 key recovery is only about $2^{16}$ quantum evaluations using $2^{40}$ bits of quantumly accessible \emph{classical} memory (plus relatively small other resources). This improves upon a prior estimate of $2^{32.5}$ evaluations and $2^{31}$ qubits of \emph{quantum} memory, for a variant of Kuperberg's original sieve. Under the plausible assumption that quantum evaluation does not cost much more than what is given by a recent ``best case'' analysis, CSIDH-512 can therefore be broken using significantly less than $2^{64}$ quantum T-gates. This strongly invalidates its claimed NIST level 1 quantum security, especially when accounting for the MAXDEPTH restriction. Moreover, under analogous assumptions, even CSIDH-1024 and -1792, which target higher NIST security levels, fall short of level 1, except near the high end of the MAXDEPTH range.
2020
EUROCRYPT
Hardness of LWE on General Entropic Distributions 📺
The hardness of the Learning with Errors (LWE) problem is by now a cornerstone of the cryptographic landscape, allowing to construct cryptographic schemes with properties unknown under other assumptions, and being conjectured to be resilient to quantum attacks. LWE is essentially the task of solving a noisy system of random linear equations over uniformly random secret variables (``the LWE secret''), evaluated modulo some integer. In applications the secret variables usually correspond to the secret key of the cryptographic scheme. It is therefore of great importance to understand what happens when the secret variables are not sampled uniformly (but still have some entropy). This is relevant for settings where an adversary manages to obtain partial information on the secret (a.k.a. key leakage), for various theoretical applications, and also for practical use where for efficiency or convenience it is easier to sample the secret from some non-uniform distribution. This so-called ``Entropic LWE'' problem has been studied in a number of works, starting with Goldwasser et al. (ICS 2010). However, so far it was only known how to prove the hardness of Entropic LWE for secret distributions supported inside a ball of small radius. In this work we resolve the hardness of Entropic LWE with arbitrary long secrets, in the following sense. We show an entropy bound that guarantees the security of arbitrary Entropic LWE. This bound is higher than what is required in the ball-bounded setting, but we show that this is essentially tight. Tightness is shown unconditionally for highly-composite moduli, and using black-box impossibility for arbitrary moduli. Technically, we show that the entropic hardness of LWE relies on a simple-to-describe lossiness property of the distribution of secrets itself. This is simply the probability of recovering a random sample from this distribution $s$, given $s + e$, where $e$ is Gaussian noise (i.e., the quality of the distribution of secrets as an error-correcting code for Gaussian noise). We hope that this characterization will make it easier to derive entropic LWE results in the future. We also use our techniques to show new results for the ball-bounded setting, essentially showing that under a strong enough assumption even polylogarithmic entropy suffices.
2020
EUROCRYPT
Candidate iO From Homomorphic Encryption Schemes 📺
We propose a new approach to construct general-purpose indistinguishability obfuscation (iO). Our construction is obtained via a new intermediate primitive that we call split fully-homomorphic encryption (split FHE), which we show to be sufficient for constructing iO. Specifically, split FHE is FHE where decryption takes the following two-step syntactic form: (i) A secret decryption step uses the secret key and produces a hint which is (asymptotically) shorter than the length of the encrypted message, and (ii) a public decryption step that only requires the ciphertext and the previously generated hint (and not the entire secret key), and recovers the encrypted message. In terms of security, the hints for a set of ciphertexts should not allow one to violate semantic security for any other ciphertexts. Next, we show a generic candidate construction of split FHE based on three building blocks: (i) A standard FHE scheme with linear decrypt-and-multiply (which can be instantiated with essentially all LWE-based constructions), (ii) a linearly homomorphic encryption scheme with short decryption hints (such as the Damgard-Jurik encryption scheme, based on the DCR problem), and (iii) a cryptographic hash function (which can be based on a variety of standard assumptions). Our approach is heuristic in the sense that our construction is not provably secure and makes implicit assumptions about the interplay between these underlying primitives. We show evidence that this construction is secure by providing an argument in an appropriately defined oracle model. We view our construction as a big departure from the state-of-the-art constructions, and it is in fact quite simple.
2020
EUROCRYPT
Rational isogenies from irrational endomorphisms 📺
In this paper, we introduce a polynomial-time algorithm to compute a connecting $\mathcal{O}$-ideal between two supersingular elliptic curves over $\mathbb{F}_p$ with common $\mathbb{F}_p$-endomorphism ring $\mathcal{O}$, given a description of their full endomorphism rings. This algorithm provides a reduction of the security of the CSIDH cryptosystem to the problem of computing endomorphism rings of supersingular elliptic curves. A similar reduction for SIDH appeared at Asiacrypt 2016, but relies on totally different techniques. Furthermore, we also show that any supersingular elliptic curve constructed using the complex-multiplication method can be located precisely in the supersingular isogeny graph by explicitly deriving a path to a known base curve. This result prohibits the use of such curves as a building block for a hash function into the supersingular isogeny graph.
2020
EUROCRYPT
Resource-Restricted Cryptography: Revisiting MPC Bounds in the Proof-of-Work Era 📺
Traditional bounds on synchronous Byzantine agreement (BA) and secure multi-party computation (MPC) establish that in absence of a private correlated-randomness setup, such as a PKI, protocols can tolerate up to $t<n/3$ of the parties being malicious. The introduction of ``Nakamoto style'' consensus, based on Proof-of-Work (PoW) blockchains, put forth a somewhat different flavor of BA, showing that even a majority of corrupted parties can be tolerated as long as the majority of the computation resources remain at honest hands. This assumption on honest majority of some resource was also extended to other resources such as stake, space, etc., upon which blockchains achieving Nakamoto-style consensus were built that violated the $t<n/3$ bound in terms of number of party corruptions. The above state of affairs begs the question of whether the seeming mismatch is due to different goals and models, or whether the resource-restricting paradigm can be generically used to circumvent the $n/3$ lower bound. In this work we study this question and formally demonstrate how the above paradigm changes the rules of the game in cryptographic definitions. First, we abstract the core properties that the resource-restricting paradigm offers by means of a functionality {\em wrapper}, in the UC framework, which when applied to a standard point-to-point network restricts the ability (of the adversary) to send new messages. We show that such a wrapped network can be implemented using the resource-restricting paradigm---concretely, using PoWs and honest majority of computing power---and that the traditional $t<n/3$ impossibility results fail when the parties have access to such a network. Our construction is in the {\em fresh} Common Reference String (CRS) model---i.e., it assumes a CRS which becomes available to the parties at the same time as to the adversary. We then present constructions for BA and MPC, which given access to such a network tolerate $t<n/2$ corruptions without assuming a private correlated randomness setup. We also show how to remove the freshness assumption from the CRS by leveraging the power of a random oracle. Our MPC protocol achieves the standard notion of MPC security, where parties might have dedicated roles, as is for example the case in Oblivious Transfer protocols. This is in contrast to existing solutions basing MPC on PoWs, which associate roles to pseudonyms but do not link these pseudonyms with the actual parties.
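The resource-restricting mechanism itself is just a proof-of-work attached to each message, which bounds how many messages any party, honest or corrupted, can inject per unit of hashing power. The toy below shows only that mechanism; the paper's functionality wrapper, the freshness of the CRS, and the BA/MPC protocols built on top are not modeled.

```python
import hashlib, os

# Toy illustration of the resource-restricting idea: a message may only be
# sent with an attached proof-of-work, so message-sending is rate-limited
# by hashing power rather than by the number of (possibly corrupted)
# identities. Mechanism only; not the paper's UC wrapper or protocols.

DIFFICULTY = 20                       # leading zero bits; ~2^20 hashes

def work(msg: bytes) -> bytes:
    while True:
        nonce = os.urandom(8)
        digest = hashlib.sha256(msg + nonce).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
            return nonce

def send_allowed(msg: bytes, nonce: bytes) -> bool:
    digest = hashlib.sha256(msg + nonce).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

nonce = work(b"vote: block 42")       # costs ~2^20 hashes in expectation
assert send_allowed(b"vote: block 42", nonce)
```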
2020
EUROCRYPT
Statistical ZAPR Arguments from Bilinear Maps 📺
Dwork and Naor (FOCS '00) defined ZAPs as 2-message witness-indistinguishable proofs that are public-coin. We relax this to \emph{ZAPs with private Randomness} (ZAPRs), where the verifier can use private coins to sample the first message (independently of the statement being proved), but the proof must remain publicly verifiable given only the protocol transcript. In particular, ZAPRs are \emph{reusable}, meaning that the first message can be reused for multiple proofs without compromising security. Known constructions of ZAPs from trapdoor permutations or bilinear maps are only computationally WI (and statistically sound). Two recent results of Badrinarayanan-Fernando-Jain-Khurana-Sahai and Goyal-Jain-Jin-Malavolta [EUROCRYPT '20] construct the first \emph{statistical ZAP arguments}, which are statistically WI (and computationally sound), from the quasi-polynomial LWE assumption. Here, we construct \emph{statistical ZAPR arguments} from the quasi-polynomial decision-linear (DLIN) assumption on groups with a bilinear map. Our construction relies on a combination of several tools including Groth-Ostrovsky-Sahai NIZK and NIWI [EUROCRYPT '06, CRYPTO '06, JACM '12], ``sometimes-binding statistically hiding commitments'' [Kalai-Khurana-Sahai, EUROCRYPT '18] and the ``MPC-in-the-head'' technique [Ishai-Kushilevitz-Ostrovsky-Sahai, STOC '07].
2020
EUROCRYPT
An Algebraic Attack on Rank Metric Code-Based Cryptosystems 📺
The Rank metric decoding problem is the main problem considered in cryptography based on codes in the rank metric. Very efficient schemes based on this problem or quasi-cyclic versions of it have been proposed recently, such as those in the submissions ROLLO and RQC currently at the second round of the NIST Post-Quantum Cryptography Standardization Process. While combinatorial attacks on this problem have been extensively studied and seem now well understood, the situation is not as satisfactory for algebraic attacks, for which previous work essentially suggested that they were ineffective for cryptographic parameters. In this paper, starting from Ourivski and Johansson's algebraic modelling of the problem into a system of polynomial equations, we show how to augment this system with easily computed equations so that the augmented system is solved much faster via Gröbner bases. This happens because the augmented system has solving degree $r$, $r+1$ or $r+2$ depending on the parameters, where $r$ is the rank weight, which we show by extending results from Verbel \emph{et al.} (PQCrypto 2019) on systems arising from the MinRank problem; with target rank $r$, Verbel \emph{et al.} lower the solving degree to $r+2$, and even less for some favorable instances that they call ``superdetermined''. We give complexity bounds for this approach as well as practical timings of an implementation using \texttt{magma}. This improves upon the previously known complexity estimates for both Gröbner basis and (non-quantum) combinatorial approaches, and for example leads to an attack in 200 bits on ROLLO-I-256 whose claimed security was 256 bits.
2020
EUROCRYPT
SPARKs: Succinct Parallelizable Arguments of Knowledge 📺
We introduce the notion of a Succinct Parallelizable Argument of Knowledge (SPARK). This is an argument system with the following three properties for computing and proving a time T (non-deterministic) computation: - The prover's (parallel) running time is T + polylog T. (In other words, the prover's running time is essentially T for large computation times!) - The prover uses at most polylog T processors. - The communication complexity and verifier complexity are both polylog T. While the third property is standard in succinct arguments, the combination of all three is desirable as it gives a way to leverage moderate parallelism in favor of near-optimal running time. We emphasize that even a factor two overhead in the prover's parallel running time is not allowed. Our main results are the following, all for non-deterministic polynomial-time RAM computation. We construct (1) an (interactive) SPARK based solely on the existence of collision-resistant hash functions, and (2) a non-interactive SPARK based on any collision-resistant hash function and any SNARK with quasi-linear overhead (as satisfied by recent SNARK constructions).
2020
EUROCRYPT
Statistical Zaps and New Oblivious Transfer Protocols 📺
We study the problem of achieving statistical privacy in interactive proof systems and oblivious transfer -- two of the most well studied two-party protocols -- when limited rounds of interaction are available. -- Statistical Zaps: We give the first construction of statistical Zaps, namely, two-round statistical witness-indistinguishable (WI) protocols with a public-coin verifier. Our construction achieves computational soundness based on the quasi-polynomial hardness of learning with errors assumption. -- Three-Round Statistical Receiver-Private Oblivious Transfer: We give the first construction of a three-round oblivious transfer (OT) protocol -- in the plain model -- that achieves statistical privacy for receivers and computational privacy for senders against malicious adversaries, based on polynomial-time assumptions. The round-complexity of our protocol is optimal. We obtain our first result by devising a public-coin approach to compress sigma protocols, without relying on trusted setup. To obtain our second result, we devise a general framework via a new notion of statistical hash commitments that may be of independent interest.
2020
EUROCRYPT
Impossibility Results for Lattice-Based Functional Encryption Schemes 📺
Functional Encryption denotes a form of encryption where a master secret key-holder can control which functions a user can evaluate on encrypted data. Learning With Errors (LWE) (Regev, STOC'05) is known to be a useful cryptographic hardness assumption which implies strong primitives such as, for example, fully homomorphic encryption (Brakerski et al., ITCS'12) and lockable obfuscation (Goyal et al., Wichs et al., FOCS'17). Despite its strength, however, there is only a limited number of functional encryption schemes which can be based on LWE. In fact, there are functional encryption schemes which can be achieved by using pairings but for which no secure instantiations from lattice-based assumptions are known: function-hiding inner product encryption (Lin, Baltico et al., CRYPTO'17) and compact quadratic functional encryption (Abdalla et al., CRYPTO'18). This raises the question of whether there are some mathematical barriers which hinder us from realizing function-hiding and compact functional encryption schemes from lattice-based assumptions such as LWE. To study this problem, we prove an impossibility result for function-hiding functional encryption schemes which meet some algebraic restrictions at ciphertext encryption and decryption. Those restrictions are met by many attribute-based, identity-based and functional encryption schemes whose security stems from LWE. Therefore, we see our results as important indications why it is hard to construct new functional encryption schemes from LWE and which mathematical restrictions have to be overcome to construct secure lattice-based functional encryption schemes for new functionalities.
2020
EUROCRYPT
Non-Interactive Zero-Knowledge in Pairing-Free Groups from Weaker Assumptions 📺
We provide new constructions of non-interactive zero-knowledge arguments (NIZKs) for NP from discrete-logarithm-style assumptions over cyclic groups, without relying on pairings. A previous construction from (Canetti et al., Eurocrypt'18) achieves such NIZKs under the assumption that no efficient adversary can break the key-dependent message (KDM) security of (additive) ElGamal with respect to all (even inefficient) functions over groups of size $2^\lambda$, with probability better than $\mathrm{poly}(\lambda)/2^{\lambda}$. This is an extremely strong, non-falsifiable assumption. In particular, even mild (polynomial) improvements over the current best known attacks on the discrete logarithm problem would already contradict this assumption. (Canetti et al. STOC'19) describe how to improve the assumption to rely only on KDM security with respect to all efficient functions, therefore obtaining an assumption that is (in spirit) falsifiable. Our first construction improves this state of affairs. We provide a construction of NIZKs for NP under the CDH assumption together with the assumption that no efficient adversary can break the key-dependent message one-wayness of ElGamal with respect to efficient functions over groups of size $2^\lambda$, with probability better than $\mathrm{poly}(\lambda)/2^{c\lambda}$ (denoted $2^{-c\lambda}$-OWKDM), for a constant $c = 3/4$. Unlike the previous assumption, our assumption leaves an exponential gap between the best known attack and the required security guarantee. We also analyse whether we could build NIZKs when CDH does not hold. As a second contribution, we construct an infinitely often NIZK argument system for NP (where soundness and zero-knowledge are only guaranteed to hold for infinitely many security parameters), under the $2^{-c\lambda}$-OWKDM security of ElGamal with $c = 28/29+o(1)$, together with the existence of low-depth pseudorandom generators.
2020
EUROCRYPT
Security under Message-Derived Keys: Signcryption in iMessage 📺
At the core of Apple's iMessage is a SignCryption scheme that involves symmetric encryption of a message under a key that is derived from the message itself. To capture this, we formalize a primitive we call Encryption under Message-Derived Keys (EMDK). We prove security of the EMDK scheme underlying iMessage. We use this to prove security of the SignCryption scheme itself, with respect to definitions of SignCryption we give that enhance prior ones to cover issues peculiar to messaging protocols. Our provable-security results are quantitative, and we discuss the practical implications for iMessage.
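The EMDK primitive is easy to state in code: derive the symmetric key from the message itself, so that protecting the short key suffices to protect the message. The sketch below uses a SHA-256-based KDF and counter-mode keystream as invented stand-ins; it is not Apple's scheme, and the outer signature/encapsulation layer of the signcryption is omitted.

```python
import hashlib

# Minimal sketch of Encryption under Message-Derived Keys (EMDK): the key
# is a function of the message, and only the short key needs to be carried
# by the outer signcryption layer. Stand-in primitives for illustration;
# not the iMessage construction.

def kdf(message: bytes) -> bytes:
    return hashlib.sha256(b"emdk-key|" + message).digest()[:16]

def stream_xor(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    for off in range(0, len(data), 32):     # SHA-256 counter keystream
        ks = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        out.extend(a ^ b for a, b in zip(data[off:off + 32], ks))
    return bytes(out)

def emdk_encrypt(message: bytes):
    key = kdf(message)                      # key depends on the message
    return key, stream_xor(key, message)

def emdk_decrypt(key: bytes, ct: bytes) -> bytes:
    message = stream_xor(key, ct)
    if kdf(message) != key:                 # ties the key back to the message
        raise ValueError("key/message mismatch")
    return message

key, ct = emdk_encrypt(b"attack at dawn")
assert emdk_decrypt(key, ct) == b"attack at dawn"
```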
2020
EUROCRYPT
Succinct Non-Interactive Secure Computation 📺
We present the first maliciously secure protocol for succinct non-interactive secure two-party computation (SNISC): Each player sends just a single message whose length is (essentially) independent of the running time of the function to be computed. The protocol does not require any trusted setup, satisfies superpolynomial-time simulation-based security (SPS), and is based on (subexponential) security of the Learning With Errors (LWE) assumption. We do not rely on SNARKs or "knowledge of exponent"-type assumptions. Since the protocol is non-interactive, the relaxation to SPS security is needed, as standard polynomial-time simulation is impossible; however, a slight variant of our main protocol yields a SNISC with polynomial-time simulation in the CRS model.
2020
EUROCRYPT
Separate Your Domains: NIST PQC KEMs, Oracle Cloning and Read-Only Indifferentiability 📺
It is convenient and common for schemes in the random oracle model to assume access to multiple random oracles (ROs), leaving to implementations the task --we call it oracle cloning-- of constructing them from a single RO. The first part of the paper is a case study of oracle cloning in KEM submissions to the NIST Post-Quantum Cryptography standardization process. We give key-recovery attacks on some submissions arising from mistakes in oracle cloning, and find other submissions using oracle cloning methods whose validity is unclear. Motivated by this, the second part of the paper gives a theoretical treatment of oracle cloning. We give a definition of what is an "oracle cloning method" and what it means for such a method to "work," in a framework we call read-only indifferentiability, a simple variant of classical indifferentiability that yields security not only for usage in single-stage games but also in multi-stage ones. We formalize domain separation, and specify and study many oracle cloning methods, including common domain-separating ones, giving some general results to justify (prove read-only indifferentiability of) certain classes of methods. We are not only able to validate the oracle cloning methods used in many of the unbroken NIST PQC KEMs, but also able to specify and validate oracle cloning methods that may be useful beyond that.
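The canonical domain-separating cloning method is worth seeing concretely: each ``independent'' oracle is derived from the single base oracle by prefixing a fixed-length, non-colliding label. The sketch below shows the method itself; whether a given scheme applies it soundly is exactly what the paper's read-only indifferentiability framework is meant to decide.

```python
import hashlib

# The canonical domain-separating cloning method: derive each "independent"
# random oracle from a single base oracle H by prefixing a fixed label.
# One of the cloning methods the paper analyzes, shown in isolation.

def H(data: bytes) -> bytes:                 # the single base oracle
    return hashlib.sha256(data).digest()

def make_oracle(label: bytes):
    # Fixed-length labels guarantee the prefixed domains never overlap.
    assert len(label) == 4
    return lambda x: H(label + x)

RO1 = make_oracle(b"KEM1")                   # e.g., for key derivation
RO2 = make_oracle(b"KEM2")                   # e.g., for coin derivation
assert RO1(b"m") != RO2(b"m")                # cloned oracles behave independently
```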
2020
EUROCRYPT
Efficient Constructions for Almost-everywhere Secure Computation 📺
We study the problem of \emph{almost-everywhere reliable message transmission}, a key component in designing efficient and secure MPC protocols for sparsely connected networks. The goal is to design low-degree networks which allow a large fraction of honest nodes to communicate reliably even while linearly many nodes can experience byzantine corruption and deviate arbitrarily from the assigned protocol. In this paper, we achieve a log-degree network with a polylogarithmic work complexity protocol, thereby improving over the state-of-the-art result of Chandran et al. (ICALP 2010), who required a polylogarithmic-degree network and had a linear work complexity. In addition, we also achieve: - A work-efficient version of Dwork et al.'s (STOC 1986) butterfly network. - An improvement upon the state-of-the-art protocol of Ben-Or and Ron (Information Processing Letters 1996) in the randomized corruption model, both in work-efficiency and in resilience.
2020
EUROCRYPT
Broadcast-Optimal Two-Round MPC 📺
An intensive effort by the cryptographic community to minimize the round complexity of secure multi-party computation (MPC) has recently led to optimal two-round protocols from minimal assumptions. Most of the proposed solutions, however, make use of a broadcast channel in every round, and it is unclear if the broadcast channel can be replaced by standard point-to-point communication in a round-preserving manner, and if so, at what cost to the resulting security. In this work, we provide a complete characterization of the trade-off between the number of broadcast rounds and the achievable security level for two-round MPC tolerating arbitrarily many active corruptions. Specifically, we consider all possible combinations of broadcast and point-to-point rounds against the three standard levels of security for maliciously secure MPC protocols, namely, security with identifiable, unanimous, and selective abort. For each of these notions and each combination of broadcast and point-to-point rounds, we provide either a tight feasibility or an infeasibility result of two-round MPC. Our feasibility results hold assuming two-round OT in the CRS model, whereas our impossibility results hold given any correlated randomness.
2020
EUROCRYPT
Tornado: Automatic Generation of Probing-Secure Masked Bitsliced Implementations 📺
Cryptographic implementations deployed in real world devices often aim at (provable) security against the powerful class of side-channel attacks while keeping reasonable performance. Last year at Asiacrypt, a new formal verification tool named tightPROVE was put forward to exactly determine whether a masked implementation is secure in the well-deployed probing security model for any given security order $t$. Also recently, a compiler named Usuba was proposed to automatically generate bitsliced implementations of cryptographic primitives. This paper goes one step further in the security and performance achievements with a new automatic tool named Tornado. In a nutshell, from the high-level description of a cryptographic primitive, Tornado produces a functionally equivalent bitsliced masked implementation at any desired order proven secure in the probing model, but additionally in the so-called register probing model which much better fits the reality of software implementations. This framework is obtained by the integration of Usuba with tightPROVE+, which extends tightPROVE with the ability to verify the security of implementations in the register probing model and to fix them by inserting refresh gadgets at carefully chosen locations. We demonstrate Tornado on the lightweight cryptographic primitives selected for the second round of the NIST competition which claim to be masking friendly. It displays the performance of the resulting masked implementations for several masking orders and proves their security in the register probing model.
2020
EUROCRYPT
Mathematics and Cryptography: A Marriage of Convenience? 📺
Invited Paper
Mathematics and cryptography have a long history together, with the ups and downs inherent in any long relationship. Whether it is a marriage of convenience or a love match, their progeny have lives of their own and have had an impact on the world. This invited lecture will briefly recall some high points from the past, give speculation and encouragement for the future of this marriage, and give counseling on how to improve communication, resolve conflicts, and play well together, based on personal experience and lessons learned.
2020
EUROCRYPT
Mind the Composition: Birthday Bound Attacks on EWCDMD and SoKAC21 📺
In an early version of CRYPTO'17, Mennink and Neves proposed \textsf{EWCDMD}, a dual of \textsf{EWCDM}, and showed $n$-bit security, where $n$ is the block size of the underlying block cipher. In CRYPTO'19, Chen et al. proposed permutation based design \textsf{SoKAC21} and showed $2n/3$-bit security, where $n$ is the input size of the underlying permutation. In this paper we show birthday bound attacks on \textsf{EWCDMD} and \textsf{SoKAC21}, invalidating their security claims. Both attacks exploit an inherent composition nature present in the constructions. Motivated by the above two attacks exploiting the composition nature, we consider some generic relevant composition based constructions of ideal primitives (possibly in the ideal permutation and random oracle model) and present birthday bound distinguishers for them. In particular, we demonstrate a birthday bound distinguisher against (1) a secret random permutation followed by a public random function and (2) composition of two secret random functions. Our distinguishers for \textsf{SoKAC21} and \textsf{EWCDMD} are direct consequences of (1) and (2) respectively.
2020
EUROCRYPT
Double-Base Chains for Scalar Multiplications on Elliptic Curves 📺
Double-base chains (DBCs) are widely used to speed up scalar multiplications on elliptic curves. We present three results on DBCs. First, we describe the structure of the set containing all DBCs and propose an iterative algorithm to compute the number of DBCs for a positive integer. This is the first polynomial-time algorithm to compute the number of DBCs for positive integers. Secondly, we present an asymptotic lower bound of $\frac{\log n}{8.25}$ on the average Hamming weight of DBCs for a positive integer $n$. This result answers an open question about the Hamming weights of DBCs. Thirdly, we propose a new algorithm to generate an optimal DBC for any positive integer. The time complexity of this algorithm is $\mathcal{O}\left(\left(\log n\right)^2 \log\log n\right)$ bit operations and the space complexity is $\mathcal{O}\left(\left(\log n\right)^{2}\right)$ bits of memory. This algorithm accelerates the recoding procedure by more than $6$ times compared to the state-of-the-art work of Bernstein, Chuengsatiansup, and Lange. The Hamming weights of optimal DBCs are over $60$\% smaller than those of NAFs. Experimental results show that scalar multiplication using our optimal DBC is about $13$\% faster than that using the non-adjacent form on elliptic curves over large prime fields.
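For intuition, a double-base representation writes n as a signed sum of terms 2^a·3^b. The sketch below is the classical greedy method (not the optimal-DBC algorithm of the paper): repeatedly subtract the {2,3}-integer closest to the remainder. Each step shrinks the remainder by at least a factor 3, so the expansion is short.

```python
def closest_23_integer(n):
    # nearest number of the form 2^a * 3^b to n (n >= 1)
    best, pow3 = None, 1
    while pow3 <= 2 * n:
        a = max((n // pow3).bit_length() - 1, 0)
        for cand in (pow3 << a, pow3 << (a + 1)):
            if best is None or abs(cand - n) < abs(best - n):
                best = cand
        pow3 *= 3
    return best

def greedy_double_base(n):
    # signed double-base expansion: n = sum of +-2^a*3^b (greedy, not optimal)
    terms = []
    while n != 0:
        sign = 1 if n > 0 else -1
        t = closest_23_integer(abs(n))
        terms.append(sign * t)
        n -= sign * t
    return terms

n = 314159
terms = greedy_double_base(n)
assert sum(terms) == n
print(terms)
```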
2020
EUROCRYPT
Low Weight Discrete Logarithms and Subset Sum in $2^{0.65n}$ with Polynomial Memory 📺
We propose two heuristic polynomial memory collision finding algorithms for the low Hamming weight discrete logarithm problem in any abelian group $G$. The first one is a direct adaptation of the Becker-Coron-Joux (BCJ) algorithm for subset sum to the discrete logarithm setting. The second one significantly improves on this adaptation for all possible weights using a more involved application of the representation technique together with some new Markov chain analysis. In contrast to other low weight discrete logarithm algorithms, our second algorithm's time complexity interpolates to Pollard's $|G|^{\frac 1 2}$ bound for general discrete logarithm instances. We also introduce a new heuristic subset sum algorithm with polynomial memory that improves on BCJ's $2^{0.72n}$ time bound for random subset sum instances $a_1, \ldots, a_n, t \in \mathbb{Z}_{2^n}$. Technically, we introduce a novel nested collision finding for subset sum -- inspired by the NestedRho algorithm from Crypto '16 -- that recursively produces collisions. We first show how to instantiate our algorithm with run time $2^{0.649n}$. Using further tricks, we are then able to improve its complexity down to $2^{0.645n}$.
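The basic primitive underlying such polynomial-memory algorithms is memoryless collision finding. The following sketch is the standard Floyd cycle-finding method (not the paper's nested BCJ-style construction): it locates a colliding pair of a function f on a finite domain using O(1) memory.

```python
def floyd_collision(f, x0):
    # returns (a, b) with a != b and f(a) == f(b), using constant memory;
    # assumes x0's rho-shaped trajectory has a non-empty tail, which holds
    # with high probability for a random-looking f on a large domain
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:                  # phase 1: find a meeting point
        tortoise, hare = f(tortoise), f(f(hare))
    prev_t, tortoise = x0, f(x0)             # phase 2: walk to the cycle entry
    prev_h, hare = hare, f(hare)
    while tortoise != hare:
        prev_t, tortoise = tortoise, f(tortoise)
        prev_h, hare = hare, f(hare)
    return prev_t, prev_h                    # both map to the cycle entry

N = 2**20 - 3                                # a prime modulus
f = lambda x: (x * x + 1) % N                # behaves like a random function
a, b = floyd_collision(f, 12345)
assert a != b and f(a) == f(b)
```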
2020
EUROCRYPT
Private Information Retrieval with Sublinear Online Time 📺
Best Young Researcher Award
We present the first protocols for private information retrieval that allow fast (sublinear-time) database lookups without increasing the server-side storage requirements. To achieve these efficiency goals, our protocols work in an offline/online model. In an offline phase, which takes place before the client has decided which database bit it wants to read, the client fetches a short string from the servers. In a subsequent online phase, the client can privately retrieve its desired bit of the database by making a second query to the servers. By pushing the bulk of the server-side computation into the offline phase (which is independent of the client’s query), our protocols allow the online phase to complete very quickly—in time sublinear in the size of the database. Our protocols can provide statistical security in the two-server setting and computational security in the single-server setting. Finally, we prove that, in this model, our protocols are optimal in terms of the trade-off they achieve between communication and running time.
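For context, the classical two-server information-theoretic PIR baseline answers every query with a full pass over the database, i.e., in linear online time; that linear cost is exactly what the offline/online model above removes. A minimal sketch of the baseline (not of the paper's protocols):

```python
import secrets

def xor_inner(db_bits, sel):
    # XOR of the database bits selected by the 0/1 vector `sel`
    acc = 0
    for bit, s in zip(db_bits, sel):
        acc ^= bit & s
    return acc

def two_server_pir(db_bits, i):
    # client: a uniform mask and the same mask with position i flipped each
    # look uniformly random on their own, so neither server learns i
    q1 = [secrets.randbelow(2) for _ in db_bits]
    q2 = q1.copy()
    q2[i] ^= 1
    a1 = xor_inner(db_bits, q1)   # server 1: linear online work
    a2 = xor_inner(db_bits, q2)   # server 2: linear online work
    return a1 ^ a2                # equals db_bits[i]

db = [secrets.randbelow(2) for _ in range(1000)]
assert two_server_pir(db, 42) == db[42]
```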
2020
EUROCRYPT
Side-channel Masking with Pseudo-Random Generator 📺
High-order masking countermeasures against side-channel attacks usually require plenty of randomness during their execution. For security against t probes, the classical ISW countermeasure requires O(t^2 s) random bits, where s is the circuit size. However running a True Random Number Generator (TRNG) can be costly in practice and become a bottleneck on embedded devices. In [IKL+13] the authors introduced the notion of a robust pseudo-random number generator (PRG), which must remain secure even against an adversary who can probe at most t wires. They showed that when embedding a robust PRG within a private circuit, the number of random bits can be reduced to O(t^4), independent of the circuit size s (up to a logarithmic factor). Using bipartite expander graphs, this can be further reduced to O(t^(3+eps)); however the resulting construction is impractical. In this paper we describe a practical construction where the number of random bits is only O(t^2) for security against t probes, without expander graphs; moreover the running time of each pseudo-random generation goes down from O(t^4) to O(t). Our technique consists of using multiple independent PRGs instead of a single one. We show that for ISW circuits, the robustness property of the PRG is not required anymore, which leads to simple and efficient constructions. For example, for AES we only need 48 bytes of randomness to get second-order security (t=2), instead of 2880 in the original Rivain-Prouff countermeasure; when implemented on an ARM-based embedded device with a relatively slow TRNG, we obtain a 50% speed-up compared to Rivain-Prouff.
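A minimal sketch of the idea under simplifying assumptions (first-order security only, SHA-256 in counter mode as a toy stand-in PRG; none of this code is from the paper): the fresh bit each ISW AND gadget needs is drawn from a cheap PRG stream seeded once by the TRNG, instead of calling the TRNG per gadget.

```python
import hashlib

class ToyPRG:
    # expands a short seed into a bit stream (SHA-256 in counter mode);
    # a stand-in for the multiple independent PRGs used in the paper
    def __init__(self, seed: bytes):
        self.seed, self.ctr, self.buf = seed, 0, b""
    def bit(self) -> int:
        if not self.buf:
            self.buf = hashlib.sha256(
                self.seed + self.ctr.to_bytes(8, "big")).digest()
            self.ctr += 1
        b, self.buf = self.buf[0] & 1, self.buf[1:]
        return b                  # one bit per byte: wasteful but simple

def isw_and(a, b, prg):
    # first-order ISW AND on boolean sharings a = (a0, a1), b = (b0, b1);
    # one fresh random bit per gadget, here supplied by the PRG
    r = prg.bit()
    c0 = (a[0] & b[0]) ^ r
    c1 = (a[1] & b[1]) ^ r ^ (a[0] & b[1]) ^ (a[1] & b[0])
    return c0, c1

prg = ToyPRG(b"seed-from-a-single-TRNG-call")
a, b = (1, 0), (1, 1)             # sharings of 1 and 0
c = isw_and(a, b, prg)
assert c[0] ^ c[1] == (a[0] ^ a[1]) & (b[0] ^ b[1])
```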
2020
EUROCRYPT
Blackbox Secret Sharing Revisited: A Coding-Theoretic Approach with Application to Expansionless Near-Threshold Schemes 📺
A {\em blackbox} secret sharing (BBSS) scheme works in exactly the same way for all finite Abelian groups $G$; it can be instantiated for any such group $G$ and {\em only} black-box access to its group operations and to random group elements is required. A secret is a single group element and each of the $n$ players' shares is a vector of such elements. Share-computation and secret-reconstruction are by integer linear combinations. These do not depend on $G$, and neither do the privacy and reconstruction parameters $t,r$. This classical, fundamental primitive was introduced by Desmedt and Frankel (CRYPTO 1989) in their context of ``threshold cryptography.'' The expansion factor is the total number of group elements in a full sharing divided by $n$. For threshold BBSS with $t$-privacy ($1\leq t \leq n-1$), $t+1$-reconstruction and arbitrary $n$, constructions with minimal expansion $O(\log n)$ exist (CRYPTO 2002, 2005). These results are firmly rooted in number theory; each makes (different) judicious choices of orders in number fields admitting a vector of elements of very large length (in the number field degree) whose corresponding Vandermonde determinant is sufficiently controlled so as to enable BBSS by a suitable adaptation of Shamir's scheme. Alternative approaches generally lead to very large expansion. The state of the art of BBSS has not changed for the last 15 years. Our contributions are two-fold. (1) We introduce a novel, nontrivial, effective construction of BBSS based on {\em coding theory} instead of number theory. For threshold-BBSS we also achieve minimal expansion factor $O(\log n)$. (2) Our method is more versatile. Namely, we show, for the first time, BBSS that is {\em near-threshold}, i.e., $r-t$ is an arbitrarily small constant fraction of $n$, {\em and} that has expansion factor $O(1)$, i.e., individual share-vectors of {\em constant} length (``asymptotically expansionless''). The threshold can be set essentially freely across the full range. We also show that the expansion is minimal for near-threshold BBSS and that such BBSS cannot be attained by previous methods. Our general construction is based on a well-known mathematical principle, the local-global principle. More precisely, we first construct BBSS over local rings through either Reed-Solomon or algebraic geometry codes. We then ``glue'' these schemes together in a dedicated manner to obtain a global secret sharing scheme, i.e., defined over the integers, which, as we finally prove using novel insights, has the desired BBSS properties. Though our main purpose here is advancing BBSS for its own sake, we also briefly address possible protocol applications.
2020
EUROCRYPT
Tight Security Bounds for Double-block Hash-then-Sum MACs 📺
In this work, we study the security of deterministic MAC constructions with a double-block internal state, captured by the double-block hash-then-sum (DBH) paradigm. Most DBH constructions, including PolyMAC, SUM-ECBC, PMAC-Plus, 3kf9 and LightMAC-Plus, have been proved to be pseudorandom up to 2^{2n/3} queries when they are instantiated with an n-bit block cipher, while the best known generic attacks require 2^{3n/4} queries. We close this gap by proving the PRF-security of DBH constructions up to 2^{3n/4} queries (ignoring the maximum message length). The core of the security proof is to refine Mirror theory, which systematically estimates the number of solutions to a system of equations and non-equations, and to apply it to prove the security of the finalization function. We then identify security requirements on the internal hash functions that ensure 3n/4-bit security of the resulting constructions when combined with the finalization function. Within this framework, we prove the security of DBH whose internal hash function is given as the concatenation of a universal hash function using two independent keys. This class of constructions includes PolyMAC and SUM-ECBC. Moreover, we prove the security of PMAC-Plus, 3kf9 and LightMAC-Plus up to 2^{3n/4} queries.
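Structurally, every DBH MAC computes two n-bit hash halves of the message and XORs their independently keyed encryptions. A toy skeleton of that shape only (keyed hash and block cipher replaced by SHA-256-based stand-ins; a real instance such as SUM-ECBC uses two CBC-MAC halves and an actual block cipher):

```python
import hashlib

def toy_prf(key: bytes, data: bytes, n: int = 8) -> int:
    # n-byte keyed function standing in for a hash half / block cipher
    return int.from_bytes(hashlib.sha256(key + data).digest()[:n], "big")

def dbh_tag(k1, k2, k3, k4, msg: bytes) -> int:
    # double-block hash-then-sum: two independently keyed n-bit hashes of
    # the message, each encrypted under its own key, then summed (XORed)
    h1 = toy_prf(k1, msg).to_bytes(8, "big")
    h2 = toy_prf(k2, msg).to_bytes(8, "big")
    return toy_prf(k3, h1) ^ toy_prf(k4, h2)

print(hex(dbh_tag(b"k1", b"k2", b"k3", b"k4", b"message")))
```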
2020
EUROCRYPT
On the Quantum Complexity of the Continuous Hidden Subgroup Problem 📺
The Hidden Subgroup Problem (HSP) aims at capturing all problems that might be solvable in quantum polynomial time following the blueprint of Shor's celebrated algorithm. Successful solutions to this problem over various commutative groups allow one to efficiently perform number-theoretic tasks such as factoring or finding discrete logarithms. The latest successful generalization (Eisenträger et al. STOC 2014) considers the problem of finding a full-rank lattice as the hidden subgroup of the continuous vector space R^m, even for large dimensions m. It unlocked new cryptanalytic algorithms (Biasse-Song SODA 2016, Cramer et al. EUROCRYPT 2016 and 2017), in particular to find mildly short vectors in ideal lattices. The cryptanalytic relevance of such a problem raises the question of a more refined and quantitative complexity analysis. In the light of the increasing physical difficulty of maintaining a large entanglement of qubits, the degree of concern may differ depending on whether the above algorithm requires only linearly many qubits or a much larger polynomial number of qubits. This is the question we start addressing with this work. We propose a detailed analysis of (a variation of) the aforementioned HSP algorithm, and conclude on its complexity as a function of all the relevant parameters. Our modular analysis is tailored to support the optimization of future specializations to cases of cryptanalytic interest. We suggest a few ideas in this direction.
2020
EUROCRYPT
Statistical ZAP Arguments 📺
Dwork and Naor (FOCS'00) first introduced and constructed two-message public-coin witness-indistinguishable proofs (ZAPs) for NP based on trapdoor permutations. Since then, ZAPs have also been obtained based on the decisional linear assumption on bilinear maps and on indistinguishability obfuscation, and they have proven extremely useful in the design of several cryptographic primitives. However, all known constructions of two-message public-coin (or even publicly verifiable) proof systems only guarantee witness indistinguishability against computationally bounded verifiers. In this paper, we construct the first public-coin two-message witness-indistinguishable (WI) arguments for NP with {\em statistical} privacy, assuming quasi-polynomial hardness of the learning with errors (LWE) assumption. We also show that the same protocol has a super-polynomial simulator (SPS), which yields the first public-coin SPS statistical zero-knowledge argument. Prior to this, there were no known constructions of two-message publicly verifiable WI protocols under lattice assumptions, even satisfying the weaker notion of computational witness indistinguishability.
2020
EUROCRYPT
Tight Time-Space Lower Bounds for Finding Multiple Collision Pairs and Their Applications 📺
We consider a \emph{collision search problem} (CSP), where given a parameter $C$, the goal is to find $C$ collision pairs in a random function $f:[N] \rightarrow [N]$ (where $[N] = \{0,1,\ldots,N-1\}$) using $S$ bits of memory. Algorithms for CSP have numerous cryptanalytic applications such as space-efficient attacks on double and triple encryption. The best known algorithm for CSP is \emph{parallel collision search} (PCS) published by van Oorschot and Wiener, which achieves the time-space tradeoff $T^2 \cdot S = \tilde{O}(C^2 \cdot N)$. In this paper, we prove that any algorithm for CSP satisfies $T^2 \cdot S = \tilde{\Omega}(C^2 \cdot N)$, hence the best known time-space tradeoff is optimal (up to poly-logarithmic factors in $N$). On the other hand, we give strong evidence that proving similar unconditional time-space tradeoff lower bounds on CSP applications (such as breaking double and triple encryption) may be very difficult, and would imply a breakthrough in complexity theory. Hence, we propose a new restricted model of computation and prove that under this model, the best known time-space tradeoff attack on double encryption is optimal.
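The PCS algorithm that the lower bound matches can be sketched compactly with distinguished points: trails of f are walked until the low bits of the current point vanish, only those endpoints are stored (this is the memory S), and two trails ending at the same distinguished point are replayed to locate their merge, i.e., a collision. A sequential toy version (our own sketch, assuming a random-looking f and small parameters):

```python
import random

def pcs_collisions(f, N, dist_bits, num_pairs):
    # van Oorschot-Wiener parallel collision search, sequential toy version;
    # memory S = size of `table`, realizing the tradeoff T^2 * S ~ C^2 * N
    mask = (1 << dist_bits) - 1
    table, found = {}, []
    limit = 32 << dist_bits
    while len(found) < num_pairs:
        start = x = random.randrange(N)
        steps = 0
        while x & mask and steps < limit:    # walk to a distinguished point
            x, steps = f(x), steps + 1
        if x & mask:
            continue                         # rare: trapped in a cycle
        if x in table and table[x][0] != start:
            s2, l2 = table[x]
            (a, la), (b, lb) = sorted(((start, steps), (s2, l2)),
                                      key=lambda t: -t[1])
            for _ in range(la - lb):         # align the longer trail
                a = f(a)
            if a != b:                       # genuinely distinct trails
                while f(a) != f(b):          # replay to the merge point
                    a, b = f(a), f(b)
                found.append((a, b))
        else:
            table[x] = (start, steps)
    return found

N = 2**20 - 3
f = lambda x: (x * x + 1) % N
for a, b in pcs_collisions(f, N, dist_bits=6, num_pairs=3):
    assert a != b and f(a) == f(b)
```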
2020
EUROCRYPT
Integral Matrix Gram Root and Lattice Gaussian Sampling without Floats 📺
Many advanced lattice-based cryptosystems require sampling lattice points from Gaussian distributions. One challenge for this task is that all current algorithms resort to floating-point arithmetic (FPA) at some point, which has numerous drawbacks in practice: it requires numerical stability analysis, extra storage for high precision, lazy/backtracking techniques for efficiency, and may suffer from weak determinism which can completely break certain schemes. In this paper, we give techniques to implement Gaussian sampling over general lattices without using FPA. To this end, we revisit the approach of Peikert, using perturbation sampling. Peikert's approach uses continuous Gaussian sampling and some decomposition $\Sigma = A A^t$ of the target covariance matrix $\Sigma$. The suggested decomposition, e.g. the Cholesky decomposition, gives rise to a square matrix $A$ with real (not integer) entries. Our idea, in a nutshell, is to replace this decomposition by an integral one. While there is in general no integer solution if we restrict $A$ to being a square matrix, we show that such a decomposition can be efficiently found by allowing $A$ to be wider (say $n \times 9n$). This can be viewed as an extension of Lagrange's four-square theorem to matrices. In addition, we adapt our integral decomposition algorithm to the ring setting: for power-of-2 cyclotomics, we can exploit the tower of rings structure for improved complexity and compactness.
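The scalar case of this integral Gram-root question is exactly Lagrange's four-square theorem: the 1x1 covariance $\Sigma = (n)$ with $n \ge 0$ always admits an integral wide root $A = (a\ b\ c\ d)$ with $A A^t = a^2 + b^2 + c^2 + d^2 = n$. A brute-force illustration of that base case (not the paper's matrix algorithm):

```python
from math import isqrt

def four_squares(n):
    # find (a, b, c, d) with a^2 + b^2 + c^2 + d^2 = n (always exists by
    # Lagrange); the 1x4 row (a b c d) is an integral Gram root of (n)
    for a in range(isqrt(n), -1, -1):
        for b in range(isqrt(n - a * a), -1, -1):
            for c in range(isqrt(n - a * a - b * b), -1, -1):
                d2 = n - a * a - b * b - c * c
                d = isqrt(d2)
                if d * d == d2:
                    return a, b, c, d

n = 2023
a, b, c, d = four_squares(n)
assert a * a + b * b + c * c + d * d == n
```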
2020
EUROCRYPT
The Retracing Boomerang Attack 📺
Boomerang attacks are extensions of differential attacks that make it possible to combine two unrelated differential properties of the first and second part of a cryptosystem with probabilities $p$ and $q$ into a new differential-like property of the whole cryptosystem with probability $p^2q^2$ (since each one of the properties has to be satisfied twice). In this paper we describe a new version of boomerang attacks which uses the counterintuitive idea of throwing out most of the data in order to force equalities between certain values on the ciphertext side. In certain cases, this creates a correlation between the four probabilistic events, which increases the probability of the combined property to $p^2q$ and increases the signal-to-noise ratio of the resultant distinguisher. We call this variant a {\it retracing boomerang attack} since we make sure that the boomerang we throw follows the same path on its forward and backward directions. To demonstrate the power of the new technique, we apply it to the case of 5-round AES. This version of AES was repeatedly attacked by a large variety of techniques, but for twenty years its complexity had remained stuck at $2^{32}$. At Crypto'18 it was finally reduced to $2^{24}$ (for full key recovery), and with our new technique we can further reduce the complexity of full key recovery to the surprisingly low value of $2^{16.5}$ (i.e., only $90,000$ encryption/decryption operations are required for a full key recovery on half the rounds of AES). In addition to improving previous attacks, our new technique unveils a hidden relationship between boomerang attacks and two other cryptanalytic techniques, the yoyo game and the recently introduced mixture differentials.
2020
EUROCRYPT
Optimal Merging in Quantum $k$-xor and $k$-sum Algorithms 📺
The $k$-xor or Generalized Birthday Problem aims at finding, given $k$ lists of bit-strings, a $k$-tuple among them XORing to 0. If the lists are unbounded, the best classical (exponential) time complexity has remained unbeaten since Wagner's CRYPTO 2002 paper. If the lists are bounded (of the same size) and such that there is a single solution, the \emph{dissection algorithms} of Dinur \emph{et al.} (CRYPTO 2012) improve the memory usage over a simple meet-in-the-middle. In this paper, we study quantum algorithms for the $k$-xor problem. With unbounded lists and quantum access, we improve previous work by Grassi \emph{et al.} (ASIACRYPT 2018) for almost all $k$. Next, we extend our study to lists of any size and with classical access only. We define a set of ``merging trees'' which represent the best known strategies for quantum and classical merging in $k$-xor algorithms, and prove that our method is optimal among these. Our complexities are confirmed by a Mixed Integer Linear Program that computes the best strategy for a given $k$-xor problem. All our algorithms also apply when considering modular additions instead of bitwise xors. This framework enables us to give new improved quantum $k$-xor algorithms for all $k$ and list sizes. Applications include the subset-sum problem, LPN with limited memory and the multiple-encryption problem.
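The classical baseline that the merging trees generalize is Wagner's algorithm. For k = 4 it reads: merge L1 with L2 and L3 with L4 on the low n/3 bits, then match the two merged lists on the remaining 2n/3 bits. A toy sketch under the usual heuristic that the lists behave like random values:

```python
import random
from collections import defaultdict

def merge(A, B, mask):
    # xor value -> one (a, b) pair, over all pairs vanishing on `mask`
    idx = defaultdict(list)
    for b in B:
        idx[b & mask].append(b)
    out = {}
    for a in A:
        for b in idx[a & mask]:
            out.setdefault(a ^ b, (a, b))
    return out

def four_xor(L1, L2, L3, L4, n):
    low = (1 << (n // 3)) - 1
    left = merge(L1, L2, low)          # ~2^{n/3} partial collisions
    right = merge(L3, L4, low)
    for v, (c, d) in right.items():
        if v in left:                  # full collision on all n bits
            a, b = left[v]
            return a, b, c, d          # a ^ b ^ c ^ d == 0
    return None                        # ~1 solution expected; retry if None

n = 24
lists = [[random.getrandbits(n) for _ in range(1 << (n // 3))]
         for _ in range(4)]
sol = four_xor(*lists, n)
if sol:
    a, b, c, d = sol
    assert a ^ b ^ c ^ d == 0
```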
2020
EUROCRYPT
Implementing Grover oracles for quantum key search on AES and LowMC 📺
Grover's search algorithm gives a quantum attack against block ciphers by searching for a key that matches a small number of plaintext-ciphertext pairs. This attack uses O(√N) calls to the cipher to search a key space of size N. Previous work in the specific case of AES derived the full gate cost by analyzing quantum circuits for the cipher, but focused on minimizing the number of qubits. In contrast, we study the cost of quantum key search attacks under a depth restriction and introduce techniques that reduce the oracle depth, even if it requires more qubits. As cases in point, we design quantum circuits for the block ciphers AES and LowMC. Our circuits give a lower overall attack cost in both the gate count and depth-times-width cost models. In NIST's post-quantum cryptography standardization process, security categories are defined based on the concrete cost of quantum key search against AES. We present new, lower cost estimates for each category, so our work has immediate implications for the security assessment of post-quantum cryptography. As part of this work, we release Q# implementations of the full Grover oracle for AES-128, -192, -256 and for the three LowMC instantiations used in Picnic, including unit tests and code to reproduce our quantum resource estimates. To the best of our knowledge, these are the first two such full implementations and automatic resource estimations.
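The headline numbers are easy to reproduce at back-of-envelope level. The snippet below is our own arithmetic (not the paper's gate counts): the standard Grover iteration count for a k-bit key, and the well-known quadratic parallelization penalty that makes depth restrictions so costly, since finishing within j iterations per machine needs about N(π/4j)² machines.

```python
from math import ceil, pi, sqrt

def grover_iterations(k):
    # optimal number of Grover iterations for a key space of size N = 2^k
    return ceil(pi / 4 * sqrt(2.0 ** k))

def machines_needed(k, iters_per_machine):
    # Grover parallelizes badly: to finish with only j iterations per
    # machine, about N * (pi / (4 j))^2 independent machines are required
    N = 2.0 ** k
    return ceil(N * (pi / (4 * iters_per_machine)) ** 2)

print(f"AES-128: about 2^{grover_iterations(128).bit_length() - 1} iterations")
print(f"machines at 2^40 iterations each: about "
      f"2^{machines_needed(128, 2 ** 40).bit_length() - 1}")
```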
2020
EUROCRYPT
Marlin: Preprocessing zkSNARKs with Universal and Updatable SRS 📺
We present a general methodology to construct preprocessing zkSNARKs where the structured reference string (SRS) is universal and updatable. This exploits a novel application of *holographic* IOPs, a natural generalization of holographic PCPs [Babai et al., STOC 1991]. We use our methodology to obtain a preprocessing zkSNARK where the SRS has linear size and arguments have constant size. Our construction improves on Sonic [Maller et al., CCS 2019], the prior state of the art in this setting, in all efficiency parameters: proving is an order of magnitude faster and verification is twice as fast, even with smaller SRS size and argument size. Our construction is most efficient when instantiated in the algebraic group model (also used by Sonic), but we also demonstrate how to realize it under concrete knowledge assumptions. The core of our zkSNARK is a new holographic IOP for rank-1 constraint satisfiability (R1CS), which is the first to achieve linear proof length and constant query complexity (among other efficiency features).
2020
EUROCRYPT
Private Aggregation from Fewer Anonymous Messages 📺
Consider the setup where $n$ parties are each given an element $x_i$ in the finite field $\mathbb{F}_q$ and the goal is to compute the sum $\sum_i x_i$ in a secure fashion and with as little communication as possible. We study this problem in the \emph{anonymized model} of Ishai et al. (FOCS 2006) where each party may broadcast anonymous messages on an insecure channel. We present a new analysis of the one-round ``split and mix'' protocol of Ishai et al. In order to achieve the same security parameter, our analysis reduces the required number of messages by a $\Theta(\log n)$ multiplicative factor. We also prove lower bounds showing that the dependence of the number of messages on the domain size, the number of parties, and the security parameter is essentially tight. Using a reduction of Balle et al. (2019), our improved analysis of the protocol of Ishai et al. yields, in the same model, an $\left(\varepsilon, \delta\right)$-differentially private protocol for aggregation that, for any constant $\varepsilon > 0$ and any $\delta = \frac{1}{\mathrm{poly}(n)}$, incurs only a constant error and requires only a \emph{constant number of messages} per party. Previously, such a protocol was known only for $\Omega(\log n)$ messages per party.
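The split-and-mix protocol itself is one line of algebra: every party splits its input into m additive shares mod q and sends each share as a separate anonymous message, and the analyzer simply sums the shuffled pool. A toy sketch (our own illustration):

```python
import random

def split_and_mix(inputs, q, m):
    # each party sends m additive shares of x_i over the anonymous channel
    pool = []
    for x in inputs:
        shares = [random.randrange(q) for _ in range(m - 1)]
        shares.append((x - sum(shares)) % q)
        pool.extend(shares)
    random.shuffle(pool)     # the anonymizer hides which share came from whom
    return pool

q, m = 2**31 - 1, 3
xs = [random.randrange(q) for _ in range(100)]
pool = split_and_mix(xs, q, m)
assert sum(pool) % q == sum(xs) % q    # the sum survives; the origins do not
```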
2020
EUROCRYPT
Two-Round Oblivious Transfer from CDH or LPN 📺
We show a new general approach for constructing maliciously-secure two-round oblivious transfer (OT). Specifically, we provide a generic sequence of transformations to upgrade a very basic notion of two-round OT, which we call elementary OT, to UC-secure OT. We then give simple constructions of elementary OT under the Computational Diffie-Hellman (CDH) assumption or the Learning Parity with Noise (LPN) assumption, yielding the first constructions of malicious (UC-secure) two-round OT under these assumptions. Since two-round OT is complete for two-round 2-party and multi-party computation in the malicious setting, we also achieve the first constructions of the latter under these assumptions.
2020
EUROCRYPT
New Constructions of Statistical NIZKs: Dual-Mode DV-NIZKs and More 📺
Non-interactive zero-knowledge proofs (NIZKs) are important primitives in cryptography. A major challenge since the early works on NIZKs has been to construct NIZKs with a statistical zero-knowledge guarantee against unbounded verifiers. In the common reference string (CRS) model, such "statistical NIZK arguments" are currently known from k-Lin in a pairing-group and from LWE. In the (reusable) designated-verifier model (DV-NIZK), where a trusted setup algorithm generates a reusable verification key for checking proofs, we also have a construction from DCR. If we relax our requirements to computational zero-knowledge, we additionally have NIZKs from factoring and CDH in a pairing group in the CRS model, and from nearly all assumptions that imply public-key encryption (e.g., CDH, LPN, LWE) in the designated-verifier model. Thus, there still remains a gap in our understanding of statistical NIZKs in both the CRS and the designated-verifier models. In this work, we develop new techniques for constructing statistical NIZK arguments. First, we construct statistical DV-NIZK arguments from the k-Lin assumption in pairing-free groups, the QR assumption, and the DCR assumption. These are the first constructions in pairing-free groups and from QR that satisfy statistical zero-knowledge. All of our constructions are secure even if the verification key is chosen maliciously (i.e., they are "malicious-designated-verifier" NIZKs), and moreover, they satisfy a "dual-mode" property where the CRS can be sampled from two computationally indistinguishable distributions: one distribution yields statistical DV-NIZK arguments while the other yields computational DV-NIZK proofs. We then show how to adapt our k-Lin construction in a pairing group to obtain new publicly-verifiable statistical NIZK arguments from pairings with a qualitatively weaker assumption than existing constructions of pairing-based statistical NIZKs. Our constructions follow the classic paradigm of Feige, Lapidot, and Shamir (FLS). While the FLS framework has traditionally been used to construct computational (DV)-NIZK proofs, we newly show that the same framework can be leveraged to construct dual-mode (DV)-NIZKs.
2020
EUROCRYPT
Combiners for Functional Encryption, Unconditionally 📺
Functional encryption (FE) combiners allow one to combine many candidates for a functional encryption scheme, possibly based on different computational assumptions, into another functional encryption candidate with the guarantee that the resulting candidate is secure as long as at least one of the original candidates is secure. The fundamental question in this area is whether FE combiners exist. There has been a series of works, Ananth et al. (CRYPTO '16), Ananth-Jain-Sahai (EUROCRYPT '17), and Ananth et al. (TCC '19), on constructing FE combiners from various assumptions. We give the first unconditional construction of combiners for functional encryption, resolving this question completely. Our construction immediately implies an unconditional universal functional encryption scheme, an FE scheme that is secure if such an FE scheme exists. Previously such results either relied on algebraic assumptions or required subexponential security assumptions.
2020
EUROCRYPT
On the Streaming Indistinguishability of a Random Permutation and a Random Function 📺
An adversary with $S$ bits of memory obtains a stream of $Q$ elements that are uniformly drawn from the set $\{1,2,\ldots,N\}$, either with or without replacement. This corresponds to sampling $Q$ elements using either a random function or a random permutation. The adversary's goal is to distinguish between these two cases. This problem was first considered by Jaeger and Tessaro (EUROCRYPT 2019), who proved that the adversary's advantage is upper bounded by $\sqrt{Q \cdot S/N}$. Jaeger and Tessaro used this bound as a streaming switching lemma which allowed proving that known time-memory tradeoff attacks on several modes of operation (such as counter-mode) are optimal up to a factor of $O(\log N)$ if $Q \cdot S \approx N$. However, the bound's proof assumed an unproven combinatorial conjecture. Moreover, if $Q \cdot S \ll N$ there is a gap between the upper bound of $\sqrt{Q \cdot S/N}$ and the $Q \cdot S/N$ advantage obtained by known attacks. In this paper, we prove a tight upper bound (up to poly-logarithmic factors) of $O(\log Q \cdot Q \cdot S/N)$ on the adversary's advantage in the streaming distinguishing problem. The proof does not require a conjecture and is based on a hybrid argument that gives rise to a reduction from the unique-disjointness communication complexity problem to streaming.
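The attack that meets the bound is simple to state: cache S stream elements and flag any repeat, which is impossible when sampling without replacement. A small simulation of both worlds (toy parameters, our own illustration):

```python
import random

def sample_stream(N, Q, with_replacement):
    if with_replacement:                        # random function
        return [random.randrange(N) for _ in range(Q)]
    return random.sample(range(N), Q)           # random permutation

def memory_bounded_guess(stream, S):
    # store at most S elements; any repeat proves "function"
    seen = set()
    for x in stream:
        if x in seen:
            return "function"
        if len(seen) < S:
            seen.add(x)
    return "permutation"

N, Q, S, TRIALS = 2**20, 2**10, 2**8, 300
hits = sum(memory_bounded_guess(sample_stream(N, Q, True), S) == "function"
           for _ in range(TRIALS))
print(hits / TRIALS)    # advantage ~ Q*S/N; no false positives can occur
```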
2020
EUROCRYPT
Fault Template Attacks on Block Ciphers Exploiting Fault Propagation 📺
Fault attacks (FA) are one of the most potent practical threats to modern cryptographic implementations. Over the years the FA techniques have evolved, gradually moving towards the exploitation of device-centric properties of the faults. In this paper, we exploit the fact that activation and propagation of a fault through a given combinational circuit (i.e., observability of a fault) is data-dependent. Next, we show that this property of combinational circuits leads to powerful Fault Template Attacks (FTA), even for implementations having dedicated protections against both power- and fault-based vulnerabilities. The attacks found in this work are applicable even if the fault injection is made at the middle rounds of a block cipher, which are out of reach for most of the other existing fault analysis strategies. Quite evidently, they also work in a known-plaintext scenario. Moreover, the middle-round attacks are entirely blind in the sense that no access to the ciphertexts (correct/faulty) or plaintexts is required. The adversary is only assumed to have the power of repeating an unknown plaintext several times. Practical validation on a hardware implementation of SCA-FA-protected PRESENT, and simulated evaluation on a public software implementation of protected AES, prove the efficacy of the proposed attacks.
2020
EUROCRYPT
On the Memory-Tightness of Hashed ElGamal 📺
We study the memory-tightness of security reductions in public-key cryptography, focusing in particular on Hashed ElGamal. We prove that any {\em straightline} (i.e., without rewinding) black-box reduction needs memory which grows linearly with the number of queries of the adversary it has access to, as long as this reduction treats the underlying group generically. This makes progress towards proving a conjecture by Auerbach {\em et al.} (CRYPTO 2017), and is also the first lower bound on memory-tightness for a concrete cryptographic scheme (as opposed to generalized reductions across security notions). Our proof relies on compression arguments in the generic group model.
2020
EUROCRYPT
On a Generalization of Substitution-Permutation Networks: The HADES Design Strategy 📺
Keyed and unkeyed cryptographic permutations often iterate simple round functions. Substitution-permutation networks (SPNs) are an approach that has been popular since the mid-1990s. One of the new directions in the design of these round functions is to reduce the substitution (S-Box) layer from a full one to a partial one, uniformly distributed over all the rounds. LowMC and Zorro are examples of this approach. A relevant freedom in the design space is to allow for a highly non-uniform distribution of S-Boxes. However, choosing rounds that are so different from each other is very rarely done, as it makes security analysis and implementation much harder. We develop the design strategy HADES and an analysis framework for it, which despite this increased complexity allows for security arguments against many classes of attacks, similar to earlier simpler SPNs. The framework builds upon the wide trail design strategy, and it additionally allows for security arguments against algebraic attacks, which are much more of a concern when algebraically simple S-Boxes are used. Subsequently, this is put into practice by concrete instances and benchmarks for a use case that generally benefits from a smaller number of S-Boxes and showcases the diversity of design options we support: a candidate cipher natively working with objects in GF(p), for securing data transfers with distributed databases using secure multiparty computation (MPC). Compared to the currently fastest design MiMC, we observe significant improvements in online bandwidth requirements and throughput with a simultaneous reduction of preprocessing effort, while having a comparable online latency.
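A toy rendering of the round structure (our own sketch with illustrative parameters, not an instance from the paper): a few full rounds sandwich many partial rounds that apply a single algebraically simple S-box, with a Cauchy-matrix mixing layer in every round.

```python
import random

p = 2**31 - 1                        # toy Mersenne prime field
T, RF, RP = 4, 8, 20                 # state width, full rounds, partial rounds

def sbox(x):
    # x^5 is a permutation of GF(p) here because gcd(5, p - 1) = 1
    return pow(x, 5, p)

def mix(state):
    # Cauchy matrix M[i][j] = 1/(i + j + T): a standard way to get MDS mixing
    return [sum(pow(i + j + T, p - 2, p) * state[j] for j in range(T)) % p
            for i in range(T)]

def hades_like_permutation(state, round_constants):
    # RF/2 full rounds, then RP partial rounds (one S-box), then RF/2 full
    schedule = [True] * (RF // 2) + [False] * RP + [True] * (RF // 2)
    for r, full in enumerate(schedule):
        state = [(x + c) % p for x, c in zip(state, round_constants[r])]
        state = [sbox(x) for x in state] if full else \
                [sbox(state[0])] + state[1:]
        state = mix(state)
    return state

rcs = [[random.randrange(p) for _ in range(T)] for _ in range(RF + RP)]
print(hades_like_permutation([1, 2, 3, 4], rcs))
```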
2020
EUROCRYPT
Security of Hedged Fiat-Shamir Signatures under Fault Attacks 📺
Deterministic generation of per-signature randomness has been a widely accepted solution to mitigate the catastrophic risk of randomness failure in Fiat-Shamir type signature schemes. However, recent studies have practically demonstrated that such de-randomized schemes, including EdDSA, are vulnerable to differential fault attacks, which enable adversaries to recover the entire secret signing key, by artificially provoking randomness reuse or corrupting computation in other ways. In order to balance concerns of both randomness failures and the threat of fault injection, some signature designs are advocating a ``hedged'' derivation of the per-signature randomness, by hashing the secret key, message, and a nonce. Despite the growing popularity of the hedged paradigm in practical signature schemes, to the best of our knowledge, there has been no attempt to formally analyze the fault resilience of hedged signatures. We perform a formal security analysis of the fault resilience of signature schemes constructed via the Fiat-Shamir transform. We propose a model to characterize bit-tampering fault attacks, and investigate their impact across different steps of the signing operation. We prove that, for some types of faults, attacks are mitigated by the hedged paradigm, while attacks remain possible for others. As concrete case studies, we then apply our results to XEdDSA, a hedged version of EdDSA used in the Signal messaging protocol, and to Picnic2, a hedged Fiat-Shamir signature scheme in Round 2 of the NIST Post-Quantum standardization process.
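The hedged derivation itself is a one-liner. Below is a sketch of the nonce-derivation step only (hash choice and input order are illustrative, not the XEdDSA specification; the rest of the Fiat-Shamir signature is omitted):

```python
import hashlib
import secrets

def hedged_randomness(sk: bytes, msg: bytes) -> bytes:
    # per-signature randomness r = H(sk, nonce, msg): if the nonce source
    # fails or repeats, signing degrades to the deterministic mode instead
    # of reusing the same randomness for different messages (key leakage)
    nonce = secrets.token_bytes(32)
    return hashlib.sha512(sk + nonce + msg).digest()

r1 = hedged_randomness(b"secret-key", b"message")
r2 = hedged_randomness(b"secret-key", b"message")
assert r1 != r2     # fresh nonce => distinct randomness per signature
```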
2020
EUROCRYPT
Compact NIZKs from Standard Assumptions on Bilinear Maps 📺
A non-interactive zero-knowledge (NIZK) protocol enables a prover to convince a verifier of the truth of a statement without leaking any other information by sending a single message. The main focus of this work is on exploring short pairing-based NIZKs for all NP languages based on standard assumptions. In this regime, the seminal work of Groth, Ostrovsky, and Sahai (J. ACM '12) (GOS-NIZK) is still considered to be the state-of-the-art. Although fairly efficient, one drawback of GOS-NIZK is that the proof size is multiplicative in the circuit size computing the NP relation. That is, the proof size grows by $O(|C|k)$, where $C$ is the circuit for the NP relation and $k$ is the security parameter. By now, there have been numerous follow-up works focusing on shortening the proof size of pairing-based NIZKs; however, thus far, all works come at the cost of relying either on a non-standard knowledge-type assumption or a non-static $q$-type assumption. Specifically, improving the proof size of the original GOS-NIZK under the same standard assumption has remained an open problem. Our main result is a construction of a pairing-based NIZK for all of NP whose proof size is additive in $|C|$, that is, the proof size only grows by $|C| + poly(k)$, based on the decisional linear (DLIN) assumption. Since the DLIN assumption is the same assumption underlying GOS-NIZK, our NIZK is a strict improvement on their proof size. As by-products of our main result, we also obtain the following two results: (1) We construct a perfectly zero-knowledge NIZK (NIPZK) for NP relations computable in NC1 with proof size $|w| \cdot poly(k)$ where $|w|$ is the witness length based on the DLIN assumption. This is the first pairing-based NIPZK for a non-trivial class of NP languages whose proof size is independent of $|C|$ based on a standard assumption. (2) We construct a universally composable (UC) NIZK for NP relations computable in NC1 in the erasure-free adaptive setting whose proof size is $|w| \cdot poly(k)$ from the DLIN assumption. This is an improvement over the recent result of Katsumata, Nishimaki, Yamada, and Yamakawa (CRYPTO'19), which gave a similar scheme based on a non-static $q$-type assumption. The main building block for all of our NIZKs is a constrained signature scheme with decomposable online-offline efficiency. This is a property which we newly introduce in this paper and construct from the DLIN assumption. We believe this construction is of independent interest.
2020
EUROCRYPT
Key Recovery from Gram-Schmidt Norm Leakage in Hash-and-Sign Signatures over NTRU Lattices 📺
In this paper, we initiate the study of side-channel leakage in hash-and-sign lattice-based signatures, with particular emphasis on the two efficient implementations of the original GPV lattice-trapdoor paradigm for signatures, namely NIST second-round candidate Falcon and its simpler predecessor DLP. Both of these schemes implement the GPV signature scheme over NTRU lattices, achieving great speed-ups over the general lattice case. Our results are mainly threefold. First, we identify a specific source of side-channel leakage in most implementations of those schemes, namely, the one-dimensional Gaussian sampling steps within lattice Gaussian sampling. It turns out that the implementations of these steps often leak the Gram-Schmidt norms of the secret lattice basis. Second, we elucidate the link between this leakage and the secret key, by showing that the entire secret key can be efficiently reconstructed solely from those Gram-Schmidt norms. The result makes heavy use of the algebraic structure of the corresponding schemes, which work over a power-of-two cyclotomic field. Third, we concretely demonstrate the side-channel attack against DLP (but not Falcon due to the different structures of the two schemes). The challenge is that timing information only provides an approximation of the Gram-Schmidt norms, so our algebraic recovery technique needs to be combined with pruned tree search in order to apply it to approximate values. Experimentally, we show that around $2^{35}$ DLP traces are enough to reconstruct the entire key with good probability.
2020
EUROCRYPT
Adaptively Secure ABE for DFA from k-Lin and More 📺
In this work, we present: - the first adaptively secure ABE for DFA from the k-Lin assumption in prime-order bilinear groups; this resolves one of the open problems posed by Waters [CRYPTO'12]; - the first ABE for NFA from the k-Lin assumption, provided the number of accepting paths is smaller than the order of the underlying group; the scheme achieves selective security; - the first compact adaptively secure ABE (supporting unbounded multi-use of attributes) for branching programs from the k-Lin assumption, which generalizes and simplifies the recent result of Kowalczyk and Wee for boolean formulas (NC1) [EUROCRYPT'19]. Our adaptively secure ABE for DFA relies on a new combinatorial mechanism avoiding the exponential security loss in the number of states when naively combining two recent techniques from CRYPTO'19 and EUROCRYPT'19. This requires us to design a selectively secure ABE for NFA; we give a construction which is sufficient for our purpose and of independent interest. Our ABE for branching programs leverages insights from our ABE for DFA.
2020
EUROCRYPT
Blind Schnorr Signatures and Signed ElGamal Encryption in the Algebraic Group Model 📺
The Schnorr blind signing protocol allows blind issuing of Schnorr signatures, one of the most widely used signatures. Despite its practical relevance, its security analysis is unsatisfactory. The only known security proof is informal, holds in the combination of the generic group model (GGM) and the random oracle model (ROM), and assumes that the ``ROS problem'' is hard. The situation is similar for (Schnorr-)signed ElGamal encryption, a simple CCA2-secure variant of ElGamal. We analyze the security of these schemes in the algebraic group model (AGM), an idealized model closer to the standard model than the GGM. We first prove tight security of Schnorr signatures from the discrete logarithm assumption (DL) in the AGM+ROM. We then give a rigorous proof for blind Schnorr signatures in the AGM+ROM assuming hardness of the one-more discrete logarithm problem and ROS. As ROS can be solved in sub-exponential time using Wagner's algorithm, we propose a simple modification of the signing protocol, which leaves the signatures unchanged. It is therefore compatible with systems that already use Schnorr signatures, such as blockchain protocols. We show that the security of our modified scheme relies on the hardness of a problem related to ROS that appears much harder. Finally, we give tight reductions, again in the AGM+ROM, of the CCA2 security of signed ElGamal encryption to DDH and signed hashed ElGamal key encapsulation to DL.
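For reference, the (non-blind) Schnorr scheme whose tight AGM+ROM proof is given above looks as follows in a toy group (parameters far too small for real security; illustration only):

```python
import hashlib
import secrets

p, q = 2039, 1019          # toy Schnorr group: p = 2q + 1, both prime
g = pow(2, 2, p)           # a square, hence a generator of the order-q subgroup

def H(R: int, msg: bytes) -> int:
    # Fiat-Shamir challenge c = H(R, m)
    data = R.to_bytes(2, "big") + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q)
    return x, pow(g, x, p)                 # secret x, public X = g^x

def sign(x: int, msg: bytes):
    r = secrets.randbelow(q)
    R = pow(g, r, p)
    return R, (r + H(R, msg) * x) % q      # s = r + c*x mod q

def verify(X: int, msg: bytes, sig) -> bool:
    R, s = sig
    return pow(g, s, p) == (R * pow(X, H(R, msg), p)) % p   # g^s = R * X^c

x, X = keygen()
assert verify(X, b"hello", sign(x, b"hello"))
```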
2020
EUROCRYPT
Lower Bounds for Leakage-Resilient Secret Sharing 📺
Threshold secret sharing allows a dealer to split a secret into $n$ shares such that any authorized subset of cardinality at least $t$ of those shares efficiently reveals the secret, while at the same time any unauthorized subset of cardinality less than $t$ contains no information about the secret. Leakage-resilience additionally requires that the secret remains hidden even if one is given a bounded amount of additional leakage from every share. In this work, we study leakage-resilient secret sharing schemes and prove a lower bound on the share size and the required amount of randomness of any information-theoretically secure scheme. We prove that for any information-theoretically secure leakage-resilient secret sharing scheme either the amount of randomness across all shares or the share size has to be linear in $n$. More concretely, for a secret sharing scheme with $p$-bit long shares, $\ell$-bit leakage per share, where $\widehat{t}$ shares uniquely define the remaining $n - \widehat{t}$ shares, it has to hold that $p \ge \frac{\ell (n - t)}{\widehat{t}}$. We use this lower bound to gain further insights into a question that was recently posed by Benhamouda et al. (CRYPTO'18), who ask to what extent existing regular secret sharing schemes already provide protection against leakage. The authors proved that Shamir's secret sharing is $1$-bit leakage-resilient for reconstruction thresholds $t \geq 0.85n$ and conjectured that it is also $1$-bit leakage-resilient for any other threshold that is a constant fraction of the total number of shares. We do not disprove their conjecture, but show that it is the best one could possibly hope for. Concretely, we show that for large enough $n$ and any constant $0< c < 1$ it holds that Shamir's secret sharing scheme is \emph{not} leakage-resilient for $t \leq \frac{cn}{\log n}$. In contrast to the setting with information-theoretic security, we show that our lower bound does not hold in the computational setting. That is, we show how to construct a leakage-resilient secret sharing scheme in the random oracle model that is secure against computationally bounded adversaries and violates the lower bound stated above.
2020
EUROCRYPT
Secure Multi-party Quantum Computation with a Dishonest Majority 📺
The cryptographic task of secure multi-party (classical) computation has received a lot of attention in the last decades. Even in the extreme case where a computation is performed between k mutually distrustful players, and security is required even for the single honest player if all other players are colluding adversaries, secure protocols are known. For quantum computation, on the other hand, protocols allowing arbitrary dishonest majority have only been proven for k=2. In this work, we generalize the approach taken by Dupuis, Nielsen and Salvail (CRYPTO 2012) in the two-party setting to devise a secure, efficient protocol for multi-party quantum computation for any number of players k, and prove security against up to k-1 colluding adversaries. The quantum round complexity of the protocol for computing a quantum circuit of {CNOT, T} depth d is O(k (d + log n)), where n is the security parameter. To achieve efficiency, we develop a novel public verification protocol for the Clifford authentication code, and a testing protocol for magic-state inputs, both using classical multi-party computation.
2020
EUROCRYPT
Sigma protocols for MQ, PKP and SIS, and fishy signature schemes 📺
This work presents sigma protocols to prove knowledge of: - a solution to a system of quadratic polynomials, - a solution to an instance of the Permuted Kernel Problem and - a witness for a variety of lattice statements (including SIS). Our sigma protocols have soundness error 1/q', where q' is any number bounded by the size of the underlying finite field. This is much better than existing proofs, which have soundness error 2/3 or (q'+1)/2q'. The prover and verifier time of our proofs is O(q'). We achieve this by first constructing so-called sigma protocols with helper, which are sigma protocols where the prover and the verifier are assisted by a trusted third party, and then eliminating the helper from the proof with a "cut-and-choose" protocol. We apply the Fiat-Shamir transform to obtain signature schemes with security proofs in the QROM. We show that the resulting signature schemes, which we call the "MUltivariate quaDratic FIat-SHamir" scheme (MUDFISH) and the "ShUffled Solution to Homogeneous linear SYstem FIat-SHamir" scheme (SUSHSYFISH), are more efficient than existing signatures based on the MQ problem and the Permuted Kernel Problem. Our proof system can be used to improve the efficiency of applications relying on (generalizations of) Stern's protocol. We show that the proof size of our SIS proof is smaller than that of Stern's protocol by an order of magnitude and that our proof is more efficient than existing post-quantum secure SIS proofs.
2020
PKC
Fast, Compact, and Expressive Attribute-Based Encryption 📺
Attribute-based encryption (ABE) is an advanced cryptographic tool useful for building various types of access control systems. Toward the goal of making ABE more practical, we propose key-policy (KP) and ciphertext-policy (CP) ABE schemes that are the first to simultaneously support unbounded sizes of attribute sets and policies with negation and multi-use of attributes, allow fast decryption, and achieve adaptive security under a standard assumption. Our schemes are more expressive than previous schemes and efficient enough. To achieve adaptive security along with the other properties, we refine the technique introduced by Kowalczyk and Wee (Eurocrypt'19) so that we can apply the technique to more expressive ABE schemes. Furthermore, we also present a new proof technique that allows us to remove redundant elements used in their ABE schemes. We implement our schemes at the 128-bit security level and present benchmarks for an ordinary personal computer and smartphones. They show that all algorithms run within one second on the personal computer when handling any policy or attribute set with one hundred attributes.
2020
PKC
Adaptive Simulation Security for Inner Product Functional Encryption 📺
Inner product functional encryption ($\mathsf{IPFE}$) [1] is a popular primitive which enables inner product computations on encrypted data. In $\mathsf{IPFE}$, the ciphertext is associated with a vector $\boldsymbol{x}$, the secret key is associated with a vector $\boldsymbol{y}$ and decryption reveals the inner product $\langle \boldsymbol{x},\boldsymbol{y} \rangle$. Previously, it was known how to achieve adaptive indistinguishability ($\mathsf{IND}$) based security for $\mathsf{IPFE}$ from the $\mathsf{DDH}$, $\mathsf{DCR}$ and $\mathsf{LWE}$ assumptions [8]. However, in the stronger simulation ($\mathsf{SIM}$) based security game, it was only known how to support a restricted adversary that makes all its key requests either before or after seeing the challenge ciphertext, but not both. In more detail, Wee [46] showed that the $\mathsf{DDH}$-based scheme of Agrawal et al. (Crypto 2016) achieves semi-adaptive simulation-based security, where the adversary must make all its key requests after seeing the challenge ciphertext. On the other hand, O'Neill showed that all $\mathsf{IND}$-secure $\mathsf{IPFE}$ schemes (which may be based on $\mathsf{DDH}$, $\mathsf{DCR}$ and $\mathsf{LWE}$) satisfy $\mathsf{SIM}$-based security in the restricted model where the adversary makes all its key requests before seeing the challenge ciphertext. In this work, we resolve the question of $\mathsf{SIM}$-based security for $\mathsf{IPFE}$ by showing that variants of the $\mathsf{IPFE}$ constructions by Agrawal et al., based on $\mathsf{DDH}$, Paillier and $\mathsf{LWE}$, satisfy the strongest possible adaptive $\mathsf{SIM}$-based security where the adversary can make an unbounded number of key requests both before and after seeing the (single) challenge ciphertext. This establishes optimal security of the $\mathsf{IPFE}$ schemes, under all hardness assumptions on which it can (presently) be based.
2020
PKC
Verifiable Inner Product Encryption Scheme 📺
In the standard setting of functional encryption (FE), we assume both the Central Authority (CA) and the encryptors to run their respective algorithms faithfully. Badrinarayanan et al. [ASIACRYPT 2016] proposed the concept of verifiable FE, which essentially guarantees that dishonest encryptors and authorities, even when colluding together, are not able to generate ciphertexts and tokens that give “inconsistent” results. They also provide a compiler turning any perfectly correct FE into a verifiable FE, but do not give efficient constructions. In this paper we improve on this situation by considering Inner-Product Encryption (IPE), which is a special case of functional encryption and a primitive that has attracted wide interest from both practitioners and researchers in the last decade. Specifically, we construct the first efficient verifiable IPE (VIPE) scheme according to the inner-product functionality of Katz, Sahai and Waters [EUROCRYPT 2008]. To instantiate the general construction of Badrinarayanan et al. we need to solve several additional challenges. In particular, we construct the first efficient perfectly correct IPE scheme. Our VIPE satisfies unconditional verifiability, whereas its privacy relies on the DLin assumption.
2020
PKC
A New Paradigm for Public-Key Functional Encryption for Degree-2 Polynomials 📺
We give the first public-key functional encryption that supports the generation of functional decryption keys for degree-2 polynomials, with succinct ciphertexts, whose semi-adaptive simulation-based security is proven under standard assumptions. At the heart of our new paradigm lies a so-called partially function-hiding functional encryption scheme for inner products, which admits public-key instances, and that is sufficient to build functional encryption for degree-2 polynomials. Doing so, we improve upon prior works, such as the constructions from Lin (CRYPTO 17) or Ananth and Sahai (EUROCRYPT 17), both of which rely on function-hiding inner product FE, which can only exist in the private-key setting. The simplicity of our construction yields the most efficient FE for quadratic functions from standard assumptions (even those satisfying a weaker security notion). The interest of our methodology is that the FE for quadratic functions that builds upon any partially function-hiding FE for inner products inherits the security properties of the latter. In particular, we build a partially function-hiding FE for inner products that enjoys simulation security, in the semi-adaptive setting, where the challenge sent from the adversary can be chosen adaptively after seeing the public key (but before corrupting functional decryption keys). This is in contrast to prior public-key FE for quadratic functions from Baltico et al. (CRYPTO 17), which only achieved indistinguishability-based, selective security. As a bonus, we show that we can obtain security against Chosen-Ciphertext Attacks straightforwardly. Even though this is the de facto security notion for encryption, this was not achieved by prior functional encryption schemes for quadratic functions, where the generic Fujisaki-Okamoto transformation (CRYPTO 99) does not apply.
2020
PKC
Master-Key KDM-Secure IBE from Pairings 📺
Identity-based encryption (IBE) is a generalization of public-key encryption (PKE) that allows encryptions to be made to user identities. In this work, we seek to obtain IBE schemes that achieve key-dependent-message (KDM) security with respect to messages that depend on the master secret key. Previous KDM-secure schemes only achieved KDM security in simpler settings, in which messages may only depend on user secret keys. An important motivation behind studying master-KDM security is the application of this notion in obtaining generic constructions of KDM-CCA secure PKE, a primitive notoriously difficult to realize. We give the first IBE that achieves master-KDM security from standard assumptions in pairing groups. Our construction is modular and combines techniques from KDM-secure PKE based on hash-proof systems, together with IBE that admits a tight security proof in the multi-challenge setting, which happens to be unexpectedly relevant in the context of KDM security. In fact, to the best of our knowledge, this is the first setting where techniques developed in the context of realizing tightly secure cryptosystems have led to a new feasibility result. As a byproduct, our KDM-secure IBE, and thus the resulting KDM-CCA-secure PKE, both enjoy a tight security reduction, independent of the number of challenge ciphertexts, which was not achieved before.
2020
PKC
Hierarchical Identity-Based Encryption with Tight Multi-challenge Security 📺
We construct the first hierarchical identity-based encryption (HIBE) scheme with tight adaptive security in the multi-challenge setting, where adversaries are allowed to ask for ciphertexts for multiple adaptively chosen identities. Technically, we develop a novel technique that can tightly introduce randomness into user secret keys for hierarchical identities in the multi-challenge setting, which cannot be easily achieved by the existing techniques for tightly multi-challenge secure IBE. In contrast to the previous constructions, the security of our scheme is independent of the number of user secret key queries and that of challenge ciphertext queries. We prove the tight security of our scheme based on the Matrix Decisional Diffie-Hellman assumption, which is an abstraction of standard and simple decisional Diffie-Hellman assumptions, such as the k-Linear and SXDH assumptions. Finally, we also extend our ideas to achieve tight chosen-ciphertext security and anonymity, respectively. These security notions for HIBE have not been tightly achieved in the multi-challenge setting before.
2020
PKC
The Usefulness of Sparsifiable Inputs: How to Avoid Subexponential iO 📺
We consider the problem of removing subexponential reductions to indistinguishability obfuscation (iO) in the context of obfuscating probabilistic programs. Specifically, we show how to apply complexity absorption (Zhandry, Crypto 2016) to the recent notion of probabilistic indistinguishability obfuscation (piO, Canetti et al., TCC 2015). As a result, we obtain a variant of piO which makes it possible to obfuscate a large class of probabilistic programs, from polynomially secure indistinguishability obfuscation and extremely lossy functions. In particular, our piO variant is able to obfuscate circuits with specific input domains regardless of the performed computation. We then revisit several (direct or indirect) applications of piO, and obtain – a fully homomorphic encryption scheme (without circular security assumptions), – a multi-key fully homomorphic encryption scheme with threshold decryption, – an encryption scheme secure under arbitrary key-dependent messages, – a spooky encryption scheme for all circuits, – a function secret sharing scheme with additive reconstruction for all circuits, all from polynomially secure iO, extremely lossy functions, and, depending on the scheme, also other (but polynomial and comparatively mild) assumptions. All of these assumptions are implied by polynomially secure iO and the (non-polynomial, but very well-investigated) exponential DDH assumption. Previously, all of the above applications required one to assume the subexponential security of iO (and more standard assumptions).
2020
PKC
Witness Maps and Applications 📺
We introduce the notion of Witness Maps as a cryptographic notion of a proof system. A Unique Witness Map (UWM) deterministically maps all witnesses for an $\mathbf{NP}$ statement to a single representative witness, resulting in a computationally sound, deterministic-prover, non-interactive witness independent proof system. A relaxation of UWM, called Compact Witness Map (CWM), maps all the witnesses to a small number of witnesses, resulting in a “lossy” deterministic-prover, non-interactive proof system. We also define a Dual Mode Witness Map (DMWM) which adds an “extractable” mode to a CWM. Our main construction is a DMWM for all $\mathbf{NP}$ relations, assuming sub-exponentially secure indistinguishability obfuscation ($i\mathcal{O}$), along with standard cryptographic assumptions. The DMWM construction relies on a CWM and a new primitive called Cumulative All-Lossy-But-One Trapdoor Functions (C-ALBO-TDF), both of which are in turn instantiated based on $i\mathcal{O}$ and other primitives. Our instantiation of a CWM is in fact a UWM; in turn, we show that a UWM implies Witness Encryption. Along the way to constructing UWM and C-ALBO-TDF, we also construct, from standard assumptions, Puncturable Digital Signatures and a new primitive called Cumulative Lossy Trapdoor Functions (C-LTDF). The former improves upon a construction of Bellare et al. (Eurocrypt 2016), who relied on sub-exponentially secure $i\mathcal{O}$ and sub-exponentially secure OWF. As an application of our constructions, we show how to use a DMWM to construct the first leakage and tamper-resilient signatures with a deterministic signer, thereby solving a decade-old open problem posed by Katz and Vaikuntanathan (Asiacrypt 2009), by Boyle, Segev and Wichs (Eurocrypt 2011), as well as by Faonio and Venturi (Asiacrypt 2016). Our construction achieves the optimal leakage rate of $1 - o(1)$.
2020
PKC
Memory-Tight Reductions for Practical Key Encapsulation Mechanisms 📺
The efficiency of a black-box reduction is an important goal of modern cryptography. Traditionally, the time complexity and the success probability were considered as the main aspects of efficiency measurements. In CRYPTO 2017, Auerbach et al. introduced the notion of memory-tightness in cryptographic reductions and showed a memory-tight reduction of the existential unforgeability of the RSA-FDH signature scheme. Unfortunately, their techniques do not extend directly to reductions involving intricate RO-programming. The problem seems to be inherent, as all the other existing results on memory-tightness are lower bounds and impossibility results. In fact, Auerbach et al. conjectured that a memory-tight reduction for the security of Hashed-ElGamal KEM is impossible. We refute this conjecture. Using a simple RO simulation technique, we provide memory-tight reductions of security of the Cramer-Shoup and the ECIES version of Hashed-ElGamal KEM. We prove memory-tight reductions for different variants of the Fujisaki-Okamoto transformation. We analyze the modular transformations introduced by Hofheinz, Hövelmanns and Kiltz (TCC 2017). In addition to the constructions involving implicit rejection, we present a memory-tight reduction for the security of the transformation $$\mathsf{QFO}_m^{\perp}$$. Our techniques can withstand correctness errors, and are applicable to several lattice-based KEM candidates.
2020
PKC
Toward RSA-OAEP Without Random Oracles 📺
We show new partial and full instantiation results under chosen-ciphertext security for the widely implemented and standardized RSA-OAEP encryption scheme of Bellare and Rogaway (EUROCRYPT 1994) and two variants. Prior work on such instantiations either showed negative results or settled for “passive” security notions like IND-CPA. More precisely, recall that RSA-OAEP adds redundancy and randomness to a message before composing two rounds of an underlying Feistel transform, whose round functions are modeled as random oracles (ROs), with RSA. Our main results are: Either of the two oracles (while still modeling the other as a RO) can be instantiated in RSA-OAEP under IND-CCA2 using mild standard-model assumptions on the round functions and generalizations of algebraic properties of RSA shown by Barthe, Pointcheval, and Zanella-Béguelin (CCS 2012). The algebraic properties are only shown to hold at practical parameters for a small encryption exponent ($$e=3$$), but we argue they have value for larger e as well. Both oracles can be instantiated simultaneously for two variants of RSA-OAEP, called “t-clear” and “s-clear” RSA-OAEP. For this we use extractability-style assumptions in the sense of Canetti and Dakdouk (TCC 2010) on the round functions, as well as novel yet plausible “XOR-type” assumptions on RSA. While admittedly strong, such assumptions may nevertheless be necessary at this point to make positive progress. In particular, our full instantiations evade impossibility results of Shoup (J. Cryptology 2002), Kiltz and Pietrzak (EUROCRYPT 2009), and Bitansky et al. (STOC 2014). Moreover, our results for s-clear RSA-OAEP yield the most efficient RSA-based encryption scheme proven IND-CCA2 in the standard model (using bold assumptions on cryptographic hashing) to date.
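As background, the two-round OAEP encoding referenced above can be sketched in a few lines; this is a simplified illustration (the hash choice, lengths, and the counter-mode mask generator are assumptions of this sketch, not the standardized parameters):

    import hashlib

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def mask(seed: bytes, n: int) -> bytes:
        # Simplified MGF1-style mask: SHA-256 in counter mode.
        out, c = b"", 0
        while len(out) < n:
            out += hashlib.sha256(seed + c.to_bytes(4, "big")).digest()
            c += 1
        return out[:n]

    def oaep_encode(msg: bytes, rand: bytes, k: int) -> bytes:
        data = msg + b"\x00" * (k - len(rand) - len(msg))  # redundancy: zero padding
        X = xor(data, mask(rand, len(data)))               # first Feistel round
        Y = xor(rand, mask(X, len(rand)))                  # second Feistel round
        return X + Y                                       # RSA is then applied to X || Y

Decoding inverts the two rounds and checks that the zero padding is intact; it is this redundancy check, together with the round functions, that the instantiation results above target.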
2020
PKC
Public-Key Puncturable Encryption: Modular and Compact Constructions 📺
We revisit the method of designing public-key puncturable encryption schemes and present a generic conversion by leveraging the techniques of distributed key-distribution and revocable encryption. In particular, we first introduce a refined version of identity-based revocable encryption, named key-homomorphic identity-based revocable key encapsulation mechanism with extended correctness. Then, we propose a generic construction of puncturable key encapsulation mechanism from the former by merging the idea of distributed key-distribution. Compared to the state of the art, our generic construction supports an unbounded number of punctures and multiple tags per message, thus achieving more fine-grained revocation of decryption capability. Further, it does not rely on random oracles, does not suffer from non-negligible correctness error, and results in a variety of efficient schemes with distinct features. More precisely, we obtain the first scheme with very compact ciphertexts in the standard model, and the first scheme with support for both an unbounded number of tags per ciphertext and unbounded punctures, as well as a constant-time puncture operation. Moreover, we get a comparable scheme proven secure under the standard DBDH assumption, which enjoys both faster encryption and decryption than previous works based on the same assumption, especially when the number of tags associated with the ciphertext is large.
2020
PKC
Flexible Authenticated and Confidential Channel Establishment (fACCE): Analyzing the Noise Protocol Framework 📺
The Noise protocol framework is a suite of channel establishment protocols, of which each individual protocol ensures various security properties of the transmitted messages, but keeps specification, implementation, and configuration relatively simple. Due to the employed primitives, implementations of the Noise protocols are themselves very performant. Thus, despite its relative youth, Noise is already used by large-scale deployed applications such as WhatsApp and Slack. Though the Noise specification describes and claims the security properties of the protocol patterns very precisely, there has been no computational proof yet. We close this gap. Noise uses only a limited number of cryptographic primitives, which makes it an ideal candidate for reduction-based security proofs. Due to its patterns’ characteristics as channel establishment protocols, and the usage of established keys within the handshake, the authenticated and confidential channel establishment (ACCE) model (Jager et al., CRYPTO 2012) seems to fit perfectly for an analysis of Noise. However, the ACCE model strictly divides protocols into two non-overlapping phases: the pre-accept phase (i.e., the channel establishment) and the post-accept phase (i.e., the channel). In contrast, Noise allows the transmission of encrypted messages as soon as any key is established (for instance, before authentication between parties has taken place), and then incrementally increases the channel’s security guarantees. By proposing a generalization of the original ACCE model, we capture security properties of such staged channel establishment protocols flexibly – comparably to the multi-stage key exchange model (Fischlin and Günther, CCS 2014). We give security proofs for eight of the 15 basic Noise patterns in the full version (EPRINT 2019/436) and exemplify them by the proof of the XK pattern in this article.
2020
PKC
Limits on the Efficiency of (Ring) LWE Based Non-interactive Key Exchange 📺
$$\mathsf{LWE}$$-based key-exchange protocols lie at the heart of post-quantum public-key cryptography. However, all existing protocols either lack the non-interactive nature of Diffie-Hellman key exchange or a polynomial $$\mathsf{LWE}$$-modulus, resulting in unwanted efficiency overhead. We study the possibility of designing non-interactive $$\mathsf{LWE}$$-based protocols with polynomial $$\mathsf{LWE}$$-modulus. To this end, we identify and formalize simple non-interactive and polynomial $$\mathsf{LWE}$$-modulus variants of existing protocols, where Alice and Bob simultaneously exchange one or more (ring) $$\mathsf{LWE}$$ samples with polynomial $$\mathsf{LWE}$$-modulus and then run individual key reconciliation functions to obtain the shared key. We point out central barriers and show that such non-interactive key-exchange protocols are impossible if: (1) the reconciliation functions first compute the inner product of the received $$\mathsf{LWE}$$ sample with their private $$\mathsf{LWE}$$ secret. This impossibility is information theoretic. (2) One of the reconciliation functions does not depend on the error of the transmitted $$\mathsf{LWE}$$ sample. This impossibility assumes hardness of $$\mathsf{LWE}$$. We give further evidence that progress in either direction, of giving an $$\mathsf{LWE}$$-based $$\mathrm{NIKE}$$ protocol or proving impossibility of one, will lead to progress on some other well-studied questions in cryptography. Overall, our results show possibilities and challenges in designing simple (ring) $$\mathsf{LWE}$$-based non-interactive key exchange protocols.
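To make the protocol shape concrete, the following toy sketch (insecure, with illustrative parameters) shows the two parties obtaining approximately equal values that a reconciliation function must then map to a common key; the impossibility results above are precisely about such reconciliation functions:

    import random

    q, n = 257, 8  # toy parameters, far too small for security
    A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
    small = lambda: random.choice([-1, 0, 1])  # small secrets and errors

    sA, eA = [small() for _ in range(n)], [small() for _ in range(n)]
    sB, eB = [small() for _ in range(n)], [small() for _ in range(n)]

    # Exchanged LWE samples: bA = sA^T A + eA and bB = A sB + eB (mod q).
    bA = [(sum(sA[i] * A[i][j] for i in range(n)) + eA[j]) % q for j in range(n)]
    bB = [(sum(A[i][j] * sB[j] for j in range(n)) + eB[i]) % q for i in range(n)]

    # Both sides approximate sA^T A sB, up to the small terms sA.eB and eA.sB.
    kA = sum(sA[i] * bB[i] for i in range(n)) % q
    kB = sum(bA[j] * sB[j] for j in range(n)) % q
    print(kA, kB)  # close mod q; reconciliation must round both to the same key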
2020
PKC
PAKEs: New Framework, New Techniques and More Efficient Lattice-Based Constructions in the Standard Model 📺
Password-based authenticated key exchange (PAKE) allows two parties with a shared password to agree on a session key. In the last decade, the design of PAKE protocols from lattice assumptions has attracted lots of attention. However, existing solutions in the standard model do not have appealing efficiency. In this work, we first introduce a new PAKE framework. We then provide two realizations in the standard model, under the Learning With Errors (LWE) and Ring-LWE assumptions, respectively. Our protocols are much more efficient than previous proposals, thanks to three novel technical ingredients that may be of independent interest. The first ingredient consists of two approximate smooth projective hash (ASPH) functions from LWE, as well as two ASPHs from Ring-LWE. The latter are the first ring-based constructions in the literature, one of which only has a quasi-linear runtime while its function value contains $$\varTheta(n)$$ field elements (where n is the degree of the polynomial defining the ring). The second ingredient is a new key reconciliation scheme that is approximately rate-optimal and that leads to a very efficient key derivation for PAKE protocols. The third one is a new authentication code that allows verifying a MAC with a noisy key.
2020
PKC
Constraining and Watermarking PRFs from Milder Assumptions 📺
Constrained pseudorandom functions (C-PRFs) let the possessor of a secret key delegate the ability to evaluate the function on certain authorized inputs, while keeping the remaining function values pseudorandom. A constraint-hiding constrained PRF (CHC-PRF) additionally conceals the predicate that determines which inputs are authorized. These primitives have a wealth of applications, including watermarking schemes, symmetric deniable encryption, and updatable garbled circuits. Recent works have constructed (CH)C-PRFs from rather aggressive parameterizations of Learning With Errors (LWE) with subexponential modulus-noise ratios, even for relatively simple “puncturing” or $$\text{NC}^{1}$$ circuit constraints. This corresponds to strong lattice assumptions and inefficient constructions, and stands in contrast to LWE-based unconstrained PRFs and fully homomorphic encryption schemes, which can be based on quasi-polynomial or even (nearly) polynomial modulus-noise ratios. In this work we considerably improve the LWE assumptions needed for building (constraint-hiding) constrained PRFs and watermarking schemes. In particular, for CHC-PRFs and related watermarking schemes we improve the modulus-noise ratio to $$\lambda^{O((d+\log \lambda)\log \lambda)}$$ for depth-d circuit constraints, which is merely quasi-polynomial for $$\text{NC}^{1}$$ circuits and closely related watermarking schemes. For (constraint-revealing) C-PRFs for $$\text{NC}^{1}$$ we do even better, obtaining a nearly polynomial $$\lambda^{\omega(1)}$$ ratio. These improvements are partly enabled by slightly modifying the definition of C-PRFs, in a way that is still compatible with many of their applications. Finally, as a contribution of independent interest we build CHC-PRFs for special constraint classes from generic, weaker assumptions: we obtain bit-fixing constraints based on the minimal assumption of one-way functions, and hyperplane-membership constraints based on key-homomorphic PRFs.
2020
PKC
Bringing Order to Chaos: The Case of Collision-Resistant Chameleon-Hashes 📺
Chameleon-hash functions, introduced by Krawczyk and Rabin at NDSS 2000, are trapdoor collision-resistant hash functions parametrized by a public key. If the corresponding secret key is known, arbitrary collisions for the hash function can be efficiently found. Chameleon-hash functions have prominent applications in the design of cryptographic primitives, such as lifting non-adaptively secure signatures to adaptively secure ones. Recently, this primitive also received a lot of attention as a building block in more complex cryptographic applications ranging from editable blockchains to advanced signature and encryption schemes. We observe that the latter applications use various different notions of collision-resistance, and it is not always clear whether the respective notion really covers what seems intuitively required by the application. Therefore, we revisit existing collision-resistance notions in the literature, study their relations, and—using the example of the recent redactable blockchain proposals—discuss which practical impact different notions of collision-resistance might have. Moreover, we provide a stronger, and arguably more desirable, notion of collision-resistance than what is known from the literature. Finally, we present a surprisingly simple and efficient black-box construction of chameleon-hash functions achieving this strong notion.
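For intuition, the classical discrete-log-based chameleon hash of Krawczyk and Rabin admits a very short sketch; the parameters below are toy values chosen for illustration only:

    # H(m, r) = g^m * h^r mod p, with trapdoor x such that h = g^x.
    p, g, x = 467, 2, 127        # toy prime, base, and trapdoor (not secure)
    h = pow(g, x, p)

    def chash(m: int, r: int) -> int:
        return (pow(g, m, p) * pow(h, r, p)) % p

    def collide(m: int, r: int, m2: int) -> int:
        # Solve m + x*r = m2 + x*r2 (mod p-1); requires gcd(x, p-1) = 1.
        return (r + (m - m2) * pow(x, -1, p - 1)) % (p - 1)

    m, r, m2 = 42, 99, 7
    r2 = collide(m, r, m2)
    assert chash(m, r) == chash(m2, r2)  # trapdoor collision

The collision-resistance notions studied in the paper differ, roughly, in which of these collisions an adversary without the trapdoor is allowed to see or must produce.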
2020
PKC
Concretely-Efficient Zero-Knowledge Arguments for Arithmetic Circuits and Their Application to Lattice-Based Cryptography 📺
In this work we present a new interactive zero-knowledge argument of knowledge for general arithmetic circuits. Our protocol is based on the “MPC-in-the-head” paradigm of Ishai et al. (STOC 2007) and follows the recent “MPC-in-the-head with preprocessing” approach proposed by Katz, Kolesnikov and Wang (ACM CCS 2018). However, in contrast to Katz et al., who used the “cut-and-choose” approach for pre-processing, we show how to incorporate the well-known “sacrificing” paradigm into “MPC-in-the-head”, which reduces the proof size when working over arithmetic circuits. Our argument system uses only lightweight symmetric-key primitives and utilizes a simplified version of the so-called SPDZ protocol. Based on specific properties of our protocol we then show how it can be used to construct an efficient zero-knowledge argument of knowledge for instances of the Short Integer Solution (SIS) problem. We present different protocols that are tailored to specific uses of SIS, while utilizing the advantages of our scheme. In particular, we present a variant of our argument system that allows the parties to sample the circuit “on the fly”, which may be of independent interest. We furthermore implemented our zero-knowledge argument for SIS and show that using our protocols it is possible to run a complete interactive proof, even for general SIS instances which result in a circuit with $${>}10^6$$ gates, in less than 0.5 s.
2020
PKC
Updateable Inner Product Argument with Logarithmic Verifier and Applications 📺
We propose an improvement for the inner product argument of Bootle et al. (EUROCRYPT’16). The new argument replaces the unstructured common reference string (the commitment key) by a structured one. We give two instantiations of this argument, for two different distributions of the CRS. In the designated-verifier setting, this structure can be used to reduce verification from linear to logarithmic in the circuit size. The argument can be compiled to the publicly verifiable setting in asymmetric bilinear groups. The new common reference string is easily updatable. The argument can be directly used to improve verification of Bulletproofs range proofs (IEEE SP’18). On the other hand, to use the improved argument to prove circuit satisfiability with logarithmic verification, we adapt recent techniques from Sonic (ACM CCS’19) to work with the new common reference string. The resulting argument is secure under standard assumptions (in the Random Oracle Model), in contrast with Sonic and recent works that improve its efficiency (Plonk, Marlin, AuroraLight), which, apart from the Random Oracle Model, need either the Algebraic Group Model or knowledge-type assumptions.
2020
PKC
On Black-Box Extensions of Non-interactive Zero-Knowledge Arguments, and Signatures Directly from Simulation Soundness 📺
Highly efficient non-interactive zero-knowledge arguments (NIZK) are often constructed for limited languages, and it is not known how to extend them to cover wider classes of languages in general. In this work we initiate a study of black-box language extensions for conjunctive and disjunctive relations, that is, building a NIZK system for $$\mathcal{L} \diamond \hat{\mathcal{L}}$$ (with $$\diamond \in \{\wedge, \vee\}$$) based on NIZK systems for languages $$\mathcal{L}$$ and $$\hat{\mathcal{L}}$$. While the conjunctive extension of NIZKs is straightforward by simply executing the given NIZKs in parallel, it is not known how disjunctive extensions could be achieved in a black-box manner. Besides, observe that the simple conjunctive extension does not work in the case of simulation-sound NIZKs (SS-NIZKs), as pointed out by Sahai (FOCS 1999). Our main contribution is an impossibility result that negates the existence of the above extensions and implies other non-trivial separations among NIZKs, SS-NIZKs, and labelled SS-NIZKs. Motivated by the difficulty of such transformations, we additionally present an efficient construction of signature schemes based on unbounded simulation-sound NIZKs (USS-NIZKs) for any language without language extensions.
2020
PKC
On QA-NIZK in the BPK Model 📺
Recently, Bellare et al. defined subversion-resistance (security in the case the CRS creator may be malicious) for NIZK. In particular, a Sub-ZK NIZK is zero-knowledge, even in the case of subverted CRS. We study Sub-ZK QA-NIZKs, where the CRS can depend on the language parameter. First, we observe that subversion zero-knowledge (Sub-ZK) in the CRS model corresponds to no-auxiliary-string non-black-box NIZK in the Bare Public Key model, and hence, the use of non-black-box techniques is needed to obtain Sub-ZK. Second, we give a precise definition of Sub-ZK QA-NIZKs that are (knowledge-)sound if the language parameter but not the CRS is subverted and zero-knowledge even if both are subverted. Third, we prove that the most efficient known QA-NIZK for linear subspaces by Kiltz and Wee is Sub-ZK under a new knowledge assumption that by itself is secure in (a weaker version of) the algebraic group model. Depending on the parameter setting, it is (knowledge-)sound under different non-falsifiable assumptions, some of which do not belong to the family of knowledge assumptions.
2020
PKC
Improved Discrete Gaussian and Subgaussian Analysis for Lattice Cryptography 📺
Discrete Gaussian distributions over lattices are central to lattice-based cryptography, and to the computational and mathematical aspects of lattices more broadly. The literature contains a wealth of useful theorems about the behavior of discrete Gaussians under convolutions and related operations. Yet despite their structural similarities, most of these theorems are formally incomparable, and their proofs tend to be monolithic and written nearly “from scratch,” making them unnecessarily hard to verify, understand, and extend. In this work we present a modular framework for analyzing linear operations on discrete Gaussian distributions. The framework abstracts away the particulars of Gaussians, and usually reduces proofs to the choice of appropriate linear transformations and elementary linear algebra. To showcase the approach, we establish several general properties of discrete Gaussians, and show how to obtain all prior convolution theorems (along with some new ones) as straightforward corollaries. As another application, we describe a self-reduction for Learning With Errors (LWE) that uses a fixed number of samples to generate an unlimited number of additional ones (having somewhat larger error). The distinguishing features of our reduction are its simple analysis in our framework, and its exclusive use of discrete Gaussians without any loss in parameters relative to a prior mixed discrete-and-continuous approach. As a contribution of independent interest, for subgaussian random matrices we prove a singular value concentration bound with explicitly stated constants, and we give tighter heuristics for specific distributions that are commonly used for generating lattice trapdoors. These bounds yield improvements in the concrete bit-security estimates for trapdoor lattice cryptosystems.
2020
PKC
Almost Tight Security in Lattice with Polynomial Moduli - PRF, IBE, All-but-many LTF, and More 📺
Achieving tight security is a fundamental task in cryptography. While one of the most important purposes of this task is to improve the overall efficiency of a construction (by allowing smaller security parameters), many current lattice-based instantiations do not completely achieve the goal. In particular, a super-polynomial modulus seems to be necessary in all prior work for (almost) tight schemes that allow the adversary to conduct queries, such as PRFs, IBE, and signatures. As the super-polynomial modulus would affect the noise-to-modulus ratio and thus increase the parameters, this might cancel out the advantages (in efficiency) brought by the tighter analysis. To determine the full power of tight security/analysis in lattices, it is necessary to determine whether the super-polynomial modulus restriction is inherent. In this work, we remove the super-polynomial modulus restriction for many important primitives – PRFs, IBE, all-but-many lossy trapdoor functions, and signatures. The crux relies on an improvement over the framework of Boyen and Li (Asiacrypt 16), and an almost tight reduction from LWE to LWR, which improves prior work by Alwen et al. (Crypto 13), Bogdanov et al. (TCC 16), and Bai et al. (Asiacrypt 15). By combining these two advances, we are able to derive these almost tight schemes under LWE with a polynomial modulus.
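For context, the LWE-to-LWR reduction mentioned above concerns the deterministic rounding that defines LWR: for moduli $$q > p$$, an LWR sample releases $$(a, \lfloor (p/q) \cdot \langle a, s \rangle \rceil \bmod p)$$ for uniform $$a \in \mathbb{Z}_q^n$$ and secret $$s$$, replacing the explicit LWE error by rounding error; the tightness of this reduction is what allows the modulus to remain polynomial.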
2020
PKC
The Randomized Slicer for CVPP: Sharper, Faster, Smaller, Batchier 📺
Following the recent line of work on solving the closest vector problem with preprocessing (CVPP) using approximate Voronoi cells, we improve upon previous results in the following ways: We derive sharp asymptotic bounds on the success probability of the randomized slicer, by modelling the behaviour of the algorithm as a random walk on the coset of the lattice of the target vector. We thereby solve the open question left by Doulgerakis–Laarhoven–De Weger [PQCrypto 2019] and Laarhoven [MathCrypt 2019]. We obtain better trade-offs for CVPP and its generalisations (strictly, in certain regimes), both with and without nearest neighbour searching, as a direct result of the above sharp bounds on the success probabilities. We show how to reduce the memory requirement of the slicer, and in particular the corresponding nearest neighbour data structures, using ideas similar to those proposed by Becker–Gama–Joux [Cryptology ePrint Archive, 2015]. Using $$2^{0.185d + o(d)}$$ memory, we can solve a single CVPP instance in $$2^{0.264d + o(d)}$$ time. We further improve on the per-instance time complexities in certain memory regimes, when we are given a sufficiently large batch of CVPP problem instances for the same lattice. Using $$2^{0.208d + o(d)}$$ memory, we can heuristically solve CVPP instances in $$2^{0.234d + o(d)}$$ amortized time, for batches of size at least $$2^{0.058d + o(d)}$$ . Our random walk model for analysing arbitrary-step transition probabilities in complex step-wise algorithms may be of independent interest, both for deriving analytic bounds through convexity arguments, and for computing optimal paths numerically with a shortest path algorithm. As a side result we apply the same random walk model to graph-based nearest neighbour searching, where we improve upon results of Laarhoven [SOCG 2018] by deriving sharp bounds on the success probability of the corresponding greedy search procedure.
2020
PKC
Tweaking the Asymmetry of Asymmetric-Key Cryptography on Lattices: KEMs and Signatures of Smaller Sizes 📺
Currently, lattice-based cryptosystems are less efficient than their number-theoretic counterparts (based on RSA, discrete logarithm, etc.) in terms of key and ciphertext (signature) sizes. For adequate security the former typically needs thousands of bytes while in contrast the latter only requires at most hundreds of bytes. This significant difference has become one of the main concerns in replacing currently deployed public-key cryptosystems with lattice-based ones. Observing the inherent asymmetries in existing lattice-based cryptosystems, we propose asymmetric variants of the (module-)LWE and (module-)SIS assumptions, which yield further size-optimized KEM and signature schemes than those from standard counterparts. Following the framework of Lindner and Peikert (CT-RSA 2011) and the Crystals-Kyber proposal (EuroS&P 2018), we propose an IND-CCA secure KEM scheme from the hardness of the asymmetric module-LWE (AMLWE) problem, whose asymmetry is fully exploited to obtain shorter public keys and ciphertexts. Targeting 128-bit quantum security, the public key (resp., ciphertext) of our KEM only has 896 bytes (resp., 992 bytes). Our signature scheme bears most resemblance to and improves upon the Crystals-Dilithium scheme (TCHES 2018). By making full use of the underlying asymmetric module-LWE and module-SIS assumptions and carefully selecting the parameters, we construct an SUF-CMA secure signature scheme with shorter public keys and signatures. For 128-bit quantum security, the public key (resp., signature) of our signature scheme only has 1312 bytes (resp., 2445 bytes). We adapt the best known attacks and their variants to our AMLWE and AMSIS problems and conduct a comprehensive and thorough analysis of several parameter choices (aiming at different security strengths) and their impacts on the sizes, security and error probability of lattice-based cryptosystems. Our analysis demonstrates that the AMLWE and AMSIS problems admit more flexible and size-efficient choices of parameters than the respective standard versions.
2020
PKC
MPSign: A Signature from Small-Secret Middle-Product Learning with Errors 📺
We describe a digital signature scheme $$\mathsf{MPSign}$$, whose security relies on the conjectured hardness of the Polynomial Learning With Errors problem ($$\mathsf{PLWE}$$) for at least one defining polynomial within an exponential-size family (as a function of the security parameter). The proposed signature scheme follows the Fiat-Shamir framework and can be viewed as the Learning With Errors counterpart of the signature scheme described by Lyubashevsky at Asiacrypt 2016, whose security relies on the conjectured hardness of the Polynomial Short Integer Solution ($$\mathsf{PSIS}$$) problem for at least one defining polynomial within an exponential-size family. As opposed to the latter, $$\mathsf{MPSign}$$ enjoys a security proof from $$\mathsf{PLWE}$$ that is tight in the quantum-access random oracle model. The main ingredient is a reduction from $$\mathsf{PLWE}$$ for an arbitrary defining polynomial among exponentially many, to a variant of the Middle-Product Learning with Errors problem ($$\mathsf{MPLWE}$$) that allows for secrets that are small compared to the working modulus. We present concrete parameters for $$\mathsf{MPSign}$$ using such small secrets, and show that they lead to significant savings in signature length over Lyubashevsky’s Asiacrypt 2016 scheme (which uses larger secrets) at typical security levels. As an additional small contribution, and in contrast to $$\mathsf{MPSign}$$ (or $$\mathsf{MPLWE}$$), we present an efficient key-recovery attack against Lyubashevsky’s scheme (or the inhomogeneous $$\mathsf{PSIS}$$ problem), when it is used with sufficiently small secrets, showing the necessity of a lower bound on secret size for the security of that scheme.
2020
PKC
Witness Indistinguishability for Any Single-Round Argument with Applications to Access Control 📺
Consider an access policy for some resource which only allows access to users of the system who own a certain set of attributes. Specifically, we consider the case where such an access structure is defined by some monotone function $$f:\{0,1\}^N \rightarrow \{0,1\}$$, belonging to some class of functions $$F$$ (e.g. conjunctions, space-bounded computation), where N is the number of possible attributes. In this work we show that any succinct single-round delegation scheme for the function class $$F$$ can be converted into a succinct single-round private access control protocol. That is, a verifier can be convinced that an approved user (i.e. one which holds an approved set of attributes) is accessing the system, without learning any additional information about the user or the set of attributes. As a main tool of independent interest, we show that assuming a quasi-polynomially secure two-message oblivious transfer scheme with statistical sender privacy (which can be based on quasi-polynomial hardness of the DDH, QR, DCR or LWE assumptions), we can convert any single-round protocol into a witness indistinguishable one, with similar communication complexity.
2020
PKC
Boosting Verifiable Computation on Encrypted Data 📺
We consider the setting in which an untrusted server stores a collection of data and is asked to compute a function over it. In this scenario, we aim for solutions where the untrusted server does not learn information about the data and is prevented from cheating. This problem is addressed by verifiable and private delegation of computation, proposed by Gennaro, Gentry and Parno (CRYPTO’10), a notion that is close to both the active areas of homomorphic encryption and verifiable computation (VC). However, in spite of the efficiency advances in the respective areas, VC protocols that guarantee privacy of the inputs are still expensive. The only exception is a protocol by Fiore, Gennaro and Pastro (CCS’14) that supports arithmetic circuits of degree at most 2. In this paper we propose new efficient protocols for VC on encrypted data that improve over the state of the art solution of Fiore et al. in multiple aspects. First, we can support computations of degree higher than 2. Second, we achieve public delegatability and public verifiability whereas Fiore et al. need the same secret key to encode inputs and verify outputs. Third, we achieve a new property that guarantees that verifiers can be convinced about the correctness of the outputs without learning information on the inputs. The key tool to obtain our new protocols is a new SNARK that can efficiently handle computations over a quotient polynomial ring, such as the one used by Ring-LWE somewhat homomorphic encryption schemes. This SNARK in turn relies on a new commit-and-prove SNARK for proving evaluations on the same point of several committed polynomials. We propose a construction of this scheme under an extractability assumption over bilinear groups in the random oracle model.
2020
PKC
Lossy CSI-FiSh: Efficient Signature Scheme with Tight Reduction to Decisional CSIDH-512 📺
Recently, Beullens, Kleinjung, and Vercauteren (Asiacrypt’19) provided the first practical isogeny-based digital signature, obtained from the Fiat-Shamir (FS) paradigm. They worked with the CSIDH-512 parameters and passed through a new record class group computation. However, as with all standard FS signatures, the security proof is highly non-tight and the concrete parameters are set under the heuristic that the only way to attack the scheme is by finding collisions for a hash function. In this paper, we propose an FS-style signature scheme, called Lossy CSI-FiSh, constructed using the CSIDH-512 parameters and with a security proof based on the “Lossy Keys” technique introduced by Kiltz, Lyubashevsky and Schaffner (Eurocrypt’18). Lossy CSI-FiSh is provably secure under the same assumption which underlies the security of the key exchange protocol CSIDH (Castryck et al., Asiacrypt’18) and is almost as efficient as CSI-FiSh. For instance, aiming for small signature size, our scheme is expected to take $$\approx 800$$ ms to sign/verify while producing signatures of size $$\approx 280$$ bytes. This is only twice as slow as CSI-FiSh, with similar signature size for the same parameter set. As an additional benefit, our scheme is by construction secure both in the classical and quantum random oracle model.
2020
PKC
Threshold Schemes from Isogeny Assumptions 📺
We initiate the study of threshold schemes based on the Hard Homogeneous Spaces (HHS) framework of Couveignes. Quantum-resistant HHS based on supersingular isogeny graphs have recently become usable thanks to the record class group precomputation performed for the signature scheme CSI-FiSh. Using the HHS equivalent of the technique of Shamir’s secret sharing in the exponents, we adapt isogeny-based schemes to the threshold setting. In particular we present threshold versions of the CSIDH public key encryption and the CSI-FiSh signature schemes. The main highlight is a threshold version of CSI-FiSh which runs almost as fast as the original scheme, for message sizes as low as 1880 B, public key sizes as low as 128 B, and thresholds up to 56; other speed-size-threshold compromises are possible.
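The underlying "secret sharing in the exponents" builds on plain Shamir sharing, sketched below over a toy prime field; the HHS adaptation, which applies the Lagrange coefficients inside the group action, is the paper's contribution and is not shown:

    import random

    P = 2**31 - 1  # illustrative prime field

    def share(secret, t, n):
        # Random degree-t polynomial with constant term = secret; share i is f(i).
        coeffs = [secret] + [random.randrange(P) for _ in range(t)]
        f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
        return [(i, f(i)) for i in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 over any t+1 shares.
        s = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            s = (s + yi * num * pow(den, -1, P)) % P
        return s

    assert reconstruct(share(12345, t=2, n=5)[:3]) == 12345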
2020
PKC
Topology-Hiding Computation for Networks with Unknown Delays 📺
Topology-Hiding Computation (THC) allows a set of parties to securely compute a function over an incomplete network without revealing information on the network topology. Since its introduction in TCC’15 by Moran et al., the research on THC has focused on reducing the communication complexity, allowing larger graph classes, and tolerating stronger corruption types. All of these results consider a fully synchronous model with a known upper bound on the maximal delay of all communication channels. Unfortunately, in any realistic setting this bound has to be extremely large, which makes all fully synchronous protocols inefficient. In the literature on multi-party computation, this is solved by considering the fully asynchronous model. However, THC is unachievable in this model (and even hard to define), leaving even the definition of a meaningful model as an open problem. The contributions of this paper are threefold. First, we introduce a meaningful model of unknown and random communication delays for which THC is both definable and achievable. The probability distributions of the delays can be arbitrary for each channel, but one needs to make the (necessary) assumption that the delays are independent. The existing fully-synchronous THC protocols do not work in this setting and would, in particular, leak information about the topology. Second, in the model with trusted stateless hardware boxes introduced at Eurocrypt’18 by Ball et al., we present a THC protocol that works for any graph class. Third, we explore what is achievable in the standard model without trusted hardware and present a THC protocol for specific graph types (cycles and trees) secure under the DDH assumption. The speed of all protocols scales with the actual (unknown) delay times, in contrast to all previously known THC protocols whose speed is determined by the assumed upper bound on the network delay.
2020
PKC
Sublinear-Round Byzantine Agreement Under Corrupt Majority 📺
Although Byzantine Agreement (BA) has been studied for three decades, perhaps somewhat surprisingly, there still exist significant gaps in our understanding regarding its round complexity. A long-standing open question is the following: can we achieve BA with sublinear round complexity under corrupt majority? Due to the beautiful works by Garay et al. (FOCS’07) and Fitzi and Nielsen (DISC’09), we have partial and affirmative answers to this question, albeit for the narrow regime $$f = n/2 + o(n)$$, where f is the number of corrupt nodes and n is the total number of nodes. So far, no positive result is known about the setting $$f > 0.51n$$ even for static corruption! In this paper, we make progress along this somewhat stagnant front. We show that there exists a corrupt-majority BA protocol that terminates in $$O(\frac{1}{\epsilon} \log \frac{1}{\delta})$$ rounds in the worst case, satisfies consistency with probability at least $$1 - \delta$$, and tolerates a $$(1-\epsilon)$$ fraction of corrupt nodes. Our protocol secures against an adversary that can corrupt nodes adaptively during the protocol execution but cannot perform “after-the-fact” removal of honest messages that have already been sent prior to corruption. Our upper bound is optimal up to a logarithmic factor in light of the elegant $$\varOmega(1/\epsilon)$$ lower bound by Garay et al. (FOCS’07).
2020
PKC
Bandwidth-Efficient Threshold EC-DSA 📺
Threshold signatures allow n parties to share the power of issuing digital signatures so that any coalition of size at least $$t+1$$ can sign, whereas groups of t or fewer players cannot. Over the last few years many schemes addressed the question of realizing efficient threshold variants for the specific case of EC-DSA signatures. In this paper we present new solutions to the problem that aim at reducing the overall bandwidth consumption. Our main contribution is a new variant of the Gennaro and Goldfeder protocol from ACM CCS 2018 that avoids all the required range proofs, while retaining provable security against malicious adversaries in the dishonest-majority setting. Our experiments show that – for all levels of security – our signing protocol reduces the bandwidth consumption of the best previously known secure protocols by factors varying between 4.4 and 9, while key generation is consistently about two times less expensive. Furthermore, compared to these same protocols, our signature generation is faster for 192 bits of security and beyond.
2020
PKC
Blazing Fast OT for Three-Round UC OT Extension 📺
Oblivious Transfer (OT) is an important building block for multi-party computation (MPC). Since OT requires expensive public-key operations, efficiency-conscious MPC protocols use an OT extension (OTE) mechanism [Beaver 96, Ishai et al. 03] to provide the functionality of many independent OT instances with the same sender and receiver, using only symmetric-key operations plus a few instances of some base OT protocol. Consequently there is significant interest in constructing OTE-friendly protocols, namely protocols that, when used as base OT for OTE, result in extended OTs that are both round-efficient and cost-efficient. We present the most efficient OTE-friendly protocol to date. Specifically: Our base protocol incurs only 3 exponentiations per instance. Our base protocol results in a 3-round extended OT protocol. The extended protocol is UC secure in the Observable Random Oracle Model (ROM) under the CDH assumption. For comparison, the state of the art for base OTs that result in 3-round OTE is proven only in the programmable ROM, and requires 4 exponentiations under Interactive DDH or 6 exponentiations under DDH [Masny-Rindal 19]. We also implement our protocol and benchmark it against the Simplest OT protocol [Chou and Orlandi, Latincrypt 2015], which is the most efficient and widely used OT protocol but not known to suffice for OTE. The computation cost is roughly the same in both cases. Interestingly, our base OT is also 3 rounds. However, we slightly modify the extension mechanism (which normally adds a round) so as to preserve the number of rounds in our case.
2020
PKC
Going Beyond Dual Execution: MPC for Functions with Efficient Verification 📺
The dual execution paradigm of Mohassel and Franklin (PKC’06) and Huang, Katz and Evans (IEEE ’12) shows how to achieve the notion of 1-bit leakage security at roughly twice the cost of semi-honest security for the special case of two-party secure computation. To date, there are no multi-party computation (MPC) protocols that offer such a strong trade-off between security and semi-honest performance. Our main result is to address this shortcoming by designing 1-bit leakage protocols for the multi-party setting, albeit for a special class of functions. We say that a function f(x, y) is efficiently verifiable by g if the running time of g is always smaller than that of f and $$g(x,y,z)=1$$ if and only if $$f(x,y)=z$$. In the two-party setting, we first improve dual execution by observing that the “second execution” can be an evaluation of g instead of f, and that by definition, the evaluation of g is asymptotically more efficient. Our main MPC result is to construct a 1-bit leakage protocol for such functions from any passive protocol for f that is secure up to additive errors and any active protocol for g. An important result by Genkin et al. (STOC ’14) shows how the classic protocols by Goldreich et al. (STOC ’87) and Ben-Or et al. (STOC ’88) naturally support this property, which allows us to instantiate our compiler with two-party and multi-party protocols. A key technical result we prove is that the passive protocol for distributed garbling due to Beaver et al. (STOC ’90) is in fact secure up to additive errors against malicious adversaries, thereby yielding another powerful instantiation of our paradigm in the constant-round multi-party setting. As another concrete example of instantiating our approach, we present a novel protocol for computing perfect matching that is secure in the 1-bit leakage model and whose communication complexity is less than the honest-but-curious implementations of textbook algorithms for perfect matching.
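The perfect-matching example makes the notion of efficient verification concrete: finding a perfect matching takes superlinear time, while checking a claimed matching runs in time linear in the input. A minimal sketch of the verifier g (a hypothetical helper, for illustration only):

    def is_perfect_matching(n, edges, matching):
        # g(graph, M): check that M is a perfect matching of the n-vertex graph;
        # linear in |edges| + |M|, much cheaper than computing a matching (f).
        edge_set = set(map(frozenset, edges))
        covered = set()
        for u, v in matching:
            if frozenset((u, v)) not in edge_set or u in covered or v in covered:
                return False
            covered.update((u, v))
        return len(covered) == n

    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle
    assert is_perfect_matching(4, edges, [(0, 1), (2, 3)])
    assert not is_perfect_matching(4, edges, [(0, 2), (1, 3)])  # (0,2) is not an edge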
2020
PKC
Mon$\mathbb {Z}_{2^{k}}$a: Fast Maliciously Secure Two Party Computation on $\mathbb {Z}_{2^{k}}$ 📺
In this paper we present a new 2-party protocol for secure computation over rings of the form $$\mathbb{Z}_{2^k}$$. Like many recent efficient MPC protocols supporting dishonest majority, our protocol consists of a heavier (input-independent) pre-processing phase and a very efficient online stage. Our offline phase is similar to BeDOZa (Bendlin et al., Eurocrypt 2011) but employs Joye-Libert (JL, Eurocrypt 2013) as the underlying homomorphic cryptosystem and, notably, it can be proven secure without resorting to the expensive sacrifice step. JL turns out to be particularly well suited for the ring setting as it naturally supports $$\mathbb{Z}_{2^k}$$ as the underlying message space. Moreover, it enjoys several additional properties (such as valid ciphertext-verifiability and efficiency) that make it a very good fit for MPC in general. As a main technical contribution we show how to take advantage of all these properties (and of more properties that we introduce in this work, such as a ZK proof of correct multiplication) in order to design a two-party protocol that is efficient, fast and easy to implement in practice. Our solution is particularly well suited for relatively large choices of k (e.g. $$k=128$$), but compares favorably with the state-of-the-art solution of SPD$$\mathbb{Z}_{2^k}$$ (Cramer et al., Crypto 2018) already for the practically very relevant case of $$\mathbb{Z}_{2^{64}}$$.
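To fix ideas about the online phase of SPDZ-style protocols over such rings, here is a generic sketch (not the paper's construction) of authenticated additive sharing over $$\mathbb{Z}_{2^{64}}$$, where every shared value carries a MAC under a global key:

    import random

    K = 2**64
    alpha = random.randrange(K)  # global MAC key (itself shared in a real protocol)

    def share(x):
        # Additive shares of x and of its MAC alpha*x over Z_{2^64}.
        x1 = random.randrange(K)
        m1 = random.randrange(K)
        return (x1, m1), ((x - x1) % K, (alpha * x - m1) % K)

    def add(u, v):
        # Linear gates are local: add value shares and MAC shares componentwise.
        return ((u[0] + v[0]) % K, (u[1] + v[1]) % K)

    (a1, a2), (b1, b2) = share(11), share(31)
    c1, c2 = add(a1, b1), add(a2, b2)
    opened = (c1[0] + c2[0]) % K
    assert (c1[1] + c2[1]) % K == (alpha * opened) % K  # MAC check on opening

Multiplications consume pre-processed triples from the offline phase, which is where the Joye-Libert cryptosystem and the ZK proof of correct multiplication come in.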
2020
PKC
Generic Authenticated Key Exchange in the Quantum Random Oracle Model 📺
We propose $$\mathsf{FO}_{\mathsf{AKE}}$$, a generic construction of two-message authenticated key exchange (AKE) from any passively secure public key encryption (PKE) in the quantum random oracle model (QROM). Whereas previous AKE constructions relied on a Diffie-Hellman key exchange or required the underlying PKE scheme to be perfectly correct, our transformation allows arbitrary PKE schemes with non-perfect correctness. Dealing with imperfect schemes is one of the major difficulties in a setting involving active attacks. Our direct construction, when applied to schemes such as the submissions to the recent NIST post-quantum competition, is more natural than previous AKE transformations. Furthermore, we avoid the use of (quantum-secure) digital signature schemes which are considerably less efficient than their PKE counterparts. As a consequence, we can instantiate our AKE transformation with any of the submissions to the recent NIST competition, e.g., ones based on codes and lattices. $$\mathsf{FO}_{\mathsf{AKE}}$$ can be seen as a generalisation of the well-known Fujisaki-Okamoto transformation (for building actively secure PKE from passively secure PKE) to the AKE setting. As a helper result, we also provide a security proof for the Fujisaki-Okamoto transformation in the QROM for PKE with non-perfect correctness which is tighter and tolerates a larger correctness error than previous proofs.
2020
PKC
Threshold Ring Signatures: New Definitions and Post-quantum Security 📺
A t-out-of-N threshold ring signature allows t parties to jointly and anonymously compute a signature on behalf of N public keys, selected in an arbitrary manner among the set of all public keys registered in the system. Existing definitions for t-out-of-N threshold ring signatures guarantee security only when the public keys are honestly generated, and many even restrict the ability of the adversary to actively participate in the computation of the signatures. Such definitions do not capture the open settings envisioned for threshold ring signatures, where parties can independently add themselves to the system, and join other parties for the computation of the signature. Furthermore, known constructions of threshold ring signatures are not provably secure in the post-quantum setting, either because they are based on problems that are not post-quantum secure (e.g. Discrete Log, RSA), or because they rely on transformations such as Fiat-Shamir, that are not always secure in the quantum random oracle model (QROM). In this paper, we provide the first definition of t-out-of-N threshold ring signatures against active adversaries who can participate in the system and arbitrarily deviate from the prescribed procedures. Second, we present a post-quantum secure realization based on any (post-quantum secure) trapdoor commitment, which we prove secure in the QROM. Our construction is black-box and it can be instantiated with any trapdoor commitment, thus allowing the use of a variety of hardness assumptions.
2020
PKC
Tight and Optimal Reductions for Signatures Based on Average Trapdoor Preimage Sampleable Functions and Applications to Code-Based Signatures 📺
The GPV construction [GPV08] presents a generic construction of signature schemes in the Hash and Sign paradigm and is used in some lattice-based signatures. This construction requires a family $$\mathcal{F}$$ of trapdoor preimage sampleable functions (TPSF). In this work we extend this notion to the weaker Average TPSF (ATPSF) and show that the GPV construction also holds for ATPSF in the Random Oracle Model (ROM). We also introduce the problem of finding a Claw with a random function (Claw(RF)) and present a tight security reduction to the Claw(RF) problem. Our reduction is also optimal, meaning that an algorithm that solves the Claw(RF) problem breaks the scheme. We extend these results to the quantum setting and prove this same tight and optimal reduction in the QROM. Finally, we apply these results to code-based signatures, notably the Wave signature scheme, and prove security for it in the ROM and the QROM, improving and extending the original analysis of [DST19a].
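Schematically, a GPV-style hash-and-sign scheme built from such a family works as follows: to sign $$m$$, the signer uses the trapdoor to sample a short preimage $$\sigma \leftarrow \mathsf{SamplePre}(sk, \mathsf{H}(m))$$, and the verifier accepts iff $$f_{pk}(\sigma) = \mathsf{H}(m)$$ and $$\sigma$$ is short; the (A)TPSF family $$\mathcal{F}$$ supplies both $$f_{pk}$$ and the preimage sampler, and, as the name suggests, the ATPSF relaxation only requires the preimage-sampling guarantee to hold on average.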
2020
PKC
Faster Cofactorization with ECM Using Mixed Representations 📺
This paper introduces a novel implementation of the elliptic curve factoring method specifically designed for medium-size integers such as those arising by the billions in the cofactorization step of the Number Field Sieve. In this context, our algorithm requires fewer modular multiplications than any other publicly available implementation. The main ingredients are: the use of batches of primes, fast point tripling, optimal double-base decompositions and Lucas chains, and a good mix of Edwards and Montgomery representations.
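As a small illustration of a double-base decomposition: writing, for example, $$217 = 2^3 \cdot 3^3 + 1$$ allows computing the scalar multiple $$[217]P$$ with three doublings, three triplings, and a single addition, which is where fast point tripling and a good mix of curve representations pay off.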
2020
PKC
Improved Classical Cryptanalysis of SIKE in Practice 📺
The main contribution of this work is an optimized implementation of the van Oorschot-Wiener (vOW) parallel collision finding algorithm. As is typical for cryptanalysis against conjectured hard problems (e.g. factoring or discrete logarithms), challenges can arise in the implementation that are not captured in the theory, making the performance of the algorithm in practice a crucial element of estimating security. We present a number of novel improvements, both to generic instantiations of the vOW algorithm finding collisions in arbitrary functions, and to its instantiation in the context of the supersingular isogeny key encapsulation (SIKE) protocol, that culminate in an improved classical cryptanalysis of the computational supersingular isogeny (CSSI) problem. In particular, we present a scalable implementation that can be applied to the Round-2 parameter sets of SIKE and that can be used to give confidence in their security levels.
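For intuition, the heart of vOW collision search is to iterate a random-looking function and store only "distinguished" points, so that colliding walks can be detected with small memory. A minimal single-threaded sketch (an illustrative toy, not the optimized implementation described above):

    import hashlib, random

    N_BITS, DIST_BITS = 20, 6  # toy domain size and distinguishing property

    def f(x: int) -> int:
        # Random-looking step function on a 2^20-element domain.
        h = hashlib.sha256(x.to_bytes(4, "big")).digest()
        return int.from_bytes(h[:4], "big") % (1 << N_BITS)

    def walk(start: int):
        # Iterate f until a distinguished point (low bits all zero) is hit.
        x, steps = start, 0
        while x % (1 << DIST_BITS) != 0:
            x, steps = f(x), steps + 1
            if steps > (1 << (DIST_BITS + 4)):
                return None  # abandon rare overly long trails
        return x

    store = {}  # distinguished endpoint -> trail start
    while True:
        start = random.randrange(1 << N_BITS)
        end = walk(start)
        if end is None:
            continue
        if end in store and store[end] != start:
            # Two distinct trails merged; replaying both from their starts
            # locates the actual colliding pair x != x' with f(x) = f(x').
            print("trails from", store[end], "and", start, "meet at", end)
            break
        store[end] = start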
2020
PKC
A Short-List of Pairing-Friendly Curves Resistant to Special TNFS at the 128-Bit Security Level 📺
There have been notable improvements in discrete logarithm computations in finite fields since 2015 and the introduction of the Tower Number Field Sieve algorithm (TNFS) for extension fields. The Special TNFS is very efficient in finite fields that are target groups of pairings on elliptic curves, where the characteristic is special (e.g. sparse). The key sizes for pairings should be increased, and alternative pairing-friendly curves can be considered. We revisit the Special variant of TNFS for pairing-friendly curves. In this case the characteristic is given by a polynomial of moderate degree (between 4 and 38) and tiny coefficients, evaluated at an integer (a seed). We present a polynomial selection with a new practical trade-off between degree and coefficient size. As a consequence, the security of curves computed by Barbulescu, El Mrabet and Ghammam in 2019 should be revised: we obtain a smaller estimated cost of STNFS for all curves except BLS12 and BN. To obtain TNFS-secure curves, we reconsider the Brezing–Weng generic construction of families of pairing-friendly curves and estimate the cost of our new Special TNFS algorithm for these curves. This improves on the work of Fotiadis and Konstantinou, Fotiadis and Martindale, and Barbulescu, El Mrabet and Ghammam. We obtain a short-list of interesting families of curves that are resistant to the Special TNFS algorithm, of embedding degrees 10 to 16 for the 128-bit security level. We conclude that at the 128-bit security level, BLS12 and Fotiadis–Konstantinou–Martindale curves with $$k=12$$ over a 440 to 448-bit prime field seem to be the best choice for pairing efficiency. We also give hints at the 192-bit security level.
2020
PKC
Privacy-Preserving Authenticated Key Exchange and the Case of IKEv2 📺
In this paper, we present a strong, formal, and general-purpose cryptographic model for privacy-preserving authenticated key exchange (PPAKE) protocols. PPAKE protocols are secure in the traditional AKE sense but additionally guarantee the confidentiality of the identities used in communication sessions. Our model has several useful and novel features, among others: it is a proper extension of classical AKE models, guarantees in a strong sense that the confidentiality of session keys is independent from the secrecy of the used identities, and it is the first to support what we call dynamic modes, where the responsibility of selecting the identities of the communication partners may vary over several protocol runs. We show the validity of our model by applying it to the cryptographic core of IPsec IKEv2 with signature-based authentication where the need for dynamic modes is practically well-motivated. In our analysis, we not only show that this protocol provides strong classical AKE security guarantees but also that the identities that are used by the parties remain hidden in successful protocol runs. Historically, the Internet Key Exchange (IKE) protocol was the first real-world AKE to incorporate privacy-preserving techniques. However, lately privacy-preserving techniques have gained renewed interest in the design process of important protocols like TLS 1.3 (with encrypted SNI) and NOISE. We believe that our new model can be a solid foundation to analyze these and other practical protocols with respect to their privacy guarantees, in particular, in the now so wide-spread scenario where multiple virtual servers are hosted on a single machine.
2020
PKC
Linearly-Homomorphic Signatures and Scalable Mix-Nets 📺
Anonymity is a primary ingredient for our digital life. Several tools have been designed to address it such as, for authentication, blind signatures, group signatures or anonymous credentials and, for confidentiality, randomizable encryption or mix-nets. When it comes to complex electronic voting schemes, random shuffling of authenticated ciphertexts with mix-nets is the only known tool. However, it requires huge and complex zero-knowledge proofs to guarantee the actual permutation of the initial ciphertexts in a privacy-preserving way. In this paper, we propose a new approach for proving correct shuffling of signed ElGamal ciphertexts: the mix-servers can simply randomize individual ballots, which means the ciphertexts, the signatures, and the verification keys, with an additional global proof of constant size, and the output will be publicly verifiable. The security proof is in the generic bilinear group model. The computational complexity for each mix-server is linear in the number of ballots. Verification is also linear in the number of ballots, but independent of the number of rounds of mixing. This leads to a new highly scalable technique. Our construction makes use of linearly-homomorphic signatures, with new features that are of independent interest.
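The ballot randomization above rests on the re-randomizability of ElGamal: anyone can refresh a ciphertext without knowing the plaintext, by multiplying in a fresh encryption of 1. A toy sketch (the paper additionally randomizes the signatures and verification keys, which is not shown):

    import random

    p, g = 23, 5   # toy group parameters (use a large prime-order group in practice)
    sk = 6
    pk = pow(g, sk, p)

    def enc(m, r):
        return (pow(g, r, p), m * pow(pk, r, p) % p)

    def rerandomize(ct, s):
        c1, c2 = ct
        # Componentwise product with (g^s, pk^s), an encryption of 1.
        return (c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p)

    def dec(ct):
        c1, c2 = ct
        return c2 * pow(c1, p - 1 - sk, p) % p  # c2 / c1^sk

    ct = enc(8, random.randrange(1, p - 1))
    ct2 = rerandomize(ct, random.randrange(1, p - 1))
    assert dec(ct) == dec(ct2) == 8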
2020
PKC
Efficient Redactable Signature and Application to Anonymous Credentials 📺
Let us assume that Alice has received a constant-size signature on a set of messages $$\{m_i\}_{i=1}^n$$ from some organization. Depending on the situation, Alice might need to disclose, prove relations about, or hide some of these messages. Ideally, the complexity of the corresponding protocols should not depend on the hidden messages. In particular, if Alice wants to disclose only k messages, then the authenticity of the latter should be verifiable in at most $$O(k)$$ operations. Many solutions were proposed over the past decades, but they only provide a partial answer to this problem. In particular, we note that they suffer either from the need to prove knowledge of the hidden elements or from the inability to prove that the latter satisfy some relations. In this paper, we propose a very efficient constant-size redactable signature scheme that addresses all the problems above. Signatures can indeed be redacted to remain valid only on a subset of k messages included in $$\{m_i\}_{i=1}^n$$. The resulting redacted signature consists of 4 elements and can be verified with essentially k exponentiations. Different shows of the same signature can moreover be made unlinkable, leading to a very efficient anonymous credentials system.
2020
TCC
Non-Malleable Codes, Extractors and Secret Sharing for Interleaved Tampering and Composition of Tampering 📺
Non-malleable codes were introduced by Dziembowski, Pietrzak, and Wichs (JACM 2018) as a generalization of standard error correcting codes to handle severe forms of tampering on codewords. This notion has attracted a lot of recent research, resulting in various explicit constructions, which have found applications in tamper-resilient cryptography and connections to other pseudorandom objects in theoretical computer science. We continue the line of investigation on explicit constructions of non-malleable codes in the information theoretic setting, and give explicit constructions for several new classes of tampering functions. These classes strictly generalize several previously studied classes of tampering functions, and in particular extend the well-studied split-state model, which is a “compartmentalized” model in the sense that the codeword is partitioned a priori into disjoint intervals for tampering. Specifically, we give explicit non-malleable codes for the following classes of tampering functions. (1) Interleaved split-state tampering: Here the codeword is partitioned in an unknown way by an adversary, and then tampered with by a split-state tampering function. (2) Affine tampering composed with split-state tampering: In this model, the codeword is first tampered with by a split-state adversary, and then the whole tampered codeword is further tampered with by an affine function. In fact our results are stronger, and we can handle affine tampering composed with interleaved split-state tampering. Our results are the first explicit constructions of non-malleable codes in any of these tampering models. As applications, they also directly give non-malleable secret-sharing schemes with binary shares in the split-state joint tampering model and the stronger model of affine tampering composed with split-state joint tampering. We derive all these results from explicit constructions of seedless non-malleable extractors, which we believe are of independent interest. Using our techniques, we also give an improved seedless extractor for an unknown interleaving of two independent sources.
2020
TCC
On Pseudorandom Encodings 📺
We initiate a study of \emph{pseudorandom encodings}: efficiently computable and decodable encoding functions that map messages from a given distribution to a random-looking distribution. For instance, every distribution that can be perfectly compressed admits such a pseudorandom encoding. Pseudorandom encodings are motivated by a variety of cryptographic applications, including password-authenticated key exchange, ``honey encryption'' and steganography. The main question we ask is whether \emph{every} efficiently samplable distribution admits a pseudorandom encoding. Under different cryptographic assumptions, we obtain positive and negative answers for different flavors of pseudorandom encodings, and relate this question to problems in other areas of cryptography. In particular, by establishing a two-way relation between pseudorandom encoding schemes and efficient invertible sampling algorithms, we reveal a connection between adaptively secure multi-party computation and questions in the domain of steganography.
2020
TCC
On the Complexity of Arithmetic Secret Sharing 📺
Since the mid 2000s, asymptotically-good strongly-multiplicative linear (ramp) secret sharing schemes over a fixed finite field have turned out to be a central theoretical primitive in numerous constant-communication-rate results in multi-party cryptographic scenarios, and, surprisingly, in two-party cryptography as well. Known constructions of this most powerful class of arithmetic secret sharing schemes all rely heavily on algebraic geometry (AG), i.e., on dedicated AG codes based on asymptotically good towers of algebraic function fields defined over finite fields. It has been a well-known open question since the first (explicit) constructions of such schemes appeared in CRYPTO 2006 whether the use of ``heavy machinery'' can be avoided here, i.e., whether the mere existence of such schemes can also be proved by ``elementary'' techniques only (say, from classical algebraic coding theory), even disregarding effective construction. So far, there has been no progress. In this paper we show the theoretical result that, (1) {\em no matter whether this open question has an affirmative answer or not}, these schemes {\em can} be constructed explicitly by {\em elementary algorithms} defined in terms of basic algebraic coding theory. This pertains to all relevant operations associated with such schemes, including, notably, the generation of an instance for a given number of players $n$, as well as error correction in the presence of corrupt shares. We further show that (2) the algorithms are {\em quasi-linear time} (in $n$); this is (asymptotically) significantly more efficient than the known constructions. That said, the {\em analysis} of the mere termination of these algorithms {\em does} still rely on algebraic geometry, in the sense that it requires ``blackbox application'' of suitable {\em existence} results for these schemes. Our method employs a nontrivial, novel adaptation of a classical (and ubiquitous) paradigm from coding theory that enables transformation of {\em existence} results on asymptotically good codes into {\em explicit construction} of such codes via {\em concatenation}, at some constant loss in the parameters achieved. In a nutshell, our guiding idea is to combine a cascade of explicit but ``asymptotically-bad-yet-good-enough schemes'' with an asymptotically good one in such a judicious way that the latter can be selected with an exponentially smaller number of players than that of the compound scheme. This opens the door to efficient, elementary exhaustive search. In order to make this work, we overcome a number of nontrivial technical hurdles. Our main handles include a novel application of the recently introduced notion of Reverse Multiplication-Friendly Embeddings (RMFE) from CRYPTO 2018, as well as a novel application of a natural variant in arithmetic secret sharing from EUROCRYPT 2008.
2020
TCC
Proof-Carrying Data from Accumulation Schemes 📺
Recursive proof composition has been shown to lead to powerful primitives such as incrementally-verifiable computation (IVC) and proof-carrying data (PCD). All existing approaches to recursive composition take a succinct non-interactive argument of knowledge (SNARK) and use it to prove a statement about its own verifier. This technique requires that the verifier run in time sublinear in the size of the statement it is checking, a strong requirement that restricts the class of SNARKs from which PCD can be built. This in turn restricts the efficiency and security properties of the resulting scheme. Bowe, Grigg, and Hopwood (ePrint 2019/1021) outlined a novel approach to recursive composition, and applied it to a particular SNARK construction which does *not* have a sublinear-time verifier. However, they omit details about this approach and do not prove that it satisfies any security property. Nonetheless, schemes based on their ideas have already been implemented in software. In this work we present a collection of results that establish the theoretical foundations for a generalization of the above approach. We define an *accumulation scheme* for a non-interactive argument, and show that this suffices to construct PCD, even if the argument itself does not have a sublinear-time verifier. Moreover we give constructions of accumulation schemes for SNARKs, which yield PCD schemes with novel efficiency and security features.
2020
TCC
Linear-Time Arguments with Sublinear Verification from Tensor Codes 📺
Minimizing the computational cost of the prover is a central goal in the area of succinct arguments. In particular, it remains a challenging open problem to construct a succinct argument where the prover runs in linear time and the verifier runs in polylogarithmic time. We make progress towards this goal by presenting a new linear-time probabilistic proof. For any fixed ε > 0, we construct an interactive oracle proof (IOP) that, when used for the satisfiability of an N-gate arithmetic circuit, has a prover that uses O(N) field operations and a verifier that uses O(N^ε) field operations. The sublinear verifier time is achieved in the holographic setting for every circuit (the verifier has oracle access to a linear-size encoding of the circuit that is computable in linear time). When combined with a linear-time collision-resistant hash function, our IOP immediately leads to an argument system where the prover performs O(N) field operations and hash computations, and the verifier performs O(N^ε) field operations and hash computations (given a short digest of the N-gate circuit).
2020
TCC
On Perfect Correctness in (Lockable) Obfuscation 📺
In a lockable obfuscation scheme a party takes as input a program P, a lock value alpha, a message msg, and produces an obfuscated program P'. The obfuscated program can be evaluated on an input x to learn the message msg if P(x) = alpha. The security of such schemes states that if alpha is randomly chosen (independent of P and msg), then one cannot distinguish an obfuscation of P from a dummy obfuscation. Existing constructions of lockable obfuscation achieve provable security under the Learning with Errors assumption. One limitation of these constructions is that they achieve only statistical correctness and allow for a possible one-sided error where the obfuscated program could output msg on some value x where P(x) \neq alpha. In this work we motivate the problem of studying perfect correctness in lockable obfuscation for the case where the party performing the obfuscation might wish to inject a backdoor or hole in the correctness. We begin by studying the existing constructions and identify two components that are susceptible to imperfect correctness. The first is in the LWE-based pseudo-random generators (PRGs) that are non-injective, while the second is in the last level testing procedure of the core constructions. We address each in turn. First, we build upon previous work to design injective PRGs that are provably secure from the LWE assumption. Next, we design an alternative last level testing procedure that has additional structure to prevent correctness errors. We then provide a surgical proof of security (to avoid redundancy) that connects our construction to the construction by Goyal, Koppula, and Waters (GKW). Specifically, we show how, for a random value alpha, an obfuscation under our new construction is indistinguishable from an obfuscation under the existing GKW construction.
2020
TCC
Barriers for Succinct Arguments in the Random Oracle Model 📺
We establish barriers on the efficiency of succinct arguments in the random oracle model. We give evidence that, under standard complexity assumptions, there do not exist succinct arguments where the argument verifier makes a small number of queries to the random oracle. The new barriers follow from new insights into how probabilistic proofs play a fundamental role in constructing succinct arguments in the random oracle model. *IOPs are necessary for succinctness.* We prove that any succinct argument in the random oracle model can be transformed into a corresponding interactive oracle proof (IOP). The query complexity of the IOP is related to the succinctness of the argument. *Algorithms for IOPs.* We prove that if a language has an IOP with good soundness relative to query complexity, then it can be decided via a fast algorithm with small space complexity. By combining these results we obtain barriers for a large class of deterministic and non-deterministic languages. For example, a succinct argument for 3SAT with few verifier queries implies an IOP with good parameters, which in turn implies a fast algorithm for 3SAT that contradicts the Exponential-Time Hypothesis. We additionally present results that shed light on the necessity of several features of probabilistic proofs that are typically used to construct succinct arguments, such as holography and state restoration soundness. Our results collectively provide an explanation for "why" known constructions of succinct arguments have a certain structure.
2020
TCC
Schr{\"o}dinger's Pirate: How To Trace a Quantum Decoder 📺
We explore the problem of traitor tracing where the pirate decoder can contain a quantum state. Our main results include: - We show how to overcome numerous definitional challenges to give a meaningful notion of tracing for quantum decoders. - We give negative results, demonstrating barriers to adapting classical tracing algorithms to the quantum decoder setting. - On the other hand, we show how to trace quantum decoders in the setting of (public key) private linear broadcast encryption, capturing a common approach to traitor tracing.
2020
TCC
Expected Constant Round Byzantine Broadcast under Dishonest Majority 📺
Byzantine Broadcast (BB) is a central question in distributed systems, and an important challenge is to understand its round complexity. Under the honest majority setting, it has long been known that there exist randomized protocols that can achieve BB in expected constant rounds, regardless of the number of nodes $n$. However, whether we can match the expected constant round complexity in the corrupt majority setting --- or more precisely, when $f \geq n/2 + \omega(1)$ --- remains unknown, where $f$ denotes the number of corrupt nodes. In this paper, we are the first to resolve this long-standing question. We show how to achieve BB in expected $O((n/(n-f))^2)$ rounds. In particular, even when 99\% of the nodes are corrupt we can achieve expected constant rounds. Our results hold under both a static adversary and a weakly adaptive adversary who cannot perform ``after-the-fact removal'' of messages already sent by a node before it becomes corrupt.
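As a quick sanity check on the bound (our own arithmetic, not taken from the paper), the expression $O((n/(n-f))^2)$ is constant whenever the corrupt fraction $f/n$ is a constant, however close to 1:

```python
# Evaluating the (n/(n-f))^2 term from the stated bound; the function
# name and the sample parameters are ours, chosen for illustration.
def expected_rounds_bound(n: int, f: int) -> float:
    return (n / (n - f)) ** 2

for n in (1_000, 1_000_000):
    for frac in (0.5, 0.9, 0.99):
        f = int(frac * n)
        print(f"n={n:>9}, f/n={frac:.0%} -> bound ~ {expected_rounds_bound(n, f):,.0f}")

# For f = 0.99n the term is ~10,000 regardless of n: a constant,
# matching the "even when 99% of the nodes are corrupt" claim.
```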
2020
TCC
A Lower Bound for One-Round Oblivious RAM 📺
We initiate a fine-grained study of the round complexity of Oblivious RAM (ORAM). We prove that any one-round balls-in-bins ORAM that does not duplicate balls must have either $\Omega(\sqrt{N})$ bandwidth or $\Omega(\sqrt{N})$ client memory, where $N$ is the number of memory slots being simulated. This shows that such schemes are strictly weaker than general (multi-round) ORAMs or those with server computation, and in particular implies that a one-round version of the original square-root ORAM of Goldreich and Ostrovsky (J. ACM 1996) is optimal. We prove this bound via new techniques that differ from those of Goldreich and Ostrovsky, and of Larsen and Nielsen (CRYPTO 2018), which achieved an $\Omega(\log N)$ bound for balls-in-bins and general multi-round ORAMs respectively. Finally we give a weaker extension of our bound that allows for limited duplication of balls, and also show that our bound extends to multiple-round ORAMs of a restricted form that include the best known constructions.
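One way to read the bound (our own back-of-the-envelope framing, not the paper's proof): since either the bandwidth B or the client memory M must be $\Omega(\sqrt{N})$, the best a one-round scheme can do is balance B ≈ M ≈ √N, which is exactly the cost profile of the one-round square-root ORAM and is why that construction is optimal.

```python
# The sqrt(N) threshold at a few memory sizes; purely illustrative.
import math

for logN in (10, 20, 30):
    N = 2 ** logN
    print(f"N = 2^{logN}: max(bandwidth, client memory) >= ~{math.isqrt(N):,}")
```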
2020
TCC
Can a Blockchain Keep a Secret? 📺
Blockchains are gaining traction and acceptance, not just for cryptocurrencies, but increasingly as an architecture for distributed computing. In this work we seek solutions that allow a \emph{public} blockchain to act as a trusted long-term repository of secret information: Our goal is to deposit a secret with the blockchain, specify how it is to be used (e.g., the conditions under which it is released), and have the blockchain keep the secret and use it only in the specified manner (e.g., release it only once the conditions are met). This simple functionality enables many powerful applications, including signing statements on behalf of the blockchain, using it as the control plane for a storage system, performing decentralized program-obfuscation-as-a-service, and many more. Using proactive secret sharing techniques, we present a scalable solution for implementing this functionality on a public blockchain, in the presence of a mobile adversary controlling a small minority of the participants. The main challenge is that, on the one hand, scalability requires that we use small committees to represent the entire system, but, on the other hand, a mobile adversary may be able to corrupt the entire committee if it is small. For this reason, existing proactive secret sharing solutions are either non-scalable or insecure in our setting. We approach this challenge via "player replaceability", which ensures the committee is anonymous until after it performs its actions. Our main technical contribution is a system that allows sharing and re-sharing of secrets among the members of small dynamic committees, without knowing who they are until after they perform their actions and erase their secrets. Our solution handles a fully mobile adversary corrupting roughly 1/4 of the participants at any time, and is scalable in terms of both the number of parties and the number of time intervals.
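To make the re-sharing step concrete, here is a minimal Python sketch (our illustration, over a prime field) of the classical hand-off idea underlying proactive secret sharing: each member of the outgoing committee Shamir-shares its own share to the incoming committee, and incoming members combine the pieces with Lagrange coefficients. The paper's key contribution, keeping committees anonymous until they act, is deliberately not modeled here.

```python
# Shamir re-sharing hand-off between committees; toy parameters, ours.
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def share(secret, t, ids):
    """Degree-t Shamir shares of `secret` for the given party ids."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in ids}

def lagrange_at_zero(ids):
    """Lagrange coefficients for interpolating a polynomial at 0."""
    out = {}
    for i in ids:
        num = den = 1
        for j in ids:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        out[i] = num * pow(den, -1, P) % P
    return out

t = 2
old_ids, new_ids = [1, 2, 3, 4, 5], [6, 7, 8, 9, 10]
secret = 42
old_shares = share(secret, t, old_ids)

# Hand-off: t+1 old members each re-share their own share to the new
# committee; new members take the Lagrange-weighted sum of what they got.
active = old_ids[: t + 1]
lam = lagrange_at_zero(active)
resh = {i: share(old_shares[i], t, new_ids) for i in active}
new_shares = {j: sum(lam[i] * resh[i][j] for i in active) % P
              for j in new_ids}

# The new shares lie on a fresh degree-t polynomial whose value at 0 is
# still the secret, so any t+1 new members can reconstruct it.
lam2 = lagrange_at_zero(new_ids[: t + 1])
rec = sum(lam2[j] * new_shares[j] for j in new_ids[: t + 1]) % P
assert rec == secret
print("recovered:", rec)
```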
2020
TCC
New Techniques in Replica Encodings with Client Setup 📺
A proof of replication system is a cryptographic primitive that allows a server (or group of servers) to prove to a client that it is dedicated to storing multiple copies or replicas of a file. Until recently, all such protocols required fine-grained timing assumptions on the amount of time it takes for a server to produce such replicas. Damgard, Ganesh, and Orlandi [DGO19] proposed a novel notion that we will call proof of replication with client setup. Here, a client first operates with secret coins to generate the replicas for a file. Such systems do not inherently have to require fine-grained timing assumptions. At the core of their solution to building proofs of replication with client setup is an abstraction called replica encodings. Briefly, these comprise a private coin scheme where a client algorithm given a file m can produce an encoding \sigma. The encodings have the property that, given any encoding \sigma, one can decode and retrieve the original file m. Secondly, if a server has significantly less than n·|m| bits of storage, it cannot reproduce n encodings. The authors give a construction of encodings from ideal permutations and trapdoor functions. In this work, we make three central contributions: 1) Our first contribution is that we discover and demonstrate that the security argument put forth by [DGO19] is fundamentally flawed. Briefly, the security argument makes assumptions on the attacker's storage behavior that do not capture general attacker strategies. We demonstrate this issue by constructing a trapdoor permutation, secure assuming indistinguishability obfuscation, that serves as a counterexample to their claim (for the parameterization stated). 2) In our second contribution we show that the DGO construction is actually secure in the ideal permutation model from any trapdoor permutation when parameterized correctly. In particular, when the number of rounds in the construction is equal to \lambda·n·b where \lambda is the security parameter, n is the number of replicas and b is the number of blocks. To do so we build up a proof approach from the ground up that accounts for general attacker storage behavior where we create an analysis technique that we call "sequence-then-switch". 3) Finally, we show a new construction that is provably secure in the random oracle (or random function) model, thus requiring less structure on the ideal function.
2020
TCC
Round-Efficient Byzantine Broadcast under Strongly Adaptive and Majority Corruptions 📺
The round complexity of Byzantine Broadcast (BB) has been a central question in distributed systems and cryptography. In the honest majority setting, expected constant round protocols have been known for decades even in the presence of a strongly adaptive adversary. In the corrupt majority setting, however, no protocol with sublinear round complexity is known, even when the adversary is allowed to {\it strongly adaptively} corrupt only 51\% of the players, and even under reasonable setup or cryptographic assumptions. Recall that a strongly adaptive adversary can examine what original message an honest player would have wanted to send in some round, adaptively corrupt the player in the same round and make it send a completely different message instead. In this paper, we are the first to construct a BB protocol with sublinear round complexity in the corrupt majority setting. Specifically, assuming the existence of time-lock puzzles with suitable hardness parameters and other standard cryptographic assumptions, we show how to achieve BB in $(\frac{n}{n-f})^2 \cdot \mathrm{poly}\log \lambda$ rounds with $1-\mathrm{negl}(\lambda)$ probability, where $n$ denotes the total number of players, $f$ denotes the maximum number of corrupt players, and $\lambda$ is the security parameter. Our protocol completes in polylogarithmically many rounds even when 99\% of the players can be corrupt.
2020
TCC
Quantum encryption with certified deletion 📺
Given a ciphertext, is it possible to prove the deletion of the underlying plaintext? Since classical ciphertexts can be copied, clearly such a feat is impossible using classical information alone. In stark contrast to this, we show that quantum encodings enable certified deletion. More precisely, we show that it is possible to encrypt classical data into a quantum ciphertext such that the recipient of the ciphertext can produce a classical string which proves to the originator that the recipient has relinquished any chance of recovering the plaintext should the key be revealed. Our scheme is feasible with current quantum technology: the honest parties only require quantum devices for single-qubit preparation and measurements; the scheme is also robust against noise in these devices. Furthermore, we provide an analysis that is suitable in the finite-key regime.
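The conjugate-coding idea at the heart of such schemes can be caricatured classically. The following toy simulation is ours (not the paper's construction) and ignores all adversarial subtleties: data is hidden at computational-basis positions and check bits at Hadamard-basis positions, so measuring every qubit in the Hadamard basis produces a verifiable certificate while irreversibly randomizing the data positions.

```python
# Toy classical simulation of conjugate coding for certified deletion;
# all names and parameters are ours, for intuition only.
import random

n = 16
bits  = [random.randrange(2) for _ in range(n)]   # encoded bits r_i
theta = [random.randrange(2) for _ in range(n)]   # 0: computational, 1: Hadamard

def measure_all_hadamard(bits, theta):
    # A Hadamard-basis qubit yields its encoded bit; a computational-basis
    # qubit collapses to a uniformly random outcome.
    return [bits[i] if theta[i] == 1 else random.randrange(2)
            for i in range(n)]

cert = measure_all_hadamard(bits, theta)

# The verifier (who knows theta and the bits) checks Hadamard positions.
ok = all(cert[i] == bits[i] for i in range(n) if theta[i] == 1)
print("deletion certificate accepted:", ok)

# After measuring, the computational-basis positions (which carried the
# data) now match the original bits only at chance level.
hits = sum(cert[i] == bits[i] for i in range(n) if theta[i] == 0)
print(f"data positions still matching: {hits}/{theta.count(0)}")
```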
2020
TCC
Secure Quantum Extraction Protocols 📺
Knowledge extraction, typically studied in the classical setting, is at the heart of several cryptographic protocols. The prospect of quantum computers forces us to revisit the concept of knowledge extraction in the presence of quantum adversaries. We introduce the notion of secure quantum extraction protocols. A secure quantum extraction protocol for an NP relation $R$ is a classical interactive protocol between a sender and a receiver, where the sender gets as input the instance $x$ and witness $w$ while the receiver only gets the instance $x$ as input. There are two properties associated with a secure quantum extraction protocol: (a) {\em Extractability}: for any efficient quantum polynomial-time (QPT) adversarial sender, there exists a QPT extractor that can extract a witness $w'$ such that $(x,w') \in R$, and (b) {\em Zero-Knowledge}: a malicious receiver, interacting with the sender, should not be able to learn any information about $w$. We study and construct two flavors of secure quantum extraction protocols. - {\bf Security against QPT malicious receivers}: First we consider the setting where the malicious receiver is a QPT adversary. In this setting, we construct a secure quantum extraction protocol for NP assuming the existence of quantum fully homomorphic encryption satisfying some mild properties (already satisfied by existing constructions [Mahadev, FOCS'18; Brakerski, CRYPTO'18]) and the quantum hardness of learning with errors. The novelty of our construction is a new non-black-box technique in the quantum setting. All previous extraction techniques in the quantum setting were solely based on quantum rewinding. - {\bf Security against classical PPT malicious receivers}: We also consider the setting where the malicious receiver is a classical probabilistic polynomial-time (PPT) adversary. In this setting, we construct a secure quantum extraction protocol for NP solely based on the quantum hardness of learning with errors. Furthermore, our construction satisfies {\em quantum-lasting security}: a malicious receiver cannot later, long after the protocol has been executed, use a quantum computer to extract a valid witness from the transcript of the protocol. Both of the above extraction protocols are {\em constant round} protocols. We present an application of secure quantum extraction protocols to zero-knowledge (ZK). Assuming the quantum hardness of learning with errors, we present the first construction of constant-round ZK argument systems for NP with: (a) zero-knowledge against QPT malicious verifiers, and (b) soundness against classical PPT adversaries. Moreover, our construction satisfies the stronger (quantum) auxiliary-input zero-knowledge property and thus can be composed with other protocols secure against quantum adversaries.
2020
TCC
On Statistical Security in Two-Party Computation 📺
There has been a large body of work characterizing the round complexity of general-purpose maliciously secure two-party computation (2PC) against probabilistic polynomial time adversaries. This is particularly true for zero-knowledge, which is a special case of 2PC. In fact, in the special case of zero knowledge, optimal protocols with unconditional security against one of the two players have also been meticulously studied and constructed. On the other hand, general-purpose maliciously secure 2PC with statistical or unconditional security against one of the two participants, has remained largely unexplored so far. In this work, we initiate the study of such protocols, which we refer to as 2PC with one-sided statistical security. We completely settle the round complexity of 2PC with one-sided statistical security with respect to black-box simulation by obtaining the following tight results: - In a setting where only one party obtains an output, we design 2PC in 4 rounds with statistical security against receivers and computational security against senders. - In a setting where both parties obtain outputs, we design 2PC in 5 rounds with computational security against the party that obtains output first and statistical security against the party that obtains output last. Katz and Ostrovsky (CRYPTO 2004) showed that 2PC with black-box simulation requires at least 4 rounds when one party obtains an output and 5 rounds when both parties obtain outputs, even when only computational security is desired against both parties. Thus in these settings, not only are our results tight, but they also show that statistical security is achievable at no extra cost to round complexity. This still leaves open the question of whether 2PC can be achieved with black-box simulation in 4 rounds with statistical security against senders and computational security against receivers. Based on a lower bound on computational zero-knowledge proofs due to Katz (TCC 2008), we observe that the answer is negative unless the polynomial hierarchy collapses.
2020
TCC
CP-ABE for Circuits (and more) in the Symmetric Key Setting 📺
The celebrated work of Gorbunov, Vaikuntanathan and Wee [GVW13] provided the first key policy attribute based encryption scheme (ABE) for circuits from the Learning With Errors (LWE) assumption. However, the arguably more natural ciphertext policy variant has remained elusive, and is a central primitive not yet known from LWE. In this work, we construct the first symmetric key ciphertext policy attribute based encryption scheme (CP-ABE) for all polynomial sized circuits from the learning with errors (LWE) assumption. In more detail, the ciphertext for a message m is labelled with an access control policy f, secret keys are labelled with public attributes x from the domain of f and decryption succeeds to yield the hidden message m if and only if f(x) = 1. The sizes of our public and secret keys do not depend on the size of the circuits supported by the scheme – this enables our construction to support circuits of unbounded size (but bounded depth). Our construction is secure against collusions of unbounded size. We note that the current best CP-ABE schemes [BSW07, Wat11, LOS+10, OT10, LW12, RW13, Att14, Wee14, AHY15, CGW15, AC17, KW19] rely on pairings and only support circuits in the class NC1 (albeit in the public key setting). We adapt our construction to the public key setting for the case of bounded size circuits. The size of the ciphertext and secret key as well as running time of encryption, key generation and decryption satisfy the efficiency properties desired from CP-ABE, assuming that all algorithms have RAM access to the public key. However, the running time of the setup algorithm and size of the public key depends on the circuit size bound, restricting the construction to support circuits of a-priori bounded size. We remark that the inefficiency of setup is somewhat mitigated by the fact that setup must only be run once. We generalize our construction to consider attribute and function hiding. The compiler of lockable obfuscation upgrades any attribute based encryption scheme to predicate encryption, i.e. with attribute hiding [GKW17, WZ17]. Since lockable obfuscation can be constructed from LWE, we achieve ciphertext policy predicate encryption immediately. For function privacy, we show that the most natural notion of function hiding ABE for circuits, even in the symmetric key setting, is sufficient to imply indistinguishability obfuscation. We define a suitable weakening of function hiding to sidestep the implication and provide a construction to achieve this notion for both the key policy and ciphertext policy case. Previously, the largest function class for which function private predicate encryption (supporting unbounded keys) could be achieved was inner product zero testing, by Shen, Shi and Waters [SSW09].
2020
TCC
Optimal Broadcast Encryption from LWE and Pairings in the Standard Model 📺
Broadcast Encryption with optimal parameters was a long-standing problem, whose first solution was provided in an elegant work by Boneh, Waters and Zhandry \cite{BWZ14}. However, this work relied on multilinear maps of logarithmic degree, which is not considered a standard assumption. Recently, Agrawal and Yamada \cite{AY20} improved this state of affairs by providing the first construction of optimal broadcast encryption from Bilinear Maps and Learning With Errors (LWE). However, their proof of security was in the generic bilinear group model. In this work, we improve upon their result by providing a new construction and proof in the standard model. In more detail, we rely on the Learning With Errors (LWE) assumption and the Knowledge of OrthogonALity Assumption (KOALA) \cite{BW19} on bilinear groups. Our construction combines three building blocks: a (computational) nearly linear secret sharing scheme with compact shares which we construct from LWE, an inner-product functional encryption scheme with special properties which is constructed from the bilinear Matrix Decision Diffie Hellman (MDDH) assumption, and a certain form of hyperplane obfuscation, which is constructed using the KOALA assumption. While similar to that of Agrawal and Yamada, our construction provides a new understanding of how to decompose the construction into simpler, modular building blocks with concrete and easy-to-understand security requirements for each one. We believe this sheds new light on the requirements for optimal broadcast encryption, which may lead to new constructions in the future.
2020
TCC
Equipping Public-Key Cryptographic Primitives with Watermarking (or: A Hole Is to Watermark) 📺
Program watermarking enables users to embed an arbitrary string called a mark into a program while preserving the functionality of the program. Adversaries cannot remove the mark without destroying the functionality. Although there exist generic constructions of watermarking schemes for public-key cryptographic (PKC) primitives, those schemes are constructed from scratch and are not efficient. In this work, we present a general framework to equip a broad class of PKC primitives with an efficient watermarking scheme. The class consists of PKC primitives that have a canonical all-but-one (ABO) reduction. Canonical ABO reductions are standard techniques to prove selective security of PKC primitives, where adversaries must commit to a target attribute at the beginning of the security game. Thus, we can obtain watermarking schemes for many existing efficient PKC schemes from standard cryptographic assumptions via our framework. Most well-known selectively secure PKC schemes have canonical ABO reductions. Notably, we can achieve watermarking for public-key encryption whose ciphertexts and secret keys are constant-size, and that is chosen-ciphertext secure. Our approach adapts the canonical ABO reduction technique to the puncturable pseudorandom function (PRF) technique, which is used to achieve watermarkable PRFs. We find that canonical ABO reductions are compatible with such puncturable PRF-based watermarking schemes.
2020
TCC
Robust Secret Sharing with Almost Optimal Share Size and Security Against Rushing Adversaries 📺
We show a robust secret sharing scheme for a maximal threshold $t < n/2$ that features an optimal overhead in share size, offers security against a rushing adversary, and runs in polynomial time. Previous robust secret sharing schemes for $t < n/2$ either suffered from a suboptimal overhead, offered no (provable) security against a rushing adversary, or ran in superpolynomial time.
2020
TCC
The Share Size of Secret-Sharing Schemes for Almost All Access Structures and Graphs 📺
The share size of general secret-sharing schemes is poorly understood. The gap between the best known upper bound on the total share size per party of $2^{0.64n}$ (Applebaum et al., STOC 2020) and the best known lower bound of $\Omega(n/\log n)$ (Csirmaz, J. of Cryptology 1997) is huge (where $n$ is the number of parties in the scheme). To gain some understanding on this problem, we study the share size of secret-sharing schemes of almost all access structures, i.e., of almost all collections of authorized sets. This is motivated by the fact that in complexity, many times almost all objects are hardest (e.g., most Boolean functions require exponential size circuits). All previous constructions of secret-sharing schemes were for the worst access structures (i.e., all access structures) or for specific families of access structures. We prove upper bounds on the share size for almost all access structures. We combine results on almost all monotone Boolean functions (Korshunov, Probl. Kibern. 1981) and a construction of (Liu and Vaikuntanathan, STOC 2018) and conclude that almost all access structures have a secret-sharing scheme with share size $2^{\tilde{O}(\sqrt{n})}$. We also study graph secret-sharing schemes. In these schemes, the parties are vertices of a graph and a set can reconstruct the secret if and only if it contains an edge. Again, for this family there is a huge gap between the upper bounds -- $O(n/\log n)$ (Erdős and Pyber, Discrete Mathematics 1997) -- and the lower bounds -- $\Omega(\log n)$ (van Dijk, Des. Codes Crypto. 1995). We show that for almost all graphs, the share size of each party is $n^{o(1)}$. This result is achieved by using robust 2-server conditional disclosure of secrets protocols, a new primitive introduced and constructed in (Applebaum et al., STOC 2020), and the fact that the size of the maximal independent set in a random graph is small. Finally, using robust conditional disclosure of secrets protocols, we improve the total share size for all very dense graphs.
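For a rough sense of scale (our numbers; the exponents ignore polylog factors), compare the general $2^{0.64n}$ upper bound with the new $2^{\tilde{O}(\sqrt{n})}$ bound for almost all access structures:

```python
# log2 of the total share size under each bound; illustrative only.
import math

for n in (50, 100, 200):
    general = 0.64 * n           # Applebaum et al., all access structures
    almost_all = math.sqrt(n)    # this paper, almost all access structures
    print(f"n={n:>3}: 2^{general:.0f} vs about 2^{almost_all:.0f}")
```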
2020
TCC
Batch Verification for Statistical Zero Knowledge Proofs 📺
A statistical zero-knowledge proof (SZK) for a problem $\Pi$ enables a computationally unbounded prover to convince a polynomial-time verifier that $x \in \Pi$ without revealing any additional information about $x$ to the verifier, in a strong information-theoretic sense. Suppose, however, that the prover wishes to convince the verifier that $k$ separate inputs $x_1,\dots,x_k$ all belong to $\Pi$ (without revealing anything else). A naive way of doing so is to simply run the SZK protocol separately for each input. In this work we ask whether one can do better -- that is, is efficient batch verification possible for SZK? We give a partial positive answer to this question by constructing a batch verification protocol for a natural and important subclass of SZK -- all problems $\Pi$ that have a non-interactive SZK protocol (in the common random string model). More specifically, we show that, for every such problem $\Pi$, there exists an honest-verifier SZK protocol for batch verification of $k$ instances, with communication complexity $poly(n) + k \cdot poly(\log{n},\log{k})$, where $poly$ refers to a fixed polynomial that depends only on $\Pi$ (and not on $k$). This result should be contrasted with the naive solution, which has communication complexity $k \cdot poly(n)$. Our proof leverages a new NISZK-complete problem, called Approximate Injectivity, that we find to be of independent interest. The goal in this problem is to distinguish circuits that are nearly injective, from those that are non-injective on almost all inputs.
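To see the communication gap concretely, here is a small numeric comparison; the particular polynomials are our instantiation, chosen for intuition only:

```python
# Naive batching costs k * poly(n); the new protocol costs
# poly(n) + k * poly(log n, log k). We pick cubes as the polynomials.
import math

def naive(k, n):
    return k * n**3

def batched(k, n):
    return n**3 + k * (math.log2(n) * math.log2(k))**3

n = 1_000
for k in (10, 1_000, 100_000):
    print(f"k={k:>7}: naive ~ {naive(k, n):.2e}, batched ~ {batched(k, n):.2e}")
# The k-dependent term grows only polylogarithmically with n, so for
# large k the batched protocol wins by orders of magnitude.
```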
2020
TCC
Coupling of Random Systems 📺
This paper makes three contributions. First, we present a simple theory of random systems. The main idea is to think of a probabilistic system as an equivalence class of distributions over deterministic systems. Second, we demonstrate how in this new theory, the optimal information-theoretic distinguishing advantage between two systems can be characterized merely in terms of the statistical distance of probability distributions, providing a more elementary understanding of the distance of systems. In particular, two systems that are epsilon-close in terms of the best distinguishing advantage can be understood as being equal with probability 1-epsilon, a property that holds statically, without even considering a distinguisher, let alone its interaction with the systems. Finally, we exploit this new characterization of the distinguishing advantage to prove that any threshold combiner is an amplifier for indistinguishability in the information-theoretic setting, generalizing and simplifying results from Maurer, Pietrzak, and Renner (CRYPTO 2007).
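The "equal with probability 1-epsilon" statement is exactly the classical maximal-coupling fact. The following self-contained demonstration (our example distributions) constructs such a coupling and checks it empirically:

```python
# Maximal coupling of two distributions p, q at statistical distance eps:
# the coupled samples agree with probability exactly 1 - eps.
import random

p = {"a": 0.5, "b": 0.3, "c": 0.2}
q = {"a": 0.4, "b": 0.3, "c": 0.3}

overlap = {x: min(p[x], q[x]) for x in p}
eps = 1 - sum(overlap.values())          # statistical distance = 0.1 here

def sample_from(weights, total):
    r, acc = random.uniform(0, total), 0.0
    items = list(weights.items())
    for x, w in items:
        acc += w
        if r <= acc:
            return x
    return items[-1][0]                  # guard against float rounding

def sample_coupled():
    if random.random() < 1 - eps:        # agree: sample from the overlap
        x = sample_from(overlap, 1 - eps)
        return x, x
    rest_p = {x: p[x] - overlap[x] for x in p if p[x] > overlap[x]}
    rest_q = {x: q[x] - overlap[x] for x in q if q[x] > overlap[x]}
    return sample_from(rest_p, eps), sample_from(rest_q, eps)

trials = 100_000
agree = sum(x == y for x, y in (sample_coupled() for _ in range(trials)))
print(f"eps = {eps:.2f}, empirical Pr[X = Y] ~ {agree/trials:.3f} "
      f"(expect {1 - eps:.2f})")
```

Each marginal of the coupled pair is distributed exactly as p and q respectively, yet the two samples coincide as often as the statistical distance permits, which is the static "equal with probability 1-epsilon" view the paper exploits.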
2020
TCC
Towards Defeating Backdoored Random Oracles: Indifferentiability with Bounded Adaptivity 📺
In the backdoored random-oracle (BRO) model, besides access to a random function $H$, adversaries are provided with a backdoor oracle that can compute arbitrary leakage functions $f$ of the function table of $H$. Thus, an adversary would be able to invert points, find collisions, test for membership in certain sets, and more. This model was introduced in the work of Bauer, Farshim, and Mazaheri (Crypto 2018) and extends the auxiliary-input idealized models of Unruh (Crypto 2007), Dodis, Guo, and Katz (Eurocrypt 2017), Coretti et al. (Eurocrypt 2018), and Coretti, Dodis, and Guo (Crypto 2018). It was shown that certain security properties, such as one-wayness, pseudorandomness, and collision resistance can be re-established by combining two independent BROs, even if the adversary has access to both backdoor oracles. In this work we further develop the technique of combining two or more independent BROs to render their backdoors useless in a more general sense. More precisely, we study the question of building an \emph{indifferentiable} and backdoor-free random function by combining multiple BROs. Achieving full indifferentiability in this model seems very challenging at the moment. We however make progress by showing that the xor combiner goes well beyond security against preprocessing attacks and offers indifferentiability as long as the adaptivity of queries to different backdoor oracles remains logarithmic in the input size of the BROs. We even show that an extractor-based combiner of three BROs can achieve indifferentiability with respect to a linear adaptivity of backdoor queries. Furthermore, a natural restriction of our definition gives rise to a notion of \emph{indifferentiability with auxiliary input}, for which we give two positive feasibility results. To prove these results we build on and refine techniques by Göös et al. (STOC 2015) and Kothari et al. (STOC 2017) for decomposing distributions with high entropy into distributions with more structure and show how they can be applied in the more involved adaptive settings.
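The xor combiner at the center of the positive results is simple to state. Here is a toy sketch (ours, with SHA-256 under two domain separations standing in for the two independent BROs; in the model they would be truly random and independently backdoored):

```python
# Combining two "oracles" by xor: a backdoor into either H1 or H2 alone
# sees only a value masked by the other, independent function.
import hashlib

def H1(x: bytes) -> bytes:
    return hashlib.sha256(b"oracle-1|" + x).digest()

def H2(x: bytes) -> bytes:
    return hashlib.sha256(b"oracle-2|" + x).digest()

def combined(x: bytes) -> bytes:
    # H(x) = H1(x) XOR H2(x)
    return bytes(a ^ b for a, b in zip(H1(x), H2(x)))

print(combined(b"hello").hex())
```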
2020
TCC
Revisiting Fairness in MPC: Polynomial Number of Parties and General Adversarial Structures 📺
We investigate fairness in secure multiparty computation when the number of parties n = poly(lambda) grows polynomially in the security parameter, lambda. Prior to this work, efficient protocols achieving fairness with no honest majority and polynomial number of parties were known only for the AND and OR functionalities (Gordon and Katz, TCC'09). We show the following: --We first consider symmetric Boolean functions F : {0,1}^n -> {0,1}, where the underlying function f_{n/2,n/2}: {0, ..., n/2} x {0, ..., n/2} -> {0,1} can be computed fairly and efficiently in the 2-party setting. We present an efficient protocol for any such F tolerating n/2 or fewer corruptions, for n = poly(lambda) number of parties. --We present an efficient protocol for n-party majority tolerating n/2+1 or fewer corruptions, for n = poly(lambda) number of parties. The construction extends to n/2+c or fewer corruptions, for constant c. --We extend both of the above results to more general types of adversarial structures and present instantiations of non-threshold adversarial structures of these types. These instantiations are obtained via constructions of projective planes and combinatorial designs.
2020
TCC
On the Price of Concurrency in Group Ratcheting Protocols 📺
Post-Compromise Security, or PCS, refers to the ability of a given protocol to recover—by means of normal protocol operations—from the exposure of local states of its (otherwise honest) participants. While PCS in the two-party setting has attracted a lot of attention recently, the problem of achieving PCS in the group setting—called group ratcheting here—is much less understood. On the one hand, one can achieve excellent security by simply executing, in parallel, a two-party ratcheting protocol (e.g., Signal) for each pair of members in a group. However, this incurs O(n) communication overhead for every message sent, where n is the group size. On the other hand, several related protocols were recently developed in the context of the IETF Messaging Layer Security (MLS) effort that improve the communication overhead per message to O(log n). However, this reduction of communication overhead involves a great restriction: group members are not allowed to send and recover from exposures concurrently such that reaching PCS is delayed up to n communication time slots (potentially even more). In this work we formally study the trade-off between PCS, concurrency, and communication overhead in the context of group ratcheting. Since our main result is a lower bound, we define the cleanest and most restrictive setting where the tension already occurs: static groups equipped with a synchronous (and authenticated) broadcast channel, where up to t arbitrary parties can concurrently send messages in any given round. Already in this setting, we show in a symbolic execution model that PCS requires Omega(t) communication overhead per message. Our symbolic model permits as building blocks black-box use of (even "dual") PRFs, (even key-updatable) PKE (which in our symbolic definition is at least as strong as HIBE), and broadcast encryption, covering all tools used in previous constructions, but prohibiting the use of exotic primitives. To complement our result, we also prove an almost matching upper bound of O(t(1+log(n/t))), which smoothly increases from O(log n) with no concurrency, to O(n) with unbounded concurrency, matching the previously known protocols.
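A quick numeric look at the matching upper bound O(t(1+log(n/t))) (our evaluation) shows the claimed interpolation between O(log n) at t = 1 and O(n) at t = n:

```python
# Per-message communication overhead from the paper's upper bound,
# evaluated for a fixed group size; illustrative only.
import math

def overhead(n, t):
    return t * (1 + math.log2(n / t))

n = 1024
for t in (1, 8, 64, 1024):
    print(f"t={t:>5}: overhead ~ {overhead(n, t):7.1f}")
# t=1 gives ~log n; t=n gives exactly n, matching the stated endpoints.
```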
2020
TCC
Stronger Security and Constructions of Multi-Designated Verifier Signatures 📺
Off-the-Record (OTR) messaging is a two-party message authentication protocol that also provides plausible deniability: there is no record that can later convince a third party what messages were actually sent. The challenge in group OTR is to enable the sender to sign his messages so that group members can verify who sent a message (signatures should be unforgeable, even by group members). Also, we want the off-the-record property: even if some verifiers are corrupt and collude, they should not be able to prove the authenticity of a message to any outsider. Finally, we need consistency, meaning that if any group member accepts a signature, then all of them do. To achieve these properties it is natural to consider Multi-Designated Verifier Signatures (MDVS). However, existing literature defines and builds only limited notions of MDVS, where (a) the off-the-record property (source hiding) only holds when all verifiers could conceivably collude, and (b) the consistency property is not considered. The contributions of this paper are two-fold: stronger definitions for MDVS, and new constructions meeting those definitions. We strengthen source-hiding to support any subset of corrupt verifiers, and give the first formal definition of consistency. We build three new MDVS: one from generic standard primitives (PRF, key agreement, NIZK), one with concrete efficiency, and one from functional encryption.
2020
TCC
Non-interactive classical verification of quantum computation 📺
In a recent breakthrough, Mahadev constructed an interactive protocol that enables a purely classical party to delegate any quantum computation to an untrusted quantum prover. We show that this same task can in fact be performed non-interactively (with setup) and in zero-knowledge. Our protocols result from a sequence of significant improvements to the original four-message protocol of Mahadev. We begin by making the first message instance-independent and moving it to an offline setup phase. We then establish a parallel repetition theorem for the resulting three-message protocol, with an asymptotically optimal rate. This, in turn, enables an application of the Fiat-Shamir heuristic, eliminating the second message and giving a non-interactive protocol. Finally, we employ classical non-interactive zero-knowledge (NIZK) arguments and classical fully homomorphic encryption (FHE) to give a zero-knowledge variant of this construction. This yields the first purely classical NIZK argument system for QMA, a quantum analogue of NP. We establish the security of our protocols under standard assumptions in quantum-secure cryptography. Specifically, our protocols are secure in the Quantum Random Oracle Model, under the assumption that Learning with Errors is quantumly hard. The NIZK construction also requires circuit-private FHE.
2020
TCC
Accumulators in (and Beyond) Generic Groups: Non-Trivial Batch Verification Requires Interaction 📺
We prove a tight lower bound on the number of group operations required for batch verification by any generic-group accumulator that stores a less-than-trivial amount of information. Specifically, we show that $\Omega(t \cdot (\lambda / \log \lambda))$ group operations are required for the batch verification of any subset of $t \geq 1$ elements, where $\lambda \in \mathbb{N}$ is the security parameter, thus ruling out non-trivial batch verification in the standard non-interactive manner. Our lower bound applies already to the most basic form of accumulators (i.e., static accumulators that support membership proofs), and holds both for known-order (and even multilinear) groups and for unknown-order groups, where it matches the asymptotic performance of the known bilinear and RSA accumulators, respectively. In addition, it complements the techniques underlying the generic-group accumulators of Boneh, Bünz and Fisch (CRYPTO '19) and Thakur (ePrint '19) by justifying their application of the Fiat-Shamir heuristic for transforming their interactive batch-verification protocols into non-interactive procedures. Moreover, motivated by a fundamental challenge introduced by Aggarwal and Maurer (EUROCRYPT '09), we propose an extension of the generic-group model that enables us to capture a bounded amount of arbitrary non-generic information (e.g., least-significant bits or Jacobi symbols that are hard to compute generically but are easy to compute non-generically). We prove our lower bound within this extended model, which may be of independent interest for strengthening the implications of impossibility results in idealized models.
2020
TCC
On the Power of an Honest Majority in Three-Party Computation Without Broadcast 📺
Fully secure multiparty computation (MPC) allows a set of parties to compute some function of their inputs, while guaranteeing correctness, privacy, fairness, and output delivery. Understanding the necessary and sufficient assumptions that allow for fully secure MPC is an important goal. Cleve (STOC'86) showed that full security cannot be obtained in general without an honest majority. Conversely, by Rabin and Ben-Or (FOCS'89), assuming a broadcast channel and an honest majority, any function can be computed with full security. Our goal is to characterize the set of functionalities that can be computed with full security, assuming an honest majority, but no broadcast. This question was fully answered by Cohen et al. (TCC'16) -- for the restricted class of \emph{symmetric} functionalities (where all parties receive the same output). Instructively, their results crucially rely on \emph{agreement} and do not carry over to general \emph{asymmetric} functionalities. In this work, we focus on the case of three-party asymmetric functionalities, providing a variety of necessary and sufficient conditions to enable fully secure computation. An interesting use-case of our results is \emph{server-aided} computation, where an untrusted server helps two parties to carry out their computation. We show that without a broadcast assumption, the resource of an external non-colluding server provides no additional power. Namely, a functionality can be computed with the help of the server if and only if it can be computed without it. For fair coin tossing, we further show that the optimal bias for three-party (server-aided) $r$-round protocol remains $\Theta(1/r)$ (as in the two-party setting).
2020
TCC
Synchronous Constructive Cryptography 📺
This paper proposes a simple synchronous composable security framework as an instantiation of the Constructive Cryptography framework, aiming to capture minimally, without unnecessary artefacts, exactly what is needed to state synchronous security guarantees. The objects of study are specifications (i.e., sets) of systems, and traditional security properties like consistency and validity can naturally be understood as specifications, thus unifying composable and property-based definitions. The framework's simplicity is in contrast to current composable frameworks for synchronous computation which are built on top of an asynchronous framework (e.g. the UC framework), thus not only inheriting artefacts and complex features used to handle asynchronous communication, but adding additional overhead to capture synchronous communication. As a second, independent contribution we demonstrate how secure (synchronous) multi-party computation protocols can be understood as constructing a computer that allows a set of parties to perform an arbitrary, on-going computation. An interesting aspect is that the instructions of the computation need not be fixed before the protocol starts but can also be determined during an on-going computation, possibly depending on previous outputs.
2020
TCC
Characterizing Deterministic-Prover Zero Knowledge 📺
Randomness is typically thought to be essential for zero knowledge protocols. Following this intuition, Goldreich and Oren (Journal of Cryptology '94) proved that auxiliary-input zero knowledge cannot be achieved with a deterministic prover. On the other hand, positive results are only known in the honest-verifier setting, or when the prover is given at least a restricted source of entropy. We prove that once the verifier's auxiliary input is removed (or just bounded), deterministic-prover zero knowledge becomes feasible: - Assuming non-interactive witness-indistinguishable proofs and subexponential indistinguishability obfuscation and one-way functions, we construct deterministic-prover zero-knowledge arguments for NP $\cap$ coNP against verifiers with bounded non-uniform auxiliary input. - Assuming also keyless hash functions that are collision-resistant against bounded-auxiliary-input quasipolynomial-time attackers, we construct similar arguments for all of NP. Together with the result of Goldreich and Oren, this characterizes when deterministic-prover zero knowledge is feasible. We also demonstrate the necessity of strong assumptions, by showing that deterministic-prover zero knowledge arguments for a given language imply witness encryption for that language. We further prove that such arguments can always be collapsed to two messages and be made laconic. These implications rely on a more general connection with the notion of predictable arguments by Faonio, Nielsen, and Venturi (PKC '17).
2020
TCC
Zero-Communication Reductions 📺
We introduce a new primitive in information-theoretic cryptography, namely zero-communication reductions (ZCR), with different levels of security. We relate ZCR to several other important primitives, and obtain new results on upper and lower bounds. In particular, we obtain new upper bounds for PSM, CDS and OT complexity of functions, which are exponential in the information complexity of the functions. These upper bounds complement the results of Beimel et al. (2014) which broke the circuit-complexity barrier for ``high complexity'' functions; our results break the barrier of input size for ``low complexity'' functions. We also show that lower bounds on secure ZCR can be used to establish lower bounds for OT-complexity. We recover the known (linear) lower bounds on OT-complexity by Beimel and Malkin (2004) via this new route. We also formulate the lower bound problem for secure ZCR in purely linear-algebraic terms, by defining the invertible rank of a matrix. We present an Invertible Rank Conjecture, proving which will establish super-linear lower bounds for OT-complexity (and, if accompanied by an explicit construction, will provide explicit functions with super-linear circuit lower bounds).
2020
TCC
NIZK from SNARG 📺
We give a construction of a non-interactive zero-knowledge (NIZK) argument for all NP languages based on a succinct non-interactive argument (SNARG) for all NP languages and a one-way function. The succinctness requirement for the SNARG is rather mild: We only require that the proof size be $|\pi|=\mathsf{poly}(\lambda)(|x|+|w|)^c$ for some constant $c<1/2$, where $|x|$ is the statement length, $|w|$ is the witness length, and $\lambda$ is the security parameter. In particular, we do not require anything about the efficiency of verification. Based on this result, we also give a generic conversion from a SNARG to a zero-knowledge SNARG assuming the existence of CPA secure public-key encryption. For this conversion, we require a SNARG to have efficient verification, i.e., the computational complexity of the verification algorithm is $\mathsf{poly}(\lambda)(|x|+|w|)^{o(1)}$. Before this work, such a conversion was only known if we additionally assumed the existence of a NIZK. Along the way to obtaining our result, we give a generic compiler to upgrade a NIZK for all NP languages with non-adaptive zero-knowledge to one with adaptive zero-knowledge. Though this can be shown by carefully combining known results, to the best of our knowledge, no explicit proof of this generic conversion has been presented.
2020
TCC
Weakly Extractable One-Way Functions 📺
A family of one-way functions is extractable if given a random function in the family, an efficient adversary can only output an element in the image of the function if it knows a corresponding preimage. This knowledge extraction guarantee is particularly powerful since it does not require interaction. However, extractable one-way functions (EFs) are subject to a strong barrier: assuming indistinguishability obfuscation, no EF can have a knowledge extractor that works against all polynomial-size non-uniform adversaries. This holds even for non-black-box extractors that use the adversary's code. Accordingly, the literature considers either EFs based on non-falsifiable knowledge assumptions, where the extractor is not explicitly given, but it is only assumed to exist, or EFs against a restricted class of adversaries with bounded non-uniform advice. This falls short of cryptography's gold standard of security that requires an explicit reduction against non-uniform adversaries of arbitrary polynomial size. Motivated by this gap, we put forward a new notion of weakly extractable one-way functions (WEFs) that circumvents the known barrier. We then prove that WEFs are inextricably connected to the long-standing question of three-message zero knowledge protocols. We show that different flavors of WEFs are sufficient and necessary for three-message zero knowledge to exist. The exact flavor depends on whether the protocol is computational or statistical zero knowledge and whether it is publicly or privately verifiable. Combined with recent progress on constructing three-message zero knowledge, we derive a new connection between keyless multi-collision resistance, the notion of incompressibility, and the feasibility of non-interactive knowledge extraction. Another interesting corollary of our result is that in order to construct three-message zero knowledge arguments, it suffices to construct such arguments where the honest prover strategy is unbounded.
2020
TCC
Algebraic Distinguishers: From Discrete Logarithms to Decisional Uber Assumptions 📺
The algebraic group model, introduced by Fuchsbauer, Kiltz and Loss (CRYPTO '18), is a substantial relaxation of the generic group model capturing algorithms that may exploit the representation of the underlying group. This idealized yet realistic model was shown useful for reasoning about cryptographic assumptions and security properties defined via computational problems. However, it does not generally capture assumptions and properties defined via decisional problems. As such problems play a key role in the foundations and applications of cryptography, this leaves a significant gap between the restrictive generic group model and the standard model. We put forward the notion of algebraic distinguishers, strengthening the algebraic group model by enabling it to capture decisional problems. Within our framework we then reveal new insights on the algebraic interplay between a wide variety of decisional assumptions. These include the decisional Diffie-Hellman assumption, the family of Linear assumptions in multilinear groups, and the family of Uber assumptions in bilinear groups. Our main technical results establish that, from an algebraic perspective, these decisional assumptions are in fact all polynomially equivalent to either the most basic discrete logarithm assumption or to its higher-order variant, the $q$-discrete logarithm assumption. On the one hand, these results increase the confidence in these strong decisional assumptions, while on the other hand, they make it possible to direct cryptanalytic efforts towards either extracting discrete logarithms or significantly deviating from standard algebraic techniques.
2020
TCC
Continuous Group Key Agreement with Active Security 📺
A continuous group key agreement (CGKA) protocol allows a long-lived group of parties to agree on a continuous stream of fresh secret key material. CGKA protocols allow parties to join and leave mid-session but may rely neither on special group managers or trusted third parties, nor on any assumptions about if, when, or for how long members are online. CGKA captures the core of an emerging generation of highly practical end-to-end secure group messaging (SGM) protocols. In light of their practical origins, past work on CGKA protocols has been subject to stringent engineering and efficiency constraints at the cost of diminished security properties. In this work, we somewhat relax those constraints, instead considering progressively more powerful adversaries. To that end, we present 3 new security notions of increasing strength. Already the weakest of the 3 (passive security) captures attacks to which all prior CGKA constructions are vulnerable. Moreover, the 2 stronger (active security) notions even allow the adversary to use parties' exposed states combined with full network control to mount attacks. In particular, this is closely related to so-called insider attacks which involve malicious group members actively deviating from the protocol. Although insiders are of explicit interest to practical CGKA/SGM designers, our understanding of this class of attackers is still quite nascent. Indeed, we believe ours to be the first security notions in the literature to precisely formulate meaningful guarantees against (a broad class of) insiders. For each of the 3 new security notions we give a new CGKA scheme enjoying sub-linear (potentially even logarithmic) communication complexity in the number of group members (on par with the asymptotics of state-of-the-art practical constructions). We prove each scheme optimally secure, in the sense that the only security violations possible are those necessarily implied by correctness.
2020
TCC
Classical Verification of Quantum Computations with Efficient Verifier 📺
In this paper, we extend the protocol of classical verification of quantum computations (CVQC) recently proposed by Mahadev to make the verification efficient. Our result is obtained in the following three steps: - We show that parallel repetition of Mahadev's protocol has negligible soundness error. This gives the first constant-round CVQC protocol with negligible soundness error. In this part, we only assume the quantum hardness of the learning with errors (LWE) problem, as in Mahadev's work. - We construct a two-round CVQC protocol in the quantum random oracle model (QROM), where a cryptographic hash function is idealized to be a random function. This is obtained by applying the Fiat-Shamir transform to the parallel repetition version of Mahadev's protocol. - We construct a two-round CVQC protocol with an efficient verifier in the CRS+QRO model, where both prover and verifier can access a (classical) common reference string generated by a trusted third party in addition to quantum access to the QRO. Specifically, the verifier can verify a $\mathsf{QTIME}(T)$ computation in time $\mathsf{poly}(\lambda,\log T)$ where $\lambda$ is the security parameter. For proving soundness, we assume that a standard-model instantiation of our two-round protocol with a concrete hash function (say, SHA-3) is sound, and the existence of post-quantum indistinguishability obfuscation and post-quantum fully homomorphic encryption, in addition to the quantum hardness of the LWE problem.
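For intuition, the Fiat-Shamir transform used in the second step derives the verifier's challenge by hashing the transcript instead of receiving it interactively. A minimal classical sketch in Python (the hash choice and challenge length are illustrative; the paper's analysis is in the quantum random oracle model):

```python
import hashlib

def fiat_shamir_challenge(public_input: bytes, first_message: bytes,
                          num_bits: int) -> int:
    """Derive the verifier's random challenge non-interactively by
    hashing the transcript so far; in the (quantum) random oracle
    model the hash stands in for a truly random function."""
    digest = hashlib.sha3_256(public_input + first_message).digest()
    return int.from_bytes(digest, "big") % (1 << num_bits)

# The prover computes its first message, derives the challenge itself,
# and sends (first_message, response); the verifier recomputes the same
# challenge from the same hash and checks the response.
c = fiat_shamir_challenge(b"statement", b"prover-commitments", 128)
```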
2020
TCC
Blockchains from Non-Idealized Hash Functions 📺
The formalization of concrete, non-idealized hash function properties sufficient to prove the security of Bitcoin and related protocols has been elusive, as all previous security analyses of blockchain protocols have been performed in the random oracle model. In this paper we identify three such properties, and then construct a blockchain protocol whose security can be reduced to them in the standard model assuming a common reference string (CRS). The three properties are: {\em collision resistance}, {\em computational randomness extraction} and {\em iterated hardness}. While the first two properties have been extensively studied, iterated hardness has been empirically stress-tested since the rise of Bitcoin; in fact, as we demonstrate in this paper, any attack against it (assuming the other two properties hold) results in an attack against Bitcoin. In addition, iterated hardness puts forth a new class of search problems which we term {\em iterated search problems} (ISP). ISPs enable the concise and modular specification of blockchain protocols, and may be of independent interest.
2020
TCC
Lower Bounds on the Time/Memory Tradeoff of Function Inversion 📺
We study time/memory tradeoffs of function inversion: an algorithm, i.e., an inverter, equipped with an s-bit advice on a randomly chosen function f:[n]->[n] and using q oracle queries to f, tries to invert a randomly chosen output y of f (i.e., to find x such that f(x)=y). Much progress has been made on adaptive function inversion - the inverter is allowed to make adaptive oracle queries. Hellman [IEEE Transactions on Information Theory '80] presented an adaptive inverter that inverts with high probability a random f. Fiat and Naor [SICOMP '00] proved that for any s,q with s^3 q = n^3 (ignoring low-order terms), an s-advice, q-query variant of Hellman's algorithm inverts a constant fraction of the image points of any function. Yao [STOC '90] proved a lower bound of sq >= n for this problem. Closing the gap between the above lower and upper bounds is a long-standing open question. Very little is known about the non-adaptive variant of the question - the inverter chooses its queries in advance. The only known upper bounds, i.e., inverters, are the trivial ones (with s+q=n), and the only lower bound is the above bound of Yao. In a recent work, Corrigan-Gibbs and Kogan [TCC '19] partially justified the difficulty of finding lower bounds on non-adaptive inverters, showing that a lower bound on the time/memory tradeoff of non-adaptive inverters implies a lower bound on low-depth Boolean circuits; such bounds are, for a strong enough choice of parameters, notoriously hard to prove. We make progress on the above intriguing question, both for the adaptive and the non-adaptive case, proving the following lower bounds on restricted families of inverters: Linear-advice (adaptive inverter): If the advice string is a linear function of f (e.g., A*f, for some matrix A, viewing f as a vector in [n]^n), then s+q is \Omega(n). The bound generalizes to the case where the advice string of f_1 + f_2, i.e., the coordinate-wise addition of the truth tables of f_1 and f_2, can be computed from the description of f_1 and f_2 by a low-communication protocol. Affine non-adaptive decoders: If the non-adaptive inverter has an affine decoder - it outputs a linear function, determined by the advice string and the element to invert, of the query answers - then s is \Omega(n) (regardless of q). Affine non-adaptive decision trees: If the non-adaptive inverter is a d-depth affine decision tree - it outputs the evaluation of a decision tree whose nodes compute a linear function of the answers to the queries - and q < cn for some universal c>0, then s is \Omega(n/d \log n).
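The trivial non-adaptive inverters with s+q=n mentioned above are easy to picture. A toy Python sketch, measuring advice in table entries rather than bits for simplicity: preprocessing tabulates preimages for the first s domain points, and the online phase queries at most the remaining n-s points.

```python
import random

def make_advice(f, domain, s):
    """Preprocessing: remember a preimage for everything that f maps
    the first s domain points to (advice counted in table entries
    here, not bits, to keep the sketch simple)."""
    return {f(x): x for x in domain[:s]}

def invert(f, domain, s, table, y):
    """Online phase: answer from the advice if possible, otherwise
    query f on the remaining n - s points, so s + q = n overall --
    the trivial tradeoff the abstract refers to."""
    if y in table:
        return table[y]
    for x in domain[s:]:
        if f(x) == y:
            return x
    return None

n, s = 256, 64
f_tab = [random.randrange(n) for _ in range(n)]   # random f: [n] -> [n]
f = lambda x: f_tab[x]
domain = list(range(n))
table = make_advice(f, domain, s)
y = f(random.randrange(n))                        # y is guaranteed an image
assert f(invert(f, domain, s, table, y)) == y
```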
2020
TCC
Ledger Combiners for Fast Settlement 📺
Blockchain protocols based on variations of the longest-chain rule—whether following the proof-of-work paradigm or one of its alternatives—suffer from a fundamental latency barrier. This arises from the need to collect a sufficient number of blocks on top of a transaction-bearing block to guarantee the transaction’s stability while limiting the rate at which blocks can be created in order to prevent security-threatening forks. Our main result is a black-box security-amplifying combiner based on parallel composition of m blockchains that achieves \Theta(m)-fold security amplification for conflict-free transactions or, equivalently, \Theta(m)-fold reduction in latency. Our construction breaks the latency barrier to achieve, for the first time, a ledger based purely on Nakamoto longest-chain consensus guaranteeing worst-case constant-time settlement for conflict-free transactions: settlement can be accelerated to a constant multiple of block propagation time with negligible error. Operationally, our construction shows how to view any family of blockchains as a unified, virtual ledger without requiring any coordination among the chains or any new protocol metadata. Users of the system have the option to inject a transaction into a single constituent blockchain or—if they desire accelerated settlement—all of the constituent blockchains. Our presentation and proofs introduce a new formalism for reasoning about blockchains, the dynamic ledger, and articulate our constructions as transformations of dynamic ledgers that amplify security. We also illustrate the versatility of this formalism by presenting robust-combiner constructions for blockchains that can protect against complete adversarial control of a minority of a family of blockchains.
2020
TCC
Round Optimal Secure Multiparty Computation from Minimal Assumptions 📺
We construct a four round secure multiparty computation (MPC) protocol in the plain model that achieves security against any dishonest majority. The security of our protocol relies only on the existence of four round oblivious transfer. This culminates the long line of research on constructing round-efficient MPC from minimal assumptions (at least w.r.t. black-box simulation).
2020
TCC
Reusable Two-Round MPC from DDH 📺
We present a reusable two-round multi-party computation (MPC) protocol from the Decisional Diffie-Hellman (DDH) assumption. In particular, we show how to upgrade any secure two-round MPC protocol to allow reusability of its first message across multiple computations, using Homomorphic Secret Sharing (HSS) and pseudorandom functions in NC1 — each of which can be instantiated from DDH. In our construction, if the underlying two-round MPC protocol is secure against semi-honest adversaries (in the plain model) then so is our reusable two-round MPC protocol. Similarly, if the underlying two-round MPC protocol is secure against malicious adversaries (in the common random/reference string model) then so is our reusable two-round MPC protocol. Previously, such reusable two-round MPC protocols were only known under assumptions on lattices. At a technical level, we show how to upgrade any two-round MPC protocol to a first message succinct two-round MPC protocol, where the first message of the protocol is generated independently of the computed circuit (though it is not reusable). This step uses homomorphic secret sharing (HSS) and low-depth pseudorandom functions. Next, we show a generic transformation that upgrades any first message succinct two-round MPC to allow for reusability of its first message.
2020
TCC
Transparent Error Correcting in a Computationally Bounded World 📺
We construct uniquely decodable codes against channels which are computationally bounded. Our construction requires only a public-coin (transparent) setup. All prior work for such channels either required a setup with secret keys and states, could not achieve unique decoding, or achieved worse rates (for a given bound on codeword corruptions). On the other hand, our construction relies on a strong cryptographic hash function with security properties that we only instantiate in the random oracle model.
2020
TCC
On Computational Shortcuts for Information-Theoretic PIR 📺
Information-theoretic {\em private information retrieval} (PIR) schemes have attractive concrete efficiency features. However, in the standard PIR model, the computational complexity of the servers must scale linearly with the database size. We study the possibility of bypassing this limitation in the case where the database is a truth table of a ``simple'' function, such as a union of (multi-dimensional) intervals or convex shapes, a decision tree, or a DNF formula. This question is motivated by the goal of obtaining lightweight {\em homomorphic secret sharing} (HSS) schemes and secure multiparty computation (MPC) protocols for the corresponding families. We obtain both positive and negative results. For ``first-generation'' PIR schemes based on Reed-Muller codes, we obtain computational shortcuts for the above function families, with the exception of DNF formulas for which we show a (conditional) hardness result. For ``third-generation'' PIR schemes based on matching vectors, we obtain stronger hardness results that apply to all of the above families. Our positive results yield new information-theoretic HSS schemes and MPC protocols with attractive efficiency features for simple but useful function families. Our negative results establish new connections between information-theoretic cryptography and fine-grained complexity.
2020
TCC
Constant Ciphertext-Rate Non-Committing Encryption from Standard Assumptions 📺
Non-committing encryption (NCE) is a type of public key encryption which comes with the ability to equivocate ciphertexts to encryptions of arbitrary messages, i.e., it allows one to find coins for key generation and encryption which ``explain'' a given ciphertext as an encryption of any message. NCE is the cornerstone to construct adaptively secure multiparty computation [Canetti et al. STOC'96] and can be seen as the quintessential notion of security for public key encryption to realize ideal communication channels. A large body of literature investigates the best message-to-ciphertext ratio (i.e., the rate) that one can hope to achieve for NCE. In this work we propose a near-complete resolution of this question and we show how to construct NCE with constant rate in the plain model from a variety of assumptions, such as the hardness of the learning with errors (LWE), the decisional Diffie-Hellman (DDH), or the quadratic residuosity (QR) problem. Prior to our work, constructing NCE with constant rate required a trusted setup and indistinguishability obfuscation [Canetti et al. ASIACRYPT'17].
2020
TCC
Super-Linear Time-Memory Trade-Offs for Symmetric Encryption 📺
We build symmetric encryption schemes from a pseudorandom function/permutation with domain size $N$ which have very high security -- in terms of the amount of messages $q$ they can securely encrypt -- assuming the adversary has $S < N$ bits of memory. We aim to minimize the number of calls $k$ we make to the underlying primitive to achieve a certain $q$, or equivalently, to maximize the achievable $q$ for a given $k$. We target in particular $q \gg N$, in contrast to recent works (Jaeger and Tessaro, EUROCRYPT '19; Dinur, EUROCRYPT '20) which aim to beat the birthday barrier with one call when $S < \sqrt{N}$. Our first result gives new and explicit bounds for the Sample-then-Extract paradigm by Tessaro and Thiruvengadam (TCC '18). We show instantiations for which $q = \Omega((N/S)^{k})$. If $S < N^{1- \alpha}$, Tessaro and Thiruvengadam's weaker bounds only guarantee $q > N$ when $k = \Omega(\log N)$. In contrast, here, we show this is true already for $k = O(1/\alpha)$. We also consider a scheme by Bellare, Goldreich and Krawczyk (CRYPTO '99) which evaluates the primitive at $k$ independent random strings, and masks the message with the XOR of the outputs. Here, we show $q = \Omega((N/S)^{k/2})$, using new combinatorial bounds on the list-decodability of XOR codes which are of independent interest. We also study best-possible attacks against this construction.
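A rough Python sketch of the Bellare-Goldreich-Krawczyk construction discussed above, with a hash-based stand-in for the PRF and illustrative lengths (the security claims concern a real PRF and the memory-bounded setting, not this toy instantiation):

```python
import hashlib
import secrets

def prf(key: bytes, x: bytes) -> bytes:
    """Stand-in PRF; a real instantiation would use a block cipher."""
    return hashlib.sha256(key + x).digest()

def encrypt(key: bytes, message: bytes, k: int):
    """Evaluate the primitive at k independent random strings and mask
    the message with the XOR of the k outputs."""
    assert len(message) <= 32
    rs = [secrets.token_bytes(16) for _ in range(k)]
    pad = bytes(32)
    for r in rs:
        pad = bytes(a ^ b for a, b in zip(pad, prf(key, r)))
    return rs, bytes(m ^ p for m, p in zip(message, pad))

def decrypt(key: bytes, rs, ct: bytes) -> bytes:
    pad = bytes(32)
    for r in rs:
        pad = bytes(a ^ b for a, b in zip(pad, prf(key, r)))
    return bytes(c ^ p for c, p in zip(ct, pad))

key = secrets.token_bytes(16)
rs, ct = encrypt(key, b"attack at dawn", k=4)
assert decrypt(key, rs, ct) == b"attack at dawn"
```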
2020
TCC
A Secret-Sharing Based MPC Protocol for Boolean Circuits with Good Amortized Complexity 📺
We present a new secure multiparty computation protocol in the preprocessing model that allows for the evaluation of a number of instances of a boolean circuit in parallel, with a small online communication complexity per instance of $10$ bits per party and multiplication gate. Our protocol is secure against an active dishonest majority, and can also be transformed, via existing techniques, into a protocol for the evaluation of a single “well-formed” boolean circuit with the same complexity per multiplication gate at the cost of some overhead that depends on the topology of the circuit. Our protocol uses an approach introduced recently in the setting of honest majority and information-theoretical security which, using an algebraic notion called reverse multiplication friendly embeddings, essentially transforms a batch of evaluations of an arithmetic circuit over a small field into one evaluation of another arithmetic circuit over a larger field. To obtain security against a dishonest majority we combine this approach with the well-known SPDZ protocol that operates over a large field. Structurally our protocol is most similar to MiniMAC, a protocol which bases its security on the use of error-correcting codes, but our protocol has a communication complexity which is half of that of MiniMAC when the best available binary codes are used. With respect to a certain variant of MiniMAC that utilizes codes over larger fields, our communication complexity is slightly worse; however, that variant of MiniMAC needs a much larger preprocessing than ours. We also show that our protocol has smaller amortized communication complexity than Committed MPC, a protocol for general fields based on homomorphic commitments, if we use the best available constructions for those commitments. Finally, we construct a preprocessing phase from oblivious transfer based on ideas from MASCOT and Committed MPC.
2020
TCC
Asynchronous Byzantine Agreement with Subquadratic Communication 📺
Understanding the communication complexity of Byzantine agreement (BA) is a fundamental problem in distributed computing. In particular, as protocols are run with a large number of parties (as, e.g., in the context of blockchain protocols), it is important to understand the dependence of the communication on the number of parties~$n$. Although adaptively secure BA protocols with $o(n^2)$ communication are known in the synchronous and partially synchronous settings, no such protocols are known in the fully asynchronous case. We show here an asynchronous BA protocol with subquadratic communication tolerating an adaptive adversary who can corrupt $f<(1-\epsilon)n/3$ of the parties (for any $\epsilon>0$). One variant of our protocol assumes initial setup done by a trusted dealer, after which an unbounded number of BA executions can be run; alternately, we can achieve subquadratic \emph{amortized} communication with no prior setup. We also show that some form of setup is needed for (non-amortized) subquadratic BA tolerating $\Theta(n)$ corrupted parties. As a contribution of independent interest, we show a secure-computation protocol in the same threat model that has $o(n^2)$ communication when computing no-input functionalities with short output (e.g., coin tossing).
2020
TCC
Lower Bounds for Multi-Server Oblivious RAMs 📺
In this work, we consider oblivious RAMs (ORAM) in a setting with multiple servers where the adversary may corrupt a subset of the servers. We present an $\Omega(\log n)$ overhead lower bound for any k-server ORAM that limits any PPT adversary to distinguishing advantage at most $1/4k$ when only one server is corrupted. In other words, if one insists on negligible distinguishing advantage, then multi-server ORAMs cannot be faster than single-server ORAMs even with polynomially many servers of which only one unknown server is corrupted. Our results apply to ORAMs that may err with probability at most 1/128 as well as scenarios where the adversary corrupts larger subsets of servers. We also extend our lower bounds to other important data structures including oblivious stacks, queues, deques, priority queues and search trees.
2020
TCC
On the Security of Time-Lock Puzzles and Timed Commitments 📺
Time-lock puzzles—problems whose solution requires some amount of \emph{sequential} effort—have recently received increased interest (e.g., in the context of verifiable delay functions). Most constructions rely on the sequential-squaring conjecture that computing $g^{2^T} \bmod N$ for a uniform~$g$ requires at least $T$ (sequential) steps. We study the security of time-lock primitives from two perspectives: 1. We give the first hardness result about the sequential-squaring conjecture. Namely, in a quantitative version of the algebraic group model (AGM) that we call the \emph{strong} AGM, we show that any speed up of sequential squaring is as hard as factoring $N$. 2. We then focus on \emph{timed commitments}, one of the most important primitives that can be obtained from time-lock puzzles. We extend existing security definitions to settings that may arise when using timed commitments in higher-level protocols, and give the first construction of \emph{non-malleable} timed commitments. As a building block of independent interest, we also define (and give constructions for) a related primitive called \emph{timed public-key encryption}.
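The sequential-squaring puzzle at the heart of this line of work is easy to state in code. A minimal Python sketch with toy parameters (real instantiations use an RSA modulus of 2048 bits or more); the trapdoor shortcut below is exactly what a puzzle generator who knows the factorization can do:

```python
import math
import secrets

def generate_puzzle(N: int) -> int:
    """Sample a puzzle base g coprime to N; the solver must compute
    g^(2^T) mod N, conjectured to require T sequential squarings
    without the factorization of N."""
    while True:
        g = secrets.randbelow(N - 2) + 2
        if math.gcd(g, N) == 1:
            return g

def solve_puzzle(g: int, N: int, T: int) -> int:
    x = g % N
    for _ in range(T):          # T inherently sequential squarings
        x = x * x % N
    return x

def solve_with_trapdoor(g: int, N: int, phi: int, T: int) -> int:
    # Knowing phi(N) collapses the work: g^(2^T) = g^(2^T mod phi(N)) mod N.
    return pow(g, pow(2, T, phi), N)

p, q = 1000003, 1000033         # toy primes; far too small for real use
N, phi = p * q, (p - 1) * (q - 1)
g = generate_puzzle(N)
assert solve_puzzle(g, N, 1000) == solve_with_trapdoor(g, N, phi, 1000)
```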
2020
TCC
Lossiness and Entropic Hardness for Ring-LWE 📺
The hardness of the Ring Learning with Errors problem (RLWE) is a central building block for efficiency-oriented lattice-based cryptography. Many applications use an ``entropic'' variant of the problem where the so-called ``secret'' is not distributed uniformly as prescribed but instead comes from some distribution with sufficient min-entropy. However, the hardness of the entropic variant has not been substantiated thus far. For standard LWE (not over rings) entropic results are known, using a ``lossiness approach'', but it was not known how to adapt this approach to the ring setting. In this work we present the first such results, where entropic security is established either under RLWE or under the Decisional Small Polynomial Ratio (DSPR) assumption which is a mild variant of the NTRU assumption. In the context of general entropic distributions, our results in the ring setting essentially match the known lower bounds (Bolboceanu et al., Asiacrypt 2019; Brakerski and Döttling, Eurocrypt 2020).
2020
TCC
Towards Non-Interactive Witness Hiding 📺
Witness hiding proofs require that the verifier cannot find a witness after seeing a proof. The exact round complexity needed for witness hiding proofs has so far remained an open question. In this work, we provide compelling evidence that witness hiding proofs are achievable non-interactively for wide classes of languages. We use non-interactive witness indistinguishable proofs as the basis for all of our protocols. We give four schemes in different settings under different assumptions: – A universal non-interactive proof that is witness hiding as long as any proof system, possibly an inefficient and/or non-uniform scheme, is witness hiding, has a known bound on verifier runtime, and has short proofs of soundness. – A non-uniform non-interactive protocol justified under a worst-case complexity assumption that is witness hiding and efficient, but may not have short proofs of soundness. – A new security analysis of the two-message argument of Pass [Crypto 2003], showing witness hiding for any non-uniformly hard distribution. We propose a heuristic approach to removing the first message, yielding a non-interactive argument. – A witness hiding non-interactive proof system for languages with unique witnesses, assuming the non-existence of a weak form of witness encryption for any language in NP ∩ coNP.
2020
TCC
Multi-key Fully-Homomorphic Encryption in the Plain Model 📺
The notion of multi-key fully homomorphic encryption (multi-key FHE) [Lopez-Alt, Tromer, Vaikuntanathan, STOC'12] was proposed as a generalization of fully homomorphic encryption to the multiparty setting. In a multi-key FHE scheme for $n$ parties, each party can individually choose a key pair and use it to encrypt its own private input. Given n ciphertexts computed in this manner, the parties can homomorphically evaluate a circuit C over them to obtain a new ciphertext containing the output of C, which can then be decrypted via a decryption protocol. The key efficiency property is that the size of the (evaluated) ciphertext is independent of the size of the circuit. Multi-key FHE with one-round decryption [Mukherjee and Wichs, Eurocrypt'16], has found several powerful applications in cryptography over the past few years. However, an important drawback of all such known schemes is that they require a trusted setup. In this work, we address the problem of constructing multi-key FHE in the plain model. We obtain the following results: - A multi-key FHE scheme with one-round decryption based on the hardness of learning with errors (LWE), ring LWE, and decisional small polynomial ratio (DSPR) problems. - A variant of multi-key FHE where we relax the decryption algorithm to be non-compact -- i.e., where the decryption complexity can depend on the size of C -- based on the hardness of LWE. We call this variant multi-homomorphic encryption (MHE). We observe that MHE is already sufficient for some of the applications of multi-key FHE.
2020
TCC
Expected-Time Cryptography: Generic Techniques and Applications to Concrete Soundness 📺
This paper studies concrete security with respect to expected-time adversaries. Our first contribution is a set of generic tools to obtain tight bounds on the advantage of an adversary with expected-time guarantees. We apply these tools to derive bounds in the random-oracle and generic-group models, which we show to be tight. As our second contribution, we use these results to derive concrete bounds on the soundness of public-coin proofs and arguments of knowledge. Under the lens of concrete security, we revisit a paradigm by Bootle et al. (EUROCRYPT '16) that proposes a general Forking Lemma for multi-round protocols which implements a rewinding strategy with expected-time guarantees. We give a tighter analysis, as well as a modular statement. We adapt this to obtain the first quantitative bounds on the soundness of Bulletproofs (Bünz et al., S&P 2018), which we instantiate with our expected-time generic-group analysis to surface inherent dependence between the concrete security and the statement to be proved.
2020
TCC
FHE-Based Bootstrapping of Designated-Prover NIZK 📺
We present a novel tree-based technique that can convert any designated-prover NIZK proof system (DP-NIZK) which maintains zero-knowledge only for a single statement into one that allows proving an unlimited number of statements in ZK, while keeping all parameters succinct. Our transformation requires leveled fully-homomorphic encryption. We note that single-statement DP-NIZK can be constructed from any one-way function. We also observe a two-way derivation between DP-NIZK and attribute-based signatures (ABS), and as a result derive new constructions of ABS and homomorphic signatures (HS). Our construction improves upon the prior construction of lattice-based DP-NIZK by Kim and Wu (Crypto 2018) since we only require leveled FHE as opposed to HS (which also translates to improved LWE parameters when instantiated). Alternatively, the recent construction of NIZK without preprocessing from either circular-secure FHE (Canetti et al., STOC 2019) or polynomial Learning with Errors (Peikert and Shiehian, Crypto 2019) could be used to obtain a similar final statement. Nevertheless, we note that our statement is formally incomparable to these works (since leveled FHE is not known to imply circular secure FHE or the hardness of LWE). We view this as evidence for the potential in our technique, which we hope can find additional applications in future works.
2020
TCC
Secure Massively Parallel Computation for Dishonest Majority 📺
This work concerns secure protocols in the massively parallel computation (MPC) model, which is one of the most widely-accepted models for capturing the challenges of writing protocols for the types of parallel computing clusters which have become commonplace today (MapReduce, Hadoop, Spark, etc.). Recently, the work of Chan et al. (ITCS ’20) initiated this study, giving a way to compile any MPC protocol into a secure one in the common random string model, achieving the standard secure multi-party computation definition of security with up to 1/3 of the parties being corrupt. We are interested in achieving security for much more than 1/3 corruptions. To that end, we give two compilers for MPC protocols, which assume a simple public-key infrastructure, and achieve semi-honest security for all-but-one corruptions. Our first compiler assumes hardness of the learning-with-errors (LWE) problem, and works for any MPC protocol with “short” output—that is, where the output of the protocol can fit into the storage space of one machine, for instance protocols that output a trained machine learning model. Our second compiler works for any MPC protocol (even ones with a long output, such as sorting) but assumes, in addition to LWE, indistinguishability obfuscation and a circular secure variant of threshold FHE.
2020
TCC
Batch Verification and Proofs of Proximity with Polylog Overhead 📺
Suppose Alice wants to convince Bob of the correctness of k NP statements. Alice could send k witnesses to Bob, but as k grows the communication becomes prohibitive. Is it possible to convince Bob using smaller communication (without making cryptographic assumptions or bounding the computational power of a malicious Alice)? This is the question of batch verification for NP statements. Our main result is a new interactive proof protocol for verifying the correctness of k UP statements (NP statements with a unique witness) using communication that is poly-logarithmic in k (and a fixed polynomial in the length of a single witness). This result is obtained by making progress on a different question in the study of interactive proofs. Suppose Alice wants to convince Bob that a huge dataset has some property. Can this be done if Bob can't even read the entire input? In other words, what properties can be verified in sublinear time? An Interactive Proof of Proximity guarantees that Bob accepts if the input has the property, and rejects if the input is far (say in Hamming distance) from having the property. Two central complexity measures of such a protocol are the query and communication complexities (which should both be sublinear). For every query parameter $q$, and for every language in logspace uniform NC, we construct an interactive proof of proximity with query complexity $q$ and communication complexity $(n/q) \cdot \mathrm{polylog}(n)$. Both results are optimal up to poly-logarithmic factors, under reasonable complexity-theoretic or cryptographic assumptions. The second result, which is our main technical contribution, builds on a distance amplification technique introduced in a beautiful recent work of Ben-Sasson, Kopparty and Saraf [CCC 2018].
2020
TCC
Towards Multiparty Computation Withstanding Coercion of All Parties 📺
Incoercible multi-party computation [Canetti-Gennaro ’96] allows parties to engage in secure computation with the additional guarantee that the public transcript of the computation cannot be used by a coercive external entity to verify representations made by the parties regarding their inputs to and outputs from the computation. That is, any deductions regarding the truthfulness of such representations made by the parties could be made even without access to the public transcript. To date, all incoercible secure computation protocols withstand coercion of only a fraction of the parties, or else assume that all parties use an execution environment that makes some crucial parts of their local states physically inaccessible even to themselves. We consider, for the first time, the setting where all parties are coerced, and the coercer expects to see the entire history of the computation. In this setting we construct: - A general multi-party computation protocol that withstands coercion of all parties, as long as none of the coerced parties cooperates with the coercer, namely they all use the prescribed ``faking algorithm'' upon coercion. We refer to this case as cooperative incoercibility. The protocol uses deniable encryption and indistinguishability obfuscation, and takes 4 rounds of communication. - A general two-party computation protocol that withstands even the ``mixed'' case where some of the coerced parties cooperate with the coercer and disclose their true local states. This protocol is limited to computing functions where the input of one of the parties is taken from a small (poly-size) domain. This protocol uses deniable encryption with public deniability for one of the parties; when instantiated using the deniable encryption of Canetti, Park, and Poburinnaya [Crypto'20], it takes 3 rounds of communication. Finally, we show that protocols with a certain communication pattern cannot be incoercible, even in a weaker setting where only some parties are coerced.
2020
TCC
Mr NISC: Multiparty Reusable Non-Interactive Secure Computation 📺
Reducing interaction in Multiparty Computation (MPC) is a highly desirable goal in cryptography. It is known that 2-round MPC can be based on the minimal assumption of 2-round Oblivious Transfer (OT) [Benhamouda and Lin, Garg and Srinivasan, EC 2018], and 1-round MPC is impossible in general. In this work, we propose a natural ``hybrid'' model, called \emph{multiparty reusable Non-Interactive Secure Computation (mrNISC)}. In this model, parties publish encodings of their private inputs $x_i$ on a public bulletin board, once and for all. Later, any subset $I$ of them can compute \emph{on-the-fly} a function $f$ on their inputs $\vec x_I = {\{x_i\}}_{i \in I}$ by just sending a single message to a stateless evaluator, conveying the result $f(\vec x_I)$ and nothing else. Importantly, the input encodings can be \emph{reused} in any number of on-the-fly computations, and the same classical simulation security guaranteed by multi-round MPC, is achieved. In short, mrNISC has a minimal yet ``tractable'' interaction pattern. We initiate the study of mrNISC on several fronts. First, we formalize the model of mrNISC protocols, and present both a UC security definition and a game-based security definition. Second, we construct mrNISC protocols in the plain model with semi-honest and semi-malicious security based on pairing groups. Third, we demonstrate the power of mrNISC by showing two applications: non-interactive MPC (NIMPC) with reusable setup and a distributed version of program obfuscation. At the core of our construction of mrNISC is a witness encryption scheme for a special language that verifies Non-Interactive Zero-Knowledge (NIZK) proofs of the validity of computations over committed values, which is of independent interest.
2020
TCC
Security analysis of SPAKE2+ 📺
We show that a slight variant of Protocol SPAKE2+, which was presented but not analyzed in [Cash, Kiltz, Shoup 2008], is a secure *asymmetric* password-authenticated key exchange protocol (PAKE), meaning that the protocol still provides good security guarantees even if a server is compromised and the password file stored on the server is leaked to an adversary. The analysis is done in the UC framework (i.e., a simulation-based security model), under the computational Diffie-Hellman (CDH) assumption, and modeling certain hash functions as random oracles. The main difference between our variant and the original Protocol SPAKE2+ is that our variant includes standard key confirmation flows; also, adding these flows allows some slight simplification to the remainder of the protocol. Along the way, we also (i) provide the first proof (under the same assumptions) that a slight variant of Protocol SPAKE2 from [Abdalla, Pointcheval 2005] is a secure *symmetric* PAKE in the UC framework (previous security proofs were all in the weaker BPR framework [Bellare, Pointcheval, Rogaway 2000]); (ii) provide a proof (under very similar assumptions) that a variant of Protocol SPAKE2+ that is currently being standardized is also a secure asymmetric PAKE; (iii) repair several problems in earlier UC formulations of secure symmetric and asymmetric PAKE.
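A toy Python sketch of the symmetric SPAKE2 message flow analyzed here, over a small Schnorr group (M and N must be fixed group elements with unknown discrete logs; real deployments use standardized elliptic-curve groups and add the key-confirmation flows the paper argues for, which this sketch omits):

```python
import hashlib
import secrets

p, q, g = 2039, 1019, 4              # toy Schnorr group; 4 has order 1019
M, N = pow(g, 7, p), pow(g, 11, p)   # illustrative only: dlogs must be unknown

def blind(base_elt: int, pw: int):
    """Send X* = g^x * base^pw, hiding g^x under the password."""
    x = secrets.randbelow(q)
    return x, pow(g, x, p) * pow(base_elt, pw % q, p) % p

def shared_key(peer_msg: int, peer_base: int, pw: int, x: int) -> int:
    """Strip the password blinding from the peer's message, then raise
    the recovered g^y to the own secret x, yielding g^(x*y)."""
    peer_pub = peer_msg * pow(peer_base, (-pw) % q, p) % p
    return pow(peer_pub, x, p)

pw = 42                              # password mapped into an exponent
xa, msg_a = blind(M, pw)             # A -> B
xb, msg_b = blind(N, pw)             # B -> A
k_a = shared_key(msg_b, N, pw, xa)
k_b = shared_key(msg_a, M, pw, xb)
assert k_a == k_b                    # both sides hold g^(xa*xb)
session_key = hashlib.sha256(repr((msg_a, msg_b, pw, k_a)).encode()).digest()
```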
2020
TCC
The Resiliency of MPC with Low Interaction: The Benefit of Making Errors 📺
We study information-theoretic secure multiparty protocols that achieve full security, including guaranteed output delivery, in the presence of an active adversary that corrupts a constant fraction of the parties. It is known that 2 rounds are insufficient for such protocols even when the adversary corrupts only two parties (Gennaro, Ishai, Kushilevitz, and Rabin; Crypto 2002), and that perfect protocols can be implemented in three rounds as long as the adversary corrupts less than a quarter of the parties (Applebaum, Brakerski, and Tsabary; Eurocrypt, 2019). Furthermore, it was recently shown that the quarter threshold is tight for any 3-round \emph{perfectly-secure} protocol (Applebaum, Kachlon, and Patra; FOCS 2020). Nevertheless, one may still hope to achieve a better-than-quarter threshold at the expense of allowing some negligible correctness errors and/or statistical deviations in the security. Our main results show that this is indeed the case. Every function can be computed by 3-round protocols with \emph{statistical} security as long as the adversary corrupts less than a third of the parties. Moreover, we show that any better resiliency threshold requires four rounds. Our protocol is computationally inefficient and has an exponential dependency on the circuit's depth $d$ and on the number of parties $n$. We show that this overhead can be avoided by relaxing security to computational, assuming the existence of a non-interactive commitment (NICOM). Previous 3-round computational protocols were based on stronger public-key assumptions. When instantiated with statistically-hiding NICOM, our protocol provides \emph{everlasting statistical} security, i.e., it is secure against adversaries that are computationally unlimited \emph{after} the protocol execution. To prove these results, we introduce a new hybrid model that allows for 2-round protocols with a linear resiliency threshold. Here too we prove that, for perfect protocols, the best achievable resiliency is $n/4$, whereas statistical protocols can achieve a threshold of $n/3$. We also construct the first 2-round $n/3$-statistical verifiable secret sharing that supports second-level sharing and prove a matching lower bound, extending the results of Patra, Choudhary, Rabin, and Rangan (Crypto 2009). Overall, our results refine the differences between statistical and perfect models of security, and show that there are efficiency gaps even in the regime of realizable thresholds.
2020
TCC
Topology-Hiding Communication from Minimal Assumptions. 📺
Topology-hiding broadcast (THB) enables parties communicating over an incomplete network to broadcast messages while hiding the topology from within a given class of graphs. THB is a central tool underlying general topology-hiding secure computation (THC) (Moran et al. TCC’15). Although broadcast is a privacy-free task, it was recently shown that THB for certain graph classes necessitates computational assumptions, even in the semi-honest setting, and even given a single corrupted party. In this work we investigate the minimal assumptions required for topology-hiding communication—both Broadcast or Anonymous Broadcast (where the broadcaster’s identity is hidden). We develop new techniques that yield a variety of necessary and sufficient conditions for the feasibility of THB/THAB in different cryptographic settings: information theoretic, given existence of key agreement, and given existence of oblivious transfer. Our results show that feasibility can depend on various properties of the graph class, such as connectivity, and highlight the role of different properties of topology when kept hidden, including direction, distance, and/or distance-of-neighbors to the broadcaster. An interesting corollary of our results is a dichotomy for THC with a public number of at least three parties, secure against one corruption: information-theoretic feasibility if all graphs are 2-connected; necessity and sufficiency of key agreement otherwise.
2020
TCC
On the Round Complexity of the Shuffle Model 📺
The shuffle model of differential privacy [Bittau et al. SOSP 2017; Erlingsson et al. SODA 2019; Cheu et al. EUROCRYPT 2019] was proposed as a viable model for performing distributed differentially private computations. Informally, the model consists of an untrusted analyzer that receives messages sent by participating parties via a shuffle functionality, which potentially disassociates messages from their senders. Prior work focused on one-round differentially private shuffle model protocols, demonstrating that functionalities such as addition and histograms can be performed in this model with accuracy levels similar to that of the curator model of differential privacy, where the computation is performed by a fully trusted party. A model closely related to the shuffle model was presented in the seminal work of Ishai et al. on establishing cryptography from anonymous communication [FOCS 2006]. Focusing on the round complexity of the shuffle model, we ask in this work what can be computed in the shuffle model of differential privacy with two rounds. Ishai et al. showed how to use one round of the shuffle to establish secret keys between every two parties. Using this primitive to simulate a general secure multi-party protocol increases its round complexity by one. We show how two parties can use one round of the shuffle to send secret messages without having to first establish a secret key, hence retaining round complexity. Combining this primitive with the two-round semi-honest protocol of Applebaum, Brakerski, and Tsabary [TCC 2018], we obtain that every randomized functionality can be computed in the shuffle model with an honest majority, in merely two rounds. This includes any differentially private computation. We hence move to examine differentially private computations in the shuffle model that (i) do not require the assumption of an honest majority, or (ii) do not admit one-round protocols, even with an honest majority. For that, we introduce two computational tasks: common element, and nested common element with parameter $\alpha$. For the common element problem we show that for large enough input domains, no one-round differentially private shuffle protocol exists with constant message complexity and negligible $\delta$, whereas a two-round protocol exists where every party sends a single message in every round. For the nested common element we show that no one-round differentially private protocol exists for this problem with adversarial coalition size $\alpha n$. However, we show that it can be privately computed in two rounds against coalitions of size $cn$ for every $c < 1$. This yields a separation between one-round and two-round protocols. We further show a one-round protocol for the nested common element problem that is differentially private with coalitions of size smaller than $c n$ for all $0 < c < \alpha < 1 / 2$.
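The shuffle functionality itself is tiny to describe in code. A toy Python sketch, with an honest in-memory shuffler standing in for the trusted primitive:

```python
import random

def shuffle_functionality(messages_by_party):
    """Collect every party's messages and output them in a random
    order, disassociating messages from their senders -- the core
    guarantee the shuffle model relies on."""
    pool = [m for msgs in messages_by_party for m in msgs]
    random.shuffle(pool)   # illustrative; a real deployment needs a secure shuffler
    return pool

# Example: three parties each submit one randomized-response bit.
submissions = [[("bit", random.random() < 0.5)] for _ in range(3)]
print(shuffle_functionality(submissions))
```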
2020
TCC
On Average-Case Hardness in TFNP from One-Way Functions 📺
The complexity class TFNP consists of all NP search problems that are total in the sense that a solution is guaranteed to exist for all instances. Over the years, this class has proved to illuminate surprising connections among several diverse subfields of mathematics like combinatorics, computational topology, and algorithmic game theory. More recently, we are starting to better understand its interplay with cryptography. We know that certain cryptographic primitives (e.g. one-way permutations, collision-resistant hash functions, or indistinguishability obfuscation) imply average-case hardness in TFNP and its important subclasses. However, its relationship with the most basic cryptographic primitive -- i.e., one-way functions (OWFs) -- still remains unresolved. Under an additional complexity theoretic assumption, OWFs imply hardness in TFNP (Hubáček, Naor, and Yogev, ITCS 2017). It is also known that average-case hardness in most structured subclasses of TFNP does not imply any form of cryptographic hardness in a black-box way (Rosen, Segev, and Shahaf, TCC 2017) and, thus, one-way functions might be sufficient. Specifically, no negative result which would rule out basing average-case hardness in TFNP \emph{solely} on OWFs is currently known. In this work, we further explore the interplay between TFNP and OWFs and give the first negative results. As our main result, we show that there cannot exist constructions of average-case (and, in fact, even worst-case) hard TFNP problems from OWFs with a certain type of simple black-box security reductions. The class of reductions we rule out is, however, rich enough to capture many of the currently known cryptographic hardness results for TFNP. Our results are established using the framework of black-box separations (Impagliazzo and Rudich, STOC 1989) and involve a novel application of the reconstruction paradigm (Gennaro and Trevisan, FOCS 2000).
2020
TCC
Perfect Zero Knowledge: New Upperbounds and Relativized Separations 📺
We investigate the complexity of problems that admit perfect zero-knowledge interactive protocols and establish new unconditional upper bounds and oracle separation results. We establish our results by investigating certain distribution testing problems: computational problems over high-dimensional distributions represented by succinct Boolean circuits. A relatively less-investigated complexity class SBP emerged as significant in this study. The main results we establish are: (1) An unconditional inclusion: NIPZK ⊆ coSBP. (2) Construction of a relativized world in which there is a distribution testing problem that lies in NIPZK but not in SBP, thus giving a relativized separation of NIPZK (and hence PZK) from SBP. (3) Construction of a relativized world in which there is a distribution testing problem that lies in PZK but not in coSBP, thus giving a relativized separation of PZK from coSBP. Results (1) and (3) imply an oracle separating PZK from NIPZK. Our results refine the landscape of perfect zero-knowledge classes in relation to traditional complexity classes.
2020
TCC
Information-Theoretic 2-Round MPC without Round Collapsing: Adaptive Security, and More 📺
We present simpler and improved constructions of 2-round protocols for secure multi-party computation (MPC) in the semi-honest setting. Our main results are new information-theoretically secure protocols for arithmetic NC1 in two settings: (i) the plain model tolerating up to $t < n/2$ corruptions; and (ii) in the OLE-correlation model tolerating any number of corruptions. Our protocols achieve adaptive security and require only black-box access to the underlying field, whereas previous results only achieve static security and require non-black-box field access. Moreover, both results extend to polynomial-size circuits with computational and adaptive security, while relying on black-box access to a pseudorandom generator. In the OLE correlation model, the extended protocols for circuits tolerate up to $n-1$ corruptions. Along the way, we introduce a conceptually novel framework for 2-round MPC that does not rely on the round collapsing framework underlying all of the recent advances in 2-round MPC.
2020
TCC
Public-Coin Zero-Knowledge Arguments with (almost) Minimal Time and Space Overheads 📺
Zero-knowledge protocols enable the truth of a mathematical statement to be certified by a verifier without revealing any other information. Such protocols are a cornerstone of modern cryptography and recently are becoming more and more practical. However, a major bottleneck in deployment is the efficiency of the prover and, in particular, the space-efficiency of the protocol. For every $\mathsf{NP}$ relation that can be verified in time $T$ and space $S$, we construct a public-coin zero-knowledge argument in which the prover runs in time $T \cdot \mathrm{polylog}(T)$ and space $S \cdot \mathrm{polylog}(T)$. Our proofs have length $\mathrm{polylog}(T)$ and the verifier runs in time $T \cdot \mathrm{polylog}(T)$ (and space $\mathrm{polylog}(T)$). Our scheme is in the random oracle model and relies on the hardness of discrete log in prime-order groups. Our main technical contribution is a new space efficient \emph{polynomial commitment scheme} for multi-linear polynomials. Recall that in such a scheme, a sender commits to a given multi-linear polynomial $P:\mathbb{F}^n \to \mathbb{F}$ so that later on it can prove to a receiver statements of the form ``$P(x)=y$''. In our scheme, which builds on commitments schemes of Bootle et al. (Eurocrypt 2016) and B{\"u}nz et al. (S\&P 2018), we assume that the sender is given multi-pass streaming access to the evaluations of $P$ on the Boolean hypercube and we show how to implement both the sender and receiver in roughly time $2^n$ and space $n$ and with communication complexity roughly $n$.
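The streaming access pattern assumed of the sender is concrete enough to sketch. A minimal Python illustration (the field and parameters are chosen for readability; this is not the paper's commitment scheme itself): evaluating a multilinear polynomial at a point from one pass over its $2^n$ hypercube evaluations, using only $O(n)$ field elements of working space.

```python
P_MOD = 2**61 - 1   # a Mersenne prime field, illustrative choice

def eq(b_bits, x):
    """chi_b(x) = prod_i (x_i if b_i else 1 - x_i) over the field."""
    acc = 1
    for b_i, x_i in zip(b_bits, x):
        acc = acc * (x_i if b_i else (1 - x_i)) % P_MOD
    return acc

def eval_multilinear(stream, x):
    """stream yields the evaluations P(b) for b = 0, ..., 2^n - 1;
    the running sum and the n bits of b are all the state we keep."""
    n = len(x)
    total = 0
    for b, p_b in enumerate(stream):
        b_bits = [(b >> i) & 1 for i in range(n)]
        total = (total + p_b * eq(b_bits, x)) % P_MOD
    return total

# Usage: the multilinear extension of f(b0, b1) = b0 + 2*b1, evaluated
# off the hypercube at x = (3, 5); low bit b0 varies first in the stream.
evals = [0, 1, 2, 3]
print(eval_multilinear(iter(evals), [3, 5]))   # -> 13
```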
2020
TCC
Universal Composition with Global Subroutines: Capturing Global Setup within plain UC 📺
The Global and Externalized UC frameworks [Canetti-Dodis-Pass-Walfish, TCC 07] extend the plain UC framework to additionally handle protocols that use a ``global setup'', namely a mechanism that is also used by entities outside the protocol. These frameworks have broad applicability: Examples include public-key infrastructures, common reference strings, shared synchronization mechanisms, global blockchains, or even abstractions such as the random oracle. However, the need to work in a specialized framework has been a source of confusion, incompatibility, and an impediment to broader use. We show how security in the presence of a global setup can be captured within the plain UC framework, thus significantly simplifying the treatment. This is done as follows: - We extend UC-emulation to the case where both the emulating protocol $\pi$ and the emulated protocol $\phi$ make subroutine calls to protocol $\gamma$ that is accessible also outside $\pi$ and $\phi$. As usual, this notion considers only a single instance of $\phi$ or $\pi$ (alongside $\gamma$). - We extend the UC theorem to hold even with respect to the new notion of UC emulation. That is, we show that if $\pi$ UC-emulates $\phi$ in the presence of $\gamma$, then $\rho^{\phi\rightarrow\pi}$ UC-emulates $\rho$ for any protocol $\rho$, even when $\rho$ uses $\gamma$ directly, and in addition calls many instances of $\phi$, all of which use the same instance of $\gamma$. We prove this extension using the existing UC theorem as a black box, thus further simplifying the treatment. We also exemplify how our treatment can be used to streamline, within the plain UC model, proofs of security of systems that involve global set-up, thus providing greater simplicity and flexibility.
2020
TCHES
A Hybrid-CPU-FPGA-based Solution to the Recovery of Sha256crypt-hashed Passwords 📺
This paper presents an accelerator design for the password recovery of sha256crypt based on hybrid CPU-FPGA devices. By applying the brute-force attack computation model proposed in this paper, we decompose the sha256crypt function into two types of operations, namely data dispatching and block transforming. The data dispatching operation generates message blocks and the block transforming operation transforms message blocks into digests. These two operations are efficiently accelerated by the customized data dispatch unit and the pipelined block transform unit, respectively. Difficulties in adopting the pipeline technique are addressed with the following techniques. Group scheduling is used to resolve the data dependencies that stall the pipeline. Look-ahead execution eliminates the uncertainty of the execution path. Data path pruning and spatial-temporal multiplexing reduce the resource overhead of non-computing units. The proposed accelerator design is implemented and evaluated on the Xilinx Zynq-7000 XC7Z030-3 SoC. Our experimental results show that the proposed accelerator can improve energy efficiency by 2.54x over the state-of-the-art password recovery tool Hashcat running on an NVIDIA GTX1080Ti GPU. Compared with the pure FPGA-based implementation in John-the-Ripper, the proposed accelerator improves energy efficiency by 1.64x and improves resource efficiency by 1.69x.
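For orientation, a minimal CPU-only sketch of the brute-force model described above, using Python's Unix-only crypt module (deprecated in recent Python versions); the block-transform stage dominating this loop is what the proposed FPGA pipeline accelerates:

```python
import crypt       # Unix-only standard-library module
import itertools
import string

def recover(target_hash: str, salt_spec: str, max_len: int = 4):
    """Dispatch candidate passwords to the sha256crypt transform and
    compare against the leaked hash.  Illustrative only: sha256crypt's
    default 5000 rounds make each call deliberately slow on a CPU."""
    alphabet = string.ascii_lowercase
    for length in range(1, max_len + 1):
        for candidate in itertools.product(alphabet, repeat=length):
            guess = "".join(candidate)
            if crypt.crypt(guess, salt_spec) == target_hash:
                return guess
    return None

salt_spec = "$5$saltsalt"                  # the "$5$" prefix selects sha256crypt
target = crypt.crypt("abc", salt_spec)     # stand-in for a leaked hash
print(recover(target, salt_spec))          # -> "abc"
```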
2020
TCHES
On the spectral features of robust probing security 📺
In this work we provide a spectral formalization of non-interference in the presence of glitches. Our goal is to present new theoretical and practical tools to reason about robust-d-probing security. We show that the current understanding of extended probes lends itself to probes that participate, during gadget composition, in the creation of additional extended probes. In turn, this enables a natural extension of non-interference definitions into robust ones to build a new reasoning framework that can formally explain some semi-formal results that appeared in the past and be used to synthesize new robust-d-SNI gadgets.
2020
TCHES
Plaintext: A Missing Feature for Enhancing the Power of Deep Learning in Side-Channel Analysis? Breaking multiple layers of side-channel countermeasures 📺
Deep learning (DL) has proven to be very effective for image recognition tasks, with a large body of research on various model architectures for object classification. Straightforward application of DL to side-channel analysis (SCA) has already shown promising success, with experimentation on open-source variable-key datasets showing that secret keys can be revealed with hundreds of traces even in the presence of countermeasures. This paper aims to further improve the application of DL to SCA by enhancing its power when targeting the secret key of cryptographic algorithms protected with SCA countermeasures. We propose a new model, a CNN-based model with Plaintext feature extension (CNNP), together with multiple convolutional filter kernel sizes and deeper and narrower network structures, which has empirically proven its effectiveness by outperforming reference profiling attack methods such as template attacks (TAs), convolutional neural networks (CNNs) and multilayer perceptron (MLP) models. Our model generates state-of-the-art results when attacking the ASCAD variable-key database, which has a restricted number of training traces per key, recovering the key within 40 attack traces in comparison with the hundreds of traces required by straightforward machine learning (ML) application. During the profiling stage an attacker needs no additional knowledge of the implementation, such as the masking scheme or random mask values, only the ability to record the power consumption or electromagnetic field traces, plaintext/ciphertext and the key. Additionally, no heuristic pre-processing is required in order to break the high-order masking countermeasures of the target implementation.
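A schematic PyTorch sketch of the plaintext-feature-extension idea: convolutional features extracted from the trace are concatenated with a one-hot encoding of the plaintext byte before the dense classifier. Layer sizes and the encoding are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CNNP(nn.Module):
    def __init__(self, trace_len=700, n_classes=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=11, padding=5), nn.ReLU(),
            nn.AvgPool1d(2),
            nn.Conv1d(8, 16, kernel_size=11, padding=5), nn.ReLU(),
            nn.AvgPool1d(2),
            nn.Flatten(),
        )
        feat_dim = 16 * (trace_len // 4)
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim + 256, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, trace, plaintext_byte):
        f = self.features(trace)                           # (B, feat_dim)
        pt = nn.functional.one_hot(plaintext_byte, 256).float()
        return self.classifier(torch.cat([f, pt], dim=1))  # extended features

model = CNNP()
traces = torch.randn(4, 1, 700)           # batch of 4 power traces
pts = torch.randint(0, 256, (4,))         # corresponding plaintext bytes
logits = model(traces, pts)               # (4, 256) scores over key guesses
```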
2020
TCHES
Rejection Sampling Schemes for Extracting Uniform Distribution from Biased PUFs 📺
This paper presents an efficient fuzzy extractor (FE) construction for secure cryptographic key generation from physically unclonable functions (PUFs). The proposed FE, named acceptance-or-rejection (AR)-based FE, utilizes a new debiasing scheme to extract a uniform distribution from a biased PUF response. The proposed debiasing scheme employs the principle of rejection sampling, and can extract a longer debiased bit string compared to those of conventional debiasing schemes. In addition, the proposed AR-based FE is extended to ternary PUF responses (i.e., ternary encoding of a PUF response). These responses can be derived according to cell-wise reliability of the PUF and are promising for extraction of stable and high-entropy responses from common PUFs. The performance of the AR-based FEs is evaluated through an experimental simulation of PUF-based key generation and compared with conventional FEs. We confirm that the proposed AR-based FE can achieve the highest efficiency in terms of PUF and nonvolatile memory (NVM) sizes for various PUF conditions among the conventional counterparts. More precisely, the AR-based FE can realize a 128-bit key generation with up to 55% smaller PUF size or up to 72% smaller NVM size than other conventional FEs. In addition, the ternary AR-based FE is up to 55% more efficient than the binary version, and can also achieve up to 63% higher efficiency than conventional counterparts. Furthermore, we show that the AR-based FE can be applied to PUFs with local biases (e.g., biases depending on cell location in SRAM PUFs), unlike all the conventional schemes, for which only global (or identical) biases are assumed.
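The rejection-sampling principle at work here is the classical von Neumann trick; a minimal Python sketch for intuition (the paper's AR scheme applies the same principle but extracts a longer debiased string and extends to ternary responses):

```python
import random

def von_neumann_debias(bits):
    """Examine disjoint bit pairs from a biased source; accept 01 -> 0
    and 10 -> 1, reject 00 and 11.  Pr[01] == Pr[10] for any fixed
    bias, so the accepted bits are uniform."""
    out = []
    for b0, b1 in zip(bits[0::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)
    return out

biased = [1 if random.random() < 0.7 else 0 for _ in range(10_000)]
unbiased = von_neumann_debias(biased)
print(len(unbiased), sum(unbiased) / len(unbiased))   # mean close to 0.5
```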
2020
TCHES
Second-Order Masked Lookup Table Compression Scheme 📺
Masking by lookup table randomisation is a well-known technique used to achieve side-channel attack resistance for software implementations, particularly against DPA attacks. The randomised table technique for first- and second-order security requires about m·2^n bits of RAM to store an (n,m)-bit masked S-box lookup table. Table compression helps in reducing the amount of memory required, and this is useful for highly resource-constrained IoT devices. Recently, Vadnala (CT-RSA 2017) proposed a randomised table compression scheme for first- and second-order security in the probing leakage model. This scheme reduces the RAM memory required by about a factor of 2^l, where l is a compression parameter. Vivek (Indocrypt 2017) demonstrated an attack against the second-order scheme of Vadnala. Hence achieving table compression at second and higher orders is an open problem. In this work, we propose a second-order secure randomised table compression scheme which works for any (n,m)-bit S-box. Our proposal is a variant of Vadnala’s scheme that is not only secure but also significantly improves the time-memory trade-off. Specifically, we improve the online execution time by a factor of 2^(n−l). Our proposed scheme is proved 2-SNI secure in the probing leakage model. We have implemented our method for AES-128 on a 32-bit ARM Cortex processor. We are able to reduce the memory required to store a randomised S-box table for a second-order AES-128 implementation to 59 bytes.
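For readers unfamiliar with the underlying countermeasure, the sketch below shows the basic uncompressed first-order randomised table, whose m·2^n-bit memory cost is exactly what compression targets. It is a toy illustration of table randomisation in general, not Vadnala's compressed scheme or the new second-order variant.

```python
import secrets

def masked_table(sbox, r_in, r_out):
    """First-order randomised S-box table T'[x] = S[x ^ r_in] ^ r_out:
    a lookup on the masked input directly yields the masked output,
    so the unmasked value never appears during execution."""
    return [sbox[x ^ r_in] ^ r_out for x in range(len(sbox))]

sbox = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # toy 4-bit S-box
r_in, r_out = secrets.randbelow(16), secrets.randbelow(16)            # fresh masks
T = masked_table(sbox, r_in, r_out)                                   # m * 2^n bits of RAM

x = 0x7
assert T[x ^ r_in] == sbox[x] ^ r_out  # masked lookup is consistent
```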
2020
TCHES
Improving the Performance of the Picnic Signature Scheme 📺
Picnic is a digital signature algorithm designed to provide security against attacks by quantum computers. The design uses only symmetric-key primitives, and is an efficient instantiation of the MPC-in-the-head paradigm. In this work, we explore the Picnic design in great detail. We investigate and benchmark different parameter choices and show that there exist better parameter choices than those in the current specification. We also present improvements to the MPC protocol that shorten signatures and reduce signing time. The proposed MPC changes tailor the protocol to the circuit of interest in Picnic, but may also be of independent interest. Taken together, these changes give a new instantiation of Picnic that signs messages 7.9 to 13.9 times faster, and verifies signatures 4.5 to 5.5 times faster than the existing “Picnic2” design, while having nearly the same signature sizes.
2020
TCHES
Faster Montgomery and double-add ladders for short Weierstrass curves 📺
The Montgomery ladder and Joye ladder are well-known algorithms for elliptic curve scalar multiplication with a regular structure. The Montgomery ladder is best known for its implementation on Montgomery curves, which requires 5M + 4S + 1m + 8A per scalar bit, and 6 field registers. Here (M, S, m, A) represent respectively field Multiplications, Squarings, multiplications by a curve constant, and Additions or subtractions. This ladder is also complete, meaning that it works on all input points and all scalars. Many protocols do not use Montgomery curves, but instead use prime-order curves in short Weierstrass form. These have historically been much slower, with ladders costing at least 14 multiplications or squarings per bit: 8M + 6S + 27A for the Montgomery ladder and 8M + 6S + 30A for the Joye ladder. In 2017, Kim et al. improved the Montgomery ladder to 8M + 4S + 12A + 1H per bit using 9 registers, where the H represents a halving. Hamburg simplified Kim et al.’s formulas to 8M + 4S + 8A + 1H per bit using 6 registers. Here we present improved formulas which compute the Montgomery ladder on short Weierstrass curves using 8M + 3S + 7A per bit, and requiring 6 registers. We also give formulas for the Joye ladder that use 9M + 3S + 7A per bit, requiring 5 registers. One of our new formulas supports very efficient 4-way vectorization. We also discuss curve invariants, exceptional points, side-channel protection and how to set up and finish these ladder operations. Finally, we show a novel technique to make these ladders complete when the curve order is not divisible by 2 or 3, at a modest increase in cost. A sample implementation of these techniques is given in the supplementary material, also posted at https://github.com/bitwiseshiftleft/ladder_formulas
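As a reminder of the structure whose per-bit cost these formulas reduce, here is the generic Montgomery ladder, sketched over an abstract group with integer arithmetic standing in for point addition and doubling (the curve-level add/double formulas are what the paper optimises; a hardened implementation would also replace the branch with a constant-time conditional swap).

```python
def montgomery_ladder(k, P, add, dbl, identity):
    """Montgomery ladder: R1 - R0 == P holds throughout, and every scalar
    bit triggers exactly one add and one double, giving a regular trace."""
    R0, R1 = identity, P
    for bit in bin(k)[2:]:          # most significant bit first
        if bit == '0':
            R1 = add(R0, R1)
            R0 = dbl(R0)
        else:
            R0 = add(R0, R1)
            R1 = dbl(R1)
    return R0

# sanity check with integers standing in for the group law
assert montgomery_ladder(41, 7, lambda a, b: a + b, lambda a: 2 * a, 0) == 41 * 7
```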
2020
TCHES
Modeling Soft Analytical Side-Channel Attacks from a Coding Theory Viewpoint 📺
One important open question in side-channel analysis is to find out whether all the leakage samples in an implementation can be exploited by an adversary, as suggested by masking security proofs. For attacks exploiting a divide-and-conquer strategy, the answer is negative: only the leakages corresponding to the first/last rounds of a block cipher can be exploited. Soft Analytical Side-Channel Attacks (SASCA) have been introduced as a powerful solution to mitigate this limitation. They represent the target implementation and its leakages as a code (similar to a Low Density Parity Check code) that is decoded thanks to belief propagation. Previous works have shown the low data complexities that SASCA can reach in practice. In this paper, we revisit these attacks by modeling them with a variation of the Random Probing Model used in masking security proofs, that we denote as the Local Random Probing Model (LRPM). Our study establishes interesting connections between this model and the erasure channel used in coding theory, leading to the following benefits. First, the LRPM allows bounding the security of concrete implementations against SASCA in a fast and intuitive manner. We use it in order to confirm that the leakage of any operation in a block cipher can be exploited, although the leakages of external operations dominate in known-plaintext/ciphertext attack scenarios. Second, we show that the LRPM is a tool of choice for the (nearly worst-case) analysis of masked implementations in the noisy leakage model, taking advantage of all the operations performed, and leading to new tradeoffs between their amount of randomness and physical noise level. Third, we show that it can considerably speed up the evaluation of other countermeasures such as shuffling.
2020
TCHES
RISQ-V: Tightly Coupled RISC-V Accelerators for Post-Quantum Cryptography 📺
Empowering electronic devices to support Post-Quantum Cryptography (PQC) is a challenging task. PQC introduces new mathematical elements and operations which are usually not easy to implement on standard processors. Especially for low-cost and resource-constrained devices, hardware acceleration is usually required. In addition, as the standardization process of PQC is still ongoing, a focus on maintaining flexibility is mandatory. To cope with such requirements, hardware/software co-design techniques have recently been used for developing complex and highly customized PQC solutions. However, while most of the previous works have developed loosely coupled PQC accelerators, the design of tightly coupled accelerators and Instruction Set Architecture (ISA) extensions for PQC has been barely explored. To this end, we present RISQ-V, an enhanced RISC-V architecture that integrates a set of powerful tightly coupled accelerators to speed up lattice-based PQC. RISQ-V efficiently reuses processor resources and reduces the number of memory accesses. This significantly increases the performance while keeping the silicon area overhead low. We present three contributions. First, we propose a set of powerful hardware accelerators deeply integrated into the RISC-V pipeline. Second, we extend the RISC-V ISA with 29 new instructions to efficiently perform operations for lattice-based cryptography. Third, we implement RISQ-V in ASIC technology and on an FPGA. We evaluated the performance of NewHope, Kyber, and Saber on RISQ-V. Compared to the pure software implementation on RISC-V, our co-design implementations show a speedup factor of up to 11.4 for NewHope, 9.6 for Kyber, and 2.7 for Saber. For the ASIC implementation, the energy consumption was reduced by factors of up to 9.5 for NewHope, 7.7 for Kyber, and 2.1 for Saber. The cell count of the CPU was increased by a factor of 1.6 compared to the original RISC-V design, which can be considered a moderate increase for the achieved performance gain.
2020
TCHES
Minerva: The curse of ECDSA nonces. Systematic analysis of lattice attacks on noisy leakage of bit-length of ECDSA nonces 📺
Best Paper CHES 2020
We present our discovery of a group of side-channel vulnerabilities in implementations of the ECDSA signature algorithm in a widely used Atmel AT90SC FIPS 140-2 certified smartcard chip and five cryptographic libraries (libgcrypt, wolfSSL, MatrixSSL, SunEC/OpenJDK/Oracle JDK, Crypto++). Vulnerable implementations leak the bit-length of the scalar used in scalar multiplication via timing. Using the leaked bit-length, we mount a lattice attack on a 256-bit curve after observing enough signing operations. We propose two new methods to recover the full private key, requiring just 500 signatures for simulated leakage data, 1200 for real cryptographic library data, and 2100 for smartcard data. The number of signatures needed for a successful attack depends on the chosen method and its parameters as well as on the noise profile, influenced by the type of leakage and the computation platform used. We use the set of vulnerabilities reported in this paper, together with the recently published TPM-FAIL vulnerability [MSE+20], as a basis for real-world benchmark datasets to systematically compare our newly proposed methods and all previously published applicable lattice-based key recovery methods. The resulting exhaustive comparison highlights the methods’ sensitivity to their proper parametrization and demonstrates that our methods are more efficient in most cases. For the TPM-FAIL dataset, we decreased the number of required signatures from approximately 40,000 to a mere 900.
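For context, these attacks rest on the standard reduction from biased ECDSA nonces to the hidden number problem: each signature $(r_i, s_i)$ on hashed message $h_i$, produced with nonce $k_i$ and private key $d$, satisfies $s_i \equiv k_i^{-1}(h_i + r_i d) \pmod{n}$, hence $k_i \equiv s_i^{-1} h_i + s_i^{-1} r_i d \pmod{n}$. A leaked bit-length bound $k_i < n/2^{\ell}$ thus turns every signature into a hidden-number-problem sample in the unknown $d$, which lattice reduction can solve once enough samples are collected.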
2020
TCHES
DANA: Universal Dataflow Analysis for Gate-Level Netlist Reverse Engineering 📺
Reverse engineering of integrated circuits, i.e., understanding the internals of Integrated Circuits (ICs), is required for many benign and malicious applications. Examples of the former are the detection of patent infringements, hardware Trojans or Intellectual Property (IP) theft, as well as interface recovery and defect analysis, while malicious applications include IP theft and finding insertion points for hardware Trojans. However, regardless of the application, the reverse engineer initially starts with a large unstructured netlist, forming an incomprehensible sea of gates. This work presents DANA, a generic, technology-agnostic, and fully automated dataflow analysis methodology for flattened gate-level netlists. By analyzing the flow of data between individual Flip Flops (FFs), DANA recovers high-level registers. The key idea behind DANA is to combine independent metrics based on structural and control information with a powerful automated architecture. Notably, DANA works without any thresholds, scenario-dependent parameters, or other “magic” values that the user must choose. We evaluate DANA on nine modern hardware designs, ranging from cryptographic co-processors, over CPUs, to the OpenTitan, a state-of-the-art System-on-Chip (SoC) maintained by the lowRISC initiative with supporting industry partners like Google and Western Digital. Our results demonstrate almost perfect recovery of registers for all case studies, regardless of whether they were synthesized as FPGA or ASIC netlists. Furthermore, we explore two applications for dataflow analysis: we show that the raw output of DANA often already allows the analyst to identify crucial components and high-level architecture features, and we also demonstrate its applicability for detecting simple hardware Trojans. Hence, DANA can be applied universally as the first step when investigating unknown netlists and provides major guidance for human analysts by structuring and condensing the otherwise incomprehensible sea of gates. Our implementation of DANA and all synthesized netlists are available as open source on GitHub.
2020
TCHES
Strength in Numbers: Improving Generalization with Ensembles in Machine Learning-based Profiled Side-channel Analysis 📺
The adoption of deep neural networks for profiled side-channel attacks provides powerful options for leakage detection and key retrieval of secure products. When training a neural network for side-channel analysis, it is expected that the trained model can implement an approximation function that detects leaking side-channel samples and, at the same time, is insensitive to noisy (or non-leaking) samples. This outlines a generalization situation where the model can identify the main representations learned from the training set in a separate test set. This paper discusses how output class probabilities represent a strong metric when conducting side-channel analysis. Further, we observe that these output probabilities are sensitive to small changes, like the selection of specific test traces or the weight initialization of a neural network. Next, we discuss hyperparameter tuning, where one commonly uses only a single model out of dozens of trained ones, although each of those models will produce different output probabilities. We show how ensembles of machine learning models based on averaged class probabilities can improve generalization. Our results emphasize that ensembles increase a profiled side-channel attack’s performance and reduce the variance of results stemming from different hyperparameters, regardless of the selected dataset or leakage model.
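A minimal sketch of the ensemble step follows. It assumes Keras-style models exposing a predict method and a user-supplied leakage_value function mapping a plaintext and key guess to the model's class label (both are assumptions for illustration, not the paper's exact pipeline).

```python
import numpy as np

def ensemble_probabilities(models, traces):
    """Average output class probabilities over an ensemble of trained
    models instead of keeping the single best model from tuning."""
    return np.mean([m.predict(traces) for m in models], axis=0)

def key_ranking(probs, plaintexts, leakage_value):
    """Accumulate log-probabilities per key guess (256 candidates for one
    AES key byte); return guesses sorted from most to least likely."""
    scores = np.zeros(256)
    for p, pt in zip(probs, plaintexts):
        for k in range(256):
            scores[k] += np.log(p[leakage_value(pt, k)] + 1e-36)
    return np.argsort(scores)[::-1]
```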
2020
TCHES
Retrofitting Leakage Resilient Authenticated Encryption to Microcontrollers 📺
The security of Internet of Things (IoT) devices relies on fundamental concepts such as cryptographically protected firmware updates. In this context, attackers usually have physical access to a device, and therefore side-channel attacks have to be considered. This makes the protection of required cryptographic keys and implementations challenging, especially for commercial off-the-shelf (COTS) microcontrollers that typically have no hardware countermeasures. In this work, we demonstrate how unprotected hardware AES engines of COTS microcontrollers can be efficiently protected against side-channel attacks by constructing a leakage resilient pseudo random function (LR-PRF). Using this side-channel protected building block, we implement a leakage resilient authenticated encryption with associated data (AEAD) scheme that enables secure firmware updates. We use concepts from leakage resilience to retrofit side-channel protection onto unprotected hardware AES engines by means of software-only modifications. The LR-PRF construction leverages frequent key changes and low data complexity together with key-dependent noise from parallel hardware to protect against side-channel attacks. Contrary to most other protection mechanisms such as time-based hiding, no additional true randomness is required. Our concept relies on parallel S-boxes in the AES hardware implementation, a feature that is fortunately present in many microcontrollers as a measure to increase performance. In a case study, we implement the protected AEAD scheme for two popular ARM Cortex-M microcontrollers with differing parallelism. We evaluate the protection capabilities in realistic IoT attack scenarios, where non-invasive EM probes or power consumption measurements are employed by the attacker. We show that the concept provides the side-channel hardening that is required for the long-term security of IoT devices.
2020
TCHES
Remove Some Noise: On Pre-processing of Side-channel Measurements with Autoencoders 📺
In profiled side-channel analysis, deep learning-based techniques have proved to be very successful even when attacking targets protected with countermeasures. Still, there is no guarantee that deep learning attacks will always succeed. Various countermeasures make attacks significantly more complex, and such countermeasures can be further combined to make the attacks even more challenging. An intuitive solution to improve the performance of attacks would be to reduce the effect of countermeasures. This paper investigates whether we can consider certain types of hiding countermeasures as noise and then use a deep learning technique called the denoising autoencoder to remove that noise. We conduct a detailed analysis of six different types of noise and countermeasures, separately and combined, and show that the denoising autoencoder improves the attack performance significantly.
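A minimal sketch of such a denoising autoencoder, using Keras with illustrative layer sizes (the paper's architecture and training setup will differ): it is trained on pairs of noisy and clean traces and then applied to attack traces before the profiled attack.

```python
import tensorflow as tf
from tensorflow.keras import layers

trace_len = 700  # illustrative trace length
autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(trace_len,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(64, activation="relu"),    # bottleneck
    layers.Dense(256, activation="relu"),
    layers.Dense(trace_len, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")

# noisy_traces / clean_traces: aligned training pairs (hypothetical arrays)
# autoencoder.fit(noisy_traces, clean_traces, epochs=50, batch_size=128)
# denoised = autoencoder.predict(attack_traces)
```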
2020
TCHES
Unrolled Cryptography on Silicon: A Physical Security Analysis 📺
Cryptographic primitives with low-latency performance have gained momentum lately due to an increased demand for real-time applications. Block ciphers such as PRINCE enable data encryption (resp. decryption) within a single clock cycle at a moderately high operating frequency when implemented in a fully-unrolled fashion. Unsurprisingly, many typical environments for unrolled ciphers require protection against physical adversaries as well. Yet, recent works suggest that most common SCA countermeasures are hard to apply to low-latency circuits. Hardware masking, for example, requires register stages to offer resistance, thus adding delay and defeating the purpose of unrolling. On another note, it has been indicated that unrolled primitives without any additional means of protection offer an intrinsic resistance to SCA attacks due to their parallelism, asynchronicity and speed of execution. In this work, we take a closer look at the physical security properties provided by unrolled cryptographic IC implementations. We are able to confirm that the nature of unrolling indeed bears the potential to decrease the susceptibility of cipher implementations significantly when reset methods are applied. With respect to certain adversarial models, e.g., ciphertext-only access, an amazingly high level of protection can be achieved. While this seems to be a great result for cryptographic hardware engineers, there is an attack vector hidden in plain sight which still threatens the security of unrolled implementations remarkably – namely the static power consumption of CMOS-based circuits. We point out that essentially all reasons which make it hard to extract meaningful information from the dynamic behavior of unrolled primitives are not an issue when exploiting the static currents for key recovery. Our evaluation is based on real-silicon measurements of an unrolled PRINCE core in a custom 40nm ASIC. The presented results serve as a neat educational case study to demonstrate the broad differences between dynamic and static power information leakage in the light of technological advancement.
2020
TCHES
High-speed Instruction-set Coprocessor for Lattice-based Key Encapsulation Mechanism: Saber in Hardware 📺
In this paper, we present an instruction set coprocessor architecture for lattice-based cryptography and implement the module lattice-based post-quantum key encapsulation mechanism (KEM) Saber as a case study. To achieve fast computation time, the architecture is fully implemented in hardware, including the CCA transformations. Since polynomial multiplication plays a performance-critical role in module and ideal lattice-based public-key cryptography, a parallel polynomial multiplier architecture is proposed that overcomes memory access bottlenecks and results in a highly parallel yet simple and easy-to-scale design. Such multipliers can compute a full multiplication in 256 cycles and can be scaled to target other area/performance trade-offs. Besides optimizing polynomial multiplication, we make important design decisions and perform architectural optimizations to reduce the overall cycle counts as well as improve resource utilization. For module dimension 3 (security comparable to AES-192), the coprocessor computes CCA key generation, encapsulation, and decapsulation in only 5,453, 6,618 and 8,034 cycles respectively, making it the fastest hardware implementation of Saber to our knowledge. On a Xilinx UltraScale+ XCZU9EG-2FFVB1156 FPGA, the entire instruction set coprocessor architecture runs at 250 MHz clock frequency and consumes 23,686 LUTs, 9,805 FFs, and 2 BRAM tiles (including 5,113 LUTs and 3,068 FFs for the Keccak core).
2020
TCHES
Strengthening Sequential Side-Channel Attacks Through Change Detection 📺
The sequential structure of some side-channel attacks makes them subject to error propagation: when an error occurs during the recovery of some part of a secret key, all the following guesses might as well be chosen randomly. We propose a methodology that strengthens sequential attacks by automatically identifying and correcting errors. The core ingredient of our methodology is a change-detection test that monitors the distribution of the distinguisher values used to reconstruct the secret key. Our methodology includes an error-correction procedure that can cope both with false positives of the change-detection test and with inaccuracies in the estimated location of the wrong key guess. The proposed methodology is general and can be included in several attacks. As meaningful examples, we conduct two different side-channel attacks against RSA-2048: a horizontal power-analysis attack based on correlation and a vertical timing attack. Our experiments show that, in all the considered cases, strengthened attacks outperform their original counterparts and alternative solutions based on thresholds. In particular, strengthened attacks achieve high success rates even when the side-channel measurements are noisy or limited in number, without prohibitively increasing the computing time.
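As a concrete example of a change-detection test, the sketch below implements a toy one-sided CUSUM monitor over the stream of distinguisher values; the paper's actual test and its error-correction procedure are more elaborate.

```python
import numpy as np

def cusum_change_detect(distinguishers, drift=0.0, threshold=5.0):
    """One-sided CUSUM test: an alarm suggests the distinguisher
    distribution dropped, i.e. a likely wrong guess occurred earlier
    in the sequential key recovery."""
    baseline = np.mean(distinguishers[:8])        # estimate from early values
    s = 0.0
    for i, v in enumerate(distinguishers):
        s = max(0.0, s + (baseline - v) - drift)  # accumulate downward drift
        if s > threshold:
            return i                              # index where a change is flagged
    return None                                   # no change detected
```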
2020
TCHES
Investigating Profiled Side-Channel Attacks Against the DES Key Schedule 📺
Recent publications describe profiled single trace side-channel attacks (SCAs) against the DES key-schedule of a “commercially available security controller”. They report a significant reduction of the average remaining entropy of cryptographic keys after the attack, with surprisingly large, key-dependent variations of attack results, and individual cases with remaining key entropies as low as a few bits. Unfortunately, they leave important questions unanswered: Are the reported wide distributions of results plausible - can this be explained? Are the results device-specific or more generally applicable to other devices? What is the actual impact on the security of 3-key triple DES? We systematically answer those and several other questions by analyzing two commercial security controllers and a general purpose microcontroller. We observe a significant overall reduction and, importantly, also a large key-dependent variation in single DES key security levels, i.e. 49.4 bit mean and 0.9 % of keys < 40 bit (first investigated security controller; other results similar). We also observe a small fraction of keys with exceptionally low security levels that can be called weak keys. It is unclear whether a device’s side-channel security should be assessed based on such rare weak key outliers. We generalize results to other leakage models by attacking the hardware DES accelerator of a general purpose microcontroller exhibiting a different leakage model. A highly simplified leakage simulation also confirms the wide distribution and shows that security levels are predictable to some extent. Through extensive investigations we find that the actual weakness of keys mainly stems from the specific switching noise they cause. Based on our investigations we expect that widely distributed results and weak outliers should be expected for all profiled attacks against (insufficiently protected) key-schedules, regardless of the algorithm and specific implementation. Finally, we describe a sound approach to estimate actual 3-key triple-DES security levels from empirical single DES results and find that the impact on the security of 3-key triple-DES is limited, i.e. 96.1 bit mean and 0.24 % of key-triples < 80 bit for the same security controller.
2020
TCHES
A Novel Evaluation Metric for Deep Learning-Based Side Channel Analysis and Its Extended Application to Imbalanced Data 📺
Since Kocher (CRYPTO’96) proposed the timing attack, side channel analysis (SCA) has shown great potential to break cryptosystems via physical leakage. Recently, deep learning techniques have been widely used in SCA and show equivalent and even better performance compared to traditional methods. However, it remains unknown why and when deep learning techniques are effective and efficient for SCA. Masure et al. (IACR TCHES 2020(1):348–375) illustrated that the deep learning paradigm is suitable for evaluating implementations against SCA from a worst-case scenario point of view, yet their work is limited to balanced data and a specific loss function. Besides, deep learning metrics are not consistent with side channel metrics. In most cases, they are deceptive in foreseeing the feasibility and complexity of mounting a successful attack, especially for imbalanced data. To mitigate the gap between deep learning metrics and side channel metrics, we propose a novel Cross Entropy Ratio (CER) metric to evaluate the performance of deep learning models for SCA. CER is closely related to the traditional side channel metrics Guessing Entropy (GE) and Success Rate (SR) and fits the deep learning scenario. Besides, we show that it works stably even when deep learning metrics such as accuracy become rather unreliable because the training data is imbalanced. Yet estimating CER is as easy as computing natural deep learning metrics, with low computational complexity. Furthermore, we adapt the CER metric into a new kind of loss function, namely the CER loss function, designed specifically for deep learning in the side channel scenario. In this way, we link the SCA objective directly to deep learning optimization. Our experiments on several datasets show that, for SCA with imbalanced data, the CER loss function outperforms the Cross Entropy loss function in various conditions.
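As a purely hypothetical illustration of what a cross-entropy-ratio-style metric can look like (consult the paper for the exact definition of CER), one can normalise the model's cross entropy by its cross entropy on randomly shuffled labels, so that class imbalance affects numerator and denominator alike:

```python
import numpy as np

def cross_entropy(probs, labels):
    # probs: (n_traces, n_classes) predicted probabilities; labels: true classes
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-36))

def cross_entropy_ratio(probs, labels, rng=np.random.default_rng()):
    """Hypothetical sketch of a ratio-style metric: values below 1 indicate
    the model learned more than label-shuffled chance."""
    return cross_entropy(probs, labels) / cross_entropy(probs, rng.permutation(labels))
```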
2020
TCHES
Splitting the Interpose PUF: A Novel Modeling Attack Strategy 📺
We demonstrate that the Interpose PUF proposed at CHES 2019, an Arbiter PUF-based design for so-called Strong Physical Unclonable Functions (PUFs), can be modeled by novel machine learning strategies up to very substantial sizes and complexities. In the most difficult cases, our attacks require considerable but realistic numbers of CRPs, while consuming only moderate computation times, ranging from a few seconds to a few days. The attacks build on a new divide-and-conquer approach that allows us to model the two building blocks of the Interpose PUF separately. For non-reliability based Machine Learning (ML) attacks, this eventually leads to attack times on (k_up, k_down)-Interpose PUFs that are comparable to the ones against max{k_up, k_down}-XOR Arbiter PUFs, refuting the original claim that Interpose PUFs could provide security similar to (k_down + k_up/2)-XOR Arbiter PUFs (CHES 2019). On the technical side, our novel divide-and-conquer technique might also be useful in analyzing other designs where XOR Arbiter PUF challenge bits are unknown to the attacker.
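For background, the standard ML attack on a plain Arbiter PUF models the response as the sign of a linear function of parity features of the challenge; the Interpose PUF composes two such layers, which the paper's divide-and-conquer strategy separates. A self-contained simulation of the single-layer case:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def parity_features(challenges):
    """Standard Arbiter PUF feature map: phi_i = prod_{j>=i} (1 - 2*c_j),
    plus a constant column for the bias term."""
    signs = 1 - 2 * challenges                            # {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]     # suffix products
    return np.hstack([phi, np.ones((len(challenges), 1))])

rng = np.random.default_rng(0)
n_stages, n_crps = 64, 20_000
w = rng.standard_normal(n_stages + 1)                     # secret stage delays
C = rng.integers(0, 2, size=(n_crps, n_stages))           # random challenges
X = parity_features(C)
y = (X @ w > 0).astype(int)                               # simulated responses

model = LogisticRegression(max_iter=1000).fit(X[:15_000], y[:15_000])
print(model.score(X[15_000:], y[15_000:]))                # near 1.0: PUF modeled
```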
2020
TCHES
CPAmap: On the Complexity of Secure FPGA Virtualization, Multi-Tenancy, and Physical Design 📺
With virtualized Field Programmable Gate Arrays (FPGAs) on the verge of being deployed to the cloud computing domain, there is a rising interest in resolving recently identified security issues. Those issues result from different trusted and untrusted entities sharing the FPGA fabric and the Power Distribution Network. Researchers were able to perform both side-channel and fault attacks between logically isolated designs on the same FPGA fabric, compromising security of cryptographic modules and other critical implementations. Side-channel attacks specifically are enabled by the vast degree of freedom given to developers when making use of the basic FPGA resources. Both ring oscillators as well as long delay lines, implemented using low-level FPGA primitives, have been shown to provide sufficient data for simple or correlation-based power analysis attacks. In order to develop new or apply known countermeasures onto designs and implementations in a virtualized multi-tenant FPGA, we seek to fully understand the underlying mechanisms and dependencies of chip-internal side-channel attacks. Although the impact of process variation and other physical design parameters on side-channel vulnerability has been investigated in previous works, remote attacks between logically isolated partitions in multi-tenant FPGAs introduce new and unique challenges. Thus, we systematically analyze the impact of physical mapping of both attacker and victim design on the success of correlation power analysis attacks on the Advanced Encryption Standard (AES). We report our findings on a Xilinx Zynq 7000-based platform, which show that the effect of global and local placement as well as routing and process variation on the success of side-channel attacks almost exceeds the impact of hiding countermeasures. This result reveals fundamental challenges in secure virtualization of FPGAs, which have been mostly ignored so far. Eventually, our results may also help vendors and hypervisors in developing zero overhead side-channel countermeasures based on adequate global and local placement of isolated designs on a multi-tenant FPGA.
2020
TCHES
Revisiting a Methodology for Efficient CNN Architectures in Profiling Attacks 📺
This work provides a critical review of the paper by Zaid et al. titled “Methodology for Efficient CNN Architectures in Profiling attacks”, which was published in TCHES Volume 2020, Issue 1. That work studies the design of CNN networks to perform side-channel analysis of multiple AES implementations for embedded devices. Based on the authors’ code and public data sets, we were able to cross-check their results and perform a thorough analysis. We correct multiple misconceptions by carefully inspecting different elements of the model architectures proposed by Zaid et al. First, by providing a better understanding of the internal workings of these models, we can trivially reduce their number of parameters on average by 52%, while maintaining a similar performance. Second, we demonstrate that the convolutional filter’s size is not strictly related to the amount of misalignment in the traces. Third, we show that increasing the filter size and the number of convolutions actually improves the performance of a network. Our work demonstrates once again that reproducibility and review are important pillars of academic research. Therefore, we provide the reader with an online Python notebook which allows some of our experiments to be reproduced, and additional example code is made available on GitHub.
2020
TCHES
JackHammer: Efficient Rowhammer on Heterogeneous FPGA-CPU Platforms 📺
After years of development, FPGAs are finally making an appearance on multi-tenant cloud servers. Heterogeneous FPGA-CPU microarchitectures require reassessment of common assumptions about isolation and security boundaries, as they introduce new attack vectors and vulnerabilities. In this work, we analyze the memory and cache subsystem and study Rowhammer and cache attacks enabled by two proposed heterogeneous FPGA-CPU platforms from Intel: the Arria 10 GX with an integrated FPGA-CPU platform, and the Arria 10 GX PAC expansion card which connects the FPGA to the CPU via the PCIe interface. We demonstrate JackHammer, a novel, efficient, and stealthy Rowhammer attack from the FPGA to the host’s main memory. Our results indicate that a malicious FPGA can perform the attack twice as fast as a typical Rowhammer attack from the CPU on the same system and causes around four times as many bit flips as the CPU attack. We demonstrate the efficacy of JackHammer from the FPGA through a realistic fault attack on the WolfSSL RSA signing implementation that reliably causes a fault after an average of fifty-eight RSA signatures, 25% faster than a CPU Rowhammer attack. In some scenarios our JackHammer attack produces faulty signatures more than three times more often and almost three times faster than a conventional CPU Rowhammer attack. Finally, we systematically analyze new cache attacks in these environments, following a demonstration of a cache covert channel across FPGA and CPU.
2020
TCHES
The Long and Winding Path to Secure Implementation of GlobalPlatform SCP10 📺
GlobalPlatform (GP) card specifications are defined for smart cards with rigorous security requirements. Increasingly powerful cards within an open ecosystem of multiple players make asymmetric-key protocols necessary. In this paper, we analyze SCP10, the Secure Channel Protocol (SCP) that relies on RSA for key exchange and authentication. Our findings are twofold. First, we demonstrate several flaws in the design of SCP10. We discuss the scope of the identified flaws by presenting several attack scenarios in which a malicious attacker can recover all the messages protected by SCP10. We provide a full implementation of these attacks. For instance, an attacker can obtain the freshly generated session keys in less than three hours. Second, we propose a secure implementation of SCP10 and discuss how it can mitigate the discovered flaws. Finally, we measure the overhead incurred by the implemented countermeasures.
2020
TCHES
ISA Extensions for Finite Field Arithmetic: Accelerating Kyber and NewHope on RISC-V 📺
We present and evaluate a custom extension to the RISC-V instruction set for finite field arithmetic. The result serves as a very compact approach to software-hardware co-design of PQC implementations in the context of small embedded processors such as smartcards. The extension provides instructions that implement finite field operations with subsequent reduction of the result. As small finite fields are used in various PQC schemes, such instructions can provide a considerable speedup for an otherwise software-based implementation. Furthermore, we create a prototype implementation of the presented instructions for the extendable VexRiscv core, integrate the result into a chip design, and evaluate the design on two different FPGA platforms. The effectiveness of the extension is evaluated by using the instructions to optimize the Kyber and NewHope key-encapsulation schemes. To that end, we also present an optimized software implementation for the standard RISC-V instruction set of the polynomial arithmetic underlying those schemes, which serves as the basis for comparison. Both variants are tuned at the assembly level to optimally use the processor pipelines of contemporary RISC-V CPUs. The results show a speedup of up to 85% for the polynomial arithmetic over the basic software implementation. Using the custom instructions drastically reduces the code and data size of the implementation without introducing runtime-performance penalties, at a small cost in circuit size. When used in the selected schemes, the custom instructions can replace a full general purpose multiplier to achieve very compact implementations.
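As an illustration of the kind of fused multiply-and-reduce step such an ISE can expose as a single instruction, here is a Barrett reduction sketch for small PQC moduli (a generic textbook method, not necessarily the reduction the paper implements):

```python
def barrett_constants(q, k=32):
    """Precompute m = floor(2^k / q) for Barrett reduction modulo q."""
    return (1 << k) // q, k

def barrett_reduce(a, q, m, k):
    """Reduce a mod q with one multiply, one shift and one subtraction,
    plus a single conditional correction (valid for 0 <= a < 2^k)."""
    t = (a * m) >> k        # approximate quotient
    r = a - t * q
    return r - q if r >= q else r

q = 3329                    # Kyber's modulus; NewHope uses q = 12289
m, k = barrett_constants(q)
for a in range(0, 1 << 20, 997):
    assert barrett_reduce(a, q, m, k) == a % q
```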
2020
TCHES
Single-Trace Attacks on Keccak 📺
Since its selection as the winner of the SHA-3 competition, Keccak, with all its variants, has found a large number of applications. It is, for instance, a common building block in schemes submitted to NIST’s post-quantum cryptography project. In many of these applications, Keccak processes ephemeral secrets. In such a setting, side-channel adversaries are limited to a single observation, meaning that differential attacks are inherently prevented. Whether such a single trace of Keccak can already be sufficient for key recovery has, however, so far been unknown. In this paper, we change the above by presenting the first single-trace attack targeting Keccak. Our method is based on soft-analytical side-channel attacks and, thus, combines template matching with message passing in a graphical model of the attacked algorithm. As a straightforward model of Keccak does not yield satisfactory results, we describe several optimizations for the modeling and the message-passing algorithm. Their combination allows attaining high attack performance in terms of both success rate and computational runtime. We evaluate our attack assuming generic software (microcontroller) targets and thus use simulations in the generic noisy Hamming-weight leakage model. Hence, we assume relatively modest profiling capabilities of the adversary. Nonetheless, the attack can reliably recover secrets in a large number of evaluated scenarios at realistic noise levels. Consequently, we demonstrate the need for countermeasures even in settings where DPA is not a threat.
2020
TCHES
Parameterized Hardware Accelerators for Lattice-Based Cryptography and Their Application to the HW/SW Co-Design of qTESLA 📺
This paper presents a set of efficient and parameterized hardware accelerators that target post-quantum lattice-based cryptographic schemes, including a versatile cSHAKE core, a binary-search CDT-based Gaussian sampler, and a pipelined NTT-based polynomial multiplier, among others. Unlike much of the prior work, the accelerators are fully open-sourced, are designed to be constant-time, and can be parameterized at compile-time to support different parameters without the need for re-writing the hardware implementation. These flexible, publicly-available accelerators are leveraged to demonstrate the first hardware-software co-design using RISC-V of the post-quantum lattice-based signature scheme qTESLA with provably secure parameters. In particular, this work demonstrates that NIST’s Round 2 level 1 and level 3 qTESLA variants achieve over a 40-100x speedup for key generation, about a 10x speedup for signing, and about a 16x speedup for verification, compared to the baseline RISC-V software-only implementation. For instance, this corresponds to execution in 7.7, 34.4, and 7.8 milliseconds for key generation, signing, and verification, respectively, for qTESLA’s level 1 parameter set on an Artix-7 FPGA, demonstrating the feasibility of the scheme for embedded applications.
2020
TCHES
Generic Side-channel attacks on CCA-secure lattice-based PKE and KEMs 📺
In this work, we demonstrate generic and practical EM side-channel assisted chosen ciphertext attacks on multiple LWE/LWR-based Public Key Encryption (PKE) and Key Encapsulation Mechanisms (KEMs) secure in the chosen ciphertext model (IND-CCA security). We show that the EM side-channel information can be efficiently utilized to instantiate a plaintext checking oracle, which provides binary information about the output of decryption, typically concealed within IND-CCA secure PKE/KEMs, thereby enabling our attacks. First, we identify EM-based side-channel vulnerabilities in the error correcting codes (ECC), enabling us to distinguish based on the value/validity of decrypted codewords. We also identify similar vulnerabilities in the Fujisaki-Okamoto transform, which leaks information about decrypted messages and applies to schemes that do not use ECC. We subsequently exploit these vulnerabilities to demonstrate practical attacks applicable to six CCA-secure lattice-based PKE/KEMs competing in the second round of the NIST standardization process. We perform experimental validation of our attacks on implementations taken from the open-source pqm4 library, running on the ARM Cortex-M4 microcontroller. Our attacks lead to complete key recovery in a matter of minutes on all the targeted schemes, thus showing the effectiveness of our attack.
2020
TCHES
Cortex-M4 optimizations for {R,M} LWE schemes 📺
This paper proposes various optimizations for lattice-based key encapsulation mechanisms (KEMs) using the Number Theoretic Transform (NTT) on the popular ARM Cortex-M4 microcontroller. Improvements come in the form of faster code using more efficient modular reductions, optimized small-degree polynomial multiplications, and more aggressive layer merging in the NTT, but also in the form of reduced stack usage. We test our optimizations in software implementations of Kyber and NewHope, both round 2 candidates in the NIST post-quantum project, and also NewHope-Compact, a recently proposed variant of NewHope with smaller parameters. Our software is the first implementation of NewHope-Compact on the Cortex-M4 and shows speed improvements over previous high-speed implementations of Kyber and NewHope. Moreover, it gives a common framework to compare those schemes with the same level of optimization. Our results show that NewHope-Compact is the fastest scheme, followed by Kyber, and finally NewHope, which seems to suffer from its large modulus and error distribution at small dimensions.
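For readers who want to see the operation being optimised, here is a toy cyclic NTT-based polynomial multiplication over a small modulus. Kyber and NewHope use negacyclic variants modulo x^n + 1 with lazy reductions, and the paper's gains come from assembly-level layer merging and cheaper modular reductions on top of this basic structure.

```python
def ntt(a, q, root):
    """Iterative Cooley-Tukey NTT over Z_q; len(a) must be a power of two
    and root a primitive len(a)-th root of unity modulo q."""
    a, n = a[:], len(a)
    j = 0
    for i in range(1, n):                  # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:                     # log2(n) butterfly layers
        w_len = pow(root, n // length, q)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % q
                a[k], a[k + length // 2] = (u + v) % q, (u - v) % q
                w = w * w_len % q
        length <<= 1
    return a

# toy parameters: q = 17, n = 8, and 2 is a primitive 8th root of unity mod 17
q, n, root = 17, 8, 2
a = [3, 1, 4, 1, 5, 9, 2, 6]
b = [2, 7, 1, 8, 2, 8, 1, 8]
inv_root, inv_n = pow(root, -1, q), pow(n, -1, q)
c_hat = [x * y % q for x, y in zip(ntt(a, q, root), ntt(b, q, root))]
c = [x * inv_n % q for x in ntt(c_hat, q, inv_root)]   # inverse NTT

# cross-check against schoolbook cyclic convolution mod (x^n - 1)
ref = [sum(a[j] * b[(i - j) % n] for j in range(n)) % q for i in range(n)]
assert c == ref
```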
2020
TCHES
Understanding Screaming Channels: From a Detailed Analysis to Improved Attacks 📺
Recently, some wireless devices have been found vulnerable to a novel class of side-channel attacks, called Screaming Channels. These leaks appear when sensitive signals from the processor are unintentionally broadcast by a radio transmitter placed on the same chip. Previous work focused on identifying the root causes and on mounting an attack at a distance considerably larger than the one achievable with conventional electromagnetic side channels, demonstrated in the low-noise environment of an anechoic chamber. However, a detailed understanding of the leak, attacks that take full advantage of the novel vector, and security evaluations in more practical scenarios were still missing. In this paper, we conduct a thorough experimental analysis of the peculiar properties of Screaming Channels. For example, we learn about the coexistence of intended and unintended data, the role of distance and other parameters in the strength of the leak, the distortion of the leakage model, and the portability of the profiles. With such insights, we build better attacks. We profile a device connected via cable with 10000·500 traces. Then, 5 months later, we attack a different instance at 15 m in an office environment. We recover the AES-128 key with 5000·1000 traces and key enumeration up to 2^23. Leveraging spatial diversity, we mount some attacks in the presence of obstacles. As a first example of application to a real system, we show a proof-of-concept attack against the authentication method of Google Eddystone beacons. On the one side, this work lowers the bar for more realistic attacks, highlighting the importance of the novel attack vector. On the other side, it provides a broader security evaluation of the leaks, helping defenders and radio designers to evaluate risk and the need for countermeasures.
2020
TCHES
Fixslicing: A New GIFT Representation: Fast Constant-Time Implementations of GIFT and GIFT-COFB on ARM Cortex-M 📺
The GIFT family of lightweight block ciphers, published at CHES 2017, offers excellent hardware performance figures and has been used, in full or in part, in several candidates of the ongoing NIST lightweight cryptography competition. However, implementing GIFT in software seems complex and inefficient due to the bit permutation composing its linear layer (a feature shared with the PRESENT cipher). In this article, we exhibit a new non-trivial representation of the GIFT family of block ciphers over several rounds. This new representation, which we call fixslicing, allows extremely efficient software bitsliced implementations of GIFT, using only a few rotations, surprisingly placing GIFT as a very efficient candidate on microcontrollers. Our constant-time implementations show that, on an ARM Cortex-M3, 128-bit data can be ciphered with only about 800 cycles for GIFT-64 and about 1300 cycles for GIFT-128 (assuming pre-computed round keys). In particular, this is much faster than the impressive PRESENT implementation published at CHES 2017 that requires 2116 cycles in the same setting, or the current best constant-time AES implementation reported, which requires 1617 cycles. This work impacts GIFT, but also improves software implementations of all other cryptographic primitives directly based on it or strongly related to it.
2020
TCHES
From A to Z: Projective coordinates leakage in the wild 📺
At EUROCRYPT 2004, Naccache et al. showed that the projective coordinates representation of the resulting point of an elliptic curve scalar multiplication potentially allows recovery of some bits of the scalar. However, this attack has received little attention from the scientific community, and the status of deployed mitigations to prevent it in widely adopted cryptography libraries is unknown. In this paper, we aim to fill this gap by analyzing several cryptography libraries in this context. To demonstrate the applicability of the attack, we use a side-channel attack to exploit this vulnerability within libgcrypt in the context of ECDSA. To the best of our knowledge, this is the first practical attack instance. It targets the insecure binary extended Euclidean algorithm implementation using a microarchitectural side-channel attack that allows recovering the projective representation of the output point of scalar multiplication during ECDSA signature generation. We captured 100k traces to estimate the number of traces an attacker would need to compromise the libgcrypt ECDSA implementation, resulting in less than 2k traces for the commonly used elliptic curve secp256r1, demonstrating the attack’s feasibility. During exploitation, we found two additional vulnerabilities. However, we remark that the purpose of this paper is not merely to exploit a library but to provide an analysis of the projective coordinates vulnerability status in widely deployed open-source libraries, filling a gap between its original description in the academic literature and the adoption of countermeasures to thwart it in real-world applications.
2020
TCHES
Defeating State-of-the-Art White-Box Countermeasures with Advanced Gray-Box Attacks 📺
The goal of white-box cryptography is to protect secret keys embedded in cryptographic software deployed in an untrusted environment. In this article, we revisit state-of-the-art countermeasures employed in white-box cryptography and discuss possible ways to combine them. Then we analyze the different gray-box attack paths and study their performance in terms of required traces and computation time. Afterwards, we propose a new paradigm for the gray-box attack against white-box cryptography, which exploits the data-dependency of the target implementation. We demonstrate that our approach provides substantial complexity improvements over the existing attacks. Finally, we showcase this new technique by breaking the three winning AES-128 white-box implementations from the WhibOx 2019 white-box cryptography competition.
2020
TCHES
High-Speed Masking for Polynomial Comparison in Lattice-based KEMs 📺
With the NIST post-quantum standardization competition entering the second round, interest in practical implementation results for the remaining NIST candidates is steadily growing. Implementations on embedded devices, especially, are often not protected against side-channel attacks such as differential power analysis. In this regard, the application of countermeasures against side-channel attacks to candidates of the NIST standardization process is still an understudied topic. Our work aims to contribute to the NIST competition by enabling a more realistic judgment of the overhead cost introduced by side-channel countermeasures applied to lattice-based KEMs that achieve CCA-security based on the Fujisaki-Okamoto transform. We present a novel higher-order masking scheme that enables an efficient comparison of polynomials, as previous techniques based on arithmetic-to-Boolean conversions render this (generally inexpensive) component extremely expensive in the masked case. Our approach has linear complexity in the number of shares, compared to the quadratic complexity of previous contributions, and applies to lattice-based schemes with a prime modulus. It comes with a proof in the probing model and an efficient implementation on an ARM Cortex-M4F microcontroller, which was defined as a preferred evaluation platform for embedded implementations by NIST. Our algorithm executes in only 1.5-2.2 milliseconds on the target platform (depending on the masking order) and is therefore well suited even for lightweight applications. While in previous work practical side-channel experiments were conducted using only 5,000-100,000 power traces, we confirm the absence of first-order leakage in this work by collecting 1 million power traces and applying the t-test methodology.
2020
TCHES
Protecting against Statistical Ineffective Fault Attacks 📺
Statistical Ineffective Fault Attacks (SIFA) pose a threat to many practical implementations of symmetric primitives. Countermeasures against both power analysis and fault attacks typically do not prevent straightforward SIFA attacks, which require only very limited knowledge about the concrete implementation. Therefore, the exploration of countermeasures against SIFA that do not rely on protocols or physical protection mechanisms is of great interest. In this paper, we describe different countermeasure strategies against SIFA. First, we introduce an abstraction layer between the algorithmic specification of a cipher and its implementation in hardware or software to study and describe resistance against SIFA. We then show that by basing the masked implementation on permutations as building blocks, we can build circuits that withstand single-fault SIFA and DPA attacks. We show how this approach can be applied to 3-bit, 4-bit, and 5-bit S-boxes and the AES S-box. Additionally, we present a strategy based on fine-grained fault detection suitable for protecting any circuit against SIFA attacks. Although this approach may lead to a higher implementation cost due to the fine-grained detection needed, it can be used to protect arbitrary circuits and can be generalized to cover multi-fault SIFA. For single-fault SIFA protection, our countermeasures only have a small computational overhead compared to a simple combination of masking and duplication.
2020
TCHES
Side-Channel Countermeasures’ Dissection and the Limits of Closed Source Security Evaluations 📺
We take advantage of a recently published open-source implementation of the AES, protected with a mix of countermeasures against side-channel attacks, to discuss both the challenges in protecting COTS devices against such attacks and the limitations of closed source security evaluations. The target implementation was proposed by the French ANSSI (Agence Nationale de la Sécurité des Systèmes d’Information) to stimulate research on the design and evaluation of side-channel secure implementations. It combines additive and multiplicative secret sharings into an affine masking scheme that is additionally mixed with a shuffled execution. Its preliminary leakage assessment did not detect data dependencies with up to 100,000 measurements. We first exhibit the gap between such a preliminary leakage assessment and advanced attacks by demonstrating how a dissection of the countermeasures, exploiting a mix of dimensionality reduction, multivariate information extraction and key enumeration, can recover the full key with less than 2,000 measurements. We then discuss the relevance of open source evaluations for analyzing such implementations efficiently, by pointing out that certain steps of the attack are hard to automate without implementation knowledge (even with machine learning tools), while performing them manually is straightforward. Our findings are not due to design flaws but stem from the general difficulty of preventing side-channel attacks in COTS devices with limited noise. We anticipate that high security on such devices requires significantly more shares.
2020
TCHES
A Fast and Accurate Guessing Entropy Estimation Algorithm for Full-key Recovery 📺
Guessing entropy (GE) is a widely adopted metric that measures the average computational cost needed for a successful side-channel analysis (SCA). However, with current estimation methods, where the evaluator has to average the correct key rank over many independent side-channel leakage measurement sets, full-key GE estimation is impractical due to its prohibitive computing requirement. A recent estimation method based on posterior probabilities, although scalable, is not accurate. We propose a new guessing entropy estimation algorithm (GEEA) based on theoretical distributions of the ranking score vectors. By discovering the relationship of GE with pairwise success rates and utilizing it, GEEA uses a sum of many univariate Gaussian probabilities instead of multivariate Gaussian probabilities, significantly improving the computation efficiency. We show that GEEA is more accurate and efficient than all current GE estimations. To the best of our knowledge, it is the only practical full-key GE evaluation on given experimental data sets to which the evaluator has access. Moreover, it can accurately predict the GE for sizes larger than the experimental data sets, providing comprehensive security evaluation.
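The costly baseline that GEEA replaces can be sketched in a few lines: estimate GE by averaging the correct key's rank over many independent experiments (score_fn is a user-supplied distinguisher; the snippet illustrates the baseline, not GEEA itself).

```python
import numpy as np

def empirical_guessing_entropy(score_fn, trace_sets, correct_key, n_keys=256):
    """Baseline GE estimation: rank the correct key by its distinguisher
    score in each independent experiment, then average the ranks."""
    ranks = []
    for traces in trace_sets:                        # independent repetitions
        scores = np.array([score_fn(traces, k) for k in range(n_keys)])
        order = np.argsort(scores)[::-1]             # best guess first
        ranks.append(int(np.where(order == correct_key)[0][0]) + 1)
    return float(np.mean(ranks))
```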
2020
TCHES
Highly Efficient Architecture of NewHope-NIST on FPGA using Low-Complexity NTT/INTT 📺
NewHope-NIST is a promising ring learning with errors (RLWE)-based post-quantum cryptography (PQC) scheme for key encapsulation mechanisms. Its performance on the field-programmable gate array (FPGA) affects the applicability of NewHope-NIST. In RLWE-based PQC algorithms, the number theoretic transform (NTT) is one of the most time-consuming operations. In this paper, low-complexity NTT and inverse NTT (INTT) are used to implement a highly efficient NewHope-NIST on FPGA. First, both the pre-processing of the NTT and the post-processing of the INTT are merged into the fast Fourier transform (FFT) algorithm, which saves N and 2N modular multiplications for the N-point NTT and INTT, respectively. Second, a compact butterfly unit and an efficient modular reduction on the modulus 12289 are proposed for the low-complexity NTT/INTT architecture, which achieves an improvement of approximately 3× in the area-time product (ATP) compared with the results of the state-of-the-art designs. Finally, a highly efficient architecture with doubled bandwidth and timing hiding for NewHope-NIST is presented. The implementation results on an FPGA show that our design is at least 2.5× faster and has a 4.9× smaller ATP compared with the results of the state-of-the-art designs of NewHope-NIST on similar platforms.
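The merging referred to here is standard for negacyclic convolution: with $\omega$ a primitive $N$-th root of unity modulo $q$ and $\psi^2 = \omega$, the pre-scaling of the inputs by powers of $\psi$ folds into the twiddle factors, since \[ \hat{a}_i = \sum_{j=0}^{N-1} \psi^{j} a_j \,\omega^{ij} = \sum_{j=0}^{N-1} a_j \,\psi^{(2i+1)j} \pmod{q}, \] and, symmetrically, the post-scaling by $\psi^{-i}$ folds into the inverse transform; this is what removes the $N$ and $2N$ standalone modular multiplications mentioned above.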
2020
TCHES
FENL: an ISE to mitigate analogue micro-architectural leakage 📺
Ge et al. [GYH18] propose the augmented ISA (or aISA), a central tenet of which is the selective exposure of micro-architectural resources via a less opaque abstraction than normal. The aISA proposal is motivated by the need for control over such resources, for example to implement robust countermeasures against microarchitectural attacks. In this paper, we apply an aISA-style approach to challenges stemming from analogue micro-architectural leakage; examples include power-based Hamming weight and distance leakage from relatively fine-grained resources (e.g., pipeline registers), which are not exposed in, and so cannot be reliably controlled via, a normal ISA. Specifically, we design, implement, and evaluate an ISE named FENL: the ISE acts as a fence for leakage, preventing interaction between, and hence leakage from, instructions before and after it in program order. We demonstrate that the implementation and use of FENL has relatively low overhead, and represents an effective tool for systematically localising and reducing leakage.
2020
TCHES
Dismantling DST80-based Immobiliser Systems 📺
Car manufacturers deploy vehicle immobiliser systems in order to prevent car theft. However, in many cases the underlying cryptographic primitives used to authenticate a transponder are proprietary in nature and thus not open to public scrutiny. In this paper we publish the proprietary Texas Instruments DST80 cipher used in immobilisers of several manufacturers. Additionally, we expose serious flaws in immobiliser systems of major car manufacturers such as Toyota, Kia, Hyundai and Tesla. Specifically, by voltage glitching the firmware protection mechanisms of the microcontroller, we extracted the firmware from several immobiliser ECUs and reverse engineered the key diversification schemes employed within. We discovered that Kia and Hyundai immobiliser keys have only three bytes of entropy and that Toyota only relies on publicly readable information such as the transponder serial number and three constants to generate cryptographic keys. Furthermore, we present several practical attacks which can lead to recovering the full 80-bit cryptographic key in a matter of seconds or permanently disabling the transponder. Finally, even without key management or configuration issues, we demonstrate how an attacker can recover the cryptographic key using a profiled side-channel attack. We target the key loading procedure and investigate the practical applicability in the context of portability. Our work once again highlights the issues automotive vendors face in implementing cryptography securely.
2020
TCHES
Efficient and Private Computations with Code-Based Masking 📺
Code-based masking is a very general type of masking scheme that covers Boolean masking, inner product masking, direct sum masking, and so on. The merits of the generalization are twofold. Firstly, the higher algebraic complexity of the sharing function decreases the information leakage in “low noise conditions” and may increase the “statistical security order” of an implementation (with linear leakages). Secondly, the underlying error-correction codes can offer improved fault resistance for the encoded variables. Nevertheless, this higher algebraic complexity also implies additional challenges. On the one hand, a generic multiplication algorithm applicable to any linear code is still unknown. On the other hand, masking schemes with higher algebraic complexity usually come with implementation overheads, as for example witnessed by inner-product masking. In this paper, we contribute to these challenges in two directions. Firstly, we propose a generic algorithm that allows us (to the best of our knowledge for the first time) to compute on data shared with linear codes. Secondly, we introduce a new amortization technique that can significantly mitigate the implementation overheads of code-based masking, and illustrate this claim with a case study. Precisely, we show that, although performing every single code-based masked operation is relatively complex, processing multiple secrets in parallel leads to much better performances. This property enables code-based masked implementations of the AES to compete with the state-of-the-art in randomness complexity. Since our masked operations can be instantiated with various linear codes, we hope that these investigations open new avenues for the study of code-based masking schemes, by specializing the codes for improved performances, better side-channel security or improved fault tolerance.
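As a concrete member of this family, inner-product masking shares a secret x in GF(2^8) as a vector s satisfying x = Σ_i L[i]·s[i] for a fixed public vector L with L[0] = 1. The minimal sketch below (my own illustration, using the AES field polynomial) shows only encoding and decoding; the paper's contribution, computing on sharings defined by arbitrary linear codes, is substantially harder.

```python
# Inner-product masking over GF(2^8): one instance of code-based masking.
import secrets

def gf_mul(a, b):
    # carry-less multiply reduced by the AES polynomial x^8+x^4+x^3+x+1
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

L = [1, 0x03, 0x05]             # public masking vector, L[0] = 1

def share(x):
    s = [secrets.randbelow(256) for _ in L]
    acc = 0
    for li, si in zip(L[1:], s[1:]):
        acc ^= gf_mul(li, si)
    s[0] = x ^ acc              # fix s[0] so the inner product equals x
    return s

def unshare(s):
    acc = 0
    for li, si in zip(L, s):
        acc ^= gf_mul(li, si)
    return acc

x = 0xC7
assert unshare(share(x)) == x   # correct sharing; each share alone is uniform
```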
2020
TCHES
Persistent Fault Attack in Practice 📺
Persistent fault analysis (PFA) is a novel fault analysis technique proposed at CHES 2018 and demonstrated with rowhammer-based fault injections. However, whether such an analysis can be applied to traditional fault attack scenarios, and how difficult it is in practice, has not been carefully investigated. In this paper, for the first time, a persistent fault attack is conducted on an unprotected AES implemented on an ATmega163L microcontroller. Several critical challenges are solved with our new improvements, including (1) how to decide whether the fault is injected in the SBox; (2) how to use maximum likelihood estimation to minimise the number of required ciphertexts; (3) how to utilize an unknown fault in the SBox to extract the key. Our experiments show that, to break AES with physical laser injections despite all these challenges, the minimum and average numbers of required ciphertexts are 926 and 1641, respectively. This is a reduction of about 38% and 28% compared with the 1493 and 2273 ciphertexts required in previous work, where both the fault value and location have to be known. Furthermore, our analysis is extended to the PRESENT cipher: by applying persistent fault analysis to the penultimate round, the full 80-bit PRESENT key can be recovered. Finally, an experimental validation is performed to confirm the accuracy of our attack and provide further insights. This paper addresses most of the practical challenges and also demonstrates the feasibility and universality of PFA on SPN block ciphers.
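The core PFA observation is simple to state for the last round of an SPN: if a persistent fault removes the value v_miss from the S-box image, then a last-round ciphertext byte c = S(x) ⊕ k can never take the value v_miss ⊕ k, so the key byte can be read off from whichever ciphertext value never occurs. A minimal sketch, assuming the faulted value is known (the paper additionally handles unknown faults via maximum likelihood estimation):

```python
# Minimal PFA key-byte recovery from the "missing" ciphertext value.
def pfa_recover_key_byte(ct_bytes, v_miss):
    """ct_bytes: the same byte position collected over many faulty
    ciphertexts; v_miss: the S-box output value removed by the fault."""
    seen = set(ct_bytes)
    missing = [c for c in range(256) if c not in seen]
    if len(missing) == 1:               # enough ciphertexts collected
        return missing[0] ^ v_miss      # c = S(x) ^ k never hits v_miss ^ k
    return None                         # ambiguous: collect more ciphertexts
```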
2020
TCHES
When one vulnerable primitive turns viral: Novel single-trace attacks on ECDSA and RSA 📺
Microarchitecture-based side-channel attacks are common threats nowadays. Intel SGX technology provides strong isolation from an adversarial OS; however, it does not guarantee protection against side-channel attacks. In this paper, we analyze the security of the mbedTLS binary GCD algorithm, an implementation that offers interesting challenges compared with, for example, OpenSSL, due to the very tight loops in the former. Using practical experiments, we demonstrate that the mbedTLS binary GCD implementation is vulnerable to side-channel analysis using the SGX-Step framework against mbedTLS-based SGX enclaves. We analyze the security of some use cases of this algorithm in this library, resulting in the discovery of a new vulnerability in the ECDSA code path that allows a single-trace attack against this implementation. This vulnerability is interesting for three reasons: it resides in the implementation of a countermeasure, which makes it more dangerous due to the false sense of security the countermeasure currently offers; it reduces mbedTLS ECDSA security to an integer factorization problem; and the countermeasure is compromised by an unexpected GCD call inside the ECDSA code path. We also cover an orthogonal use case, this time inside the mbedTLS RSA code path during the computation of a CRT parameter when loading a private key. This attack also exploits the vulnerable binary GCD implementation, showing how a single vulnerable primitive leads to multiple vulnerabilities. We demonstrate both security threats with end-to-end attacks using 1000 trials each, showing that in both cases single-trace attacks achieve success rates very close to 100%.
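The shared primitive behind both attacks is the classic binary GCD (Stein's algorithm), whose per-iteration control flow, halve versus swap-and-subtract, depends directly on operand bits; a single trace of which branches execute therefore reveals the operands. A simplified Python rendering of the structure (not the mbedTLS code itself), with the leaking branches annotated:

```python
# Stein's binary GCD with an explicit branch trace: in the single-trace
# attacks, each recorded branch corresponds to observable control flow.
def binary_gcd(u, v, trace):
    g = 0
    while u & 1 == 0 and v & 1 == 0:
        u >>= 1; v >>= 1; g += 1        # strip common factors of two
    while v != 0:
        while u & 1 == 0:
            u >>= 1
            trace.append("halve-u")     # taken iff the low bit of u is 0
        while v & 1 == 0:
            v >>= 1
            trace.append("halve-v")     # taken iff the low bit of v is 0
        if u > v:
            u, v = v, u
            trace.append("swap")        # comparison outcome also observable
        v -= u                          # both odd here, so v - u is even
        trace.append("sub")
    return u << g

t = []
print(binary_gcd(40, 24, t))            # 8
print(t)                                # the branch sequence an attacker sees
```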
2020
TCHES
Time-memory trade-off in Toom-Cook multiplication: an application to module-lattice based cryptography 📺
Since the introduction of the ring learning with errors problem, polynomial multiplication based on the number theoretic transform (NTT) has been studied extensively. Due to its quasilinear time complexity, it has been the preferred choice of cryptographers for realizing ring learning with errors schemes. Compared to the NTT, Toom-Cook or Karatsuba based polynomial multiplication algorithms, although known for a long time, still have only a fledgling presence in the context of post-quantum cryptography. In this work, we observe that the pre- and post-processing steps in Toom-Cook based multiplication can be expressed as linear transformations. Based on this observation, we propose two novel techniques that increase the efficiency of Toom-Cook based polynomial multiplication: the cost of evaluation is reduced by a factor of 2 with a method we call precomputation, and the cost of interpolation is reduced from quadratic to linear with a method we call lazy interpolation. As a practical application, we apply our algorithms to the Saber post-quantum key encapsulation mechanism and discuss the various implementation aspects in detail. We show that our algorithms improve the efficiency of the computationally costly matrix-vector multiplication by 12-37% compared with previous methods on their respective platforms. Additionally, we propose different methods to reduce the memory footprint of Saber on Cortex-M4 microcontrollers. Our implementations reduce memory usage by between 2.6 and 5.7 KB compared with the smallest implementation in the literature.
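The linear-transformation view is easy to make concrete: Toom-3 evaluation of the three coefficient blocks at the points {0, 1, -1, 2, ∞} is multiplication by a fixed matrix, and interpolation is the inverse linear map, so the evaluations of an operand that is reused across many products (as in Saber's matrix-vector multiplication) can be cached, which is the essence of precomputation. Below is a toy integer-coefficient sketch with NumPy, illustrative only (Saber works with polynomials modulo a power-of-two modulus):

```python
# Toom-3 as two linear maps around a pointwise product (blocks of size 1).
import numpy as np

# evaluation of (a0, a1, a2) at x in {0, 1, -1, 2, inf}
E = np.array([[1,  0, 0],
              [1,  1, 1],
              [1, -1, 1],
              [1,  2, 4],
              [0,  0, 1]])

# values of the degree-4 product c(x) at the same points
V = np.array([[1,  0, 0,  0,  0],
              [1,  1, 1,  1,  1],
              [1, -1, 1, -1,  1],
              [1,  2, 4,  8, 16],
              [0,  0, 0,  0,  1]])

def toom3_mul(a_blocks, b_blocks):
    ea = E @ np.array(a_blocks)     # cacheable if a is reused (precomputation)
    eb = E @ np.array(b_blocks)
    w = ea * eb                     # 5 pointwise products
    return np.linalg.solve(V, w)    # interpolation: recover c0..c4

# (2 + 3x)(1 + x) -> 2 + 5x + 3x^2
print(toom3_mul([2, 3, 0], [1, 1, 0]))   # ~[2, 5, 3, 0, 0]
```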
2020
TCHES
A Compact and Scalable Hardware/Software Co-design of SIKE 📺
We present efficient and compact hardware/software co-design implementations of the Supersingular Isogeny Key Encapsulation (SIKE) protocol on field-programmable gate arrays (FPGAs). In order to be better equipped for different post-quantum scenarios, our architectures were designed for high flexibility, covering all currently available parameter sets and supporting primes up to 1016 bits. In particular, any of the current SIKE parameters equivalent to the post-quantum security of AES-128/192/256 and SHA3-256 can be selected and run on the fly. This security scalability, together with the small footprint and efficiency of our architectures, makes them ideal for embedded applications in a post-quantum world. In addition, the proposed implementations exhibit regular, constant-time execution, which provides protection against timing and simple side-channel attacks. Our results demonstrate that supersingular isogeny-based primitives such as SIDH and SIKE can indeed be deployed for embedded applications with competitive performance. For example, our smallest architecture, based on a 128-bit MAC unit, takes only 3415 slices, 21 BRAMs and 57 DSPs on a Virtex 7 690T and can perform key generation, encapsulation and decapsulation in 14.4, 24.4 and 26.0 milliseconds for SIKEp434 and in 52.3, 86.4 and 93.2 milliseconds for SIKEp751, respectively.
2020
TCHES
FEDS: Comprehensive Fault Attack Exploitability Detection for Software Implementations of Block Ciphers 📺
Fault injection attacks are among the most powerful forms of cryptanalytic attacks on ciphers. A single, precisely injected fault during the execution of a cipher like AES can completely reveal the key within a few milliseconds. Software implementations of ciphers therefore need to be thoroughly evaluated for such attacks. In recent years, automated tools have been developed to perform these evaluations, working either on the cipher algorithm or on its implementations. Tools that work at the algorithm level can provide a comprehensive assessment of fault attack vulnerability for different fault attacks and fault models. Their application is, however, restricted because every realization of a cipher has unique vulnerabilities. On the other hand, tools that work on cipher implementations have a much wider application but are often restricted in the range of fault attacks and the number of fault models they can evaluate. In this paper, we propose a framework, called FEDS, that uses a combination of compiler techniques and model checking to merge the advantages of both algorithm-level and implementation-level tools. Like algorithm-level tools, FEDS can provide a comprehensive assessment of fault attack exploitability considering a wide range of fault attacks and fault models. Like implementation-level tools, FEDS works with implementations and therefore has wide application. We demonstrate the versatility of FEDS by evaluating seven different implementations of AES (including a bitsliced implementation) and implementations of CLEFIA and CAMELLIA against differential fault attacks. The framework automatically identifies exploitable instructions in all implementations. Further, we present an application of FEDS in a Fault Attack Aware Compiler that can automatically identify and protect exploitable regions of code. We demonstrate that the compiler can generate significantly more efficient code than a naïvely protected equivalent while maintaining the same level of protection.
2020
TCHES
Low-Latency Hardware Masking with Application to AES 📺
During the past two decades there has been a great deal of research published on masked hardware implementations of AES and other cryptographic primitives. Unfortunately, many hardware masking techniques can lead to increased latency compared to unprotected circuits for algorithms such as AES, due to the high degree of the nonlinear functions in their designs. In this paper, we present a hardware masking technique which does not increase the latency for such algorithms. It is based on the LUT-based Masked Dual-Rail with Pre-charge Logic (LMDPL) technique presented at CHES 2014. First, we show 1-glitch extended strong noninterference of a nonlinear LMDPL gadget under the 1-glitch extended probing model. We then use this knowledge to design an AES implementation which computes a full