International Association for Cryptologic Research

CryptoDB

Recent videos of IACR talks

Year
Venue
Title
2020
CRYPTO
Compressed Sigma-Protocol Theory and Practical Application to Plug & Play Secure Algorithmics 📺
Sigma-Protocols provide a well-understood basis for secure algorithmics. Recently, Bulletproofs (Bootle et al., EUROCRYPT 2016, and Bünz et al., S&P 2018) have been proposed as a drop-in replacement for zero-knowledge (ZK) proofs for arithmetic circuits, achieving logarithmic communication instead of linear. Their pivot is an ingenious, logarithmic-size proof of knowledge (BP) for certain quadratic relations. However, reducing ZK for general relations to it forces a somewhat cumbersome "reinvention" of cryptographic protocol theory. We take a rather different viewpoint and reconcile Bulletproofs with Sigma-Protocol theory such that (a) simpler circuit ZK is developed within established theory, while (b) achieving exactly the same logarithmic communication. The natural key here is linearization. First, we repurpose BPs as a black-box compression mechanism for standard Sigma-Protocols handling ZK proofs of general linear relations (on compactly committed secret vectors); this is our pivot. Second, we reduce the case of general nonlinear relations to black-box applications of our pivot via a novel variation on arithmetic secret-sharing-based techniques for Sigma-Protocols (Cramer et al., ICITS 2012). Orthogonally, we enhance versatility by enabling scenarios not previously addressed, e.g., when a secret input is dispersed across several commitments. Standard implementation platforms leading to logarithmic communication follow from a Discrete-Log assumption or a generalized Strong-RSA assumption. Moreover, under a Knowledge-of-Exponent Assumption (KEA), communication drops to constant, as in ZK-SNARKs. All in all, our theory should be more generally useful for the modular ("plug & play") design of practical cryptographic protocols; this is further evidenced by our separate work (2020) on proofs of partial knowledge.
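
For readers unfamiliar with the base notion, the sketch below shows the classical three-move Sigma-protocol pattern that the paper compresses: Schnorr's proof of knowledge of a discrete logarithm, over a deliberately tiny group. It illustrates the standard building block only, not the paper's compressed protocol; all parameters are toy values of our choosing.

```python
import secrets

# Toy prime-order group: p = 2*q + 1, g generates the subgroup of order q.
p, q, g = 23, 11, 2            # illustrative only; real protocols use ~256-bit q

w = 7                          # prover's secret witness
h = pow(g, w, p)               # public statement: h = g^w

# Prover -> Verifier: commitment a = g^r
r = secrets.randbelow(q)
a = pow(g, r, p)

# Verifier -> Prover: random challenge e
e = secrets.randbelow(q)

# Prover -> Verifier: response, linear in the witness (the linear structure
# that compression mechanisms exploit)
z = (r + e * w) % q

# Verifier accepts iff g^z == a * h^e (mod p)
assert pow(g, z, p) == (a * pow(h, e, p)) % p
```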
2020
CRYPTO
New Techniques for Traitor Tracing: Size N^{1/3} and More from Pairings 📺
The best existing traitor tracing scheme from pairings achieves $O(\sqrt{N})$-sized parameters, a bound that has stood since 2006. This intuitively seems consistent with the fact that pairings allow for degree-2 computations, yielding a quadratic compression. In this work, we show that this intuition is false by building a traitor tracing scheme from pairings with $O(\sqrt[3]{N})$-sized parameters. We obtain our scheme by developing a number of new traitor tracing techniques offering various trade-offs that were not possible before, giving the first significant parameter improvements in pairings-based traitor tracing in over a decade.
2020
CRYPTO
New Techniques for Zero-Knowledge: Leveraging Inefficient Provers to Reduce Assumptions, Interaction, and Trust 📺
We present a transformation from NIZK with inefficient provers in the uniform random string (URS) model to ZAPs (two-message witness-indistinguishable proofs) with inefficient provers. While such a transformation was known for the case where the prover is efficient, the security proof breaks down if the prover is inefficient. Our transformation is obtained via new applications of Nisan-Wigderson designs, a combinatorial object originally introduced in the derandomization literature. We observe that our transformation is applicable both in the setting of super-polynomial provers/poly-time adversaries, as well as in a new fine-grained setting, where the prover is polynomial time and the verifier/simulator/zero-knowledge distinguisher are in a lower complexity class, such as $\mathsf{NC}^1$. We also present $\mathsf{NC}^1$-fine-grained NIZK in the URS model for all of NP from the worst-case assumption $\oplus \mathsf{L}/\mathsf{poly} \not\subseteq \mathsf{NC}^1$. Our techniques yield the following applications:
- ZAPs for $\mathsf{AM}$ from Minicrypt assumptions (with super-polynomial-time provers),
- $\mathsf{NC}^1$-fine-grained ZAPs for $\mathsf{NP}$ from worst-case assumptions,
- protocols achieving an "offline" notion of NIZK (oNIZK) in the standard (no-CRS) model with uniform soundness, in both the super-polynomial setting (from Minicrypt assumptions) and the $\mathsf{NC}^1$-fine-grained setting (from worst-case assumptions).
The oNIZK notion is sufficient for use in indistinguishability-based proofs.
2020
CRYPTO
On Succinct Arguments and Witness Encryption from Groups 📺
Succinct non-interactive arguments (SNARGs) enable proofs of NP statements with very low communication. Recently, there has been significant work in both theory and practice on constructing SNARGs with very short proofs. Currently, the state-of-the-art in succinctness is due to Groth (Eurocrypt 2016) who constructed a SNARG from bilinear maps where the proof consists of just 3 group elements. In this work, we first construct a concretely-efficient designated-verifier (preprocessing) SNARG with inverse polynomial soundness, where the proof consists of just 2 group elements in a standard (generic) group. This leads to a 50% reduction in concrete proof size compared to Groth's construction. We follow the approach of Bitansky et al. (TCC 2013) who describe a compiler from linear PCPs to SNARGs in the preprocessing model. Our improvement is based on a new linear PCP packing technique that allows us to construct 1-query linear PCPs which can then be compiled into a SNARG (using ElGamal encryption over a generic group). An appealing feature of our new SNARG is that the verifier can precompute a statement-independent lookup table in an offline phase; verifying proofs then only requires 2 exponentiations and a single table lookup. This makes our new designated-verifier SNARG appealing in settings that demand fast verification and minimal communication. We then turn to the question of constructing arguments where the proof consists of a single group element. Here, we first show that any (possibly interactive) argument for a language L where the verification algorithm is "generic" (i.e., only performs generic group operations) and the proof consists of a single group element, implies a witness encryption scheme for L. We then show that under a yet-unproven, but highly plausible, hypothesis on the hardness of approximating the minimal distance of linear codes, we can construct a 2-message laconic argument for NP where the proof consists of a single group element. Under the same hypothesis, we obtain a witness encryption scheme for NP in the generic group model. Along the way, we show that under a conceptually-similar but proven hardness of approximation result, there is a 2-message laconic argument for NP with negligible soundness error where the prover's message consists of just 2 group elements. In both settings, we obtain laconic arguments (and linear PCPs) with linear decision procedures. Our constructions circumvent a previous lower bound by Groth on such argument systems with linear decision procedures by relying on imperfect completeness. Namely, our constructions have vanishing but not negligible completeness error, while the lower bound of Groth implicitly assumes negligible completeness error of the underlying argument. Our techniques thus highlight new avenues for designing linear PCPs, succinct arguments, and witness encryption schemes.
2020
CRYPTO
Cryptanalytic Extraction of Neural Network Models 📺
We argue that the machine learning problem of model extraction is actually a cryptanalytic problem in disguise, and should be studied as such. Given oracle access to a neural network, we introduce a differential attack that can efficiently steal the parameters of the remote model up to floating point precision. Our attack relies on the fact that ReLU neural networks are piecewise linear functions, and thus queries at the critical points reveal information about the model parameters. We evaluate our attack on multiple neural network models and extract models that are $2^{20}$ times more precise and require $100\times$ fewer queries than prior work. For example, we extract a 100,000-parameter neural network trained on the MNIST digit recognition task with $2^{21.5}$ queries in under an hour, such that the extracted model agrees with the oracle on all inputs up to a worst-case error of $2^{-25}$, or a model with 4,000 parameters in $2^{18.5}$ queries with worst-case error of $2^{-40.4}$. Code is available at https://github.com/google-research/cryptanalytic-model-extraction.
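
To make the core idea concrete, here is a hedged sketch of the attack's basic subroutine: because a ReLU network is piecewise linear, a "critical point" (where some ReLU flips sign) along a segment can be located by bisection using only oracle queries. The toy network, tolerances, and single-kink assumption below are ours for illustration, not the paper's exact procedure.

```python
import numpy as np

# Toy stand-in for the remote model: one hidden ReLU layer, query access only.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
w2 = rng.normal(size=4)

def oracle(x):
    return float(w2 @ np.maximum(W1 @ x + b1, 0.0))

def is_linear(a, b):
    """Midpoint test: a piecewise-linear map is affine on [a, b] exactly
    when the midpoint value matches the endpoint average (generically)."""
    m = (a + b) / 2
    return abs(oracle(m) - 0.5 * (oracle(a) + oracle(b))) < 1e-12

def find_critical_point(x0, x1, tol=1e-6):
    """Bisect toward a point where some ReLU changes sign on the segment,
    assuming a kink lies on it (subdivide the segment first if unsure)."""
    while np.linalg.norm(x1 - x0) > tol:
        mid = (x0 + x1) / 2
        if is_linear(x0, mid):
            x0 = mid            # the kink lies in the right half
        else:
            x1 = mid
    return (x0 + x1) / 2

x_star = find_critical_point(np.array([-3.0, -3.0]), np.array([3.0, 3.0]))
```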
2020
CRYPTO
Dynamic Decentralized Functional Encryption 📺
We introduce Dynamic Decentralized Functional Encryption (DDFE), a generalization of Functional Encryption which allows multiple users to join the system dynamically, without relying on a trusted third party or on expensive and interactive Multi-Party Computation protocols. This notion subsumes existing multi-user extensions of Functional Encryption, such as Multi-Input, Multi-Client, and Ad Hoc Multi-Input Functional Encryption. We define and construct schemes for various functionalities which serve as building blocks for the latter primitives and may be useful in their own right, such as a scheme for dynamically computing sums in any Abelian group. These constructions build upon simple primitives in a modular way, and have instantiations from well-studied assumptions, such as DDH or LWE. Our constructions culminate in an Inner-Product scheme for computing weighted sums on aggregated encrypted data, from standard assumptions in prime-order groups in the Random Oracle Model.
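
As a flavor of how sums can be computed without a trusted party, the sketch below shows the folklore pairwise-masking trick: every pair of parties shares a key, and the derived masks cancel in the total. This illustrates the sum functionality only; the paper's DDFE constructions and their security model are substantially more involved.

```python
import hashlib, secrets

# Pairwise-masking sum over Z_{2^64} (any abelian group works).
M = 2**64
parties = [0, 1, 2]
pair_key = {(i, j): secrets.token_bytes(16)
            for i in parties for j in parties if i < j}

def mask(key, label):
    # Per-label mask derived from the shared pairwise key.
    return int.from_bytes(hashlib.sha256(key + label).digest()[:8], "big")

def encrypt_share(i, x_i, label):
    c = x_i
    for j in parties:
        if j == i:
            continue
        k = pair_key[(min(i, j), max(i, j))]
        # The lower-indexed party adds the mask, the higher one subtracts it,
        # so each pair's contribution cancels in the sum of all ciphertexts.
        c += mask(k, label) if i < j else -mask(k, label)
    return c % M

label = b"round-1"
inputs = {0: 5, 1: 11, 2: 29}
ciphertexts = [encrypt_share(i, inputs[i], label) for i in parties]
assert sum(ciphertexts) % M == sum(inputs.values()) % M   # masks cancel
```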
2020
CRYPTO
Slide Reduction, Revisited—Filling the Gaps in SVP Approximation 📺
We show how to generalize Gama and Nguyen's slide reduction algorithm [STOC '08] for solving the approximate Shortest Vector Problem over lattices (SVP) to allow for arbitrary block sizes, rather than just block sizes that divide the rank $n$ of the lattice. This leads to significantly better running times for most approximation factors. We accomplish this by combining slide reduction with the DBKZ algorithm of Micciancio and Walter [Eurocrypt '16]. We also show a different algorithm that works when the block size is quite large (at least half the total rank). This yields the first non-trivial algorithm for sublinear approximation factors. Together with some additional optimizations, these results yield significantly faster provably correct algorithms for $\delta$-approximate SVP for all approximation factors $n^{1/2+\epsilon} \leq \delta \leq n^{O(1)}$, which is the regime most relevant for cryptography. For the specific values of $\delta = n^{1-\epsilon}$ and $\delta = n^{2-\epsilon}$, we improve the exponent in the running time by a factor of 2 and a factor of 1.5, respectively.
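
For intuition about what block-wise reduction builds on, the snippet below implements the rank-2 base case, Lagrange-Gauss reduction, which solves SVP exactly in dimension 2. It is a classroom illustration of lattice reduction, not the paper's slide-reduction algorithm.

```python
def lagrange_gauss(u, v):
    """Return a shortest basis of the rank-2 integer lattice spanned by u, v."""
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    while True:
        if dot(u, u) > dot(v, v):
            u, v = v, u                      # keep u the shorter vector
        m = round(dot(u, v) / dot(u, u))     # size-reduce v against u
        if m == 0:
            return u, v                      # reduced: u is a shortest vector
        v = (v[0] - m * u[0], v[1] - m * u[1])

u, v = lagrange_gauss((31, 59), (37, 71))    # returns ((1, -1), (9, 9))
```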
2020
CRYPTO
Practical Product Proofs for Lattice Commitments 📺
We construct a practical lattice-based zero-knowledge argument for proving multiplicative relations between committed values. The underlying commitment scheme that we use is the currently most efficient one of Baum et al. (SCN 2018), and the size of our multiplicative proof is only slightly larger than that of a proof of knowledge of the committed values alone. We additionally improve on the results of Lyubashevsky and Seiler (Eurocrypt 2018) to show that the above-mentioned techniques can work over rings $\mathbb{Z}_q[X]/(X^d+1)$ where $X^d+1$ splits into low-degree factors, a property necessary for many applications. In particular, we use Fourier analysis to show that the NTT coefficients of random small-norm challenges are not concentrated on any particular value.
2020
CRYPTO
Anonymous Tokens with Private Metadata Bit 📺
We present a cryptographic construction for anonymous tokens with a private metadata bit, called PMBTokens. This primitive enables an issuer to provide a user with a lightweight, single-use anonymous trust token that can embed a single private bit, which is accessible only to the party holding the secret authority key and is private with respect to anyone else. Our construction generalizes and extends the functionality of Privacy Pass (PETS '18) with this private metadata bit capability. It provides unforgeability, unlinkability, and privacy of the metadata bit, based on the DDH and CTDH assumptions in the random oracle model. Both Privacy Pass and PMBTokens rely on non-interactive zero-knowledge proofs (NIZKs). We present new techniques to remove the need for NIZKs while still achieving unlinkability. We implement our constructions and report their efficiency costs.
2020
CRYPTO
Round-optimal Black-box Commit-and-prove with Succinct Communication 📺
We give a four-round black-box construction of a commit-and-prove protocol with succinct communication. Our construction is witness indistinguishable (WI) and has constant soundness error, and it can be upgraded into one that is ZK with negligible soundness error by relying on a round-preserving transformation of Khurana et al. (TCC 2018). Our construction is obtained by combining the MPC-in-the-head technique of Ishai et al. (SICOMP 2009) with the two-round succinct argument of Kalai et al. (STOC 2014), and the main technical novelty lies in the analysis of the soundness: we show that, although the succinct argument of Kalai et al. does not necessarily provide soundness for NP statements, it can be used in the MPC-in-the-head technique for proving the consistency of committed MPC views. Our construction is based on sub-exponentially hard collision-resistant hash functions, two-round PIRs, and two-round OTs.
2020
CRYPTO
Black-Box Transformations from Passive to Covert Security with Public Verifiability 📺
In the context of secure computation, protocols with security against covert adversaries ensure that any misbehavior by malicious parties will be detected by the honest parties with some constant probability. As such, these protocols provide better security guarantees than passively secure protocols and, moreover, are easier to construct than protocols with full security against active adversaries. Protocols that, upon detecting a cheating attempt, allow the honest parties to compute a certificate that enables third parties to verify whether an accused party misbehaved or not are called publicly verifiable. In this work, we present the first generic compilers for constructing two-party protocols with covert security and public verifiability from protocols with passive security. We present two separate compilers, which are both fully black-box in the underlying protocols they use. Both of them only incur a constant multiplicative factor in terms of bandwidth overhead and a constant additive factor in terms of round complexity on top of the passively secure protocols they use. The first compiler applies to all two-party protocols that have no private inputs. This class of protocols covers the important class of preprocessing protocols that are used to set up correlated randomness among parties. We use our compiler to obtain the first secret-sharing based two-party protocol with covert security and public verifiability. Notably, the produced protocol achieves public verifiability essentially for free when compared with the best known previous solutions based on secret sharing that did not provide public verifiability. Our second compiler constructs protocols with covert security and public verifiability for arbitrary functionalities from passively secure protocols. It uses our first compiler to perform a setup phase, which is independent of the parties' inputs as well as the protocol they would like to execute. Finally, we show how to extend our techniques to obtain multiparty computation protocols with covert security and public verifiability against arbitrary constant fractions of corruptions.
2020
CRYPTO
Black-box use of One-way Functions is Useless for Optimal Fair Coin-Tossing 📺
A two-party fair coin-tossing protocol guarantees output delivery to the honest party even when the other party aborts during the protocol execution. Cleve (STOC 1986) demonstrated that a computationally bounded fail-stop adversary could alter the output distribution of the honest party by (roughly) $1/r$ (in the statistical distance) in an $r$-message coin-tossing protocol. An optimal fair coin-tossing protocol ensures that no adversary can alter the output distribution beyond $1/r$. In a seminal result, Moran, Naor, and Segev (TCC 2009) constructed the first optimal fair coin-tossing protocol using (unfair) oblivious transfer protocols. Whether the existence of oblivious transfer protocols is a necessary hardness of computation assumption for optimal fair coin-tossing remains among the most fundamental open problems in theoretical cryptography. The results of Impagliazzo and Luby (FOCS 1989) and Cleve and Impagliazzo (1993) prove that optimal fair coin-tossing implies the necessity of one-way functions' existence; a significantly weaker hardness of computation assumption compared to the existence of secure oblivious transfer protocols. However, the sufficiency of the existence of one-way functions is not known. Towards this research endeavor, our work proves a black-box separation of optimal fair coin-tossing from the existence of one-way functions. That is, the black-box use of one-way functions cannot enable optimal fair coin-tossing. Following the standard Impagliazzo and Rudich (STOC 1989) approach of proving black-box separations, our work considers any $r$-message fair coin-tossing protocol in the random oracle model where the parties have unbounded computational power. We demonstrate a fail-stop attack strategy for one of the parties to alter the honest party's output distribution by $1/\sqrt{r}$ by making polynomially-many additional queries to the random oracle. As a consequence, our result proves that the $r$-message coin-tossing protocol of Blum (COMPCON 1982) and Cleve (STOC 1986), which uses one-way functions in a black-box manner, is the best possible protocol, because an adversary cannot change the honest party's output distribution by more than $1/\sqrt{r}$. Several previous works, for example, Dachman-Soled, Lindell, Mahmoody, and Malkin (TCC 2011), Haitner, Omri, and Zarosim (TCC 2013), and Dachman-Soled, Mahmoody, and Malkin (TCC 2014), made partial progress on proving this black-box separation assuming some restrictions on the coin-tossing protocol. Our work diverges significantly from these previous approaches to prove this black-box separation in its full generality. The starting point is the recently introduced potential-based inductive proof techniques for demonstrating large gaps in martingales in the information-theoretic plain model. Our technical contribution lies in identifying a global invariant of communication protocols in the random oracle model that enables the extension of this technique to the random oracle model.
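
The protocol whose optimality this separation establishes is simple to state. Below is a hedged sketch of one round of Blum-style coin tossing, with SHA-256 standing in for the commitment built from a black-box one-way function; an $r$-message protocol iterates this idea, and aborting mid-protocol is exactly the fail-stop behavior the $1/\sqrt{r}$ bound concerns.

```python
import hashlib, secrets

def commit(bit, opening):
    # Hash-based commitment modeling the black-box one-way function.
    return hashlib.sha256(bytes([bit]) + opening).digest()

# Party A commits to a random bit.
a_bit, a_open = secrets.randbelow(2), secrets.token_bytes(16)
c = secrets_commit = commit(a_bit, a_open)

# Party B replies with its own random bit, seeing only the commitment.
b_bit = secrets.randbelow(2)

# A opens the commitment; both parties output the XOR.
assert commit(a_bit, a_open) == c
coin = a_bit ^ b_bit
```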
2020
CRYPTO
Order-Fairness for Byzantine Consensus 📺
Decades of research in both cryptography and distributed systems have extensively studied the problem of state machine replication, also known as Byzantine consensus. A consensus protocol must usually satisfy two properties: consistency and liveness. These properties ensure that honest participating nodes agree on the same log and dictate when fresh transactions get added. They fail, however, to protect against adversarial manipulation of the actual ordering of transactions in the log. Indeed, in leader-based protocols (almost all protocols used today), malicious leaders can directly choose the final transaction ordering. To rectify this problem, we propose a third consensus property: transaction order-fairness. We initiate the first formal investigation of order-fairness and explain its fundamental importance. We also provide several natural definitions for order-fairness and analyze the assumptions necessary to realize them. We also propose a new class of consensus protocols called Aequitas. Aequitas protocols are the first to achieve order-fairness in addition to consistency and liveness. They can be realized in a black-box way using existing broadcast and agreement primitives (or indeed using any consensus protocol), and work in both synchronous and asynchronous network models.
2020
CRYPTO
Universally Composable Relaxed Password Authenticated Key Exchange 📺
Protocols for password authenticated key exchange (PAKE) allow two parties who share only a weak password to agree on a cryptographic key. We revisit the notion of PAKE in the universal composability (UC) framework, and propose a relaxation of the PAKE functionality of Canetti et al. that we call lazy-extraction PAKE (lePAKE). Our relaxation allows the ideal-world adversary to postpone its password guess until after a session is complete. We argue that this relaxed notion still provides meaningful security in the password-only setting. As our main result, we show that several PAKE protocols that were previously only proven secure with respect to a "game-based" definition of security can be shown to UC-realize the lePAKE functionality in the random-oracle model. These include SPEKE, SPAKE2, and TBPEKE, the most efficient PAKE schemes currently known.
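
For orientation, here is a toy walkthrough of the SPAKE2 message flow, one of the schemes analyzed: each party blinds its Diffie-Hellman share with a password-dependent group element. The parameters below are tiny and purely illustrative; consult the SPAKE2 specification for the real protocol, including how M and N are derived and how the session key is hashed.

```python
import secrets, hashlib

# Toy subgroup of order q in Z_p^* (real SPAKE2 uses a ~256-bit group).
p, q, g = 23, 11, 2
M, N = 3, 9                  # fixed group elements with unknown discrete logs
pw = 6                       # shared low-entropy password, mapped into Z_q

inv = lambda a: pow(a, p - 2, p)   # modular inverse via Fermat (p prime)

# A -> B: X = g^x * M^pw   (the password blinds the Diffie-Hellman share)
x = secrets.randbelow(q) or 1
X = (pow(g, x, p) * pow(M, pw, p)) % p

# B -> A: Y = g^y * N^pw
y = secrets.randbelow(q) or 1
Y = (pow(g, y, p) * pow(N, pw, p)) % p

# Each side unblinds the peer's share and raises it to its own exponent.
K_A = pow(Y * inv(pow(N, pw, p)) % p, x, p)
K_B = pow(X * inv(pow(M, pw, p)) % p, y, p)
assert K_A == K_B            # both equal g^(x*y)

session_key = hashlib.sha256(str((X, Y, pw, K_A)).encode()).digest()
```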
2020
CRYPTO
Incompressible Encodings 📺
An incompressible encoding can probabilistically encode some data $m$ into a codeword $c$, which is not much larger. Anyone can decode $c$ to recover the original data $m$. However, $c$ cannot be efficiently compressed, even if the original data $m$ is given to the decompression procedure for free. In other words, $c$ is a representation of $m$, yet is computationally incompressible even given $m$. An incompressible encoding is composable if many encodings cannot be simultaneously compressed into anything sufficiently smaller than their concatenation. A recent work of Damgård, Ganesh and Orlandi (CRYPTO '19) defined a variant of incompressible encodings and gave an application to "proofs of replicated storage". They constructed incompressible encodings in an ideal permutation model over a structured domain, but it was left open whether they can be constructed under standard assumptions, or even in the more basic random-oracle model. In this work, we give new constructions, negative results and applications of incompressible encodings:
* We construct incompressible encodings in the common random string (CRS) model under the Decisional Composite Residuosity (DCR) or Learning with Errors (LWE) assumptions. However, the construction has several drawbacks: (1) it is not composable, (2) it only achieves selective security, and (3) the CRS is as long as the data $m$.
* We leverage the above construction to also get a scheme in the random-oracle model, under the same assumptions, that avoids all of the above drawbacks. Furthermore, it is significantly more efficient than the prior ideal-model construction.
* We give black-box separations, showing that incompressible encodings in the plain model cannot be proven secure under any standard hardness assumption, and incompressible encodings in the CRS model must inherently suffer from all of the drawbacks above.
* We give a new application to "big-key cryptography in the bounded-retrieval model", where secret keys are made intentionally huge to make them hard to exfiltrate. Using incompressible encodings, we can get all the security benefits of a big key without wasting storage space, by having the key encode useful data.
2020
CRYPTO
Guaranteed Output Delivery Comes Free in Honest Majority MPC 📺
We study the communication complexity of unconditionally secure MPC with guaranteed output delivery over point-to-point channels for corruption threshold $t < n/2$, assuming the existence of a public broadcast channel. We ask the question: "is it possible to construct MPC in this setting such that the communication complexity per multiplication gate is linear in the number of parties?" While a number of works have focused on reducing the communication complexity in this setting, the answer to the above question has remained elusive until now. We also focus on the concrete communication complexity of evaluating each multiplication gate. We resolve the above question in the affirmative by providing an MPC protocol with communication complexity $O(Cn\phi)$ bits (ignoring fixed terms which are independent of the circuit), where $\phi$ is the length of an element in the field, $C$ is the size of the (arithmetic) circuit, and $n$ is the number of parties. This is the first construction where the asymptotic communication complexity matches the best-known semi-honest protocol. This represents a strict improvement over the previously best-known communication complexity of $O(C(n\phi + \kappa) + D_M n^2 \kappa)$ bits, where $\kappa$ is the security parameter and $D_M$ is the multiplicative depth of the circuit. Furthermore, the concrete communication complexity per multiplication gate is 5.5 field elements per party in the best case and 7.5 field elements in the worst case when one or more corrupted parties have been identified. This also roughly matches the best-known semi-honest protocol, which requires 5.5 field elements per gate. The above also yields the first secure-with-abort MPC protocol with the same cost per multiplication gate as the best-known semi-honest protocol. Our main result is obtained by compiling the secure-with-abort MPC protocol into a fully secure one.
2020
CRYPTO
Cryptanalysis of The Lifted Unbalanced Oil Vinegar Signature Scheme 📺
In 2017, Ward Beullens et al. submitted Lifted Unbalanced Oil and Vinegar (LUOV), a signature scheme based on the famous multivariate public-key cryptosystem (MPKC) called Unbalanced Oil and Vinegar (UOV), to the NIST competition for post-quantum public-key scheme standardization. The defining feature of LUOV is that, though the public key $P$ works over the degree-$r$ extension field of $\mathbb{F}_2$, the coefficients of $P$ come from $\mathbb{F}_2$. This is done to significantly reduce the size of $P$. The LUOV scheme is now in the second round of the NIST PQC standardization process. In this paper, we introduce a new attack on LUOV. It exploits the "lifted" structure of LUOV to reduce direct attacks on it to attacks over a subfield. We show that this brings the complexity below the targeted security for the NIST post-quantum standardization competition.
2020
CRYPTO
Handling Adaptive Compromise for Practical Encryption Schemes 📺
Early Career Researcher Award
We provide a new definitional framework capturing the multi-user security of encryption schemes and pseudorandom functions in the face of adversaries that can adaptively compromise users' keys. We provide a sequence of results establishing the security of practical symmetric encryption schemes under adaptive compromise in the random oracle or ideal cipher model. The bulk of analysis complexity for adaptive compromise security is relegated to the analysis of lower-level primitives such as pseudorandom functions. We apply our framework to give proofs of security for the BurnBox system for privacy in the face of border searches and the in-use searchable symmetric encryption scheme due to Cash et al. In both cases, prior analyses had bugs that our framework helps avoid.
2020
CRYPTO
Delegation with Updatable Unambiguous Proofs and PPAD-Hardness 📺
In this work, we show the hardness of finding a Nash equilibrium, a $\mathsf{PPAD}$-complete problem, based on the quasi-polynomial hardness of the decisional assumption on groups with bilinear maps introduced by Kalai, Paneth and Yang [STOC 2019]. Towards this goal, we construct an unambiguous and updatable delegation scheme under this assumption for deterministic computations running in super-polynomial time and polynomial space. This delegation scheme, which is of independent interest, is publicly verifiable and non-interactive in the common reference string (CRS) model. It is unambiguous, meaning that it is hard to compute two different proofs for the same statement. It is updatable, meaning that given a proof for the statement that a Turing machine $M$ reaches configuration $\mathsf{conf}_T$ in $T$ steps, one can efficiently generate a proof for the statement that $M$ reaches configuration $\mathsf{conf}_{T+1}$ in $T+1$ steps.
2020
CRYPTO
The Memory-Tightness of Authenticated Encryption 📺
This paper initiates the study of the provable security of authenticated encryption (AE) in the memory-bounded setting. Recent works -- Tessaro and Thiruvengadam (TCC '18), Jaeger and Tessaro (EUROCRYPT '19), and Dinur (EUROCRYPT '20) -- focus on confidentiality, and look at schemes for which trade-offs between the attacker's memory and its data complexity are inherent. Here, we ask whether these results and techniques can be lifted to the full AE setting, which additionally asks for integrity. We show both positive and negative results. On the positive side, we provide tight memory-sensitive bounds for the security of GCM and its generalization, CAU (Bellare and Tackmann, CRYPTO '16). Our bounds apply to a restricted case of AE security which abstracts the deployment within protocols like TLS, and rely on a new memory-tight reduction to corresponding restricted notions of confidentiality and integrity. In particular, our reduction uses an amount of memory which linearly depends on that of the given adversary, as opposed to only imposing a constant memory overhead as in earlier works (Auerbach et al, CRYPTO '17). On the negative side, we show that a large class of black-box reductions cannot generically lift confidentiality and integrity security to a joint definition of AE security in a memory-tight way.
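
GCM, the in-use instantiation of CAU that these bounds target, exposes the standard nonce-based AE interface; a minimal usage sketch with the Python `cryptography` package (an assumed dependency, not part of the paper) follows.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
nonce = os.urandom(12)            # 96-bit nonce; never reuse under one key

# Encrypt returns ciphertext || authentication tag; decrypt raises on forgery,
# which is the integrity property the AE definition combines with privacy.
ct = aead.encrypt(nonce, b"attack at dawn", b"associated header")
pt = aead.decrypt(nonce, ct, b"associated header")
assert pt == b"attack at dawn"
```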
2020
CRYPTO
Efficient Pseudorandom Correlation Generators from Ring-LPN 📺
Secure multiparty computation can often utilize a trusted source of correlated randomness to achieve better efficiency. A recent line of work, initiated by Boyle et al. (CCS 2018, Crypto 2019), showed how useful forms of correlated randomness can be generated using a cheap, one-time interaction, followed by only "silent" local computation. This is achieved via a pseudorandom correlation generator (PCG), a deterministic function that stretches short correlated seeds into long instances of a target correlation. Previous works constructed concretely efficient PCGs for simple but useful correlations, including random oblivious transfer and vector-OLE, together with efficient protocols to distribute the PCG seed generation. Most of these constructions were based on variants of the Learning Parity with Noise (LPN) assumption. PCGs for other useful correlations had poor asymptotic and concrete efficiency. In this work, we design a new class of efficient PCGs based on different flavors of the ring-LPN assumption. Our new PCGs can generate OLE correlations, authenticated multiplication triples, matrix product correlations, and other types of useful correlations over large fields. These PCGs are more efficient by orders of magnitude than the previous constructions and can be used to improve the preprocessing phase of many existing MPC protocols.
2020
CRYPTO
Lattice-Based Blind Signatures, Revisited 📺
We observe that all previously known lattice-based blind signature schemes contain subtle flaws in their security proofs (e.g., Rückert, ASIACRYPT '08) or can be attacked (e.g., BLAZE by Alkadri et al., FC '20). Motivated by this, we revisit the problem of constructing blind signatures from standard lattice assumptions. We propose a new three-round lattice-based blind signature scheme whose security can be proved, in the random oracle model, from the standard SIS assumption. Our starting point is a modified version of the insecure three-round BLAZE scheme, which is itself based on Lyubashevsky's three-round identification scheme combined with a new aborting technique to reduce the correctness error. Our proof builds upon and extends the recent modular framework for blind signatures of Hauck, Kiltz, and Loss (EUROCRYPT '19). It also introduces several new techniques to overcome the additional challenges posed by the correctness error, which is inherent to all lattice-based constructions. While our construction is mostly of theoretical interest, we believe it to be an important stepping stone for future works in this area.
2020
CRYPTO
Fully Deniable Interactive Encryption 📺
Deniable encryption (Canetti et al., Crypto 1996) enhances secret communication over public channels, providing the additional guarantee that the secrecy of communication is protected even if the parties are later coerced (or willingly bribed) to expose their entire internal states: plaintexts, keys and randomness. To date, constructions of deniable encryption (and, more generally, interactive deniable communication) only address the restricted case where only one party is compromised (Sahai and Waters, STOC 2014). The main question, whether deniable communication is at all possible if both parties are coerced at once, has remained open. We resolve this question in the affirmative, presenting a communication protocol that is fully deniable under coercion of both parties. Our scheme has three rounds, assumes subexponentially secure indistinguishability obfuscation and one-way functions, and uses a short global reference string that is generated once at system set-up and suffices for an unbounded number of encryptions and decryptions. Of independent interest, we introduce a new notion called off-the-record deniability, which protects parties even when their claimed internal states are inconsistent (a case not covered by prior definitions). Our scheme satisfies both standard deniability and off-the-record deniability.
2020
CRYPTO
Time-Space Tradeoffs and Short Collisions in Merkle-Damgård Hash Functions 📺
We study collision-finding against Merkle-Damgård hashing in the random-oracle model by adversaries with an arbitrary $S$-bit auxiliary advice input about the random oracle and $T$ queries. Recent work showed that such adversaries can find collisions (with respect to a random IV) with advantage $\Omega(ST^2/2^n)$, where $n$ is the output length, beating the birthday bound by a factor of $S$. These attacks were shown to be optimal. We observe that the collisions produced are very long, on the order of $T$ blocks, which would limit their practical relevance. We prove several results related to improving these attacks to find short collisions. We first exhibit a simple attack for finding $B$-block-long collisions achieving advantage $\tilde{\Omega}(STB/2^n)$. We then study whether this attack is optimal. We show that the prior technique based on the bit-fixing model (used for the $ST^2/2^n$ bound) provably cannot reach this bound, and towards a general result we prove there are qualitative jumps in the optimal attacks for finding length-$1$, length-$2$, and unbounded-length collisions. Namely, the optimal attacks achieve (up to logarithmic factors) advantage of order $(S+T)/2^n$, $ST/2^n$ and $ST^2/2^n$, respectively. We also give an upper bound on the advantage of a restricted class of short-collision finding attacks via a new analysis on the growth of trees in random functional graphs that may be of independent interest.
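
The object under attack is plain Merkle-Damgård iteration; the hedged sketch below reconstructs it over an idealized compression function so the meaning of a "B-block collision" is concrete. SHA-256 merely stands in for the random-oracle compression function, and length padding is omitted for brevity.

```python
import hashlib

BLOCK = 64  # bytes per message block

def h(chaining, block):
    """Idealized compression function h: {0,1}^n x block -> {0,1}^n."""
    return hashlib.sha256(chaining + block).digest()

def md_hash(iv, msg):
    """Plain Merkle-Damgård: iterate h over the message blocks."""
    assert len(msg) % BLOCK == 0
    state = iv
    for i in range(0, len(msg), BLOCK):
        state = h(state, msg[i:i + BLOCK])
    return state

iv = bytes(32)                            # the random IV of the security game
digest = md_hash(iv, b"A" * BLOCK * 3)    # a 3-block message, i.e. B = 3
```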
2020
CRYPTO
Lattice Reduction for Modules, or How to Reduce ModuleSVP to ModuleSVP 📺
We show how to generalize lattice reduction algorithms to module lattices. Specifically, we reduce $\gamma$-approximate ModuleSVP over module lattices with rank $k \geq 2$ to $\gamma'$-approximate ModuleSVP over module lattices with rank $2 \leq \beta \leq k$. To do so, we modify the celebrated slide-reduction algorithm of Gama and Nguyen to work with module filtrations, a high-dimensional generalization of the ($\mathbb{Z}$-)basis of a lattice. The particular value of $\gamma$ that we achieve depends on the underlying number field $K$, the order $R \subseteq \mathcal{O}_K$, and the embedding (as well as, of course, $k$ and $\beta$). However, for reasonable choices of these parameters, the approximation factor that we achieve is surprisingly close to the one achieved by "plain" lattice reduction algorithms, which require an arbitrary SVP oracle in the same dimension. In other words, we show that ModuleSVP oracles are nearly as useful as SVP oracles for solving approximate ModuleSVP in higher dimensions. Our result generalizes the recent independent result of Lee, Pellet-Mary, Stehlé, and Wallet, which works in the important special case when $\beta = 2$ and $R = \mathcal{O}_K$ is the ring of integers of $K$ under the canonical embedding. Indeed, at a high level our reduction can be thought of as a generalization of theirs in roughly the same way that block reduction generalizes LLL reduction.
2020
CRYPTO
Random Probing Security: Verification, Composition, Expansion and New Constructions 📺
Masking is among the most powerful countermeasures against side-channel attacks. Leakage models have been introduced to reason theoretically about the security of masked implementations. So far, the most widely used leakage model is the probing model defined by Ishai, Sahai, and Wagner (CRYPTO 2003). While it is conveniently amenable to security proofs, it does not capture an adversary exploiting full leakage traces as, e.g., in horizontal attacks. Such attacks target the multiple manipulations of the same share in order to average out a constant noise and recover the corresponding value. To capture a wider class of attacks, another model was introduced, referred to as the random probing model: given a leakage parameter $p$, each wire of the circuit leaks its value with probability $p$. While this model much better reflects the physical reality of side channels, it requires more complex security proofs and did not yet come with practical constructions. In this paper, we define the first framework dedicated to the random probing model. We provide an automatic tool, called VRAPS, to quantify the random probing security of a circuit from its leakage probability. We also formalize a composition property for secure random probing gadgets and exhibit its relation to the strong non-interference (SNI) notion used in the context of probing security. We then revisit the expansion idea proposed by Ananth, Ishai, and Sahai (CRYPTO 2018) and introduce a compiler that builds a random probing secure circuit from small base gadgets achieving a random probing expandability property. We instantiate this compiler with small gadgets for which we verify the expected properties directly with our automatic tool. Our construction can tolerate a leakage probability up to $2^{-8}$, against $2^{-25}$ for the previous construction, with a better asymptotic complexity.
2020
CRYPTO
Alzette: a 64-bit ARX-box (feat. CRAX and TRAX) 📺
S-boxes are the only source of non-linearity in many symmetric cryptographic primitives. While they are often defined as functions operating on a small space, some recent designs propose the use of much larger ones (e.g., 32 bits). In this context, an S-box is then defined as a subfunction whose cryptographic properties can be estimated precisely. In this paper, we present a 64-bit ARX-based S-box called Alzette, which can be evaluated in constant time using only 12 instructions on modern CPUs. Its parallel application can also leverage vector (SIMD) instructions. One iteration of Alzette has differential and linear properties comparable to those of the AES S-box, while two iterations are at least as secure as the AES super S-box. Since the state size is much larger than the typical 4 or 8 bits, the study of the relevant cryptographic properties of Alzette is not trivial. We further discuss how such wide S-boxes could be used to construct round functions of 64-, 128- and 256-bit (tweakable) block ciphers with good cryptographic properties that are guaranteed even in the related-tweak setting. We use these structures to design a very lightweight 64-bit block cipher (CRAX) which outperforms SPECK-64/128 for short messages on micro-controllers, and a 256-bit tweakable block cipher (TRAX) which can be used to obtain strong security guarantees against powerful adversaries (nonce misuse, quantum attacks).
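
For reference, here is a Python transcription of Alzette's four ARX rounds, using the rotation pairs (31, 24), (17, 17), (0, 31), (24, 16) reported in the paper; check it against the designers' reference code before reuse. The sample constant is one of the Sparkle round constants.

```python
MASK = 0xFFFFFFFF  # 32-bit words

def rotr(v, r):
    """Right-rotate a 32-bit word by r bits (rotr(v, 0) == v)."""
    return ((v >> r) | (v << (32 - r))) & MASK

def alzette(x, y, c):
    """One Alzette instance on the 64-bit state (x, y) with constant c."""
    for rx, ry in ((31, 24), (17, 17), (0, 31), (24, 16)):
        x = (x + rotr(y, rx)) & MASK   # modular addition
        y ^= rotr(x, ry)               # rotate-and-XOR
        x ^= c                         # round-constant injection
    return x, y

x, y = alzette(0x01234567, 0x89ABCDEF, 0xB7E15162)
```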
2020
CRYPTO
The MALICIOUS Framework: Embedding Backdoors into Tweakable Block Ciphers 📺
Inserting backdoors in encryption algorithms has long seemed like a very interesting, yet difficult, problem. Most attempts have been unsuccessful for symmetric-key primitives so far, and how to build such ciphers remains an open problem. In this work, we propose the MALICIOUS framework, a new method to build tweakable block ciphers that hide a backdoor which allows whoever knows it to retrieve the secret key. Our backdoor is differential in nature: a specific related-tweak differential path with high probability is hidden during the design phase of the cipher. We explain how the backdoor can be used to practically recover the secret key of a user by any entity knowing the backdoor, and we also argue why even knowing the presence of the backdoor and the workings of the cipher will not permit an external user to retrieve the backdoor. We analyze the security of our construction in the classical black-box model and show that retrieving the backdoor (the hidden high-probability differential path) is very difficult. We instantiate our framework by proposing the LowMC-M construction, a new family of tweakable block ciphers based on instances of the LowMC cipher, which allows such backdoor embedding. Generating LowMC-M instances is trivial, and the LowMC-M family has essentially the same efficiency as the LowMC instances it is based on.
2020
CRYPTO
Reverse Firewalls for Actively Secure MPCs 📺
Reverse firewalls were introduced at Eurocrypt 2015 by Mironov and Stephens-Davidowitz as a method for protecting cryptographic protocols against attacks on the devices of the honest parties. In a nutshell: a reverse firewall is placed outside of a device, and its goal is to "sanitize" the messages sent by it, in such a way that a malicious device cannot leak its secrets to the outside world. It is typically assumed that the cryptographic devices are attacked in a "functionality-preserving way" (i.e., informally speaking, the functionality of the protocol remains unchanged under these attacks). In their paper, Mironov and Stephens-Davidowitz construct a protocol for passively-secure two-party computation with firewalls, leaving the extension of this result to stronger models as an open question. In this paper, we address this problem by constructing a protocol for secure computation with firewalls that has two main advantages over the original protocol from Eurocrypt 2015. Firstly, it is a multiparty computation protocol (i.e., it works for an arbitrary number $n$ of parties, not just for $2$). Secondly, it is secure in much stronger corruption settings, namely in the active corruption model. More precisely: we consider an adversary that can fully corrupt up to $n-1$ parties, while the remaining parties are corrupted in a functionality-preserving way. Our core techniques are malleable commitments and malleable non-interactive zero-knowledge, which in particular allow us to create a novel protocol for multiparty augmented coin-tossing into the well with reverse firewalls (based on a protocol of Lindell from Crypto 2001).
2020
CRYPTO
Fast reduction of algebraic lattices over cyclotomic fields 📺
We introduce a framework generalizing lattice reduction algorithms to module lattices in order to practically and efficiently solve the $\gamma$-Hermite Module-SVP problem over arbitrary cyclotomic fields. The core idea is to exploit the structure of the subfields to design a recursive reduction strategy in the tower of fields we are working in. In addition, we demonstrate how to leverage the symplectic geometry inherent to such fields to provide a significant speed-up of the reduction for rank-two modules. As a byproduct, we also generalize to all cyclotomic fields, and provide speedups for, many previous number-theoretic algorithms, in particular the rounding in the so-called Log-unit lattice. Quantitatively, we show that a module of rank 2 over a cyclotomic field of degree $n$ can be heuristically reduced within approximation factor $2^{\tilde{O}(n)}$ in time $\tilde{O}(n^2B)$, where $B$ is the bitlength of the entries. For $B$ large enough, this complexity shrinks to $\tilde{O}(n^{\log_2 3}B)$. This last result is particularly striking, as it goes below the estimate of $n^2B$ swaps given by the classical analysis of the LLL algorithm using the decrease of the potential of the basis. Moreover, this entire framework is fully parallelizable, and we provide a full implementation. We apply it to break multilinear cryptographic candidates on their proposed concrete parameters. We were able to reduce matrices of dimension 4096 with 6675-bit integers in 4 days, which is more than a million times faster than previous state-of-the-art implementations. Finally, we demonstrate a quasicubic running time for the Gentry-Szydlo algorithm, which finds a generator given the relative norm and a basis of an ideal. This algorithm is important in cryptanalysis and requires efficient ideal multiplications and lattice reductions; as such, we can practically use it in dimension 1024.
2020
CRYPTO
Fiat-Shamir for Repeated Squaring with Applications to PPAD-Hardness and VDFs 📺
The Fiat-Shamir transform is a methodology for compiling a (public-coin) interactive proof system for a language $L$ into a non-interactive argument system for $L$. Proving security of the Fiat-Shamir transform in the standard model, especially in the context of succinct arguments, is largely an unsolved problem. The work of Canetti et al. (STOC 2019) proved the security of the Fiat-Shamir transform applied to the Goldwasser-Kalai-Rothblum (STOC 2008) succinct interactive proof system under a very strong "optimal learning with errors" assumption. Achieving a similar result under standard assumptions remains an important open question. In this work, we consider the problem of compiling a different succinct interactive proof system: Pietrzak's proof system (ITCS 2019) for the iterated squaring problem. We construct a hash function family (with evaluation time roughly $2^{\lambda^{\epsilon}}$) that guarantees the soundness of Fiat-Shamir for this protocol assuming the sub-exponential ($2^{-n^{1-\epsilon}}$)-hardness of the $n$-dimensional learning with errors problem. (The latter follows from the worst-case $2^{n^{1-\epsilon}}$ hardness of lattice problems.) More generally, we extend the "bad-challenge function" methodology of Canetti et al. for proving the soundness of Fiat-Shamir to a class of protocols whose bad-challenge functions are not efficiently computable. As a corollary (following Choudhuri et al., ePrint 2019 and Ephraim et al., EUROCRYPT 2020), we construct hard-on-average problems in the complexity class $\mathbf{CLS} \subset \mathbf{PPAD}$ under the $2^{\lambda^\epsilon}$-hardness of the repeated squaring problem and the $2^{-n^{1-\epsilon}}$-hardness of the learning with errors problem. Under the additional assumption that the repeated squaring problem is "inherently sequential", we also obtain a Verifiable Delay Function (Boneh et al., EUROCRYPT 2018) in the standard model. Finally, we give additional PPAD-hardness and VDF instantiations demonstrating a broader tradeoff between the strength of the repeated squaring assumption and the strength of the lattice assumption.
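
The underlying language is easy to state: iterated squaring, $y = x^{2^T} \bmod N$, conjectured to require $T$ sequential squarings when the factorization of $N$ is unknown. A toy evaluation follows; the modulus and parameters are illustrative only.

```python
# Toy repeated-squaring evaluation (a real VDF uses a ~2048-bit RSA modulus
# of unknown factorization; 3233 = 61 * 53 is for illustration only).
N = 3233
x, T = 5, 1000

y = x
for _ in range(T):        # inherently sequential: each step needs the last
    y = (y * y) % N

# Pietrzak's protocol proves y == x^(2^T) interactively by halving T;
# Fiat-Shamir replaces each verifier challenge with a hash of the transcript,
# which is the step whose soundness the paper establishes.
```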
2020
CRYPTO
Mode-Level vs. Implementation-Level Physical Security in Symmetric Cryptography: A Practical Guide Through the Leakage-Resistance Jungle 📺
Triggered by the increasing deployment of embedded cryptographic devices (e.g., for the IoT), the design of authentication, encryption and authenticated encryption schemes enabling improved security against side-channel attacks has become an important research direction. Over the last decade, a number of modes of operation have been proposed and analyzed under different abstractions. In this paper, we investigate the practical consequences of these findings. For this purpose, we first translate the physical assumptions of leakage-resistance proofs into minimum security requirements for implementers. Thanks to this (heuristic) translation, we observe that (i) security against physical attacks can be viewed as a tradeoff between mode-level and implementation-level protection mechanisms, and (ii) security requirements to guarantee confidentiality and integrity in the presence of leakage can be concretely different for the different parts of an implementation. We illustrate the first point by analyzing several modes of operation with gradually increased leakage-resistance. We illustrate the second point by exhibiting leveled implementations, where different parts of the investigated schemes have different security requirements against leakage, leading to performance improvements when high physical security is needed. We finally initiate a comparative discussion of the different solutions to instantiate the components of a leakage-resistant authenticated encryption scheme.
2020
CRYPTO
Non-Interactive Zero-Knowledge Arguments for QMA, with preprocessing 📺
We initiate the study of non-interactive zero-knowledge (NIZK) arguments for languages in QMA. Our first main result is the following: if Learning With Errors (LWE) is hard for quantum computers, then any language in QMA has an NIZK argument with preprocessing. The preprocessing in our argument system consists of (i) the generation of a CRS and (ii) a single (instance-independent) quantum message from verifier to prover. The instance-dependent phase of our argument system involves only a single classical message from prover to verifier. Importantly, verification in our protocol is entirely classical, and the verifier need not have quantum memory; its only quantum actions are in the preprocessing phase. Our second contribution is to extend the notion of a classical proof of knowledge to the quantum setting. We introduce the notions of arguments and proofs of quantum knowledge (AoQK/PoQK), and we show that our non-interactive argument system satisfies the definition of an AoQK. In particular, we explicitly construct an extractor which can recover a quantum witness from any prover which is successful in our protocol. Finally, we show that any language in QMA has an (interactive) proof of quantum knowledge.
2020
CRYPTO
Cryptanalysis of LEDAcrypt 📺
We report on the concrete cryptanalysis of LEDAcrypt, a 2nd Round candidate in NIST's Post-Quantum Cryptography standardization process and one of 17 encryption schemes that remain as candidates for near-term standardization. LEDAcrypt consists of a public-key encryption scheme built from the McEliece paradigm and a key-encapsulation mechanism (KEM) built from the Niederreiter paradigm, both using a quasi-cyclic low-density parity-check (QC-LDPC) code. In this work, we identify a large class of extremely weak keys and provide an algorithm to recover them. For example, we demonstrate how to recover $1$ in $2^{47.79}$ of LEDAcrypt's keys using only $2^{18.72}$ guesses at the 256-bit security level. This is a major, practical break of LEDAcrypt. Further, we demonstrate a continuum of progressively less weak keys (from extremely weak keys up to all keys) that can be recovered in substantially less work than previously known. This demonstrates that the imperfection of LEDAcrypt is fundamental to the system's design.
2020
CRYPTO
Security Analysis of NIST CTR-DRBG 📺
We study the security of CTR-DRBG, one of NIST's recommended Pseudorandom Number Generator (PRNG) designs. Recently, Woodage and Shumow (Eurocrypt '19), and then Cohney et al. (S&P '20), pointed out some potential vulnerabilities in both the NIST specification and common implementations of CTR-DRBG. While these researchers do suggest counter-measures, the security of the patched CTR-DRBG is still questionable. Our work fills this gap, proving that CTR-DRBG satisfies the robustness notion of Dodis et al. (CCS '13), the standard security goal for PRNGs.
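
To fix ideas, below is a deliberately simplified sketch of CTR-DRBG's generate-then-update structure (AES-128, no derivation function, additional input, or reseeding). It is our abstraction of the design under study, not a compliant SP 800-90A implementation; the Python `cryptography` package is an assumed dependency.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_block(key, block):
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def inc128(v):
    return ((int.from_bytes(v, "big") + 1) % 2**128).to_bytes(16, "big")

def generate(key, v, nbytes):
    # Output phase: keystream blocks AES_K(V+1), AES_K(V+2), ...
    out = b""
    while len(out) < nbytes:
        v = inc128(v)
        out += aes_block(key, v)
    # Update phase: the next state (Key, V) is derived from further
    # keystream blocks, so past output does not expose the future state.
    v = inc128(v)
    new_key = aes_block(key, v)
    v = inc128(v)
    new_v = aes_block(key, v)
    return out[:nbytes], new_key, new_v

key, v = bytes(16), bytes(16)   # toy state; a real DRBG seeds these from entropy
rnd, key, v = generate(key, v, 32)
```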
2020
CRYPTO
A Polynomial-Time Algorithm for Solving the Hidden Subset Sum Problem 📺
At Crypto '99, Nguyen and Stern described a lattice-based algorithm for solving the hidden subset sum problem, a variant of the classical subset sum problem where the n weights are also hidden. While the Nguyen-Stern algorithm works quite well in practice for moderate values of n, we argue that its complexity is actually exponential in n; namely, in the final step one must recover a very short basis of an n-dimensional lattice, which takes exponential time in n, as one must apply BKZ reduction with increasingly large block sizes. In this paper, we describe a variant of the Nguyen-Stern algorithm that works in polynomial time. The first step is the same orthogonal lattice attack with LLL as in the original algorithm. In the second step, instead of applying BKZ, we use a multivariate technique that recovers the short lattice vectors and finally the hidden secrets in polynomial time. Our algorithm works quite well in practice, as we can reach n=250 in a few hours on a single PC.
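
The problem itself is compact; the snippet below generates a toy hidden subset sum instance to make the attacker's view explicit. The attack (the orthogonal-lattice step followed by the paper's multivariate step) is well beyond a few lines and is not shown.

```python
import secrets

# Toy hidden subset sum instance: both the weights alpha_i and the binary
# coefficients x_ij are hidden; only (h, M) is given to the attacker.
n, m = 4, 16                      # n hidden weights, m samples (paper: n = 250)
M = 2**64 - 59                    # public modulus (size is illustrative)

alpha = [secrets.randbelow(M) for _ in range(n)]                  # hidden weights
x = [[secrets.randbelow(2) for _ in range(m)] for _ in range(n)]  # hidden bits

h = [sum(alpha[i] * x[i][j] for i in range(n)) % M for j in range(m)]
# Attacker's view: (h, M). Goal: recover alpha and the binary vectors x_i.
```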
2020
CRYPTO
A Tight Parallel Repetition Theorem for Partially Simulatable Interactive Arguments via Smooth KL-Divergence 📺
Hardness amplification is a central problem in the study of interactive protocols. While the "natural" parallel repetition transformation is known to reduce the soundness error of some special cases of interactive arguments, namely three-message protocols (Bellare, Impagliazzo, and Naor [FOCS '97]) and public-coin protocols (Hastad, Pass, Wikstrom, and Pietrzak [TCC '10], Chung and Lu [TCC '10] and Chung and Pass [TCC '15]), it fails to do so in the general case (the above Bellare et al.; also Pietrzak and Wikstrom [TCC '07]). The only known round-preserving approach that applies to all interactive arguments is Haitner's random-terminating transformation [SICOMP '13], which showed that the parallel repetition of the transformed protocol reduces the soundness error at a weak exponential rate: if the original $m$-round protocol has soundness error $1-\epsilon$, then the $n$-fold parallel repetition of its random-terminating variant has soundness error $(1-\epsilon)^{\epsilon n/m^4}$ (omitting constant factors). Hastad et al. generalized this result to the so-called partially simulatable interactive arguments. In this work we prove that parallel repetition of random-terminating arguments reduces the soundness error at a much stronger exponential rate: the soundness error of the $n$-fold parallel repetition is $(1-\epsilon)^{n/m}$, only an $m$ factor from the optimal rate of $(1-\epsilon)^n$ achievable in public-coin and three-message arguments. The result generalizes to partially simulatable arguments. This is achieved by presenting a tight bound on a relaxed variant of the KL-divergence between the distribution induced by our reduction and its ideal variant, a result whose scope extends beyond parallel repetition proofs. We prove the tightness of the above bound for random-terminating arguments by presenting a matching protocol.
2020
CRYPTO
Quantifying the Security Cost of Migrating Protocols to Practice 📺
We give a framework for relating the quantitative, concrete security of a "reference" protocol (say, one appearing in an academic paper) to that of some derived, "real" protocol (say, one appearing in a cryptographic standard). It is based on the indifferentiability framework of Maurer, Renner, and Holenstein (MRH), whose application has been exclusively focused on non-interactive cryptographic primitives, e.g., hash functions and Feistel networks. Our extension of MRH is supported by a clearly defined execution model and two composition lemmata, all formalized in a modern pseudocode language. Together, these allow for precise statements about game-based security properties of cryptographic objects (interactive or not) at various levels of abstraction. As a real-world application, we design and prove tight security bounds for a potential TLS 1.3 extension that integrates the SPAKE2 password-authenticated key exchange into the existing handshake. (This is a problem of current interest to the IETF.)
2020
CRYPTO
Cryptanalysis Results on Spook: Bringing Full-round Shadow-512 to the Light 📺
Spook is one of the 32 candidates that have made it to the second round of the NIST Lightweight Cryptography Standardization process, and is particularly interesting since it proposes differential side-channel resistance. In this paper, we present practical distinguishers for the full 6-step version of the underlying permutations of Spook, namely Shadow-512 and Shadow-384, solving challenges proposed by the designers on the permutation. We also propose practical forgeries with 4-step Shadow for the S1P mode of operation in the nonce-misuse scenario, which is allowed by the CIML2 security game considered by the authors. All the results presented in this paper have been implemented.
2020
CRYPTO
Collusion Resistant Watermarkable PRFs from Standard Assumptions 📺
A software watermarking scheme can embed a message into a program without significantly changing its functionality. Moreover, any attempt to remove the embedded message in a marked program will substantially change the functionality of the program. Prior constructions of watermarking schemes focus on watermarking cryptographic functions, such as pseudorandom functions (PRFs), public-key encryption, etc. A natural security requirement for watermarking schemes is collusion resistance, where the adversary's goal is to remove the embedded messages given multiple marked versions of the same program. Currently, this strong security guarantee has been achieved by watermarking schemes for public-key cryptographic primitives from standard assumptions (Goyal et al., CRYPTO 2019) and by watermarking schemes for PRFs from indistinguishability obfuscation (Yang et al., ASIACRYPT 2019). However, no collusion resistant watermarking scheme for PRFs from standard assumptions is known. In this work, we solve this problem by presenting a generic construction that upgrades a watermarkable PRF without collusion resistance to a collusion resistant one. One appealing feature of our construction is that it can preserve the security properties of the original scheme. For example, if the original scheme is secure with extraction queries, the new scheme is also secure with extraction queries. Moreover, the new scheme can achieve unforgeability even if the original scheme does not provide this security property. Instantiating our construction with existing watermarking schemes for PRFs, we obtain collusion resistant watermarkable PRFs from standard assumptions, offering various security properties.
2020
CRYPTO
Comparing the difficulty of factorization and discrete logarithm: a 240-digit experiment 📺
We report on two new records: the factorization of RSA-240, a 795-bit number, and a discrete logarithm computation over a 795-bit prime field. Previous records were the factorization of RSA-768 in 2009 and a 768-bit discrete logarithm computation in 2016. Our two computations at the 795-bit level were done using the same hardware and software, and show that computing a discrete logarithm is not much harder than a factorization of the same size. Moreover, thanks to algorithmic variants and well-chosen parameters, our computations were significantly less expensive than anticipated based on previous records. The last page of this paper also reports on the factorization of RSA-250.
2020
CRYPTO
Automatic Verification of Differential Characteristics: Application to Reduced Gimli 📺
Since Keccak was selected as the SHA-3 standard, more and more permutation-based primitives have been proposed. Different from block ciphers, there is no round key in the underlying permutation of permutation-based primitives. Therefore, there is a higher risk for a differential characteristic of the underlying permutation to become incompatible when considering the dependency of difference transitions over different rounds. However, most MILP- or SAT-based models used to search for differential characteristics involve only the difference transitions and treat them as independent across rounds, which may cause an invalid characteristic to be found for the underlying permutation. To overcome this obstacle, we design a model that automatically avoids such inconsistencies in the search for differential characteristics. Our technique is to involve both the difference transitions and the value transitions in the constructed model. This idea is inspired by the algorithm to find SHA-2 characteristics proposed by Mendel et al. at ASIACRYPT 2011, where the differential characteristic and the conforming message pair are searched for simultaneously. As a first attempt, we apply our new technique to the Gimli permutation, proposed at CHES 2017. As a result, we reveal that some existing differential characteristics of reduced Gimli are indeed incompatible, one of which appears in the Gimli document. In addition, since only the permutation is analyzed in the Gimli document, we are led to carry out a comprehensive study covering the proposed hash scheme and the authenticated encryption (AE) scheme specified for Gimli, which has become a second-round candidate of the NIST lightweight cryptography standardization process. For the hash scheme, a semi-free-start (SFS) collision attack can reach up to 8 rounds starting from an intermediate round. For the AE scheme, a state recovery attack is demonstrated for up to 9 rounds. It should be emphasized that our analysis does not threaten the security of Gimli.
2020
CRYPTO
Faster Enumeration-based Lattice Reduction: Root Hermite Factor k^(1/(2k)) in Time k^(k/8 + o(k)) 📺
We give a lattice reduction algorithm that achieves root Hermite factor k^(1/(2k)) in time k^(k/8 + o(k)) and polynomial memory. This improves on the previously best known enumeration-based algorithms which achieve the same quality, but in time k^(k/(2e) + o(k)). A cost of k^(k/8 + o(k)) was previously mentioned as potentially achievable (Hanrot-Stehlé’10) or as a heuristic lower bound (Nguyen’10) for enumeration algorithms. We prove the complexity and quality of our algorithm under a heuristic assumption and provide empirical evidence from simulation and implementation experiments attesting to its performance for practical and cryptographic parameter sizes. Our work also suggests potential avenues for achieving costs below k^(k/8 + o(k)) for the same root Hermite factor, based on the geometry of SDBKZ-reduced bases.
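As a quick sanity check on the headline exponents, the sketch below (hypothetical block sizes k, chosen only for illustration) compares the log-costs of the new k^(k/8 + o(k)) bound and the previous k^(k/(2e) + o(k)) bound, ignoring the o(k) terms.

```python
import math

for k in (100, 200, 400):                      # hypothetical block sizes
    new = (k / 8) * math.log2(k)               # log2 of k^(k/8)
    prev = (k / (2 * math.e)) * math.log2(k)   # log2 of k^(k/(2e))
    print(f"k = {k}: k^(k/8) ~ 2^{new:.0f} vs k^(k/(2e)) ~ 2^{prev:.0f}")
```

Since 1/8 = 0.125 < 1/(2e) ≈ 0.184, the gap in the exponent grows linearly in k.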
2020
CRYPTO
Rounding in the Rings 📺
In this work, we conduct a comprehensive study on establishing hardness reductions for (Module) Learning with Rounding over rings (RLWR). Towards this, we present an algebraic framework for LWR, inspired by a recent work of Peikert and Pepin (TCC '19). Then we show a search-to-decision reduction for Ring-LWR, generalizing a result in the plain LWR setting by Bogdanov et al. (TCC '15). Finally, we show a reduction from Ring-LWE to Module Ring-LWR (even for leaky secrets), generalizing the plain LWE-to-LWR reduction by Alwen et al. (Crypto '13). One of our central techniques is a new ring leftover hash lemma, which might be of independent interest.
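For orientation, here is a minimal plain-LWR sample in Python (toy parameters of our choosing; the ring and module versions replace the vectors with polynomial ring elements). The deterministic rounding from Z_q down to Z_p plays the role that the added noise plays in LWE.

```python
import secrets

q, p, n = 12289, 1024, 8   # toy moduli q > p and toy dimension, illustrative only

def round_q_to_p(x: int) -> int:
    """Scaled rounding of x from Z_q to Z_p (nearest integer)."""
    return (p * x + q // 2) // q % p

s = [secrets.randbelow(q) for _ in range(n)]   # secret vector
a = [secrets.randbelow(q) for _ in range(n)]   # public vector
b = round_q_to_p(sum(x * y for x, y in zip(a, s)) % q)
print(a, b)                                    # the LWR sample (a, b)
```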
2020
CRYPTO
Verifiable Registration-Based Encryption 📺
In recent work, Garg, Hajiabadi, Mahmoody, and Rahimi (TCC 2018) introduced a new encryption framework, which they referred to as Registration-Based Encryption (RBE). The central motivation behind RBE was to provide a novel methodology for solving the well-known key-escrow problem in Identity-Based Encryption (IBE) systems. Informally, in an RBE system there is no private-key generator, unlike in IBE systems; instead, it is replaced with a public key accumulator. Every user in an RBE system samples its own public-secret key pair and sends the public key to the accumulator for registration. The key accumulator has no secret state and is only responsible for compressing all the registered user identity-key pairs into a short public commitment. Here the encryptor only requires the compressed parameters along with the target identity, whereas a decryptor requires supplementary key material along with the secret key associated with the registered public key. The initial construction by Garg et al., based on standard assumptions, only provided weak efficiency properties. In a follow-up work, Garg, Hajiabadi, Mahmoody, Rahimi, and Sekar (PKC 2019) gave an efficient RBE construction from standard assumptions. However, both these works considered the key accumulator to be honest, which might be too strong an assumption in real-world scenarios. In this work, we initiate a formal study of RBE systems with malicious key accumulators. To that end, we introduce a strengthening of the RBE framework which we call Verifiable RBE (VRBE). A VRBE system additionally gives the users an extra capability to obtain short proofs from the key accumulator proving correct (and unique) registration for every registered user, as well as proving non-registration for any yet unregistered identity. We construct VRBE systems that provide succinct proofs of registration and non-registration from standard assumptions (such as CDH, Factoring, LWE). Our proof systems also naturally allow a much more efficient audit process, which can be performed by any non-participating third party as well. A by-product of our approach is that we provide a more efficient RBE construction than that provided in the prior work of Garg et al. (PKC 2019). Lastly, we initiate a study on the extension of VRBE to a wider range of access and trust structures.
2020
CRYPTO
Indifferentiability for Public Key Cryptosystems 📺
We initiate the study of indifferentiability for public key encryption and other public key primitives. Our main results are definitions and constructions of public key cryptosystems that are indifferentiable from ideal cryptosystems, in the random oracle model. Cryptosystems include: 1) Public key encryption; 2) Digital signatures; 3) Non-interactive key agreement. Our schemes are based on relatively standard public key assumptions. By being indifferentiable from an ideal object, our schemes automatically satisfy a wide range of security properties, including any property representable as a single-stage game, and can be composed to operate in higher-level protocols.
2020
CRYPTO
Asymptotic complexities of discrete logarithm algorithms in pairing-relevant finite fields 📺
We study the discrete logarithm problem at the boundary case between small and medium characteristic finite fields, which is precisely the area where finite fields used in pairing-based cryptosystems live. In order to evaluate the security of pairing-based protocols, we thoroughly analyze the complexity of all the algorithms that coexist at this boundary case: the Quasi-Polynomial algorithms, the Number Field Sieve and its many variants, and the Function Field Sieve. We adapt the latter to the particular case where the extension degree is composite, and show how to lower the complexity by working in a shifted function field. All this study finally allows us to give precise values for the characteristic asymptotically achieving the highest security level for pairings. Surprisingly enough, there exist special characteristics that are as secure as general ones.
2020
CRYPTO
The Measure-and-Reprogram Technique 2.0: Multi-Round Fiat-Shamir and More 📺
We revisit recent works by Don, Fehr, Majenz and Schaffner and by Liu and Zhandry on the security of the Fiat-Shamir transformation of sigma-protocols in the quantum random oracle model (QROM). Two natural questions that arise in this context are: (1) whether the results extend to the Fiat-Shamir transformation of multi-round interactive proofs, and (2) whether Don et al.'s O(q^2) loss in security is optimal. Firstly, we answer question (1) in the affirmative. As a byproduct of solving a technical difficulty in proving this result, we slightly improve the result of Don et al., equipping it with a cleaner bound and an even simpler proof. We apply our result to digital signature schemes, showing that it can be used to prove strong security for schemes like MQDSS in the QROM. As another application, we prove QROM-security of a non-interactive OR proof by Liu, Wei and Wong. As for question (2), we show via a Grover-search based attack that Don et al.'s quadratic security loss for the Fiat-Shamir transformation of sigma-protocols is optimal up to a small constant factor. This extends to our new multi-round result, proving it tight up to a factor that depends on the number of rounds only, i.e., is constant for any constant-round interactive proof.
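As background, the Fiat-Shamir transformation of a multi-round interactive proof derives each verifier challenge by hashing the transcript so far; a minimal sketch (SHA-256 as a stand-in for the random oracle, placeholder messages):

```python
import hashlib

def challenge(transcript: bytes, bits: int = 128) -> int:
    """Derive the next verifier challenge from the transcript so far."""
    digest = hashlib.sha256(transcript).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

# In a (2r+1)-message protocol, the i-th challenge binds all earlier messages.
transcript = b"statement" + b"prover message 1"
c1 = challenge(transcript)
transcript += c1.to_bytes(16, "big") + b"prover message 2"
c2 = challenge(transcript)
```

The QROM analyses discussed above concern exactly this transformation when the hash is modeled as a quantum-accessible random oracle.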
2020
CRYPTO
MPC with Friends and Foes 📺
Classical definitions for secure multiparty computation assume the existence of a single adversarial entity controlling the set of corrupted parties. Intuitively, the definition requires that the view of the adversary, corrupting t parties, in a real-world execution can be simulated by an adversary in an ideal model, where parties interact only via a trusted party. No restrictions, however, are imposed on the view of honest parties in the protocol; thus, if honest parties obtain information about the private inputs of other honest parties, it is not counted as a violation of privacy. This is arguably undesirable in many situations that fall into the MPC framework. Nevertheless, there are secure protocols (e.g., the 2-round multiparty protocol of Ishai et al. [CRYPTO 2010] tolerating a single corrupted party) that instruct the honest parties to reveal their private inputs to all other honest parties (once the malicious party is somehow identified). In this paper, we put forth a new security notion, which we call FaF-security, extending the classical notion. In essence, (t,h^*)-FaF-security requires the view of a subset of up to h^* honest parties to also be simulatable in the ideal model (in addition to the view of the malicious adversary, corrupting up to t parties). This property should hold even if the adversary leaks information to honest parties by sending them non-prescribed messages. We provide a thorough exploration of the new notion, investigating it in relation to a variety of existing security notions. We further investigate the feasibility of achieving FaF-security and show that every functionality can be computed with (computational) (t,h^*)-FaF full security if and only if 2t + h^* < m. Interestingly, the lower-bound result actually shows that even fair FaF-security is impossible in general when 2t + h^* ≥ m (surprisingly, the view of the malicious attacker is not used as the trigger for the attack). We also investigate the optimal round complexity for (t,h^*)-FaF-secure protocols and give evidence that the leakage of private inputs of honest parties in the protocol of Ishai et al. [CRYPTO 2010] is inherent.
2020
CRYPTO
Better Concrete Security for Half-Gates Garbling (in the Multi-Instance Setting) 📺
We study the concrete security of high-performance implementations of half-gates garbling, which all rely on (hardware-accelerated) AES. We find that current instantiations using k-bit wire labels can be completely broken (in the sense that the circuit evaluator learns all the inputs of the circuit garbler) in time O(2^k / C), where C is the total number of (non-free) gates that are garbled, possibly across multiple independent executions. The attack can be applied to existing circuit-garbling libraries using k = 80 when C ≈ 10^9, and would require 267 machine-months and cost about $3500 to implement on the Google Cloud Platform. Since the attack can be entirely parallelized, it could be carried out in about a month using ≈ 250 machines. With this as our motivation, we seek a way to instantiate the hash function in the half-gates scheme so as to achieve better concrete security. We present a construction based on AES that achieves optimal security in the single-instance setting (when only a single circuit is garbled). We also show how to modify the half-gates scheme so that its concrete security does not degrade in the multi-instance setting. Our modified scheme is as efficient as prior work in networks with up to 2 Gbps bandwidth.
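The headline figures are simple arithmetic over the quoted parameters; the sketch below reproduces them (the machine-month and machine counts are taken directly from the abstract, everything else is derived).

```python
import math

k, C = 80, 10**9                     # wire-label bits; total (non-free) garbled gates
work = 2**k / C                      # attack cost from the O(2^k / C) bound
machine_months, machines = 267, 250  # figures quoted above

print(f"work ~ 2^{math.log2(work):.1f} AES-like evaluations")   # about 2^50
print(f"wall clock ~ {machine_months / machines:.1f} months on {machines} machines")
```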
2020
CRYPTO
Always Have a Backup Plan: Fully Secure Synchronous MPC with Asynchronous Fallback 📺
Protocols for secure Multi-Party Computation (MPC) can be classified according to the underlying communication model. Two prominent communication models considered in the literature are the synchronous and asynchronous models, which considerably differ in terms of the achievable security guarantees. Synchronous MPC protocols can achieve the optimal corruption threshold $n/2$ and allow every party to give input, but become completely insecure when synchrony assumptions are violated. On the other hand, asynchronous MPC protocols remain secure under arbitrary network conditions, but can tolerate only $n/3$ corruptions and parties with slow connections unavoidably cannot give input. A natural question is whether there exists a protocol for MPC that can tolerate up to $t_s < n/2$ corruptions under a synchronous network and $t_a < n/3$ corruptions even when the network is asynchronous. We answer this question by showing tight feasibility and impossibility results. More specifically, we show that such a protocol exists if and only if $t_a + 2t_s < n$ and the number of inputs taken into account under an asynchronous network is at most $n-t_s$.
2020
CRYPTO
New Constructions of Hinting PRGs, OWFs with Encryption, and more 📺
Over the last few years, there has been a surge of new cryptographic results, including laconic oblivious transfer, (anonymous/hierarchical) identity-based encryption, trapdoor functions, chosen-ciphertext security transformations, and designated-verifier zero-knowledge proofs, due to a beautiful framework recently introduced in the works of Cho et al. (Crypto 2017) and Dottling and Garg (Crypto 2017). The primitive of one-way function with encryption (OWFE) and its relatives (chameleon encryption, one-time signatures with encryption, hinting PRGs, trapdoor hash encryption, batch encryption) has been a centerpiece in all these results. While there exist multiple realizations of OWFE (and its relatives) from a variety of assumptions such as CDH, Factoring, and LWE, all such constructions fall under the same general "missing block" framework. Although this framework has been instrumental in opening up a new pathway towards various cryptographic functionalities via the abstraction of OWFE (and its relatives), it has been accompanied by undesirable inefficiencies that might inhibit much wider adoption in many practical scenarios. Motivated by the surging importance of the OWFE abstraction (and its relatives), a natural question to ask is whether the existing approaches can be diversified, not only to obtain more constructions from different assumptions, but also to develop newer frameworks. We believe answering this question will eventually lead to important and previously unexplored performance trade-offs in the overarching applications of this novel cryptographic paradigm. In this work, we propose a new accumulation-style framework for building a new class of OWFE as well as hinting PRG constructions, with a special focus on achieving shorter ciphertext size and shorter public parameter size (respectively). Such performance improvements parlay into shorter parameters in their corresponding applications. Briefly, we explore the following performance trade-offs: (1) for OWFE, our constructions outperform in terms of ciphertext size as well as encryption time, but this comes at the cost of larger evaluation and setup times; (2) for hinting PRGs, our constructions provide a rather dramatic trade-off between evaluation time and parameter size, with our construction leading to significantly shorter public parameter size. The trade-off enabled by our hinting PRG construction also leads to interesting implications for the hinting-PRG-based CPA-to-CCA transformation of prior work. We also provide concrete performance measurements for our constructions and compare them with existing approaches. We believe highlighting such trade-offs will lead to wider adoption of these abstractions in a practical sense.
2020
CRYPTO
Two-Sided Malicious Security for Private Intersection-Sum with Cardinality 📺
Private intersection-sum with cardinality allows two parties, where each party holds a private set and one of the parties additionally holds a private integer value associated with each element in her set, to jointly compute the cardinality of the intersection of the two sets as well as the sum of the associated integer values for all the elements in the intersection, and nothing beyond that. We present a new construction for private intersection-sum with cardinality that provides malicious security with abort and guarantees that both parties receive the output upon successful completion of the protocol. A central building block for our constructions is a primitive called shuffled distributed oblivious PRF (DOPRF), which is a PRF that offers oblivious evaluation using a secret key shared between two parties and, in addition, allows obliviously permuting the PRF outputs of several parallel oblivious evaluations. We present the first construction of a shuffled DOPRF with malicious security. We further present several new sigma protocols for proving relations across Pedersen commitments, ElGamal encryptions, and Camenisch-Shoup encryptions, which we use in our main construction and for which we develop new batching techniques to reduce communication. We implement and evaluate the efficiency of our protocol and show that we can achieve communication cost that is only 4-5x greater than that of the most efficient semi-honest protocol. When measuring the monetary cost of executing the protocol in the cloud, our protocol is 25x more expensive than the semi-honest protocol. Our construction also allows for different parameter regimes that enable trade-offs between communication and computation.
2020
CRYPTO
Private Set Intersection in the Internet Setting From Lightweight Oblivious PRF 📺
We present a new protocol for two-party private set intersection (PSI) with semi-honest security in the plain model and one-sided malicious security in the random oracle model. Our protocol achieves a better balance between computation and communication than existing PSI protocols. Specifically, our protocol is the fastest in networks with moderate bandwidth (e.g., 30-100 Mbps). Considering the monetary cost (proposed by Pinkas et al. at CRYPTO 2019) of running the protocol on a cloud computing service, our protocol also compares favorably. Underlying our PSI protocol is a new lightweight multi-point oblivious pseudorandom function (OPRF) protocol based on oblivious transfer (OT) extension. We believe this new protocol may be of independent interest.
2020
CRYPTO
Spartan: Efficient and general-purpose zkSNARKs without trusted setup 📺
This paper introduces Spartan, a new family of zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) for rank-1 constraint satisfiability (R1CS), an NP-complete language that generalizes arithmetic circuit satisfiability. A distinctive feature of Spartan is that it offers the first zkSNARKs without trusted setup (i.e., transparent zkSNARKs) for NP where verifying a proof incurs sub-linear costs, without requiring uniformity in the NP statement's structure. Furthermore, Spartan offers zkSNARKs with a time-optimal prover, a property that has remained elusive for nearly all zkSNARKs in the literature. To achieve these results, we introduce new techniques that we compose with the sum-check protocol, a seminal interactive proof protocol: (1) computation commitments, a primitive to create a succinct commitment to a description of a computation; this technique is crucial for a verifier to achieve sub-linear costs after investing a one-time, public computation to preprocess a given NP statement; (2) SPARK, a cryptographic compiler to transform any existing extractable polynomial commitment scheme for multilinear polynomials into one that efficiently handles sparse multilinear polynomials; this technique is critical for achieving a time-optimal prover; and (3) a compact encoding of an R1CS instance as a low-degree polynomial. The end result is a public-coin succinct interactive argument of knowledge for NP (which can be viewed as a succinct variant of the sum-check protocol); we transform it into a zkSNARK using prior techniques. By applying SPARK to different commitment schemes, we obtain several zkSNARKs where the verifier's costs and the proof size range from $O(\log^2 n)$ to $O(\sqrt{n})$ depending on the underlying commitment scheme ($n$ denotes the size of the NP statement). These schemes do not require a trusted setup, except for one that requires a universal trusted setup. We implement Spartan as a library in about 8,000 lines of Rust. We use the library to build a transparent zkSNARK in the random oracle model where security holds under the discrete logarithm assumption. We experimentally evaluate it and compare with recent zkSNARKs for R1CS instance sizes up to $2^{20}$ constraints. Among schemes without trusted setup, Spartan offers the fastest prover, with speedups of 36--152$\times$ depending on the baseline, produces proofs that are shorter by 1.2--416$\times$, and incurs the lowest verification times, with speedups of 3.6--1326$\times$. When compared to the state-of-the-art zkSNARK with trusted setup, Spartan's prover is $2\times$ faster for arbitrary R1CS instances and $16\times$ faster for data-parallel workloads.
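For orientation, an R1CS instance (A, B, C) is satisfied by a witness vector z when (Az) ∘ (Bz) = Cz holds entrywise; a toy check (over the integers for readability, whereas real instances work over a prime field):

```python
import numpy as np

# Toy R1CS encoding the single constraint x * x = y, with (x, y) = (3, 9).
# Witness vector z = (1, x, y); satisfaction means (A z) * (B z) == C z entrywise.
A = np.array([[0, 1, 0]])   # selects x
B = np.array([[0, 1, 0]])   # selects x
C = np.array([[0, 0, 1]])   # selects y
z = np.array([1, 3, 9])

assert np.array_equal((A @ z) * (B @ z), C @ z)   # the instance is satisfied
```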
2020
CRYPTO
A non-PCP Approach to Succinct Quantum-Safe Zero-Knowledge 📺
Today's most compact zero-knowledge arguments are based on the hardness of the discrete logarithm problem and related classical assumptions. If one is interested in quantum-safe solutions, then all of the known techniques stem from the PCP-based framework of Kilian (STOC 92), which can be instantiated based on the hardness of any collision-resistant hash function. Both approaches produce asymptotically logarithmic-sized arguments but, by exploiting extra algebraic structure, the discrete logarithm arguments are a few orders of magnitude more compact in practice than the generic constructions. In this work, we present the first (poly)-logarithmic post-quantum zero-knowledge arguments that deviate from the PCP approach. At the core of succinct zero-knowledge proofs are succinct commitment schemes (in which the commitment and the opening proof are sub-linear in the message size), and we propose two such constructions based on the hardness of the (Ring-)Short Integer Solution (Ring-SIS) problem, each having certain trade-offs. For commitments to $N$ secret values, the communication complexity of our first scheme is $\tilde{O}(N^{1/c})$ for any positive integer $c$, and $O(\log^2 N)$ for the second. Both of our protocols have somewhat large slack, which in lattice constructions is the ratio of the norm of the extracted secrets to the norm of the secrets that the honest prover uses in the proof; the lower this factor, the smaller the practical parameters can be chosen. For a fixed value of this factor, our $\tilde{O}(N^{1/c})$-argument actually achieves lower communication complexity. Both of these are a significant theoretical improvement over the previously best lattice construction by Bootle et al. (CRYPTO 2018), which gave $O(\sqrt{N})$-sized proofs.
2020
CRYPTO
Lower Bounds for Encrypted Multi-Maps and Searchable Encryption in the Leakage Cell Probe Model 📺
Encrypted multi-maps (EMMs) enable clients to outsource the storage of a multi-map to a potentially untrusted server while maintaining the ability to perform operations in a privacy-preserving manner. EMMs are an important primitive as they are an integral building block for many practical applications such as searchable encryption and encrypted databases. In this work, we formally examine the tradeoffs between privacy and efficiency for EMMs. Currently, all known dynamic EMMs with constant overhead reveal whether or not two operations are performed on the same key, which we denote the global key-equality pattern. In our main result, we present strong evidence that leakage of the global key-equality pattern is inherent for any dynamic EMM construction with $O(1)$ efficiency. In particular, we consider the slightly smaller leakage of the decoupled key-equality pattern, where leakage of key-equality between update and query operations is decoupled and the adversary only learns whether two operations of the same type are performed on the same key or not. We show that any EMM with at most decoupled key-equality pattern leakage incurs $\Omega(\log n)$ overhead in the leakage cell probe model. This is tight, as there exist ORAM-based constructions of EMMs with logarithmic slowdown that leak no more than the decoupled key-equality pattern (and, in fact, much less). Furthermore, we present stronger lower bounds showing that encrypted multi-maps that leak at most the decoupled key-equality pattern but may perform either update or query operations in plaintext still require $\Omega(\log n)$ overhead. Finally, we extend our lower bounds to show that dynamic, response-hiding searchable encryption schemes must also incur $\Omega(\log n)$ overhead, even when either document updates or searches may be performed in plaintext.
2020
CRYPTO
Generically Speeding-Up Repeated Squaring is Equivalent to Factoring: Sharp Thresholds for All Generic-Ring Delay Functions 📺
Despite the fundamental importance of delay functions, repeated squaring in RSA groups (Rivest, Shamir and Wagner '96) is the only candidate offering both a useful structure and a realistic level of practicality. Somewhat unsatisfyingly, its sequentiality is provided directly by assumption (i.e., the function is assumed to be a delay function). We prove sharp thresholds on the sequentiality of all generic-ring delay functions relative to an RSA modulus, based on the hardness of factoring in the standard model. In particular, we show that generically speeding up repeated squaring (even with a preprocessing stage and any polynomial number of parallel processors) is equivalent to factoring. More generally, based on the (essential) hardness of factoring, we prove that any generic-ring function is in fact a delay function, admitting a sharp sequentiality threshold that is determined by our notion of sequentiality depth. Moreover, we show that generic-ring functions admit not only sharp sequentiality thresholds, but also sharp pseudorandomness thresholds.
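The candidate delay function in question is easy to state, and a toy example also shows why a generic speed-up is tied to factoring: whoever knows the factors of N can reduce the exponent modulo phi(N) and skip the T sequential squarings. A minimal sketch (toy modulus of our choosing; real instances use a ~2048-bit RSA modulus and a huge T):

```python
def delay(x: int, T: int, N: int) -> int:
    """Repeated squaring: computes x^(2^T) mod N via T sequential squarings."""
    for _ in range(T):
        x = x * x % N
    return x

N, T, x = 61 * 53, 20, 5   # toy modulus 3233; illustration only
phi = 60 * 52              # phi(N), computable only from the factorization

# The factoring "shortcut": reduce the exponent mod phi(N), then one fast pow.
assert delay(x, T, N) == pow(x, pow(2, T, phi), N)
```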
2020
CRYPTO
Scalable Pseudorandom Quantum States 📺
Efficiently sampling a quantum state that is hard to distinguish from a truly random quantum state is an elementary task in quantum information theory that has both computational and physical uses. This is often referred to as a pseudorandom (quantum) state generator, or PRS generator for short. In existing constructions of PRS generators, security scales with the number of qubits in the states, i.e., the (statistical) security parameter for an $n$-qubit PRS is roughly $n$. Perhaps counter-intuitively, $n$-qubit PRS are not known to imply $k$-qubit PRS even for $k<n$. Therefore, the question of scalability for PRS was thus far open: is it possible to construct $n$-qubit PRS generators with security parameter $\lambda$ for all $n, \lambda$? Indeed, we believe that PRS with tiny (even constant) $n$ and large $\lambda$ can be quite useful. We resolve the problem in this work, showing that any quantum-secure one-way function implies scalable PRS. We follow the paradigm of first showing a statistically secure construction when given oracle access to a random function, and then replacing the random function with a quantum-secure (classical) pseudorandom function to achieve computational security. However, our methods deviate significantly from prior works, since scalable pseudorandom states require randomizing the amplitudes of the quantum state, and not just the phase as in all prior works. We show how to achieve this using Gaussian sampling.
2020
CRYPTO
Overcoming Impossibility Results in Composable Security using Interval-Wise Guarantees 📺
Composable security definitions, at times called simulation-based definitions, provide strong security guarantees that hold in any context. However, they are also met with some skepticism due to many impossibility results; goals such as commitments and zero-knowledge that are achievable in a stand-alone sense were shown to be unachievable composably (without a setup) since provably no efficient simulator exists. In particular, in the context of adaptive security, the so-called "simulator commitment problem" arises: once a party gets corrupted, an efficient simulator is unable to be consistent with its pre-corruption outputs. A natural question is whether such impossibility results are unavoidable or only artifacts of frameworks being too restrictive. In this work, we propose a novel type of composable security statement that evades the commitment problem. Our new type is able to express the composable guarantees of schemes that previously did not have a clear composable understanding. To this end, we leverage the concept of system specifications in the Constructive Cryptography framework, capturing the conjunction of several interval-wise guarantees, each specifying the guarantees between two events. We develop the required theory and present the corresponding new composition theorem. We present three applications of our theory. First, we show in the context of symmetric encryption with adaptive corruption how our notion naturally captures the expected confidentiality guarantee---the messages remain confidential until either party gets corrupted---and that it can be achieved by any standard semantically secure scheme (negating the need for non-committing encryption). Second, we present a composable formalization of (so far only known to be standalone secure) commitment protocols, which is instantiable without a trusted setup like a CRS. We show it to be sufficient for being used in coin tossing over the telephone, one of the early intuitive applications of commitments. Third, we reexamine a result by Hofheinz, Matt, and Maurer [Asiacrypt'15] implying that IND-ID-CPA security is not the right notion for identity-based encryption, unmasking this claim as an unnecessary framework artifact.
2020
CRYPTO
Adaptively Secure Constrained Pseudorandom Functions in the Standard Model 📺
Constrained pseudorandom functions (CPRFs) allow learning "constrained" PRF keys that can evaluate the PRF on a subset of the input space, or based on some predicate. First introduced by Boneh and Waters [AC'13], Kiayias et al. [CCS'13] and Boyle et al. [PKC'14], they have been shown to be a useful cryptographic primitive with many applications. These applications often require CPRFs to be adaptively secure, which allows the adversary to learn PRF values and constrained keys in an arbitrary order. However, there is no known construction of adaptively secure CPRFs based on a standard assumption in the standard model for any non-trivial class of predicates. Moreover, even if we rely on strong tools such as indistinguishability obfuscation (IO), the state-of-the-art construction of adaptively secure CPRFs in the standard model only supports the limited class of NC^1 predicates. In this work, we develop new adaptively secure CPRFs for various predicates from different types of assumptions in the standard model. Our results are summarized below. - We construct adaptively secure and O(1)-collusion-resistant CPRFs for t-conjunctive normal form (t-CNF) predicates from one-way functions (OWFs), where t is a constant. Here, O(1)-collusion-resistance means that we can allow the adversary to obtain a constant number of constrained keys. Note that t-CNF includes bit-fixing predicates as a special case. - We construct adaptively secure and single-key CPRFs for inner-product predicates from the learning with errors (LWE) assumption. Here, single-key means that we only allow the adversary to learn one constrained key. Note that inner-product predicates include t-CNF predicates for a constant t as a special case. Thus, this construction supports a more expressive class of predicates than the first construction, though it loses collusion-resistance and relies on a stronger assumption. - We construct adaptively secure and O(1)-collusion-resistant CPRFs for all circuits from the LWE assumption and indistinguishability obfuscation (IO). The first and second constructions are the first CPRFs for any non-trivial predicates to achieve adaptive security outside of the random oracle model or without relying on strong cryptographic assumptions. Moreover, the first construction is also the first to achieve any notion of collusion-resistance in this setting. In addition, we prove that the first and second constructions satisfy weak 1-key privacy, which roughly means that a constrained key does not reveal the corresponding constraint. The third construction is an improvement over previous adaptively secure CPRFs for less expressive predicates based on IO in the standard model.
2020
CRYPTO
Functional Encryption for Attribute-Weighted Sums from k-Lin 📺
We present functional encryption schemes for attribute-weighted sums, where encryption takes as input N attribute-value pairs (x_i, z_i), where x_i is public and z_i is private; secret keys are associated with arithmetic branching programs f, and decryption returns the weighted sum \sum_{i=1}^N f(x_i) z_i while leaking no additional information about the z_i's. Our main construction achieves (1) compact public parameters and key sizes that are independent of N, where the secret key can decrypt a ciphertext for any a priori unbounded N; (2) short ciphertexts that grow with N and the size of z_i but not x_i; (3) simulation-based security against unbounded collusions; and (4) security based on the standard k-linear assumption in prime-order bilinear groups.
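In plain terms, the functionality releases a sum of private values weighted by a public function of the public attributes, and nothing else; a toy illustration (hypothetical f and data, with f standing in for the arithmetic branching program in the key):

```python
pairs = [(2, 10), (5, 20), (7, 30)]      # (x_i public, z_i private)
f = lambda x: 1 if x > 4 else 0          # stand-in for the ABP f in the key
print(sum(f(x) * z for x, z in pairs))   # decryption reveals only this sum: 50
```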
2020
CRYPTO
Multiparty Generation of an RSA Modulus 📺
We present a new multiparty protocol for the distributed generation of biprime RSA moduli, with security against any subset of maliciously colluding parties assuming oblivious transfer and the hardness of factoring. Our protocol is highly modular, and its uppermost layer can be viewed as a template that generalizes the structure of prior works and leads to a simpler security proof. We introduce a combined sampling-and-sieving technique that eliminates both the inherent leakage in the approach of Frederiksen et al. (Crypto'18), and the dependence upon additively homomorphic encryption in the approach of Hazay et al. (JCrypt'19). We combine this technique with an efficient, privacy-free check to detect malicious behavior retroactively when a sampled candidate is not a biprime, and thereby overcome covert rejection-sampling attacks and achieve both asymptotic and concrete efficiency improvements over the previous state of the art.
2020
CRYPTO
A key-recovery timing attack on post-quantum primitives using the Fujisaki-Okamoto transformation and its application on FrodoKEM 📺
In the implementation of post-quantum primitives, it is well known that all computations handling secret information must run in constant time. Using the Fujisaki-Okamoto transformation or any of its variants, a CPA-secure primitive can be converted into an IND-CCA-secure KEM. In this paper we show that although the transformation does not handle secret information apart from calls to the CPA-secure primitive, it has to be implemented in constant time: if the ciphertext comparison step in the transformation leaks side-channel information, we can launch a key-recovery attack. Several schemes proposed in round 2 of the NIST post-quantum standardization project are susceptible to the proposed attack, and we develop and show the details of the attack on one of them, namely FrodoKEM. We implement it against the reference implementation of FrodoKEM, which is claimed to be secure against all timing attacks. In the experiments, the attack code is able to extract the secret key for all security levels using about $2^{30}$ decapsulation calls.
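The leak in question is the classic early-exit comparison pattern; the sketch below contrasts the vulnerable idiom with the constant-time standard-library alternative (an illustration of the pattern, not the FrodoKEM code itself):

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    # FO-style re-encryption check with an early exit: the running time
    # depends on the position of the first differing byte, which is
    # exactly the signal a timing attacker measures.
    for x, y in zip(a, b):
        if x != y:
            return False
    return len(a) == len(b)

def constant_time_equal(a: bytes, b: bytes) -> bool:
    return hmac.compare_digest(a, b)   # time independent of the contents
```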
2020
CRYPTO
Interactive Proofs for Social Graphs 📺
We consider interactive proofs for social graphs, where the verifier has only oracle access to the graph and can query for the $i^{th}$ neighbor of a vertex $v$, given $i$ and $v$. In this model, we construct a doubly-efficient public-coin two-message interactive protocol for estimating the size of the graph to within a multiplicative factor $\epsilon>0$. The verifier performs $\widetilde{O}(1/\epsilon^2 \cdot \tau \cdot \Delta)$ queries to the graph, where $\tau$ is the mixing time of the graph and $\Delta$ is the average degree of the graph. The prover runs in quasi-linear time in the number of nodes in the graph. Furthermore, we develop a framework for computing the average of essentially any (reasonable) function $f$ of vertices of the graph. Using this framework, we can estimate many health measures of social graphs such as the clustering coefficients and the average degree, where the verifier performs only a small number of queries to the graph. Using the Fiat-Shamir paradigm, we are able to transform the above protocols to a non-interactive argument in the random oracle model. The result is that any social media company (Facebook, Twitter, etc.) can publish, once and for all, a short proof for the size or health of their social network. This proof can be publicly verified by any single user using a small number of queries to the graph.
2020
CRYPTO
NIZK from LPN and Trapdoor Hash via Approximate-Correlation Intractability 📺
We present new Non-Interactive Zero-Knowledge argument systems (NIZK), based on standard assumptions that were previously not known to imply it. In particular, we rely on both the hardness of the learning parity with noise (LPN) assumption and the existence of trapdoor hash functions (TDH, defined by Döttling et al., Crypto 2019). TDH can be based on a number of standard assumptions, including DDH, QR, DCR, and LWE. We rely on the Correlation Intractability (CI) framework for converting $\Sigma$-protocols into NIZK, but deviate from prior works in considering CI for searchable relations where the search function has a probabilistic representation by a simple function class (linear or constant degree in our instantiations). Namely, there is a distribution over simple functions that computes each output bit of the search function with all but small (constant) probability. We present a new tool for proving CI for such function classes via a notion that we call Approximate-Correlation Intractability. This notion requires that CI holds even against approximations of a given function class. We show that approximate-correlation intractability for just constant degree functions suffices if the underlying $\Sigma$-protocol is implemented using an extractable commitment scheme with approximately low-degree extraction, and that such a commitment scheme can be constructed based on LPN. We then show how to construct approximate CI hash functions for this class from any suitable rate-1 TDH (with an enhanced correctness property that is satisfied by all existing constructions).
2020
CRYPTO
Random Self-reducibility of Ideal-SVP via Arakelov Random Walks 📺
Fixing a number field, the space of all ideal lattices, up to isometry, is naturally an Abelian group, called the *Arakelov class group*. This fact, well known to number theorists, has so far not been explicitly used in the literature on lattice-based cryptography. Remarkably, the Arakelov class group is a combination of two groups that have already led to significant cryptanalytic advances: the class group and the unit torus. In the present article, we show that the Arakelov class group has more to offer. We start with the development of a new versatile tool: we prove that, subject to the Riemann Hypothesis for Hecke L-functions, certain random walks on the Arakelov class group have a rapid mixing property. We then exploit this result to relate the average-case and the worst-case of the Shortest Vector Problem in ideal lattices. Our reduction appears particularly sharp: for Hermite-SVP in ideal lattices of certain cyclotomic number fields, it loses no more than a $\tilde O(\sqrt n)$ factor on the Hermite approximation factor. Furthermore, we suggest that this rapid-mixing theorem should find other applications in cryptography and in algorithmic number theory.
2020
CRYPTO
Nearly Optimal Robust Secret Sharing against Rushing Adversaries 📺
Robust secret sharing is a strengthening of standard secret sharing that allows the shared secret to be recovered even if some of the shares being used in the reconstruction have been adversarially modified. In this work, we study the setting where, out of all the $n$ shares, the adversary is allowed to adaptively corrupt and modify up to $t$ shares, where $n = 2t+1$. (If the adversary is allowed to modify any more shares, then correct reconstruction would be impossible.) Further, we deal with rushing adversaries, meaning that the adversary is allowed to see the honest parties' shares before modifying its own shares. It is known that when $n = 2t+1$, to share a secret of length $m$ bits and recover it with error less than $2^{-\lambda}$, shares of size at least $m+\lambda$ bits are needed. Recently, Bishop, Pastro, Rajaraman, and Wichs (EUROCRYPT 2016) constructed a robust secret sharing scheme with shares of size $m + O(\lambda \cdot \mathrm{polylog}(n,m,\lambda))$ bits that is secure in this setting against non-rushing adversaries. Later, Fehr and Yuan (EUROCRYPT 2019) constructed a scheme that is secure against rushing adversaries, but has shares of size $m + O(\lambda \cdot n^{\epsilon} \cdot \mathrm{polylog}(n,m,\lambda))$ bits for an arbitrary constant $\epsilon > 0$. They also showed a variant of their construction with share size $m + O(\lambda \cdot \mathrm{polylog}(n,m,\lambda))$ bits, but with super-polynomial reconstruction time. We present a robust secret sharing scheme that is simultaneously close-to-optimal in all of these respects: it is secure against rushing adversaries, has shares of size $m + O(\lambda \log n (\log n + \log m))$ bits, and has polynomial-time sharing and reconstruction. Central to our construction is a polynomial-time algorithm for a problem on semi-random graphs that arises naturally in the paradigm of local authentication of shares used by us and in the aforementioned work.
2020
CRYPTO
Efficient Constant-Round MPC with Identifiable Abort and Public Verifiability 📺
Recent years have seen a tremendous growth in the interest in secure multiparty computation (MPC) and its applications. While much progress has been made concerning its efficiency, many current, state-of-the-art protocols are vulnerable to Denial of Service attacks, where a cheating party may prevent the honest parties from learning the output of the computation, whilst remaining anonymous. The security model of identifiable abort aims to prevent these attacks, by allowing honest parties to agree upon the identity of a cheating party, who can then be excluded in the future. Several existing MPC protocols offer security with identifiable abort against a dishonest majority of corrupted parties. However, all of these protocols have a round complexity that scales linearly with the depth of the circuit (and are therefore unsuitable for use in high latency networks) or use cryptographic primitives or techniques that have a high computational overhead. In this work, we present the first efficient MPC protocols with identifiable abort in the dishonest majority setting, which run in a constant number of rounds and make only black-box use of cryptographic primitives. Our main construction is built from highly efficient primitives in a careful way to achieve identifiability at a low cost. In particular, we avoid the use of public-key operations outside of a setup phase, incurring a relatively low overhead on top of the fastest currently known constant-round MPC protocols based on garbled circuits. Our construction also avoids the use of adaptively secure primitives and heavy zero-knowledge machinery, which was inherent in previous works. In addition, we show how to upgrade our protocol to achieve public verifiability using a public bulletin board, allowing any external party to verify correctness of the computation or identify a cheating party.
2020
CRYPTO
Shorter Non-Interactive Zero-Knowledge Arguments and ZAPs for Algebraic Languages 📺
We put forth a new framework for building pairing-based non-interactive zero-knowledge (NIZK) arguments for a wide class of algebraic languages, which are an extension of linear languages, containing disjunctions of linear languages and more. Our approach differs from the Groth-Sahai methodology in that we rely on pairings to compile a Sigma-protocol into a NIZK. Our framework enjoys a number of interesting features: - conceptual simplicity: parameters derive from the Sigma-protocol; - proofs as short as those resulting from the Fiat-Shamir heuristic applied to the underlying Sigma-protocol; - fully adaptive soundness and perfect zero-knowledge in the common random string model with a single random group element as CRS; - yields simple and efficient two-round, public-coin, publicly-verifiable perfect witness-indistinguishable (WI) arguments (ZAPs) in the plain model. To our knowledge, this is the first construction of two-round statistical witness-indistinguishable arguments from pairing assumptions. Our proof system relies on a new (static, falsifiable) assumption over pairing groups which generalizes the standard kernel Diffie-Hellman assumption in a natural way and holds in the generic group model (GGM) and in the algebraic group model (AGM). Replacing Groth-Sahai NIZKs with our new proof system allows us to improve several important cryptographic primitives. In particular, we obtain the shortest tightly-secure structure-preserving signature scheme (a core component of anonymous credentials), the shortest tightly-secure quasi-adaptive NIZK with unbounded simulation soundness (which in turn implies the shortest tightly-mCCA-secure cryptosystem), and shorter ring signatures.
2020
CRYPTO
Non-Malleability against Polynomial Tampering 📺
We present the first explicit construction of a non-malleable code that can handle tampering functions that are bounded-degree polynomials. Prior to our work, this was only known for degree-1 polynomials (affine tampering functions), due to Chattopadhyay and Li (STOC 2017). As a direct corollary, we obtain an explicit non-malleable code that is secure against tampering by bounded-size arithmetic circuits. We show applications of our non-malleable code in constructing non-malleable secret sharing schemes that are robust against bounded-degree polynomial tampering. In fact our result is stronger: we can handle adversaries that can adaptively choose the polynomial tampering function based on initial leakage of a bounded number of shares. Our results are derived from explicit constructions of seedless non-malleable extractors that can handle bounded-degree polynomial tampering functions. Prior to our work, no such result was known even for degree-2 (quadratic) polynomials.
2020
CRYPTO
The Summation-Truncation Hybrid: Reusing Discarded Bits for Free 📺
A well-established PRP-to-PRF conversion design is truncation: one evaluates an $n$-bit pseudorandom permutation on a certain input, and truncates the result to $a$ bits. The construction is known to achieve tight $2^{n-a/2}$ security. Truncation has gained popularity due to its appearance in the GCM-SIV key derivation function (ACM CCS 2015). This key derivation function makes four evaluations of AES, truncates the outputs to $n/2$ bits, and concatenates these to get a $2n$-bit subkey. In this work, we demonstrate that truncation is wasteful. In more detail, we present the Summation-Truncation Hybrid (STH). At a high level, the construction consists of two parallel evaluations of truncation, where the truncated $(n-a)$-bit chunks are not discarded but rather summed together and appended to the output. We prove that STH achieves a similar security level as truncation, and thus that the $n-a$ bits of extra output come for free. In the application to GCM-SIV, the current key derivation can be used to output $3n$ bits of random material, or it can be reduced to three primitive evaluations. Both changes come with no security loss.
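A minimal sketch of the idea (toy parameters, a trivial invertible map standing in for the PRP, and XOR playing the role of the sum; all of these are placeholders, not the paper's instantiation):

```python
n, a = 16, 8                     # toy block and truncation lengths

def prp(x: int) -> int:
    # Stand-in permutation on n bits (multiplication by an odd constant
    # is a bijection mod 2^n); NOT a real PRP such as AES.
    return (x * 4097 + 3) % 2**n

def truncation(x: int) -> int:
    return prp(x) >> (n - a)     # keep a bits, discard n - a bits

def sth(x1: int, x2: int):
    """Two truncations, plus the sum (here XOR) of the discarded chunks."""
    y1, y2 = prp(x1), prp(x2)
    mask = (1 << (n - a)) - 1
    return y1 >> (n - a), y2 >> (n - a), (y1 & mask) ^ (y2 & mask)

print(sth(1, 2))   # 2a + (n - a) output bits from just two PRP calls
```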
2020
CRYPTO
Security Analysis and Improvements for the IETF MLS Standard for Group Messaging 📺
Secure messaging (SM) protocols allow users to communicate securely over untrusted infrastructure. In contrast to most other secure communication protocols (such as TLS, SSH, or Wireguard), SM sessions may be long-lived (e.g., years) and highly asynchronous. In order to deal with likely state compromises of users during the lifetime of a session, SM protocols not only protect authenticity and privacy, but also guarantee forward secrecy (FS) and post-compromise security (PCS). The former ensures that messages sent and received before a state compromise remain secure, while the latter ensures that users can recover from state compromise as a consequence of normal protocol usage. SM has received considerable attention in the two-party case, where prior work has studied the well-known double-ratchet paradigm in particular, and SM as a cryptographic primitive in general. Unfortunately, this paradigm does not scale well to the problem of secure group messaging (SGM). In order to address the lack of satisfactory SGM protocols, the IETF has launched the message-layer security (MLS) working group, which aims to standardize an eponymous SGM protocol. In this work we analyze the TreeKEM protocol, which is at the core of the SGM protocol proposed by the MLS working group. On a positive note, we show that TreeKEM achieves PCS in isolation (and slightly more). However, we observe that the current version of TreeKEM does not provide an adequate form of FS. More precisely, our work proceeds by formally capturing the exact security of TreeKEM as a so-called continuous group key agreement (CGKA) protocol, which we believe to be a primitive of independent interest. To address the insecurity of TreeKEM, we propose a simple modification to TreeKEM inspired by recent work of Jost et al. (EUROCRYPT '19) and an idea due to Kohbrok (MLS Mailing List). We then show that the modified version of TreeKEM comes with almost no efficiency degradation but achieves optimal (according to the MLS specification) CGKA security, including FS and PCS. Our work also lays out how a CGKA protocol can be used to design a full SGM protocol.
2020
CRYPTO
LWE with Side Information: Attacks and Concrete Security Estimation 📺
We propose a framework for cryptanalysis of lattice-based schemes when side information, in the form of "hints" about the secret and/or error, is available. Our framework generalizes the so-called primal lattice reduction attack, and allows the progressive integration of hints before running a final lattice reduction step. Our techniques for integrating hints include sparsifying the lattice, projecting onto and intersecting with hyperplanes, and/or altering the distribution of the secret vector. Our main contribution is to propose a toolbox and a methodology to integrate such hints into lattice reduction attacks and to predict the performance of those lattice attacks with side information. While initially designed for side-channel information, our framework can also be used in other cases: exploiting decryption failures, or simply exploiting constraints imposed by certain schemes (LAC, Round5, NTRU) that were previously not known to (slightly) benefit from lattice attacks. We implement a Sage 9.0 toolkit to actually mount such attacks with hints when computationally feasible, and to predict their performance on larger instances. We provide several end-to-end application examples, such as an improvement of a single-trace attack on Frodo by Bos et al. (SAC 2018). Contrary to ad-hoc practical attacks exploiting side-channel leakage, our work is a generic way to estimate the security loss even given very little side-channel information.
2020
CRYPTO
Non-Malleable Secret Sharing against Bounded Joint-Tampering Attacks in the Plain Model 📺
Secret sharing enables a dealer to split a secret into a set of shares, in such a way that certain authorized subsets of share holders can reconstruct the secret, whereas all unauthorized subsets cannot. Non-malleable secret sharing (Goyal and Kumar, STOC 2018) additionally requires that, even if the shares have been tampered with, the reconstructed secret is either the original or a completely unrelated one. In this work, we construct non-malleable secret sharing tolerating $p$-time joint-tampering attacks in the plain model (in the computational setting), where the latter means that, for any $p>0$ fixed a priori, the attacker can tamper with the same target secret sharing up to $p$ times. In particular, assuming one-to-one one-way functions, we obtain: - A secret sharing scheme for threshold access structures which tolerates joint $p$-time tampering with subsets of the shares of maximal size (i.e., matching the privacy threshold of the scheme). This holds in a model where the attacker commits to a partition of the shares into non-overlapping subsets, and keeps tampering jointly with the shares within such a partition (so-called selective partitioning). - A secret sharing scheme for general access structures which tolerates joint $p$-time tampering with subsets of the shares of size $O(\sqrt{\log n})$, where $n$ is the number of parties. This holds in a stronger model where the attacker is allowed to adaptively change the partition within each tampering query, under the restriction that once a subset of the shares has been tampered with jointly, that subset is always either tampered jointly or not modified by other tampering queries (so-called semi-adaptive partitioning). At the heart of our result for selective partitioning lies a new technique showing that every one-time statistically non-malleable secret sharing against joint tampering is in fact leakage-resilient non-malleable (i.e., the attacker can leak jointly from the shares prior to tampering). We believe this may be of independent interest, and in fact we show it implies lower bounds on the share size and randomness complexity of statistically non-malleable secret sharing against independent tampering.
2020
CRYPTO
Stacked Garbling: Garbled Circuit Proportional to Longest Execution Path 📺
Secure two-party computation (2PC) of arbitrary programs can be efficiently achieved using garbled circuits (GC). The bottleneck of GC efficiency is communication. It is widely believed that it is necessary to transmit the entire GC during 2PC, even for conditional branches that are not taken. This folklore belief is false. We propose a novel GC technique, stacked garbling, that eliminates the communication cost of inactive conditional branches. We extend the ideas of conditional GC evaluation explored in (Kolesnikov, Asiacrypt 18) and (Heath and Kolesnikov, Eurocrypt 20). Unlike these works, ours is for general 2PC where no player knows which conditional branch is taken. Our garbling scheme, Stack, requires communication proportional to the longest execution path rather than to the entire circuit. Stack is compatible with state-of-the-art techniques, such as free XOR and half-gates. Stack is a garbling scheme. As such, it can be plugged into a variety of existing protocols, and the resulting round complexity is the same as that of standard GC. The approach does incur computation cost quadratic in the conditional branching factor, versus linear in standard schemes, but the tradeoff is beneficial for most programs: GC computation even on weak hardware is faster than GC transmission on fast channels. We implemented Stack in C++. Stack reduces communication cost by approximately the branching factor: for 16 branches, communication is reduced by 10.5x. In terms of wall-clock time for circuits with branching factor 16 over a 50 Mbps WAN on a laptop, Stack outperforms state-of-the-art half-gates-based 2PC by more than 4x.
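The headline communication claim is just arithmetic over branch sizes, which the hedged sketch below makes explicit; the per-gate constant and branch sizes are assumptions for illustration, and the paper's measured 10.5x (rather than a naive 16x) reflects material that cannot be stacked.

    # Standard garbling ships every branch; stacked garbling ships
    # material proportional to the longest branch only.
    branch_gates = [10_000] * 16      # assumed: 16 equal-sized branches
    bytes_per_gate = 32               # assumed half-gates-style constant

    standard = sum(branch_gates) * bytes_per_gate
    stacked = max(branch_gates) * bytes_per_gate
    print(f"standard: {standard} B, stacked: {stacked} B, "
          f"saving: {standard / stacked:.1f}x")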
2020
CRYPTO
Out of Oddity -- New Cryptanalytic Techniques against Symmetric Primitives Optimized for Integrity Proof Systems 📺
The security and performance of many integrity proof systems like SNARKs, STARKs and Bulletproofs highly depend on the underlying hash function. For this reason several new proposals have recently been developed. These primitives obviously require an in-depth security evaluation, especially since their implementation constraints have led to less standard design approaches. This work compares the security levels offered by two recent families of such primitives, namely GMiMC and HadesMiMC. We exhibit low-complexity distinguishers against the GMiMC and HadesMiMC permutations for most parameters proposed in recently launched public challenges for STARK-friendly hash functions. In the more concrete setting of the sponge construction corresponding to the practical use in the ZK-STARK protocol, we present a practical collision attack on a round-reduced version of GMiMC and a preimage attack on some instances of HadesMiMC. To achieve those results, we adapt and generalize several cryptographic techniques to fields of odd characteristic.
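For readers unfamiliar with these designs, the round shape under attack is low-degree algebra over a large field. The toy sketch below shows a MiMC-style cube round over F_p; the prime and round keys are illustrative assumptions, not a proposed instance of GMiMC or HadesMiMC.

    p = 2**64 - 59          # prime with gcd(3, p - 1) = 1, so the cube
                            # map x -> x^3 is a permutation of F_p

    def mimc_style_permute(x, round_keys):
        # Each round adds a key and cubes; the low algebraic degree is
        # precisely what odd-characteristic cryptanalysis exploits.
        for k in round_keys:
            x = pow((x + k) % p, 3, p)
        return x

    print(mimc_style_permute(12345, [7, 11, 13]))   # toy 3-round call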
2020
CRYPTO
Fast and Secure Updatable Encryption 📺
Updatable encryption allows a client to outsource ciphertexts to some untrusted server and periodically rotate the encryption key. The server can update ciphertexts from an old key to a new key with the help of an update token, received from the client, which should not reveal anything about plaintexts to an adversary. We provide a new and highly efficient suite of updatable encryption schemes that we collectively call SHINE. In the variant designed for short messages, ciphertext generation consists of applying one permutation and one exponentiation (per message block), while updating ciphertexts requires just one exponentiation. Variants for longer messages provide much stronger security guarantees than prior work that has comparable efficiency. We present a new confidentiality notion for updatable encryption schemes that implies prior notions. We prove that SHINE is secure under our new confidentiality definition while also providing ciphertext integrity.
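The update mechanism is easiest to see in exponents. Below is a minimal, insecure sketch in the spirit of SHINE over a toy subgroup: encryption raises a group encoding of the message to the key, and the update token is the ratio of consecutive keys. The group, the stand-in encoding pi, and the parameters are all illustrative assumptions; the real scheme's permutation, nonces, and integrity mechanism are omitted.

    import random

    p, q = 467, 233                      # toy group: p = 2q + 1, both prime

    def pi(m):
        # Stand-in for the scheme's permutation: map the message into the
        # order-q subgroup generated by 4 (toy only, not usable at scale).
        return pow(4, m + 1, p)

    def keygen(): return random.randrange(1, q)
    def enc(k, m): return pow(pi(m), k, p)
    def token(k_old, k_new): return k_new * pow(k_old, -1, q) % q
    def update(delta, c): return pow(c, delta, p)    # server-side rotation
    def dec(k, c):
        h = pow(c, pow(k, -1, q), p)
        return next(m for m in range(q) if pi(m) == h)   # toy inversion

    k1, k2 = keygen(), keygen()
    c1 = enc(k1, 42)
    c2 = update(token(k1, k2), c1)       # the token alone reveals nothing
    assert dec(k2, c2) == 42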
2020
CRYPTO
Amplifying the Security of Functional Encryption, Unconditionally 📺
Security amplification is a fundamental problem in cryptography. In this work, we study security amplification for functional encryption (FE). We show two main results: - For any constant $\epsilon \in (0,1)$, we can amplify an $\epsilon$-secure FE scheme for $\mathsf{P}/\mathsf{poly}$ which is secure against all polynomial-sized adversaries to a fully secure FE scheme for $\mathsf{P}/\mathsf{poly}$, unconditionally. - For any constant $\epsilon \in (0,1)$, we can amplify an $\epsilon$-secure FE scheme for $\mathsf{P}/\mathsf{poly}$ which is secure against subexponential-sized adversaries to a subexponentially secure FE scheme for $\mathsf{P}/\mathsf{poly}$, unconditionally. Furthermore, both of our amplification results preserve compactness of the underlying FE scheme. Previously, amplification results for FE were only known assuming subexponentially secure LWE. Along the way, we introduce a new form of homomorphic secret sharing called set homomorphic secret sharing that may be of independent interest. Additionally, we introduce a new technique, which allows one to argue security amplification of nested primitives, and prove a general theorem that can be used to analyze the security amplification of parallel repetitions.
2020
CRYPTO
A Classification of Computational Assumptions in the Algebraic Group Model 📺
We give a taxonomy of computational assumptions in the algebraic group model (AGM). We first analyze the Uber assumption family for bilinear groups defined by Boyen and then extend it in multiple ways to cover assumptions such as Gap Diffie-Hellman and the LRSW assumption. We show that in the AGM every member of these families reduces to the q-discrete logarithm (DL) problem, for some q that depends on the degrees of the polynomials defining the assumption. Using the meta-reduction technique, we then separate (q+1)-DL from q-DL, which thus yields a classification of all members of the extended Uber-assumption families. We finally show that there are strong assumptions, such as one-more DL, that provably fall outside our classification, as we prove that they cannot be reduced to q-DL even in the AGM.
2020
CRYPTO
Improved Primitives for MPC over Mixed Arithmetic-Binary Circuits 📺
This work introduces novel techniques to improve the translation between arithmetic and binary data types in multi-party computation. To this end, we introduce a new approach to performing these conversions, using what we call \emph{extended doubly-authenticated bits} (edaBits), which correspond to shared integers in the arithmetic domain whose bit decomposition is shared in the binary domain. These can be used to considerably increase the efficiency of non-linear operations such as truncation, secure comparison and bit-decomposition. Our edaBits are similar to the \emph{daBits} technique introduced by Rotaru et al.~(Indocrypt 2019). However, our main observations are that (1) applications that benefit from daBits can also benefit from edaBits in the same way, and (2) we can generate edaBits directly in a much more efficient way than computing them from a set of daBits. Technically, the second contribution is much more challenging, and involves a novel cut-and-choose technique that may be of independent interest; it requires taking advantage of natural tamper-resilient properties of binary circuits that occur in our construction to obtain the best level of efficiency. Finally, we show how our edaBits can be applied to efficiently implement various non-linear protocols of interest, and we thoroughly analyze their correctness for both signed and unsigned integers. The results of this work can be applied to any corruption threshold, although they seem best suited to dishonest-majority protocols such as SPDZ. We implement and benchmark our constructions, and experimentally verify that our techniques yield a substantial increase in efficiency. Our edaBits save in communication by a factor that lies between $2$ and $170$ for secure comparisons with respect to a purely arithmetic approach, and between $2$ and $60$ with respect to using daBits. Improvements in throughput per second are more subdued but still as high as a factor of $47$. We also apply our novel machinery to the tasks of biometric matching and convolutional neural networks, obtaining a noticeable improvement as well.
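What an edaBit *is* can be shown with a trusted dealer in a few lines: additive shares of a value modulo 2^K alongside XOR shares of its bits. The sketch below is only the data structure; the whole point of the paper is producing these correctly without a dealer, via its cut-and-choose protocol. Word size and party count are assumptions.

    import random

    K = 16            # assumed arithmetic word size (modulus 2^K)
    PARTIES = 3

    def dealer_edabit():
        # Toy, dealer-based illustration of the edaBit structure.
        r = random.randrange(2**K)
        # additive sharing of r modulo 2^K
        arith = [random.randrange(2**K) for _ in range(PARTIES - 1)]
        arith.append((r - sum(arith)) % 2**K)
        # XOR sharing of each bit of r
        bit_shares = []
        for i in range(K):
            b = (r >> i) & 1
            sh = [random.randrange(2) for _ in range(PARTIES - 1)]
            sh.append(b ^ (sum(sh) & 1))
            bit_shares.append(sh)
        return arith, bit_shares

    arith, bits = dealer_edabit()
    r = sum(arith) % 2**K
    assert all(((r >> i) & 1) == (sum(sh) & 1) for i, sh in enumerate(bits))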
2020
CRYPTO
Leakage-Resilient Key Exchange and Two-Seed Extractors 📺
Can Alice and Bob agree on a uniformly random secret key without having any truly secret randomness to begin with? Here we consider a setting where Eve can get partial leakage on the internal state of both Alice and Bob individually before the protocol starts. They then run a protocol using their states without any additional randomness and need to agree on a shared key that looks uniform to Eve, even after observing the leakage and the protocol transcript. We focus on non-interactive (one round) key exchange (NIKE), where Alice and Bob send one message each without waiting for one another. We first consider this problem in the symmetric-key setting, where the states of Alice and Bob include a shared secret as well as individual uniform randomness. However, since Eve gets leakage on these states, Alice and Bob need to perform privacy amplification to derive a fresh secret key from them. Prior solutions require Alice and Bob to sample fresh uniform randomness during the protocol, while in our setting all of their randomness was already part of their individual states a priori and was therefore subject to leakage. We show an information-theoretic solution to this problem using a novel primitive that we call a two-seed extractor, which we in turn construct by drawing a connection to communication-complexity lower bounds in the number-on-forehead (NOF) model. We then turn to studying this problem in the public-key setting, where the states of Alice and Bob consist of independent uniform randomness. Unfortunately, we give a black-box separation showing that leakage-resilient NIKE in this setting cannot be proven secure via a black-box reduction under any game-based assumption when the leakage is super-logarithmic. This includes virtually all assumptions used in cryptography, and even very strong assumptions such as indistinguishability obfuscation (iO). Nevertheless, we also provide positive results that get around the above separation: - We show that every key exchange protocol (e.g., Diffie-Hellman) is secure when the leakage amount is logarithmic, or potentially even greater if we assume sub-exponential security without leakage. - We notice that the black-box separation does not extend to schemes in the common reference string (CRS) model, or to schemes with preprocessing, where Alice and Bob can individually pre-process their random coins to derive their secret state prior to leakage. We give a solution in the CRS model with preprocessing using bilinear maps. We also give solutions in just the CRS model alone (without preprocessing) or just with preprocessing (without a CRS), using iO and lossy functions.
2020
EUROCRYPT
Key-Homomorphic Pseudorandom Functions from LWE with Small Modulus 📺
Pseudorandom functions (PRFs) are fundamental objects in cryptography that play a central role in symmetric-key cryptography. Although PRFs can be constructed from one-way functions generically, these black-box constructions are usually inefficient and require deep circuits to evaluate compared to direct PRF constructions that rely on specific algebraic assumptions. From lattices, one can directly construct PRFs from the Learning with Errors (LWE) assumption (or its ring variant) using the result of Banerjee, Peikert, and Rosen (Eurocrypt 2012) and its subsequent works. However, all existing PRFs in this line of work rely on the hardness of the LWE problem where the associated modulus is super-polynomial in the security parameter. In this work, we provide two new PRF constructions from the LWE problem that each focuses on either minimizing the depth of its evaluation circuit or providing key-homomorphism while relying on the hardness of the LWE problem with only a polynomial modulus. Along the way, we introduce a new variant of the LWE problem called the Learning with Rounding and Errors (LWRE) problem. We show that for certain settings of parameters, the LWRE problem is as hard as the LWE problem. We then show that the hardness of the LWRE problem naturally induces a pseudorandom synthesizer that can be used to construct a low-depth PRF. The techniques that we introduce to study the LWRE problem can then be used to derive variants of existing key-homomorphic PRFs whose security can be reduced from the hardness of the LWE problem with a much smaller modulus.
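The rounding at the heart of LWR, and the approximate key homomorphism it buys, fit in a few lines. This is a hedged toy with assumed parameters, not either of the paper's constructions: it only illustrates why F_{s1}(a) + F_{s2}(a) and F_{s1+s2}(a) agree up to a small rounding error modulo p.

    import random

    N, Q, P = 64, 2**16, 2**8        # assumed toy dimension and moduli

    def rnd(x):
        # round from Z_q down to Z_p: floor(p/q * x) mod p
        return (x * P) // Q % P

    def lwr(s, a):
        # deterministic "rounding noise" replaces the error term of LWE
        return rnd(sum(ai * si for ai, si in zip(a, s)) % Q)

    s1 = [random.randrange(Q) for _ in range(N)]
    s2 = [random.randrange(Q) for _ in range(N)]
    a = [random.randrange(Q) for _ in range(N)]
    lhs = (lwr(s1, a) + lwr(s2, a)) % P
    rhs = lwr([(x + y) % Q for x, y in zip(s1, s2)], a)
    assert (lhs - rhs) % P in (0, P - 1)   # off by at most one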
2020
EUROCRYPT
Stacked Garbling for Disjunctive Zero-Knowledge Proofs 📺
Zero-knowledge (ZK) proofs (ZKP) have received wide attention, focusing on non-interactivity, short proof size, and fast verification time. We focus on the fastest total proof time, in particular for large Boolean circuits. Under this metric, Garbled Circuit (GC)-based ZKP (Jawurek et al., [JKO], CCS 2013) remained the state-of-the-art technique due to the low-constant linear scaling of computing the garbling. We improve GC-ZKP for proof statements with conditional clauses. Our communication is proportional to the longest branch rather than to the entire proof statement. This is most useful when the number m of branches is large, resulting in up to factor m× improvement over JKO. In our proof-of-concept illustrative application, prover P demonstrates knowledge of a bug in a codebase consisting of any number of snippets of actual C code. Our computation cost is linear in the size of the codebase and communication is constant in the number of snippets. That is, we require only enough communication for a single largest snippet! Our conceptual contribution is stacked garbling for ZK, a privacy-free circuit garbling scheme that can be used with the JKO GC-ZKP protocol to construct more efficient ZKP. Given a Boolean circuit C and computational security parameter k, our garbling is L · k bits long, where L is the length of the longest execution path in C. All prior concretely efficient garbling schemes produce garblings of size |C| · k. The computational cost of our scheme is not increased over prior state-of-the-art. We implement our GC-ZKP and demonstrate significantly improved (m× over JKO) ZK performance for functions with branching factor m. Compared with recent ZKP (STARK, Libra, KKW, Ligero, Aurora, Bulletproofs), our scheme offers much better proof times for larger circuits (35-1000× or more, depending on circuit size and compared scheme). For our illustrative application, we consider four C code snippets, each of about 30-50 LOC; one snippet allows an invalid memory dereference. The entire proof takes 0.15 seconds and communication is 1.5 MB.
2020
EUROCRYPT
Lightweight Authenticated Encryption Mode Suitable for Threshold Implementation 📺
This paper proposes tweakable block cipher (TBC) based modes \textsf{PFB}\_\textsf{Plus} and \textsf{PFB}$\omega$ that are efficient in threshold implementations (TI). Let $t$ be the algebraic degree of a target function, e.g., $t=1$ (resp. $t>1$) for a linear (resp. non-linear) function. The $d$-th order TI encodes the internal state into $d t + 1$ shares. Hence, the area size increases proportionally to the number of shares. This implies that TBC based modes can be smaller than block cipher (BC) based modes in TI because a TBC requires an $s$-bit block to ensure $s$-bit security, e.g., \textsf{PFB} and \textsf{Romulus}, while a BC requires a $2s$-bit block. However, even with those TBC based modes, the minimum we can reach is three shares of an $s$-bit state with $t=2$ and the first-order TI ($d=1$). Our first design \textsf{PFB}\_\textsf{Plus} aims to break the barrier of the $3s$-bit state in TI. The block size of the underlying TBC is $s/2$ bits and the output of the TBC is linearly expanded to $s$ bits. This expanded state requires only 2 shares in the first-order TI, which makes the total state size $2.5s$ bits. We also provide a rigorous security proof of \textsf{PFB}\_\textsf{Plus}. Our second design \textsf{PFB}$\omega$ further increases a parameter $\omega$: the ratio of the security level $s$ to the block size of the underlying TBC. We prove security of \textsf{PFB}$\omega$ for any $\omega$ under some assumptions on the underlying TBC and on the parameters used to update the state. Next, we show a concrete instantiation of \textsf{PFB}\_\textsf{Plus} for 128-bit security. It requires a TBC with a 64-bit block, a 128-bit key and a 128-bit tweak, a combination that no existing TBC supports. We design a new TBC by extending \textsf{SKINNY} and provide a basic security evaluation. Finally, we give hardware benchmarks of \textsf{PFB}\_\textsf{Plus} in the first-order TI to show that the TI of \textsf{PFB}\_\textsf{Plus} is smaller than that of \textsf{PFB} by more than one thousand gates and is the smallest among schemes with 128-bit security.
2020
EUROCRYPT
On Instantiating the Algebraic Group Model from Falsifiable Assumptions 📺
We provide a standard-model implementation (of a relaxation) of the algebraic group model (AGM, [Fuchsbauer, Kiltz, Loss, CRYPTO 2018]). Specifically, we show that every algorithm that uses our group is algebraic, and hence "must know" a representation of its output group elements in terms of its input group elements. Here, "must know" means that a suitable extractor can extract such a representation efficiently. We stress that our implementation relies only on falsifiable assumptions in the standard model, and in particular does not use any knowledge assumptions. As a consequence, our group allows one to transport a number of results obtained in the AGM into the standard model, under falsifiable assumptions. For instance, we show that in our group, several Diffie-Hellman-like assumptions (including computational Diffie-Hellman) are equivalent to the discrete logarithm assumption. Furthermore, we show that our group allows one to prove the Schnorr signature scheme tightly secure in the random oracle model. Our construction relies on indistinguishability obfuscation, and hence should not be considered a practical group itself. However, our results show that the AGM is a realistic computational model (since it can be instantiated in the standard model), and that results obtained in the AGM are also possible with standard-model groups.
2020
EUROCRYPT
How to Extract Useful Randomness from Unreliable Sources 📺
For more than 30 years, cryptographers have been looking for public sources of uniform randomness in order to use them as a set-up to run appealing cryptographic protocols without relying on trusted third parties. Unfortunately, it is fair to say that assuming the existence of physical phenomena producing public uniform randomness is far from realistic. It is known that uniform randomness cannot be extracted from a single weak source. A well-studied way to overcome this is to consider several independent weak sources. However, this means we must trust the various sampling processes of weak randomness from physical processes. Motivated by the above state of affairs, this work considers a set-up where players can access multiple {\em potential} sources of weak randomness, several of which may be jointly corrupted by a computationally unbounded adversary. We introduce {\em SHELA} (Somewhere Honest Entropic Look Ahead) sources to model this situation. We show that there is no hope of extracting uniform randomness from a {\em SHELA} source. Instead, we focus on the task of {\em Somewhere-Extraction} (i.e., outputting several candidate strings, some of which are uniformly distributed -- yet we do not know which). We give explicit constructions of {\em Somewhere-Extractors} for {\em SHELA} sources with good parameters. Then, we present applications of the above somewhere-extractor where the public uniform randomness can be replaced by the output of such extraction from corruptible sources, greatly outperforming trivial solutions. The output of somewhere-extraction is also useful in other settings, such as a suitable source of random coins for many randomized algorithms. On another front, we comprehensively study the problem of {\em Somewhere-Extraction} from a {\em weak} source, resulting in a series of bounds. Our bounds highlight the fact that, in most regimes of parameters (including those relevant for applications), {\em SHELA} sources significantly outperform {\em weak} sources of comparable parameters both in the process of {\em Somewhere-Extraction} and in the task of amplification of success probability in randomized algorithms. Moreover, the low quality of somewhere-extraction from weak sources excludes its use in various efficient applications.
2020
EUROCRYPT
Everybody’s a Target: Scalability in Public-Key Encryption 📺
For $1 \le m \le n$, we consider a natural $m$-out-of-$n$ multi-instance scenario for a public-key encryption (PKE) scheme. An adversary, given $n$ independent instances of PKE, wins if he breaks at least $m$ out of the $n$ instances. In this work, we are interested in the scaling factor of PKE schemes, $\mathsf{SF}$, which measures how well the difficulty of breaking $m$ out of the $n$ instances scales with $m$. That is, a scaling factor $\mathsf{SF}=\ell$ indicates that breaking $m$ out of $n$ instances is at least $\ell$ times more difficult than breaking a single instance. A PKE scheme with a small scaling factor hence provides an ideal target for mass surveillance. In fact, the Logjam attack (CCS 2015) implicitly exploited, among other things, an almost constant scaling factor of ElGamal over finite fields (with shared group parameters). For Hashed ElGamal over elliptic curves, we use the generic group model to describe how the scaling factor depends on the scheme's granularity. In low granularity, meaning each public key contains its own independent group parameters, the scheme has optimal scaling factor $\mathsf{SF}=m$; in medium and high granularity, meaning all public keys share the same group parameters, the scheme still has a reasonable scaling factor $\mathsf{SF}=\sqrt{m}$. Our findings underline that instantiating ElGamal over elliptic curves should be preferred to finite fields in a multi-instance scenario. As our main technical contribution, we derive new generic-group lower bounds of $\Omega(\sqrt{mp})$ on the complexity of solving both the $m$-out-of-$n$ Gap Discrete Logarithm and the $m$-out-of-$n$ Gap Computational Diffie-Hellman problem over groups of prime order $p$, extending a recent result by Yun (EUROCRYPT 2015). We establish the lower bound by studying the hardness of a related computational problem which we call the search-by-hypersurface problem.
2020
EUROCRYPT
OptORAMa: Optimal Oblivious RAM 📺
Oblivious RAM (ORAM), first introduced in the ground-breaking work of Goldreich and Ostrovsky (STOC '87 and J. ACM '96), is a technique for provably obfuscating programs' access patterns, such that the access patterns leak no information about the programs' secret inputs. To compile a general program to an oblivious counterpart, it is well-known that $\Omega(\log N)$ amortized blowup is necessary, where $N$ is the size of the logical memory. This was shown in Goldreich and Ostrovsky's original ORAM work for statistical security and in a somewhat restricted model (the so-called \emph{balls-and-bins} model), and recently by Larsen and Nielsen (CRYPTO '18) for computational security. A long-standing open question is whether there exists an optimal ORAM construction that matches the aforementioned logarithmic lower bounds (without making large memory word assumptions, and assuming a constant number of CPU registers). In this paper, we resolve this problem and present the first secure ORAM with $O(\log N)$ amortized blowup, assuming one-way functions. Our result is inspired by and non-trivially improves on the recent beautiful work of Patel et al. (FOCS '18) who gave a construction with $O(\log N\cdot \log\log N)$ amortized blowup, assuming one-way functions. One of our building blocks of independent interest is a linear-time deterministic oblivious algorithm for tight compaction: Given an array of $n$ elements where some elements are marked, we permute the elements in the array so that all marked elements end up in the front of the array. Our $O(n)$ algorithm improves the previously best known deterministic or randomized algorithms whose running time is $O(n \cdot\log n)$ or $O(n \cdot\log \log n)$, respectively.
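Tight compaction, the paper's key building block, has a one-line non-oblivious description, shown below for reference. The entire difficulty, and the paper's contribution, is achieving the same effect with an access pattern independent of the marks, in deterministic linear time; the list comprehension here plainly leaks which elements are marked.

    def tight_compact(arr, marked):
        # Functionality only: move marked elements to the front,
        # unmarked to the back. Not oblivious.
        return ([x for x in arr if marked(x)] +
                [x for x in arr if not marked(x)])

    print(tight_compact([5, 2, 8, 1], lambda x: x % 2 == 0))   # [2, 8, 5, 1]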
2020
EUROCRYPT
Improving Key-Recovery in Linear Attacks: Application to 28-round PRESENT 📺
Linear cryptanalysis is one of the most important tools in use for the security evaluation of symmetric primitives. Many improvements and refinements have been published since its introduction, and many applications to different ciphers have been found. Among these upgrades, Collard et al. proposed in 2007 an acceleration of the key-recovery part of Algorithm 2 for last-round attacks based on the FFT. In this paper we present a generalized, matrix-based version of the previous algorithm which easily allows an arbitrary number of key-recovery rounds to be taken into consideration. We also provide efficient variants that exploit the key-schedule relations and that can be combined with multiple linear attacks. Using our algorithms we provide some new cryptanalysis of PRESENT, including, to the best of our knowledge, the first attack on 28 rounds.
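The speed-up's engine is a fast Walsh-Hadamard transform: correlations for all key guesses are obtained in one transform of a counter vector instead of one pass per guess. The sketch below shows only that transform with an illustrative vector; the actual algorithm builds the vector from the cipher's round function and key-recovery structure.

    def fwht(v):
        # In-place fast Walsh-Hadamard transform: O(m * 2^m) additions
        # for a vector of length 2^m.
        h = 1
        while h < len(v):
            for i in range(0, len(v), 2 * h):
                for j in range(i, i + h):
                    v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
            h *= 2
        return v

    counts = [3, -1, 4, -1, 5, -9, 2, 6]    # assumed counter vector
    print(fwht(counts))                      # one value per key guess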
2020
EUROCRYPT
(One) failure is not an option: Bootstrapping the search for failures in lattice-based encryption schemes 📺
Lattice-based encryption schemes are often subject to the possibility of decryption failures, in which valid encryptions are decrypted incorrectly. Such failures, in large number, leak information about the secret key, enabling an attack strategy alternative to pure lattice reduction. Extending the "failure boosting" technique of D'Anvers et al. in PKC 2019, we propose an approach that we call "directional failure boosting" that uses previously found "failing ciphertexts" to accelerate the search for new ones. We analyse in detail the case where the lattice is defined over polynomial ring modules quotiented by $\langle X^N + 1 \rangle$ and demonstrate it on a simple Mod-LWE-based scheme parametrized à la Kyber768/Saber. We show that, using our technique, for a given secret key (single-target setting), the cost of searching for additional failing ciphertexts after one or more have already been found can be sped up dramatically. We thus demonstrate that, in this single-target model, these schemes should be designed so that it is hard to even obtain one decryption failure. Moreover, in a wider security model where there are many target secret keys (multi-target setting), our attack greatly improves over the state of the art.
2020
EUROCRYPT
Which Languages Have 4-Round Fully Black-Box Zero-Knowledge Arguments from One-Way Functions? 📺
We prove that if a language $\mathcal{L}$ has a 4-round fully black-box zero-knowledge argument with negligible soundness based on one-way functions, then $\overline{\mathcal{L}} \in \mathsf{MA}$. Since $\mathsf{coNP} \subseteq \mathsf{MA}$ implies that the polynomial hierarchy collapses, our result implies that $\mathsf{NP}$-complete languages are unlikely to have 4-round fully black-box zero-knowledge arguments based on one-way functions. In TCC 2018, Hazay and Venkitasubramaniam, and Khurana, Ostrovsky, and Srinivasan demonstrated 4-round fully black-box zero-knowledge arguments for all languages in $\mathsf{NP}$ based on injective one-way functions. Their results also imply a 5-round protocol based on one-way functions. In essence, our result resolves the round complexity of fully black-box zero-knowledge arguments based on one-way functions.
2020
EUROCRYPT
Modeling for Three-Subset Division Property without Unknown Subset -- Improved Cube Attacks against Trivium and Grain-128AEAD 📺
A division property is a generic tool to search for integral distinguishers, and automatic tools such as MILP or SAT/SMT allow us to evaluate the propagation efficiently. In the application to stream ciphers, it enables us to estimate the security of cube attacks theoretically, and it leads to the best key-recovery attacks against well-known stream ciphers. However, it was reported that some of the key-recovery attacks based on the division property degenerate to distinguishing attacks due to the inaccuracy of the division property. The three-subset division property (without unknown subset) is a promising method to solve this inaccuracy problem, and a new algorithm using automatic tools for the three-subset division property was recently proposed at Asiacrypt 2019. In this paper, we first show that this state-of-the-art algorithm is not always efficient and that we cannot improve the existing key-recovery attacks with it. Then, we focus on the feature of the three-subset division property without unknown subset and propose another new efficient algorithm using automatic tools. Our algorithm is more efficient than existing algorithms, and it can improve existing key-recovery attacks. In the application to Trivium, we show an 841-round key-recovery attack. We also show that an 855-round key-recovery attack, which was proposed at CRYPTO 2018, has a critical flaw and does not work. As a result, our 841-round attack becomes the best key-recovery attack. In the application to Grain-128AEAD, we show that the known 184-round key-recovery attack degenerates to a distinguishing attack. Then, the distinguishing attacks are improved up to 189 rounds, and we also show the best key-recovery attack against 190 rounds.
2020
EUROCRYPT
Indistinguishability Obfuscation Without Maps: Attacks and Fixes for Noisy Linear FE 📺
Candidates of indistinguishability obfuscation (iO) can be categorized as ``direct'' or ``bootstrapping based''. Direct constructions rely on high-degree multilinear maps [GGH13,GGHRSW13] and provide heuristic guarantees, while bootstrapping-based constructions [LV16,Lin17,LT17,AJLMS19,Agr19,JLMS19] rely, in the best case, on bilinear maps as well as new variants of the Learning With Errors (LWE) assumption and pseudorandom generators. Recent times have seen exciting progress in the construction of iO from bilinear maps (along with other assumptions) [LT17,AJLMS19,JLMS19,Agr19]. As a notable exception, a recent work by Agrawal [Agr19] provided a construction for iO without using any maps. This work identified a new primitive, called Noisy Linear Functional Encryption (NLinFE), that provably suffices for iO and gave a direct construction of NLinFE from new assumptions on lattices. While a preliminary cryptanalysis for the new assumptions was provided in the original work, the author admitted the necessity of performing significantly more cryptanalysis before faith could be placed in the security of the scheme. Moreover, the author did not suggest concrete parameters for the construction. In this work, we fill this gap by undertaking a thorough cryptanalytic study of NLinFE. We design two attacks that let the adversary completely break the security of the scheme. Our attacks are completely new and unrelated to attacks that were hitherto used to break other candidates of iO. To achieve this, we develop new cryptanalytic techniques which (we hope) will inform future designs of the primitive of NLinFE. From the knowledge gained by our cryptanalytic study, we suggest modifications to the scheme. We provide a new scheme which overcomes the vulnerabilities identified before. We also provide a thorough analysis of all the security aspects of this scheme and argue why plausible attacks do not work. We additionally provide concrete parameters with which the scheme may be instantiated. We believe the security of NLinFE stands on significantly firmer footing as a result of this work.
2020
EUROCRYPT
Low Error Efficient Computational Extractors in the CRS Model 📺
In recent years, there has been exciting progress on building two-source extractors for sources with low min-entropy. Unfortunately, all known explicit constructions of two-source extractors in the low-entropy regime suffer from non-negligible error, and building such extractors with negligible error remains an open problem. We investigate this problem in the computational setting, and obtain the following results. We construct an explicit 2-source extractor, and even an explicit non-malleable extractor, with negligible error, for sources with low min-entropy, under computational assumptions in the Common Random String (CRS) model. More specifically, we assume that a CRS is generated once and for all, and allow the min-entropy sources to depend on the CRS. We obtain our constructions by using the following transformations. - Building on the technique of [BHK11], we show a general transformation for converting any computational 2-source extractor (in the CRS model) into a computational non-malleable extractor (in the CRS model), for sources with similar min-entropy. We emphasize that the resulting computational non-malleable extractor is resilient to arbitrarily many tampering attacks (a property that is impossible to achieve information-theoretically). This may be of independent interest. This transformation uses cryptography, and in particular relies on the sub-exponential hardness of the Decisional Diffie-Hellman (DDH) assumption. - Next, using the blueprint of [BACD+17], we give a transformation converting our computational non-malleable extractor (in the CRS model) into a computational 2-source extractor for sources with low min-entropy (in the CRS model). Our 2-source extractor works for unbalanced sources: specifically, we require one of the sources to be larger than a specific polynomial in the other. This transformation does not incur any additional assumptions. Our analysis makes a novel use of the leakage lemma of Gentry and Wichs [GW11].
2020
EUROCRYPT
The Price of Active Security in Cryptographic Protocols 📺
We construct the first actively-secure Multi-Party Computation (MPC) protocols with an \emph{arbitrary} number of parties in the dishonest-majority setting, for an \emph{arbitrary} field $\mathbb{F}$, with \emph{constant communication overhead} over the ``passive-GMW'' protocol (Goldreich, Micali and Wigderson, STOC `87). Our protocols rely on passive implementations of Oblivious Transfer (OT) in the boolean setting and Oblivious Linear function Evaluation (OLE) in the arithmetic setting. Previously, such protocols were only known over sufficiently large fields (Genkin et al. STOC `14) or for a constant number of parties (Ishai et al. CRYPTO `08). Conceptually, our protocols are obtained via a new compiler from a passively-secure protocol for a distributed multiplication functionality $\mathcal{F}_{\mathsf{mult}}$ to an actively-secure protocol for general functionalities. Roughly, $\mathcal{F}_{\mathsf{mult}}$ is parameterized by a linear secret-sharing scheme $\mathcal{S}$, where it takes $\mathcal{S}$-shares of two secrets and returns $\mathcal{S}$-shares of their product. We show that our compilation is concretely efficient for sufficiently large fields, resulting in an overhead of 2 when securely computing natural circuits. Our compiler has two additional benefits: (1) it can rely on \emph{any} passive implementation of $\mathcal{F}_{\mathsf{mult}}$, which, besides the standard implementation based on OT (for boolean) and OLE (for arithmetic), allows us to rely on implementations based on threshold cryptosystems (Cramer et al. Eurocrypt `01); and (2) it can rely on weaker-than-passive (i.e., imperfect/leaky) implementations, which in some parameter regimes yield actively-secure protocols with overhead less than 2. Instantiating this compiler with an ``honest-majority'' implementation of $\mathcal{F}_{\mathsf{mult}}$, we obtain the first honest-majority protocol with optimal corruption threshold for boolean circuits with constant communication overhead over the best passive protocol (Damgård and Nielsen, CRYPTO `07).
2020
EUROCRYPT
Transparent SNARKs from DARK Compilers 📺
We construct a new polynomial commitment scheme for univariate and multivariate polynomials over finite fields, with public-coin evaluation proofs that have logarithmic communication and verification cost in the number of coefficients of the polynomial. The underlying technique is a Diophantine Argument of Knowledge (DARK), leveraging integer representations of polynomials and groups of unknown order. Security is shown from the strong RSA and the adaptive root assumption. Moreover, the scheme does not require a trusted setup if instantiated with class groups. We apply this new cryptographic compiler to a restricted class of algebraic linear IOPs in order to obtain doubly-efficient public-coin IPs with succinct communication and witness-extended emulation for any NP relation. Allowing for linear preprocessing, the online verifier's work is logarithmic in the circuit complexity of the relation. Concretely, we obtain quasi-linear prover time when compiling the IOP employed in Sonic (MBKM, CCS 19). Applying the Fiat-Shamir transform in the random oracle model results in a SNARK system with quasi-linear preprocessing, quasi-linear (online) prover time, logarithmic proof size, and logarithmic (online) verification time for arbitrary circuits. The SNARK is also concretely efficient with 8.4 KB proofs and 75 ms verification time for circuits with 1 million gates. Most importantly, this SNARK is transparent: it does not require a trusted setup. We also obtain zk-SNARKs by applying a variant of our polynomial commitment scheme that is hiding and offers zero-knowledge evaluation proofs. This construction is the first transparent zk-SNARK that has both a practical prover time as well as strictly logarithmic proof size and verification time. We call our system Supersonic.
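The integer encoding that powers DARK is compact enough to show directly: a polynomial with small non-negative coefficients becomes the single integer f(q) for a large base q, and because q ≡ z (mod q - z), reducing f(q) modulo q - z recovers f(z). The base, coefficients, and evaluation point below are toy assumptions; the real scheme hides f(q) inside a group of unknown order.

    q = 10**6                      # assumed base, larger than any coefficient
    coeffs = [3, 0, 7, 42]         # f(X) = 3 + 7*X^2 + 42*X^3

    def encode(cs):
        # the commitment's plaintext: the integer f(q)
        return sum(c * q**i for i, c in enumerate(cs))

    z = 12345
    f_at_z = sum(c * z**i for i, c in enumerate(coeffs))
    assert encode(coeffs) % (q - z) == f_at_z % (q - z)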
2020
EUROCRYPT
TNT: How to Tweak a Block Cipher 📺
In this paper, we propose the Tweak-aNd-Tweak (TNT for short) mode, which builds a tweakable block cipher from three independent block ciphers. TNT handles the tweak input by simply XOR-ing the unmodified tweak into the internal state twice. Due to its simplicity, TNT can also be viewed as a way of turning a block cipher into a tweakable block cipher by dividing the block cipher into three chunks and adding the tweak at the two cutting points only. TNT is proven to achieve beyond-birthday-bound $2^{2n/3}$ security, under the assumption that the three chunks are independent secure $n$-bit SPRPs. It brings minimal overhead to both software and hardware implementations. To demonstrate this, an instantiation named TNT-AES with 6, 6, 6 rounds of AES as the underlying block ciphers is proposed. Besides the inherent proven security bound and the tweak-independent rekeying feature of the TNT mode, the performance of TNT-AES is comparable with all existing TBCs designed through modular methods.
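The construction is simple enough to state as one line of code. Below is a literal rendering of the TNT composition with toy 8-bit random permutations standing in for the three block-cipher chunks; the stand-ins are illustrative and obviously not secure SPRPs.

    import random

    def toy_cipher(key):
        # a fixed random 8-bit permutation, keyed by the RNG seed
        rng = random.Random(key)
        table = list(range(256))
        rng.shuffle(table)
        return table.__getitem__

    E1, E2, E3 = toy_cipher(1), toy_cipher(2), toy_cipher(3)

    def tnt_encrypt(tweak, x):
        # XOR the unmodified tweak into the state at the two seams
        return E3(E2(E1(x) ^ tweak) ^ tweak)

    print(tnt_encrypt(0x5A, 0x17))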
2020
EUROCRYPT
Quantum Security Analysis of CSIDH 📺
CSIDH is a recent proposal for post-quantum non-interactive key-exchange, based on supersingular elliptic curve isogenies. It is similar in design to a previous scheme by Couveignes, Rostovtsev and Stolbunov, but aims at an improved balance between efficiency and security. In the proposal, the authors suggest concrete parameters in order to meet some desired levels of quantum security. These parameters are based on the hardness of recovering a hidden isogeny between two elliptic curves, using a quantum subexponential algorithm of Childs, Jao and Soukharev. This algorithm combines two building blocks: first, a quantum algorithm for recovering a hidden shift in a commutative group. Second, a computation in superposition of all isogenies originating from a given curve, which the algorithm calls as a black box. In this paper, we give a comprehensive security analysis of CSIDH. Our first step is to revisit three quantum algorithms for the abelian hidden shift problem from the perspective of non-asymptotic cost, with trade-offs between their quantum and classical complexities. Second, we complete the non-asymptotic study of the black box in the hidden shift algorithm. We give a quantum procedure that evaluates CSIDH-512 using less than 40,000 logical qubits. This allows us to show that the parameters proposed by the authors of CSIDH do not meet their expected quantum security.
2020
EUROCRYPT
Continuous Verifiable Delay Functions 📺
We introduce the notion of a continuous verifiable delay function (cVDF): a function $g$ which is (a) iteratively sequential---meaning that evaluating the iteration $g^{(t)}$ of $g$ (on a random input) takes time roughly $t$ times the time to evaluate $g$, even with many parallel processors, and (b) (iteratively) verifiable---the output of $g^{(t)}$ can be efficiently verified (in time that is essentially independent of $t$). In other words, the iterated function $g^{(t)}$ is a verifiable delay function (VDF) (Boneh et al., CRYPTO '18), having the property that intermediate steps of the computation (i.e., $g^{(t')}$ for $t'<t$) are publicly and continuously verifiable. We demonstrate that cVDFs have intriguing applications: (a) they can be used to construct public randomness beacons that only require an initial random seed (and no further unpredictable sources of randomness), (b) they enable outsourceable VDFs where any part of the VDF computation can be verifiably outsourced, and (c) they have deep complexity-theoretic consequences: in particular, they imply the existence of depth-robust moderately-hard Nash equilibrium problem instances, i.e., instances that can be solved in polynomial time yet require a high sequential running time. Our main result is the construction of a cVDF based on the repeated squaring assumption and the soundness of the Fiat-Shamir (FS) heuristic for constant-round proofs. We highlight that when viewed as a (plain) VDF, our construction requires a weaker FS assumption than previous ones (earlier constructions require the FS heuristic for either super-logarithmic round proofs, or for arguments).
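The base delay function here is the classic repeated-squaring step, whose iteration is believed inherently sequential. A hedged toy evaluation, with a throwaway modulus in place of a real RSA modulus of unknown factorization:

    def iterated_squaring(x, t, N):
        # t sequential modular squarings: x^(2^t) mod N, with no known
        # shortcut when the factorization of N is unknown
        for _ in range(t):
            x = x * x % N
        return x

    N = 3233                              # toy 61 * 53; real N ~ 2048 bits
    assert iterated_squaring(5, 10, N) == pow(5, 2**10, N)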
2020
EUROCRYPT
Generic-Group Delay Functions Require Hidden-Order Groups 📺
Despite the fundamental importance of delay functions, underlying both the classic notion of a time-lock puzzle and the more recent notion of a verifiable delay function, the only known delay function that offers both sufficient structure for realizing these two notions and a realistic level of practicality is the ``iterated squaring'' construction of Rivest, Shamir and Wagner. This construction, however, is based on rather strong assumptions in groups of hidden order, such as the RSA group (which requires a trusted setup) or the class group of an imaginary quadratic number field (which is still somewhat insufficiently explored from the cryptographic perspective). For more than two decades, the challenge of constructing delay functions in groups of known order, admitting a variety of well-studied instantiations, has eluded the cryptography community. In this work we prove that there are no constructions of generic-group delay functions in cyclic groups of known order: We show that for any delay function that does not exploit any particular property of the representation of the underlying group, there exists an attacker that completely breaks the function's sequentiality when given the group's order. As any time-lock puzzle and verifiable delay function give rise to a delay function, our result holds for these two notions as well, and explains the lack of success in resolving the above-mentioned long-standing challenge. Moreover, our result holds even if the underlying group is equipped with a $d$-linear map, for any constant $d \ge 2$ (and even for super-constant values of $d$ under certain conditions).
2020
EUROCRYPT
Fractal: Post-Quantum and Transparent Recursive Proofs from Holography 📺
We present a new methodology to efficiently realize recursive composition of succinct non-interactive arguments of knowledge (SNARKs). Prior to this work, the only known methodology relied on pairing-based SNARKs instantiated on cycles of pairing-friendly elliptic curves, an expensive algebraic object. Our methodology does not rely on any special algebraic objects and, moreover, achieves new desirable properties: it is post-quantum and it is transparent (the setup is public coin). We exploit the fact that recursive composition is simpler for SNARKs with preprocessing, and the core of our work is obtaining a preprocessing zkSNARK for rank-1 constraint satisfiability (R1CS) that is post-quantum and transparent. We obtain the latter by establishing a connection between holography and preprocessing in the random oracle model, and then constructing a holographic proof for R1CS. We experimentally validate our methodology, demonstrating feasibility in practice.
2020
EUROCRYPT
Finding Hash Collisions with Quantum Computers by Using Differential Trails with Smaller Probability than Birthday Bound 📺
In this paper we spotlight dedicated quantum collision attacks on concrete hash functions, which have not received much attention so far. In the classical setting, the generic complexity to find collisions of an $n$-bit hash function is $O(2^{n/2})$, thus classical collision attacks based on differential cryptanalysis such as rebound attacks build differential trails with probability higher than $2^{-n/2}$. By the same analogy, generic quantum algorithms such as the BHT algorithm find collisions with complexity $O(2^{n/3})$. With quantum algorithms, a pair of messages satisfying a differential trail with probability $p$ can be generated with complexity $p^{-1/2}$. Hence, in the quantum setting, some differential trails with probability up to $2^{-2n/3}$ that cannot be exploited in the classical setting may be exploited to mount a collision attack in the quantum setting. In particular, the number of attacked rounds may increase. In this paper, we attack two international hash function standards: AES-MMO and Whirlpool. For AES-MMO, we present a $7$-round differential trail with probability $2^{-80}$ and use it to find collisions with a quantum version of the rebound attack, while only $6$ rounds can be attacked in the classical setting. For Whirlpool, we mount a collision attack based on a $6$-round differential trail from a classical rebound distinguisher with a complexity higher than the birthday bound. This improves the best classical attack, which covers $5$ rounds, by one round. We also show that those trails are optimal in our approach. Our results have two important implications. First, there seems to exist a common belief that classically secure hash functions will remain secure against quantum adversaries. Indeed, several second-round candidates in the NIST post-quantum competition use existing hash functions, such as SHA-3, as quantum-secure ones. Our results disprove this common belief. Second, our observation suggests that differential trail search should not stop at probability $2^{-n/2}$ but should consider trails with probability up to $2^{-2n/3}$. Hence previous differential trail search activities deserve to be revisited.
2020
EUROCRYPT
Signatures from Sequential-OR Proofs 📺
OR-proofs enable a prover to show that it knows the witness for one of many statements, or that one out of many statements is true. OR-proofs are a remarkably versatile tool, used to strengthen security properties, design group and ring signature schemes, and achieve tight security. The common technique to build OR-proofs is based on an approach introduced by Cramer, Damgård, and Schoenmakers (CRYPTO'94), where the prover splits the verifier's challenge into random shares and computes proofs for each statement in parallel. In this work we study a different, less investigated OR-proof technique, highlighted by Abe, Ohkubo, and Suzuki (ASIACRYPT'02). The difference is that the prover now computes the individual proofs sequentially. We show that such sequential OR-proofs yield signature schemes which can be proved secure in the non-programmable random oracle model. We complement this positive result with a black-box impossibility proof, showing that the same is unlikely to be the case for signatures derived from traditional OR-proofs. We finally argue that sequential-OR signature schemes can be proved secure in the quantum random oracle model, albeit with very loose bounds and by programming the random oracle.
2020
EUROCRYPT
Extracting Randomness from Extractor-Dependent Sources 📺
We revisit the well-studied problem of extracting nearly uniform randomness from an arbitrary source of sufficient min-entropy. Strong seeded extractors solve this problem by relying on a public random seed, which is unknown to the source. Here, we consider a setting where the seed is reused over time and the source may depend on prior calls to the extractor with the same seed. Can we still extract nearly uniform randomness? In more detail, we assume the seed is chosen randomly, but the source can make arbitrary oracle queries to the extractor with the given seed before outputting a sample. We require that the sample has entropy and differs from any of the previously queried values. The extracted output should look uniform even to a distinguisher that gets the seed. We consider two variants of the problem, depending on whether the source only outputs the sample, or whether it can also output some correlated public auxiliary information that preserves the sample's entropy. Our results are: * Without Auxiliary Information: We show that every pseudo-random function (PRF) with a sufficiently high security level is a good extractor in this setting, even if the distinguisher is computationally unbounded. We further show that the source necessarily needs to be computationally bounded and that such extractors imply one-way functions. * With Auxiliary Information: We construct secure extractors in this setting, as long as both the source and the distinguisher are computationally bounded. We give several constructions based on different intermediate primitives, yielding instantiations based on the DDH, DLIN, LWE or DCR assumptions. On the negative side, we show that one cannot prove security against computationally unbounded distinguishers in this setting under any standard assumption via a black-box reduction. Furthermore, even when restricting to computationally bounded distinguishers, we show that there exist PRFs that are insecure as extractors in this setting and that a large class of constructions cannot be proven secure via a black-box reduction from standard assumptions.
2020
EUROCRYPT
Measure-Rewind-Measure: Tighter Quantum Random Oracle Model Proofs for One-Way to Hiding and CCA Security 📺
We introduce a new technique called `Measure-Rewind-Measure' (MRM) to achieve tighter security proofs in the quantum random oracle model (QROM). We first apply our MRM technique to derive a new security proof for a variant of the `double-sided' quantum One-Way to Hiding Lemma (O2H) of Bindel et al. [TCC 2019] which, for the first time, avoids the square-root advantage loss in the security proof. In particular, it bypasses a previous `impossibility result' of Jiang, Zhang and Ma [IACR eprint 2019]. We then apply our new O2H Lemma to give a new tighter security proof for the Fujisaki-Okamoto transform for constructing a strong (IND-CCA) Key Encapsulation Mechanism (KEM) from a weak (IND-CPA) public-key encryption scheme satisfying a mild injectivity assumption.
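For context, the transform under analysis has a short shape: encapsulation derandomizes the PKE with coins G(m) and hashes (m, c) into the key; decapsulation re-encrypts to check the ciphertext. The sketch below is a minimal rendering of that shape with an abstract placeholder PKE and hash instantiations chosen for illustration; it is not a parameterized, secure KEM.

    import hashlib, os

    def G(m): return hashlib.sha256(b"G" + m).digest()
    def H(m, c): return hashlib.sha256(b"H" + m + c).digest()

    def encaps(pk, pke_enc):
        m = os.urandom(32)
        c = pke_enc(pk, m, G(m))          # encryption made deterministic in m
        return c, H(m, c)

    def decaps(sk, pk, c, pke_enc, pke_dec):
        m = pke_dec(sk, c)
        # re-encrypt and compare: reject malformed ciphertexts
        if m is None or pke_enc(pk, m, G(m)) != c:
            return None
        return H(m, c)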
2020
EUROCRYPT
Friet: an Authenticated Encryption Scheme with Built-in Fault Detection 📺
In this work we present a duplex-based authenticated encryption scheme Friet based on a new permutation called Friet-P. We designed Friet-P with a novel approach for cryptographic permutations and block ciphers that takes fault-attack resistance into account and that we introduce in this paper. In this method, we build a permutation $f_C$ to be embedded in a larger one $f$. First, we define $f$ as a sequence of steps that all abide by a chosen error-correcting code $C$, i.e., that map $C$-codewords to $C$-codewords. Then, we embed $f_C$ in $f$ by first encoding its input as an element of $C$, applying $f$ and then decoding back from $C$. This last step detects a fault when the output of $f$ is not in $C$. We motivate the design of the permutation we use in Friet and report on performance in software and hardware. We evaluate the fault-detection capabilities of the software and simulated hardware implementations with attacks. Finally, we perform a leakage evaluation. Our code is available at https://github.com/thisimon/Friet.git.
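The code-abiding idea fits in a toy: carry a parity limb d = a ^ b ^ c (a parity code in the style of Friet-P's), make every step preserve that relation, and flag a fault whenever it breaks. The 32-bit limbs and the particular step below are illustrative stand-ins, not Friet-P's round function.

    MASK = (1 << 32) - 1

    def step(a, b, c, d):
        # mix a into b; updating d too keeps d = a ^ b ^ c invariant
        b ^= a
        d ^= a
        # rotate c by one bit; compensate in d to preserve the code
        c2 = ((c << 1) | (c >> 31)) & MASK
        d ^= c ^ c2
        return a, b, c2, d

    def fault_detected(a, b, c, d):
        return d != a ^ b ^ c

    state = step(3, 5, 7, 3 ^ 5 ^ 7)
    assert not fault_detected(*state)
    a, b, c, d = state
    assert fault_detected(a, b ^ 1, c, d)   # a single-limb fault is caught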
2020
EUROCRYPT
New Slide Attacks on Almost Self-Similar Ciphers 📺
The slide attack is a powerful cryptanalytic tool which has the unusual property that it can break iterated block ciphers with a complexity that does not depend on their number of rounds. However, it requires complete self-similarity in the sense that all the rounds must be identical. While this can be the case in Feistel structures, this rarely happens in SP networks, since the last round must end with an additional post-whitening subkey. In addition, in many SP networks the final round has additional asymmetries - for example, in AES the last round omits the MixColumns operation. Such asymmetry in the last round can make it difficult to utilize most of the advanced tools which were developed for slide attacks, such as deriving from one slid pair additional slid pairs by repeatedly re-encrypting their ciphertexts. Consequently, almost all the successful applications of slide attacks against real cryptosystems (e.g., FF3, GOST, SHACAL-1, etc.) had targeted Feistel structures rather than SP networks. In this paper we overcome this last-round problem by developing four new types of slide attacks. We demonstrate their power by applying them to many types of AES-like structures (with and without linear mixing in the last round, with known or secret S-boxes, with periodicity of 1, 2 and 3 in their subkeys, etc.). In most of these cases, the time complexity of our attack is close to $2^{n/2}$, the smallest possible complexity for most slide attacks. Our new slide attacks have several unique properties: The first uses slid sets in which each plaintext from the first set forms a slid pair with some plaintext from the second set, but without knowing the exact correspondence. The second makes it possible to create from several slid pairs an exponential number of new slid pairs which form a hypercube spanned by the given pairs. The third has the unusual property that it is always successful, and the fourth can use known messages instead of chosen messages, with only slightly higher time complexity.
2020
EUROCRYPT
Formalizing Data Deletion in the Context of the Right to be Forgotten 📺
The right of an individual to request the deletion of their personal data by an entity that might be storing it -- referred to as \emph{the right to be forgotten} -- has been explicitly recognized, legislated, and exercised in several jurisdictions across the world, including the European Union, Argentina, and California. However, much of the discussion surrounding this right offers only an intuitive notion of what it means for it to be fulfilled -- of what it means for such personal data to be deleted. In this work, we provide a formal definitional framework for the right to be forgotten using tools and paradigms from cryptography. In particular, we provide a precise definition of what could be (or should be) expected from an entity that collects individuals' data when a request is made of it to delete some of this data. Our framework captures most, though not all, relevant aspects of typical systems involved in data processing. While it cannot be viewed as expressing the statements of current laws (especially since these are rather vague in this respect), our work offers technically precise definitions that represent possibilities for what the law could reasonably expect, and alternatives for what future versions of the law could explicitly require. Finally, with the goal of demonstrating the applicability of our framework and definitions, we consider various natural and simple scenarios where the right to be forgotten comes up. For each of these scenarios, we highlight the pitfalls that arise even in genuine attempts at implementing systems offering deletion guarantees, and also describe technological solutions that provably satisfy our definitions. These solutions bring together techniques built by various communities.
2020
EUROCRYPT
PSI from PaXoS: Fast, Malicious Private Set Intersection 📺
We present a 2-party private set intersection (PSI) protocol which provides security against malicious participants, yet is almost as fast as the fastest known semi-honest PSI protocol of Kolesnikov et al. (CCS 2016). Our protocol is based on a new approach for two-party PSI, which can be instantiated to provide security against either malicious or semi-honest adversaries. The protocol is unique in that the only difference between the semi-honest and malicious versions is an instantiation with different parameters for a linear error-correction code. It is also the first PSI protocol which is concretely efficient while having linear communication and security against malicious adversaries, while running in the OT-hybrid model (assuming a non-programmable random oracle). State of the art semi-honest PSI protocols take advantage of cuckoo hashing, but it has proven a challenge to use cuckoo hashing for malicious security. Our protocol is the first to use cuckoo hashing for malicious-secure PSI. We do so via a new data structure, called a probe-and-XOR of strings (PaXoS), which may be of independent interest. This abstraction captures important properties of previous data structures, most notably garbled Bloom filters. While an encoding by a garbled Bloom filter is larger by a factor of $\Omega(\lambda)$ than the original data, we describe a significantly improved PaXoS based on cuckoo hashing that achieves constant rate while being no worse in other relevant efficiency measures.
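The data structure's defining operation is easy to state: decoding probes a few table positions derived from the key and XORs them. The sketch below shows only that decode shape with assumed parameters; the paper's contribution is the encoding, choosing the table via cuckoo hashing so that decode(x) equals the stored value for every key, at constant rate.

    import hashlib

    ROWS, PROBES = 64, 3      # assumed table size and probe count

    def positions(x: bytes):
        # derive probe positions from x (a real encoding also needs the
        # positions to be distinct; ignored in this toy)
        h = hashlib.sha256(x).digest()
        return [h[i] % ROWS for i in range(PROBES)]

    def decode(table, x: bytes):
        out = 0
        for i in positions(x):
            out ^= table[i]    # probe-and-XOR: D(x) = XOR of probed rows
        return out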
2020
EUROCRYPT
Efficient simulation of random states and random unitaries 📺
We consider the problem of efficiently simulating random quantum states and random unitary operators, in a manner which is convincing to unbounded adversaries with black-box oracle access. This problem has previously only been considered for restricted adversaries. Against adversaries with an a priori bound on the number of queries, it is well-known that t-designs suffice. Against polynomial-time adversaries, one can use pseudorandom states (PRS) and pseudorandom unitaries (PRU), as defined in a recent work of Ji, Liu, and Song; unfortunately, no provably secure construction is known for PRUs. In our setting, we are concerned with unbounded adversaries. Nonetheless, we are able to give stateful quantum algorithms which simulate the ideal object in both settings of interest. In the case of Haar-random states, our simulator is polynomial-time, has negligible error, and can also simulate verification and reflection through the simulated state. This yields an immediate application to quantum money: a money scheme which is information-theoretically unforgeable and untraceable. In the case of Haar-random unitaries, our simulator takes polynomial space, but simulates both forward and inverse access with zero error. These results can be seen as the first significant steps in developing a theory of lazy sampling for random quantum objects.
2020
EUROCRYPT
Compact Adaptively Secure ABE from k-Lin: Beyond NC1 and towards NL 📺
We present a new general framework for constructing compact and adaptively secure attribute-based encryption (ABE) schemes from k-Lin in asymmetric bilinear pairing groups. Previously, the only construction [Kowalczyk and Wee, Eurocrypt '19] that simultaneously achieves compactness and adaptive security from static assumptions supports policies represented by Boolean formulae. Our framework enables supporting more expressive policies represented by arithmetic branching programs. Our framework extends to ABE for policies represented by uniform models of computation such as Turing machines. Such policies enjoy the feature of being applicable to attributes of arbitrary lengths. We obtain the first compact adaptively secure ABE for deterministic and non-deterministic finite automata (DFA and NFA) from k-Lin, previously unknown from any static assumptions. Beyond finite automata, we obtain the first ABE for large classes of uniform computation, captured by deterministic and non-deterministic logspace Turing machines (the complexity classes L and NL) based on k-Lin. Our ABE scheme has compact secret keys of size linear in the description size of the Turing machine M. The ciphertext size grows linearly in the input length, but also linearly in the time complexity, and exponentially in the space complexity. Irrespective of compactness, we stress that our scheme is the first that supports large classes of Turing machines based solely on standard assumptions. In comparison, previous ABE for general Turing machines all rely on strong primitives related to indistinguishability obfuscation.
2020
EUROCRYPT
Evolving Ramp Secret Sharing with a Small Gap 📺
Evolving secret-sharing schemes, introduced by Komargodski, Naor, and Yogev (TCC 2016b), are secret-sharing schemes in which there is no a priori upper bound on the number of parties that will participate. The parties arrive one by one and when a party arrives the dealer gives it a share; the dealer cannot update this share when other parties arrive. Motivated by the fact that when the number of parties is known, ramp secret-sharing schemes are more efficient than threshold secret-sharing schemes, we study evolving ramp secret-sharing schemes. Specifically, we study evolving $(b(j),g(j))$-ramp secret-sharing schemes, where $g,b: \mathbb{N}\to \mathbb{N}$ are non-decreasing functions. In such schemes, any set of parties that for some $j$ contains $g(j)$ parties from the first $j$ parties that arrive can reconstruct the secret, and any set that, for every $j$, contains fewer than $b(j)$ parties from the first $j$ parties that arrive cannot learn any information about the secret. We focus on the case that the gap is small, namely $g(j)-b(j)=j^{\beta}$ for $0<\beta<1$. We show that there is an evolving ramp secret-sharing scheme with gap $j^{\beta}$, in which the share size of the $j$-th party is $\tilde{O}(j^{4-\frac{1}{\log^2 {1/\beta}}})$. Furthermore, we show that our construction results in much better share size for fixed values of $\beta$, i.e., there is an evolving ramp secret-sharing scheme with gap $\sqrt{j}$, in which the share size of the $j$-th party is $\tilde{O}(j)$. Our construction should be compared to the best known evolving $g(j)$-threshold secret-sharing schemes (i.e., when $b(j)=g(j)-1$), in which the share size of the $j$-th party is $\tilde{O}(j^4)$. Thus, our construction offers a significant improvement for every constant $\beta$, showing that allowing a gap between the sizes of the authorized and unauthorized sets can reduce the share size. In addition, we present an evolving $(k/2,k)$-ramp secret-sharing scheme for a constant $k$ (which can be very big), where any set of parties of size at least $k$ can reconstruct the secret and any set of parties of size at most $k/2$ cannot learn any information about the secret. The share size of the $j$-th party in our construction is $O(\log k\log j)$. This is an improvement over the best known evolving $k$-threshold secret-sharing schemes, in which the share size of the $j$-th party is $O(k\log j)$.
2020
EUROCRYPT
Quantum-access-secure message authentication via blind-unforgeability 📺
Formulating and designing authentication of classical messages in the presence of adversaries with quantum query access has been a challenge, as the familiar classical notions of unforgeability do not directly translate into meaningful notions in the quantum setting. A particular difficulty is how to fairly capture the notion of ``predicting an unqueried value'' when the adversary can query in quantum superposition. We propose a natural definition of unforgeability against quantum adversaries called blind unforgeability. This notion defines a function to be predictable if there exists an adversary who can use "partially blinded" oracle access to predict values in the blinded region. We support the proposal with a number of technical results. We begin by establishing that the notion coincides with EUF-CMA in the classical setting and go on to demonstrate that the notion is satisfied by a number of simple guiding examples, such as random functions and quantum-query-secure pseudorandom functions. We then show the suitability of blind unforgeability for supporting canonical constructions and reductions. We prove that the "hash-and-MAC" paradigm and the Lamport one-time digital signature scheme are indeed unforgeable according to the definition. In this setting, we additionally define and study a new variety of quantum-secure hash functions called Bernoulli-preserving. Finally, we demonstrate that blind unforgeability is strictly stronger than a previous definition of Boneh and Zhandry [EUROCRYPT '13, CRYPTO '13] and resolve an open problem concerning this previous definition by constructing an explicit function family which is forgeable yet satisfies the definition.
2020
EUROCRYPT
He Gives C-Sieves on the CSIDH 📺
Recently, Castryck, Lange, Martindale, Panny, and Renes proposed \emph{CSIDH} (pronounced ``sea-side'') as a candidate post-quantum ``commutative group action.'' It has attracted much attention and interest, in part because it enables noninteractive Diffie--Hellman-like key exchange with quite small communication. Subsequently, CSIDH has also been used as a foundation for digital signatures. In 2003--04, Kuperberg and then Regev gave asymptotically subexponential quantum algorithms for ``hidden shift'' problems, which can be used to recover the CSIDH secret key from a public key. In late 2011, Kuperberg gave a follow-up quantum algorithm called the \emph{collimation sieve} (``c-sieve'' for short), which improves the prior ones, in particular by using exponentially less quantum memory and offering more parameter tradeoffs. While recent works have analyzed the concrete cost of the original algorithms (and variants) against CSIDH, nothing of this nature was previously available for the c-sieve. This work fills that gap. Specifically, we generalize Kuperberg's collimation sieve to work for arbitrary finite cyclic groups, provide some practical efficiency improvements, give a classical (i.e., non-quantum) simulator, run experiments for a wide range of parameters up to the actual CSIDH-512 group order, and concretely quantify the complexity of the c-sieve against CSIDH. Our main conclusion is that the proposed CSIDH parameters provide relatively little quantum security beyond what is given by the cost of quantumly evaluating the CSIDH group action itself (on a uniform superposition). For example, the cost of CSIDH-512 key recovery is only about~$2^{16}$ quantum evaluations using~$2^{40}$ bits of quantumly accessible \emph{classical} memory (plus relatively small other resources). This improves upon a prior estimate of~$2^{32.5}$ evaluations and~$2^{31}$ qubits of \emph{quantum} memory, for a variant of Kuperberg's original sieve. Under the plausible assumption that quantum evaluation does not cost much more than what is given by a recent ``best case'' analysis, CSIDH-512 can therefore be broken using significantly less than~$2^{64}$ quantum T-gates. This strongly invalidates its claimed NIST level~1 quantum security, especially when accounting for the MAXDEPTH restriction. Moreover, under analogous assumptions for CSIDH-1024 and -1792, which target higher NIST security levels, even these instantiations fall short of level~1, except near the high end of the MAXDEPTH range.
2020
EUROCRYPT
Hardness of LWE on General Entropic Distributions 📺
The hardness of the Learning with Errors (LWE) problem is by now a cornerstone of the cryptographic landscape, allowing one to construct cryptographic schemes with properties unknown under other assumptions, and being conjectured to be resilient to quantum attacks. LWE is essentially the task of solving a noisy system of random linear equations over uniformly random secret variables (“the LWE secret”), evaluated modulo some integer. In applications the secret variables usually correspond to the secret key of the cryptographic scheme. It is therefore of great importance to understand what happens when the secret variables are not sampled uniformly (but still have some entropy). This is relevant for settings where an adversary manages to obtain partial information on the secret (a.k.a. key leakage), for various theoretical applications, and also for practical use where for efficiency or convenience it is easier to sample the secret from some non-uniform distribution. This so-called “Entropic LWE” problem has been studied in a number of works, starting with Goldwasser et al. (ICS 2010). However, so far it was only known how to prove the hardness of Entropic LWE for secret distributions supported inside a ball of small radius. In this work we resolve the hardness of Entropic LWE with arbitrarily long secrets, in the following sense. We show an entropy bound that guarantees the security of arbitrary Entropic LWE. This bound is higher than what is required in the ball-bounded setting, but we show that this is essentially tight. Tightness is shown unconditionally for highly composite moduli, and using black-box impossibility for arbitrary moduli. Technically, we show that the entropic hardness of LWE relies on a simple-to-describe lossiness property of the distribution of secrets itself: this is simply the probability of recovering a random sample $s$ from this distribution, given $s + e$, where $e$ is Gaussian noise (i.e., the quality of the distribution of secrets as an error-correcting code for Gaussian noise). We hope that this characterization will make it easier to derive entropic LWE results in the future. We also use our techniques to show new results for the ball-bounded setting, essentially showing that under a strong enough assumption even polylogarithmic entropy suffices.
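The lossiness quantity described above can be made concrete with a one-dimensional Monte Carlo experiment: sample $s$ from the secret distribution, add Gaussian noise, and attempt maximum-likelihood recovery. All parameters below are toy values of ours, intended only to illustrate the definition, not the paper's bounds.

```python
import random

random.seed(0)
q = 4093                                 # toy modulus
support = random.sample(range(q), 32)    # secrets uniform over a sparse set
sigma = 20.0                             # toy Gaussian noise width

def circ_dist(a, b):
    d = (a - b) % q
    return min(d, q - d)

def recover(y):
    # Maximum likelihood under Gaussian noise: the closest support point mod q.
    return min(support, key=lambda s: circ_dist(y, s))

trials = 20000
hits = 0
for _ in range(trials):
    s = random.choice(support)
    e = round(random.gauss(0, sigma))
    hits += recover((s + e) % q) == s
print(f"recovery probability ~ {hits / trials:.3f} at sigma = {sigma}")
```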
2020
EUROCRYPT
Candidate iO From Homomorphic Encryption Schemes 📺
We propose a new approach to construct general-purpose indistinguishability obfuscation (iO). Our construction is obtained via a new intermediate primitive that we call split fully-homomorphic encryption (split FHE), which we show to be sufficient for constructing iO. Specifically, split FHE is FHE where decryption takes the following two-step syntactic form: (i) A secret decryption step uses the secret key and produces a hint which is (asymptotically) shorter than the length of the encrypted message, and (ii) a public decryption step that only requires the ciphertext and the previously generated hint (and not the entire secret key), and recovers the encrypted message. In terms of security, the hints for a set of ciphertexts should not allow one to violate semantic security for any other ciphertexts. Next, we show a generic candidate construction of split FHE based on three building blocks: (i) A standard FHE scheme with linear decrypt-and-multiply (which can be instantiated with essentially all LWE-based constructions), (ii) a linearly homomorphic encryption scheme with short decryption hints (such as the Damgard-Jurik encryption scheme, based on the DCR problem), and (iii) a cryptographic hash function (which can be based on a variety of standard assumptions). Our approach is heuristic in the sense that our construction is not provably secure and makes implicit assumptions about the interplay between these underlying primitives. We show evidence that this construction is secure by providing an argument in an appropriately defined oracle model. We view our construction as a big departure from the state-of-the-art constructions, and it is in fact quite simple.
2020
EUROCRYPT
Rational isogenies from irrational endomorphisms 📺
In this paper, we introduce a polynomial-time algorithm to compute a connecting $\mathcal{O}$-ideal between two supersingular elliptic curves over $\mathbb{F}_p$ with common $\mathbb{F}_p$-endomorphism ring $\mathcal{O}$, given a description of their full endomorphism rings. This algorithm provides a reduction of the security of the CSIDH cryptosystem to the problem of computing endomorphism rings of supersingular elliptic curves. A similar reduction for SIDH appeared at Asiacrypt 2016, but relies on totally different techniques. Furthermore, we also show that any supersingular elliptic curve constructed using the complex-multiplication method can be located precisely in the supersingular isogeny graph by explicitly deriving a path to a known base curve. This result prohibits the use of such curves as a building block for a hash function into the supersingular isogeny graph.
2020
EUROCRYPT
Resource-Restricted Cryptography: Revisiting MPC Bounds in the Proof-of-Work Era 📺
Traditional bounds on synchronous Byzantine agreement (BA) and secure multi-party computation (MPC) establish that, in the absence of a private correlated-randomness setup such as a PKI, protocols can tolerate up to $t<n/3$ of the parties being malicious. The introduction of ``Nakamoto-style'' consensus, based on Proof-of-Work (PoW) blockchains, put forth a somewhat different flavor of BA, showing that even a majority of corrupted parties can be tolerated as long as the majority of the computation resources remain in honest hands. This assumption on an honest majority of some resource was also extended to other resources such as stake, space, etc., upon which blockchains achieving Nakamoto-style consensus were built that violated the $t<n/3$ bound in terms of the number of party corruptions. The above state of affairs raises the question of whether the seeming mismatch is due to different goals and models, or whether the resource-restricting paradigm can be generically used to circumvent the $n/3$ lower bound. In this work we study this question and formally demonstrate how the above paradigm changes the rules of the game in cryptographic definitions. First, we abstract the core properties that the resource-restricting paradigm offers by means of a functionality {\em wrapper}, in the UC framework, which when applied to a standard point-to-point network restricts the ability (of the adversary) to send new messages. We show that such a wrapped network can be implemented using the resource-restricting paradigm---concretely, using PoWs and an honest majority of computing power---and that the traditional $t<n/3$ impossibility results fail when the parties have access to such a network. Our construction is in the {\em fresh} Common Reference String (CRS) model---i.e., it assumes a CRS which becomes available to the parties at the same time as to the adversary. We then present constructions for BA and MPC, which given access to such a network tolerate $t<n/2$ corruptions without assuming a private correlated randomness setup. We also show how to remove the freshness assumption from the CRS by leveraging the power of a random oracle. Our MPC protocol achieves the standard notion of MPC security, where parties might have dedicated roles, as is for example the case in Oblivious Transfer protocols. This is in contrast to existing solutions basing MPC on PoWs, which associate roles to pseudonyms but do not link these pseudonyms with the actual parties.
2020
EUROCRYPT
Statistical ZAPR Arguments from Bilinear Maps 📺
Dwork and Naor (FOCS '00) defined ZAPs as 2-message witness-indistinguishable proofs that are public-coin. We relax this to \emph{ZAPs with private Randomness} (ZAPRs), where the verifier can use private coins to sample the first message (independently of the statement being proved), but the proof must remain publicly verifiable given only the protocol transcript. In particular, ZAPRs are \emph{reusable}, meaning that the first message can be reused for multiple proofs without compromising security. Known constructions of ZAPs from trapdoor permutations or bilinear maps are only computationally WI (and statistically sound). Two recent results of Badrinarayanan-Fernando-Jain-Khurana-Sahai and Goyal-Jain-Jin-Malavolta [EUROCRYPT '20] construct the first \emph{statistical ZAP arguments}, which are statistically WI (and computationally sound), from the quasi-polynomial LWE assumption. Here, we construct \emph{statistical ZAPR arguments} from the quasi-polynomial decision-linear (DLIN) assumption on groups with a bilinear map. Our construction relies on a combination of several tools including Groth-Ostrovsky-Sahai NIZK and NIWI [EUROCRYPT '06, CRYPTO '06, JACM '12], ``sometimes-binding statistically hiding commitments'' [Kalai-Khurana-Sahai, EUROCRYPT '18] and the ``MPC-in-the-head'' technique [Ishai-Kushilevitz-Ostrovsky-Sahai, STOC '07].
2020
EUROCRYPT
An Algebraic Attack on Rank Metric Code-Based Cryptosystems 📺
The rank metric decoding problem is the main problem considered in cryptography based on codes in the rank metric. Very efficient schemes based on this problem or quasi-cyclic versions of it have been proposed recently, such as those in the submissions ROLLO and RQC, currently in the second round of the NIST Post-Quantum Cryptography Standardization Process. While combinatorial attacks on this problem have been extensively studied and seem now well understood, the situation is not as satisfactory for algebraic attacks, for which previous work essentially suggested that they were ineffective for cryptographic parameters. In this paper, starting from Ourivski and Johansson's algebraic modelling of the problem into a system of polynomial equations, we show how to augment this system with easily computed equations so that the augmented system is solved much faster via Gröbner bases. This happens because the augmented system has solving degree $r$, $r+1$ or $r+2$ depending on the parameters, where $r$ is the rank weight, which we show by extending results from Verbel \emph{et al.} (PQCrypto 2019) on systems arising from the MinRank problem; with target rank $r$, Verbel \emph{et al.} lower the solving degree to $r+2$, and even less for some favorable instances that they call ``superdetermined''. We give complexity bounds for this approach as well as practical timings of an implementation using \texttt{magma}. This improves upon the previously known complexity estimates for both Gröbner basis and (non-quantum) combinatorial approaches, and for example leads to an attack of complexity about $2^{200}$ on ROLLO-I-256, whose claimed security was 256 bits.
2020
EUROCRYPT
SPARKs: Succinct Parallelizable Arguments of Knowledge 📺
We introduce the notion of a Succinct Parallelizable Argument of Knowledge (SPARK). This is an argument system with the following three properties for computing and proving a time-$T$ (non-deterministic) computation: - The prover's (parallel) running time is $T + \mathrm{polylog}(T)$. (In other words, the prover's running time is essentially $T$ for large computation times!) - The prover uses at most $\mathrm{polylog}(T)$ processors. - The communication complexity and verifier complexity are both $\mathrm{polylog}(T)$. While the third property is standard in succinct arguments, the combination of all three is desirable as it gives a way to leverage moderate parallelism in favor of near-optimal running time. We emphasize that even a factor two overhead in the prover's parallel running time is not allowed. Our main results are the following, all for non-deterministic polynomial-time RAM computation. We construct (1) an (interactive) SPARK based solely on the existence of collision-resistant hash functions, and (2) a non-interactive SPARK based on any collision-resistant hash function and any SNARK with quasi-linear overhead (as satisfied by recent SNARK constructions).
2020
EUROCRYPT
Statistical Zaps and New Oblivious Transfer Protocols 📺
We study the problem of achieving statistical privacy in interactive proof systems and oblivious transfer -- two of the most well-studied two-party protocols -- when limited rounds of interaction are available. -- Statistical Zaps: We give the first construction of statistical Zaps, namely, two-round statistical witness-indistinguishable (WI) protocols with a public-coin verifier. Our construction achieves computational soundness based on the quasi-polynomial hardness of the learning with errors assumption. -- Three-Round Statistical Receiver-Private Oblivious Transfer: We give the first construction of a three-round oblivious transfer (OT) protocol -- in the plain model -- that achieves statistical privacy for receivers and computational privacy for senders against malicious adversaries, based on polynomial-time assumptions. The round-complexity of our protocol is optimal. We obtain our first result by devising a public-coin approach to compress sigma protocols, without relying on trusted setup. To obtain our second result, we devise a general framework via a new notion of statistical hash commitments that may be of independent interest.
2020
EUROCRYPT
Impossibility Results for Lattice-Based Functional Encryption Schemes 📺
Functional Encryption denotes a form of encryption where a master secret key-holder can control which functions a user can evaluate on encrypted data. Learning With Errors (LWE) (Regev, STOC'05) is known to be a useful cryptographic hardness assumption which implies strong primitives such as, for example, fully homomorphic encryption (Brakerski et al., ITCS'12) and lockable obfuscation (Goyal et al., Wichs et al., FOCS'17). Despite its strength, however, only a limited number of functional encryption schemes can be based on LWE. In fact, there are functional encryption schemes which can be achieved by using pairings but for which no secure instantiations from lattice-based assumptions are known: function-hiding inner product encryption (Lin, Baltico et al., CRYPTO'17) and compact quadratic functional encryption (Abdalla et al., CRYPTO'18). This raises the question whether there are some mathematical barriers which hinder us from realizing function-hiding and compact functional encryption schemes from lattice-based assumptions such as LWE. To study this problem, we prove an impossibility result for function-hiding functional encryption schemes which meet some algebraic restrictions at ciphertext encryption and decryption. Those restrictions are met by many attribute-based, identity-based, and functional encryption schemes whose security stems from LWE. Therefore, we see our results as important indications of why it is hard to construct new functional encryption schemes from LWE, and of which mathematical restrictions have to be overcome to construct secure lattice-based functional encryption schemes for new functionalities.
2020
EUROCRYPT
Non-Interactive Zero-Knowledge in Pairing-Free Groups from Weaker Assumptions 📺
We provide new constructions of non-interactive zero-knowledge arguments (NIZKs) for NP from discrete-logarithm-style assumptions over cyclic groups, without relying on pairings. A previous construction from (Canetti et al., Eurocrypt'18) achieves such NIZKs under the assumption that no efficient adversary can break the key-dependent message (KDM) security of (additive) ElGamal with respect to all (even inefficient) functions over groups of size $2^\lambda$, with probability better than $\mathrm{poly}(\lambda)/2^{\lambda}$. This is an extremely strong, non-falsifiable assumption. In particular, even mild (polynomial) improvements over the current best known attacks on the discrete logarithm problem would already contradict this assumption. (Canetti et al. STOC'19) describe how to improve the assumption to rely only on KDM security with respect to all efficient functions, therefore obtaining an assumption that is (in spirit) falsifiable. Our first construction improves this state of affairs. We provide a construction of NIZKs for NP under the CDH assumption together with the assumption that no efficient adversary can break the key-dependent message one-wayness of ElGamal with respect to efficient functions over groups of size $2^\lambda$, with probability better than $\mathrm{poly}(\lambda)/2^{c\lambda}$ (denoted $2^{-c\lambda}$-OWKDM), for a constant $c = 3/4$. Unlike the previous assumption, our assumption leaves an exponential gap between the best known attack and the required security guarantee. We also analyse whether we could build NIZKs when CDH does not hold. As a second contribution, we construct an infinitely often NIZK argument system for NP (where soundness and zero-knowledge are only guaranteed to hold for infinitely many security parameters), under the $2^{-c\lambda}$-OWKDM security of ElGamal with $c = 28/29+o(1)$, together with the existence of low-depth pseudorandom generators.
2020
EUROCRYPT
Succinct Non-Interactive Secure Computation 📺
We present the first maliciously secure protocol for succinct non-interactive secure two-party computation (SNISC): Each player sends just a single message whose length is (essentially) independent of the running time of the function to be computed. The protocol does not require any trusted setup, satisfies superpolynomial-time simulation-based security (SPS), and is based on (subexponential) security of the Learning With Errors (LWE) assumption. We do not rely on SNARKs or "knowledge of exponent"-type assumptions. Since the protocol is non-interactive, the relaxation to SPS security is needed, as standard polynomial-time simulation is impossible; however, a slight variant of our main protocol yields a SNISC with polynomial-time simulation in the CRS model.
2020
EUROCRYPT
Separate Your Domains: NIST PQC KEMs, Oracle Cloning and Read-Only Indifferentiability 📺
It is convenient and common for schemes in the random oracle model to assume access to multiple random oracles (ROs), leaving to implementations the task --we call it oracle cloning-- of constructing them from a single RO. The first part of the paper is a case study of oracle cloning in KEM submissions to the NIST Post-Quantum Cryptography standardization process. We give key-recovery attacks on some submissions arising from mistakes in oracle cloning, and find other submissions using oracle cloning methods whose validity is unclear. Motivated by this, the second part of the paper gives a theoretical treatment of oracle cloning. We give a definition of what is an "oracle cloning method" and what it means for such a method to "work," in a framework we call read-only indifferentiability, a simple variant of classical indifferentiability that yields security not only for usage in single-stage games but also in multi-stage ones. We formalize domain separation, and specify and study many oracle cloning methods, including common domain-separating ones, giving some general results to justify (prove read-only indifferentiability of) certain classes of methods. We are not only able to validate the oracle cloning methods used in many of the unbroken NIST PQC KEMs, but also able to specify and validate oracle cloning methods that may be useful beyond that.
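The most common cloning method the paper studies, domain separation, is easy to sketch: derive each clone by hashing a distinct fixed-length label together with the input. A minimal illustration (labels and output lengths are our choices, not a construction from the paper):

```python
import hashlib

def clone(label: bytes):
    """Random oracle clone: hash a fixed-length label before the input."""
    assert len(label) == 4           # fixed length, so prefixes never collide
    def oracle(x: bytes, outlen: int = 32) -> bytes:
        return hashlib.shake_256(label + x).digest(outlen)
    return oracle

H1, H2 = clone(b"RO01"), clone(b"RO02")
x = b"same input"
print(H1(x).hex()[:16], H2(x).hex()[:16])   # unrelated outputs on the same input
```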
2020
EUROCRYPT
Efficient Constructions for Almost-everywhere Secure Computation 📺
We study the problem of {\em almost-everywhere reliable message transmission}, a key component in designing efficient and secure MPC protocols for sparsely connected networks. The goal is to design low-degree networks which allow a large fraction of honest nodes to communicate reliably even while linearly many nodes can experience byzantine corruption and deviate arbitrarily from the assigned protocol. In this paper, we achieve a $\log$-degree network with a polylogarithmic work complexity protocol, thereby improving over the state-of-the-art result of Chandran {\em et al.} (ICALP 2010), who required a polylogarithmic-degree network and had a linear work complexity. In addition, we also achieve: (1) a work-efficient version of Dwork et al.'s (STOC 1986) butterfly network; (2) an improvement upon the state-of-the-art protocol of Ben-Or and Ron (Information Processing Letters 1996) in the randomized corruption model, both in work-efficiency and in resilience.
2020
EUROCRYPT
Broadcast-Optimal Two-Round MPC 📺
An intensive effort by the cryptographic community to minimize the round complexity of secure multi-party computation (MPC) has recently led to optimal two-round protocols from minimal assumptions. Most of the proposed solutions, however, make use of a broadcast channel in every round, and it is unclear if the broadcast channel can be replaced by standard point-to-point communication in a round-preserving manner, and if so, at what cost to the resulting security. In this work, we provide a complete characterization of the trade-off between number of broadcast rounds and achievable security level for two-round MPC tolerating arbitrarily many active corruptions. Specifically, we consider all possible combinations of broadcast and point-to-point rounds against the three standard levels of security for maliciously secure MPC protocols, namely, security with identifiable, unanimous, and selective abort. For each of these notions and each combination of broadcast and point-to-point rounds, we provide either a tight feasibility or an infeasibility result of two-round MPC. Our feasibility results hold assuming two-round OT in the CRS model, whereas our impossibility results hold given any correlated randomness.
2020
EUROCRYPT
Tornado: Automatic Generation of Probing-Secure Masked Bitsliced Implementations 📺
Cryptographic implementations deployed in real-world devices often aim at (provable) security against the powerful class of side-channel attacks while keeping reasonable performance. Last year at Asiacrypt, a new formal verification tool named tightPROVE was put forward to exactly determine whether a masked implementation is secure in the well-deployed probing security model for any given security order t. Also recently, a compiler named Usuba was proposed to automatically generate bitsliced implementations of cryptographic primitives. This paper goes one step further in terms of security and performance with a new automatic tool named Tornado. In a nutshell, from the high-level description of a cryptographic primitive, Tornado produces a functionally equivalent bitsliced masked implementation at any desired order, proven secure in the probing model, but additionally in the so-called register probing model, which much better fits the reality of software implementations. This framework is obtained by the integration of Usuba with tightPROVE+, which extends tightPROVE with the ability to verify the security of implementations in the register probing model and to fix them by inserting refresh gadgets at carefully chosen locations. We demonstrate Tornado on the lightweight cryptographic primitives selected for the second round of the NIST competition which to some extent claim to be masking-friendly. Tornado reports the performance of the resulting masked implementations for several masking orders and proves their security in the register probing model.
2020
EUROCRYPT
Mathematics and Cryptography: A Marriage of Convenience? 📺
Invited Paper
Mathematics and cryptography have a long history together, with the ups and downs inherent in any long relationship. Whether it is a marriage of convenience or a love match, their progeny have lives of their own and have had an impact on the world. This invited lecture will briefly recall some high points from the past, give speculation and encouragement for the future of this marriage, and give counseling on how to improve communication, resolve conflicts, and play well together, based on personal experience and lessons learned.
2020
EUROCRYPT
Mind the Composition: Birthday Bound Attacks on EWCDMD and SoKAC21 📺
In an early version of CRYPTO'17, Mennink and Neves proposed \textsf{EWCDMD}, a dual of \textsf{EWCDM}, and showed $n$-bit security, where $n$ is the block size of the underlying block cipher. In CRYPTO'19, Chen et al. proposed permutation based design \textsf{SoKAC21} and showed $2n/3$-bit security, where $n$ is the input size of the underlying permutation. In this paper we show birthday bound attacks on \textsf{EWCDMD} and \textsf{SoKAC21}, invalidating their security claims. Both attacks exploit an inherent composition nature present in the constructions. Motivated by the above two attacks exploiting the composition nature, we consider some generic relevant composition based constructions of ideal primitives (possibly in the ideal permutation and random oracle model) and present birthday bound distinguishers for them. In particular, we demonstrate a birthday bound distinguisher against (1) a secret random permutation followed by a public random function and (2) composition of two secret random functions. Our distinguishers for \textsf{SoKAC21} and \textsf{EWCDMD} are direct consequences of (1) and (2) respectively.
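The effect behind distinguisher (2) above can be checked numerically: at the birthday bound, the composition of two secret random functions shows roughly twice as many output collisions as a single random function. A toy simulation with parameters of our choosing (not the paper's concrete attacks on EWCDMD or SoKAC21):

```python
import random

random.seed(7)
N, q, trials = 2**12, 2**8, 200      # range size, queries (~sqrt(N)), experiments

def collision_pairs(outputs):
    seen, pairs = {}, 0
    for y in outputs:
        pairs += seen.get(y, 0)
        seen[y] = seen.get(y, 0) + 1
    return pairs

single = composed = 0
for _ in range(trials):
    f = [random.randrange(N) for _ in range(N)]   # outer random function
    g = [random.randrange(N) for _ in range(q)]   # inner one, queried q times
    h = [random.randrange(N) for _ in range(q)]   # a single random function
    single += collision_pairs(h)
    composed += collision_pairs([f[g[x]] for x in range(q)])

base = q * (q - 1) / (2 * N)
print(f"expected ~{base:.1f} vs ~{2 * base:.1f} collision pairs;",
      f"measured {single / trials:.1f} vs {composed / trials:.1f}")
```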
2020
EUROCRYPT
Double-Base Chains for Scalar Multiplications on Elliptic Curves 📺
Double-base chains (DBCs) are widely used to speed up scalar multiplications on elliptic curves. We present three results on DBCs. First, we describe the structure of the set containing all DBCs and propose an iterative algorithm to compute the number of DBCs for a positive integer. This is the first polynomial-time algorithm to compute the number of DBCs for positive integers. Second, we present an asymptotic lower bound of $\frac{\log n}{8.25}$ on the average Hamming weight of DBCs for a positive integer $n$. This result answers an open question about the Hamming weights of DBCs. Third, we propose a new algorithm to generate an optimal DBC for any positive integer. The time complexity of this algorithm is $\mathcal{O}\left(\left(\log n\right)^2 \log\log n\right)$ bit operations and the space complexity is $\mathcal{O}\left(\left(\log n\right)^{2}\right)$ bits of memory. This algorithm accelerates the recoding procedure by more than $6$ times compared to the state-of-the-art work of Bernstein, Chuengsatiansup, and Lange. The Hamming weights of optimal DBCs are over $60$\% smaller than those of non-adjacent forms (NAFs). Experimental results show that scalar multiplication using our optimal DBC is about $13$\% faster than that using the NAF on elliptic curves over large prime fields.
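For intuition about why double-base expansions are short, here is a simple greedy double-base representation compared against the NAF. This greedy method is well known and is not the paper's optimal-DBC algorithm; the bit length and seed are our choices.

```python
import random

def greedy_dbr(n):
    """Write n as a sum of {2,3}-integers 2^a * 3^b, greedily taking the largest."""
    terms = []
    while n > 0:
        best, p3 = 0, 1
        while p3 <= n:
            best = max(best, p3 << ((n // p3).bit_length() - 1))  # largest 2^a*p3 <= n
            p3 *= 3
        terms.append(best)
        n -= best
    return terms

def naf_weight(n):
    """Hamming weight of the non-adjacent form (signed binary) of n."""
    w = 0
    while n > 0:
        if n & 1:
            w += 1
            n -= 2 - (n & 3)          # digit is +1 if n % 4 == 1, else -1
        n >>= 1
    return w

random.seed(0)
n = random.getrandbits(256)
terms = greedy_dbr(n)
assert sum(terms) == n
print(f"greedy double-base terms: {len(terms)}, NAF weight: {naf_weight(n)}")
```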
2020
EUROCRYPT
Low Weight Discrete Logarithms and Subset Sum in $2^{0.65n}$ with Polynomial Memory 📺
We propose two heuristic polynomial memory collision finding algorithms for the low Hamming weight discrete logarithm problem in any abelian group $G$. The first one is a direct adaptation of the Becker-Coron-Joux (BCJ) algorithm for subset sum to the discrete logarithm setting. The second one significantly improves on this adaptation for all possible weights using a more involved application of the representation technique together with some new Markov chain analysis. In contrast to other low weight discrete logarithm algorithms, our second algorithm's time complexity interpolates to Pollard's $|G|^{\frac 1 2}$ bound for general discrete logarithm instances. We also introduce a new heuristic subset sum algorithm with polynomial memory that improves on BCJ's $2^{0.72n}$ time bound for random subset sum instances $a_1, \ldots, a_n, t \in \Z_{2^n}$. Technically, we introduce a novel nested collision finding for subset sum -- inspired by the NestedRho algorithm from Crypto '16 -- that recursively produces collisions. We first show how to instantiate our algorithm with run time $2^{0.649n}$. Using further tricks, we are then able to improve its complexity down to $2^{0.645n}$.
2020
EUROCRYPT
Private Information Retrieval with Sublinear Online Time 📺
Best Young Researcher Award
We present the first protocols for private information retrieval that allow fast (sublinear-time) database lookups without increasing the server-side storage requirements. To achieve these efficiency goals, our protocols work in an offline/online model. In an offline phase, which takes place before the client has decided which database bit it wants to read, the client fetches a short string from the servers. In a subsequent online phase, the client can privately retrieve its desired bit of the database by making a second query to the servers. By pushing the bulk of the server-side computation into the offline phase (which is independent of the client’s query), our protocols allow the online phase to complete very quickly—in time sublinear in the size of the database. Our protocols can provide statistical security in the two-server setting and computational security in the single-server setting. Finally, we prove that, in this model, our protocols are optimal in terms of the trade-off they achieve between communication and running time.
2020
EUROCRYPT
Side-channel Masking with Pseudo-Random Generator 📺
High-order masking countermeasures against side-channel attacks usually require plenty of randomness during their execution. For security against $t$ probes, the classical ISW countermeasure requires $O(t^2 s)$ random bits, where $s$ is the circuit size. However, running a True Random Number Generator (TRNG) can be costly in practice and become a bottleneck on embedded devices. In [IKL+13] the authors introduced the notion of a robust pseudo-random number generator (PRG), which must remain secure even against an adversary who can probe at most $t$ wires. They showed that when embedding a robust PRG within a private circuit, the number of random bits can be reduced to $O(t^4)$, independent of the circuit size $s$ (up to a logarithmic factor). Using bipartite expander graphs, this can be further reduced to $O(t^{3+\varepsilon})$; however, the resulting construction is impractical. In this paper we describe a practical construction where the number of random bits is only $O(t^2)$ for security against $t$ probes, without expander graphs; moreover, the running time of each pseudo-random generation goes down from $O(t^4)$ to $O(t)$. Our technique consists in using multiple independent PRGs instead of a single one. We show that for ISW circuits, the robustness property of the PRG is not required anymore, which leads to simple and efficient constructions. For example, for AES we only need 48 bytes of randomness to get second-order security ($t=2$), instead of 2880 in the original Rivain-Prouff countermeasure; when implemented on an ARM-based embedded device with a relatively slow TRNG, we obtain a 50% speed-up compared to Rivain-Prouff.
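The per-gate randomness being reduced here comes from gadgets such as the textbook ISW multiplication over GF(2), which consumes $n(n-1)/2$ fresh random bits per AND gate for $n$ shares. A minimal correctness-checked sketch of that gadget (ours, purely illustrative, not a side-channel-evaluated implementation):

```python
from functools import reduce
from operator import xor
import secrets

def share(x, n):
    """Split bit x into n shares whose XOR is x."""
    s = [secrets.randbits(1) for _ in range(n - 1)]
    return s + [reduce(xor, s, x)]

def unshare(s):
    return reduce(xor, s)

def isw_and(a, b):
    """Textbook ISW multiplication of two n-sharings over GF(2)."""
    n = len(a)
    r = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r[i][j] = secrets.randbits(1)     # the fresh randomness per gate
            r[j][i] = (r[i][j] ^ (a[i] & b[j])) ^ (a[j] & b[i])
    return [a[i] & b[i] ^ reduce(xor, (r[i][j] for j in range(n) if j != i), 0)
            for i in range(n)]

n = 5
for x in (0, 1):
    for y in (0, 1):
        assert unshare(isw_and(share(x, n), share(y, n))) == x & y
print(f"ISW AND correct with n={n} shares; {n*(n-1)//2} random bits per gate")
```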
2020
EUROCRYPT
Blackbox Secret Sharing Revisited: A Coding-Theoretic Approach with Application to Expansionless Near-Threshold Schemes 📺
A {\em blackbox} secret sharing (BBSS) scheme works in exactly the same way for all finite Abelian groups $G$; it can be instantiated for any such group $G$ and {\em only} black-box access to its group operations and to random group elements is required. A secret is a single group element and each of the $n$ players' shares is a vector of such elements. Share-computation and secret-reconstruction is by integer linear combinations. These do not depend on $G$, and neither do the privacy and reconstruction parameters $t,r$. This classical, fundamental primitive was introduced by Desmedt and Frankel (CRYPTO 1989) in their context of ``threshold cryptography.'' The expansion factor is the total number of group elements in a full sharing divided by $n$. For threshold BBSS with $t$-privacy ($1\leq t \leq n-1$), $(t+1)$-reconstruction and arbitrary $n$, constructions with minimal expansion $O(\log n)$ exist (CRYPTO 2002, 2005). These results are firmly rooted in number theory; each makes (different) judicious choices of orders in number fields admitting a vector of elements of very large length (in the number field degree) whose corresponding Vandermonde-determinant is sufficiently controlled so as to enable BBSS by a suitable adaptation of Shamir's scheme. Alternative approaches generally lead to very large expansion. The state of the art of BBSS has not changed for the last 15 years. Our contributions are two-fold. (1) We introduce a novel, nontrivial, effective construction of BBSS based on {\em coding theory} instead of number theory. For threshold-BBSS we also achieve minimal expansion factor $O(\log n)$. (2) Our method is more versatile. Namely, we show, for the first time, BBSS that is {\em near-threshold}, i.e., $r-t$ is an arbitrarily small constant fraction of $n$, {\em and} that has expansion factor~$O(1)$, i.e., individual share-vectors of {\em constant} length (``asymptotically expansionless''). The threshold can be concentrated essentially freely across the full range. We also show that expansion is minimal for near-threshold schemes and that such BBSS cannot be attained by previous methods. Our general construction is based on a well-known mathematical principle, the local-global principle. More precisely, we first construct BBSS over local rings through either Reed-Solomon or algebraic geometry codes. We then ``glue'' these schemes together in a dedicated manner to obtain a global secret sharing scheme, i.e., defined over the integers, which, as we finally prove using novel insights, has the desired BBSS properties. Though our main purpose here is advancing BBSS for its own sake, we also briefly address possible protocol applications.
2020
EUROCRYPT
Tight Security Bounds for Double-block Hash-then-Sum MACs 📺
In this work, we study the security of deterministic MAC constructions with a double-block internal state, captured by the double-block hash-then-sum (DBH) paradigm. Most DBH constructions, including PolyMAC, SUM-ECBC, PMAC-Plus, 3kf9 and LightMAC-Plus, have been proved to be pseudorandom up to $2^{2n/3}$ queries when they are instantiated with an $n$-bit block cipher, while the best known generic attacks require $2^{3n/4}$ queries. We close this gap by proving the PRF-security of DBH constructions up to $2^{3n/4}$ queries (ignoring the maximum message length). The core of the security proof is to refine the mirror theory, which systematically estimates the number of solutions to a system of equations and non-equations, and to apply it to prove the security of the finalization function. We then identify security requirements on the internal hash functions that ensure $3n/4$-bit security of the resulting constructions when combined with the finalization function. Within this framework, we prove the security of DBH whose internal hash function is given as the concatenation of a universal hash function using two independent keys. This class of constructions includes PolyMAC and SUM-ECBC. Moreover, we prove the security of PMAC-Plus, 3kf9 and LightMAC-Plus up to $2^{3n/4}$ queries.
2020
EUROCRYPT
On the Quantum Complexity of the Continuous Hidden Subgroup Problem 📺
The Hidden Subgroup Problem (HSP) aims at capturing all problems that are susceptible to be solvable in quantum polynomial time following the blueprints of Shor's celebrated algorithm. Successful solutions to this problem over various commutative groups allow one to efficiently perform number-theoretic tasks such as factoring or finding discrete logarithms. The latest successful generalization (Eisenträger et al., STOC 2014) considers the problem of finding a full-rank lattice as the hidden subgroup of the continuous vector space $\mathbb{R}^m$, even for large dimensions $m$. It unlocked new cryptanalytic algorithms (Biasse-Song SODA 2016, Cramer et al. EUROCRYPT 2016 and 2017), in particular to find mildly short vectors in ideal lattices. The cryptanalytic relevance of such a problem raises the question of a more refined and quantitative complexity analysis. In the light of the increasing physical difficulty of maintaining a large entanglement of qubits, the degree of concern may differ depending on whether the above algorithm requires only linearly many qubits or a much larger polynomial number of qubits. This is the question we start addressing with this work. We propose a detailed analysis of (a variation of) the aforementioned HSP algorithm, and conclude on its complexity as a function of all the relevant parameters. Our modular analysis is tailored to support the optimization of future specialization to cases of cryptanalytic interest. We suggest a few ideas in this direction.
2020
EUROCRYPT
Statistical ZAP Arguments 📺
Dwork and Naor (FOCS'00) first introduced and constructed two message public coin witness indistinguishable proofs (ZAPs) for NP based on trapdoor permutations. Since then, ZAPs have also been obtained based on the decisional linear assumption on bilinear maps, and indistinguishability obfuscation, and have proven extremely useful in the design of several cryptographic primitives. However, all known constructions of two-message public coin (or even publicly verifiable) proof systems only guarantee witness indistinguishability against computationally bounded verifiers. In this paper, we construct the first public coin two message witness indistinguishable (WI) arguments for NP with {\em statistical} privacy, assuming quasi-polynomial hardness of the learning with errors (LWE) assumption. We also show that the same protocol has a super-polynomial simulator (SPS), which yields the first public-coin SPS statistical zero knowledge argument. Prior to this, there were no known constructions of two-message publicly verifiable WI protocols under lattice assumptions, even satisfying the weaker notion of computational witness indistinguishability.
2020
EUROCRYPT
Tight Time-Space Lower Bounds for Finding Multiple Collision Pairs and Their Applications 📺
We consider a \emph{collision search problem} (CSP), where given a parameter $C$, the goal is to find $C$ collision pairs in a random function $f:[N] \rightarrow [N]$ (where $[N] = \{0,1,\ldots,N-1\}$) using $S$ bits of memory. Algorithms for CSP have numerous cryptanalytic applications such as space-efficient attacks on double and triple encryption. The best known algorithm for CSP is \emph{parallel collision search} (PCS) published by van Oorschot and Wiener, which achieves the time-space tradeoff $T^2 \cdot S = \tilde{O}(C^2 \cdot N)$. In this paper, we prove that any algorithm for CSP satisfies $T^2 \cdot S = \tilde{\Omega}(C^2 \cdot N)$, hence the best known time-space tradeoff is optimal (up to poly-logarithmic factors in $N$). On the other hand, we give strong evidence that proving similar unconditional time-space tradeoff lower bounds on CSP applications (such as breaking double and triple encryption) may be very difficult, and would imply a breakthrough in complexity theory. Hence, we propose a new restricted model of computation and prove that under this model, the best known time-space tradeoff attack on double encryption is optimal.
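The PCS primitive whose tradeoff is analyzed above can be sketched with distinguished points: trails are stored only when they reach a value with $d$ low zero bits, so the memory is roughly a $2^{-d}$ fraction of the work. The toy function and parameters below are ours; this finds a single collision pair and does not reproduce the paper's lower-bound model.

```python
import hashlib
import random

random.seed(3)
N_BITS, D = 20, 6                    # f maps 20 bits to 20 bits; dp = 6 low zero bits
MASK = (1 << N_BITS) - 1

def f(x):
    h = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big") & MASK

def trail(start, max_len=1 << (D + 4)):
    x = start
    for steps in range(1, max_len):
        x = f(x)
        if x & ((1 << D) - 1) == 0:  # distinguished point reached
            return x, steps
    return None                      # rare: cycle without a distinguished point

def walk(start, steps):
    x = start
    for _ in range(steps):
        x = f(x)
    return x

stored = {}                          # dp value -> (trail start, trail length)
while True:
    s = random.randrange(1 << N_BITS)
    t = trail(s)
    if t is None:
        continue
    dp, l = t
    if dp not in stored:
        stored[dp] = (s, l)
        continue
    s2, l2 = stored[dp]
    # Align both trails to the same remaining distance from the dp, then step.
    x1, x2 = (walk(s, l - l2), s2) if l >= l2 else (s, walk(s2, l2 - l))
    if x1 == x2:
        continue                     # identical trails, no collision here
    while f(x1) != f(x2):
        x1, x2 = f(x1), f(x2)
    print(f"collision: f({x1}) = f({x2}) = {f(x1)}, memory: {len(stored)} trails")
    break
```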
2020
EUROCRYPT
Integral Matrix Gram Root and Lattice Gaussian Sampling without Floats 📺
Many advanced lattice-based cryptosystems require sampling lattice points from Gaussian distributions. One challenge for this task is that all current algorithms resort to floating-point arithmetic (FPA) at some point, which has numerous drawbacks in practice: it requires numerical stability analysis, extra storage for high precision, lazy/backtracking techniques for efficiency, and may suffer from weak determinism which can completely break certain schemes. In this paper, we give techniques to implement Gaussian sampling over general lattices without using FPA. To this end, we revisit the approach of Peikert, using perturbation sampling. Peikert's approach uses continuous Gaussian sampling and some decomposition $\Sigma = A A^t$ of the target covariance matrix $\Sigma$. The suggested decomposition, e.g. the Cholesky decomposition, gives rise to a square matrix $A$ with real (not integer) entries. Our idea, in a nutshell, is to replace this decomposition by an integral one. While there is in general no integer solution if we restrict $A$ to being a square matrix, we show that such a decomposition can be efficiently found by allowing $A$ to be wider (say $n \times 9n$). This can be viewed as an extension of Lagrange's four-square theorem to matrices. In addition, we adapt our integral decomposition algorithm to the ring setting: for power-of-2 cyclotomics, we can exploit the tower-of-rings structure for improved complexity and compactness.
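The scalar shadow of the matrix statement is Lagrange's four-square theorem: every nonnegative integer $\Sigma$ has an integral $1 \times 4$ Gram root $A$ with $A A^t = \Sigma$. A brute-force toy of ours to make that concrete; the paper's contribution is the efficient $n \times O(n)$ matrix analogue, which this does not implement.

```python
from math import isqrt

def four_square(n):
    """Integral 1x4 Gram root of the 1x1 matrix (n): A with A * A^t = n."""
    for a in range(isqrt(n), -1, -1):
        for b in range(isqrt(n - a * a), -1, -1):
            rem = n - a * a - b * b
            for c in range(isqrt(rem), -1, -1):
                d = isqrt(rem - c * c)
                if d * d == rem - c * c:
                    return a, b, c, d

sigma = 9973                          # a 1x1 "covariance matrix"
A = four_square(sigma)
assert sum(x * x for x in A) == sigma
print(sigma, "=", " + ".join(f"{x}^2" for x in A))
```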
2020
EUROCRYPT
The Retracing Boomerang Attack 📺
Boomerang attacks are extensions of differential attacks, that make it possible to combine two unrelated differential properties of the first and second part of a cryptosystem with probabilities $p$ and $q$ into a new differential-like property of the whole cryptosystem with probability $p^2q^2$ (since each one of the properties has to be satisfied twice). In this paper we describe a new version of boomerang attacks which uses the counterintuitive idea of throwing out most of the data in order to force equalities between certain values on the ciphertext side. In certain cases, this creates a correlation between the four probabilistic events, which increases the probability of the combined property to $p^2q$ and increases the signal to noise ratio of the resultant distinguisher. We call this variant a {\it retracing boomerang attack} since we make sure that the boomerang we throw follows the same path on its forward and backward directions. To demonstrate the power of the new technique, we apply it to the case of 5-round AES. This version of AES was repeatedly attacked by a large variety of techniques, but for twenty years its complexity had remained stuck at $2^{32}$. At Crypto'18 it was finally reduced to $2^{24}$ (for full key recovery), and with our new technique we can further reduce the complexity of full key recovery to the surprisingly low value of $2^{16.5}$ (i.e., only $90,000$ encryption/decryption operations are required for a full key recovery on half the rounds of AES). In addition to improving previous attacks, our new technique unveils a hidden relationship between boomerang attacks and two other cryptanalytic techniques, the yoyo game and the recently introduced mixture differentials.
2020
EUROCRYPT
Optimal Merging in Quantum $k$-xor and $k$-sum Algorithms 📺
The $k$-xor or Generalized Birthday Problem aims at finding, given $k$ lists of bit-strings, a $k$-tuple among them XORing to 0. If the lists are unbounded, the best classical (exponential) time complexity has remained unbeaten since Wagner's CRYPTO 2002 paper. If the lists are bounded (of the same size) and such that there is a single solution, the \emph{dissection algorithms} of Dinur \emph{et al.} (CRYPTO 2012) improve the memory usage over a simple meet-in-the-middle. In this paper, we study quantum algorithms for the $k$-xor problem. With unbounded lists and quantum access, we improve previous work by Grassi \emph{et al.} (ASIACRYPT 2018) for almost all $k$. Next, we extend our study to lists of any size and with classical access only. We define a set of ``merging trees'' which represent the best known strategies for quantum and classical merging in $k$-xor algorithms, and prove that our method is optimal among these. Our complexities are confirmed by a Mixed Integer Linear Program that computes the best strategy for a given $k$-xor problem. All our algorithms also apply when considering modular additions instead of bitwise xors. This framework enables us to give new improved quantum $k$-xor algorithms for all $k$ and list sizes. Applications include the subset-sum problem, LPN with limited memory, and the multiple-encryption problem.
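For reference, the classical baseline the quantum algorithms are measured against is Wagner's merging tree; here is the 4-list case in toy form: merge pairs of lists on their low bits, then match the partial XORs. List sizes and the seed are our choices.

```python
import random
from collections import defaultdict

random.seed(5)
n, size, lowbits = 24, 1 << 9, 8     # 4 lists of 512 random 24-bit words (toy)
low = (1 << lowbits) - 1
L1, L2, L3, L4 = ([random.getrandbits(n) for _ in range(size)] for _ in range(4))

def merge(A, B):
    """One level of the merging tree: pairs (a, b) with a ^ b = 0 on the low bits."""
    index = defaultdict(list)
    for b in B:
        index[b & low].append(b)
    return [(a, b) for a in A for b in index[a & low]]

pairs12, pairs34 = merge(L1, L2), merge(L3, L4)
by_xor = defaultdict(list)
for a, b in pairs12:
    by_xor[a ^ b].append((a, b))

solutions = [(a, b, c, d) for c, d in pairs34 for a, b in by_xor.get(c ^ d, [])]
assert all(a ^ b ^ c ^ d == 0 for a, b, c, d in solutions)
if solutions:
    print(len(solutions), "4-xor tuples; e.g.",
          " ^ ".join(f"{x:06x}" for x in solutions[0]), "= 000000")
else:
    print("no 4-xor found; enlarge the lists and retry")
```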
2020
EUROCRYPT
Implementing Grover oracles for quantum key search on AES and LowMC 📺
Grover's search algorithm gives a quantum attack against block ciphers by searching for a key that matches a small number of plaintext-ciphertext pairs. This attack uses $O(\sqrt{N})$ calls to the cipher to search a key space of size $N$. Previous work in the specific case of AES derived the full gate cost by analyzing quantum circuits for the cipher, but focused on minimizing the number of qubits. In contrast, we study the cost of quantum key search attacks under a depth restriction and introduce techniques that reduce the oracle depth, even if it requires more qubits. As cases in point, we design quantum circuits for the block ciphers AES and LowMC. Our circuits give a lower overall attack cost in both the gate count and depth-times-width cost models. In NIST's post-quantum cryptography standardization process, security categories are defined based on the concrete cost of quantum key search against AES. We present new, lower cost estimates for each category, so our work has immediate implications for the security assessment of post-quantum cryptography. As part of this work, we release Q# implementations of the full Grover oracle for AES-128, -192, -256 and for the three LowMC instantiations used in Picnic, including unit tests and code to reproduce our quantum resource estimates. To the best of our knowledge, these are the first two such full implementations and automatic resource estimations.
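The headline numbers in such analyses come from standard Grover accounting: about $\lfloor (\pi/4)\sqrt{2^k} \rfloor$ oracle iterations, with a MAXDEPTH cap forcing parallel machines at a quadratic cost. The sketch below reproduces only this generic arithmetic; the oracle-depth value is a hypothetical placeholder, not a figure from the paper.

```python
from math import pi, ceil

def grover_budget(k, oracle_depth, maxdepth):
    iters = int(pi / 4 * 2 ** (k / 2))               # full-depth iteration count
    depth = iters * oracle_depth
    machines = max(1, ceil((depth / maxdepth) ** 2)) # parallel Grover: quadratic penalty
    return iters, depth, machines

for k in (128, 192, 256):
    it, dp, m = grover_budget(k, oracle_depth=2**10, maxdepth=2**40)
    print(f"k={k}: iterations~2^{it.bit_length()-1}, depth~2^{dp.bit_length()-1}, "
          f"machines~2^{m.bit_length()-1}")
```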
2020
EUROCRYPT
Marlin: Preprocessing zkSNARKs with Universal and Updatable SRS 📺
We present a general methodology to construct preprocessing zkSNARKs where the structured reference string (SRS) is universal and updatable. This exploits a novel application of *holographic* IOPs, a natural generalization of holographic PCPs [Babai et al., STOC 1991]. We use our methodology to obtain a preprocessing zkSNARK where the SRS has linear size and arguments have constant size. Our construction improves on Sonic [Maller et al., CCS 2019], the prior state of the art in this setting, in all efficiency parameters: proving is an order of magnitude faster and verification is twice as fast, even with smaller SRS size and argument size. Our construction is most efficient when instantiated in the algebraic group model (also used by Sonic), but we also demonstrate how to realize it under concrete knowledge assumptions. The core of our zkSNARK is a new holographic IOP for rank-1 constraint satisfiability (R1CS), which is the first to achieve linear proof length and constant query complexity (among other efficiency features).
2020
EUROCRYPT
Private Aggregation from Fewer Anonymous Messages 📺
Consider the setup where $n$ parties are each given an element~$x_i$ in the finite field $\mathbb{F}_q$ and the goal is to compute the sum $\sum_i x_i$ in a secure fashion and with as little communication as possible. We study this problem in the \emph{anonymized model} of Ishai et al.~(FOCS 2006) where each party may broadcast anonymous messages on an insecure channel. We present a new analysis of the one-round ``split and mix'' protocol of Ishai et al. In order to achieve the same security parameter, our analysis reduces the required number of messages by a $\Theta(\log n)$ multiplicative factor. We also prove lower bounds showing that the dependence of the number of messages on the domain size, the number of parties, and the security parameter is essentially tight. Using a reduction of Balle et al. (2019), our improved analysis of the protocol of Ishai et al. yields, in the same model, an $\left(\varepsilon, \delta\right)$-differentially private protocol for aggregation that, for any constant $\varepsilon > 0$ and any $\delta = \frac{1}{\mathrm{poly}(n)}$, incurs only a constant error and requires only a \emph{constant number of messages} per party. Previously, such a protocol was known only for $\Omega(\log n)$ messages per party.
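The one-round ``split and mix'' protocol itself is simple to sketch: each party posts $m$ additive shares of its input to an anonymous channel, and the analysis above concerns how small $m$ can be. A toy version with parameters of our choosing:

```python
import random
import secrets

q, n, m = 2**61 - 1, 50, 5           # prime field size, parties, messages per party

inputs = [secrets.randbelow(q) for _ in range(n)]

board = []                           # the anonymous bulletin board
for x in inputs:
    shares = [secrets.randbelow(q) for _ in range(m - 1)]
    shares.append((x - sum(shares)) % q)   # shares sum to x mod q
    board.extend(shares)
random.shuffle(board)                # anonymity: shares arrive unattributed

assert sum(board) % q == sum(inputs) % q
print("aggregate recovered from", len(board), "anonymous messages")
```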
2020
EUROCRYPT
Two-Round Oblivious Transfer from CDH or LPN 📺
We show a new general approach for constructing maliciously-secure two-round oblivious transfer (OT). Specifically, we provide a generic sequence of transformations to upgrade a very basic notion of two-round OT, which we call elementary OT, to UC-secure OT. We then give simple constructions of elementary OT under the Computational Diffie-Hellman (CDH) assumption or the Learning Parity with Noise (LPN) assumption, yielding the first constructions of malicious (UC-secure) two-round OT under these assumptions. Since two-round OT is complete for two-round 2-party and multi-party computation in the malicious setting, we also achieve the first constructions of the latter under these assumptions.
2020
EUROCRYPT
New Constructions of Statistical NIZKs: Dual-Mode DV-NIZKs and More 📺
Non-interactive zero-knowledge proofs (NIZKs) are important primitives in cryptography. A major challenge since the early works on NIZKs has been to construct NIZKs with a statistical zero-knowledge guarantee against unbounded verifiers. In the common reference string (CRS) model, such "statistical NIZK arguments" are currently known from k-Lin in a pairing-group and from LWE. In the (reusable) designated-verifier model (DV-NIZK), where a trusted setup algorithm generates a reusable verification key for checking proofs, we also have a construction from DCR. If we relax our requirements to computational zero-knowledge, we additionally have NIZKs from factoring and CDH in a pairing group in the CRS model, and from nearly all assumptions that imply public-key encryption (e.g., CDH, LPN, LWE) in the designated-verifier model. Thus, there still remains a gap in our understanding of statistical NIZKs in both the CRS and the designated-verifier models. In this work, we develop new techniques for constructing statistical NIZK arguments. First, we construct statistical DV-NIZK arguments from the k-Lin assumption in pairing-free groups, the QR assumption, and the DCR assumption. These are the first constructions in pairing-free groups and from QR that satisfy statistical zero-knowledge. All of our constructions are secure even if the verification key is chosen maliciously (i.e., they are "malicious-designated-verifier" NIZKs), and moreover, they satisfy a "dual-mode" property where the CRS can be sampled from two computationally indistinguishable distributions: one distribution yields statistical DV-NIZK arguments while the other yields computational DV-NIZK proofs. We then show how to adapt our k-Lin construction in a pairing group to obtain new publicly-verifiable statistical NIZK arguments from pairings with a qualitatively weaker assumption than existing constructions of pairing-based statistical NIZKs. Our constructions follow the classic paradigm of Feige, Lapidot, and Shamir (FLS). While the FLS framework has traditionally been used to construct computational (DV)-NIZK proofs, we newly show that the same framework can be leveraged to construct dual-mode (DV)-NIZKs.
2020
EUROCRYPT
Combiners for Functional Encryption, Unconditionally 📺
Functional encryption (FE) combiners allow one to combine many candidates for a functional encryption scheme, possibly based on different computational assumptions, into another functional encryption candidate with the guarantee that the resulting candidate is secure as long as at least one of the original candidates is secure. The fundamental question in this area is whether FE combiners exist. A series of works, Ananth et al. (CRYPTO '16), Ananth-Jain-Sahai (EUROCRYPT '17), and Ananth et al. (TCC '19), constructed FE combiners from various assumptions. We give the first unconditional construction of combiners for functional encryption, resolving this question completely. Our construction immediately implies an unconditional universal functional encryption scheme, an FE scheme that is secure as long as any secure FE scheme exists. Previously, such results either relied on algebraic assumptions or required subexponential security assumptions.
2020
EUROCRYPT
On the Streaming Indistinguishability of a Random Permutation and a Random Function 📺
An adversary with $S$ bits of memory obtains a stream of $Q$ elements that are uniformly drawn from the set $\{1,2,\ldots,N\}$, either with or without replacement. This corresponds to sampling $Q$ elements using either a random function or a random permutation. The adversary's goal is to distinguish between these two cases. This problem was first considered by Jaeger and Tessaro (EUROCRYPT 2019), who proved that the adversary's advantage is upper bounded by $\sqrt{Q \cdot S/N}$. Jaeger and Tessaro used this bound as a streaming switching lemma to prove that known time-memory tradeoff attacks on several modes of operation (such as counter-mode) are optimal up to a factor of $O(\log N)$ if $Q \cdot S \approx N$. However, the bound's proof assumed an unproven combinatorial conjecture. Moreover, if $Q \cdot S \ll N$ there is a gap between the upper bound of $\sqrt{Q \cdot S/N}$ and the $Q \cdot S/N$ advantage obtained by known attacks. In this paper, we prove a tight upper bound (up to poly-logarithmic factors) of $O(\log Q \cdot Q \cdot S/N)$ on the adversary's advantage in the streaming distinguishing problem. The proof does not require a conjecture and is based on a hybrid argument that gives rise to a reduction from the unique-disjointness communication complexity problem to streaming.
2020
EUROCRYPT
Fault Template Attacks on Block Ciphers Exploiting Fault Propagation 📺
Fault attacks (FA) are among the most potent practical threats to modern cryptographic implementations. Over the years, FA techniques have evolved, gradually moving towards the exploitation of device-centric properties of the faults. In this paper, we exploit the fact that activation and propagation of a fault through a given combinational circuit (i.e., observability of a fault) is data-dependent. Next, we show that this property of combinational circuits leads to powerful Fault Template Attacks (FTA), even for implementations having dedicated protections against both power- and fault-based vulnerabilities. The attacks found in this work are applicable even if the fault injection is made at the middle rounds of a block cipher, which are out of reach for most of the other existing fault analysis strategies. Quite evidently, they also work in a known-plaintext scenario. Moreover, the middle-round attacks are entirely blind in the sense that no access to the ciphertexts (correct/faulty) or plaintexts is required. The adversary is only assumed to have the power of repeating an unknown plaintext several times. Practical validation on a hardware implementation of SCA-FA-protected PRESENT, and simulated evaluation on a public software implementation of protected AES, prove the efficacy of the proposed attacks.
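The data-dependence that FTA exploits can already be seen on a single gate. The following minimal Python sketch (a toy illustration, not the paper's attack) shows that a bit-flip on one input of an AND gate is observable at the output exactly when the other input is 1, so merely learning whether a fault propagated leaks a data bit:

```python
# Fault observability is data-dependent: flipping input a of AND(a, b)
# changes the output only when b = 1, so an adversary who observes whether
# the fault had an effect learns b.
def and_gate(a, b):
    return a & b

for a in (0, 1):
    for b in (0, 1):
        faulty = and_gate(a ^ 1, b)            # flip input a
        propagated = faulty != and_gate(a, b)  # did the fault reach the output?
        print(f"a={a} b={b}: fault on a observable -> {propagated}")
# The fault is observable exactly in the rows with b = 1.
```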
2020
EUROCRYPT
On the Memory-Tightness of Hashed ElGamal 📺
We study the memory-tightness of security reductions in public-key cryptography, focusing in particular on Hashed ElGamal. We prove that any {\em straightline} (i.e., without rewinding) black-box reduction needs memory which grows linearly with the number of queries of the adversary it has access to, as long as this reduction treats the underlying group generically. This makes progress towards proving a conjecture by Auerbach {\em et al.} (CRYPTO 2017), and is also the first lower bound on memory-tightness for a concrete cryptographic scheme (as opposed to generalized reductions across security notions). Our proof relies on compression arguments in the generic group model.
2020
EUROCRYPT
On a Generalization of Substitution-Permutation Networks: The HADES Design Strategy 📺
Keyed and unkeyed cryptographic permutations often iterate simple round functions. Substitution-permutation networks (SPNs) are an approach that has been popular since the mid-1990s. One of the new directions in the design of these round functions is to reduce the substitution (S-Box) layer from a full one to a partial one, uniformly distributed over all the rounds. LowMC and Zorro are examples of this approach. A relevant freedom in the design space is to allow for a highly non-uniform distribution of S-Boxes. However, choosing rounds that are so different from each other is very rarely done, as it makes security analysis and implementation much harder. We develop the design strategy HADES and an analysis framework for it, which despite this increased complexity allows for security arguments against many classes of attacks, similar to earlier simpler SPNs. The framework builds upon the wide trail design strategy, and it additionally allows for security arguments against algebraic attacks, which are much more of a concern when algebraically simple S-Boxes are used. Subsequently, this is put into practice by concrete instances and benchmarks for a use case that generally benefits from a smaller number of S-Boxes and showcases the diversity of design options we support: a candidate cipher natively working with objects in GF(p), for securing data transfers with distributed databases using secure multiparty computation (MPC). Compared to MiMC, the currently fastest design, we observe significant improvements in online bandwidth requirements and throughput with a simultaneous reduction of preprocessing effort, while having a comparable online latency.
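The round layout is the distinctive feature, and it is easy to sketch. The following toy Python permutation (hedged: the field size, matrix, round counts, and round constants are placeholders, not a secure instantiation) shows the HADES pattern of full S-box rounds wrapped around cheap partial rounds:

```python
# HADES round layout: full rounds (S-box on every cell) at the beginning
# and end, partial rounds (a single S-box) in the middle.
p, t = 101, 3                           # toy prime and state width
M = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]   # placeholder invertible linear layer

def sbox(x):                 # x^3 permutes F_p whenever gcd(3, p-1) = 1
    return pow(x, 3, p)

def linear(state):
    return [sum(M[i][j] * state[j] for j in range(t)) % p for i in range(t)]

def add_key(state, rc):      # toy round constant addition
    return [(s + rc) % p for s in state]

def hades_permutation(state, rf=4, rp=6):
    rnd = 0
    for _ in range(rf // 2):                       # first full rounds
        state = linear([sbox(s) for s in add_key(state, rnd)]); rnd += 1
    for _ in range(rp):                            # partial rounds: one S-box
        state = add_key(state, rnd)
        state[0] = sbox(state[0])
        state = linear(state); rnd += 1
    for _ in range(rf // 2):                       # last full rounds
        state = linear([sbox(s) for s in add_key(state, rnd)]); rnd += 1
    return state

print(hades_permutation([1, 2, 3]))
```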
2020
EUROCRYPT
Security of Hedged Fiat-Shamir Signatures under Fault Attacks 📺
Deterministic generation of per-signature randomness has been a widely accepted solution to mitigate the catastrophic risk of randomness failure in Fiat--Shamir type signature schemes. However, recent studies have practically demonstrated that such de-randomized schemes, including EdDSA, are vulnerable to differential fault attacks, which enable adversaries to recover the entire secret signing key, by artificially provoking randomness reuse or corrupting computation in other ways. In order to balance concerns of both randomness failures and the threat of fault injection, some signature designs are advocating a ``hedged'' derivation of the per-signature randomness, by hashing the secret key, message, and a nonce. Despite the growing popularity of the hedged paradigm in practical signature schemes, to the best of our knowledge, there has been no attempt to formally analyze the fault resilience of hedged signatures. We perform a formal security analysis of the fault resilience of signature schemes constructed via the Fiat--Shamir transform. We propose a model to characterize bit-tampering fault attacks, and investigate their impact across different steps of the signing operation. We prove that, for some types of faults, attacks are mitigated by the hedged paradigm, while attacks remain possible for others. As concrete case studies, we then apply our results to XEdDSA, a hedged version of EdDSA used in the Signal messaging protocol, and to Picnic2, a hedged Fiat--Shamir signature scheme in Round 2 of the NIST Post-Quantum standardization process.
2020
EUROCRYPT
Compact NIZKs from Standard Assumptions on Bilinear Maps 📺
A non-interactive zero-knowledge (NIZK) protocol enables a prover to convince a verifier of the truth of a statement without leaking any other information by sending a single message. The main focus of this work is on exploring short pairing-based NIZKs for all NP languages based on standard assumptions. In this regime, the seminal work of Groth, Ostrovsky, and Sahai (J.ACM'12) (GOS-NIZK) is still considered to be the state-of-the-art. Although fairly efficient, one drawback of GOS-NIZK is that the proof size is multiplicative in the circuit size computing the NP relation. That is, the proof size grows by $O(|C|k)$, where $C$ is the circuit for the NP relation and $k$ is the security parameter. By now, there have been numerous follow-up works focusing on shortening the proof size of pairing-based NIZKs; however, thus far, all works come at the cost of relying either on a non-standard knowledge-type assumption or a non-static $q$-type assumption. Specifically, improving the proof size of the original GOS-NIZK under the same standard assumption has remained an open problem. Our main result is a construction of a pairing-based NIZK for all of NP whose proof size is additive in $|C|$, that is, the proof size only grows by $|C| + \mathrm{poly}(k)$, based on the decisional linear (DLIN) assumption. Since the DLIN assumption is the same assumption underlying GOS-NIZK, our NIZK is a strict improvement on their proof size. As by-products of our main result, we also obtain the following two results: (1) We construct a perfectly zero-knowledge NIZK (NIPZK) for NP relations computable in NC1 with proof size $|w| \cdot \mathrm{poly}(k)$, where $|w|$ is the witness length, based on the DLIN assumption. This is the first pairing-based NIPZK for a non-trivial class of NP languages whose proof size is independent of $|C|$ based on a standard assumption. (2) We construct a universally composable (UC) NIZK for NP relations computable in NC1 in the erasure-free adaptive setting whose proof size is $|w| \cdot \mathrm{poly}(k)$ from the DLIN assumption. This is an improvement over the recent result of Katsumata, Nishimaki, Yamada, and Yamakawa (CRYPTO'19), which gave a similar scheme based on a non-static $q$-type assumption. The main building block for all of our NIZKs is a constrained signature scheme with decomposable online-offline efficiency. This is a property which we newly introduce in this paper and construct from the DLIN assumption. We believe this construction is of independent interest.
2020
EUROCRYPT
Key Recovery from Gram--Schmidt Norm Leakage in Hash-and-Sign Signatures over NTRU Lattices 📺
In this paper, we initiate the study of side-channel leakage in hash-and-sign lattice-based signatures, with particular emphasis on the two efficient implementations of the original GPV lattice-trapdoor paradigm for signatures, namely NIST second-round candidate Falcon and its simpler predecessor DLP. Both of these schemes implement the GPV signature scheme over NTRU lattices, achieving great speed-ups over the general lattice case. Our results are mainly threefold. First, we identify a specific source of side-channel leakage in most implementations of those schemes, namely, the one-dimensional Gaussian sampling steps within lattice Gaussian sampling. It turns out that the implementations of these steps often leak the Gram--Schmidt norms of the secret lattice basis. Second, we elucidate the link between this leakage and the secret key, by showing that the entire secret key can be efficiently reconstructed solely from those Gram--Schmidt norms. The result makes heavy use of the algebraic structure of the corresponding schemes, which work over a power-of-two cyclotomic field. Third, we concretely demonstrate the side-channel attack against DLP (but not Falcon due to the different structures of the two schemes). The challenge is that timing information only provides an approximation of the Gram--Schmidt norms, so our algebraic recovery technique needs to be combined with pruned tree search in order to apply it to approximate values. Experimentally, we show that around $2^{35}$ DLP traces are enough to reconstruct the entire key with good probability.
2020
EUROCRYPT
Adaptively Secure ABE for DFA from k-Lin and More 📺
In this work, we present: - the first adaptively secure ABE for DFA from the k-Lin assumption in prime-order bilinear groups; this resolves one of the open problems posed by Waters [CRYPTO'12]; - the first ABE for NFA from the k-Lin assumption, provided the number of accepting paths is smaller than the order of the underlying group; the scheme achieves selective security; - the first compact adaptively secure ABE (supporting unbounded multi-use of attributes) for branching programs from the k-Lin assumption, which generalizes and simplifies the recent result of Kowalczyk and Wee for boolean formulas (NC1) [EUROCRYPT'19]. Our adaptively secure ABE for DFA relies on a new combinatorial mechanism avoiding the exponential security loss in the number of states when naively combining two recent techniques from CRYPTO'19 and EUROCRYPT'19. This requires us to design a selectively secure ABE for NFA; we give a construction which is sufficient for our purpose and of independent interest. Our ABE for branching programs leverages insights from our ABE for DFA.
2020
EUROCRYPT
Blind Schnorr Signatures and Signed ElGamal Encryption in the Algebraic Group Model 📺
The Schnorr blind signing protocol allows blind issuing of Schnorr signatures, one of the most widely used signatures. Despite its practical relevance, its security analysis is unsatisfactory. The only known security proof is informal and in the combination of the generic group model (GGM) and the random oracle model (ROM) assuming that the ``ROS problem'' is hard. The situation is similar for (Schnorr-)signed ElGamal encryption, a simple CCA2-secure variant of ElGamal. We analyze the security of these schemes in the algebraic group model (AGM), an idealized model closer to the standard model than the GGM. We first prove tight security of Schnorr signatures from the discrete logarithm assumption (DL) in the AGM+ROM. We then give a rigorous proof for blind Schnorr signatures in the AGM+ROM assuming hardness of the one-more discrete logarithm problem and ROS. As ROS can be solved in sub-exponential time using Wagner's algorithm, we propose a simple modification of the signing protocol, which leaves the signatures unchanged. It is therefore compatible with systems that already use Schnorr signatures, such as blockchain protocols. We show that the security of our modified scheme relies on the hardness of a problem related to ROS that appears much harder. Finally, we give tight reductions, again in the AGM+ROM, of the CCA2 security of signed ElGamal encryption to DDH and signed hashed ElGamal key encapsulation to DL.
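For reference, plain (non-blind) Schnorr signing and verification, the scheme whose blind issuing protocol is analyzed above, can be sketched in a few lines of Python (hedged: toy 11-bit group parameters for readability; real deployments use roughly 256-bit groups):

```python
# A minimal, insecure-parameter sketch of Schnorr signatures.
import hashlib, random

p, q = 2039, 1019          # p = 2q + 1, both prime (toy sizes)
g = 4                      # generates the order-q subgroup of Z_p^*

def H(R, msg):             # the random-oracle hash, instantiated with SHA-256
    data = f"{R}|{msg}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = random.randrange(1, q)         # secret key
y = pow(g, x, p)                   # public key

def sign(msg):
    k = random.randrange(1, q)
    R = pow(g, k, p)               # commitment
    c = H(R, msg)                  # challenge
    s = (k + c * x) % q            # response
    return R, s

def verify(msg, sig):
    R, s = sig
    c = H(R, msg)
    return pow(g, s, p) == (R * pow(y, c, p)) % p   # g^s = R * y^c

sig = sign("hello")
print(verify("hello", sig), verify("tampered", sig))  # True False
```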
2020
EUROCRYPT
Lower Bounds for Leakage-Resilient Secret Sharing 📺
Threshold secret sharing allows a dealer to split a secret into $n$ shares such that any authorized subset of cardinality at least $t$ of those shares efficiently reveals the secret, while at the same time any unauthorized subset of cardinality less than $t$ contains no information about the secret. Leakage-resilience additionally requires that the secret remains hidden even if one is given a bounded amount of additional leakage from every share. In this work, we study leakage-resilient secret sharing schemes and prove a lower bound on the share size and the required amount of randomness of any information-theoretically secure scheme. We prove that for any information-theoretically secure leakage-resilient secret sharing scheme either the amount of randomness across all shares or the share size has to be linear in $n$. More concretely, for a secret sharing scheme with $p$-bit long shares, $\ell$-bit leakage per share, where $\widehat{t}$ shares uniquely define the remaining $n - \widehat{t}$ shares, it has to hold that $p \ge \frac{\ell (n - t)}{\widehat{t}}$. We use this lower bound to gain further insights into a question that was recently posed by Benhamouda et al. (CRYPTO'18), who ask to what extent existing regular secret sharing schemes already provide protection against leakage. The authors proved that Shamir's secret sharing is $1$-bit leakage-resilient for reconstruction thresholds $t \geq 0.85n$ and conjectured that it is also $1$-bit leakage-resilient for any other threshold that is a constant fraction of the total number of shares. We do not disprove their conjecture, but show that it is the best one could possibly hope for. Concretely, we show that for large enough $n$ and any constant $0< c < 1$ it holds that Shamir's secret sharing scheme is \emph{not} leakage-resilient for $t \leq \frac{cn}{\log n}$. In contrast to the setting with information-theoretic security, we show that our lower bound does not hold in the computational setting. That is, we show how to construct a leakage-resilient secret sharing scheme in the random oracle model that is secure against computationally bounded adversaries and violates the lower bound stated above.
2020
EUROCRYPT
Secure Multi-party Quantum Computation with a Dishonest Majority 📺
The cryptographic task of secure multi-party (classical) computation has received a lot of attention in the last decades. Even in the extreme case where a computation is performed between k mutually distrustful players, and security is required even for the single honest player if all other players are colluding adversaries, secure protocols are known. For quantum computation, on the other hand, protocols allowing arbitrary dishonest majority have only been proven for k=2. In this work, we generalize the approach taken by Dupuis, Nielsen and Salvail (CRYPTO 2012) in the two-party setting to devise a secure, efficient protocol for multi-party quantum computation for any number of players k, and prove security against up to k-1 colluding adversaries. The quantum round complexity of the protocol for computing a quantum circuit of {CNOT, T} depth d is O(k (d + log n)), where n is the security parameter. To achieve efficiency, we develop a novel public verification protocol for the Clifford authentication code, and a testing protocol for magic-state inputs, both using classical multi-party computation.
2020
EUROCRYPT
Sigma protocols for MQ, PKP and SIS, and fishy signature schemes 📺
This work presents sigma protocols to prove knowledge of: - a solution to a system of quadratic polynomials, - a solution to an instance of the Permuted Kernel Problem and - a witness for a variety of lattice statements (including SIS). Our sigma protocols have soundness error 1/q', where q' is any number bounded by the size of the underlying finite field. This is much better than existing proofs, which have soundness error 2/3 or (q'+1)/2q'. The prover and verifier time of our proofs is O(q'). We achieve this by first constructing so-called sigma protocols with helper, which are sigma protocols where the prover and the verifier are assisted by a trusted third party, and then eliminating the helper from the proof with a "cut-and-choose" protocol. We apply the Fiat-Shamir transform to obtain signature schemes with security proof in the QROM. We show that the resulting signature schemes, which we call the "MUltivariate quaDratic FIat-SHamir" scheme (MUDFISH) and the "ShUffled Solution to Homogeneous linear SYstem FIat-SHamir" scheme (SUSHSYFISH), are more efficient than existing signatures based on the MQ problem and the Permuted Kernel Problem. Our proof system can be used to improve the efficiency of applications relying on (generalizations of) Stern's protocol. We show that the proof size of our SIS proof is smaller than that of Stern's protocol by an order of magnitude and that our proof is more efficient than existing post-quantum secure SIS proofs.
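The efficiency gain from a small soundness error is just arithmetic: to push soundness below $2^{-128}$ one repeats the protocol until the errors multiply down to the target. A minimal Python sketch (assuming a 128-bit soundness target and plain parallel repetition):

```python
# Repetitions needed to amplify soundness error `err` down to 2^-128:
# after t repetitions the error is err^t, so t = ceil(128 / log2(1/err)).
import math

def repetitions(err, target_bits=128):
    return math.ceil(target_bits / -math.log2(err))

print(repetitions(2 / 3))     # soundness 2/3  -> 219 repetitions
print(repetitions(1 / 256))   # soundness 1/q' with q' = 256 -> 16 repetitions
```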
2020
PKC
Fast, Compact, and Expressive Attribute-Based Encryption 📺
Attribute-based encryption (ABE) is an advanced cryptographic tool, useful for building various types of access control systems. Toward the goal of making ABE more practical, we propose key-policy (KP) and ciphertext-policy (CP) ABE schemes which are the first to simultaneously support unbounded sizes of attribute sets and policies with negation and multi-use of attributes, allow fast decryption, and achieve adaptive security under a standard assumption. Our schemes are more expressive than previous schemes and efficient enough. To achieve the adaptive security along with the other properties, we refine the technique introduced by Kowalczyk and Wee (Eurocrypt'19) so that we can apply the technique to more expressive ABE schemes. Furthermore, we also present a new proof technique that allows us to remove redundant elements used in their ABE schemes. We implement our schemes at the 128-bit security level and present benchmarks for an ordinary personal computer and smartphones. They show that all algorithms run within one second on the personal computer when handling any policy or attribute set with one hundred attributes.
2020
PKC
Adaptive Simulation Security for Inner Product Functional Encryption 📺
Inner product functional encryption ($\mathsf{IPFE}$) [1] is a popular primitive which enables inner product computations on encrypted data. In $\mathsf{IPFE}$, the ciphertext is associated with a vector $\boldsymbol{x}$, the secret key is associated with a vector $\boldsymbol{y}$, and decryption reveals the inner product $\langle \boldsymbol{x}, \boldsymbol{y} \rangle$. Previously, it was known how to achieve adaptive indistinguishability ($\mathsf{IND}$) based security for $\mathsf{IPFE}$ from the $\mathsf{DDH}$, $\mathsf{DCR}$ and $\mathsf{LWE}$ assumptions [8]. However, in the stronger simulation ($\mathsf{SIM}$) based security game, it was only known how to support a restricted adversary that makes all its key requests either before or after seeing the challenge ciphertext, but not both. In more detail, Wee [46] showed that the $\mathsf{DDH}$-based scheme of Agrawal et al. (Crypto 2016) achieves semi-adaptive simulation-based security, where the adversary must make all its key requests after seeing the challenge ciphertext. On the other hand, O’Neill showed that all $\mathsf{IND}$-secure $\mathsf{IPFE}$ schemes (which may be based on $\mathsf{DDH}$, $\mathsf{DCR}$ and $\mathsf{LWE}$) satisfy $\mathsf{SIM}$-based security in the restricted model where the adversary makes all its key requests before seeing the challenge ciphertext. In this work, we resolve the question of $\mathsf{SIM}$-based security for $\mathsf{IPFE}$ by showing that variants of the $\mathsf{IPFE}$ constructions by Agrawal et al., based on $\mathsf{DDH}$, Paillier and $\mathsf{LWE}$, satisfy the strongest possible adaptive $\mathsf{SIM}$-based security, where the adversary can make an unbounded number of key requests both before and after seeing the (single) challenge ciphertext. This establishes optimal security of the $\mathsf{IPFE}$ schemes, under all hardness assumptions on which they can (presently) be based.
2020
PKC
Verifiable Inner Product Encryption Scheme 📺
In the standard setting of functional encryption (FE), we assume both the Central Authority (CA) and the encryptors to run their respective algorithms faithfully. Badrinarayanan et al. [ASIACRYPT 2016] proposed the concept of verifiable FE, which essentially guarantees that dishonest encryptors and authorities, even when colluding together, are not able to generate ciphertexts and tokens that give “inconsistent” results. They also provide a compiler turning any perfectly correct FE into a verifiable FE, but do not give efficient constructions. In this paper we improve on this situation by considering Inner-Product Encryption (IPE), which is a special case of functional encryption and a primitive that has attracted wide interest from both practitioners and researchers in the last decade. Specifically, we construct the first efficient verifiable IPE (VIPE) scheme according to the inner-product functionality of Katz, Sahai and Waters [EUROCRYPT 2008]. To instantiate the general construction of Badrinarayanan et al. we need to solve several additional challenges. In particular, we construct the first efficient perfectly correct IPE scheme. Our VIPE satisfies unconditional verifiability, whereas its privacy relies on the DLin assumption.
2020
PKC
A New Paradigm for Public-Key Functional Encryption for Degree-2 Polynomials 📺
We give the first public-key functional encryption scheme that supports the generation of functional decryption keys for degree-2 polynomials, with succinct ciphertexts, whose semi-adaptive simulation-based security is proven under standard assumptions. At the heart of our new paradigm lies a so-called partially function-hiding functional encryption scheme for inner products, which admits public-key instances, and which is sufficient to build functional encryption for degree-2 polynomials. Doing so, we improve upon prior works, such as the constructions from Lin (CRYPTO 17) or Ananth and Sahai (EUROCRYPT 17), both of which rely on function-hiding inner-product FE, which can only exist in the private-key setting. The simplicity of our construction yields the most efficient FE for quadratic functions from standard assumptions (even compared to those satisfying a weaker security notion). The interest of our methodology is that the FE for quadratic functions that builds upon any partially function-hiding FE for inner products inherits the security properties of the latter. In particular, we build a partially function-hiding FE for inner products that enjoys simulation security, in the semi-adaptive setting, where the challenge sent by the adversary can be chosen adaptively after seeing the public key (but before corrupting functional decryption keys). This is in contrast to prior public-key FE for quadratic functions from Baltico et al. (CRYPTO 17), which only achieved indistinguishability-based, selective security. As a bonus, we show that we can obtain security against chosen-ciphertext attacks straightforwardly. Even though this is the de facto security notion for encryption, it was not achieved by prior functional encryption schemes for quadratic functions, where the generic Fujisaki-Okamoto transformation (CRYPTO 99) does not apply.
2020
PKC
Master-Key KDM-Secure IBE from Pairings 📺
Identity-based encryption (IBE) is a generalization of public-key encryption (PKE) by allowing encryptions to be made to user identities. In this work, we seek to obtain IBE schemes that achieve key-dependent-message (KDM) security with respect to messages that depend on the master secret key. Previous KDM-secure schemes only achieved KDM security in simpler settings, in which messages may only depend on user secret keys. An important motivation behind studying master-KDM security is the application of this notion in obtaining generic constructions of KDM-CCA secure PKE, a primitive notoriously difficult to realize. We give the first IBE that achieves master-KDM security from standard assumptions in pairing groups. Our construction is modular and combines techniques from KDM-secure PKE based on hash-proof systems, together with IBE that admits a tight security proof in the multi-challenge setting, which happens to be unexpectedly relevant in the context of KDM security. In fact, to the best of our knowledge, this is the first setting where techniques developed in the context of realizing tightly secure cryptosystems have led to a new feasibility result. As a byproduct, our KDM-secure IBE, and thus the resulting KDM-CCA-secure PKE, both enjoy a tight security reduction, independent of the number of challenge ciphertexts, which was not achieved before.
2020
PKC
Hierarchical Identity-Based Encryption with Tight Multi-challenge Security 📺
We construct the first hierarchical identity-based encryption (HIBE) scheme with tight adaptive security in the multi-challenge setting, where adversaries are allowed to ask for ciphertexts for multiple adaptively chosen identities. Technically, we develop a novel technique that can tightly introduce randomness into user secret keys for hierarchical identities in the multi-challenge setting, which cannot be easily achieved by the existing techniques for tightly multi-challenge secure IBE. In contrast to the previous constructions, the security of our scheme is independent of the number of user secret key queries and that of challenge ciphertext queries. We prove the tight security of our scheme based on the Matrix Decisional Diffie-Hellman Assumption, which is an abstraction of standard and simple decisional Diffie-Hellman assumptions, such as the k-Linear and SXDH assumptions. Finally, we also extend our ideas to achieve tight chosen-ciphertext security and anonymity, respectively. These security notions for HIBE have not been tightly achieved in the multi-challenge setting before.
2020
PKC
The Usefulness of Sparsifiable Inputs: How to Avoid Subexponential iO 📺
We consider the problem of removing subexponential reductions to indistinguishability obfuscation (iO) in the context of obfuscating probabilistic programs. Specifically, we show how to apply complexity absorption (Zhandry, CRYPTO 2016) to the recent notion of probabilistic indistinguishability obfuscation (piO, Canetti et al., TCC 2015). As a result, we obtain a variant of piO which allows obfuscating a large class of probabilistic programs, from polynomially secure indistinguishability obfuscation and extremely lossy functions. In particular, our piO variant is able to obfuscate circuits with specific input domains regardless of the performed computation. We then revisit several (direct or indirect) applications of piO, and obtain – a fully homomorphic encryption scheme (without circular security assumptions), – a multi-key fully homomorphic encryption scheme with threshold decryption, – an encryption scheme secure under arbitrary key-dependent messages, – a spooky encryption scheme for all circuits, – a function secret sharing scheme with additive reconstruction for all circuits, all from polynomially secure iO, extremely lossy functions, and, depending on the scheme, also other (but polynomial and comparatively mild) assumptions. All of these assumptions are implied by polynomially secure iO and the (non-polynomial, but very well-investigated) exponential DDH assumption. Previously, all the above applications required assuming the subexponential security of iO (and more standard assumptions).
2020
PKC
Witness Maps and Applications 📺
We introduce the notion of Witness Maps as a cryptographic notion of a proof system. A Unique Witness Map (UWM) deterministically maps all witnesses for an $\mathbf{NP}$ statement to a single representative witness, resulting in a computationally sound, deterministic-prover, non-interactive witness independent proof system. A relaxation of UWM, called Compact Witness Map (CWM), maps all the witnesses to a small number of witnesses, resulting in a “lossy” deterministic-prover, non-interactive proof-system. We also define a Dual Mode Witness Map (DMWM) which adds an “extractable” mode to a CWM. Our main construction is a DMWM for all $\mathbf{NP}$ relations, assuming sub-exponentially secure indistinguishability obfuscation ($i\mathcal{O}$), along with standard cryptographic assumptions. The DMWM construction relies on a CWM and a new primitive called Cumulative All-Lossy-But-One Trapdoor Functions (C-ALBO-TDF), both of which are in turn instantiated based on $i\mathcal{O}$ and other primitives. Our instantiation of a CWM is in fact a UWM; in turn, we show that a UWM implies Witness Encryption. Along the way to constructing UWM and C-ALBO-TDF, we also construct, from standard assumptions, Puncturable Digital Signatures and a new primitive called Cumulative Lossy Trapdoor Functions (C-LTDF). The former improves upon a construction of Bellare et al. (Eurocrypt 2016), who relied on sub-exponentially secure $i\mathcal{O}$ and sub-exponentially secure OWF. As an application of our constructions, we show how to use a DMWM to construct the first leakage and tamper-resilient signatures with a deterministic signer, thereby solving a decade-old open problem posed by Katz and Vaikuntanathan (Asiacrypt 2009), by Boyle, Segev and Wichs (Eurocrypt 2011), as well as by Faonio and Venturi (Asiacrypt 2016). Our construction achieves the optimal leakage rate of $1 - o(1)$.
2020
PKC
Memory-Tight Reductions for Practical Key Encapsulation Mechanisms 📺
The efficiency of a black-box reduction is an important goal of modern cryptography. Traditionally, the time complexity and the success probability were considered the main aspects of efficiency measurements. In CRYPTO 2017, Auerbach et al. introduced the notion of memory-tightness in cryptographic reductions and showed a memory-tight reduction of the existential unforgeability of the RSA-FDH signature scheme. Unfortunately, their techniques do not extend directly to reductions involving intricate RO-programming. The problem seems to be inherent, as all the other existing results on memory-tightness are lower bounds and impossibility results. In fact, Auerbach et al. conjectured that a memory-tight reduction for security of Hashed-ElGamal KEM is impossible. We refute the above conjecture. Using a simple RO simulation technique, we provide memory-tight reductions of security of the Cramer-Shoup and the ECIES version of Hashed-ElGamal KEM. We prove memory-tight reductions for different variants of the Fujisaki-Okamoto transformation. We analyze the modular transformations introduced by Hofheinz, Hövelmanns and Kiltz (TCC 2017). In addition to the constructions involving implicit rejection, we present a memory-tight reduction for the security of the transformation $\mathsf{QFO}_m^{\perp}$. Our techniques can withstand correctness errors and are applicable to several lattice-based KEM candidates.
2020
PKC
Toward RSA-OAEP Without Random Oracles 📺
We show new partial and full instantiation results under chosen-ciphertext security for the widely implemented and standardized RSA-OAEP encryption scheme of Bellare and Rogaway (EUROCRYPT 1994) and two variants. Prior work on such instantiations either showed negative results or settled for “passive” security notions like IND-CPA. More precisely, recall that RSA-OAEP adds redundancy and randomness to a message before composing two rounds of an underlying Feistel transform, whose round functions are modeled as random oracles (ROs), with RSA. Our main results are: Either of the two oracles (while still modeling the other as a RO) can be instantiated in RSA-OAEP under IND-CCA2 using mild standard-model assumptions on the round functions and generalizations of algebraic properties of RSA shown by Barthe, Pointcheval, and Zanella Béguelin (CCS 2012). The algebraic properties are only shown to hold at practical parameters for a small encryption exponent ($e=3$), but we argue they have value for larger $e$ as well. Both oracles can be instantiated simultaneously for two variants of RSA-OAEP, called “t-clear” and “s-clear” RSA-OAEP. For this we use extractability-style assumptions in the sense of Canetti and Dakdouk (TCC 2010) on the round functions, as well as novel yet plausible “XOR-type” assumptions on RSA. While admittedly strong, such assumptions may nevertheless be necessary at this point to make positive progress. In particular, our full instantiations evade impossibility results of Shoup (J. Cryptology 2002), Kiltz and Pietrzak (EUROCRYPT 2009), and Bitansky et al. (STOC 2014). Moreover, our results for s-clear RSA-OAEP yield the most efficient RSA-based encryption scheme proven IND-CCA2 in the standard model (using bold assumptions on cryptographic hashing) to date.
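The OAEP transform itself is short enough to sketch. A minimal Python toy (hedged: illustrative parameter sizes, with SHA-256 standing in for the two round functions G and H whose instantiation is the paper's subject):

```python
# Two-round OAEP Feistel padding: X = (m || 0^k1) xor G(r), Y = r xor H(X).
# The redundancy (the zero block) is checked on unpadding.
import hashlib, os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def G(seed, n):                      # simple hash-based mask expansion
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:n]

def oaep_pad(m, k1=16, rlen=32):
    r = os.urandom(rlen)
    X = xor(m + b"\x00" * k1, G(r, len(m) + k1))   # round 1
    Y = xor(r, hashlib.sha256(X).digest())         # round 2 (H = SHA-256)
    return X + Y

def oaep_unpad(padded, mlen, k1=16):
    X, Y = padded[:mlen + k1], padded[mlen + k1:]
    r = xor(Y, hashlib.sha256(X).digest())
    data = xor(X, G(r, mlen + k1))
    assert data[mlen:] == b"\x00" * k1             # redundancy check
    return data[:mlen]

msg = b"attack at dawn!!"
print(oaep_unpad(oaep_pad(msg), len(msg)) == msg)  # True
```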
2020
PKC
Public-Key Puncturable Encryption: Modular and Compact Constructions 📺
We revisit the method of designing public-key puncturable encryption schemes and present a generic conversion by leveraging the techniques of distributed key-distribution and revocable encryption. In particular, we first introduce a refined version of identity-based revocable encryption, named key-homomorphic identity-based revocable key encapsulation mechanism with extended correctness. Then, we propose a generic construction of puncturable key encapsulation mechanism from the former by merging the idea of distributed key-distribution. Compared to the state-of-the-art, our generic construction supports an unbounded number of punctures and multiple tags per message, thus achieving more fine-grained revocation of decryption capability. Further, it does not rely on random oracles, does not suffer from non-negligible correctness error, and results in a variety of efficient schemes with distinct features. More precisely, we obtain the first scheme with very compact ciphertexts in the standard model, and the first scheme with support for both unbounded size of tags per ciphertext and unbounded punctures as well as constant-time puncture operation. Moreover, we get a comparable scheme proven secure under the standard DBDH assumption, which enjoys both faster encryption and decryption than previous works based on the same assumption, especially when the number of tags associated with the ciphertext is large.
2020
PKC
Flexible Authenticated and Confidential Channel Establishment (fACCE): Analyzing the Noise Protocol Framework 📺
The Noise protocol framework is a suite of channel establishment protocols, of which each individual protocol ensures various security properties of the transmitted messages, but keeps specification, implementation, and configuration relatively simple. Implementations of the Noise protocols are themselves, due to the employed primitives, very performant. Thus, despite its relative youth, Noise is already used by large-scale deployed applications such as WhatsApp and Slack. Though the Noise specification describes and claims the security properties of the protocol patterns very precisely, there has been no computational proof yet. We close this gap. Noise uses only a limited number of cryptographic primitives, which makes it an ideal candidate for reduction-based security proofs. Due to its patterns’ characteristics as channel establishment protocols, and the usage of established keys within the handshake, the authenticated and confidential channel establishment (ACCE) model (Jager et al., CRYPTO 2012) seems to perfectly fit for an analysis of Noise. However, the ACCE model strictly divides protocols into two non-overlapping phases: the pre-accept phase (i.e., the channel establishment) and the post-accept phase (i.e., the channel). In contrast, Noise allows the transmission of encrypted messages as soon as any key is established (for instance, before authentication between parties has taken place), and then incrementally increases the channel’s security guarantees. By proposing a generalization of the original ACCE model, we capture security properties of such staged channel establishment protocols flexibly – comparably to the multi-stage key exchange model (Fischlin and Günther, CCS 2014). We give security proofs for eight of the 15 basic Noise patterns in the full version (EPRINT 2019/436) and exemplify them by the proof of the XK pattern in this article.
2020
PKC
Limits on the Efficiency of (Ring) LWE Based Non-interactive Key Exchange 📺
$\mathsf{LWE}$-based key-exchange protocols lie at the heart of post-quantum public-key cryptography. However, all existing protocols either lack the non-interactive nature of Diffie-Hellman key-exchange or a polynomial $\mathsf{LWE}$-modulus, resulting in unwanted efficiency overhead. We study the possibility of designing non-interactive $\mathsf{LWE}$-based protocols with polynomial $\mathsf{LWE}$-modulus. To this end, we identify and formalize simple non-interactive and polynomial $\mathsf{LWE}$-modulus variants of existing protocols, where Alice and Bob simultaneously exchange one or more (ring) $\mathsf{LWE}$ samples with polynomial $\mathsf{LWE}$-modulus and then run individual key reconciliation functions to obtain the shared key. We point out central barriers and show that such non-interactive key-exchange protocols are impossible if: (1) the reconciliation functions first compute the inner product of the received $\mathsf{LWE}$ sample with their private $\mathsf{LWE}$ secret. This impossibility is information theoretic. (2) One of the reconciliation functions does not depend on the error of the transmitted $\mathsf{LWE}$ sample. This impossibility assumes hardness of $\mathsf{LWE}$. We give further evidence that progress in either direction, of giving an $\mathsf{LWE}$-based $\mathrm{NIKE}$ protocol or proving impossibility of one, will lead to progress on some other well-studied questions in cryptography. Overall, our results show possibilities and challenges in designing simple (ring) $\mathsf{LWE}$-based non-interactive key exchange protocols.
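The protocol shape being studied is easy to picture. In the following minimal Python sketch (toy parameters; this is the textbook approximate-agreement step, not a scheme from the paper), Alice's and Bob's derived values agree only up to a small error term, which is precisely what the key reconciliation functions must remove:

```python
# Approximate agreement from exchanged LWE samples: both derived scalars
# equal sA^T A sB plus a small, party-specific error.
import random

q, n = 12289, 64
def small():                         # small secret/error entries
    return random.randint(-2, 2)

A  = [[random.randrange(q) for _ in range(n)] for _ in range(n)]  # shared
sA = [small() for _ in range(n)]; eA = [small() for _ in range(n)]
sB = [small() for _ in range(n)]; eB = [small() for _ in range(n)]

# Alice publishes bA = A^T sA + eA, Bob publishes bB = A sB + eB.
bA = [(sum(A[i][j] * sA[i] for i in range(n)) + eA[j]) % q for j in range(n)]
bB = [(sum(A[i][j] * sB[j] for j in range(n)) + eB[i]) % q for i in range(n)]

kA = sum(bB[i] * sA[i] for i in range(n)) % q    # sA^T A sB + <eB, sA>
kB = sum(bA[j] * sB[j] for j in range(n)) % q    # sA^T A sB + <eA, sB>
diff = min((kA - kB) % q, (kB - kA) % q)
print("difference between derived keys:", diff)  # small compared to q
```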
2020
PKC
PAKEs: New Framework, New Techniques and More Efficient Lattice-Based Constructions in the Standard Model 📺
Password-based authenticated key exchange (PAKE) allows two parties with a shared password to agree on a session key. In the last decade, the design of PAKE protocols from lattice assumptions has attracted lots of attention. However, existing solutions in the standard model do not have appealing efficiency. In this work, we first introduce a new PAKE framework. We then provide two realizations in the standard model, under the Learning With Errors (LWE) and Ring-LWE assumptions, respectively. Our protocols are much more efficient than previous proposals, thanks to three novel technical ingredients that may be of independent interest. The first ingredient consists of two approximate smooth projective hash (ASPH) functions from LWE, as well as two ASPHs from Ring-LWE. The latter are the first ring-based constructions in the literature, one of which only has a quasi-linear runtime while its function value contains $\Theta(n)$ field elements (where n is the degree of the polynomial defining the ring). The second ingredient is a new key reconciliation scheme that is approximately rate-optimal and that leads to a very efficient key derivation for PAKE protocols. The third one is a new authentication code that allows one to verify a MAC with a noisy key.
2020
PKC
Constraining and Watermarking PRFs from Milder Assumptions 📺
Constrained pseudorandom functions (C-PRFs) let the possessor of a secret key delegate the ability to evaluate the function on certain authorized inputs, while keeping the remaining function values pseudorandom. A constraint-hiding constrained PRF (CHC-PRF) additionally conceals the predicate that determines which inputs are authorized. These primitives have a wealth of applications, including watermarking schemes, symmetric deniable encryption, and updatable garbled circuits. Recent works have constructed (CH)C-PRFs from rather aggressive parameterizations of Learning With Errors (LWE) with subexponential modulus-noise ratios, even for relatively simple “puncturing” or $\text{NC}^1$ circuit constraints. This corresponds to strong lattice assumptions and inefficient constructions, and stands in contrast to LWE-based unconstrained PRFs and fully homomorphic encryption schemes, which can be based on quasi-polynomial or even (nearly) polynomial modulus-noise ratios. In this work we considerably improve the LWE assumptions needed for building (constraint-hiding) constrained PRFs and watermarking schemes. In particular, for CHC-PRFs and related watermarking schemes we improve the modulus-noise ratio to $\lambda^{O((d+\log \lambda) \log \lambda)}$ for depth-$d$ circuit constraints, which is merely quasi-polynomial for $\text{NC}^1$ circuits and closely related watermarking schemes. For (constraint-revealing) C-PRFs for $\text{NC}^1$ we do even better, obtaining a nearly polynomial $\lambda^{\omega(1)}$ ratio. These improvements are partly enabled by slightly modifying the definition of C-PRFs, in a way that is still compatible with many of their applications. Finally, as a contribution of independent interest we build CHC-PRFs for special constraint classes from generic, weaker assumptions: we obtain bit-fixing constraints based on the minimal assumption of one-way functions, and hyperplane-membership constraints based on key-homomorphic PRFs.
2020
PKC
Bringing Order to Chaos: The Case of Collision-Resistant Chameleon-Hashes 📺
Chameleon-hash functions, introduced by Krawczyk and Rabin at NDSS 2000, are trapdoor collision-resistant hash-functions parametrized by a public key. If the corresponding secret key is known, arbitrary collisions for the hash function can be efficiently found. Chameleon-hash functions have prominent applications in the design of cryptographic primitives, such as lifting non-adaptively secure signatures to adaptively secure ones. Recently, this primitive also received a lot of attention as a building block in more complex cryptographic applications, ranging from editable blockchains to advanced signature and encryption schemes. We observe that the latter applications use various different notions of collision-resistance, and it is not always clear whether the respective notion really covers what seems intuitively required by the application. Therefore, we revisit existing collision-resistance notions in the literature, study their relations, and—using the example of the recent redactable blockchain proposals—discuss which practical impact different notions of collision-resistance might have. Moreover, we provide a stronger, and arguably more desirable, notion of collision-resistance than what is known from the literature. Finally, we present a surprisingly simple and efficient black-box construction of chameleon-hash functions achieving this strong notion.
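The primitive itself is compact. Here is a minimal Python sketch of a discrete-log chameleon hash in the Krawczyk-Rabin style (hedged: toy 11-bit group; this illustrates trapdoor collisions generically, not the paper's new construction):

```python
# CH(m, r) = g^m * h^r mod p with h = g^x; knowing the trapdoor x lets one
# find a collision for any target message, since CH(m, r) = g^(m + x*r).
import random

p, q = 2039, 1019            # p = 2q + 1, both prime (toy sizes)
g = 4                        # generates the order-q subgroup
x = random.randrange(1, q)   # trapdoor
h = pow(g, x, p)             # public key

def ch(m, r):
    return (pow(g, m, p) * pow(h, r, p)) % p

m, r = 123, random.randrange(q)
m2 = 456                                   # target message
r2 = (r + (m - m2) * pow(x, -1, q)) % q    # solve m2 + x*r2 = m + x*r mod q
print(ch(m, r) == ch(m2, r2))              # True: a collision
```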
2020
PKC
Concretely-Efficient Zero-Knowledge Arguments for Arithmetic Circuits and Their Application to Lattice-Based Cryptography 📺
In this work we present a new interactive Zero-Knowledge Argument of knowledge for general arithmetic circuits. Our protocol is based on the “MPC-in-the-head”-paradigm of Ishai et al. (STOC 2009) and follows the recent “MPC-in-the-head with Preprocessing” as proposed by Katz, Kolesnikov and Wang (ACM CCS 2018). However, in contrast to Katz et al., who used the “cut-and-choose” approach for pre-processing, we show how to incorporate the well-known “sacrificing” paradigm into “MPC-in-the-head”, which reduces the proof size when working over arithmetic circuits. Our argument system uses only lightweight symmetric-key primitives and utilizes a simplified version of the so-called SPDZ-protocol. Based on specific properties of our protocol we then show how it can be used to construct an efficient Zero-Knowledge Argument of Knowledge for instances of the Short Integer Solution (SIS) problem. We present different protocols that are tailored to specific uses of SIS, while utilizing the advantages of our scheme. In particular, we present a variant of our argument system that allows the parties to sample the circuit “on the fly”, which may be of independent interest. We furthermore implemented our Zero-Knowledge argument for SIS and show that using our protocols it is possible to run a complete interactive proof, even for general SIS instances which result in a circuit with $>10^6$ gates, in less than 0.5 s.
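The "sacrificing" check that replaces cut-and-choose boils down to a one-line algebraic identity. A minimal Python sketch (hedged: shown on plain values with a toy modulus; in the actual protocol every value is secret-shared and only eps and delta are opened):

```python
# Verify a multiplication triple (a, b, c) by consuming a second triple
# (x, y, z) and a random public challenge t: the check value is identically
# zero when c = a*b and z = x*y, and nonzero except with probability ~1/p
# over t otherwise.
import random

p = 2 ** 31 - 1                      # toy prime modulus

def sacrifice_check(a, b, c, x, y, z, t):
    eps = (t * a - x) % p            # opened; masks a because x is uniform
    delta = (b - y) % p              # opened; masks b because y is uniform
    return (t * c - z - eps * y - delta * x - eps * delta) % p == 0

a, b = random.randrange(p), random.randrange(p)
x, y = random.randrange(p), random.randrange(p)
t = random.randrange(1, p)
print(sacrifice_check(a, b, (a * b) % p, x, y, (x * y) % p, t))      # True
print(sacrifice_check(a, b, (a * b + 1) % p, x, y, (x * y) % p, t))  # False
```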
2020
PKC
Updateable Inner Product Argument with Logarithmic Verifier and Applications 📺
We propose an improvement for the inner product argument of Bootle et al. (EUROCRYPT’16). The new argument replaces the unstructured common reference string (the commitment key) by a structured one. We give two instantiations of this argument, for two different distributions of the CRS. In the designated-verifier setting, this structure can be used to reduce verification from linear to logarithmic in the circuit size. The argument can be compiled to the publicly verifiable setting in asymmetric bilinear groups. The new common reference string is easily updateable. The argument can be directly used to improve verification of Bulletproofs range proofs (IEEE SP’18). On the other hand, to use the improved argument to prove circuit satisfiability with logarithmic verification, we adapt recent techniques from Sonic (ACM CCS’19) to work with the new common reference string. The resulting argument is secure under standard assumptions (in the Random Oracle Model), in contrast with Sonic and recent works that improve its efficiency (Plonk, Marlin, AuroraLight), which, apart from the Random Oracle Model, need either the Algebraic Group Model or knowledge-type assumptions.
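The inner product argument being improved rests on a folding identity: one verifier challenge halves the vectors while adding a predictable correction to the claimed inner product. A minimal Python check of that identity (hedged: field arithmetic only, with the commitments omitted):

```python
# One folding round: a' = x*aL + x^-1*aR, b' = x^-1*bL + x*bR satisfy
# <a', b'> = <a, b> + x^2 * L + x^-2 * R with L = <aL, bR>, R = <aR, bL>,
# so each round halves the vectors at the cost of two cross terms.
import random

p, n = 2 ** 31 - 1, 8
a = [random.randrange(p) for _ in range(n)]
b = [random.randrange(p) for _ in range(n)]

def ip(u, v):
    return sum(s * t for s, t in zip(u, v)) % p

aL, aR = a[: n // 2], a[n // 2 :]
bL, bR = b[: n // 2], b[n // 2 :]
L, R = ip(aL, bR), ip(aR, bL)              # cross terms sent by the prover

x = random.randrange(1, p)
xinv = pow(x, -1, p)
a2 = [(x * u + xinv * v) % p for u, v in zip(aL, aR)]
b2 = [(xinv * u + x * v) % p for u, v in zip(bL, bR)]

lhs = ip(a2, b2)
rhs = (ip(a, b) + (x * x % p) * L + (xinv * xinv % p) * R) % p
print(lhs == rhs)                           # True
```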
2020
PKC
On Black-Box Extensions of Non-interactive Zero-Knowledge Arguments, and Signatures Directly from Simulation Soundness 📺
Highly efficient non-interactive zero-knowledge arguments (NIZK) are often constructed for limited languages, and it is not known how to extend them to cover wider classes of languages in general. In this work we initiate a study of black-box language extensions for conjunctive and disjunctive relations, that is, building a NIZK system for $\mathcal{L} \diamond \hat{\mathcal{L}}$ (with $\diamond \in \{\wedge, \vee\}$) based on NIZK systems for languages $\mathcal{L}$ and $\hat{\mathcal{L}}$. While the conjunctive extension of NIZKs is straightforward by simply executing the given NIZKs in parallel, it is not known how disjunctive extensions could be achieved in a black-box manner. Besides, observe that the simple conjunctive extension does not work in the case of simulation-sound NIZKs (SS-NIZKs), as pointed out by Sahai (FOCS 1999). Our main contribution is an impossibility result that negates the existence of the above extensions and implies other non-trivial separations among NIZKs, SS-NIZKs, and labelled SS-NIZKs. Motivated by the difficulty of such transformations, we additionally present an efficient construction of signature schemes based on unbounded simulation-sound NIZKs (USS-NIZKs) for any language without language extensions.
2020
PKC
On QA-NIZK in the BPK Model 📺
Recently, Bellare et al. defined subversion-resistance (security in the case where the CRS creator may be malicious) for NIZK. In particular, a Sub-ZK NIZK is zero-knowledge even in the case of a subverted CRS. We study Sub-ZK QA-NIZKs, where the CRS can depend on the language parameter. First, we observe that subversion zero-knowledge (Sub-ZK) in the CRS model corresponds to no-auxiliary-string non-black-box NIZK in the Bare Public Key model, and hence the use of non-black-box techniques is needed to obtain Sub-ZK. Second, we give a precise definition of Sub-ZK QA-NIZKs that are (knowledge-)sound if the language parameter but not the CRS is subverted, and zero-knowledge even if both are subverted. Third, we prove that the most efficient known QA-NIZK for linear subspaces by Kiltz and Wee is Sub-ZK under a new knowledge assumption that is by itself secure in (a weaker version of) the algebraic group model. Depending on the parameter setting, it is (knowledge-)sound under different non-falsifiable assumptions, some of which do not belong to the family of knowledge assumptions.
2020
PKC
Improved Discrete Gaussian and Subgaussian Analysis for Lattice Cryptography 📺
Discrete Gaussian distributions over lattices are central to lattice-based cryptography, and to the computational and mathematical aspects of lattices more broadly. The literature contains a wealth of useful theorems about the behavior of discrete Gaussians under convolutions and related operations. Yet despite their structural similarities, most of these theorems are formally incomparable, and their proofs tend to be monolithic and written nearly “from scratch,” making them unnecessarily hard to verify, understand, and extend. In this work we present a modular framework for analyzing linear operations on discrete Gaussian distributions. The framework abstracts away the particulars of Gaussians, and usually reduces proofs to the choice of appropriate linear transformations and elementary linear algebra. To showcase the approach, we establish several general properties of discrete Gaussians, and show how to obtain all prior convolution theorems (along with some new ones) as straightforward corollaries. As another application, we describe a self-reduction for Learning With Errors (LWE) that uses a fixed number of samples to generate an unlimited number of additional ones (having somewhat larger error). The distinguishing features of our reduction are its simple analysis in our framework, and its exclusive use of discrete Gaussians without any loss in parameters relative to a prior mixed discrete-and-continuous approach. As a contribution of independent interest, for subgaussian random matrices we prove a singular value concentration bound with explicitly stated constants, and we give tighter heuristics for specific distributions that are commonly used for generating lattice trapdoors. These bounds yield improvements in the concrete bit-security estimates for trapdoor lattice cryptosystems.
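As a concrete reference point for the objects being analyzed, here is a minimal Python sketch of sampling the one-dimensional discrete Gaussian $D_{\mathbb{Z},s}$ by rejection (hedged: toy tail cut, not constant-time, and using the $\exp(-\pi z^2/s^2)$ normalization common in this literature; it is a generic textbook sampler, not the paper's framework):

```python
# Rejection sampling of the discrete Gaussian over Z: propose z uniformly
# from a wide window and accept with probability exp(-pi * z^2 / s^2),
# which is proportional to the target mass since the proposal is uniform.
import math, random

def sample_dgauss(s: float, tail: float = 12.0) -> int:
    """Sample z in Z with probability proportional to exp(-pi * z^2 / s^2)."""
    bound = int(math.ceil(tail * s))
    while True:
        z = random.randint(-bound, bound)
        if random.random() < math.exp(-math.pi * z * z / (s * s)):
            return z

samples = [sample_dgauss(4.0) for _ in range(10000)]
print(sum(samples) / len(samples))   # empirical mean, close to 0
```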
2020
PKC
Almost Tight Security in Lattices with Polynomial Moduli - PRF, IBE, All-but-many LTF, and More 📺
Achieving tight security is a fundamental task in cryptography. While one of the most important purposes of this task is to improve the overall efficiency of a construction (by allowing smaller security parameters), many current lattice-based instantiations do not completely achieve the goal. In particular, a super-polynomial modulus seems to be necessary in all prior work for (almost) tight schemes that allow the adversary to conduct queries, such as PRF, IBE, and signatures. As the super-polynomial modulus would affect the noise-to-modulus ratio and thus increase the parameters, this might cancel out the advantages (in efficiency) brought by the tighter analysis. To determine the full power of tight security/analysis in lattices, it is necessary to determine whether the super-polynomial modulus restriction is inherent. In this work, we remove the super-polynomial modulus restriction for many important primitives – PRF, IBE, All-but-many Lossy Trapdoor Functions, and Signatures. The crux relies on an improvement over the framework of Boyen and Li (Asiacrypt 16), and an almost tight reduction from LWE to LWR, which improves prior work by Alwen et al. (Crypto 13), Bogdanov et al. (TCC 16), and Bai et al. (Asiacrypt 15). By combining these two advances, we are able to derive these almost tight schemes under LWE with a polynomial modulus.
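One ingredient, the LWE-to-LWR connection, has a compact intuition: rounding swallows small additive noise. A minimal Python sketch (toy parameters; real reductions handle the boundary cases that this toy merely flags as "usually"):

```python
# A Learning With Rounding (LWR) sample is b = round((p/q) * <a, s>) mod p.
# Adding small LWE noise e before rounding usually leaves b unchanged,
# unless <a, s> happens to sit near a rounding boundary.
import random

q, p, n = 2 ** 16, 2 ** 8, 32

def lwr_round(v):
    return (v * p + q // 2) // q % p       # round (p/q) * v to the nearest

s = [random.randrange(q) for _ in range(n)]
a = [random.randrange(q) for _ in range(n)]
inner = sum(x * y for x, y in zip(a, s)) % q

e = random.randint(-4, 4)                  # small LWE error
print(lwr_round(inner), lwr_round((inner + e) % q))  # usually equal
```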
2020
PKC
The Randomized Slicer for CVPP: Sharper, Faster, Smaller, Batchier 📺
Following the recent line of work on solving the closest vector problem with preprocessing (CVPP) using approximate Voronoi cells, we improve upon previous results in the following ways: We derive sharp asymptotic bounds on the success probability of the randomized slicer, by modelling the behaviour of the algorithm as a random walk on the coset of the lattice of the target vector. We thereby solve the open question left by Doulgerakis–Laarhoven–De Weger [PQCrypto 2019] and Laarhoven [MathCrypt 2019]. We obtain better trade-offs for CVPP and its generalisations (strictly, in certain regimes), both with and without nearest neighbour searching, as a direct result of the above sharp bounds on the success probabilities. We show how to reduce the memory requirement of the slicer, and in particular the corresponding nearest neighbour data structures, using ideas similar to those proposed by Becker–Gama–Joux [Cryptology ePrint Archive, 2015]. Using $$2^{0.185d + o(d)}$$ memory, we can solve a single CVPP instance in $$2^{0.264d + o(d)}$$ time. We further improve on the per-instance time complexities in certain memory regimes, when we are given a sufficiently large batch of CVPP problem instances for the same lattice. Using $$2^{0.208d + o(d)}$$ memory, we can heuristically solve CVPP instances in $$2^{0.234d + o(d)}$$ amortized time, for batches of size at least $$2^{0.058d + o(d)}$$. Our random walk model for analysing arbitrary-step transition probabilities in complex step-wise algorithms may be of independent interest, both for deriving analytic bounds through convexity arguments, and for computing optimal paths numerically with a shortest path algorithm. As a side result we apply the same random walk model to graph-based nearest neighbour searching, where we improve upon results of Laarhoven [SOCG 2018] by deriving sharp bounds on the success probability of the corresponding greedy search procedure.
2020
PKC
Tweaking the Asymmetry of Asymmetric-Key Cryptography on Lattices: KEMs and Signatures of Smaller Sizes 📺
Currently, lattice-based cryptosystems are less efficient than their number-theoretic counterparts (based on RSA, discrete logarithm, etc.) in terms of key and ciphertext (signature) sizes. For adequate security the former typically needs thousands of bytes while in contrast the latter only requires at most hundreds of bytes. This significant difference has become one of the main concerns in replacing currently deployed public-key cryptosystems with lattice-based ones. Observing the inherent asymmetries in existing lattice-based cryptosystems, we propose asymmetric variants of the (module-)LWE and (module-)SIS assumptions, which yield KEM and signature schemes with smaller sizes than those from the standard counterparts. Following the framework of Lindner and Peikert (CT-RSA 2011) and the Crystals-Kyber proposal (EuroS&P 2018), we propose an IND-CCA secure KEM scheme from the hardness of the asymmetric module-LWE (AMLWE), whose asymmetry is fully exploited to obtain shorter public keys and ciphertexts. Targeting 128-bit quantum security, the public key (resp., ciphertext) of our KEM only has 896 bytes (resp., 992 bytes). Our signature scheme bears most resemblance to and improves upon the Crystals-Dilithium scheme (ToCHES 2018). By making full use of the underlying asymmetric module-LWE and module-SIS assumptions and carefully selecting the parameters, we construct an SUF-CMA secure signature scheme with shorter public keys and signatures. For 128-bit quantum security, the public key (resp., signature) of our signature scheme only has 1312 bytes (resp., 2445 bytes). We adapt the best known attacks and their variants to our AMLWE and AMSIS problems and conduct a comprehensive and thorough analysis of several parameter choices (aiming at different security strengths) and their impacts on the sizes, security and error probability of lattice-based cryptosystems. Our analysis demonstrates that AMLWE and AMSIS problems admit more flexible and size-efficient choices of parameters than the respective standard versions.
2020
PKC
MPSign: A Signature from Small-Secret Middle-Product Learning with Errors 📺
We describe a digital signature scheme $$\mathsf{MPSign}$$, whose security relies on the conjectured hardness of the Polynomial Learning With Errors problem ($$\mathsf{PLWE}$$) for at least one defining polynomial within an exponential-size family (as a function of the security parameter). The proposed signature scheme follows the Fiat-Shamir framework and can be viewed as the Learning With Errors counterpart of the signature scheme described by Lyubashevsky at Asiacrypt 2016, whose security relies on the conjectured hardness of the Polynomial Short Integer Solution ($$\mathsf{PSIS}$$) problem for at least one defining polynomial within an exponential-size family. As opposed to the latter, $$\mathsf{MPSign}$$ enjoys a security proof from $$\mathsf{PLWE}$$ that is tight in the quantum-access random oracle model. The main ingredient is a reduction from $$\mathsf{PLWE}$$ for an arbitrary defining polynomial among exponentially many, to a variant of the Middle-Product Learning with Errors problem ($$\mathsf{MPLWE}$$) that allows for secrets that are small compared to the working modulus. We present concrete parameters for $$\mathsf{MPSign}$$ using such small secrets, and show that they lead to significant savings in signature length over Lyubashevsky’s Asiacrypt 2016 scheme (which uses larger secrets) at typical security levels. As an additional small contribution, and in contrast to $$\mathsf{MPSign}$$ (or $$\mathsf{MPLWE}$$), we present an efficient key-recovery attack against Lyubashevsky’s scheme (or the inhomogeneous $$\mathsf{PSIS}$$ problem), when it is used with sufficiently small secrets, showing the necessity of a lower bound on secret size for the security of that scheme.
2020
PKC
Witness Indistinguishability for Any Single-Round Argument with Applications to Access Control 📺
Consider an access policy for some resource which only allows access to users of the system who own a certain set of attributes. Specifically, we consider the case where such an access structure is defined by some monotone function $$f:\{0,1\}^N \rightarrow \{0,1\}$$, belonging to some class of functions $$F$$ (e.g. conjunctions, space-bounded computation), where N is the number of possible attributes. In this work we show that any succinct single-round delegation scheme for the function class $$F$$ can be converted into a succinct single-round private access control protocol. That is, a verifier can be convinced that an approved user (i.e. one which holds an approved set of attributes) is accessing the system, without learning any additional information about the user or the set of attributes. As a main tool of independent interest, we show that assuming a quasi-polynomially secure two-message oblivious transfer scheme with statistical sender privacy (which can be based on quasi-polynomial hardness of the DDH, QR, DCR or LWE assumptions), we can convert any single-round protocol into a witness indistinguishable one, with similar communication complexity.
2020
PKC
Boosting Verifiable Computation on Encrypted Data 📺
We consider the setting in which an untrusted server stores a collection of data and is asked to compute a function over it. In this scenario, we aim for solutions where the untrusted server does not learn information about the data and is prevented from cheating. This problem is addressed by verifiable and private delegation of computation, proposed by Gennaro, Gentry and Parno (CRYPTO’10), a notion that is close to both the active areas of homomorphic encryption and verifiable computation (VC). However, in spite of the efficiency advances in the respective areas, VC protocols that guarantee privacy of the inputs are still expensive. The only exception is a protocol by Fiore, Gennaro and Pastro (CCS’14) that supports arithmetic circuits of degree at most 2. In this paper we propose new efficient protocols for VC on encrypted data that improve over the state-of-the-art solution of Fiore et al. in multiple aspects. First, we can support computations of degree higher than 2. Second, we achieve public delegatability and public verifiability whereas Fiore et al. need the same secret key to encode inputs and verify outputs. Third, we achieve a new property that guarantees that verifiers can be convinced about the correctness of the outputs without learning information on the inputs. The key tool to obtain our new protocols is a new SNARK that can efficiently handle computations over a quotient polynomial ring, such as the one used by Ring-LWE somewhat homomorphic encryption schemes. This SNARK in turn relies on a new commit-and-prove SNARK for proving evaluations on the same point of several committed polynomials. We propose a construction of this scheme under an extractability assumption over bilinear groups in the random oracle model.
2020
PKC
Lossy CSI-FiSh: Efficient Signature Scheme with Tight Reduction to Decisional CSIDH-512 📺
Recently, Beullens, Kleinjung, and Vercauteren (Asiacrypt’19) provided the first practical isogeny-based digital signature, obtained from the Fiat-Shamir (FS) paradigm. They worked with the CSIDH-512 parameters and relied on a new record class group computation. However, as with all standard FS signatures, the security proof is highly non-tight and the concrete parameters are set under the heuristic that the only way to attack the scheme is by finding collisions for a hash function. In this paper, we propose an FS-style signature scheme, called Lossy CSI-FiSh, constructed using the CSIDH-512 parameters and with a security proof based on the “Lossy Keys” technique introduced by Kiltz, Lyubashevsky and Schaffner (Eurocrypt’18). Lossy CSI-FiSh is provably secure under the same assumption which underlies the security of the key exchange protocol CSIDH (Castryck et al. (Asiacrypt’18)) and is almost as efficient as CSI-FiSh. For instance, aiming for small signature size, our scheme is expected to take around $$\approx 800$$ ms to sign/verify while producing signatures of size $$\approx 280$$ bytes. This is only about twice as slow as CSI-FiSh, with a similar signature size for the same parameter set. As an additional benefit, our scheme is by construction secure both in the classical and quantum random oracle model.
2020
PKC
Threshold Schemes from Isogeny Assumptions 📺
We initiate the study of threshold schemes based on the Hard Homogeneous Spaces (HHS) framework of Couveignes. Quantum-resistant HHS based on supersingular isogeny graphs have recently become usable thanks to the record class group precomputation performed for the signature scheme CSI-FiSh. Using the HHS equivalent of the technique of Shamir’s secret sharing in the exponents, we adapt isogeny-based schemes to the threshold setting. In particular we present threshold versions of the CSIDH public key encryption and the CSI-FiSh signature schemes. The main highlight is a threshold version of CSI-FiSh which runs almost as fast as the original scheme, for message sizes as low as 1880 B, public key sizes as low as 128 B, and thresholds up to 56; other speed-size-threshold compromises are possible.
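The pivot above is Shamir's secret sharing "in the exponents". The HHS analogue replaces field exponentiation with group actions and is not reproduced here; the following minimal sketch shows only plain Shamir sharing over a prime field, with the field and parameters chosen by us for illustration.

```python
import random

P = 2**127 - 1  # a Mersenne prime; the field for the shares (illustrative choice)

def share(secret, t, n):
    """Split `secret` into n shares so that any t+1 of them reconstruct it
    (evaluations of a random degree-t polynomial with constant term `secret`)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=123456789, t=2, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 = t+1 shares suffice
```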
2020
PKC
Topology-Hiding Computation for Networks with Unknown Delays 📺
Topology-Hiding Computation (THC) allows a set of parties to securely compute a function over an incomplete network without revealing information on the network topology. Since its introduction in TCC’15 by Moran et al., the research on THC has focused on reducing the communication complexity, allowing larger graph classes, and tolerating stronger corruption types. All of these results consider a fully synchronous model with a known upper bound on the maximal delay of all communication channels. Unfortunately, in any realistic setting this bound has to be extremely large, which makes all fully synchronous protocols inefficient. In the literature on multi-party computation, this is solved by considering the fully asynchronous model. However, THC is unachievable in this model (and even hard to define), leaving even the definition of a meaningful model as an open problem. The contributions of this paper are threefold. First, we introduce a meaningful model of unknown and random communication delays for which THC is both definable and achievable. The probability distributions of the delays can be arbitrary for each channel, but one needs to make the (necessary) assumption that the delays are independent. The existing fully-synchronous THC protocols do not work in this setting and would, in particular, leak information about the topology. Second, in the model with trusted stateless hardware boxes introduced at Eurocrypt’18 by Ball et al., we present a THC protocol that works for any graph class. Third, we explore what is achievable in the standard model without trusted hardware and present a THC protocol for specific graph types (cycles and trees) secure under the DDH assumption. The speed of all protocols scales with the actual (unknown) delay times, in contrast to all previously known THC protocols whose speed is determined by the assumed upper bound on the network delay.
2020
PKC
Sublinear-Round Byzantine Agreement Under Corrupt Majority 📺
Although Byzantine Agreement (BA) has been studied for three decades, perhaps somewhat surprisingly, there still exist significant gaps in our understanding regarding its round complexity. A long-standing open question is the following: can we achieve BA with sublinear round complexity under corrupt majority? Due to the beautiful works by Garay et al. (FOCS’07) and Fitzi and Nielsen (DISC’09), we have partial and affirmative answers to this question, albeit for the narrow regime $$f = n/2 + o(n)$$, where f is the number of corrupt nodes and n is the total number of nodes. So far, no positive result is known about the setting $$f > 0.51n$$ even for static corruption! In this paper, we make progress along this somewhat stagnant front. We show that there exists a corrupt-majority BA protocol that terminates in $$O(\frac{1}{\epsilon} \log \frac{1}{\delta})$$ rounds in the worst case, satisfies consistency with probability at least $$1 - \delta$$, and tolerates a $$(1-\epsilon)$$ fraction of corrupt nodes. Our protocol secures against an adversary that can corrupt nodes adaptively during the protocol execution but cannot perform “after-the-fact” removal of honest messages that have already been sent prior to corruption. Our upper bound is optimal up to a logarithmic factor in light of the elegant $$\varOmega(1/\epsilon)$$ lower bound by Garay et al. (FOCS’07).
2020
PKC
Bandwidth-Efficient Threshold EC-DSA 📺
Threshold Signatures allow n parties to share the power of issuing digital signatures so that any coalition of size at least $$t+1$$ can sign, whereas groups of t or fewer players cannot. Over the last few years many schemes addressed the question of realizing efficient threshold variants for the specific case of EC-DSA signatures. In this paper we present new solutions to the problem that aim at reducing the overall bandwidth consumption. Our main contribution is a new variant of the Gennaro and Goldfeder protocol from ACM CCS 2018 that avoids all the required range proofs, while retaining provable security against malicious adversaries in the dishonest majority setting. Our experiments show that – for all levels of security – our signing protocol reduces the bandwidth consumption of the best previously known secure protocols by factors varying between 4.4 and 9, while key generation is consistently half as expensive. Furthermore, compared to these same protocols, our signature generation is faster for 192 bits of security and beyond.
2020
PKC
Blazing Fast OT for Three-Round UC OT Extension 📺
Oblivious Transfer (OT) is an important building block for multi-party computation (MPC). Since OT requires expensive public-key operations, efficiency-conscious MPC protocols use an OT extension (OTE) mechanism [Beaver 96, Ishai et al. 03] to provide the functionality of many independent OT instances with the same sender and receiver, using only symmetric-key operations plus few instances of some base OT protocol. Consequently there is significant interest in constructing OTE-friendly protocols, namely protocols that, when used as base OTs for OTE, result in extended OTs that are both round-efficient and cost-efficient. We present the most efficient OTE-friendly protocol to date. Specifically: Our base protocol incurs only 3 exponentiations per instance. Our base protocol results in a 3-round extended OT protocol. The extended protocol is UC secure in the Observable Random Oracle Model (ROM) under the CDH assumption. For comparison, the state-of-the-art base OTs that result in 3-round OTE are proven secure only in the programmable ROM, and require 4 exponentiations under Interactive DDH or 6 exponentiations under DDH [Masney-Rindal 19]. We also implement our protocol and benchmark it against the Simplest OT protocol [Chou and Orlandi, Latincrypt 2015], which is the most efficient and widely used OT protocol but not known to suffice for OTE. The computation cost is roughly the same in both cases. Interestingly, our base OT is also 3 rounds. However, we slightly modify the extension mechanism (which normally adds a round) so as to preserve the number of rounds in our case.
2020
PKC
Going Beyond Dual Execution: MPC for Functions with Efficient Verification 📺
The dual execution paradigm of Mohassel and Franklin (PKC’06) and Huang, Katz and Evans (IEEE ’12) shows how to achieve the notion of 1-bit leakage security at roughly twice the cost of semi-honest security for the special case of two-party secure computation. To date, there are no multi-party computation (MPC) protocols that offer such a strong trade-off between security and semi-honest performance. Our main result is to address this shortcoming by designing 1-bit leakage protocols for the multi-party setting, albeit for a special class of functions. We say that function f(x, y) is efficiently verifiable by g if the running time of g is always smaller than that of f and $$g(x,y,z)=1$$ if and only if $$f(x,y)=z$$. In the two-party setting, we first improve dual execution by observing that the “second execution” can be an evaluation of g instead of f, and that by definition, the evaluation of g is asymptotically more efficient. Our main MPC result is to construct a 1-bit leakage protocol for such functions from any passive protocol for f that is secure up to additive errors and any active protocol for g. An important result by Genkin et al. (STOC ’14) shows how the classic protocols by Goldreich et al. (STOC ’87) and Ben-Or et al. (STOC ’88) naturally support this property, which allows us to instantiate our compiler with two-party and multi-party protocols. A key technical result we prove is that the passive protocol for distributed garbling due to Beaver et al. (STOC ’90) is in fact secure up to additive errors against malicious adversaries, thereby yielding another powerful instantiation of our paradigm in the constant-round multi-party setting. As another concrete example of instantiating our approach, we present a novel protocol for computing perfect matching that is secure in the 1-bit leakage model and whose communication complexity is less than the honest-but-curious implementations of textbook algorithms for perfect matching.
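A toy instance of the f/g relationship defined above may help: take f to be sorting the merged inputs (super-linear) and g the linear-time check that a claimed output is sorted and a permutation of the inputs. The example and the names f, g are ours, not from the paper.

```python
from collections import Counter

def f(x, y):
    """The function to compute: sort the merged inputs (O(n log n))."""
    return sorted(x + y)

def g(x, y, z):
    """Verifier: z == f(x, y) iff z is sorted and a permutation of x + y (O(n))."""
    in_order = all(z[i] <= z[i + 1] for i in range(len(z) - 1))
    return in_order and Counter(z) == Counter(x + y)

x, y = [5, 1, 4], [3, 2]
z = f(x, y)
assert g(x, y, z)           # honest evaluation passes
assert not g(x, y, [1, 2])  # tampered output is rejected
```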
2020
PKC
Mon$\mathbb{Z}_{2^{k}}$a: Fast Maliciously Secure Two Party Computation on $\mathbb{Z}_{2^{k}}$ 📺
In this paper we present a new 2-party protocol for secure computation over rings of the form $$\mathbb{Z}_{2^k}$$. Like many recent efficient MPC protocols supporting dishonest majority, our protocol consists of a heavier (input-independent) pre-processing phase and a very efficient online stage. Our offline phase is similar to BeDOZa (Bendlin et al. Eurocrypt 2011) but employs Joye-Libert (JL, Eurocrypt 2013) as the underlying homomorphic cryptosystem and, notably, it can be proven secure without resorting to the expensive sacrifice step. JL turns out to be particularly well suited for the ring setting as it naturally supports $$\mathbb{Z}_{2^k}$$ as the underlying message space. Moreover, it enjoys several additional properties (such as valid ciphertext-verifiability and efficiency) that make it a very good fit for MPC in general. As a main technical contribution we show how to take advantage of all these properties (and of more properties that we introduce in this work, such as a ZK proof of correct multiplication) in order to design a two-party protocol that is efficient, fast and easy to implement in practice. Our solution is particularly well suited for relatively large choices of k (e.g. $$k=128$$), but compares favorably with the state-of-the-art solution SPD$$\mathbb{Z}_{2^k}$$ (Cramer et al. Crypto 2018) already for the practically very relevant case of $$\mathbb{Z}_{2^{64}}$$.
2020
PKC
Generic Authenticated Key Exchange in the Quantum Random Oracle Model 📺
We propose $$\mathsf{FO}_{\mathsf{AKE}}$$, a generic construction of two-message authenticated key exchange (AKE) from any passively secure public key encryption (PKE) in the quantum random oracle model (QROM). Whereas previous AKE constructions relied on a Diffie-Hellman key exchange or required the underlying PKE scheme to be perfectly correct, our transformation allows arbitrary PKE schemes with non-perfect correctness. Dealing with imperfect schemes is one of the major difficulties in a setting involving active attacks. Our direct construction, when applied to schemes such as the submissions to the recent NIST post-quantum competition, is more natural than previous AKE transformations. Furthermore, we avoid the use of (quantum-secure) digital signature schemes which are considerably less efficient than their PKE counterparts. As a consequence, we can instantiate our AKE transformation with any of the submissions to the recent NIST competition, e.g., ones based on codes and lattices. $$\mathsf{FO}_{\mathsf{AKE}}$$ can be seen as a generalisation of the well-known Fujisaki-Okamoto transformation (for building actively secure PKE from passively secure PKE) to the AKE setting. As a helper result, we also provide a security proof for the Fujisaki-Okamoto transformation in the QROM for PKE with non-perfect correctness which is tighter and tolerates a larger correctness error than previous proofs.
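Since the abstract presents $$\mathsf{FO}_{\mathsf{AKE}}$$ as a generalisation of the Fujisaki-Okamoto transformation, a minimal sketch of the plain FO-style KEM mechanics may be useful: derandomise encryption with coins G(m) and re-encrypt on decapsulation. The "PKE" below is a deliberately insecure symmetric placeholder (pk equals sk) so the sketch runs end to end; none of this is the paper's construction.

```python
import hashlib, os

def H(*parts):  # hash-to-32-bytes helper standing in for the random oracles G, H
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

# --- placeholder "PKE" (symmetric and insecure; only to exercise the transform) ---
def keygen():
    sk = os.urandom(32)
    return sk, sk                    # pk = sk, purely so the sketch is runnable

def enc(pk, m, coins):               # deterministic once `coins` is fixed
    pad = H(b"enc", pk, coins)
    return coins + bytes(a ^ b for a, b in zip(m, pad))

def dec(sk, c):
    coins, body = c[:32], c[32:]
    pad = H(b"enc", sk, coins)
    return bytes(a ^ b for a, b in zip(body, pad))

# --- Fujisaki-Okamoto-style KEM: derandomise with G(m), re-encrypt on decaps ---
def encaps(pk):
    m = os.urandom(32)
    c = enc(pk, m, H(b"G", m))       # encryption coins derived from the message
    return c, H(b"KDF", m, c)        # session key K = H(m, c)

def decaps(sk, pk, c):
    m = dec(sk, c)
    if enc(pk, m, H(b"G", m)) != c:  # re-encryption check catches malformed c
        return None                  # (real schemes often reject implicitly)
    return H(b"KDF", m, c)

sk, pk = keygen()
c, K = encaps(pk)
assert decaps(sk, pk, c) == K
```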
2020
PKC
Threshold Ring Signatures: New Definitions and Post-quantum Security 📺
A t-out-of-N threshold ring signature allows t parties to jointly and anonymously compute a signature on behalf of N public keys, selected in an arbitrary manner among the set of all public keys registered in the system. Existing definitions for t-out-of-N threshold ring signatures guarantee security only when the public keys are honestly generated, and many even restrict the ability of the adversary to actively participate in the computation of the signatures. Such definitions do not capture the open settings envisioned for threshold ring signatures, where parties can independently add themselves to the system, and join other parties for the computation of the signature. Furthermore, known constructions of threshold ring signatures are not provably secure in the post-quantum setting, either because they are based on problems that are not post-quantum secure (e.g. Discrete Log, RSA), or because they rely on transformations such as Fiat-Shamir, that are not always secure in the quantum random oracle model (QROM). In this paper, we provide the first definition of t-out-of-N threshold ring signatures against active adversaries who can participate in the system and arbitrarily deviate from the prescribed procedures. Second, we present a post-quantum secure realization based on any (post-quantum secure) trapdoor commitment, which we prove secure in the QROM. Our construction is black-box and it can be instantiated with any trapdoor commitment, thus allowing the use of a variety of hardness assumptions.
2020
PKC
Tight and Optimal Reductions for Signatures Based on Average Trapdoor Preimage Sampleable Functions and Applications to Code-Based Signatures 📺
The GPV construction [GPV08] presents a generic construction of signature schemes in the Hash and Sign paradigm and is used in some lattice-based signatures. This construction requires a family $$\mathcal{F}$$ of trapdoor preimage sampleable functions (TPSF). In this work we extend this notion to the weaker Average TPSF (ATPSF) and show that the GPV construction also holds for ATPSF in the Random Oracle Model (ROM). We also introduce the problem of finding a Claw with a random function (Claw(RF)) and present a tight security reduction to the Claw(RF) problem. Our reduction is also optimal, meaning that an algorithm that solves the Claw(RF) problem breaks the scheme. We extend these results to the quantum setting and prove this same tight and optimal reduction in the QROM. Finally, we apply these results to code-based signatures, notably the Wave signature scheme, and prove security for it in the ROM and the QROM, improving and extending the original analysis of [DST19a].
2020
PKC
Faster Cofactorization with ECM Using Mixed Representations 📺
This paper introduces a novel implementation of the elliptic curve factoring method specifically designed for medium-size integers such as those arising by billions in the cofactorization step of the Number Field Sieve. In this context, our algorithm requires fewer modular multiplications than any other publicly available implementation. The main ingredients are: the use of batches of primes, fast point tripling, optimal double-base decompositions and Lucas chains, and a good mix of Edwards and Montgomery representations.
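For readers unfamiliar with the elliptic curve method (ECM) itself, here is a toy stage-1 sketch in affine short Weierstrass coordinates: a factor of n is revealed when a modular inversion fails during the curve arithmetic. The paper's contributions (Edwards/Montgomery mixed representations, batched primes, optimal chains) are precisely what this naive version lacks; parameters below are ours.

```python
import random
from math import gcd

def inv_or_factor(v, n):
    """Modular inverse of v mod n; raises, carrying a factor, if gcd(v, n) > 1."""
    g = gcd(v % n, n)
    if g != 1:
        raise ZeroDivisionError(g)
    return pow(v, -1, n)

def ec_add(P, Q, a, n):
    """Affine addition on y^2 = x^3 + a*x + b over Z_n; None = infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_or_factor(2 * y1, n) % n
    else:
        lam = (y2 - y1) * inv_or_factor(x2 - x1, n) % n
    x3 = (lam * lam - x1 - x2) % n
    return (x3, (lam * (x1 - x3) - y1) % n)

def ec_mul(P, k, a, n):                    # plain double-and-add
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, n)
        P = ec_add(P, P, a, n)
        k >>= 1
    return R

def ecm(n, B1=1000, tries=20):
    primes = [p for p in range(2, B1)
              if all(p % d for d in range(2, int(p ** 0.5) + 1))]
    for _ in range(tries):
        a, x, y = (random.randrange(n) for _ in range(3))
        P = (x, y)                         # b is implicitly y^2 - x^3 - a*x mod n
        try:
            for p in primes:               # multiply by all prime powers <= B1
                k = p
                while k <= B1:
                    P = ec_mul(P, p, a, n)
                    k *= p
        except ZeroDivisionError as e:
            if 1 < e.args[0] < n:
                return e.args[0]
    return None

print(ecm(455839))                         # classic example: factors 599 * 761
```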
2020
PKC
Improved Classical Cryptanalysis of SIKE in Practice 📺
The main contribution of this work is an optimized implementation of the van Oorschot-Wiener (vOW) parallel collision finding algorithm. As is typical for cryptanalysis against conjectured hard problems (e.g. factoring or discrete logarithms), challenges can arise in the implementation that are not captured in the theory, making the performance of the algorithm in practice a crucial element of estimating security. We present a number of novel improvements, both to generic instantiations of the vOW algorithm finding collisions in arbitrary functions, and to its instantiation in the context of the supersingular isogeny key encapsulation (SIKE) protocol, that culminate in an improved classical cryptanalysis of the computational supersingular isogeny (CSSI) problem. In particular, we present a scalable implementation that can be applied to the Round-2 parameter sets of SIKE that can be used to give confidence in their security levels.
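A minimal sketch of the distinguished-point idea behind vOW collision finding, applied to a generic hash-derived function rather than the SIKE-specific step function; all parameters and names below are ours, for illustration only.

```python
import hashlib
import random

N_BITS, DP_BITS = 20, 6                 # toy search space and DP definition

def F(x):
    """Random-looking step function on 20-bit values (a stand-in for the
    problem-specific function whose collisions are hunted)."""
    h = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big") % (1 << N_BITS)

def walk(start, max_len=20 * (1 << DP_BITS)):
    """Iterate F until a distinguished point (low DP_BITS all zero) is hit."""
    x, length = start, 0
    while x & ((1 << DP_BITS) - 1):
        x = F(x)
        length += 1
        if length > max_len:            # abandon walks stuck in a cycle
            return None, None
    return x, length

def vow_collision():
    seen = {}                           # distinguished point -> (start, length)
    while True:
        start = random.randrange(1 << N_BITS)
        dp, length = walk(start)
        if dp is None:
            continue
        if dp in seen and seen[dp][0] != start:
            a, la = start, length
            b, lb = seen[dp]
            if la > lb:
                a, la, b, lb = b, lb, a, la
            for _ in range(lb - la):    # align both trails to equal length
                b = F(b)
            if a == b:                  # trails merged at the start: retry
                continue
            while F(a) != F(b):         # advance in lockstep to the merge point
                a, b = F(a), F(b)
            return a, b
        seen[dp] = (start, length)

a, b = vow_collision()
assert a != b and F(a) == F(b)
```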
2020
PKC
A Short-List of Pairing-Friendly Curves Resistant to Special TNFS at the 128-Bit Security Level 📺
There have been notable improvements in discrete logarithm computations in finite fields since 2015 and the introduction of the Tower Number Field Sieve algorithm (TNFS) for extension fields. The Special TNFS is very efficient in finite fields that are target groups of pairings on elliptic curves, where the characteristic is special (e.g. sparse). The key sizes for pairings should be increased, and alternative pairing-friendly curves can be considered. We revisit the Special variant of TNFS for pairing-friendly curves. In this case the characteristic is given by a polynomial of moderate degree (between 4 and 38) and tiny coefficients, evaluated at an integer (a seed). We present a polynomial selection with a new practical trade-off between degree and coefficient size. As a consequence, the security of curves computed by Barbulescu, El Mrabet and Ghammam in 2019 should be revised: we obtain a smaller estimated cost of STNFS for all curves except BLS12 and BN. To obtain TNFS-secure curves, we reconsider the Brezing–Weng generic construction of families of pairing-friendly curves and estimate the cost of our new Special TNFS algorithm for these curves. This improves on the work of Fotiadis and Konstantinou, Fotiadis and Martindale, and Barbulescu, El Mrabet and Ghammam. We obtain a short-list of interesting families of curves that are resistant to the Special TNFS algorithm, of embedding degrees 10 to 16 for the 128-bit security level. We conclude that at the 128-bit security level, BLS-12 and Fotiadis–Konstantinou–Martindale curves with $$k=12$$ over a 440 to 448-bit prime field seem to be the best choice for pairing efficiency. We also give hints at the 192-bit security level.
2020
PKC
Privacy-Preserving Authenticated Key Exchange and the Case of IKEv2 📺
In this paper, we present a strong, formal, and general-purpose cryptographic model for privacy-preserving authenticated key exchange (PPAKE) protocols. PPAKE protocols are secure in the traditional AKE sense but additionally guarantee the confidentiality of the identities used in communication sessions. Our model has several useful and novel features, among others: it is a proper extension of classical AKE models, guarantees in a strong sense that the confidentiality of session keys is independent from the secrecy of the used identities, and it is the first to support what we call dynamic modes, where the responsibility of selecting the identities of the communication partners may vary over several protocol runs. We show the validity of our model by applying it to the cryptographic core of IPsec IKEv2 with signature-based authentication where the need for dynamic modes is practically well-motivated. In our analysis, we not only show that this protocol provides strong classical AKE security guarantees but also that the identities that are used by the parties remain hidden in successful protocol runs. Historically, the Internet Key Exchange (IKE) protocol was the first real-world AKE to incorporate privacy-preserving techniques. However, lately privacy-preserving techniques have gained renewed interest in the design process of important protocols like TLS 1.3 (with encrypted SNI) and NOISE. We believe that our new model can be a solid foundation to analyze these and other practical protocols with respect to their privacy guarantees, in particular, in the now so wide-spread scenario where multiple virtual servers are hosted on a single machine.
2020
PKC
Linearly-Homomorphic Signatures and Scalable Mix-Nets 📺
Anonymity is a primary ingredient for our digital life. Several tools have been designed to address it such as, for authentication, blind signatures, group signatures or anonymous credentials and, for confidentiality, randomizable encryption or mix-nets. When it comes to complex electronic voting schemes, random shuffling of authenticated ciphertexts with mix-nets is the only known tool. However, it requires huge and complex zero-knowledge proofs to guarantee the actual permutation of the initial ciphertexts in a privacy-preserving way. In this paper, we propose a new approach for proving correct shuffling of signed ElGamal ciphertexts: the mix-servers can simply randomize individual ballots, which means the ciphertexts, the signatures, and the verification keys, with an additional global proof of constant size, and the output will be publicly verifiable. The security proof is in the generic bilinear group model. The computational complexity for each mix-server is linear in the number of ballots. Verification is also linear in the number of ballots, but independent of the number of rounds of mixing. This leads to a new highly scalable technique. Our construction makes use of linearly-homomorphic signatures, with new features, that are of independent interest.
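The core mix-server operation described above, randomizing a ciphertext without changing its plaintext, can be illustrated with plain ElGamal (the paper additionally randomizes signatures and verification keys, which is not shown). The toy group parameters are ours and far too small for real use.

```python
import random

# Toy group: safe prime p = 2q + 1 with q prime; g = 4 generates the order-q
# subgroup. Never use such sizes in practice.
p, q, g = 2039, 1019, 4

sk = random.randrange(1, q)
pk = pow(g, sk, p)

def encrypt(m, pk):
    r = random.randrange(1, q)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def rerandomize(ct, pk):
    """Multiply in a fresh encryption of 1: same plaintext, fresh randomness,
    so the output ciphertext is unlinkable to the input."""
    c1, c2 = ct
    s = random.randrange(1, q)
    return (c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p)

def decrypt(ct, sk):
    c1, c2 = ct
    return c2 * pow(c1, p - 1 - sk, p) % p   # c1^(-sk) via Fermat

m = pow(g, 42, p)                            # message encoded in the subgroup
ct = encrypt(m, pk)
ct2 = rerandomize(ct, pk)
assert ct2 != ct and decrypt(ct2, sk) == m
```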
2020
PKC
Efficient Redactable Signature and Application to Anonymous Credentials 📺
Let us assume that Alice has received a constant-size signature on a set of messages $${m_i}_{i=1}^n$$ from some organization. Depending on the situation, Alice might need to disclose, prove relations about or hide some of these messages. Ideally, the complexity of the corresponding protocols should not depend on the hidden messages. In particular, if Alice wants to disclose only k messages, then the authenticity of the latter should be verifiable in at most O ( k ) operations. Many solutions were proposed over the past decades, but they only provide a partial answer to this problem. In particular, we note that they suffer either from the need to prove knowledge of the hidden elements or from the inability to prove that the latter satisfy some relations. In this paper, we propose a very efficient constant-size redactable signature scheme that addresses all the problems above. Signatures can indeed be redacted to remain valid only on a subset of k messages included in $${m_i}_{i=1}^n$$ . The resulting redacted signature consists of 4 elements and can be verified with essentially k exponentiations. Different shows of the same signature can moreover be made unlinkable leading to a very efficient anonymous credentials system.
2020
TCHES
A Hybrid-CPU-FPGA-based Solution to the Recovery of Sha256crypt-hashed Passwords 📺
This paper presents an accelerator design for the password recovery of sha256crypt based on hybrid CPU-FPGA devices. By applying the brute-force attack computation model proposed in this paper, we decompose the sha256crypt function into two types of operations, namely the data dispatching and the block transforming. The data dispatching operation generates message blocks and the block transforming operation transforms message blocks into digests. These two operations are efficiently accelerated by the customized data dispatch unit and the pipelined block transform unit, respectively. Difficulties of adopting the pipeline technique are addressed with the following techniques. The group scheduling is used to solve the data dependency that stalls the pipeline. The look-ahead execution eliminates the uncertainty of the execution path. The data path pruning and spatial-temporal multiplexing reduce the resource overhead of non-computing units. The proposed accelerator design is implemented and evaluated on the Xilinx Zynq-7000 XC7Z030-3 SoC. Our experimental results show that the proposed accelerator can improve energy efficiency by 2.54x over the state-of-the-art password recovery tool Hashcat running on an NVIDIA GTX1080Ti GPU. Compared with the pure FPGA-based implementation in John-the-Ripper, the proposed accelerator improves energy efficiency by 1.64x and improves resource efficiency by 1.69x.
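A CPU-only sketch of the brute-force computation model described above, with a "dispatch" stage that enumerates candidates and a "transform" stage that hashes them. It assumes the third-party passlib package for sha256crypt; the stage names mirror the paper's terminology, but the code and parameters are ours and the paper implements these stages in FPGA fabric.

```python
from itertools import product
from passlib.hash import sha256_crypt  # third-party: pip install passlib

TARGET = sha256_crypt.using(rounds=1000).hash("acb")  # stand-in "stolen" hash
ALPHABET = "abc"

def dispatch(max_len):
    """Data-dispatching stage: enumerate candidate passwords."""
    for n in range(1, max_len + 1):
        for tup in product(ALPHABET, repeat=n):
            yield "".join(tup)

def crack(target, max_len=4):
    """Block-transforming stage: hash each candidate, compare to the target."""
    for cand in dispatch(max_len):
        if sha256_crypt.verify(cand, target):
            return cand
    return None

print(crack(TARGET))  # -> "acb"
```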
2020
TCHES
On the spectral features of robust probing security 📺
In this work we provide a spectral formalization of non-interference in the presence of glitches. Our goal is to present new theoretical and practical tools to reason about robust-d-probing security. We show that the current understanding of extended probes lends itself to probes that participate, during gadget composition, in the creation of additional extended probes. In turn, this enables a natural extension of non-interference definitions into robust ones to build a new reasoning framework that can formally explain some semi-formal results that have already appeared in the past and be used to synthesize new robust-d-SNI gadgets.
2020
TCHES
Plaintext: A Missing Feature for Enhancing the Power of Deep Learning in Side-Channel Analysis? Breaking multiple layers of side-channel countermeasures 📺
Deep learning (DL) has proven to be very effective for image recognition tasks, with a large body of research on various model architectures for object classification. Straightforward application of DL to side-channel analysis (SCA) has already shown promising success, with experimentation on open-source variable-key datasets showing that secret keys can be revealed with hundreds of traces even in the presence of countermeasures. This paper aims to further improve the application of DL for SCA, by enhancing the power of DL when targeting the secret key of cryptographic algorithms protected with SCA countermeasures. We propose a new model, a CNN-based model with Plaintext feature extension (CNNP), together with multiple convolutional filter kernel sizes and structures with deeper and narrower neural networks, which has empirically proven its effectiveness by outperforming reference profiling attack methods such as template attacks (TAs), convolutional neural networks (CNNs) and multilayer perceptron (MLP) models. Our model generates state-of-the-art results when attacking the ASCAD variable-key database, which has a restricted number of training traces per key, recovering the key within 40 attack traces, compared with the hundreds of traces required by straightforward machine learning (ML) application. During the profiling stage an attacker needs no additional knowledge on the implementation, such as the masking scheme or random mask values, only the ability to record the power consumption or electromagnetic field traces, plaintext/ciphertext and the key. Additionally, no heuristic pre-processing is required in order to break the high-order masking countermeasures of the target implementation.
2020
TCHES
Rejection Sampling Schemes for Extracting Uniform Distribution from Biased PUFs 📺
This paper presents an efficient fuzzy extractor (FE) construction for secure cryptographic key generation from physically unclonable functions (PUFs). The proposed FE, named acceptance-or-rejection (AR)-based FE, utilizes a new debiasing scheme to extract a uniform distribution from a biased PUF response. The proposed debiasing scheme employs the principle of rejection sampling, and can extract a longer debiased bit string compared to those of conventional debiasing schemes. In addition, the proposed AR-based FE is extended to ternary PUF responses (i.e., ternary encoding of a PUF response). These responses can be derived according to cell-wise reliability of the PUF and are promising for extraction of stable and high-entropy responses from common PUFs. The performance of the AR-based FEs is evaluated through an experimental simulation of PUF-based key generation and compared with conventional FEs. We confirm that the proposed AR-based FE can achieve the highest efficiency in terms of PUF and nonvolatile memory (NVM) sizes for various PUF conditions among the conventional counterparts. More precisely, the AR-based FE can realize a 128-bit key generation with an up to 55% smaller PUF size or an up to 72% smaller NVM size than other conventional FEs. In addition, the ternary AR-based FE is up to 55% more efficient than the binary version, and can also achieve up to 63% higher efficiency than conventional counterparts. Furthermore, we show that the AR-based FE can be applied to PUFs with local biases (e.g., biases depending on cell location in SRAM PUFs), unlike all the conventional schemes, for which only global (or identical) biases are assumed.
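The principle of rejection-sampling debiasing can be seen in the classic von Neumann extractor, sketched below. The paper's AR scheme extracts substantially more bits from the same biased source; this toy version only demonstrates that rejecting some samples yields exactly uniform output from a biased one.

```python
import random

def von_neumann_debias(bits):
    """Examine non-overlapping bit pairs; emit the first bit of (0,1) or (1,0)
    and reject (0,0)/(1,1). For any fixed bias the two kept patterns are
    equally likely, so the output is uniform (at the cost of discarded bits)."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

biased = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]
unbiased = von_neumann_debias(biased)
print(len(unbiased), sum(unbiased) / len(unbiased))  # ~21000 bits, mean ~0.5
```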
2020
TCHES
Second-Order Masked Lookup Table Compression Scheme 📺
Masking by lookup table randomisation is a well-known technique used to achieve side-channel attack resistance for software implementations, particularly against DPA attacks. The randomised table technique for first- and second-order security requires about m·2^n bits of RAM to store an (n,m)-bit masked S-box lookup table. Table compression helps in reducing the amount of memory required, and this is useful for highly resource-constrained IoT devices. Recently, Vadnala (CT-RSA 2017) proposed a randomised table compression scheme for first- and second-order security in the probing leakage model. This scheme reduces the RAM memory required by about a factor of 2^l, where l is a compression parameter. Vivek (Indocrypt 2017) demonstrated an attack against the second-order scheme of Vadnala. Hence achieving table compression at second and higher orders is an open problem. In this work, we propose a second-order secure randomised table compression scheme which works for any (n,m)-bit S-box. Our proposal is a variant of Vadnala’s scheme that is not only secure but also significantly improves the time-memory trade-off. Specifically, we improve the online execution time by a factor of 2^(n−l). Our proposed scheme is proved 2-SNI secure in the probing leakage model. We have implemented our method for AES-128 on a 32-bit ARM Cortex processor. We are able to reduce the memory required to store a randomised S-box table for a second-order AES-128 implementation to 59 bytes.
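For context, an uncompressed first-order randomised table looks as follows; this is the generic textbook construction, not Vadnala's compressed scheme or the paper's second-order variant. The table is re-randomised with fresh input/output masks so that only masked indices and masked values are ever accessed.

```python
import secrets

SBOX = [(x * 7 + 3) % 16 for x in range(16)]   # placeholder 4-bit S-box, n = m = 4

def masked_table(r_in, r_out):
    """First-order randomised table T'[x] = S[x ^ r_in] ^ r_out. Storing it
    costs m * 2^n bits of RAM; compression trades this for online time."""
    return [SBOX[x ^ r_in] ^ r_out for x in range(16)]

x = 5                              # sensitive value
r_in = secrets.randbelow(16)
r_out = secrets.randbelow(16)
T = masked_table(r_in, r_out)

x_masked = x ^ r_in                # only the masked share is ever handled
y_masked = T[x_masked]             # = SBOX[x] ^ r_out
assert y_masked ^ r_out == SBOX[x]
```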
2020
TCHES
Improving the Performance of the Picnic Signature Scheme 📺
Picnic is a digital signature algorithm designed to provide security against attacks by quantum computers. The design uses only symmetric-key primitives, and is an efficient instantiation of the MPC-in-the-head paradigm. In this work, we explore the Picnic design in great detail. We investigate and benchmark different parameter choices and show that there exist better parameter choices than those in the current specification. We also present improvements to the MPC protocol that shorten signatures and reduce signing time. The proposed MPC changes tailor the protocol to the circuit of interest in Picnic, but may also be of independent interest. Taken together, these changes give a new instantiation of Picnic that signs messages 7.9 to 13.9 times faster, and verifies signatures 4.5 to 5.5 times faster than the existing “Picnic2” design, while having nearly the same signature sizes.
2020
TCHES
Faster Montgomery and double-add ladders for short Weierstrass curves 📺
The Montgomery ladder and Joye ladder are well-known algorithms for elliptic curve scalar multiplication with a regular structure. The Montgomery ladder is best known for its implementation on Montgomery curves, which requires 5M + 4S + 1m + 8A per scalar bit, and 6 field registers. Here (M, S, m, A) represent respectively field Multiplications, Squarings, multiplications by a curve constant, and Additions or subtractions. This ladder is also complete, meaning that it works on all input points and all scalars. Many protocols do not use Montgomery curves, but instead use prime-order curves in short Weierstrass form. These have historically been much slower, with ladders costing at least 14 multiplications or squarings per bit: 8M + 6S + 27A for the Montgomery ladder and 8M + 6S + 30A for the Joye ladder. In 2017, Kim et al. improved the Montgomery ladder to 8M + 4S + 12A + 1H per bit using 9 registers, where the H represents a halving. Hamburg simplified Kim et al.’s formulas to 8M + 4S + 8A + 1H per bit using 6 registers. Here we present improved formulas which compute the Montgomery ladder on short Weierstrass curves using 8M + 3S + 7A per bit, and requiring 6 registers. We also give formulas for the Joye ladder that use 9M + 3S + 7A per bit, requiring 5 registers. One of our new formulas supports very efficient 4-way vectorization. We also discuss curve invariants, exceptional points, side-channel protection and how to set up and finish these ladder operations. Finally, we show a novel technique to make these ladders complete when the curve order is not divisible by 2 or 3, at a modest increase in cost. A sample implementation of these techniques is given in the supplementary material, also posted at https://github.com/bitwiseshiftleft/ladder_formulas
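A textbook Montgomery ladder over a toy short Weierstrass curve, to show the regular one-add-one-double-per-bit structure that the per-bit formulas above optimise; the paper's improved formulas, register allocation, and completeness technique are not reproduced, and the curve below is our illustration-only choice.

```python
P_MOD, A, B = 97, 2, 3          # toy curve y^2 = x^3 + 2x + 3 over F_97

def add(P, Q):
    """Affine point addition; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD)
    lam %= P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def dbl(P):
    return add(P, P)

def ladder(k, P):
    """Montgomery ladder: exactly one add and one double per scalar bit, so
    the operation sequence is independent of the bits of k (R1 - R0 = P)."""
    R0, R1 = None, P
    for i in reversed(range(k.bit_length())):
        if (k >> i) & 1:
            R0, R1 = add(R0, R1), dbl(R1)
        else:
            R1, R0 = add(R0, R1), dbl(R0)
    return R0

# find some point on the curve and cross-check 5*G against repeated addition
G = next((x, y) for x in range(P_MOD) for y in range(P_MOD)
         if (y * y - (x ** 3 + A * x + B)) % P_MOD == 0)
naive = G
for _ in range(4):
    naive = add(naive, G)
assert ladder(5, G) == naive
```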
2020
TCHES
Modeling Soft Analytical Side-Channel Attacks from a Coding Theory Viewpoint 📺
One important open question in side-channel analysis is to find out whether all the leakage samples in an implementation can be exploited by an adversary, as suggested by masking security proofs. For attacks exploiting a divide-and-conquer strategy, the answer is negative: only the leakages corresponding to the first/last rounds of a block cipher can be exploited. Soft Analytical Side-Channel Attacks (SASCA) have been introduced as a powerful solution to mitigate this limitation. They represent the target implementation and its leakages as a code (similar to a Low Density Parity Check code) that is decoded thanks to belief propagation. Previous works have shown the low data complexities that SASCA can reach in practice. In this paper, we revisit these attacks by modeling them with a variation of the Random Probing Model used in masking security proofs, that we denote as the Local Random Probing Model (LRPM). Our study establishes interesting connections between this model and the erasure channel used in coding theory, leading to the following benefits. First, the LRPM allows bounding the security of concrete implementations against SASCA in a fast and intuitive manner. We use it in order to confirm that the leakage of any operation in a block cipher can be exploited, although the leakages of external operations dominate in known-plaintext/ciphertext attack scenarios. Second, we show that the LRPM is a tool of choice for the (nearly worst-case) analysis of masked implementations in the noisy leakage model, taking advantage of all the operations performed, and leading to new tradeoffs between their amount of randomness and physical noise level. Third, we show that it can considerably speed up the evaluation of other countermeasures such as shuffling.
2020
TCHES
RISQ-V: Tightly Coupled RISC-V Accelerators for Post-Quantum Cryptography 📺
Empowering electronic devices to support Post-Quantum Cryptography (PQC) is a challenging task. PQC introduces new mathematical elements and operations which are usually not easy to implement on standard processors. Especially for low cost and resource constraint devices, hardware acceleration is usually required. In addition, as the standardization process of PQC is still ongoing, a focus on maintaining flexibility is mandatory. To cope with such requirements, hardware/software co-design techniques have been recently used for developing complex and highly customized PQC solutions. However, while most of the previous works have developed loosely coupled PQC accelerators, the design of tightly coupled accelerators and Instruction Set Architecture (ISA) extensions for PQC have been barely explored. To this end, we present RISQ-V, an enhanced RISC-V architecture that integrates a set of powerful tightly coupled accelerators to speed up lattice-based PQC. RISQ-V efficiently reuses processor resources and reduces the amount of memory accesses. This significantly increases the performance while keeping the silicon area overhead low. We present three contributions. First, we propose a set of powerful hardware accelerators deeply integrated into the RISC-V pipeline. Second, we extended the RISC-V ISA with 29 new instructions to efficiently perform operations for lattice-based cryptography. Third, we implemented our RISQ-V in ASIC technology and on FPGA. We evaluated the performance of NewHope, Kyber, and Saber on RISQ-V. Compared to the pure software implementation on RISC-V, our co-design implementations show a speedup factor of up to 11.4 for NewHope, 9.6 for Kyber, and 2.7 for Saber. For the ASIC implementation, the energy consumption was reduced by factors of up to 9.5 for NewHope, 7.7 for Kyber, and 2.1 for Saber. The cell count of the CPU was increased by a factor of 1.6 compared to the original RISC-V design, which can be considered as a moderate increase for the achieved performance gain.
2020
TCHES
Minerva: The curse of ECDSA nonces. Systematic analysis of lattice attacks on noisy leakage of bit-length of ECDSA nonces 📺
Best Paper CHES 2020
We present our discovery of a group of side-channel vulnerabilities in implementations of the ECDSA signature algorithm in a widely used Atmel AT90SC FIPS 140-2 certified smartcard chip and five cryptographic libraries (libgcrypt, wolfSSL, MatrixSSL, SunEC/OpenJDK/Oracle JDK, Crypto++). Vulnerable implementations leak the bit-length of the scalar used in scalar multiplication via timing. Using the leaked bit-length, we mount a lattice attack on a 256-bit curve, after observing enough signing operations. We propose two new methods to recover the full private key requiring just 500 signatures for simulated leakage data, 1200 for real cryptographic library data, and 2100 for smartcard data. The number of signatures needed for a successful attack depends on the chosen method and its parameters as well as on the noise profile, influenced by the type of leakage and the computation platform used. We use the set of vulnerabilities reported in this paper, together with the recently published TPM-FAIL vulnerability [MSE+20], as a basis for real-world benchmark datasets to systematically compare our newly proposed methods and all previously published applicable lattice-based key recovery methods. The resulting exhaustive comparison highlights the methods’ sensitivity to their proper parametrization and demonstrates that our methods are more efficient in most cases. For the TPM-FAIL dataset, we decreased the number of required signatures from approximately 40 000 to a mere 900.
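A sketch of the textbook hidden-number-problem lattice that underlies this family of attacks: each signature with a short nonce contributes one relation k_i = u_i + t_i·x (mod n) with small k_i. The weighting below is one common choice, the toy sizes are ours, and the reduction step (LLL/BKZ, e.g. via fpylll) is omitted; the paper's improved methods differ from this baseline.

```python
import random

def hnp_basis(ts, us, n, b):
    """(m+2) x (m+2) integer basis; its unusually short vectors encode x."""
    m = len(ts)
    w = 2 ** (b + 1)                                  # weight on modular coords
    rows = [[0] * i + [n * w] + [0] * (m - 1 - i) + [0, 0] for i in range(m)]
    rows.append([t * w for t in ts] + [1, 0])         # this row carries x
    rows.append([u * w for u in us] + [0, n])         # homogenisation row
    return rows

# Toy instance: each signature yields k_i = u_i + t_i * x (mod n), k_i < 2^b,
# where t_i = r_i/s_i and u_i = h_i/s_i are public. Here we just simulate them.
n, b, m = 2 ** 31 - 1, 16, 8                          # illustration-only sizes
x = random.randrange(n)                               # the "private key"
ts = [random.randrange(n) for _ in range(m)]
ks = [random.randrange(2 ** b) for _ in range(m)]     # the short nonces
us = [(k - t * x) % n for k, t in zip(ks, ts)]

B = hnp_basis(ts, us, n, b)
# The vector (w*k_1, ..., w*k_m, x, n) lies in the lattice spanned by B and is
# short because every k_i < 2^b; reducing B with LLL/BKZ (not shown) recovers
# it, and its second-to-last coordinate is the key x.
assert all((k - t * x - u) % n == 0 for k, t, u in zip(ks, ts, us))
```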
2020
TCHES
DANA Universal Dataflow Analysis for Gate-Level Netlist Reverse Engineering 📺
Reverse engineering of integrated circuits, i.e., understanding the internals of Integrated Circuits (ICs), is required for many benign and malicious applications. Examples of the former are detection of patent infringements, hardware Trojans or Intellectual Property (IP) theft, as well as interface recovery and defect analysis, while malicious applications include IP theft and finding insertion points for hardware Trojans. However, regardless of the application, the reverse engineer initially starts with a large unstructured netlist, forming an incomprehensible sea of gates. This work presents DANA, a generic, technology-agnostic, and fully automated dataflow analysis methodology for flattened gate-level netlists. By analyzing the flow of data between individual Flip Flops (FFs), DANA recovers high-level registers. The key idea behind DANA is to combine independent metrics based on structural and control information with a powerful automated architecture. Notably, DANA works without any thresholds, scenario-dependent parameters, or other “magic” values that the user must choose. We evaluate DANA on nine modern hardware designs, ranging from cryptographic co-processors, over CPUs, to the OpenTitan, a state-of-the-art System-on-Chip (SoC), which is maintained by the lowRISC initiative with supporting industry partners like Google and Western Digital. Our results demonstrate almost perfect recovery of registers for all case studies, regardless of whether they were synthesized as FPGA or ASIC netlists. Furthermore, we explore two applications for dataflow analysis: we show that the raw output of DANA often already allows the identification of crucial components and high-level architecture features, and we demonstrate its applicability for detecting simple hardware Trojans. Hence, DANA can be applied universally as the first step when investigating unknown netlists and provides major guidance for human analysts by structuring and condensing the otherwise incomprehensible sea of gates. Our implementation of DANA and all synthesized netlists are available as open source on GitHub.
2020
TCHES
Strength in Numbers: Improving Generalization with Ensembles in Machine Learning-based Profiled Side-channel Analysis 📺
The adoption of deep neural networks for profiled side-channel attacks provides powerful options for leakage detection and key retrieval of secure products. When training a neural network for side-channel analysis, it is expected that the trained model can implement an approximation function that can detect leaking side-channel samples and, at the same time, be insensitive to noisy (or non-leaking) samples. This outlines a generalization situation where the model can identify the main representations learned from the training set in a separate test set. This paper discusses how output class probabilities represent a strong metric when conducting side-channel analysis. Further, we observe that these output probabilities are sensitive to small changes, like selecting specific test traces or the weight initialization for a neural network. Next, we discuss hyperparameter tuning, where one commonly uses only a single out of dozens of trained models, while each of those models will result in different output probabilities. We show how ensembles of machine learning models based on averaged class probabilities can improve generalization. Our results emphasize that ensembles increase a profiled side-channel attack’s performance and reduce the variance of results stemming from different hyperparameters, regardless of the selected dataset or leakage model.
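The ensembling rule the paper builds on, averaging the output class probabilities of several trained models before the usual log-likelihood key ranking, can be sketched as follows; the shapes, helper names, and toy data are ours.

```python
import numpy as np

def ensemble_probs(prob_sets):
    """Average per-trace class probabilities over several trained models;
    prob_sets[i] has shape (n_traces, n_classes)."""
    return np.mean(np.stack(prob_sets, axis=0), axis=0)

def key_ranking(probs, labels_for_key):
    """Standard SCA ranking: for each key candidate, sum the log-probability
    of the class that candidate predicts for every trace, then sort.
    labels_for_key[k][j] is the class implied by key k for trace j."""
    idx = np.arange(len(probs))
    scores = np.array([
        np.log(probs[idx, labels_for_key[k]] + 1e-36).sum()
        for k in range(len(labels_for_key))
    ])
    return np.argsort(-scores)             # best-ranked key candidate first

# toy usage: 3 "models", 100 traces, 9 classes (e.g. a Hamming-weight model)
rng = np.random.default_rng(0)
models = [rng.dirichlet(np.ones(9), size=100) for _ in range(3)]
labels = rng.integers(0, 9, size=(256, 100))
print(key_ranking(ensemble_probs(models), labels)[:5])
```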
2020
TCHES
Retrofitting Leakage Resilient Authenticated Encryption to Microcontrollers 📺
The security of Internet of Things (IoT) devices relies on fundamental concepts such as cryptographically protected firmware updates. In this context attackers usually have physical access to a device and therefore side-channel attacks have to be considered. This makes the protection of required cryptographic keys and implementations challenging, especially for commercial off-the-shelf (COTS) microcontrollers that typically have no hardware countermeasures. In this work, we demonstrate how unprotected hardware AES engines of COTS microcontrollers can be efficiently protected against side-channel attacks by constructing a leakage resilient pseudo random function (LR-PRF). Using this side-channel protected building block, we implement a leakage resilient authenticated encryption with associated data (AEAD) scheme that enables secured firmware updates. We use concepts from leakage resilience to retrofit side-channel protection on unprotected hardware AES engines by means of software-only modifications. The LR-PRF construction leverages frequent key changes and low data complexity together with key dependent noise from parallel hardware to protect against side-channel attacks. Contrary to most other protection mechanisms such as time-based hiding, no additional true randomness is required. Our concept relies on parallel S-boxes in the AES hardware implementation, a feature that is fortunately present in many microcontrollers as a measure to increase performance. In a case study, we implement the protected AEAD scheme for two popular ARM Cortex-M microcontrollers with differing parallelism. We evaluate the protection capabilities in realistic IoT attack scenarios, where non-invasive EM probes or power consumption measurements are employed by the attacker. We show that the concept provides the side-channel hardening that is required for the long-term security of IoT devices.
2020
TCHES
Remove Some Noise: On Pre-processing of Side-channel Measurements with Autoencoders 📺
In profiled side-channel analysis, deep learning-based techniques have proven to be very successful even when attacking targets protected with countermeasures. Still, there is no guarantee that deep learning attacks will always succeed. Various countermeasures make attacks significantly more complex, and such countermeasures can be further combined to make the attacks even more challenging. An intuitive solution to improve the performance of attacks would be to reduce the effect of countermeasures. This paper investigates whether we can consider certain types of hiding countermeasures as noise and then use a deep learning technique called the denoising autoencoder to remove that noise. We conduct a detailed analysis of six different types of noise and countermeasures, separately or combined, and show that the denoising autoencoder improves the attack performance significantly.
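A minimal denoising-autoencoder training setup in Keras: the model is trained to map noisy (protected) traces to clean reference traces. The paper evaluates convolutional variants; the tiny dense network, layer sizes, and random placeholder data below are our assumptions, only illustrating the noisy-in/clean-out objective.

```python
import numpy as np
from tensorflow import keras

# Placeholder data: `noisy` are traces with the hiding countermeasure, `clean`
# are reference traces of the same operations without it (shapes are ours).
n_traces, trace_len = 2000, 700
noisy = np.random.randn(n_traces, trace_len).astype("float32")
clean = np.random.randn(n_traces, trace_len).astype("float32")

# Small dense denoising autoencoder trained on (noisy in, clean out) pairs.
autoencoder = keras.Sequential([
    keras.Input(shape=(trace_len,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(64, activation="relu"),    # bottleneck
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(trace_len),                # linear reconstruction layer
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(noisy, clean, epochs=5, batch_size=128, validation_split=0.1)

denoised = autoencoder.predict(noisy)   # then run the usual profiled attack
```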
2020
TCHES
Unrolled Cryptography on Silicon: A Physical Security Analysis 📺
Cryptographic primitives with low-latency performance have gained momentum lately due to an increased demand for real-time applications. Block ciphers such as PRINCE enable data encryption (resp. decryption) within a single clock cycle at a moderately high operating frequency when implemented in a fully-unrolled fashion. Unsurprisingly, many typical environments for unrolled ciphers require protection against physical adversaries as well. Yet, recent works suggest that most common SCA countermeasures are hard to apply to low-latency circuits. Hardware masking, for example, requires register stages to offer resistance, thus adding delay and defeating the purpose of unrolling. On another note, it has been indicated that unrolled primitives without any additional means of protection offer an intrinsic resistance to SCA attacks due to their parallelism, asynchronicity and speed of execution. In this work, we take a closer look at the physical security properties provided by unrolled cryptographic IC implementations. We are able to confirm that the nature of unrolling indeed bears the potential to decrease the susceptibility of cipher implementations significantly when reset methods are applied. With respect to certain adversarial models, e.g., ciphertext-only access, an amazingly high level of protection can be achieved. While this seems to be a great result for cryptographic hardware engineers, there is an attack vector hidden in plain sight which still threatens the security of unrolled implementations remarkably – namely the static power consumption of CMOS-based circuits. We point out that essentially all reasons which make it hard to extract meaningful information from the dynamic behavior of unrolled primitives are not an issue when exploiting the static currents for key recovery. Our evaluation is based on real-silicon measurements of an unrolled PRINCE core in a custom 40nm ASIC. The presented results serve as a neat educational case study to demonstrate the broad differences between dynamic and static power information leakage in the light of technological advancement.
2020
TCHES
High-speed Instruction-set Coprocessor for Lattice-based Key Encapsulation Mechanism: Saber in Hardware 📺
In this paper, we present an instruction set coprocessor architecture for lattice-based cryptography and implement the module lattice-based post-quantum key encapsulation mechanism (KEM) Saber as a case study. To achieve fast computation time, the architecture is fully implemented in hardware, including CCA transformations. Since polynomial multiplication plays a performance-critical role in module and ideal lattice-based public-key cryptography, a parallel polynomial multiplier architecture is proposed that overcomes memory access bottlenecks and results in a highly parallel yet simple and easy-to-scale design. Such multipliers can compute a full multiplication in 256 cycles, but are designed to support arbitrary area/performance trade-offs. Besides optimizing polynomial multiplication, we make important design decisions and perform architectural optimizations to reduce the overall cycle counts as well as improve resource utilization. For the module dimension 3 (security comparable to AES-192), the coprocessor computes CCA key generation, encapsulation, and decapsulation in only 5,453, 6,618 and 8,034 cycles respectively, making it the fastest hardware implementation of Saber to our knowledge. On a Xilinx UltraScale+ XCZU9EG-2FFVB1156 FPGA, the entire instruction set coprocessor architecture runs at 250 MHz clock frequency and consumes 23,686 LUTs, 9,805 FFs, and 2 BRAM tiles (including 5,113 LUTs and 3,068 FFs for the Keccak core).
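For reference, the ring arithmetic such a multiplier implements is small enough to state in a few lines; the sketch below is plain schoolbook negacyclic multiplication in Saber's ring R_q = Z_q[x]/(x^256 + 1) with q = 2^13, not a model of the proposed parallel architecture.

```python
# Schoolbook negacyclic multiplication in Z_q[x]/(x^256 + 1), q = 2^13: the
# operation the 256-cycle hardware multiplier computes (functionally only).
import random

N, Q = 256, 1 << 13

def poly_mul(a, b):
    acc = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                acc[k] = (acc[k] + ai * bj) % Q
            else:                              # x^256 = -1: wrap with sign flip
                acc[k - N] = (acc[k - N] - ai * bj) % Q
    return acc

a = [random.randrange(Q) for _ in range(N)]
b = [random.randrange(Q) for _ in range(N)]
c = poly_mul(a, b)
```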
2020
TCHES
Strengthening Sequential Side-Channel Attacks Through Change Detection 📺
The sequential structure of some side-channel attacks makes them subject to error propagation, i.e. when an error occurs during the recovery of some part of a secret key, all the following guesses might as well be chosen randomly. We propose a methodology that strengthens sequential attacks by automatically identifying and correcting errors. The core ingredient of our methodology is a change-detection test that monitors the distribution of the distinguisher values used to reconstruct the secret key. Our methodology includes an error-correction procedure that can cope with both false positives of the change-detection test and inaccuracies of the estimated location of the wrong key guess. The proposed methodology is general and can be included in several attacks. As meaningful examples, we conduct two different side-channel attacks against RSA-2048: a horizontal power-analysis attack based on correlation and a vertical timing attack. Our experiments show that, in all the considered cases, strengthened attacks outperform their original counterparts and alternative solutions that are based on thresholds. In particular, strengthened attacks achieve high success rates even when the side-channel measurements are noisy or limited in number, without prohibitively increasing the computing time.
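To make the monitoring idea concrete, here is a hedged sketch of a one-sided CUSUM change-detection test over a sequence of distinguisher values; the paper's actual test and thresholds differ, and the baseline and threshold below are illustrative only.

```python
# One-sided CUSUM sketch: distinguisher values (e.g., correlations) collapse
# after a wrong key guess, and the cumulative statistic flags where.
import numpy as np

def cusum_change_point(scores, drift=0.0, threshold=5.0):
    """Return the index where the score distribution drops, or None."""
    mu = np.mean(scores[:10])        # baseline from early (assumed correct) guesses
    s = 0.0
    for i, x in enumerate(scores):
        s = max(0.0, s + (mu - x) - drift)   # grows when scores fall below baseline
        if s > threshold:
            return i                          # estimated location of the wrong guess
    return None

good = np.random.normal(0.8, 0.05, 40)   # scores while recovery is on track
bad = np.random.normal(0.1, 0.05, 40)    # scores after an error propagates
print(cusum_change_point(np.concatenate([good, bad])))   # ~ index 40-48
```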
2020
TCHES
Investigating Profiled Side-Channel Attacks Against the DES Key Schedule 📺
Recent publications describe profiled single trace side-channel attacks (SCAs) against the DES key-schedule of a “commercially available security controller”. They report a significant reduction of the average remaining entropy of cryptographic keys after the attack, with surprisingly large, key-dependent variations of attack results, and individual cases with remaining key entropies as low as a few bits. Unfortunately, they leave important questions unanswered: Are the reported wide distributions of results plausible - can this be explained? Are the results device-specific or more generally applicable to other devices? What is the actual impact on the security of 3-key triple DES? We systematically answer those and several other questions by analyzing two commercial security controllers and a general purpose microcontroller. We observe a significant overall reduction and, importantly, also observe a large key-dependent variation in single DES key security levels, i.e., 49.4 bit mean and 0.9% of keys < 40 bit (first investigated security controller; other results similar). We also observe a small fraction of keys with exceptionally low security levels that can be called weak keys. It is unclear whether a device’s side-channel security should be assessed based on such rare weak key outliers. We generalize results to other leakage models by attacking the hardware DES accelerator of a general purpose microcontroller exhibiting a different leakage model. A highly simplified leakage simulation also confirms the wide distribution and shows that security levels are predictable to some extent. Through extensive investigations we find that the actual weakness of keys mainly stems from the specific switching noise they cause. Based on our investigations we expect that widely distributed results and weak outliers should be expected for all profiled attacks against (insufficiently protected) key-schedules, regardless of the algorithm and specific implementation. Finally, we describe a sound approach to estimate actual 3-key triple-DES security levels from empirical single DES results and find that the impact on the security of 3-key triple-DES is limited, i.e., 96.1 bit mean and 0.24% of key-triples < 80 bit for the same security controller.
2020
TCHES
A Novel Evaluation Metric for Deep Learning-Based Side Channel Analysis and Its Extended Application to Imbalanced Data 📺
Since Kocher (CRYPTO’96) proposed the timing attack, side channel analysis (SCA) has shown great potential to break cryptosystems via physical leakage. Recently, deep learning techniques have been widely used in SCA and show equivalent and even better performance compared to traditional methods. However, it remains unknown why and when deep learning techniques are effective and efficient for SCA. Masure et al. (IACR TCHES 2020(1):348–375) illustrated that the deep learning paradigm is suitable for evaluating implementations against SCA from a worst-case scenario point of view, yet their work is limited to balanced data and a specific loss function. Besides, deep learning metrics are not consistent with side channel metrics. In most cases, they are deceptive in foreseeing the feasibility and complexity of mounting a successful attack, especially for imbalanced data. To mitigate the gap between deep learning metrics and side channel metrics, we propose a novel Cross Entropy Ratio (CER) metric to evaluate the performance of deep learning models for SCA. CER is closely related to the traditional side channel metrics Guessing Entropy (GE) and Success Rate (SR) and fits the deep learning scenario. Besides, we show that it works stably even when deep learning metrics such as accuracy become rather unreliable as the training data tends to be imbalanced. Moreover, estimating CER is as easy as estimating natural deep learning metrics, with low computational complexity. Furthermore, we adapt the CER metric to a new kind of loss function, namely the CER loss function, designed specifically for deep learning in the side channel scenario. In this way, we link the SCA objective directly to deep learning optimization. Our experiments on several datasets show that, for SCA with imbalanced data, the CER loss function outperforms the Cross Entropy loss function in various conditions.
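The following sketch computes a CER-style statistic as we read the definition: the model's cross-entropy on the true labels divided by its cross-entropy on label-shuffled copies of the set; the exact estimator in the paper may differ.

```python
# CER-style metric sketch: a ratio below 1 indicates the model extracts
# leakage, even when accuracy is uninformative on imbalanced labels.
import numpy as np

def cross_entropy(probs, labels):
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def cer(probs, labels, reps=10, seed=0):
    rng = np.random.default_rng(seed)
    ce = cross_entropy(probs, labels)
    ce_shuffled = np.mean([cross_entropy(probs, rng.permutation(labels))
                           for _ in range(reps)])
    return ce / ce_shuffled

g = np.random.default_rng(1)
labels = g.integers(0, 9, size=1000)          # e.g., Hamming-weight classes
probs = np.full((1000, 9), 1.0)
probs[np.arange(1000), labels] = 2.0          # a weakly informative model
probs /= probs.sum(axis=1, keepdims=True)
print(cer(probs, labels))                     # < 1: leakage is being exploited
```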
2020
TCHES
Splitting the Interpose PUF: A Novel Modeling Attack Strategy 📺
We demonstrate that the Interpose PUF proposed at CHES 2019, an Arbiter PUF-based design for so-called Strong Physical Unclonable Functions (PUFs), can be modeled by novel machine learning strategies up to very substantial sizes and complexities. Our attacks require, in the most difficult cases, considerable but realistic numbers of CRPs, while consuming only moderate computation times, ranging from a few seconds to a few days. The attacks build on a new divide-and-conquer approach that allows us to model the two building blocks of the Interpose PUF separately. For non-reliability-based Machine Learning (ML) attacks, this eventually leads to attack times on (k_up, k_down)-Interpose PUFs that are comparable to the ones against max{k_up, k_down}-XOR Arbiter PUFs, refuting the original claim that Interpose PUFs could provide security similar to (k_down + k_up/2)-XOR Arbiter PUFs (CHES 2019). On the technical side, our novel divide-and-conquer technique might also be useful in analyzing other designs where XOR Arbiter PUF challenge bits are unknown to the attacker.
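For context, the single-PUF modeling step that such divide-and-conquer attacks build on can be reproduced in a few lines: the classic additive-delay model of an Arbiter PUF is linear in a parity-transformed challenge, so logistic regression learns it from CRPs. This is a textbook sketch, not the paper's attack; scikit-learn and the simulation parameters are assumptions.

```python
# Additive-delay Arbiter PUF model + logistic-regression modeling attack.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_stages, n_crps = 64, 20000
rng = np.random.default_rng(1)
w = rng.normal(size=n_stages + 1)              # secret stage-delay differences

def features(challenges):
    """Parity transform: phi_i = prod_{j >= i} (1 - 2*c_j), plus a constant."""
    signs = 1 - 2 * challenges                 # 0/1 bits -> +1/-1
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]   # suffix products
    return np.hstack([phi, np.ones((len(challenges), 1))])

C = rng.integers(0, 2, size=(n_crps, n_stages))
X = features(C)
y = (X @ w > 0).astype(int)                    # simulated responses

model = LogisticRegression(max_iter=2000).fit(X, y)
print("model accuracy:", model.score(X, y))    # ~1.0: the PUF is fully learned
```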
2020
TCHES
CPAmap: On the Complexity of Secure FPGA Virtualization, Multi-Tenancy, and Physical Design 📺
With virtualized Field Programmable Gate Arrays (FPGAs) on the verge of being deployed to the cloud computing domain, there is a rising interest in resolving recently identified security issues. Those issues result from different trusted and untrusted entities sharing the FPGA fabric and the Power Distribution Network. Researchers were able to perform both side-channel and fault attacks between logically isolated designs on the same FPGA fabric, compromising the security of cryptographic modules and other critical implementations. Side-channel attacks specifically are enabled by the vast degree of freedom given to developers when making use of the basic FPGA resources. Both ring oscillators as well as long delay lines, implemented using low-level FPGA primitives, have been shown to provide sufficient data for simple or correlation-based power analysis attacks. In order to develop new or apply known countermeasures onto designs and implementations in a virtualized multi-tenant FPGA, we seek to fully understand the underlying mechanisms and dependencies of chip-internal side-channel attacks. Although the impact of process variation and other physical design parameters on side-channel vulnerability has been investigated in previous works, remote attacks between logically isolated partitions in multi-tenant FPGAs introduce new and unique challenges. Thus, we systematically analyze the impact of the physical mapping of both attacker and victim design on the success of correlation power analysis attacks on the Advanced Encryption Standard (AES). We report our findings on a Xilinx Zynq 7000-based platform, which show that the effect of global and local placement as well as routing and process variation on the success of side-channel attacks almost exceeds the impact of hiding countermeasures. This result reveals fundamental challenges in secure virtualization of FPGAs, which have been mostly ignored so far. Eventually, our results may also help vendors and hypervisors in developing zero-overhead side-channel countermeasures based on adequate global and local placement of isolated designs on a multi-tenant FPGA.
2020
TCHES
Revisiting a Methodology for Efficient CNN Architectures in Profiling Attacks 📺
This work provides a critical review of the paper by Zaid et al. titled “Methodology for Efficient CNN Architectures in Profiling attacks”, which was published in TCHES Volume 2020, Issue 1. That paper studies the design of CNN networks to perform side-channel analysis of multiple implementations of the AES for embedded devices. Based on the authors’ code and public data sets, we were able to cross-check their results and perform a thorough analysis. We correct multiple misconceptions by carefully inspecting different elements of the model architectures proposed by Zaid et al. First, by providing a better understanding of the internal workings of these models, we can trivially reduce their number of parameters on average by 52%, while maintaining similar performance. Second, we demonstrate that the convolutional filter’s size is not strictly related to the amount of misalignment in the traces. Third, we show that increasing the filter size and the number of convolutions actually improves the performance of a network. Our work demonstrates once again that reproducibility and review are important pillars of academic research. Therefore, we provide the reader with an online Python notebook which allows reproducing some of our experiments; additional example code is made available on GitHub.
2020
TCHES
JackHammer: Efficient Rowhammer on Heterogeneous FPGA-CPU Platforms 📺
After years of development, FPGAs are finally making an appearance on multi-tenant cloud servers. Heterogeneous FPGA-CPU microarchitectures require reassessment of common assumptions about isolation and security boundaries, as they introduce new attack vectors and vulnerabilities. In this work, we analyze the memory and cache subsystem and study Rowhammer and cache attacks enabled by two proposed heterogeneous FPGA-CPU platforms from Intel: the Arria 10 GX with an integrated FPGA-CPU platform, and the Arria 10 GX PAC expansion card which connects the FPGA to the CPU via the PCIe interface. We demonstrate JackHammer, a novel, efficient, and stealthy Rowhammer attack from the FPGA to the host’s main memory. Our results indicate that a malicious FPGA can hammer twice as fast as a typical Rowhammer attack from the CPU on the same system and cause around four times as many bit flips as the CPU attack. We demonstrate the efficacy of JackHammer from the FPGA through a realistic fault attack on the WolfSSL RSA signing implementation that reliably causes a fault after an average of fifty-eight RSA signatures, 25% faster than a CPU Rowhammer attack. In some scenarios our JackHammer attack produces faulty signatures more than three times more often and almost three times faster than a conventional CPU Rowhammer attack. Finally, we systematically analyze new cache attacks in these environments, following our demonstration of a cache covert channel across FPGA and CPU.
2020
TCHES
The Long and Winding Path to Secure Implementation of GlobalPlatform SCP10 📺
GlobalPlatform (GP) card specifications define rigorous security requirements for smart cards. Increasingly powerful cards within an open ecosystem of multiple players make asymmetric-key protocols necessary. In this paper, we analyze SCP10, the Secure Channel Protocol (SCP) that relies on RSA for key exchange and authentication. Our findings are twofold. First, we demonstrate several flaws in the design of SCP10. We discuss the scope of the identified flaws by presenting several attack scenarios in which a malicious attacker can recover all the messages protected by SCP10. We provide a full implementation of these attacks. For instance, an attacker can get the freshly generated session keys in less than three hours. Second, we propose a secure implementation of SCP10 and discuss how it can mitigate the discovered flaws. Finally, we measure the overhead incurred by the implemented countermeasures.
2020
TCHES
ISA Extensions for Finite Field Arithmetic: Accelerating Kyber and NewHope on RISC-V 📺
We present and evaluate a custom extension to the RISC-V instruction set for finite field arithmetic. The result serves as a very compact approach to software-hardware co-design of PQC implementations in the context of small embedded processors such as smartcards. The extension provides instructions that implement finite field operations with subsequent reduction of the result. As small finite fields are used in various PQC schemes, such instructions can provide a considerable speedup for an otherwise software-based implementation. Furthermore, we create a prototype implementation of the presented instructions for the extendable VexRiscv core, integrate the result into a chip design, and evaluate the design on two different FPGA platforms. The effectiveness of the extension is evaluated by using the instructions to optimize the Kyber and NewHope key-encapsulation schemes. To that end, we also present an optimized software implementation of the polynomial arithmetic underlying those schemes for the standard RISC-V instruction set, which serves as the basis for comparison. Both variants are tuned at the assembly level to optimally use the processor pipelines of contemporary RISC-V CPUs. The result shows a speedup of up to 85% for the polynomial arithmetic over the basic software implementation. Using the custom instructions drastically reduces the code and data size of the implementation without introducing runtime-performance penalties, at a small cost in circuit size. When used in the selected schemes, the custom instructions can replace a full general-purpose multiplier to achieve very compact implementations.
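To illustrate the kind of semantics a fused finite-field instruction can provide, here is a hedged Python model of a multiply-then-reduce operation for Kyber's q = 3329, using a branch-reduced Barrett reduction; the actual instructions, encodings, and reduction method in the paper's VexRiscv extension may differ.

```python
# Functional model of a fused 'multiply and reduce mod q' operation,
# q = 3329 (Kyber). Barrett constant and shift chosen for q^2 < 2^24.
Q = 3329
M = (1 << 26) // Q                  # precomputed Barrett constant

def barrett_reduce(x: int) -> int:
    t = (x * M) >> 26               # estimate of x // Q (off by at most 1)
    r = x - t * Q
    return r - Q if r >= Q else r   # single conditional correction

def mulmod(a: int, b: int) -> int:
    """What the fused field-multiply instruction would return."""
    return barrett_reduce(a * b)

assert all(mulmod(a, b) == (a * b) % Q
           for a in range(0, Q, 97) for b in range(0, Q, 89))
```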
2020
TCHES
Single-Trace Attacks on Keccak 📺
Since its selection as the winner of the SHA-3 competition, Keccak, with all its variants, has found a large number of applications. It is, for instance, a common building block in schemes submitted to NIST’s post-quantum cryptography project. In many of these applications, Keccak processes ephemeral secrets. In such a setting, side-channel adversaries are limited to a single observation, meaning that differential attacks are inherently prevented. Whether such a single trace of Keccak can already be sufficient for key recovery has, however, so far been unknown. In this paper, we change the above by presenting the first single-trace attack targeting Keccak. Our method is based on soft-analytical side-channel attacks and thus combines template matching with message passing in a graphical model of the attacked algorithm. As a straightforward model of Keccak does not yield satisfactory results, we describe several optimizations for the modeling and the message-passing algorithm. Their combination allows attaining high attack performance in terms of both success rate and computational runtime. We evaluate our attack assuming generic software (microcontroller) targets and thus use simulations in the generic noisy Hamming-weight leakage model. Hence, we assume relatively modest profiling capabilities of the adversary. Nonetheless, the attack can reliably recover secrets in a large number of evaluated scenarios at realistic noise levels. Consequently, we demonstrate the need for countermeasures even in settings where DPA is not a threat.
2020
TCHES
Parameterized Hardware Accelerators for Lattice-Based Cryptography and Their Application to the HW/SW Co-Design of qTESLA 📺
This paper presents a set of efficient and parameterized hardware accelerators that target post-quantum lattice-based cryptographic schemes, including a versatile cSHAKE core, a binary-search CDT-based Gaussian sampler, and a pipelined NTT-based polynomial multiplier, among others. Unlike much of prior work, the accelerators are fully open-sourced, are designed to be constant-time, and can be parameterized at compile-time to support different parameters without the need for re-writing the hardware implementation. These flexible, publicly-available accelerators are leveraged to demonstrate the first hardware-software co-design using RISC-V of the post-quantum lattice-based signature scheme qTESLA with provably secure parameters. In particular, this work demonstrates that NIST’s Round 2 level 1 and level 3 qTESLA variants achieve over a 40-100x speedup for key generation, about a 10x speedup for signing, and about a 16x speedup for verification, compared to the baseline RISC-V software-only implementation. For instance, this corresponds to execution in 7.7, 34.4, and 7.8 milliseconds for key generation, signing, and verification, respectively, for qTESLA’s level 1 parameter set on an Artix-7 FPGA, demonstrating the feasibility of the scheme for embedded applications.
2020
TCHES
Generic Side-channel attacks on CCA-secure lattice-based PKE and KEMs 📺
In this work, we demonstrate generic and practical EM side-channel assisted chosen-ciphertext attacks on multiple LWE/LWR-based Public Key Encryption (PKE) and Key Encapsulation Mechanism (KEM) schemes secure in the chosen-ciphertext model (IND-CCA security). We show that the EM side-channel information can be efficiently utilized to instantiate a plaintext-checking oracle, which provides binary information about the output of decryption, typically concealed within IND-CCA secure PKE/KEMs, thereby enabling our attacks. First, we identified EM-based side-channel vulnerabilities in the error-correcting codes (ECC), enabling us to distinguish based on the value/validity of decrypted codewords. We also identified similar vulnerabilities in the Fujisaki-Okamoto transform, which leaks information about decrypted messages and applies to schemes that do not use ECC. We subsequently exploit these vulnerabilities to demonstrate practical attacks applicable to six CCA-secure lattice-based PKE/KEMs competing in the second round of the NIST standardization process. We perform experimental validation of our attacks on implementations taken from the open-source pqm4 library, running on the ARM Cortex-M4 microcontroller. Our attacks lead to complete key recovery in a matter of minutes on all the targeted schemes, thus showing the effectiveness of our attack.
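A toy version of the plaintext-checking-oracle idea is easy to state: the attacker crafts ciphertexts for an LWE-style decryption m' = v - <s, u> mod q and observes only the decoded bit (which the paper extracts via the EM side channel). The decoder, parameters, and query strategy below are simplified stand-ins, not any specific NIST candidate.

```python
# Toy plaintext-checking oracle attack: binary decryption results alone
# reveal the small secret coefficients, one at a time.
import random

Q, N = 3329, 8
secret = [random.randrange(-2, 3) for _ in range(N)]   # small secret coefficients

def oracle(u, v):                                       # the side-channel oracle
    m = (v - sum(si * ui for si, ui in zip(secret, u))) % Q
    return 1 if Q // 4 <= m < 3 * Q // 4 else 0         # decoded message bit

def recover(i):
    def query(k, v):
        u = [0] * N
        u[i] = k
        return oracle(u, v)
    if query(Q // 3, Q // 2):              # survives a large multiplier => s_i = 0
        return 0
    mag = 1 if query(Q // 6, Q // 2) else 2         # medium multiplier gives |s_i|
    sign = 1 if query(Q // 8, 3 * Q // 4) else -1   # offset probe reveals the sign
    return sign * mag

assert [recover(i) for i in range(N)] == secret
```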
2020
TCHES
Cortex-M4 optimizations for {R,M} LWE schemes 📺
This paper proposes various optimizations for lattice-based key encapsulation mechanisms (KEMs) using the Number Theoretic Transform (NTT) on the popular ARM Cortex-M4 microcontroller. Improvements come in the form of faster code using more efficient modular reductions, optimized small-degree polynomial multiplications, and more aggressive layer merging in the NTT, but also in the form of reduced stack usage. We test our optimizations in software implementations of Kyber and NewHope, both round 2 candidates in the NIST post-quantum project, and also NewHope-Compact, a recently proposed variant of NewHope with smaller parameters. Our software is the first implementation of NewHope-Compact on the Cortex-M4 and shows speed improvements over previous high-speed implementations of Kyber and NewHope. Moreover, it gives a common framework to compare those schemes with the same level of optimization. Our results show that NewHope-Compact is the fastest scheme, followed by Kyber, and finally NewHope, which seems to suffer from its large modulus and error distribution for small dimensions.
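For orientation, the transform these optimizations target can be written compactly; below is a minimal recursive radix-2 NTT over Kyber's field Z_3329. Real Cortex-M4 code is iterative, merges butterfly layers, and uses lazy Montgomery/Barrett reductions instead, so this is only a functional sketch.

```python
# Minimal recursive Cooley-Tukey NTT over Z_3329 (q - 1 = 2^8 * 13, so
# 256-th roots of unity exist; 17 is a primitive 256-th root, as in Kyber).
Q, ZETA = 3329, 17

def ntt(a, w):
    n = len(a)
    if n == 1:
        return list(a)
    even, odd = ntt(a[0::2], w * w % Q), ntt(a[1::2], w * w % Q)
    out, x = [0] * n, 1
    for i in range(n // 2):                      # butterfly layer
        t = x * odd[i] % Q
        out[i] = (even[i] + t) % Q
        out[i + n // 2] = (even[i] - t) % Q
        x = x * w % Q
    return out

A = ntt(list(range(256)), ZETA)   # length-256 cyclic NTT of a test vector
```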
2020
TCHES
Understanding Screaming Channels: From a Detailed Analysis to Improved Attacks 📺
Recently, some wireless devices have been found vulnerable to a novel class of side-channel attacks, called Screaming Channels. These leaks can appear when sensitive signals from the processor are unintentionally broadcast by a radio transmitter placed on the same chip. Previous work focuses on identifying the root causes, and on mounting an attack at a distance considerably larger than the one achievable with conventional electromagnetic side channels, which was demonstrated in the low-noise environment of an anechoic chamber. However, a detailed understanding of the leak, attacks that take full advantage of the novel vector, and security evaluations in more practical scenarios are still missing. In this paper, we conduct a thorough experimental analysis of the peculiar properties of Screaming Channels. For example, we learn about the coexistence of intended and unintended data, the role of distance and other parameters on the strength of the leak, the distortion of the leak model, and the portability of the profiles. With such insights, we build better attacks. We profile a device connected via cable with 10000·500 traces. Then, 5 months later, we attack a different instance at 15 m in an office environment. We recover the AES-128 key with 5000·1000 traces and key enumeration up to 2^23. Leveraging spatial diversity, we mount some attacks in the presence of obstacles. As a first example of application to a real system, we show a proof-of-concept attack against the authentication method of Google Eddystone beacons. On the one hand, this work lowers the bar for more realistic attacks, highlighting the importance of the novel attack vector. On the other hand, it provides a broader security evaluation of the leaks, helping defenders and radio designers to evaluate risk and the need for countermeasures.
2020
TCHES
Fixslicing: A New GIFT Representation: Fast Constant-Time Implementations of GIFT and GIFT-COFB on ARM Cortex-M 📺
The GIFT family of lightweight block ciphers, published at CHES 2017, offers excellent hardware performance figures and has been used, in full or in part, in several candidates of the ongoing NIST lightweight cryptography competition. However, implementing GIFT in software seems complex and inefficient due to the bit permutation composing its linear layer (a feature shared with the PRESENT cipher). In this article, we exhibit a new non-trivial representation of the GIFT family of block ciphers over several rounds. This new representation, which we call fixslicing, allows extremely efficient software bitsliced implementations of GIFT, using only a few rotations, surprisingly placing GIFT as a very efficient candidate on microcontrollers. Our constant-time implementations show that, on ARM Cortex-M3, 128-bit data can be ciphered in only about 800 cycles for GIFT-64 and about 1300 cycles for GIFT-128 (assuming pre-computed round keys). In particular, this is much faster than the impressive PRESENT implementation published at CHES 2017 that requires 2116 cycles in the same setting, or the current best reported AES constant-time implementation that requires 1617 cycles. This work impacts GIFT, but also improves software implementations of all other cryptographic primitives directly based on it or strongly related to it.
2020
TCHES
From A to Z: Projective coordinates leakage in the wild 📺
At EUROCRYPT 2004, Naccache et al. showed that the projective coordinates representation of the resulting point of an elliptic curve scalar multiplication potentially allows recovering some bits of the scalar. However, this attack has received little attention from the scientific community, and the status of deployed mitigations to prevent it in widely adopted cryptography libraries is unknown. In this paper, we aim to fill this gap by analyzing several cryptography libraries in this context. To demonstrate the applicability of the attack, we use a side-channel attack to exploit this vulnerability within libgcrypt in the context of ECDSA. To the best of our knowledge, this is the first practical attack instance. It targets the insecure binary extended Euclidean algorithm implementation using a microarchitectural side-channel attack that allows recovering the projective representation of the output point of scalar multiplication during ECDSA signature generation. We captured 100k traces to estimate the number of traces an attacker would need to compromise the libgcrypt ECDSA implementation, resulting in fewer than 2k traces for the commonly used elliptic curve secp256r1, demonstrating the attack's feasibility. During exploitation, we found two additional vulnerabilities. However, we remark that the purpose of this paper is not merely to exploit a library but to provide an analysis of the projective coordinates vulnerability status in widely deployed open-source libraries, filling a gap between its original description in the academic literature and the adoption of countermeasures to thwart it in real-world applications.
2020
TCHES
Defeating State-of-the-Art White-Box Countermeasures with Advanced Gray-Box Attacks 📺
The goal of white-box cryptography is to protect secret keys embedded in cryptographic software deployed in an untrusted environment. In this article, we revisit state-of-the-art countermeasures employed in white-box cryptography and discuss possible ways to combine them. Then we analyze the different gray-box attack paths and study their performance in terms of required traces and computation time. Afterward, we propose a new paradigm for the gray-box attack against white-box cryptography, which exploits the data-dependency of the target implementation. We demonstrate that our approach provides substantial complexity improvements over the existing attacks. Finally, we showcase this new technique by breaking the three winning AES-128 white-box implementations from the WhibOx 2019 white-box cryptography competition.
2020
TCHES
High-Speed Masking for Polynomial Comparison in Lattice-based KEMs 📺
With the NIST post-quantum standardization competition entering the second round, the interest in practical implementation results of the remaining NIST candidates is steadily growing. Especially implementations on embedded devices are often not protected against side-channel attacks, such as differential power analysis. In this regard, the application of countermeasures against side-channel attacks to candidates of the NIST standardization process is still an understudied topic. Our work aims to contribute to the NIST competition by enabling a more realistic judgment of the overhead cost introduced by side-channel countermeasures that are applied to lattice-based KEMs achieving CCA security via the Fujisaki-Okamoto transform. We present a novel higher-order masking scheme that enables an efficient comparison of polynomials, a component that previous techniques based on arithmetic-to-Boolean conversions render extremely expensive in the masked case despite being generally inexpensive. Our approach has linear complexity in the number of shares, compared to the quadratic complexity of previous contributions, and it applies to lattice-based schemes with a prime modulus. It comes with a proof in the probing model and an efficient implementation on an ARM Cortex-M4F microcontroller, which was defined as a preferred evaluation platform for embedded implementations by NIST. Our algorithm can be executed in only 1.5-2.2 milliseconds on the target platform (depending on the masking order) and is therefore well suited even for lightweight applications. While in previous work practical side-channel experiments were conducted using only 5,000-100,000 power traces, we confirm the absence of first-order leakage in this work by collecting 1 million power traces and applying the t-test methodology.
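One simplified way to read the linear-complexity comparison idea (not the paper's exact algorithm, which comes with a probing-model proof): subtract the public polynomial from one share, compress all coefficients sharewise with a public random linear combination, and unmask only the resulting single field element, so the work per share is constant.

```python
# Sketch: masked equality test of a shared polynomial against a public one,
# touching each share once (linear in the number of shares). A false positive
# occurs only with probability about 1/Q for unequal polynomials.
import random

Q, N, SHARES = 3329, 8, 3

def share(poly):
    """Arithmetic masking: poly equals the sum of SHARES vectors mod Q."""
    sh = [[random.randrange(Q) for _ in range(N)] for _ in range(SHARES - 1)]
    last = [(c - sum(s[j] for s in sh)) % Q for j, c in enumerate(poly)]
    return sh + [last]

def masked_equals(shares, public):
    r = [random.randrange(1, Q) for _ in range(N)]       # public random weights
    shares = [list(s) for s in shares]
    shares[0] = [(a - b) % Q for a, b in zip(shares[0], public)]  # subtract public
    acc = [sum(rj * s[j] for j, rj in enumerate(r)) % Q for s in shares]
    return sum(acc) % Q == 0          # only one element is ever unmasked

p = [random.randrange(Q) for _ in range(N)]
assert masked_equals(share(p), p)     # equal polynomials always pass
```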
2020
TCHES
Protecting against Statistical Ineffective Fault Attacks 📺
Statistical Ineffective Fault Attacks (SIFA) pose a threat for many practical implementations of symmetric primitives. Countermeasures against both power analysis and fault attacks typically do not prevent straightforward SIFA attacks, which require only very limited knowledge about the concrete implementation. Therefore, the exploration of countermeasures against SIFA that do not rely on protocols or physical protection mechanisms is of great interest. In this paper, we describe different countermeasure strategies against SIFA. First, we introduce an abstraction layer between the algorithmic specification of a cipher and its implementation in hardware or software to study and describe resistance against SIFA. We then show that by basing the masked implementation on permutations as building blocks, we can build circuits that withstand single-fault SIFA and DPA attacks. We show how this approach can be applied to 3-bit, 4-bit, and 5-bit S-boxes and the AES S-box. Additionally, we present a strategy based on fine-grained fault detection suitable for protecting any circuit against SIFA attacks. Although this approach may lead to a higher implementation cost due to the fine-grained detection needed, it can be used to protect arbitrary circuits and can be generalized to cover multi-fault SIFA. For single-fault SIFA protection, our countermeasures only have a small computational overhead compared to a simple combination of masking and duplication.
2020
TCHES
Side-Channel Countermeasures’ Dissection and the Limits of Closed Source Security Evaluations 📺
We take advantage of a recently published open-source implementation of the AES, protected with a mix of countermeasures against side-channel attacks, to discuss both the challenges in protecting COTS devices against such attacks and the limitations of closed source security evaluations. The target implementation has been proposed by the French ANSSI (Agence Nationale de la Sécurité des Systèmes d’Information) to stimulate research on the design and evaluation of side-channel secure implementations. It combines additive and multiplicative secret sharings into an affine masking scheme that is additionally mixed with a shuffled execution. Its preliminary leakage assessment did not detect data dependencies with up to 100,000 measurements. We first exhibit the gap between such a preliminary leakage assessment and advanced attacks by demonstrating how a countermeasures’ dissection exploiting a mix of dimensionality reduction, multivariate information extraction and key enumeration can recover the full key with less than 2,000 measurements. We then discuss the relevance of open source evaluations to analyze such implementations efficiently, by pointing out that certain steps of the attack are hard to automate without implementation knowledge (even with machine learning tools), while performing them manually is straightforward. Our findings are not due to design flaws but stem from the general difficulty of preventing side-channel attacks in COTS devices with limited noise. We anticipate that high security on such devices requires significantly more shares.
2020
TCHES
A Fast and Accurate Guessing Entropy Estimation Algorithm for Full-key Recovery 📺
Guessing entropy (GE) is a widely adopted metric that measures the average computational cost needed for a successful side-channel analysis (SCA). However, with current estimation methods, where the evaluator has to average the correct key rank over many independent side-channel leakage measurement sets, full-key GE estimation is impractical due to its prohibitive computing requirement. A recent estimation method based on posterior probabilities, although scalable, is not accurate. We propose a new guessing entropy estimation algorithm (GEEA) based on theoretical distributions of the ranking score vectors. By discovering the relationship of GE with pairwise success rates and utilizing it, GEEA uses a sum of many univariate Gaussian probabilities instead of multivariate Gaussian probabilities, significantly improving the computation efficiency. We show that GEEA is more accurate and efficient than all current GE estimations. To the best of our knowledge, it is the only practical full-key GE evaluation on the experimental data sets the evaluator has access to. Moreover, it can accurately predict the GE for larger sizes than the experimental data sets, providing a comprehensive security evaluation.
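The pairwise idea can be sketched briefly, under our reading of it: if the difference between each wrong key's ranking score and the correct key's score is approximately Gaussian, the expected rank is one plus a sum of univariate Gaussian tail probabilities. The toy check below compares this against an empirical rank; scipy and the simulated score statistics are assumptions.

```python
# GE from pairwise success rates: 1 + sum over wrong keys of the probability
# that the wrong key outranks the correct one (univariate Gaussians only).
import numpy as np
from scipy.stats import norm

def ge_estimate(mu_diff, sigma_diff):
    """mu_diff[k], sigma_diff[k]: mean/std of score(k) - score(correct)."""
    return 1.0 + np.sum(norm.cdf(mu_diff / sigma_diff))

rng = np.random.default_rng(0)
mu = rng.normal(-3.0, 1.0, 255)      # wrong keys score lower on average
sigma = np.full(255, 1.0)
emp = 1 + np.mean([(rng.normal(mu, sigma) > 0).sum() for _ in range(2000)])
print(ge_estimate(mu, sigma), emp)   # the two estimates agree closely
```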
2020
TCHES
Highly Efficient Architecture of NewHope-NIST on FPGA using Low-Complexity NTT/INTT 📺
NewHope-NIST is a promising ring learning with errors (RLWE)-based post-quantum cryptography (PQC) scheme for key encapsulation mechanisms. Its performance on the field-programmable gate array (FPGA) affects the applicability of NewHope-NIST. In RLWE-based PQC algorithms, the number theoretic transform (NTT) is one of the most time-consuming operations. In this paper, low-complexity NTT and inverse NTT (INTT) are used to implement a highly efficient NewHope-NIST on FPGA. First, both the pre-processing of the NTT and the post-processing of the INTT are merged into the fast Fourier transform (FFT) algorithm, which saves N and 2N modular multiplications for the N-point NTT and INTT, respectively. Second, a compact butterfly unit and an efficient modular reduction on the modulus 12289 are proposed for the low-complexity NTT/INTT architecture, which achieves an improvement of approximately 3× in the area-time product (ATP) compared with the results of state-of-the-art designs. Finally, a highly efficient architecture with doubled bandwidth and timing hiding for NewHope-NIST is presented. The implementation results on an FPGA show that our design is at least 2.5× faster and has a 4.9× smaller ATP compared with the results of state-of-the-art designs of NewHope-NIST on similar platforms.
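The special shape of the modulus is what makes a cheap reduction possible; the sketch below shows the well-known K-RED style identity for q = 12289 (q = 3·2^12 + 1, so 3·2^12 ≡ -1 mod q), where the output is congruent to 3x and the factor 3 is tracked and cancelled elsewhere. Whether the paper's hardware unit uses exactly this trick is an assumption; the sketch only illustrates why 12289 admits shift-and-add reduction.

```python
# Reduction exploiting q = 3 * 2^12 + 1: split x = a*2^12 + b, then
# 3*x = 3*a*2^12 + 3*b = -a + 3*b (mod q). One shift, one mask, tiny multiply.
Q = 12289

def kred(x: int) -> int:
    a, b = x >> 12, x & 0xFFF      # x = a*2^12 + b
    return 3 * b - a               # congruent to 3*x mod Q (scaled result)

assert all((kred(x) - 3 * x) % Q == 0 for x in range(0, 1 << 24, 12345))
```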
2020
TCHES
FENL: an ISE to mitigate analogue micro-architectural leakage 📺
Ge et al. [GYH18] propose the augmented ISA (or aISA), a central tenet of which is the selective exposure of micro-architectural resources via a less opaque abstraction than normal. The aISA proposal is motivated by the need for control over such resources, for example to implement robust countermeasures against microarchitectural attacks. In this paper, we apply an aISA-style approach to challenges stemming from analogue micro-architectural leakage; examples include power-based Hamming weight and distance leakage from relatively fine-grained resources (e.g., pipeline registers), which are not exposed in, and so cannot be reliably controlled via, a normal ISA. Specifically, we design, implement, and evaluate an ISE named FENL: the ISE acts as a fence for leakage, preventing interaction between, and hence leakage from, instructions before and after it in program order. We demonstrate that implementing and using FENL incurs relatively low overhead, and that it represents an effective tool for systematically localising and reducing leakage.
2020
TCHES
Dismantling DST80-based Immobiliser Systems 📺
Car manufacturers deploy vehicle immobiliser systems in order to prevent car theft. However, in many cases the underlying cryptographic primitives used to authenticate a transponder are proprietary in nature and thus not open to public scrutiny. In this paper we publish the proprietary Texas Instruments DST80 cipher used in immobilisers of several manufacturers. Additionally, we expose serious flaws in the immobiliser systems of major car manufacturers such as Toyota, Kia, Hyundai and Tesla. Specifically, by voltage glitching the firmware protection mechanisms of the microcontroller, we extracted the firmware from several immobiliser ECUs and reverse engineered the key diversification schemes employed within. We discovered that Kia and Hyundai immobiliser keys have only three bytes of entropy and that Toyota relies only on publicly readable information such as the transponder serial number and three constants to generate cryptographic keys. Furthermore, we present several practical attacks which can lead to recovering the full 80-bit cryptographic key in a matter of seconds, or to permanently disabling the transponder. Finally, even without key management or configuration issues, we demonstrate how an attacker can recover the cryptographic key using a profiled side-channel attack. We target the key loading procedure and investigate the practical applicability in the context of portability. Our work once again highlights the issues automotive vendors face in implementing cryptography securely.
2020
TCHES
Efficient and Private Computations with Code-Based Masking 📺
Code-based masking is a very general type of masking scheme that covers Boolean masking, inner product masking, direct sum masking, and so on. The merits of the generalization are twofold. Firstly, the higher algebraic complexity of the sharing function decreases the information leakage in “low noise conditions” and may increase the “statistical security order” of an implementation (with linear leakages). Secondly, the underlying error-correction codes can offer improved fault resistance for the encoded variables. Nevertheless, this higher algebraic complexity also implies additional challenges. On the one hand, a generic multiplication algorithm applicable to any linear code is still unknown. On the other hand, masking schemes with higher algebraic complexity usually come with implementation overheads, as for example witnessed by inner-product masking. In this paper, we contribute to these challenges in two directions. Firstly, we propose a generic algorithm that allows us (to the best of our knowledge for the first time) to compute on data shared with linear codes. Secondly, we introduce a new amortization technique that can significantly mitigate the implementation overheads of code-based masking, and illustrate this claim with a case study. Precisely, we show that, although performing every single code-based masked operation is relatively complex, processing multiple secrets in parallel leads to much better performances. This property enables code-based masked implementations of the AES to compete with the state-of-the-art in randomness complexity. Since our masked operations can be instantiated with various linear codes, we hope that these investigations open new avenues for the study of code-based masking schemes, by specializing the codes for improved performances, better side-channel security or improved fault tolerance.
2020
TCHES
Persistent Fault Attack in Practice 📺
Persistent fault analysis (PFA) is a novel fault analysis technique proposed at CHES 2018 and demonstrated with rowhammer-based fault injections. However, whether such analysis can be applied to the traditional fault attack scenario, together with its difficulty in practice, has not been carefully investigated. In this paper, for the first time, a persistent fault attack is conducted on an unprotected AES implemented on an ATmega163L microcontroller. Several critical challenges are solved with our new improvements, including (1) how to decide whether the fault is injected in the S-box; (2) how to use maximum likelihood estimation to pursue the minimum number of ciphertexts; (3) how to utilize the unknown fault in the S-box to extract the key. Our experiments show that, to break AES with physical laser injections despite all these challenges, the minimum and average numbers of required ciphertexts are 926 and 1641, respectively. This is a reduction of about 38% and 28% compared to the 1493 and 2273 ciphertexts required in previous work, where both the fault value and location have to be known. Furthermore, our analysis is extended to the PRESENT cipher. By applying persistent fault analysis to the penultimate round, the full 80-bit PRESENT key can be recovered. Eventually, an experimental validation is performed to confirm the accuracy of our attack with more insights. This paper solves the challenges in most aspects of practice and also demonstrates the feasibility and universality of PFA on SPN block ciphers.
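The core statistical observation behind PFA is compact enough to simulate: with one S-box entry persistently corrupted, one output value can never occur, so one ciphertext byte value never appears, and its absence reveals the key byte. The toy below assumes the original value of the faulted entry is known; the paper additionally handles unknown fault values and locations via maximum likelihood estimation.

```python
# Toy PFA on a last-round lookup c = S[state] ^ k with a persistent S-box
# fault: the missing ciphertext value equals v ^ k, where v is the output
# value the fault erased. (Succeeds with overwhelming probability once all
# other 255 values have been observed.)
import random

sbox = list(range(256)); random.shuffle(sbox)     # stand-in S-box (a permutation)
k = 0x5A                                          # last-round key byte
v = sbox[0x3C]                                    # output value erased by the fault
sbox_faulty = list(sbox); sbox_faulty[0x3C] = sbox[0x00]   # persistent corruption

seen = set()
for _ in range(5000):                             # collect faulty ciphertext bytes
    state = random.randrange(256)
    seen.add(sbox_faulty[state] ^ k)

missing = (set(range(256)) - seen).pop()          # the value that never appears
assert missing ^ v == k                           # key byte recovered
```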
2020
TCHES
When one vulnerable primitive turns viral: Novel single-trace attacks on ECDSA and RSA 📺
Microarchitecture-based side-channel attacks are common threats nowadays. Intel SGX technology provides strong isolation from an adversarial OS; however, it does not guarantee protection against side-channel attacks. In this paper, we analyze the security of the mbedTLS binary GCD algorithm, an implementation that offers interesting challenges when compared, for example, with OpenSSL, due to the usage of very tight loops in the former. Using practical experiments, we demonstrate that the mbedTLS binary GCD implementation is vulnerable to side-channel analysis using the SGX-Step framework against mbedTLS-based SGX enclaves. We analyze the security of some use cases of this algorithm in this library, resulting in the discovery of a new vulnerability in the ECDSA code path that allows a single-trace attack against this implementation. This vulnerability is interesting for three reasons: it resides in the implementation of a countermeasure, which makes it more dangerous due to the false sense of security the countermeasure currently offers; it reduces mbedTLS ECDSA security to an integer factorization problem; and an unexpected GCD call inside the ECDSA code path compromises the countermeasure. We also cover an orthogonal use case, this time inside the mbedTLS RSA code path during the computation of a CRT parameter when loading a private key. The attack also exploits the binary GCD implementation threat, showing how a single vulnerable primitive leads to multiple vulnerabilities. We demonstrate both security threats with end-to-end attacks using 1000 trials each, showing that in both cases single-trace attacks can be achieved with success rates very close to 100%.
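For readers unfamiliar with the target, here is the classic binary GCD with comments marking the secret-dependent control flow that single-stepping frameworks like SGX-Step can observe; this is a generic textbook version, not the mbedTLS code.

```python
# Binary GCD: every branch and shift count below depends on the operands,
# so observing the control-flow sequence of one execution leaks them.
def binary_gcd(u: int, v: int) -> int:
    shift = 0
    while u % 2 == 0 and v % 2 == 0:
        u, v, shift = u >> 1, v >> 1, shift + 1
    while v:
        while u % 2 == 0:        # leak: number of trailing zero bits of u
            u >>= 1
        while v % 2 == 0:        # leak: number of trailing zero bits of v
            v >>= 1
        if u > v:                # leak: comparison outcome (branch direction)
            u, v = v, u
        v -= u                   # an observer reconstructs u, v step by step
    return u << shift

assert binary_gcd(240, 46) == 2
```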
2020
TCHES
Time-memory trade-off in Toom-Cook multiplication: an application to module-lattice based cryptography 📺
Since the introduction of the ring learning with errors problem, the number theoretic transform (NTT) based polynomial multiplication algorithm has been studied extensively. Due to its faster quasilinear time complexity, it has been the preferred choice of cryptographers to realize ring learning with errors cryptographic schemes. Compared to the NTT, Toom-Cook or Karatsuba based polynomial multiplication algorithms, though known for a long time, still have a fledgling presence in the context of post-quantum cryptography. In this work, we observe that the pre- and post-processing steps in Toom-Cook based multiplications can be expressed as linear transformations. Based on this observation, we propose two novel techniques that can increase the efficiency of Toom-Cook based polynomial multiplications. Evaluation is reduced by a factor of 2 (we call this method precomputation), and interpolation is reduced from quadratic to linear complexity (we call this method lazy interpolation). As a practical application, we applied our algorithms to the Saber post-quantum key encapsulation mechanism. We discuss in detail the various implementation aspects of applying our algorithms to Saber. We show that our algorithm can improve the efficiency of the computationally costly matrix-vector multiplication by 12-37% compared to previous methods on their respective platforms. Secondly, we propose different methods to reduce the memory footprint of Saber for Cortex-M4 microcontrollers. Our implementation shows a 2.6-5.7 KB reduction in memory usage with respect to the smallest implementation in the literature.
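The underlying observation is easy to verify numerically: Toom-3 evaluation is a constant matrix acting on the limb vector, so evaluating a fixed operand once and caching the result is sound (and interpolation is likewise a linear map). The miniature check below uses numpy and made-up sizes; it is not Saber's actual Toom-Cook-4 instance.

```python
# Toom-3 evaluation as a linear transformation: pointwise products of the
# evaluated operands match the product polynomial evaluated at the same
# points (0, 1, -1, 2, and infinity), which interpolation then inverts.
import numpy as np

E = np.array([[1, 0, 0],      # rows are [1, p, p^2] for p = 0, 1, -1, 2
              [1, 1, 1],
              [1, -1, 1],
              [1, 2, 4],
              [0, 0, 1]])     # the "point at infinity" picks the top limb

rng = np.random.default_rng(0)
a, b = rng.integers(1, 100, 3), rng.integers(1, 100, 3)   # 3-limb operands
ea, eb = E @ a, E @ b            # evaluation = one matrix-vector product each
point_products = ea * eb         # pointwise multiplication

c = np.polynomial.polynomial.polymul(a, b)                # product, 5 coeffs
E5 = np.array([[p**j for j in range(5)] for p in (0, 1, -1, 2)])
assert np.allclose(point_products[:4], E5 @ c)
assert point_products[4] == c[4]                          # leading coefficient
```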
2020
TCHES
A Compact and Scalable Hardware/Software Co-design of SIKE 📺
We present efficient and compact hardware/software co-design implementations of the Supersingular Isogeny Key Encapsulation (SIKE) protocol on field-programmable gate arrays (FPGAs). In order to be better equipped for different post-quantum scenarios, our architectures were designed to feature high flexibility by covering all the currently available parameter sets and by supporting primes up to 1016 bits. In particular, any of the current SIKE parameters equivalent to the post-quantum security of AES-128/192/256 and SHA3-256 can be selected and run on-the-fly. This security scalability property, together with the small footprint and efficiency of our architectures, makes them ideal for embedded applications in a post-quantum world. In addition, the proposed implementations exhibit regular, constant-time execution, which provides protection against timing and simple side-channel attacks. Our results demonstrate that supersingular isogeny-based primitives such as SIDH and SIKE can indeed be deployed for embedded applications featuring competitive performance. For example, our smallest architecture based on a 128-bit MAC unit takes only 3415 slices, 21 BRAMs and 57 DSPs on a Virtex 7 690T and can perform key generation, encapsulation and decapsulation in 14.4, 24.4 and 26.0 milliseconds for SIKEp434 and in 52.3, 86.4 and 93.2 milliseconds for SIKEp751, respectively.
2020
TCHES
FEDS: Comprehensive Fault Attack Exploitability Detection for Software Implementations of Block Ciphers 📺
Fault injection attacks are one of the most powerful forms of cryptanalytic attacks on ciphers. A single, precisely injected fault during the execution of a cipher like the AES can completely reveal the key within a few milliseconds. Software implementations of ciphers, therefore, need to be thoroughly evaluated for such attacks. In recent years, automated tools have been developed to perform these evaluations. These tools work either on the cipher algorithm or on its implementations. Tools that work at the algorithm level can provide a comprehensive assessment of fault attack vulnerability for different fault attacks and with different fault models. Their application is, however, restricted because every realization of the cipher has unique vulnerabilities. On the other hand, tools that work on cipher implementations have a much wider application but are often restricted by the range of fault attacks and the number of fault models they can evaluate. In this paper, we propose a framework, called FEDS, that uses a combination of compiler techniques and model checking to merge the advantages of both algorithmic-level tools and implementation-level tools. Like the algorithmic-level tools, FEDS can provide a comprehensive assessment of fault attack exploitability considering a wide range of fault attacks and fault models. Like implementation-level tools, FEDS works with implementations and therefore has wide application. We demonstrate the versatility of FEDS by evaluating seven different implementations of AES (including a bitsliced implementation) and implementations of CLEFIA and CAMELLIA for Differential Fault Attacks. The framework automatically identifies exploitable instructions in all implementations. Further, we present an application of FEDS in a Fault Attack Aware Compiler, which can automatically identify and protect exploitable regions of the code. We demonstrate that the compiler can generate significantly more efficient code than a naïvely protected equivalent, while maintaining the same level of protection.
2020
TCHES
Low-Latency Hardware Masking with Application to AES 📺
During the past two decades there has been a great deal of research published on masked hardware implementations of AES and other cryptographic primitives. Unfortunately, many hardware masking techniques can lead to increased latency compared to unprotected circuits for algorithms such as AES, due to the high-degree of nonlinear functions in their designs. In this paper, we present a hardware masking technique which does not increase the latency for such algorithms. It is based on the LUT-based Masked Dual-Rail with Pre-charge Logic (LMDPL) technique presented at CHES 2014. First, we show 1-glitch extended strong noninterference of a nonlinear LMDPL gadget under the 1-glitch extended probing model. We then use this knowledge to design an AES implementation which computes a full AES-128 operation in 10 cycles and a full AES-256 operation in 14 cycles. We perform practical side-channel analysis of our implementation using the Test Vector Leakage Assessment (TVLA) methodology and analyze univariate as well as bivariate t-statistics to demonstrate its DPA resistance level.
2020
TCHES
On the Security Goals of White-Box Cryptography 📺
We discuss existing and new security notions for white-box cryptography and comment on their suitability for Digital Rights Management and Mobile Payment Applications, the two prevalent use cases of white-box cryptography. In particular, we put forward indistinguishability for white-box cryptography with hardware-binding (IND-WHW) as a new security notion that we deem central. We also discuss the security property of application-binding and explain the issues faced when defining it as a formal security notion. Based on our proposed notion for hardware-binding, we describe a possible white-box competition setup which assesses white-box implementations w.r.t. hardware-binding. Our proposed competition setup allows us to capture hardware-binding in a practically meaningful way. While some symmetric encryption schemes have been proven to admit plain white-box implementations, we show that not all secure symmetric encryption schemes are white-boxeable in the plain white-box attack scenario, i.e., without hardware-binding. Thus, even strong assumptions such as indistinguishability obfuscation cannot be used to provide secure white-box implementations for arbitrary ciphers. Perhaps surprisingly, our impossibility result does not carry over to the hardware-bound scenario. In particular, Alpirez Bock, Brzuska, Fischlin, Janson and Michiels (ePrint 2019/1014) proved a rather general feasibility result in the hardware-bound model. Equally important, the apparent theoretical distinction between the plain white-box model and the hardware-bound white-box model also translates into practically reduced attack capabilities, as we explain in this paper.
2019
CHES
RISCV and Security: how, when and why? 📺
Invited talk
In this talk we will provide an overview of the current activities of the RISCV Foundation, including the creation of a Security Standing Committee about a year ago which is in charge of assessing new threats and opportunities in security in the RISCV world; we will discuss progress being made by the security-related task groups. The first one is working on specifying extensions of the base instruction set architecture (ISA) that will enable high-performance and high security cryptographic operations (AES, SHA-2, Public Key Cryptography); the second one is looking at creating extensions and hardware/software specifications to enable a trusted execution environment built on top of a RISCV processor; we will also provide details on the activities of the Security Standing Committee itself, and what some of the plans are to tackle the newest microarchitectural cache timing side-channel attacks such as Spectre, Meltdown, Foreshadow, etc. We will review some additional work on secure RISCV and existing security extension initiatives by academia around the world. Finally, we will describe some approaches of how a side-channel and DPA-resistant RISCV CPU could be built and elaborate on the research we have been focused on in the past months.
2019
CHES
Developing High-Performance Mechanically-Verified Cryptographic Code 📺
Invited talk
Project Everest is constructing a high-performance, standards-compliant, formally verified implementation of the HTTPS ecosystem, including TLS, X.509, and the core cryptographic algorithms. This talk will present an overview of how we verify our implementations are correct, cryptographically secure, and resilient to basic side channels. We will focus on our EverCrypt cryptographic provider, a comprehensive collection of verified, high-performance cryptographic functionalities available via a carefully designed API. The API provably supports agility (choosing between multiple algorithms for the same functionality) and multiplexing (choosing between multiple implementations of the same algorithm). Through a combination of abstraction and zero-cost generic programming, we show how agility can simplify verification without sacrificing performance, and we demonstrate how C and assembly can be composed and verified against shared specifications. The result is several functionalities whose performance matches or exceeds the best unverified implementations. Altogether, EverCrypt consists of over 100K verified lines of specs, code, and proofs, and it produces over 45K lines of C and assembly code.
2019
CRYPTO
Cryptanalysis of OCB2: Attacks on Authenticity and Confidentiality 📺
Best paper
We present practical attacks on OCB2. This mode of operation of a blockcipher was designed with the aim to provide particularly efficient and provably-secure authenticated encryption services, and since its proposal about 15 years ago it belongs to the top performers in this realm. OCB2 was included in an ISO standard in 2009. An internal building block of OCB2 is the tweakable blockcipher obtained by operating a regular blockcipher in $$\text{XEX}^*$$ mode. The latter provides security only when evaluated in accordance with certain technical restrictions that, as we note, are not always respected by OCB2. This leads to devastating attacks against OCB2’s security promises: We develop a range of very practical attacks that, amongst others, demonstrate universal forgeries and full plaintext recovery. We complete our report with proposals for (provably) repairing OCB2. To our understanding, as a direct consequence of our findings, OCB2 is currently in a process of removal from ISO standards. Our attacks do not apply to OCB1 and OCB3, and our privacy attacks on OCB2 require an active adversary.
2019
CRYPTO
Quantum Cryptanalysis in the RAM Model: Claw-Finding Attacks on SIKE 📺
Best Young Researcher Paper
We introduce models of computation that enable direct comparisons between classical and quantum algorithms. Incorporating previous work on quantum computation and error correction, we justify the use of the gate-count and depth-times-width cost metrics for quantum circuits. We demonstrate the relevance of these models to cryptanalysis by revisiting, and increasing, the security estimates for the Supersingular Isogeny Diffie–Hellman (SIDH) and Supersingular Isogeny Key Encapsulation (SIKE) schemes. Our models, analyses, and physical justifications have applications to a number of memory intensive quantum algorithms.
2019
CRYPTO
Fully Secure Attribute-Based Encryption for t-CNF from LWE 📺
Best young researcher
Attribute-based Encryption (ABE), first introduced by [SW05, GPSW06], is a public key encryption system that can support multiple users with varying decryption permissions. One of the main properties of such schemes is the supported function class of policies. While there are fully secure constructions from bilinear maps for a fairly large class of policies, the situation with lattice-based constructions is less satisfactory and many efforts were made to close this gap. Prior to this work the only known fully secure lattice construction was for the class of point functions (also known as IBE). In this work we construct for the first time a lattice-based (ciphertext-policy) ABE scheme for the function class t-CNF, which consists of CNF formulas where each clause depends on at most t bits of the input, for any constant t. This class includes NP-verification policies, bit-fixing policies and t-threshold policies. Towards this goal we also construct a fully secure single-key constrained PRF from OWF for the same function class, which might be of independent interest.
2019
CRYPTO
Noninteractive Zero Knowledge for NP from (Plain) Learning with Errors 📺
We finally close the long-standing problem of constructing a noninteractive zero-knowledge (NIZK) proof system for any NP language with security based on the plain Learning With Errors (LWE) problem, and thereby on worst-case lattice problems. Our proof system instantiates the framework recently developed by Canetti et al. [EUROCRYPT’18], Holmgren and Lombardi [FOCS’18], and Canetti et al. [STOC’19] for soundly applying the Fiat–Shamir transform using a hash function family that is correlation intractable for a suitable class of relations. Previously, such hash families were based either on “exotic” assumptions (e.g., indistinguishability obfuscation or optimal hardness of certain LWE variants) or, more recently, on the existence of circularly secure fully homomorphic encryption (FHE). However, none of these assumptions are known to be implied by plain LWE or worst-case hardness. Our main technical contribution is a hash family that is correlation intractable for arbitrary size-S circuits, for any polynomially bounded S, based on plain LWE (with small polynomial approximation factors). The construction combines two novel ingredients: a correlation-intractable hash family for log-depth circuits based on LWE (or even the potentially harder Short Integer Solution problem), and a “bootstrapping” transform that uses (leveled) FHE to promote correlation intractability for the FHE decryption circuit to arbitrary (bounded) circuits. Our construction can be instantiated in two possible “modes,” yielding a NIZK that is either computationally sound and statistically zero knowledge in the common random string model, or vice-versa in the common reference string model.
2019
CRYPTO
Lattice-Based Zero-Knowledge Proofs: New Techniques for Shorter and Faster Constructions and Applications 📺
We devise new techniques for design and analysis of efficient lattice-based zero-knowledge proofs (ZKP). First, we introduce one-shot proof techniques for non-linear polynomial relations of degree $k \ge 2$, where the protocol achieves a negligible soundness error in a single execution, and thus performs significantly better in both computation and communication compared to prior protocols requiring multiple repetitions. Such proofs with degree $k \ge 2$ have been crucial ingredients for important privacy-preserving protocols in the discrete logarithm setting, such as Bulletproofs (IEEE S&P ’18) and arithmetic circuit arguments (EUROCRYPT ’16). In contrast, one-shot proofs in lattice-based cryptography have previously only been shown for the linear case ($k=1$) and a very specific quadratic case ($k=2$), which are obtained as a special case of our technique. Moreover, we introduce two speedup techniques for lattice-based ZKPs: a CRT-packing technique supporting “inter-slot” operations, and “NTT-friendly” tools that permit the use of fully-splitting rings. The former technique comes at almost no cost to the proof length, and the latter one barely increases it, which can be compensated for by tweaking the rejection sampling parameters while still having faster computation overall. To illustrate the utility of our techniques, we show how to use them to build efficient relaxed proofs for important relations, namely proof of commitment to bits, one-out-of-many proof, range proof and set membership proof. Despite their relaxed nature, we further show how our proof systems can be used as building blocks for advanced cryptographic tools such as ring signatures. Our ring signature achieves a dramatic improvement in length over all the existing proposals from lattices at the same security level. The computational evaluation also shows that our construction is highly likely to outperform all the relevant works in running times. Being efficient in both aspects, our ring signature is particularly suitable for both small-scale and large-scale applications such as cryptocurrencies and e-voting systems. No trusted setup is required for any of our proposals.
2019
CRYPTO
Efficient Lattice-Based Zero-Knowledge Arguments with Standard Soundness: Construction and Applications 📺
We provide new zero-knowledge argument of knowledge systems that work directly for a wide class of languages, namely, ones involving the satisfiability of matrix-vector relations and integer relations commonly found in constructions of lattice-based cryptography. Prior to this work, practical arguments for lattice-based relations either have a constant soundness error ($2/3$), or consider a weaker form of soundness, namely, extraction only guarantees that the prover is in possession of a witness that “approximates” the actual witness. Our systems do not suffer from these limitations. The core of our new argument systems is an efficient zero-knowledge argument of knowledge of a solution to a system of linear equations, where variables of this solution satisfy a set of quadratic constraints. This argument enjoys standard soundness, a small soundness error ($1/\mathrm{poly}$), and a complexity linear in the size of the solution. Using our core argument system, we construct highly efficient argument systems for a variety of statements relevant to lattices, including linear equations with short solutions and matrix-vector relations with hidden matrices. Based on our argument systems, we present several new constructions of common privacy-preserving primitives in the standard lattice setting, including a group signature, a ring signature, an electronic cash system, and a range proof protocol. Our new constructions are one to three orders of magnitude more efficient than the state of the art (in the standard lattice setting). This illustrates the efficiency and expressiveness of our argument system.
2019
CRYPTO
Algebraic Techniques for Short(er) Exact Lattice-Based Zero-Knowledge Proofs 📺
A key component of many lattice-based protocols is a zero-knowledge proof of knowledge of a vector $\vec{s}$ with small coefficients satisfying $A\vec{s}=\vec{u} \bmod q$. While there exist fairly efficient proofs for a relaxed version of this equation which prove the knowledge of $\vec{s}'$ and $c$ satisfying $A\vec{s}'=\vec{u}c$ where $\Vert \vec{s}'\Vert \gg \Vert \vec{s}\Vert$ and $c$ is some small element in the ring over which the proof is performed, the proofs for the exact version of the equation are considerably less practical. The best such proof technique is an adaptation of Stern’s protocol (Crypto ’93), for proving knowledge of nearby codewords, to larger moduli. The scheme is a $\Sigma$-protocol, each of whose iterations has soundness error $2/3$, and thus requires over 200 repetitions to obtain soundness error of $2^{-128}$, which is the main culprit behind the large size of the proofs produced. In this paper, we propose the first lattice-based proof system that significantly outperforms Stern-type proofs for proving knowledge of a short $\vec{s}$ satisfying $A\vec{s}=\vec{u} \bmod q$. Unlike Stern’s proof, which is combinatorial in nature, our proof is more algebraic and uses various relaxed zero-knowledge proofs as sub-routines. The main savings in our proof system come from the fact that each round has soundness error of $1/n$, where $n$ is the number of columns of $A$. For typical applications, $n$ is a few thousand, and therefore our proof needs to be repeated around 10 times to achieve a soundness error of $2^{-128}$. For concrete parameters, it produces proofs that are around an order of magnitude smaller than those produced using Stern’s approach.
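The relation at stake is concrete enough to state in a few lines of code. Below is a toy Python check of the exact relation (a short $\vec{s}$ with $A\vec{s}=\vec{u} \bmod q$), with illustrative parameters chosen for readability; real instantiations use far larger dimensions and moduli.

import random

random.seed(0)
q, m, n = 97, 8, 16      # toy parameters; real schemes use far larger q, n
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
s = [random.choice([-1, 0, 1]) for _ in range(n)]           # short witness
u = [sum(A[i][j] * s[j] for j in range(n)) % q for i in range(m)]

def satisfies_exact(A, s, u, q, bound=1):
    # The exact relation: every coefficient of s is small AND A s = u mod q.
    short = all(abs(x) <= bound for x in s)
    solves = all(sum(row[j] * s[j] for j in range(len(s))) % q == ui
                 for row, ui in zip(A, u))
    return short and solves

# A relaxed proof would only guarantee A s' = c u mod q for a longer s'
# and a small ring element c; the system above proves the exact relation.
print(satisfies_exact(A, s, u, q))   # True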
2019
CRYPTO
Seedless Fruit Is the Sweetest: Random Number Generation, Revisited 📺
The need for high-quality randomness in cryptography makes random-number generation one of its most fundamental tasks. A recent important line of work (initiated by Dodis et al., CCS ’13) focuses on the notion of robustness for pseudorandom number generators (PRNGs) with inputs. These are primitives that use various sources to accumulate sufficient entropy into a state, from which pseudorandom bits are extracted. Robustness ensures that PRNGs remain secure even under state compromise and adversarial control of entropy sources. However, the achievability of robustness inherently depends on a seed, or, alternatively, on an ideal primitive (e.g., a random oracle), independent of the source of entropy. Both assumptions are problematic: seed generation requires randomness to start with, and it is arguable whether the seed or the ideal primitive can be kept independent of the source. This paper resolves this dilemma by putting forward new notions of robustness which enable both (1) seedless PRNGs and (2) primitive-dependent adversarial sources of entropy. To bypass obvious impossibility results, we make a realistic compromise by requiring that the source produce sufficient entropy even given its evaluations of the underlying primitive. We also provide natural, practical, and provably secure constructions based on hash-function designs from compression functions, block ciphers, and permutations. Our constructions can be instantiated with minimal changes to the industry-standard hash functions SHA-2 and SHA-3, or the key derivation function HKDF, and can be downgraded to (online) seedless randomness extractors, which are of independent interest. On the way, we consider both a computational variant of robustness, where attackers only make a bounded number of queries to the ideal primitive, as well as a new information-theoretic variant, which dispenses with this assumption to a certain extent, at the price of requiring a high rate of injected weak randomness (as is plausible, e.g., for Intel’s on-chip RNG). The latter notion enables applications such as everlasting security. Finally, we show that the CBC extractor, used by Intel’s on-chip RNG, is provably insecure in our model.
2019
CRYPTO
Nonces Are Noticed: AEAD Revisited 📺
We draw attention to a gap between theory and usage of nonce-based symmetric encryption, under which the way the former treats nonces can result in violation of privacy in the latter. We bridge the gap with a new treatment of nonce-based symmetric encryption that modifies the syntax (decryption no longer takes a nonce), upgrades the security goal (asking that not just messages, but also nonces, be hidden) and gives simple, efficient schemes conforming to the new definitions. We investigate both basic security (holding when nonces are not reused) and advanced security (misuse resistance, providing best-possible guarantees when nonces are reused).
2019
CRYPTO
How to Build Pseudorandom Functions from Public Random Permutations 📺
Pseudorandom functions are traditionally built upon block ciphers, but with the trend of permutation-based cryptography, it is a natural question to investigate the design of pseudorandom functions from random permutations. We present a generic study of how to build beyond-birthday-bound secure pseudorandom functions from public random permutations. We first show that a pseudorandom function based on a single permutation call cannot be secure beyond the $2^{n/2}$ birthday bound, where $n$ is the state size of the function. We next consider the Sum of Even-Mansour (SoEM) construction, which instantiates the sum of permutations with the Even-Mansour construction. We prove that SoEM achieves tight $2n/3$-bit security if it is constructed from two independent permutations and two randomly drawn keys. We also demonstrate a birthday bound attack if either the permutations or the keys are identical. Finally, we present the Sum of Key Alternating Ciphers (SoKAC) construction, a translation of Encrypted Davies-Meyer Dual to a public-permutation-based setting, and show that SoKAC achieves tight $2n/3$-bit security even when a single key is used.
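A toy rendering of our reading of the SoEM construction (the XOR of two single-key Even-Mansour branches) may help make the object concrete; the 8-bit block size and the table-based Python permutations below are for illustration only.

import random

# Toy instantiation of (our reading of) SoEM:
#   F(x) = P1(x ^ k1) ^ k1 ^ P2(x ^ k2) ^ k2,
# i.e., the sum of two single-key Even-Mansour evaluations.
n = 8
N = 1 << n
rng = random.Random(1)
P1 = rng.sample(range(N), N)       # public random permutation #1
P2 = rng.sample(range(N), N)       # public random permutation #2
k1, k2 = rng.randrange(N), rng.randrange(N)

def soem(x):
    return P1[x ^ k1] ^ k1 ^ P2[x ^ k2] ^ k2

# The 2n/3-bit security claim needs independent P1, P2 and independent
# k1, k2; collapsing either reopens a birthday-bound attack.
print([soem(x) for x in range(4)])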
2019
CRYPTO
New Results on Modular Inversion Hidden Number Problem and Inversive Congruential Generator 📺
The Modular Inversion Hidden Number Problem (MIHNP), introduced by Boneh, Halevi and Howgrave-Graham at Asiacrypt 2001, is briefly described as follows: Let $\mathrm{MSB}_{\delta}(z)$ refer to the $\delta$ most significant bits of $z$. Given many samples $\left(t_{i}, \mathrm{MSB}_{\delta}((\alpha + t_{i})^{-1} \bmod p)\right)$ for random $t_i \in \mathbb{Z}_p$, the goal is to recover the hidden number $\alpha \in \mathbb{Z}_p$. MIHNP is an important class of Hidden Number Problems. In this paper, we revisit the Coppersmith technique for solving a class of modular polynomial equations, derived from the problem of recovering the hidden number $\alpha$ in MIHNP. For any positive integer constant $d$, let integer $n=d^{3+o(1)}$. Given a sufficiently large modulus $p$ and $n+1$ samples of MIHNP, we present a heuristic algorithm to recover the hidden number $\alpha$ with a probability close to 1 when $\delta/\log_2 p > \frac{1}{d+1}+o(\frac{1}{d})$. The overall time complexity of the attack is polynomial in $\log_2 p$, where the complexity of the LLL algorithm grows as $d^{\mathcal{O}(d)}$ and the complexity of the Gröbner basis computation grows as $(2d)^{\mathcal{O}(n^2)}$. When $d > 2$, this asymptotic bound outperforms the asymptotic bound $\delta/\log_2 p > \frac{1}{3}$ proposed by Boneh, Halevi and Howgrave-Graham at Asiacrypt 2001. This is the first time a better bound for solving MIHNP has been given, which implies that the conjecture that MIHNP is hard whenever $\delta/\log_2 p < \frac{1}{3}$ is broken. Moreover, we also obtain the best result to date for attacking the Inversive Congruential Generator (ICG).
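MIHNP samples are easy to generate, which makes the problem statement concrete. A small Python sketch under illustrative parameters (a Mersenne prime modulus and a hypothetical choice of $\delta = 48$):

import random

def msb(z, delta, p):
    # The delta most significant bits of z, viewed as an integer in [0, p).
    return z >> (p.bit_length() - delta)

def mihnp_samples(p, alpha, delta, count):
    out = []
    for _ in range(count):
        t = random.randrange(1, p)
        if (alpha + t) % p == 0:           # skip the non-invertible case
            continue
        inv = pow(alpha + t, -1, p)        # (alpha + t)^{-1} mod p
        out.append((t, msb(inv, delta, p)))
    return out

p = (1 << 127) - 1                         # a Mersenne prime, for illustration
alpha = random.randrange(p)                # the hidden number to be recovered
print(mihnp_samples(p, alpha, 48, 3))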
2019
CRYPTO
On the Shortness of Vectors to Be Found by the Ideal-SVP Quantum Algorithm 📺
The hardness of finding short vectors in ideals of cyclotomic number fields (hereafter, Ideal-SVP) can serve as a worst-case assumption for numerous efficient cryptosystems, via the average-case problems Ring-SIS and Ring-LWE. For a while, it could be assumed that the Ideal-SVP problem was as hard as the analogous problem for general lattices (SVP), even when considering quantum algorithms. But in the last few years, a series of works has led to a quantum algorithm for Ideal-SVP that outperforms what can be done for general SVP in certain regimes. More precisely, it was demonstrated (under certain hypotheses) that one can find in quantum polynomial time a vector longer by a factor at most $\alpha = \exp(\widetilde{O}(n^{1/2}))$ than the shortest non-zero vector in a cyclotomic ideal lattice, where $n$ is the dimension. In this work, we explore the constants hidden behind this asymptotic claim. While these algorithms have quantum steps, the steps that impact the approximation factor $\alpha$ are entirely classical, which allows us to estimate it experimentally using only classical computing. Moreover, we design heuristic improvements for those steps that significantly decrease the hidden factors in practice. Finally, we derive new provable effective lower bounds based on volumetric arguments. This study allows us to predict the crossover point with classical lattice reduction algorithms, and thereby determine the relevance of this quantum algorithm in any cryptanalytic context. For example, we predict that this quantum algorithm provides shorter vectors than BKZ-300 (roughly the weakest security level of NIST lattice-based candidates) for cyclotomic rings of rank larger than about 24000.
2019
CRYPTO
Proofs of Replicated Storage Without Timing Assumptions 📺
In this paper we provide a formal treatment of proofs of replicated storage, a novel cryptographic primitive recently proposed in the context of a new cryptocurrency, namely Filecoin. In a nutshell, a proof of replicated storage is a solution to the following problem: A user stores a file m on n different servers to ensure that the file will be available even if some of the servers fail. Using proofs of retrievability, the user could check that every server is indeed storing the file. However, what if the servers collude and, in order to save on resources, decide to only store one copy of the file? A proof of replicated storage guarantees that, unless the (potentially colluding) servers are indeed reserving the space necessary to store n copies of the file, the user will not accept the proofs. While some candidate proofs of replicated storage have already been proposed, their soundness relies on timing assumptions, i.e., the user must reject the proof if the prover does not reply within a certain time bound. In this paper we provide the first construction of a proof of replication which does not rely on any timing assumptions.
2019
CRYPTO
Simple Proofs of Space-Time and Rational Proofs of Storage 📺
We introduce a new cryptographic primitive: Proofs of Space-Time (PoSTs) and construct an extremely simple, practical protocol for implementing these proofs. A PoST allows a prover to convince a verifier that she spent a “space-time” resource (storing data—space—over a period of time). Formally, we define the PoST resource as a trade-off between CPU work and space-time (under reasonable cost assumptions, a rational user will prefer to use the lower-cost space-time resource over CPU work). Compared to a proof-of-work, a PoST requires less energy use, as the “difficulty” can be increased by extending the time period over which data is stored without increasing computation costs. Our definition is very similar to “Proofs of Space” [ePrint 2013/796, 2013/805] but, unlike the previous definitions, takes into account amortization attacks and storage duration. Moreover, our protocol uses a very different (and much simpler) technique, making use of the fact that we explicitly allow a space-time tradeoff, and doesn’t require any non-standard assumptions (beyond random oracles). Unlike previous constructions, our protocol allows incremental difficulty adjustment, which can gracefully handle increases in the price of storage compared to CPU work. In addition, we show how, in a crypto-currency context, the parameters of the scheme can be adjusted using a market-based mechanism, similar in spirit to the difficulty adjustment for PoW protocols.
2019
CRYPTO
Non-malleable Codes for Decision Trees 📺
We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by decision trees of depth $d = n^{1/4-o(1)}$. In particular, each bit of the tampered codeword is set arbitrarily after adaptively reading up to $d$ arbitrary locations within the original codeword. Prior to this work, no efficient unconditional non-malleable codes were known for decision trees beyond depth $O(\log^2 n)$. Our result also yields efficient, unconditional non-malleable codes that are $\exp(-n^{\Omega(1)})$-secure against constant-depth circuits of $\exp(n^{\Omega(1)})$-size. Prior works of Chattopadhyay and Li (STOC 2017) and Ball et al. (FOCS 2018) only provide protection against $\exp(O(\log^2 n))$-size circuits with $\exp(-O(\log^2 n))$-security. We achieve our result through simple non-malleable reductions of decision tree tampering to split-state tampering. As an intermediary, we give a simple and generic reduction of leakage-resilient split-state tampering to split-state tampering with improved parameters. Prior work of Aggarwal et al. (TCC 2015) only provides a reduction to split-state non-malleable codes with decoders that exhibit particular properties.
2019
CRYPTO
Explicit Rate-1 Non-malleable Codes for Local Tampering 📺
This paper constructs high-rate non-malleable codes in the information-theoretic plain model against tampering functions with bounded locality. We consider $\delta$-local tampering functions; namely, each output bit of the tampering function is a function of (at most) $\delta$ input bits. This work presents the first explicit and efficient rate-1 non-malleable code for $\delta$-local tampering functions, where $\delta = \xi \lg n$ and $\xi < 1$ is any positive constant. As a corollary, we construct the first explicit rate-1 non-malleable code against $\mathsf{NC}^0$ tampering functions. Before our work, no explicit construction for a constant-rate non-malleable code was known even for the simplest 1-local tampering functions. Ball et al. (EUROCRYPT 2016), and Chattopadhyay and Li (STOC 2017) provided the first explicit non-malleable codes against $\delta$-local tampering functions. However, these constructions are rate-0 even when the tampering functions have 1-locality. In the CRS model, Faust et al. (EUROCRYPT 2014) constructed efficient rate-1 non-malleable codes for $\delta = O(\log n)$ local tampering functions. Our main result is a general compiler that bootstraps a rate-0 non-malleable code against leaky input and output local tampering functions to construct a rate-1 non-malleable code against $\xi \lg n$-local tampering functions, for any positive constant $\xi < 1$. Our explicit construction instantiates this compiler using an appropriate encoding by Ball et al. (EUROCRYPT 2016).
2019
CRYPTO
Continuous Space-Bounded Non-malleable Codes from Stronger Proofs-of-Space 📺
Non-malleable codes are encoding schemes that provide protection against various classes of tampering attacks. Recently Faust et al. (CRYPTO 2017) initiated the study of space-bounded non-malleable codes that provide such protection against tampering within small-space devices. They put forward a construction based on any non-interactive proof-of-space (NIPoS). However, the scheme only protects against an a priori bounded number of tampering attacks. We construct non-malleable codes that are resilient to an unbounded polynomial number of space-bounded tamperings. Towards that, we introduce a stronger variant of NIPoS called proof-extractable NIPoS (PExt-NIPoS), and propose two approaches to constructing such a primitive. Using a new proof strategy, we show that the generic encoding scheme of Faust et al. achieves unbounded tamper-resilience when instantiated with a PExt-NIPoS. We show two methods to construct PExt-NIPoS: 1. The first method uses a special family of “memory-hard” graphs, called challenge-hard graphs (CHG), a notion we introduce here. We instantiate such a family of graphs based on an extension of stacks of localized expanders (first used by Ren and Devadas in the context of proof-of-space). In addition, we show that the graph construction used as a building block for the proof-of-space by Dziembowski et al. (CRYPTO 2015) satisfies challenge-hardness as well. These two CHG instantiations lead to continuous space-bounded NMC with different features in the random oracle model. 2. Our second instantiation relies on a new measurable property, called uniqueness of NIPoS. We show that standard extractability can be upgraded to proof-extractability if the NIPoS also has uniqueness. We propose a simple heuristic construction of NIPoS that achieves (partial) uniqueness, based on a candidate memory-hard function in the standard model and a publicly verifiable computation with small-space verification. Instantiating the encoding scheme of Faust et al. with this NIPoS, we obtain a continuous space-bounded NMC that supports the “most practical” parameters, complementing the provably secure but “relatively impractical” CHG-based constructions. Additionally, we revisit the construction of Faust et al. and observe that due to the lack of uniqueness of their NIPoS, the resulting encoding schemes yield “highly impractical” parameters in the continuous setting. We conclude the paper with a comparative study of all our non-malleable code constructions with an estimation of concrete parameters.
2019
CRYPTO
Synchronous, with a Chance of Partition Tolerance 📺
Murphy, Murky, Mopey, Moody, and Morose decide to write a paper together over the Internet and submit it to the prestigious CRYPTO’19 conference that has the most amazing PC. They encounter a few problems. First, not everyone is online every day: some are lazy and go skiing on Mondays; others cannot use git correctly and they are completely unaware that they are losing messages. Second, a small subset of the co-authors may be secretly plotting to disrupt the project (e.g., because they are writing a competing paper in stealth).Suppose that each day, sufficiently many honest co-authors are online (and use git correctly); moreover, suppose that messages checked into git on Monday can be correctly received by honest and online co-authors on Tuesday or any future day. Can the honest co-authors successfully finish the paper in a small number of days such that they make the CRYPTO deadline; and perhaps importantly, can all the honest co-authors, including even those who are lazy and those who sometimes use git incorrectly, agree on the final theorem?
2019
CRYPTO
Subvector Commitments with Application to Succinct Arguments 📺
We put forward the notion of subvector commitments (SVC): An SVC allows one to open a committed vector at a set of positions, where the opening size is independent of the length of the committed vector and the number of positions to be opened. We propose two constructions under variants of the root assumption and the CDH assumption, respectively. We further generalize SVC to a notion called linear map commitments (LMC), which allows one to open a committed vector to its images under linear maps with a single short message, and propose a construction over pairing groups. Equipped with these newly developed tools, we revisit the “CS proofs” paradigm [Micali, FOCS 1994] which turns any argument with a public-coin verifier into a non-interactive argument using the Fiat-Shamir transform in the random oracle model. We propose a compiler that turns any (linear, resp.) PCP into a non-interactive argument, using exclusively SVCs (LMCs, resp.). For approximately 80 bits of soundness, we highlight the following new implications: 1. There exists a succinct non-interactive argument of knowledge (SNARK) with public-coin setup with proofs of size 5360 bits, under the adaptive root assumption over class groups of imaginary quadratic orders against adversaries with runtime $2^{128}$. At the time of writing, this is the shortest SNARK with public-coin setup. 2. There exists a non-interactive argument with private-coin setup, where proofs consist of 2 group elements and 3 field elements, in the generic bilinear group model.
2019
CRYPTO
Batching Techniques for Accumulators with Applications to IOPs and Stateless Blockchains 📺
We present batching techniques for cryptographic accumulators and vector commitments in groups of unknown order. Our techniques are tailored for distributed settings where no trusted accumulator manager exists and updates to the accumulator are processed in batches. We develop techniques for non-interactively aggregating membership proofs that can be verified with a constant number of group operations. We also provide a constant sized batch non-membership proof for a large number of elements. These proofs can be used to build the first positional vector commitment (VC) with constant sized openings and constant sized public parameters. As a core building block for our batching techniques we develop several succinct proof systems in groups of unknown order. These extend a recent construction of a succinct proof of correct exponentiation, and include a succinct proof of knowledge of an integer discrete logarithm between two group elements. We circumvent an impossibility result for Sigma-protocols in these groups by using a short trapdoor-free CRS. We use these new accumulator and vector commitment constructions to design a stateless blockchain, where nodes only need a constant amount of storage in order to participate in consensus. Further, we show how to use these techniques to reduce the size of IOP instantiations, such as STARKs. The full version of the paper is available online [BBF18b].
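The aggregation idea behind the membership proofs can be sketched in a few lines. The toy Python below uses a modulus whose factors appear right in the code and omits the succinct proof of exponentiation that makes verification constant-cost, so it only illustrates the algebra, not the security or the succinctness of the actual constructions.

from math import prod

# Toy RSA accumulator in a group that should be of unknown order (real
# deployments need a trusted-setup RSA modulus or a class group; the
# modulus here is for illustration only). Elements are assumed to be
# mapped to distinct primes beforehand.
N = 3233 * 5821          # stand-in modulus; NOT of unknown order
g = 2

elements = [3, 5, 7, 11]                 # primes representing accumulated items
acc = pow(g, prod(elements), N)

# One aggregated membership witness for the whole subset {3, 11}:
subset = [3, 11]
rest = [e for e in elements if e not in subset]
witness = pow(g, prod(rest), N)

# Verification is a single exponentiation by the product of the subset;
# the paper compresses this further with a succinct proof of exponentiation.
assert pow(witness, prod(subset), N) == acc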
2019
CRYPTO
On the Plausibility of Fully Homomorphic Encryption for RAMs 📺
We initiate the study of fully homomorphic encryption for RAMs (RAM-FHE). This is a public-key encryption scheme where, given an encryption of a large database D, anybody can efficiently compute an encryption of P(D) for an arbitrary RAM program P. The running time over the encrypted data should be as close as possible to the worst case running time of P, which may be sub-linear in the data size. A central difficulty in constructing a RAM-FHE scheme is hiding the sequence of memory addresses accessed by P. This is particularly problematic because an adversary may homomorphically evaluate many programs over the same ciphertext, therefore effectively “rewinding” any mechanism for making memory accesses oblivious. We identify a necessary prerequisite towards constructing RAM-FHE that we call rewindable oblivious RAM (rewindable ORAM), which provides security even in this strong adversarial setting. We show how to construct rewindable ORAM using symmetric-key doubly efficient PIR (SK-DEPIR) (Canetti-Holmgren-Richelson, Boyle-Ishai-Pass-Wootters: TCC ’17). We then show how to use rewindable ORAM, along with virtual black-box (VBB) obfuscation for specific circuits, to construct RAM-FHE. The latter primitive can be heuristically instantiated using existing indistinguishability obfuscation candidates. Overall, we obtain a RAM-FHE scheme where the multiplicative overhead in running time is polylogarithmic in the database size N. Our basic scheme is single-hop, but we also extend it to obtain multi-hop RAM-FHE with overhead $N^\epsilon$ for arbitrarily small $\epsilon > 0$. We view our work as the first evidence that RAM-FHE is likely to exist.
2019
CRYPTO
Homomorphic Time-Lock Puzzles and Applications 📺
Time-lock puzzles allow one to encrypt messages for the future, by efficiently generating a puzzle with a solution s that remains hidden until time $\mathcal{T}$ has elapsed. The solution is required to be concealed from the eyes of any algorithm running in (parallel) time less than $\mathcal{T}$. We put forth the concept of homomorphic time-lock puzzles, where one can evaluate functions over puzzles without solving them, i.e., one can manipulate a set of puzzles with solutions $(s_1, \dots, s_n)$ to obtain a puzzle that solves to $f(s_1, \dots, s_n)$, for any function f. We propose candidate constructions under concrete cryptographic assumptions for different classes of functions. Then we show how homomorphic time-lock puzzles overcome the limitations of classical time-lock puzzles by proposing new protocols for applications of interest, such as e-voting, multi-party coin flipping, and fair contract signing.
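Homomorphic time-lock puzzles extend the classical RSW puzzle, whose mechanics are worth recalling in code. A toy Python sketch with tiny parameters and RSW-style additive masking (the homomorphic constructions replace this masking with a homomorphic encoding):

import random

# Classical RSW time-lock puzzle, the base object that homomorphic
# time-lock puzzles build on. Toy parameters; real puzzles use a large N.
p, q = 1000003, 1000033                  # secret factors (toy primes)
N, phi = p * q, (p - 1) * (q - 1)
T = 10000                                # number of sequential squarings

def generate(s):
    x = random.randrange(2, N)
    # Generator shortcut: knowing phi(N), reduce the exponent 2^T mod phi(N).
    y = pow(x, pow(2, T, phi), N)
    return x, (s + y) % N                # puzzle = (x, s masked by y)

def solve(x, masked):
    y = x
    for _ in range(T):                   # no shortcut: T sequential squarings
        y = y * y % N
    return (masked - y) % N

x, masked = generate(42)
assert solve(x, masked) == 42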
2019
CRYPTO
Symmetric Primitives with Structured Secrets 📺
Securely managing encrypted data on an untrusted party is a challenging problem that has motivated the study of a wide variety of cryptographic primitives. A special class of such primitives allows an untrusted party to transform a ciphertext encrypted under one key to a ciphertext under another key, using some auxiliary information that does not leak the underlying data. Prominent examples of such primitives in the symmetric setting are key-homomorphic (weak) PRFs, updatable encryption, and proxy re-encryption. Although these primitives differ significantly in terms of their constructions and security requirements, they share two important properties: (a) they have secrets with structure or extra functionality, and (b) all known constructions of these primitives satisfying reasonably strong definitions of security are based on concrete public-key assumptions, e.g., DDH and LWE. This raises the question of whether these objects inherently belong to the world of public-key primitives, or whether they can potentially be built from simple symmetric-key objects such as pseudorandom functions. In this work, we show that the latter possibility is unlikely. More specifically, we show that: (1) Any (bounded) key-homomorphic weak PRF with an abelian output group implies a (bounded) input-homomorphic weak PRF, which has recently been shown to imply not only public-key encryption but also a variety of primitives such as PIR, lossy TDFs, and even IBE. (2) Any ciphertext-independent updatable encryption scheme that is forward and post-compromise secure implies PKE. Moreover, any symmetric-key proxy re-encryption scheme with reasonably strong security guarantees implies a forward and post-compromise secure ciphertext-independent updatable encryption scheme, and hence PKE. In addition, we show that unbounded (or exact) key-homomorphic weak PRFs over abelian groups are impossible in the quantum world. In other words, over abelian groups, bounded key-homomorphism is the best that we can hope for in terms of post-quantum security. Our attack also works over other structured primitives with abelian groups and exact homomorphisms, including homomorphic one-way functions and input-homomorphic weak PRFs.
2019
CRYPTO
Unifying Leakage Models on a Rényi Day 📺
In the last decade, several works have focused on finding the best way to model the leakage in order to obtain provably secure implementations. One of the most realistic models is the noisy leakage model, introduced in [PR13, DDF14] together with secure constructions. These works suffer from various limitations, in particular the use of ideal leak-free gates in [PR13] and an important loss (in the size of the field) in the reduction in [DDF14]. In this work, we provide new strategies to prove the security of masked implementations and start by unifying the different noisiness metrics used in prior works by relating all of them to a standard notion in information theory: the pointwise mutual information. Based on this new interpretation, we define two new natural metrics and analyze the security of known compilers with respect to these metrics. In particular, we prove (1) a tighter bound for reducing the noisy leakage model to the probing model using our first new metric, and (2) better bounds for amplification-based security proofs using the second metric. To support that the improvements we obtain are not only a consequence of the use of alternative metrics, we show that for concrete representations of leakage (e.g., “Hamming weight + Gaussian noise”), our approach significantly improves the parameters compared to prior works. Finally, using the Rényi divergence, we quantify concretely the advantage of an adversary in attacking a block cipher depending on the number of leakage acquisitions available to it.
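The unifying notion, pointwise mutual information, is simple to compute for a toy leakage model. The Python sketch below uses a 4-bit secret and discrete noise in place of the Gaussian noise mentioned above, purely for illustration.

from collections import defaultdict
from math import log2

# Pointwise mutual information pmi(s; l) = log2( Pr[s, l] / (Pr[s] Pr[l]) )
# for a toy leakage: 4-bit secret S, leakage L = HW(S) + uniform noise in
# {-1, 0, +1}. (A Gaussian would make L continuous; we discretize here.)
joint = defaultdict(float)
for s in range(16):
    for e in (-1, 0, 1):
        joint[(s, bin(s).count("1") + e)] += (1 / 16) * (1 / 3)

p_s = 1 / 16                             # uniform secret
p_l = defaultdict(float)
for (s, l), pr in joint.items():
    p_l[l] += pr

def pmi(s, l):
    return log2(joint[(s, l)] / (p_s * p_l[l]))

# Averaging pmi over the joint distribution gives the mutual information.
mi = sum(pr * pmi(s, l) for (s, l), pr in joint.items())
print(f"I(S; L) = {mi:.3f} bits")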
2019
CRYPTO
Leakage Certification Revisited: Bounding Model Errors in Side-Channel Security Evaluations 📺
Leakage certification aims at guaranteeing that the statistical models used in side-channel security evaluations are close to the true statistical distribution of the leakages, and hence can be used to approximate a worst-case security level. Previous works in this direction were only qualitative: for a given amount of measurements available to an evaluation laboratory, they rated a model as “good enough” if the model assumption errors (i.e., the errors due to an incorrect choice of model family) were small with respect to the model estimation errors. We revisit this problem by providing the first quantitative tools for leakage certification. For this purpose, we provide bounds for the (unknown) Mutual Information metric that corresponds to the true statistical distribution of the leakages based on two easy-to-compute information-theoretic quantities: the Perceived Information, which is the amount of information that can be extracted from a leaking device thanks to an estimated statistical model, possibly biased due to estimation and assumption errors, and the Hypothetical Information, which is the amount of information that would be extracted from a hypothetical device exactly following the model distribution. This positive outcome derives from the observation that while the estimation of the Mutual Information is in general a hard problem (i.e., estimators are biased and their convergence is distribution-dependent), it is significantly simplified in the case of statistical inference attacks where a target random variable (e.g., a key in a cryptographic setting) has a constant (e.g., uniform) probability. Our results therefore provide a general and principled path to bound the worst-case security level of an implementation. They also significantly speed up the evaluation of any profiled side-channel attack, since they imply that the estimation of the Perceived Information, which embeds an expensive cross-validation step, can be bounded by the computation of a cheaper Hypothetical Information, for any estimated statistical model.
2019
CRYPTO
Security in the Presence of Key Reuse: Context-Separable Interfaces and Their Applications 📺
Key separation is often difficult to enforce in practice. While key reuse can be catastrophic for security, we know of a number of cryptographic schemes for which it is provably safe. But existing formal models, such as the notions of joint security (Haber-Pinkas, CCS ’01) and agility (Acar et al., EUROCRYPT ’10), do not address the full range of key-reuse attacks—in particular, those that break the abstraction of the scheme, or exploit protocol interactions at a higher level of abstraction. This work attends to these vectors by focusing on two key elements: the game that codifies the scheme under attack, as well as its intended adversarial model; and the underlying interface that exposes secret key operations for use by the game. Our main security experiment considers the implications of using an interface (in practice, the API of a software library or a hardware platform such as TPM) to realize the scheme specified by the game when the interface is shared with other unspecified, insecure, or even malicious applications. After building up a definitional framework, we apply it to the analysis of two real-world schemes: the EdDSA signature algorithm and the Noise protocol framework. Both provide some degree of context separability, a design pattern for interfaces and their applications that aids in the deployment of secure protocols.
2019
CRYPTO
The Communication Complexity of Threshold Private Set Intersection 📺
Threshold private set intersection enables Alice and Bob, who hold sets $S_{\mathsf{A}}$ and $S_{\mathsf{B}}$ of size $n$, to compute the intersection $S_{\mathsf{A}} \cap S_{\mathsf{B}}$ if the sets do not differ by more than some threshold parameter $t$. In this work, we investigate the communication complexity of this problem and we establish the first upper and lower bounds. We show that any protocol has to have a communication complexity of $\Omega(t)$. We show that an almost matching upper bound of $\tilde{\mathcal{O}}(t)$ can be obtained via fully homomorphic encryption. We present a computationally more efficient protocol based on weaker assumptions, namely additively homomorphic encryption, with a communication complexity of $\tilde{\mathcal{O}}(t^2)$. For applications like biometric authentication, where a given fingerprint has to have a large intersection with a fingerprint from a database, our protocols may result in significant communication savings. Prior to this work, all previous protocols had a communication complexity of $\Omega(n)$. Our protocols are the first ones with communication complexities that mainly depend on the threshold parameter $t$ and only logarithmically on the set size $n$.
2019
CRYPTO
Adaptively Secure MPC with Sublinear Communication Complexity 📺
A central challenge in the study of MPC is to balance between security guarantees, hardness assumptions, and resources required for the protocol. In this work, we study the cost of tolerating adaptive corruptions in MPC protocols under various corruption thresholds. In the strongest setting, we consider adaptive corruptions of an arbitrary number of parties (potentially all) and achieve the following results: A two-round secure function evaluation (SFE) protocol in the CRS model, assuming LWE and indistinguishability obfuscation (iO). The communication, the CRS size, and the online-computation are sublinear in the size of the function. The iO assumption can be replaced by secure erasures. Previous results required either the communication or the CRS size to be polynomial in the function size. Under the same assumptions, we construct a “Bob-optimized” 2PC (where Alice talks first, Bob second, and Alice learns the output). That is, the communication complexity and total computation of Bob are sublinear in the function size and in Alice’s input size. We prove impossibility of “Alice-optimized” protocols. Assuming LWE, we bootstrap adaptively secure NIZK arguments to achieve proof size sublinear in the circuit size of the NP-relation. On a technical level, our results are based on laconic function evaluation (LFE) (Quach, Wee, and Wichs, FOCS’18) and shed light on an interesting duality between LFE and FHE. Next, we analyze adaptive corruptions of all-but-one of the parties and show a two-round SFE protocol in the threshold PKI model (where keys of a threshold FHE scheme are pre-shared among the parties) with communication complexity sublinear in the circuit size, assuming LWE and NIZK. Finally, we consider the honest-majority setting, and show a two-round SFE protocol with guaranteed output delivery under the same constraints.
2019
CRYPTO
Communication Lower Bounds for Statistically Secure MPC, With or Without Preprocessing 📺
We prove a lower bound on the communication complexity of unconditionally secure multiparty computation, both in the standard model with $n=2t+1$ parties of which $t$ are corrupted, and in the preprocessing model with $n=t+1$. In both cases, we show that for any $g \in \mathbb{N}$ there exists a Boolean circuit $C$ with $g$ gates, where any secure protocol implementing $C$ must communicate $\Omega(ng)$ bits, even if only passive and statistical security is required. The results easily extend to constructing similar circuits over any fixed finite field. This shows that for all sizes of circuits, the $O(n)$ overhead of all known protocols when $t$ is maximal is inherent. It also shows that security comes at a price: without any security requirement, the circuit we consider could be computed among $n$ parties with communication only $O(g)$ bits. Our results extend to the case where the threshold $t$ is suboptimal. For the honest majority case, this shows that the known optimizations via packed secret-sharing can only be obtained if one accepts that the threshold is $t = (1/2 - c)n$ for a constant $c$. For the honest majority case, we also show an upper bound that matches the lower bound up to a constant factor (existing upper bounds are a factor $\lg n$ off for Boolean circuits).
2019
CRYPTO
Communication-Efficient Unconditional MPC with Guaranteed Output Delivery 📺
We study the communication complexity of unconditionally secure MPC with guaranteed output delivery over point-to-point channels for corruption threshold $t < n/3$. We ask the question: “is it possible to construct MPC in this setting such that the communication complexity per multiplication gate is linear in the number of parties?” While a number of works have focused on reducing the communication complexity in this setting, the answer to the above question has remained elusive for over a decade. We resolve the above question in the affirmative by providing an MPC with communication complexity $O(Cn\kappa + n^3\kappa)$, where $\kappa$ is the size of an element in the field, $C$ is the size of the (arithmetic) circuit, and $n$ is the number of parties. This represents a strict improvement over the previously best known communication complexity of $O(Cn\kappa + D_M n^2\kappa + n^3\kappa)$, where $D_M$ is the multiplicative depth of the circuit. To obtain this result, we introduce a novel technique called 4-consistent tuples of sharings which we believe to be of independent interest.
2019
CRYPTO
Efficient Collision Attack Frameworks for RIPEMD-160 📺
RIPEMD-160 is an ISO/IEC standard and has been applied together with SHA-256 to generate Bitcoin addresses. Due to the complex dual-stream structure, the first collision attack on reduced RIPEMD-160 presented by Liu, Mendel and Wang at Asiacrypt 2017 only reaches 30 steps, with a time complexity of $2^{70}$. Apart from that, several semi-free-start collision attacks have been published for reduced RIPEMD-160 with the start-from-the-middle method. Inspired by such start-from-the-middle structures, we propose two novel efficient collision attack frameworks for reduced RIPEMD-160 by making full use of the weakness of its message expansion. These two frameworks are called the dense-left-and-sparse-right (DLSR) framework and the sparse-left-and-dense-right (SLDR) framework. As it turns out, the DLSR framework is more efficient than the SLDR framework since one more step can be fully controlled, though at an extra $2^{32}$ memory complexity. To construct the best differential characteristics for the DLSR framework, we carefully build the linearized part of the characteristics and then solve the corresponding nonlinear part using a guess-and-determine approach. Based on the newly discovered differential characteristics, we provide colliding message pairs for the first practical collision attacks on 30 and 31 (out of 80) steps of RIPEMD-160 with time complexity $2^{35.9}$ and $2^{41.5}$ respectively. In addition, benefiting from partial calculation, we can attack 33 and 34 (out of 80) steps of RIPEMD-160 with time complexity $2^{67.1}$ and $2^{74.3}$ respectively. When applying the SLDR framework to the differential characteristic used in the Asiacrypt 2017 paper, we significantly improve the time complexity by a factor of $2^{13}$. However, it still cannot compete with the results obtained from the DLSR framework. To the best of our knowledge, these are the best collision attacks on reduced RIPEMD-160 with respect to the number of steps, including the first colliding message pairs for 30 and 31 steps of RIPEMD-160.
2019
CRYPTO
Improving Attacks on Round-Reduced Speck32/64 Using Deep Learning 📺
This paper has four main contributions. First, we calculate the predicted difference distribution of Speck32/64 with one specific input difference under the Markov assumption completely for up to eight rounds and verify that this yields a globally fairly good model of the difference distribution of Speck32/64. Secondly, we show that, contrary to conventional wisdom, machine learning can produce very powerful cryptographic distinguishers: for instance, in a simple low-data, chosen plaintext attack on nine rounds of Speck, we present distinguishers based on deep residual neural networks that achieve a mean key rank roughly five times lower than an analogous classical distinguisher using the full difference distribution table. Thirdly, we develop a highly selective key search policy based on a variant of Bayesian optimization which, together with our neural distinguishers, can be used to reduce the remaining security of 11-round Speck32/64 to roughly 38 bits. This is a significant improvement over previous literature. Lastly, we show that our neural distinguishers successfully use features of the ciphertext pair distribution that are invisible to all purely differential distinguishers even given unlimited data. While our attack is based on a known input difference taken from the literature, we also show that neural networks can be used to rapidly (within a matter of minutes on our machine) find good input differences without using prior human cryptanalysis. Supplementary code and data for this paper is available at https://github.com/agohr/deep_speck.
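For reference, Speck32/64 itself fits in a few lines, and the distinguishers above target reduced-round versions of exactly this function. A compact Python implementation following the public specification; the test vector at the end is the one published with the cipher.

MASK = 0xFFFF                            # Speck32/64: 16-bit words, 22 rounds

def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK

def speck_round(x, y, k):
    x = ((ror(x, 7) + y) & MASK) ^ k
    y = rol(y, 2) ^ x
    return x, y

def expand_key(k0, l, rounds=22):
    # The key schedule reuses the round function with the round index as "key".
    ks = [k0]
    l = list(l)
    for i in range(rounds - 1):
        new_l, new_k = speck_round(l[i], ks[i], i)
        l.append(new_l)
        ks.append(new_k)
    return ks

def encrypt(x, y, ks):
    for k in ks:
        x, y = speck_round(x, y, k)
    return x, y

# Test vector from the Speck specification:
# key 1918 1110 0908 0100, plaintext 6574 694c, ciphertext a868 42f2.
ks = expand_key(0x0100, [0x0908, 0x1110, 0x1918])
assert encrypt(0x6574, 0x694c, ks) == (0xa868, 0x42f2)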
2019
CRYPTO
Correlation of Quadratic Boolean Functions: Cryptanalysis of All Versions of Full $\mathsf{MORUS}$ 📺
We show that the correlation of any quadratic Boolean function can be read out from its so-called disjoint quadratic form. We further propose a polynomial-time algorithm that can transform an arbitrary quadratic Boolean function into its disjoint quadratic form. With this algorithm, the exact correlation of quadratic Boolean functions can be computed efficiently. We apply this method to analyze the linear trails of $\mathsf{MORUS}$ (one of the seven finalists of the CAESAR competition), which are found with the help of a generic model for linear trails of $\mathsf{MORUS}$-like key-stream generators. In our model, any tool for finding linear trails of block ciphers can be used to search for trails of $\mathsf{MORUS}$-like key-stream generators. As a result, a set of trails with correlation $2^{-38}$ is identified for all versions of full $\mathsf{MORUS}$, while the correlations of the previously published best trails for $\mathsf{MORUS}$-640 and $\mathsf{MORUS}$-1280 are $2^{-73}$ and $2^{-76}$ respectively (ASIACRYPT 2018). This significantly improves the complexity of the attack on $\mathsf{MORUS}$-1280-256 from $2^{152}$ to $2^{76}$. These new trails also lead to the first distinguishing and message-recovery attacks on $\mathsf{MORUS}$-640-128 and $\mathsf{MORUS}$-1280-128 with surprisingly low complexities around $2^{76}$. Moreover, we observe that the condition for exploiting these trails in an attack can be more relaxed than previously thought, which shows that the new trails are superior to previously published ones in terms of both correlation and the number of ciphertext blocks involved.
2019
CRYPTO
Low-Memory Attacks Against Two-Round Even-Mansour Using the 3-XOR Problem 📺
The iterated Even-Mansour construction is an elegant construction that idealizes block cipher designs such as the AES. In this work we focus on the simplest variant, the 2-round Even-Mansour construction with a single key. This is the most minimal construction that offers security beyond the birthday bound: there is a security proof up to $2^{2n/3}$ evaluations of the underlying permutations and encryption, and the best known attacks have a complexity of roughly $2^n/n$ operations. We show that attacking this scheme with block size $n$ is related to the 3-XOR problem with element size $\ell = 2n$, an important algorithmic problem that has been studied since the nineties. In particular, the 3-XOR problem is known to require at least $2^{\ell/3}$ queries, and the best known algorithms require around $2^{\ell/2}/\ell$ operations: this roughly matches the known bounds for the 2-round Even-Mansour scheme. Using this link, we describe new attacks against the 2-round Even-Mansour scheme. In particular, we obtain the first algorithms where both the data and the memory complexity are significantly lower than $2^n$. From a practical standpoint, previous works with a data and/or memory complexity close to $2^n$ are unlikely to be more efficient than a simple brute-force search over the key. Our best algorithm requires just $\lambda n$ known plaintext/ciphertext pairs, for some constant $0 < \lambda < 1$, $2^n/\lambda n$ time, and $2^{\lambda n}$ memory. For instance, with $n=64$ and $\lambda = 1/2$, the memory requirement is practical, and we gain a factor 32 over brute-force search. We also describe an algorithm with asymptotic complexity $\mathcal{O}(2^n \ln^2 n / n^2)$, improving the previous asymptotic complexity of $\mathcal{O}(2^n/n)$, using a variant of the 3-SUM algorithm of Baran, Demaine, and Pătrașcu.
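The 3-XOR problem is easy to state operationally. Below is a quadratic-time Python baseline (hash one list, scan the product of the other two); the algorithms discussed above improve on this baseline by roughly a factor of the element size.

import random

def three_xor(A, B, C):
    # Find (a, b, c) in A x B x C with a ^ b ^ c == 0, by hashing C
    # and scanning all pairs from A x B. Quadratic in the list length.
    lookup = set(C)
    for a in A:
        for b in B:
            if a ^ b in lookup:
                return a, b, a ^ b
    return None

bits, n = 16, 300                        # toy element size and list length
rng = random.Random(0)
A = [rng.getrandbits(bits) for _ in range(n)]
B = [rng.getrandbits(bits) for _ in range(n)]
C = [rng.getrandbits(bits) for _ in range(n)]
print(three_xor(A, B, C))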
2019
CRYPTO
How to Record Quantum Queries, and Applications to Quantum Indifferentiability 📺
The quantum random oracle model (QROM) has become the standard model in which to prove the post-quantum security of random-oracle-based constructions. Unfortunately, none of the known proof techniques allow the reduction to record information about the adversary’s queries, a crucial feature of many classical ROM proofs, including all proofs of indifferentiability for hash function domain extension. In this work, we give a new QROM proof technique that overcomes this “recording barrier”. We do so by giving a new “compressed oracle” which allows for efficient on-the-fly simulation of random oracles, roughly analogous to the usual classical simulation. We then use this new technique to give the first proof of quantum indifferentiability for the Merkle-Damgård domain extender for hash functions. We also give a proof of security for the Fujisaki-Okamoto transformation; previous proofs required modifying the scheme to include an additional hash term. Given the threat posed by quantum computers and the push toward quantum-resistant cryptosystems, our work represents an important tool for efficient post-quantum cryptosystems.
2019
CRYPTO
Quantum Security Proofs Using Semi-classical Oracles 📺
We present an improved version of the one-way to hiding (O2H) Theorem by Unruh (J. ACM 2015). Our new O2H Theorem gives higher flexibility (arbitrary joint distributions of oracles and inputs, multiple reprogrammed points) as well as tighter bounds (removing square-root factors, taking parallelism into account). The improved O2H Theorem makes use of a new variant of quantum oracles, semi-classical oracles, where queries are partially measured. The new O2H Theorem allows us to get better security bounds in several public-key encryption schemes.
2019
CRYPTO
Quantum Indistinguishability of Random Sponges 📺
In this work we show that the sponge construction can be used to construct quantum-secure pseudorandom functions. As our main result we prove that random sponges are quantum indistinguishable from random functions. In this setting the adversary is given superposition access to the input-output behavior of the construction but not to the internal function. Our proofs hold under the assumption that the internal function is a random function or permutation. We then use this result to obtain a quantum-security version of a result by Andreeva, Daemen, Mennink, and Van Assche (FSE’15) which shows that a sponge that uses a secure PRP or PRF as internal function is a secure PRF. This result also proves that the recent attacks against CBC-MAC in the quantum-access model by Kaplan, Leurent, Leverrier, and Naya-Plasencia (Crypto’16) and Santoli and Schaffner (QIC’16) can be prevented by introducing a state with a non-trivial inner part. The proof of our main result is derived by analyzing the joint distribution of any $q$ input-output pairs. Our method analyzes the statistical behavior of the considered construction in great detail. The techniques used might prove useful in future analysis of different cryptographic primitives considering quantum adversaries. Using Zhandry’s PRF/PRP switching lemma, we then obtain that quantum indistinguishability also holds if the internal block function is a random permutation.
2019
CRYPTO
Revisiting Post-quantum Fiat-Shamir 📺
The Fiat-Shamir transformation is a useful approach to building non-interactive arguments (of knowledge) in the random oracle model. Unfortunately, existing proof techniques are incapable of proving the security of Fiat-Shamir in the quantum setting. The problem stems from (1) the difficulty of quantum rewinding, and (2) the inability of current techniques to adaptively program random oracles in the quantum setting. In this work, we show how to overcome the limitations above in many settings. In particular, we give mild conditions under which Fiat-Shamir is secure in the quantum setting. As an application, we show that existing lattice signatures based on Fiat-Shamir are secure without any modifications.
2019
CRYPTO
Security of the Fiat-Shamir Transformation in the Quantum Random-Oracle Model 📺
The famous Fiat-Shamir transformation turns any public-coin three-round interactive proof, i.e., any so-called $\Sigma$-protocol, into a non-interactive proof in the random-oracle model. We study this transformation in the setting of a quantum adversary that in particular may query the random oracle in quantum superposition. Our main result is a generic reduction that transforms any quantum dishonest prover attacking the Fiat-Shamir transformation in the quantum random-oracle model into a similarly successful quantum dishonest prover attacking the underlying $\Sigma$-protocol (in the standard model). Applied to the standard soundness and proof-of-knowledge definitions, our reduction implies that both these security properties, in both the computational and the statistical variant, are preserved under the Fiat-Shamir transformation even when allowing quantum attacks. Our result improves and completes the partial results that have been known so far, but it also proves wrong certain claims made in the literature. In the context of post-quantum secure signature schemes, our results imply that for any $\Sigma$-protocol that is a proof-of-knowledge against quantum dishonest provers (and that satisfies some additional natural properties), the corresponding Fiat-Shamir signature scheme is secure in the quantum random-oracle model. For example, we can conclude that the non-optimized version of Fish, which is the bare Fiat-Shamir variant of the NIST candidate Picnic, is secure in the quantum random-oracle model.
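As a reminder of what is being transformed, here is a toy Fiat-Shamir compilation of Schnorr's $\Sigma$-protocol in Python, with the (classical) random oracle modeled by a hash; the parameters are illustrative and insecure, and the quantum subtleties studied above concern superposition access to H.

import hashlib, random

# Toy parameters: p = 2q + 1 with q prime, g = 4 of order q mod p.
p, q, g = 2039, 1019, 4

x = random.randrange(q)                  # witness (discrete log)
y = pow(g, x, p)                         # statement

def H(*parts):                           # the "random oracle"
    data = b"|".join(str(v).encode() for v in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, y):
    r = random.randrange(q)
    a = pow(g, r, p)                     # commitment (first message)
    c = H(g, y, a)                       # challenge derived by hashing
    z = (r + c * x) % q                  # response (third message)
    return a, z

def verify(y, a, z):
    c = H(g, y, a)
    return pow(g, z, p) == a * pow(y, c, p) % p   # g^z = a * y^c mod p

a, z = prove(x, y)
assert verify(y, a, z)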
2019
CRYPTO
Unconditionally Secure Computation Against Low-Complexity Leakage 📺
We consider the problem of constructing leakage-resilient circuit compilers that are secure against global leakage functions with bounded output length. By global, we mean that the leakage can depend on all circuit wires and output a low-complexity function (represented as a multi-output Boolean circuit) applied on these wires. In this work, we design compilers both in the stateless (a.k.a. single-shot leakage) setting and the stateful (a.k.a. continuous leakage) setting that are unconditionally secure against $$\mathsf{AC}^0$$ leakage and similar low-complexity classes. In the stateless case, we show that the original private circuits construction of Ishai, Sahai, and Wagner (Crypto 2003) is actually secure against $$\mathsf{AC}^0$$ leakage. In the stateful case, we modify the construction of Rothblum (Crypto 2012), obtaining a simple construction with unconditional security. Prior works that designed leakage-resilient circuit compilers against $$\mathsf{AC}^0$$ leakage had to rely either on secure hardware components (Faust et al., Eurocrypt 2010, Miles-Viola, STOC 2013) or on (unproven) complexity-theoretic assumptions (Rothblum, Crypto 2012).
2019
CRYPTO
Tight Leakage-Resilient CCA-Security from Quasi-Adaptive Hash Proof System 📺
We propose the concept of quasi-adaptive hash proof system (QAHPS), where the projection key is allowed to depend on the specific language for which hash values are computed. We formalize leakage-resilient (LR) ardency for QAHPS by defining two statistical properties, namely LR-$$\langle \mathscr{L}_0, \mathscr{L}_1 \rangle$$-universality and LR-$$\langle \mathscr{L}_0, \mathscr{L}_1 \rangle$$-key-switching. We provide a generic approach to tightly leakage-resilient CCA (LR-CCA) secure public-key encryption (PKE) from LR-ardent QAHPS. Our approach is reminiscent of the seminal work of Cramer and Shoup (Eurocrypt’02), and employs three QAHPS schemes: one for generating a uniform string to hide the plaintext, and the other two for proving the well-formedness of the ciphertext. The LR-ardency of QAHPS is what makes the tight LR-CCA security possible. We give instantiations based on the standard k-Linear (k-LIN) assumptions over asymmetric and symmetric pairing groups, respectively, and obtain fully compact PKE with tight LR-CCA security. The security loss is $$O(\log Q_{e})$$, where $$Q_{e}$$ denotes the number of encryption queries. In particular, our tightly LR-CCA secure PKE instantiation from SXDH has only 4 group elements in the public key and 7 group elements in the ciphertext, and is thus the most efficient one known.
2019
CRYPTO
Non-malleable Secret Sharing in the Computational Setting: Adaptive Tampering, Noisy-Leakage Resilience, and Improved Rate 📺
We revisit the concept of non-malleable secret sharing (Goyal and Kumar, STOC 2018) in the computational setting. In particular, under the assumption of one-to-one one-way functions, we exhibit a computationally private, threshold secret sharing scheme satisfying all of the following properties.

Continuous non-malleability: No computationally-bounded adversary tampering independently with all the shares can produce mauled shares that reconstruct to a value related to the original secret. This holds even in case the adversary can tamper continuously, for an unbounded polynomial number of times, with the same target secret sharing, where the next sequence of tampering functions, as well as the subset of shares used for reconstruction, can be chosen adaptively based on the outcome of previous reconstructions.

Resilience to noisy leakage: Non-malleability holds even if the adversary can additionally leak information independently from all the shares. There is no bound on the length of leaked information, as long as the overall leakage does not decrease the min-entropy of each share by too much.

Improved rate: The information rate of our final scheme, defined as the ratio between the size of the message and the maximal size of a share, asymptotically approaches 1 when the message length goes to infinity.

Previous constructions achieved information-theoretic security, sometimes even for arbitrary access structures, at the price of at least one of the following limitations: (i) non-malleability only holds against one-time tampering attacks; (ii) non-malleability holds against a bounded number of tampering attacks, but both the choice of the tampering functions and of the sets used for reconstruction is non-adaptive; (iii) the information rate asymptotically approaches zero; (iv) no security guarantee in the presence of leakage.
2019
CRYPTO
Leakage Resilient Secret Sharing and Applications 📺
A secret sharing scheme allows a dealer to share a secret among a set of n parties such that any authorized subset of the parties can recover the secret, while any unauthorized subset learns no information about the secret. A leakage-resilient secret sharing scheme (introduced in independent works by Goyal and Kumar, STOC ’18 and Benhamouda, Degwekar, Ishai and Rabin, CRYPTO ’18) additionally requires the secrecy to hold against every unauthorized set of parties even if they obtain some bounded leakage from every other share. The leakage is said to be local if it is computed independently for each share. So far, the only known constructions of local leakage resilient secret sharing schemes are for threshold access structures for very low (O(1)) or very high ($$n - o(\log n)$$) thresholds. In this work, we give a compiler that takes a secret sharing scheme for any monotone access structure and produces a local leakage resilient secret sharing scheme for the same access structure, with only a constant-factor asymptotic blow-up in the sizes of the shares. Furthermore, the resultant secret sharing scheme has optimal leakage-resilience rate, i.e., the ratio between the leakage tolerated and the size of each share can be made arbitrarily close to 1. Using this secret sharing scheme as the main building block, we obtain the following results:

Rate Preserving Non-Malleable Secret Sharing. We give a compiler that takes any secret sharing scheme for a 4-monotone access structure (a 4-monotone access structure has the property that any authorized set has size at least 4) with rate R and converts it into a non-malleable secret sharing scheme for the same access structure with rate $$\varOmega(R)$$. The previous such non-zero rate construction (Badrinarayanan and Srinivasan, EUROCRYPT ’19) achieved a rate of $$\varTheta(R/{t_{\max}\log^2 n})$$, where $$t_{\max}$$ is the maximum size of any minimal set in the access structure. As a special case, for any threshold $$t \ge 4$$ and an arbitrary $$n \ge t$$, we get the first constant-rate construction of t-out-of-n non-malleable secret sharing.

Leakage-Tolerant Multiparty Computation for General Interaction Patterns. For any function f, we give a reduction from constructing a leakage-tolerant secure multi-party computation protocol for computing f that obeys any given interaction pattern to constructing a secure (but not necessarily leakage-tolerant) protocol for a related function that obeys the star interaction pattern. Together with the known results for the star interaction pattern, this gives leakage-tolerant MPC for any interaction pattern with statistical/computational security. This improves upon the result of Halevi et al. (ITCS 2016), who presented such a reduction in a leak-free environment.
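As background for the compilers above, the following is a minimal t-out-of-n Shamir scheme over a prime field, the kind of baseline secret sharing scheme such compilers take as input. The field modulus and helper names are illustrative only.

```python
# A minimal t-out-of-n Shamir secret sharing sketch over a prime field.
import random

P = 2**61 - 1                     # a Mersenne prime field modulus (toy choice)

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    # A random degree-(t-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at 0 from any t shares.
    secret = 0
    for i, yi in shares:
        num, den = 1, 1
        for j, _ in shares:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(42, t=3, n=5)
assert reconstruct(shares[:3]) == 42      # any 3 of the 5 shares suffice
```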
2019
CRYPTO
Stronger Leakage-Resilient and Non-Malleable Secret Sharing Schemes for General Access Structures 📺
In this work we present a collection of compilers that take secret sharing schemes for an arbitrary access structure as input and produce either leakage-resilient or non-malleable secret sharing schemes for the same access structure. A leakage-resilient secret sharing scheme hides the secret from an adversary, who has access to an unqualified set of shares, even if the adversary additionally obtains some size-bounded leakage from all other secret shares. A non-malleable secret sharing scheme guarantees that a secret that is reconstructed from a set of tampered shares is either equal to the original secret or completely unrelated. To the best of our knowledge we present the first generic compiler for leakage-resilient secret sharing for general access structures. In the case of non-malleable secret sharing, we strengthen previous definitions, provide separations between them, and construct a non-malleable secret sharing scheme for general access structures that fulfills the strongest definition with respect to independent share tampering functions. More precisely, our scheme is secure against concurrent tampering: The adversary is allowed to (non-adaptively) tamper with the shares multiple times, and in each tampering attempt can freely choose the qualified set of shares to be used by the reconstruction algorithm to reconstruct the tampered secret. This is a strong analogue of the multiple-tampering setting for split-state non-malleable codes and extractors. We show how to use leakage-resilient and non-malleable secret sharing schemes to construct leakage-resilient and non-malleable threshold signatures. Classical threshold signatures make it possible to distribute the secret key of a signature scheme among a set of parties, such that certain qualified subsets can sign messages. We construct threshold signature schemes that remain secure even if an adversary leaks from or tampers with all secret shares.
2019
CRYPTO
Memory-Hard Functions from Cryptographic Primitives 📺
Memory-hard functions (MHFs) are moderately hard functions which enforce evaluation costs both in terms of time and memory (often, in the form of a trade-off). They are used, e.g., for password protection, password-based key derivation, and within cryptocurrencies, and have received a considerable amount of theoretical scrutiny over the last few years. However, existing analyses see MHFs as modes of operation of some underlying hash function $$\mathcal{H}$$, modeled as a monolithic random oracle. This is, however, a very strong assumption, as such hash functions are built from much simpler primitives, following somewhat ad-hoc design paradigms. This paper initiates the study of how to securely instantiate $$\mathcal{H}$$ within MHF designs using common cryptographic primitives like block ciphers, compression functions, and permutations. Security here is in a model in which the adversary has parallel access to an idealized version of the underlying primitive. We provide provably memory-hard constructions from all the aforementioned primitives. Our results are generic, in that we rely on hard-to-pebble graphs designed in prior works to obtain our constructions. One particular challenge we encounter is that $$\mathcal{H}$$ is usually required to have large outputs (to increase memory hardness without changing the description size of MHFs), whereas the underlying primitives generally have small output sizes.
2019
CRYPTO
Data-Independent Memory Hard Functions: New Attacks and Stronger Constructions 📺
Memory-hard functions (MHFs) are a key cryptographic primitive underlying the design of moderately expensive password hashing algorithms and egalitarian proofs of work. Over the past few years several increasingly stringent goals for an MHF have been proposed, including the requirement that the MHF have high sequential space-time (ST) complexity, parallel space-time complexity, amortized area-time (aAT) complexity, and sustained space complexity. Data-independent memory-hard functions (iMHFs) are of special interest in the context of password hashing as they naturally resist side-channel attacks. iMHFs can be specified using a directed acyclic graph (DAG) G with $$N=2^n$$ nodes and low indegree, and the complexity of the iMHF can be analyzed using a pebbling game. Recently, Alwen et al. [ABH17] constructed a DAG called DRSample that has aAT complexity at least $$\varOmega(N^2/\log N)$$. Asymptotically DRSample outperformed all prior iMHF constructions including Argon2i, winner of the password hashing competition (aAT cost $$\mathcal{O}(N^{1.767})$$), though the constants in these bounds are poorly understood. We show that the greedy pebbling strategy of Boneh et al. [BCS16] is particularly effective against DRSample, e.g., the aAT cost is $$\mathcal{O}(N^2/\log N)$$. In fact, our empirical analysis reverses the prior conclusion of Alwen et al. that DRSample provides stronger resistance to known pebbling attacks for practical values of $$N \le 2^{24}$$. We construct a new iMHF candidate (DRSample+BRG) by using the bit-reversal graph to extend DRSample. We then prove that the construction is asymptotically optimal under every MHF criterion, and we empirically demonstrate that our iMHF provides the best resistance to known pebbling attacks. For example, we show that any parallel pebbling attack either has aAT cost $$\omega(N^2)$$ or requires at least $$\varOmega(N)$$ steps with $$\varOmega(N/\log N)$$ pebbles on the DAG. This makes our construction the first practical iMHF with a strong sustained space-complexity guarantee and immediately implies that any parallel pebbling has aAT complexity $$\varOmega(N^2/\log N)$$. We also prove that any sequential pebbling (including the greedy pebbling attack) has aAT cost $$\varOmega(N^2)$$ and, if a plausible conjecture holds, any parallel pebbling has aAT cost $$\varOmega(N^2 \log \log N/\log N)$$, the best possible bound for an iMHF. We implement our new iMHF and demonstrate that it is just as fast as Argon2. Along the way we propose a simple modification to the Argon2 round function that increases an attacker’s aAT cost by nearly an order of magnitude without increasing running time on a CPU. Finally, we give a pebbling reduction that proves that in the parallel random oracle model (PROM) the cost of evaluating an iMHF like Argon2i or DRSample+BRG is given by the pebbling cost of the underlying DAG. Prior pebbling reductions assumed that the iMHF round function concatenates input labels before hashing and did not apply to practical iMHFs such as Argon2i, DRSample, or DRSample+BRG, where input labels are instead XORed together.
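The bit-reversal graph used to extend DRSample is easy to write down. The sketch below builds the standard BRG edge structure and evaluates labels with SHA-256 as a stand-in round function; it illustrates only the BRG layer, not the full DRSample+BRG candidate or the XOR-based Argon2-style labeling discussed above.

```python
# A minimal bit-reversal graph (BRG) sketch with a toy hashing round function.
import hashlib

def bit_reverse(i: int, n: int) -> int:
    return int(format(i, f"0{n}b")[::-1], 2)

def brg_parents(v: int, n: int) -> list[int]:
    # Nodes 0..2N-1: all nodes form a path; node N+i additionally reads the
    # bit-reversed source i, which is what forces sustained memory in pebbling.
    N = 1 << n
    parents = [v - 1] if v > 0 else []
    if v >= N:
        parents.append(bit_reverse(v - N, n))
    return parents

def evaluate(n: int, seed: bytes) -> bytes:
    N = 1 << n
    labels: list[bytes] = []
    for v in range(2 * N):
        data = seed + v.to_bytes(4, "big")
        for u in brg_parents(v, n):
            data += labels[u]            # label depends on all parent labels
        labels.append(hashlib.sha256(data).digest())
    return labels[-1]

print(evaluate(4, b"salt").hex())
```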
2019
CRYPTO
Simultaneous Amplification: The Case of Non-interactive Zero-Knowledge 📺
In this work, we explore the question of simultaneous privacy and soundness amplification for non-interactive zero-knowledge argument systems (NIZK). We show that any $$\delta_s$$-sound and $$\delta_z$$-zero-knowledge NIZK candidate satisfying $$\delta_s+\delta_z=1-\epsilon$$, for any constant $$\epsilon >0$$, can be turned into a computationally sound and zero-knowledge candidate with the only extra assumption of a subexponentially secure public-key encryption. We develop novel techniques to leverage the leakage simulation lemma (Jetchev-Pietrzak, TCC 2014) to argue amplification. A crucial component of our result is a new notion of secret sharing for $$\mathsf{NP}$$ instances, which we believe may be of independent interest. To achieve this result we analyze the following two transformations:

Parallel Repetition: We show that using parallel repetition any $$\delta_s$$-sound and $$\delta_z$$-zero-knowledge $$\mathsf{NIZK}$$ candidate can be turned into a (roughly) $$\delta^n_s$$-sound and $$1-(1-\delta_{z})^n$$-zero-knowledge candidate. Here n is the repetition parameter.

MPC-based Repetition: We propose a new transformation that amplifies zero-knowledge in the same way that parallel repetition amplifies soundness. We show that using this any $$\delta_s$$-sound and $$\delta_z$$-zero-knowledge $$\mathsf{NIZK}$$ candidate can be turned into a (roughly) $$1-(1-\delta_s)^n$$-sound and $$2\cdot \delta^n_{z}$$-zero-knowledge candidate.

We then show that applying these transformations in a zig-zag fashion yields our result. Finally, we also present a simple transformation which directly turns any $$\mathsf{NIZK}$$ candidate satisfying $$\delta_s,\delta_z<1/3 -1/\mathsf{poly}(\lambda)$$ into a secure one.
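The two bounds above are simple enough to tabulate. The toy calculation below shows the direction of each transformation: parallel repetition drives soundness error down while zero-knowledge error grows, and MPC-based repetition mirrors this. The paper's zig-zag alternates the two with carefully chosen parameters, which this sketch does not attempt.

```python
# A toy calculation of the two amplification bounds stated above.
def parallel_rep(ds: float, dz: float, n: int) -> tuple[float, float]:
    # Soundness error drops to ds^n; ZK error grows to 1 - (1 - dz)^n.
    return ds**n, 1 - (1 - dz)**n

def mpc_rep(ds: float, dz: float, n: int) -> tuple[float, float]:
    # Mirror image: soundness degrades while ZK error drops to 2 * dz^n.
    return 1 - (1 - ds)**n, 2 * dz**n

ds, dz = 0.55, 0.40                       # a candidate with ds + dz < 1
print(parallel_rep(ds, dz, 10))           # ~ (0.0025, 0.9940)
print(mpc_rep(ds, dz, 10))                # ~ (0.9997, 0.0002)
```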
2019
CRYPTO
The Privacy Blanket of the Shuffle Model 📺
This work studies differential privacy in the context of the recently proposed shuffle model. Unlike in the local model, where the server collecting privatized data from users can track back an input to a specific user, in the shuffle model users submit their privatized inputs to a server anonymously. This setup yields a trust model which sits in between the classical curator and local models for differential privacy. The shuffle model is the core idea in the Encode, Shuffle, Analyze (ESA) model introduced by Bittau et al. (SOSP 2017). Recent work by Cheu et al. (EUROCRYPT 2019) analyzes the differential privacy properties of the shuffle model and shows that in some cases shuffled protocols provide strictly better accuracy than local protocols. Additionally, Erlingsson et al. (SODA 2019) provide a privacy amplification bound quantifying the level of curator differential privacy achieved by the shuffle model in terms of the local differential privacy of the randomizer used by each user. In this context, we make three contributions. First, we provide an optimal single message protocol for summation of real numbers in the shuffle model. Our protocol is very simple and has better accuracy and communication than the protocols for this same problem proposed by Cheu et al. Optimality of this protocol follows from our second contribution, a new lower bound for the accuracy of private protocols for summation of real numbers in the shuffle model. The third contribution is a new amplification bound for analyzing the privacy of protocols in the shuffle model in terms of the privacy provided by the corresponding local randomizer. Our amplification bound generalizes the results by Erlingsson et al. to a wider range of parameters, and provides a whole family of methods to analyze privacy amplification in the shuffle model.
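A toy pipeline makes the trust model concrete. The sketch below uses randomized response as the local randomizer for a binary sum, shuffles the reports, and debiases the total; it illustrates the shuffle model and the "blanket" intuition only, and is not the paper's optimal real-summation protocol.

```python
# A toy shuffle-model pipeline for binary sums (randomized response).
import random

def local_randomizer(bit: int, gamma: float) -> int:
    # With probability gamma report a uniform bit: the "privacy blanket".
    return random.randint(0, 1) if random.random() < gamma else bit

def shuffle_and_collect(bits: list[int], gamma: float) -> list[int]:
    messages = [local_randomizer(b, gamma) for b in bits]
    random.shuffle(messages)      # the shuffler strips the link to users
    return messages

def debiased_sum(messages: list[int], gamma: float) -> float:
    # E[sum] = (1 - gamma) * S + gamma * n / 2, so invert that map.
    n = len(messages)
    return (sum(messages) - gamma * n / 2) / (1 - gamma)

bits = [random.randint(0, 1) for _ in range(10_000)]
est = debiased_sum(shuffle_and_collect(bits, gamma=0.2), gamma=0.2)
print(sum(bits), round(est))      # the estimate tracks the true sum
```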
2019
CRYPTO
Realizing Chosen Ciphertext Security Generically in Attribute-Based Encryption and Predicate Encryption 📺
We provide generic and black box transformations from any chosen plaintext secure Attribute-Based Encryption (ABE) or One-sided Predicate Encryption system into a chosen ciphertext secure system. Our transformation requires only the IND-CPA security of the original ABE scheme coupled with a pseudorandom generator (PRG) with a special security property. In particular, we consider a PRG with an n bit input $$s \in \{0,1\}^n$$ and $$n \cdot \ell$$ bit output $$y_1, \ldots , y_n$$ where each $$y_i$$ is an $$\ell$$ bit string. Then for a randomly chosen s the following two distributions should be computationally indistinguishable. In the first distribution $$r_{s_i, i} = y_i$$ and $$r_{\bar{s}_i, i}$$ is chosen randomly for $$i \in [n]$$. In the second distribution all $$r_{b, i}$$ are chosen randomly for $$i \in [n], b \in \{0,1\}$$. We show that such PRGs can be built from either the computational Diffie-Hellman assumption (in non-bilinear groups) or the Learning with Errors (LWE) assumption (and potentially other assumptions). Thus, one can transform any IND-CPA secure system into a chosen ciphertext secure one by adding either assumption. (Or by simply assuming an existing PRG is hinting secure.) In addition, our work provides a new approach and perspective for obtaining chosen ciphertext security in the basic case of public key encryption.
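The special PRG property is concrete enough to write out as a game. The sketch below builds the two 2 x n tables of blocks described above, with SHA-256 standing in for the PRG; security asks that the tables be computationally indistinguishable. All parameters are illustrative.

```python
# A sketch of the two distributions in the PRG property described above.
import hashlib, secrets

n, ell = 16, 32                                   # n blocks of ell bits each

def prg(seed: bytes) -> list[bytes]:
    # Expand the seed into n blocks y_1..y_n of ell bits (toy PRG).
    return [hashlib.sha256(seed + i.to_bytes(2, "big")).digest()[: ell // 8]
            for i in range(n)]

def real_distribution() -> list[list[bytes]]:
    seed = secrets.token_bytes(n // 8)
    s_bits = [(seed[i // 8] >> (i % 8)) & 1 for i in range(n)]
    y = prg(seed)
    table = [[secrets.token_bytes(ell // 8) for _ in range(n)] for _ in (0, 1)]
    for i in range(n):
        table[s_bits[i]][i] = y[i]                # r_{s_i, i} = y_i
    return table                                   # r_{1-s_i, i} stays random

def ideal_distribution() -> list[list[bytes]]:
    # All 2n blocks r_{b, i} are uniformly random.
    return [[secrets.token_bytes(ell // 8) for _ in range(n)] for _ in (0, 1)]
```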
2019
CRYPTO
Match Me if You Can: Matchmaking Encryption and Its Applications 📺
We introduce a new form of encryption that we name matchmaking encryption (ME). Using ME, sender S and receiver R (each with its own attributes) can both specify policies the other party must satisfy in order for the message to be revealed. The main security guarantee is that of privacy-preserving policy matching: during decryption nothing is leaked beyond the fact that a match occurred or did not occur. ME opens up new ways of secretly communicating, and enables several new applications where both participants can specify fine-grained access policies to encrypted data. For instance, in social matchmaking, S can encrypt a file containing his/her personal details and specify a policy so that the file can be decrypted only by his/her ideal partner. On the other end, a receiver R will be able to decrypt the file only if S corresponds to his/her ideal partner defined through a policy. On the theoretical side, we define security for ME, as well as provide generic frameworks for constructing ME from functional encryption. These constructions must face the technical challenge of simultaneously checking the policies chosen by S and R, to avoid any leakage. On the practical side, we construct an efficient identity-based scheme for equality policies, with provable security in the random oracle model under the standard BDH assumption. We implement and evaluate our scheme and provide experimental evidence that our construction is practical. We also apply identity-based ME to a concrete use case, in particular for creating an anonymous bulletin board over a Tor network.
2019
CRYPTO
ABE for DFA from k-Lin 📺
We present the first attribute-based encryption (ABE) scheme for deterministic finite automaton (DFA) based on static assumptions in bilinear groups; this resolves an open problem posed by Waters (CRYPTO 2012). Our main construction achieves selective security against unbounded collusions under the standard k-linear assumption in prime-order bilinear groups, whereas previous constructions all rely on q-type assumptions.
2019
CRYPTO
Attribute Based Encryption (and more) for Nondeterministic Finite Automata from LWE 📺
Constructing Attribute Based Encryption (ABE) [56] for uniform models of computation from standard assumptions is an important problem, about which very little is known. The only known ABE schemes in this setting that (i) avoid reliance on multilinear maps or indistinguishability obfuscation, (ii) support unbounded length inputs and (iii) permit unbounded key requests to the adversary in the security game, are by Waters (Crypto 2012) [57] and its variants. Waters provided the first ABE for Deterministic Finite Automata (DFA) satisfying the above properties, from a parametrized or “q-type” assumption over bilinear maps. Generalizing this construction to Nondeterministic Finite Automata (NFA) was left as an explicit open problem in the same work, and has seen no progress to date. Constructions from other assumptions, such as more standard pairing-based assumptions or lattice-based assumptions, have also proved elusive. In this work, we construct the first symmetric key attribute based encryption scheme for nondeterministic finite automata (NFA) from the learning with errors (LWE) assumption. Our scheme supports unbounded length inputs as well as unbounded length machines. In more detail, secret keys in our construction are associated with an NFA M of unbounded length, ciphertexts are associated with a tuple $$(\mathbf {x}, m)$$ where $$\mathbf {x}$$ is a public attribute of unbounded length and m is a secret message bit, and decryption recovers m if and only if $$M(\mathbf {x})=1$$. Further, we leverage our ABE to achieve (restricted notions of) attribute hiding analogous to the circuit setting, obtaining the first predicate encryption and bounded key functional encryption schemes for NFA from LWE. We achieve machine hiding in the single/bounded key setting to obtain the first reusable garbled NFA from standard assumptions. In terms of lower bounds, we show that secret key functional encryption even for DFAs, with security against unbounded key requests, implies indistinguishability obfuscation ($$\mathsf {iO}$$) for circuits; this suggests a barrier in achieving full-fledged functional encryption for NFA.
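The decryption condition $$M(\mathbf {x})=1$$ refers to ordinary NFA acceptance, which the following few lines make explicit via the standard subset traversal; the toy automaton is chosen arbitrarily for illustration.

```python
# A minimal NFA acceptance check: decryption above succeeds iff M(x) = 1.
def nfa_accepts(delta: dict, start: set, accept: set, x: str) -> bool:
    states = set(start)
    for symbol in x:
        # Nondeterminism: follow every possible transition simultaneously.
        states = {t for s in states for t in delta.get((s, symbol), set())}
    return bool(states & accept)

# Toy NFA over {0,1} accepting strings whose second-to-last symbol is 1.
delta = {("a", "0"): {"a"}, ("a", "1"): {"a", "b"},
         ("b", "0"): {"c"}, ("b", "1"): {"c"}}
assert nfa_accepts(delta, {"a"}, {"c"}, "0110")
assert not nfa_accepts(delta, {"a"}, {"c"}, "0101")
```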
2019
CRYPTO
The Distinction Between Fixed and Random Generators in Group-Based Assumptions 📺
There is surprisingly little consensus on the precise role of the generator g in group-based assumptions such as DDH. Some works consider g to be a fixed part of the group description, while others take it to be random. We study this subtle distinction from a number of angles. In the generic group model, we demonstrate the plausibility of groups in which random-generator DDH (resp. CDH) is hard but fixed-generator DDH (resp. CDH) is easy. We observe that such groups have interesting cryptographic applications. We find that seemingly tight generic lower bounds for the Discrete-Log and CDH problems with preprocessing (Corrigan-Gibbs and Kogan, Eurocrypt 2018) are not tight in the sub-constant success probability regime if the generator is random. We resolve this by proving tight lower bounds for the random generator variants; our results formalize the intuition that using a random generator will reduce the effectiveness of preprocessing attacks. We observe that DDH-like assumptions in which exponents are drawn from low-entropy distributions are particularly sensitive to the fixed- vs. random-generator distinction. Most notably, we discover that the Strong Power DDH assumption of Komargodski and Yogev (Eurocrypt 2018) used for non-malleable point obfuscation is in fact false precisely because it requires a fixed generator. In response, we formulate an alternative fixed-generator assumption that suffices for a new construction of non-malleable point obfuscation, and we prove the assumption holds in the generic group model. We also give a generic group proof for the security of fixed-generator, low-entropy DDH (Canetti, Crypto 1997).
2019
CRYPTO
Unifying Computational Entropies via Kullback–Leibler Divergence 📺
We introduce hardness in relative entropy, a new notion of hardness for search problems which on the one hand is satisfied by all one-way functions and on the other hand implies both next-block pseudoentropy and inaccessible entropy, two forms of computational entropy used in recent constructions of pseudorandom generators and statistically hiding commitment schemes, respectively. Thus, hardness in relative entropy unifies the latter two notions of computational entropy and sheds light on the apparent “duality” between them. Additionally, it yields a more modular and illuminating proof that one-way functions imply next-block inaccessible entropy, similar in structure to the proof that one-way functions imply next-block pseudoentropy (Vadhan and Zheng, STOC ‘12).
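For reference, the relative entropy in the title is the standard Kullback-Leibler divergence between distributions X and Y:

```latex
D_{\mathrm{KL}}(X \,\|\, Y) \;=\; \sum_{x} \Pr[X = x]\,\log\frac{\Pr[X = x]}{\Pr[Y = x]}
```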
2019
CRYPTO
Trapdoor Hash Functions and Their Applications 📺
We introduce a new primitive, called trapdoor hash functions (TDH), which are hash functions $$\mathsf{H}: \{0,1\}^n \rightarrow \{0,1\}^\lambda$$ with additional trapdoor function-like properties. Specifically, given an index $$i\in [n]$$, TDHs allow for sampling an encoding key $$\mathsf{ek}$$ (that hides i) along with a corresponding trapdoor. Furthermore, given $$\mathsf{H}(x)$$, a hint value $$\mathsf{E}(\mathsf{ek},x)$$, and the trapdoor corresponding to $$\mathsf{ek}$$, the $$i^{th}$$ bit of x can be efficiently recovered. In this setting, one of our main questions is: how small can the hint value $$\mathsf{E}(\mathsf{ek},x)$$ be? We obtain constructions where the hint is only one bit long based on DDH, QR, DCR, or LWE. This primitive opens a floodgate of applications for low-communication secure computation. We mainly focus on two-message protocols between a receiver and a sender, with private inputs x and y, respectively, where the receiver should learn f(x, y). We wish to optimize the (download) rate of such protocols, namely the asymptotic ratio between the size of the output and the sender’s message. Using TDHs, we obtain:

1. The first protocols for (two-message) rate-1 string OT based on DDH, QR, or LWE. This has several useful consequences, such as:
(a) The first constructions of PIR with communication cost poly-logarithmic in the database size based on DDH or QR. These protocols are in fact rate-1 when considering block PIR.
(b) The first constructions of a semi-compact homomorphic encryption scheme for branching programs, where the encrypted output grows only with the program length, based on DDH or QR.
(c) The first constructions of lossy trapdoor functions with input-to-output ratio approaching 1 based on DDH, QR, or LWE.
(d) The first constant-rate LWE-based construction of a 2-message “statistically sender-private” OT protocol in the plain model.
2. The first rate-1 protocols (under any assumption) for n parallel OTs and matrix-vector products from DDH, QR, or LWE.

We further consider the setting where f evaluates a RAM program y with running time $$T\ll |x|$$ on x. We obtain the first protocols with communication sublinear in the size of x, namely $$T\cdot \sqrt{|x|}$$ or $$T\cdot |x|^{1/3}$$, based on DDH or pairings (and correlated-input secure hash functions), respectively.
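The syntax is easier to parse with code in hand. The following deliberately insecure toy only pins down the TDH interface described above (sample an encoding key for index i, compute a one-bit hint, decode the i-th bit using the trapdoor); a real encoding key must computationally hide i, which this toy plainly does not, and the real constructions derive the hint from DDH/QR/DCR/LWE structure.

```python
# An insecure toy meant only to pin down the TDH syntax; NOT a construction.
import hashlib, secrets

def H(x: bytes) -> bytes:                     # the hash function itself
    return hashlib.sha256(x).digest()

def sample_key(i: int) -> tuple[dict, dict]:
    # Toy only: a real ek hides i; here it plainly does not.
    return {"i": i}, {"i": i}                 # (ek, trapdoor)

def hint(ek: dict, x: bytes) -> int:
    # One-bit hint E(ek, x), the object the paper tries to minimize.
    byte, off = divmod(ek["i"], 8)
    return (x[byte] >> off) & 1

def decode(td: dict, digest: bytes, e: int) -> int:
    # Given H(x), the hint, and the trapdoor, recover the i-th bit of x.
    return e                                  # trivial in this toy

x = secrets.token_bytes(8)
ek, td = sample_key(i=13)
assert decode(td, H(x), hint(ek, x)) == (x[1] >> 5) & 1
```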
2019
CRYPTO
CCA Security and Trapdoor Functions via Key-Dependent-Message Security 📺
We study the relationship among public-key encryption (PKE) satisfying indistinguishability against chosen plaintext attacks (IND-CPA security), that against chosen ciphertext attacks (IND-CCA security), and trapdoor functions (TDF). Specifically, we aim at finding a unified approach and some additional requirement to realize IND-CCA secure PKE and TDF based on IND-CPA secure PKE, and show the following two main results. As the first main result, we show how to achieve IND-CCA security via a weak form of key-dependent-message (KDM) security. More specifically, we construct an IND-CCA secure PKE scheme based on an IND-CPA secure PKE scheme and a secret-key encryption (SKE) scheme satisfying one-time KDM security with respect to projection functions (projection-KDM security). Projection functions are very simple functions with respect to which KDM security has been widely studied. Since the existence of projection-KDM secure PKE implies that of the above two building blocks, as a corollary of this result, we see that the existence of IND-CCA secure PKE is implied by that of projection-KDM secure PKE. As the second main result, we extend the above construction of IND-CCA secure PKE into that of TDF by additionally requiring a mild requirement for each building block. Our TDF satisfies adaptive one-wayness. We can instantiate our TDF based on a wide variety of computational assumptions. Especially, we obtain the first TDF (with adaptive one-wayness) based on the sub-exponential hardness of the constant-noise learning-parity-with-noise (LPN) problem.
2019
CRYPTO
Zero-Knowledge Proofs on Secret-Shared Data via Fully Linear PCPs 📺
We introduce and study the notion of fully linear probabilistically checkable proof systems. In such a proof system, the verifier can make a small number of linear queries that apply jointly to the input and a proof vector. Our new type of proof system is motivated by applications in which the input statement is not fully available to any single verifier, but can still be efficiently accessed via linear queries. This situation arises in scenarios where the input is partitioned or secret-shared between two or more parties, or alternatively is encoded using an additively homomorphic encryption or commitment scheme. This setting appears in the context of secure messaging platforms, verifiable outsourced computation, PIR writing, private computation of aggregate statistics, and secure multiparty computation (MPC). In all these applications, there is a need for fully linear proof systems with short proofs. While several efficient constructions of fully linear proof systems are implicit in the interactive proofs literature, many questions about their complexity are open. We present several new constructions of fully linear zero-knowledge proof systems with sublinear proof size for “simple” or “structured” languages. For example, in the non-interactive setting of fully linear PCPs, we show how to prove that an input vector $$x\in {\mathbb {F}}^n$$, for a finite field $${\mathbb {F}}$$, satisfies a single degree-2 equation with a proof of size $$O(\sqrt{n})$$ and $$O(\sqrt{n})$$ linear queries, which we show to be optimal. More generally, for languages that can be recognized by systems of constant-degree equations, we can reduce the proof size to $$O(\log n)$$ at the cost of $$O(\log n)$$ rounds of interaction. We use our new proof systems to construct new short zero-knowledge proofs on distributed and secret-shared data. These proofs can be used to improve the performance of the example systems mentioned above. Finally, we observe that zero-knowledge proofs on distributed data provide a general-purpose tool for protecting MPC protocols against malicious parties. Applying our short fully linear PCPs to “natural” MPC protocols in the honest-majority setting, we can achieve unconditional protection against malicious parties with sublinear additive communication cost. We use this to improve the communication complexity of recent honest-majority MPC protocols. For instance, using any pseudorandom generator, we obtain a 3-party protocol for Boolean circuits in which the amortized communication cost is only one bit per AND gate per party (compared to 10 bits in the best previous protocol), matching the best known protocols for semi-honest parties.
2019
CRYPTO
Non-Uniformly Sound Certificates with Applications to Concurrent Zero-Knowledge 📺
We introduce the notion of non-uniformly sound certificates: succinct single-message (unidirectional) argument systems that satisfy a “best-possible security” against non-uniform polynomial-time attackers. In particular, no polynomial-time attacker with s bits of non-uniform advice can find significantly more than s accepting proofs for false statements. Our first result is a construction of non-uniformly sound certificates for all $$\mathbf{NP}$$ in the random oracle model, where the attacker’s advice can depend arbitrarily on the random oracle. We next show that the existence of non-uniformly sound certificates for $$\mathbf{P}$$ (and collision resistant hash functions) yields a public-coin constant-round fully concurrent zero-knowledge argument for $$\mathbf{NP}$$.
2019
CRYPTO
On Round Optimal Statistical Zero Knowledge Arguments 📺
We construct the first three message statistical zero knowledge arguments for all of NP, matching the known lower bound. We do so based on keyless multi-collision resistant hash functions and the Learning with Errors assumption—the same assumptions used to obtain round optimal computational zero knowledge. The main component in our construction is a statistically witness indistinguishable argument of knowledge based on a new notion of statistically hiding commitments with subset opening.
2019
CRYPTO
It Wasn’t Me! 📺
Ring signatures, introduced by [RST01], are a variant of digital signatures which certify that one among a particular set of parties has endorsed a message while hiding which party in the set was the signer. Ring signatures are designed to allow anyone to attach anyone else’s name to a signature, as long as the signer’s own name is also attached. But what guarantee do ring signatures provide if a purported signatory wishes to denounce a signed message—or alternatively, if a signatory wishes to later come forward and claim ownership of a signature? Prior security definitions for ring signatures do not give a conclusive answer to this question: under most existing definitions, the guarantees could go either way. That is, it is consistent with some standard definitions that a non-signer might be able to repudiate a signature that he did not produce, or that this might be impossible. Similarly, a signer might be able to later convincingly claim that a signature he produced is indeed his own, or not. Any of these guarantees might be desirable. For instance, a whistleblower might have reason to want to later claim an anonymously released signature, or a person falsely implicated in a crime associated with a ring signature might wish to denounce the signature that is framing them and damaging their reputation. In other circumstances, it might be desirable that even under duress, a member of a ring cannot produce proof that he did or did not sign a particular signature. In any case, a guarantee one way or the other seems highly desirable. In this work, we formalize definitions and give constructions of the new notions of repudiable, unrepudiable, claimable, and unclaimable ring signatures. Our repudiable construction is based on VRFs, which are implied by several number-theoretic assumptions (including strong RSA or bilinear maps); our claimable construction is a black-box transformation from any standard ring signature scheme to a claimable one; and our unclaimable construction is derived from the lattice-based ring signatures of [BK10], which rely on hardness of SIS. Our repudiable construction also provides a new construction of standard ring signatures.
2019
CRYPTO
Two-Party ECDSA from Hash Proof Systems and Efficient Instantiations 📺
ECDSA is a widely adopted digital signature standard. Unfortunately, efficient distributed variants of this primitive are notoriously hard to achieve, and known solutions often require expensive zero knowledge proofs to deal with malicious adversaries. For the two-party case, Lindell [Lin17] recently managed to get an efficient solution which, to achieve simulation-based security, relies on an interactive, non-standard assumption on Paillier’s cryptosystem. In this paper we generalize Lindell’s solution using hash proof systems. The main advantage of our generic method is that it results in a simulation-based security proof without resorting to non-standard interactive assumptions. Moving to concrete constructions, we show how to instantiate our framework using class groups of imaginary quadratic fields. Our implementations show that the practical impact of dropping such interactive assumptions is minimal. Indeed, while for 128-bit security our scheme is marginally slower than Lindell’s, for 256-bit security it turns out to be better both in key generation and in signing time. Moreover, in terms of communication cost, our implementation significantly reduces both the number of rounds and the number of transmitted bits in all cases.
2019
CRYPTO
Asymmetric Message Franking: Content Moderation for Metadata-Private End-to-End Encryption 📺
Content moderation is crucial for stopping abusive and harassing messages in online platforms. Existing moderation mechanisms, such as message franking, require platform providers to be able to associate user identifiers to encrypted messages. These mechanisms fail in metadata-private messaging systems, such as Signal, where users can hide their identities from platform providers. The key technical challenge preventing moderation is achieving cryptographic accountability while preserving deniability. In this work, we resolve this tension with a new cryptographic primitive: asymmetric message franking (AMF) schemes. We define strong security notions for AMF schemes, including the first formal treatment of deniability in moderation settings. We then construct, analyze, and implement an AMF scheme that is fast enough to use for content moderation of metadata-private messaging.
2019
CRYPTO
Statistical Zeroizing Attack: Cryptanalysis of Candidates of BP Obfuscation over GGH15 Multilinear Map 📺
We present a new cryptanalytic algorithm on obfuscations based on the GGH15 multilinear map. Our algorithm, the statistical zeroizing attack, directly distinguishes two distributions of obfuscations while following the zeroizing attack paradigm, that is, it uses evaluations of zeros of obfuscated programs. Our attack breaks the recent indistinguishability obfuscation candidate suggested by Chen et al. (CRYPTO’18) for the optimal parameter settings. More precisely, we show that there are two functionally equivalent branching programs whose CVW obfuscations can be efficiently distinguished by computing the sample variance of evaluations. This statistical attack gives a new perspective on the security of indistinguishability obfuscation: to ensure security, one should consider the shape of the distributions of evaluations of the obfuscated programs. In other words, while most of the previous (weak) security proofs have been studied with respect to the algebraic attack model or ideal model, our attack shows that this algebraic security is not enough to achieve indistinguishability obfuscation. In particular, we show that the obfuscation scheme suggested by Bartusek et al. (TCC’18) does not achieve the desired security in a certain parameter regime, in which their algebraic security proof still holds. The correctness of statistical zeroizing attacks holds under a mild assumption on the preimage sampling algorithm with a lattice trapdoor. We experimentally verify this assumption for the implemented obfuscation by Halevi et al. (ACM CCS’17).
2019
CRYPTO
Indistinguishability Obfuscation Without Multilinear Maps: New Paradigms via Low Degree Weak Pseudorandomness and Security Amplification 📺
The existence of secure indistinguishability obfuscators ($$i\mathcal{O}$$) has far-reaching implications, significantly expanding the scope of problems amenable to cryptographic study. All known approaches to constructing $$i\mathcal{O}$$ rely on d-linear maps. While secure bilinear maps are well established in the cryptographic literature, the security of candidates for $$d>2$$ is poorly understood. We propose a new approach to constructing $$i\mathcal{O}$$ for general circuits. Unlike all previously known realizations of $$i\mathcal{O}$$, we avoid the use of d-linear maps of degree $$d \ge 3$$. At the heart of our approach is the assumption that a new weak pseudorandom object exists. We consider two related variants of these objects, which we call perturbation resilient generator ($$\varDelta$$RG) and pseudo flawed-smudging generator ($$\mathrm{PFG}$$), respectively. At a high level, both objects are polynomially expanding functions whose outputs partially hide (or smudge) small noise vectors when added to them. We further require that they are computable by a family of degree-3 polynomials over $$\mathbb{Z}$$. We show how they can be used to construct functional encryption schemes with weak security guarantees. Finally, we use novel amplification techniques to obtain full security. As a result, we obtain $$i\mathcal{O}$$ for general circuits assuming: subexponentially secure LWE, bilinear maps, $$\mathrm{poly}(\lambda)$$-secure 3-block-local PRGs, and $$\varDelta$$RGs or $$\mathrm{PFG}$$s.
2019
CRYPTO
Watermarking PRFs from Lattices: Stronger Security via Extractable PRFs 📺
A software watermarking scheme enables one to embed a “mark” (i.e., a message) within a program while preserving the program’s functionality. Moreover, there is an extraction algorithm that recovers an embedded message from a program. The main security goal is that it should be difficult to remove the watermark without destroying the functionality of the program. Existing constructions of watermarking focus on watermarking cryptographic functions like pseudorandom functions (PRFs); even in this setting, realizing watermarking from standard assumptions remains difficult. The first lattice-based construction of secret-key watermarking due to Kim and Wu (CRYPTO 2017) only ensures mark-unremovability against an adversary who does not have access to the mark-extraction oracle. The construction of Quach et al. (TCC 2018) achieves the stronger notion of mark-unremovability even if the adversary can make extraction queries, but has the drawback that the watermarking authority (who holds the watermarking secret key) can break pseudorandomness of all PRF keys in the family (including unmarked keys). In this work, we construct new lattice-based secret-key watermarking schemes for PRFs that both provide unremovability against adversaries that have access to the mark-extraction oracle and offer a strong and meaningful notion of pseudorandomness even against the watermarking authority (i.e., the outputs of unmarked keys are pseudorandom almost everywhere). Moreover, security of several of our schemes can be based on the hardness of computing nearly polynomial approximations to worst-case lattice problems. This is a qualitatively weaker assumption than that needed for existing lattice-based constructions of watermarking (that support message-embedding), all of which require quasi-polynomial approximation factors. Our constructions rely on a new cryptographic primitive called an extractable PRF, which may be of independent interest.
2019
CRYPTO
Watermarking Public-Key Cryptographic Primitives 📺
A software watermarking scheme enables users to embed a message or mark within a program while preserving its functionality. Moreover, it is difficult for an adversary to remove a watermark from a marked program without corrupting its behavior. Existing constructions of software watermarking from standard assumptions have focused exclusively on watermarking pseudorandom functions (PRFs). In this work, we study watermarking public-key primitives such as the signing key of a digital signature scheme or the decryption key of a public-key (predicate) encryption scheme. While watermarking public-key primitives might intuitively seem more challenging than watermarking PRFs, our constructions only rely on simple assumptions. Our watermarkable signature scheme can be built from the minimal assumption of one-way functions while our watermarkable public-key encryption scheme can be built from most standard algebraic assumptions that imply public-key encryption (e.g., factoring, discrete log, or lattice assumptions). Our schemes also satisfy a number of appealing properties: public marking, public mark-extraction, and collusion resistance. Our schemes are the first to simultaneously achieve all of these properties. The key enabler of our new constructions is a relaxed notion of functionality-preserving. While traditionally, we require that a marked program (approximately) preserve the input/output behavior of the original program, in the public-key setting, preserving the “functionality” does not necessarily require preserving the exact input/output behavior. For instance, if we want to mark a signing algorithm, it suffices that the marked algorithm still output valid signatures (even if those signatures might be different from the ones output by the unmarked algorithm). Similarly, if we want to mark a decryption algorithm, it suffices that the marked algorithm correctly decrypt all valid ciphertexts (but may behave differently from the unmarked algorithm on invalid or malformed ciphertexts). Our relaxed notion of functionality-preserving captures the essence of watermarking and still supports the traditional applications, but provides additional flexibility to enable new and simple realizations of this powerful cryptographic notion.
2019
CRYPTO
SpOT-Light: Lightweight Private Set Intersection from Sparse OT Extension 📺
We describe a novel approach for two-party private set intersection (PSI) with semi-honest security. Compared to existing PSI protocols, ours has a more favorable balance between communication and computation. Specifically, our protocol has the lowest monetary cost of any known PSI protocol, when run over the Internet using cloud-based computing services (taking into account current rates for CPU + data). On slow networks (e.g., 10 Mbps) our protocol is actually the fastest. Our novel underlying technique is a variant of oblivious transfer (OT) extension that we call sparse OT extension. Conceptually it can be thought of as a communication-efficient multipoint oblivious PRF evaluation. Our sparse OT technique relies heavily on manipulating high-degree polynomials over large finite fields (i.e., elements whose representation requires hundreds of bits). We introduce extensive algorithmic and engineering improvements for interpolation and multi-point evaluation of such polynomials, which we believe will be of independent interest. Finally, we present an extensive empirical comparison of state-of-the-art PSI protocols in several application scenarios and along several dimensions of measurement: running time, communication, peak memory consumption, and—arguably the most relevant metric for practice—monetary cost.
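The polynomial workload being optimized here is classical. As a point of reference, this is the naive schoolbook baseline: Horner evaluation of a degree-d polynomial at m points costs O(dm) field operations, which subproduct-tree or FFT-style algorithms improve on. The modulus below is illustrative.

```python
# Naive multi-point evaluation of a polynomial over a prime field (baseline).
P = 2**61 - 1

def horner_eval(coeffs: list[int], point: int) -> int:
    # coeffs[k] is the coefficient of x^k; Horner's rule, O(d) per point.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * point + c) % P
    return acc

def multipoint_eval(coeffs: list[int], points: list[int]) -> list[int]:
    # O(d * m) schoolbook cost; asymptotically faster methods exist.
    return [horner_eval(coeffs, pt) for pt in points]

print(multipoint_eval([5, 0, 3], [1, 2, 10]))  # 3x^2 + 5 at x = 1, 2, 10
```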
2019
CRYPTO
Universally Composable Secure Computation with Corrupted Tokens 📺
We introduce the corrupted token model. This model generalizes the tamper-proof token model proposed by Katz (EUROCRYPT ’07) relaxing the trust assumption on the honest behavior of tokens. Our model is motivated by the real-world practice of outsourcing hardware production to possibly corrupted manufacturers. We capture the malicious behavior of token manufacturers by allowing the adversary to corrupt the tokens of honest players at the time of their creation. We show that under minimal complexity assumptions, i.e., the existence of one-way functions, it is possible to UC-securely realize (a variant of) the tamper-proof token functionality of Katz in the corrupted token model with n stateless tokens assuming that the adversary corrupts at most $$n-1$$ of them (for any $$n>0$$). We apply this result to existing multi-party protocols in Katz’s model to achieve UC-secure MPC in the corrupted token model assuming only the existence of one-way functions. Finally, we show how to obtain the above results using tokens of small size that take only short inputs. The technique in this result can also be used to improve the assumption of UC-secure hardware obfuscation recently proposed by Nayak et al. (NDSS ’17). While their construction requires the existence of collision-resistant hash functions, we can obtain the same result from only one-way functions. Moreover using our main result we can improve the trust assumption on the tokens as well.
2019
CRYPTO
Reusable Non-Interactive Secure Computation 📺
We consider the problem of Non-Interactive Two-Party Secure Computation (NISC), where Rachel wishes to publish an encryption of her input x, in such a way that any other party, who holds an input y, can send her a single message which conveys to her the value f(x, y), and nothing more. We demand security against malicious parties. While such protocols are easy to construct using garbled circuits and general non-interactive zero-knowledge proofs, this approach inherently makes a non-black-box use of the underlying cryptographic primitives and is infeasible in practice. Ishai et al. (Eurocrypt 2011) showed how to construct NISC protocols that only use parallel calls to an ideal oblivious transfer (OT) oracle, and additionally make only a black-box use of any pseudorandom generator. Combined with the efficient 2-message OT protocol of Peikert et al. (Crypto 2008), this leads to a practical approach to NISC that has been implemented in subsequent works. However, a major limitation of all known OT-based NISC protocols is that they are subject to selective failure attacks that allow a malicious sender to entirely compromise the security of the protocol when the receiver’s first message is reused. Motivated by the failure of the OT-based approach, we consider the problem of basing reusable NISC on parallel invocations of a standard arithmetic generalization of OT known as oblivious linear-function evaluation (OLE). We obtain the following results:

We construct an information-theoretically secure reusable NISC protocol for arithmetic branching programs and general zero-knowledge functionalities in the OLE-hybrid model. Our zero-knowledge protocol only makes an absolute constant number of OLE calls per gate in an arithmetic circuit whose satisfiability is being proved. We also get reusable NISC in the OLE-hybrid model for general Boolean circuits using any one-way function.

We complement this by a negative result, showing that reusable NISC is impossible to achieve in the OT-hybrid model. This provides a formal justification for the need to replace OT by OLE.

We build a universally composable 2-message reusable OLE protocol in the CRS model that can be based on the security of Paillier encryption and requires only a constant number of modular exponentiations. This provides the first arithmetic analogue of the 2-message OT protocols of Peikert et al. (Crypto 2008).

By combining our NISC protocol in the OLE-hybrid model and the 2-message OLE protocol, we get protocols with new attractive asymptotic and concrete efficiency features. In particular, we get the first (designated-verifier) NIZK protocols for NP where, following a statement-independent preprocessing, both proving and verifying are entirely “non-cryptographic” and involve only a constant computational overhead. Furthermore, we get the first statistical designated-verifier NIZK argument for NP under an assumption related to factoring.
2019
CRYPTO
Efficient Pseudorandom Correlation Generators: Silent OT Extension and More 📺
Secure multiparty computation (MPC) often relies on correlated randomness for better efficiency and simplicity. This is particularly useful for MPC with no honest majority, where input-independent correlated randomness enables a lightweight “non-cryptographic” online phase once the inputs are known. However, since the amount of randomness typically scales with the circuit size of the function being computed, securely generating correlated randomness forms an efficiency bottleneck, involving a large amount of communication and storage. A natural tool for addressing the above limitations is a pseudorandom correlation generator (PCG). A PCG allows two or more parties to securely generate long sources of useful correlated randomness via a local expansion of correlated short seeds and no interaction. PCGs enable MPC with silent preprocessing, where a small amount of interaction used for securely sampling the seeds is followed by silent local generation of correlated pseudorandomness. A concretely efficient PCG for Vector-OLE correlations was recently obtained by Boyle et al. (CCS 2018) based on variants of the learning parity with noise (LPN) assumption over large fields. In this work, we initiate a systematic study of PCGs and present concretely efficient constructions for several types of useful MPC correlations. We obtain the following main contributions:

PCG foundations. We give a general security definition for PCGs. Our definition suffices for any MPC protocol satisfying a stronger security requirement that is met by existing protocols. We prove that a stronger security requirement is indeed necessary, and justify our PCG definition by ruling out a stronger and more natural definition.

Silent OT extension. We present the first concretely efficient PCG for oblivious transfer correlations. Its security is based on a variant of the binary LPN assumption and any correlation-robust hash function. We expect it to provide a faster alternative to the IKNP OT extension protocol (Crypto 2003) when communication is the bottleneck. We present several applications, including protocols for non-interactive zero-knowledge with bounded-reusable preprocessing from binary LPN, and concretely efficient related-key oblivious pseudorandom functions.

PCGs for simple 2-party correlations. We obtain PCGs for several other types of useful 2-party correlations, including (authenticated) one-time truth-tables and Beaver triples. While the latter PCGs are slower than our PCG for OT, they are still practically feasible. These PCGs are based on a host of assumptions and techniques, including specialized homomorphic secret sharing schemes and pseudorandom generators tailored to their structure.

Multiparty correlations. We obtain PCGs for multiparty correlations that can be used to make the (input-dependent) online communication of MPC protocols scale linearly with the number of parties, instead of quadratically.
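A sketch helps fix the Gen/Expand syntax. The toy below instantiates it with the trivial "common randomness" correlation, where both parties silently expand identical short seeds into identical long strings; producing useful correlations such as OT or Beaver triples requires the machinery described above, which this sketch does not attempt.

```python
# A PCG syntax sketch instantiated with the trivial common-randomness
# correlation; this pins down Gen/Expand and nothing more.
import hashlib, secrets

def gen() -> tuple[bytes, bytes]:
    # Short seeds, sampled with a small amount of (secure) interaction.
    seed = secrets.token_bytes(16)
    return seed, seed             # trivially correlated: identical seeds

def expand(seed: bytes, length: int) -> bytes:
    # Local, "silent" expansion: no further interaction between the parties.
    return hashlib.shake_256(seed).digest(length)

k0, k1 = gen()
r0, r1 = expand(k0, 1 << 20), expand(k1, 1 << 20)
assert r0 == r1                   # a long sample from the target correlation
```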
2019
CRYPTO
Non-interactive Non-malleability from Quantum Supremacy 📺
We construct non-interactive non-malleable commitments without setup in the plain model, under well-studied assumptions. First, we construct non-interactive non-malleable commitments w.r.t. commitment for $$\epsilon \log \log n$$ tags for a small constant $$\epsilon > 0$$, under the following assumptions:
1. Sub-exponential hardness of factoring or discrete log.
2. Quantum sub-exponential hardness of learning with errors (LWE).
Second, as our key technical contribution, we introduce a new tag amplification technique. We show how to convert any non-interactive non-malleable commitment w.r.t. commitment for $$\epsilon \log \log n$$ tags (for any constant $$\epsilon >0$$) into a non-interactive non-malleable commitment w.r.t. replacement for $$2^n$$ tags. This part only assumes the existence of sub-exponentially secure non-interactive witness indistinguishable (NIWI) proofs, which can be based on sub-exponential security of the decisional linear assumption. Interestingly, for the tag amplification technique, we crucially rely on the leakage lemma due to Gentry and Wichs (STOC 2011). For the construction of non-malleable commitments for $$\epsilon \log \log n$$ tags, we rely on quantum supremacy. This use of quantum supremacy in classical cryptography is novel, and we believe it will have future applications. We provide one such application to two-message witness indistinguishable (WI) arguments from (quantum) polynomial hardness assumptions.
2019
CRYPTO
Cryptographic Sensing 📺
Is it possible to measure a physical object in a way that makes the measurement signals unintelligible to an external observer? Alternatively, can one learn a natural concept by using a contrived training set that makes the labeled examples useless without the line of thought that has led to their choice? We initiate a study of “cryptographic sensing” problems of this type, presenting definitions, positive and negative results, and directions for further research.
2019
CRYPTO
Public-Key Cryptography in the Fine-Grained Setting 📺
Cryptography is largely based on unproven assumptions, which, while believable, might fail. Notably, if $$P = NP$$, or if we live in Pessiland, then all current cryptographic assumptions will be broken. A compelling question is whether any interesting cryptography might exist in Pessiland.

A natural approach to tackle this question is to base cryptography on an assumption from fine-grained complexity. Ball, Rosen, Sabin, and Vasudevan [BRSV'17] attempted this, starting from popular hardness assumptions, such as the Orthogonal Vectors (OV) Conjecture. They obtained problems that are hard on average, assuming that OV and other problems are hard in the worst case. They obtained proofs of work, and hoped to use their average-case hard problems to build a fine-grained one-way function. Unfortunately, they proved that constructing one using their approach would violate a popular hardness hypothesis. This motivates the search for other fine-grained average-case hard problems.

The main goal of this paper is to identify sufficient properties for a fine-grained average-case assumption that imply cryptographic primitives such as fine-grained public key cryptography (PKC). Our main contribution is a novel construction of a cryptographic key exchange, together with the definition of a small number of relatively weak structural properties, such that if a computational problem satisfies them, our key exchange has provable fine-grained security guarantees, based on the hardness of this problem. We then show that a natural and plausible average-case assumption for the key problem Zero-k-Clique from fine-grained complexity satisfies our properties. We also develop fine-grained one-way functions and hardcore bits even under these weaker assumptions.

Where previous works had to assume random oracles or the existence of strong one-way functions to get a key exchange computable in $$O(n)$$ time secure against $$O(n^2)$$ adversaries (see [Merkle'78] and [BGI'08]), our assumptions seem much weaker. Our key exchange has a similar gap between the computation of the honest party and the adversary as prior work, while being non-interactive, implying fine-grained PKC.
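For intuition, the quadratic honest-work/adversary-work gap cited from [Merkle'78] can be seen in a toy version of Merkle puzzles. This is an illustrative sketch with hash-based puzzles (parameters and names ours), not the paper's Zero-k-Clique construction:

```python
import hashlib, os, random

N = 2**10          # puzzle-space size; honest work ~N, eavesdropper work ~N^2
KEYLEN = 16

def keystream(salt, secret, n):
    return hashlib.sha256(salt + secret.to_bytes(4, "big")).digest()[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Alice: prepare N puzzles, each hiding (tag, index, session_key) under a
# secret drawn from a space of size N, so opening one puzzle costs ~N work.
keys, puzzles = [], []
for i in range(N):
    key = os.urandom(KEYLEN)
    keys.append(key)
    salt, secret = os.urandom(8), random.randrange(N)
    pt = b"OKOK" + i.to_bytes(4, "big") + key
    puzzles.append((salt, xor(pt, keystream(salt, secret, len(pt)))))

# Bob: pick one puzzle at random and brute-force its secret (~N work).
salt, ct = puzzles[random.randrange(N)]
for guess in range(N):
    pt = xor(ct, keystream(salt, guess, len(ct)))
    if pt.startswith(b"OKOK"):
        idx, session_key = int.from_bytes(pt[4:8], "big"), pt[8:]
        break

# Bob announces idx in the clear; an eavesdropper who saw all N puzzles must
# solve ~N/2 of them (~N^2 work in total) to learn which key Bob recovered.
assert session_key == keys[idx]
```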
2019
CRYPTO
Exploring Constructions of Compact NIZKs from Various Assumptions 📺
A non-interactive zero-knowledge (NIZK) protocol allows a prover to non-interactively convince a verifier of the truth of the statement without leaking any other information. In this study, we explore shorter NIZK proofs for all $$\mathbf{NP}$$ languages. Our primary interest is NIZK proofs from falsifiable pairing/pairing-free group-based assumptions. Thus far, NIZKs in the common reference string model (CRS-NIZKs) for $$\mathbf{NP}$$ based on falsifiable pairing-based assumptions all require a proof size at least as large as $$O(|C| \kappa)$$, where C is a circuit computing the $$\mathbf{NP}$$ relation and $$\kappa$$ is the security parameter. This holds true even for the weaker designated-verifier NIZKs (DV-NIZKs). Notably, constructing a (CRS, DV)-NIZK with proof size achieving an additive overhead $$O(|C|) + \mathsf{poly}(\kappa)$$, rather than a multiplicative overhead $$|C| \cdot \mathsf{poly}(\kappa)$$, based on any falsifiable pairing-based assumption is an open problem.

In this work, we present various techniques for constructing NIZKs with compact proofs, i.e., proofs smaller than $$O(|C|) + \mathsf{poly}(\kappa)$$, and make progress regarding the above situation. Our results are summarized below.

- We construct CRS-NIZK for all $$\mathbf{NP}$$ with proof size $$|C| + \mathsf{poly}(\kappa)$$ from a (non-static) falsifiable Diffie-Hellman (DH) type assumption over pairing groups. This is the first CRS-NIZK to achieve a compact proof without relying on either lattice-based assumptions or non-falsifiable assumptions. Moreover, a variant of our CRS-NIZK satisfies universal composability (UC) in the erasure-free adaptive setting. Although it is limited to $$\mathbf{NP}$$ relations in $$\mathbf{NC}^1$$, the proof size is $$|w| \cdot \mathsf{poly}(\kappa)$$ where w is the witness, and in particular, it matches the state-of-the-art UC-NIZK proposed by Cohen, shelat, and Wichs (CRYPTO'19) based on lattices.
- We construct (multi-theorem) DV-NIZKs for $$\mathbf{NP}$$ with proof size $$|C| + \mathsf{poly}(\kappa)$$ from the computational DH assumption over pairing-free groups. This is the first DV-NIZK that achieves a compact proof from a standard DH type assumption. Moreover, if we further assume the $$\mathbf{NP}$$ relation to be computable in $$\mathbf{NC}^1$$ and assume hardness of a (non-static) falsifiable DH type assumption over pairing-free groups, the proof size can be made as small as $$|w| + \mathsf{poly}(\kappa)$$.

Another related but independent issue is that all (CRS, DV)-NIZKs require the running time of the prover to be at least $$|C| \cdot \mathsf{poly}(\kappa)$$. Considering that there exist NIZKs with efficient verifiers whose running time is strictly smaller than |C|, it is an interesting problem whether we can construct prover-efficient NIZKs. To this end, we construct prover-efficient CRS-NIZKs for $$\mathbf{NP}$$ with compact proofs through a generic construction using laconic functional evaluation schemes (Quach, Wee, and Wichs (FOCS'18)). This is the first NIZK in any model where the running time of the prover is strictly smaller than the time it takes to compute the circuit C computing the $$\mathbf{NP}$$ relation.

Finally, perhaps of independent interest, we formalize the notion of homomorphic equivocal commitments, which we use as building blocks to obtain the first result, and show how to construct them from pairing-based assumptions.
2019
CRYPTO
New Constructions of Reusable Designated-Verifier NIZKs 📺
Non-interactive zero-knowledge arguments (NIZKs) for $$\mathsf{NP}$$ are an important cryptographic primitive, but we currently only have instantiations under a few specific assumptions. Notably, we are missing constructions from the learning with errors (LWE) assumption, the Diffie-Hellman (CDH/DDH) assumption, and the learning parity with noise (LPN) assumption.

In this paper, we study a relaxation of NIZKs to the designated-verifier setting (DV-NIZK), where a trusted setup generates a common reference string together with a secret key for the verifier. We want reusable schemes, which allow the verifier to reuse the secret key to verify many different proofs, and soundness should hold even if the malicious prover learns whether various proofs are accepted or rejected. Such reusable DV-NIZKs were recently constructed under the CDH assumption, but it was open whether they can also be constructed under LWE or LPN.

We also consider an extension of reusable DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK). In this setting, the only trusted setup consists of a common random string. However, there is also an additional untrusted setup in which the verifier chooses a public/secret key needed to generate/verify proofs, respectively. We require that zero-knowledge holds even if the public key is chosen maliciously by the verifier. Such reusable MDV-NIZKs were recently constructed under the "one-more CDH" assumption, but constructions under CDH/LWE/LPN remained open.

In this work, we give new constructions of (reusable) DV-NIZKs and MDV-NIZKs using generic primitives that can be instantiated under CDH, LWE, or LPN.
2019
CRYPTO
Scalable Zero Knowledge with No Trusted Setup 📺
One of the approaches to constructing zero knowledge (ZK) arguments relies on "PCP techniques" that date back to influential works from the early 1990's [Babai et al., Arora et al. 1991-2]. These techniques require only minimal cryptographic assumptions, namely, the existence of a family of collision-resistant hash functions [Kilian, STOC 1992], and achieve two remarkable properties: (i) all messages generated by the verifier are public random coins, and (ii) total verification time is merely poly-logarithmic in the time needed to naïvely execute the computation being verified [Babai et al., STOC 1991].

Those early constructions were never realized in code, mostly because proving time was too large. To address this, the model of interactive oracle proofs (IOPs), which generalizes the PCP model, was recently suggested. Proving time for ZK-IOPs was reduced to quasi-linear, even for problems that require nondeterministic exponential time to decide [Ben-Sasson et al., TCC 2016, ICALP 2017].

Despite these recent advances, it was still not clear whether ZK-IOP systems can lead to concretely efficient succinct argument systems. Our main claim is that this is indeed the case. We present a new construction of an IOP of knowledge (which we call a zk-STIK) that improves, asymptotically, on the state of the art: for log-space computations of length T it is the first to achieve $$O(T \log T)$$ arithmetic prover complexity and $$O(\log T)$$ verifier arithmetic complexity. Prior IOPs had additional $$\mathsf{poly}\log T$$ factors in both prover and verifier. Additionally, we report a C++ realization of this system (which we call libSTARK). Compared to prevailing ZK realizations, it has the fastest proving and (total) verification time for sufficiently large sequential computations.
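The collision-resistant-hash ingredient at the heart of this line of work is a Merkle-tree commitment, which lets a verifier spot-check a long proof oracle against a short root. A minimal sketch (SHA-256 as the hash, power-of-two leaf count assumed):

```python
import hashlib

def H(x): return hashlib.sha256(x).digest()

def merkle_tree(leaves):
    """Return all tree levels, leaves first (len(leaves) must be a power of 2)."""
    levels = [[H(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def open_leaf(levels, i):
    """Authentication path: the sibling hash at every level."""
    path = []
    for level in levels[:-1]:
        path.append(level[i ^ 1])
        i //= 2
    return path

def verify(root, leaf, i, path):
    h = H(leaf)
    for sib in path:
        h = H(h + sib) if i % 2 == 0 else H(sib + h)
        i //= 2
    return h == root

leaves = [bytes([b]) * 4 for b in range(8)]   # a toy "proof oracle"
levels = merkle_tree(leaves)
root = levels[-1][0]                          # short commitment sent first
assert verify(root, leaves[5], 5, open_leaf(levels, 5))
```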
2019
CRYPTO
Libra: Succinct Zero-Knowledge Proofs with Optimal Prover Computation 📺
We present Libra, the first zero-knowledge proof system that has both optimal prover time and succinct proof size/verification time. In particular, if C is the size of the circuit being proved, (i) the prover time is O(C) irrespective of the circuit type; (ii) the proof size and verification time are both $$O(d \log C)$$ for d-depth log-space uniform circuits (such as RAM programs). In addition, Libra features a one-time trusted setup that depends only on the size of the input to the circuit and not on the circuit logic. Underlying Libra is a new linear-time algorithm for the prover of the interactive proof protocol by Goldwasser, Kalai and Rothblum (also known as the GKR protocol), as well as an efficient approach to turn the GKR protocol into zero-knowledge using small masking polynomials. Not only does Libra have excellent asymptotics, but it is also efficient in practice. For example, our implementation shows that it takes 200 seconds to generate a proof for constructing a SHA2-based Merkle tree root on 256 leaves, outperforming all existing zero-knowledge proof systems. Proof size and verification time of Libra are also competitive.
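At the core of the GKR protocol is the sumcheck protocol, where the prover convinces the verifier of the value of $$\sum_{x \in \{0,1\}^v} f(x)$$ round by round. Below is a minimal brute-force sketch (the toy polynomial, field, and naive prover are ours; Libra's contribution is precisely making the prover linear-time, which this sketch does not do):

```python
import random

P = 2**31 - 1   # field Z_P
V = 3           # number of variables

def f(x):       # toy multivariate polynomial; claim: its sum over {0,1}^V
    return (x[0] * x[1] + 2 * x[1] * x[2] + x[0] + 5) % P

claimed = sum(f([(i >> k) & 1 for k in range(V)]) for i in range(2**V)) % P

def round_poly(fixed, j):
    """Prover: the univariate restriction g_j(X), summing over the remaining
    boolean variables; returned as evaluations at X = 0, 1, 2 (degree <= 2)."""
    evals = []
    for x_j in range(3):
        free = V - j - 1
        s = 0
        for i in range(2**free):
            x = fixed + [x_j] + [(i >> k) & 1 for k in range(free)]
            s = (s + f(x)) % P
        evals.append(s)
    return evals

def interp(evals, r):
    """Lagrange-evaluate the degree-2 poly given by values at 0, 1, 2 at r."""
    e0, e1, e2 = evals
    inv2 = pow(2, P - 2, P)
    return (e0 * (r - 1) * (r - 2) * inv2 - e1 * r * (r - 2)
            + e2 * r * (r - 1) * inv2) % P

# Verifier: check g_j(0) + g_j(1) against the running claim, send a random
# challenge r_j, and finish with a single evaluation of f.
fixed, claim = [], claimed
for j in range(V):
    g = round_poly(fixed, j)
    assert (g[0] + g[1]) % P == claim
    r = random.randrange(P)
    claim = interp(g, r)
    fixed.append(r)
assert claim == f(fixed)
```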
2019
CRYPTO
Highly Efficient Key Exchange Protocols with Optimal Tightness 📺
In this paper we give nearly-tight reductions for modern implicitly authenticated Diffie-Hellman protocols in the style of the Signal and Noise protocols, which are extremely simple and efficient. Unlike previous approaches, the combination of nearly-tight proofs and efficient protocols enables the first real-world instantiations for which the parameters can be chosen in a theoretically sound manner.

Our reductions have only a linear loss in the number of users, implying that our protocols are more efficient than the state of the art when instantiated with theoretically sound parameters. We also prove that our security proofs are optimal: a linear loss in the number of users is unavoidable for our protocols for a large and natural class of reductions.
2019
CRYPTO
Strong Asymmetric PAKE Based on Trapdoor CKEM 📺
Password-Authenticated Key Exchange (PAKE) protocols allow two parties that share a password to establish a shared key in a way that is immune to offline attacks. Asymmetric PAKE (aPAKE) [20] adapts this notion to the common client-server setting, where the server stores a one-way hash of the password instead of the password itself, and server compromise allows the adversary to recover the password only via the (inevitable) offline dictionary attack. Most aPAKE protocols, however, allow an attacker to pre-compute a dictionary of hashed passwords, thus instantly learning the password on server compromise. Recently, Jarecki, Krawczyk, and Xu formalized a Universally Composable strong aPAKE (saPAKE) [23], which requires the password hash to be salted so that the dictionary attack can only start after the server compromise leaks the salt and the salted hash. The UC saPAKE protocol shown in [23], called OPAQUE, uses 3 protocol flows, 3-4 exponentiations per party, and relies on the One-More Diffie-Hellman assumption in ROM.

We propose an alternative UC saPAKE construction based on a novel use of the encryption+SPHF paradigm for UC PAKE design [19, 26]. Compared to OPAQUE, our protocol uses only 2 flows, has comparable costs, avoids hashing onto a group, and relies on different assumptions, namely Decisional Diffie-Hellman (DDH), Strong Diffie-Hellman (SDH), and an assumption that the Boneh-Boyen function $$f_s(x) = g^{1/(s+x)}$$ [9] is a Salted Tight One-Way Function (STOWF). We formalize a UC model for STOWF and analyze the Boneh-Boyen function as a UC STOWF in the generic group model and ROM.

Our saPAKE protocol employs a new form of Conditional Key Encapsulation Mechanism (CKEM), a generalization of SPHF, which we call an implicit-statement CKEM. This strengthening of SPHF allows for a UC (sa)PAKE design where only the client commits to its password and only the server performs an SPHF, compared to the standard UC PAKE design paradigm where the encrypt+SPHF subroutine is used symmetrically by both parties.
2019
CRYPTO
Broadcast and Trace with $N^{\varepsilon }$ Ciphertext Size from Standard Assumptions 📺
We construct a broadcast and trace scheme (also known as trace and revoke or broadcast, trace and revoke) with N users, where the ciphertext size can be made as low as $$O(N^\varepsilon)$$, for any arbitrarily small constant $$\varepsilon > 0$$. This improves on the prior best construction of broadcast and trace under standard assumptions by Boneh and Waters (CCS '06), which had ciphertext size $$O(N^{1/2})$$. While that construction relied on bilinear maps, ours uses a combination of the learning with errors (LWE) assumption and bilinear maps.

Recall that, in both broadcast encryption and traitor-tracing schemes, there is a collection of N users, each of which gets a different secret key $$\mathsf{sk}_i$$. In broadcast encryption, it is possible to create ciphertexts targeted to a subset $$S \subseteq [N]$$ of the users such that only those users can decrypt them correctly. In a traitor tracing scheme, if a subset of users gets together and creates a decoder box D that is capable of decrypting ciphertexts, then it is possible to trace at least one of the users responsible for creating D. A broadcast and trace scheme intertwines the two properties, in a way that results in more than just their union. In particular, it ensures that if a decoder D is able to decrypt ciphertexts targeted toward a set S of users, then it should be possible to trace one of the users in the set S responsible for creating D, even if other users outside of S also participated. By now, we have essentially optimal broadcast encryption (Boneh, Gentry, Waters CRYPTO '05) under bilinear maps and traitor tracing (Goyal, Koppula, Waters STOC '18) under LWE, where the ciphertext size is at most poly-logarithmic in N. The main contribution of our paper is to carefully combine LWE and bilinear-map based components, and get them to interact with each other, to achieve broadcast and trace.
2019
CRYPTO
Contextual Integrity 📺
Invited Talk
2019
EUROCRYPT
Lower Bounds for Differentially Private RAMs 📺
In this work, we study privacy-preserving storage primitives that are suitable for use in data analysis on outsourced databases within the differential privacy framework. The goal in differentially private data analysis is to disclose global properties of a group without compromising any individual's privacy. Typically, differentially private adversaries only ever learn global properties. For the case of outsourced databases, the adversary also views the patterns of access to data. Oblivious RAM (ORAM) can be used to hide access patterns, but ORAM might be excessive: in some settings it could be sufficient to be compatible with differential privacy and only protect the privacy of individual accesses.

We consider $$(\epsilon, \delta)$$-Differentially Private RAM, a weakening of ORAM that only protects individual operations and seems better suited for use in data analysis on outsourced databases. As differentially private RAM has weaker security than ORAM, there is hope that we can bypass the $$\varOmega(\log(nb/c))$$ bandwidth lower bounds for ORAM by Larsen and Nielsen [CRYPTO '18] for storing an array of n b-bit entries with a client holding c bits of memory. We answer in the negative and present an $$\varOmega(\log(nb/c))$$ bandwidth lower bound for privacy budgets of $$\epsilon = O(1)$$ and $$\delta \le 1/3$$.

The information transfer technique used for ORAM lower bounds does not seem adaptable for use with the weaker security guarantees of differential privacy. Instead, we prove our lower bounds by adapting the chronogram technique to our setting. To our knowledge, this is the first work that uses the chronogram technique for lower bounds on privacy-preserving storage primitives.
2019
EUROCRYPT
Beyond Birthday Bound Secure MAC in Faulty Nonce Model 📺
Encrypt-then-MAC (EtM) is a popular mode for authenticated encryption (AE). Unfortunately, almost all designs following the EtM paradigm, including the AE suites for TLS, are vulnerable to nonce misuse: a single repetition of the nonce value reveals the hash key, leading to a universal forgery attack. There are only two authenticated encryption schemes following the EtM paradigm which can resist nonce misuse attacks, GCM-RUP (CRYPTO-17) and $$\mathsf{GCM/2}^{+}$$ (INSCRYPT-12). However, they are secure only up to the birthday bound in the nonce-respecting setting, resulting in a restriction on the data limit for a single key.

In this paper we show that nEHtM, a nonce-based variant of EHtM (FSE-10) constructed using a block cipher, provides beyond-birthday-bound (BBB) unforgeability that gracefully degrades under nonce misuse. We combine nEHtM with the CENC (FSE-06) mode of encryption using the EtM paradigm to realize a nonce-based AE, CWC+. CWC+ is very close (requiring only a few more xor operations) to the CWC AE scheme (FSE-04), and it not only provides BBB security but also gracefully degrading security on nonce misuse.
2019
EUROCRYPT
Tight Time-Memory Trade-Offs for Symmetric Encryption 📺
Concrete security proofs give upper bounds on the attacker's advantage as a function of its time/query complexity. Cryptanalysis suggests, however, that other resource limitations (most notably, the attacker's memory) could make the achievable advantage smaller, and thus these proven bounds too pessimistic. Yet, handling memory limitations has eluded existing security proofs.

This paper initiates the study of time-memory trade-offs for basic symmetric cryptography. We show that schemes like counter-mode encryption, which are affected by the Birthday Bound, become more secure (in terms of time complexity) as the attacker's memory is reduced.

One key step of this work is a generalization of the Switching Lemma: for adversaries with S bits of memory issuing q distinct queries, we prove an n-to-n bit random function indistinguishable from a permutation as long as $$S \times q \ll 2^n$$. This result assumes a combinatorial conjecture, which we discuss, and implies right away trade-offs for deterministic, stateful versions of CTR and OFB encryption.

We also show an unconditional time-memory trade-off for the security of randomized CTR based on a secure PRF. Via the aforementioned conjecture, we extend the result to assuming a PRP instead, assuming only one-block messages are encrypted.

Our results solely rely on standard PRF/PRP security of an underlying block cipher. We frame the core of our proofs within a general framework of indistinguishability for streaming algorithms, which may be of independent interest.
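The collision term behind the classical Switching Lemma is easy to see empirically: after roughly $$2^{n/2}$$ queries a random function over n-bit strings shows birthday collisions while a permutation never does. A quick demo (parameters ours; note that actually detecting the collisions is what requires the memory S that the paper's bound $$S \times q \ll 2^n$$ accounts for, which this demo does not model):

```python
import random

n = 20                      # block size in bits; domain size 2^n
q = 2**12                   # queries; expected collisions ~ q^2 / 2^(n+1) = 8

def has_collision(outputs):
    return len(set(outputs)) < len(outputs)

trials, func_coll, perm_coll = 200, 0, 0
for _ in range(trials):
    # random function: outputs are independent and can collide
    func_coll += has_collision([random.randrange(2**n) for _ in range(q)])
    # random permutation: distinct inputs give distinct outputs, no collisions
    perm_coll += has_collision(random.sample(range(2**n), q))

print(f"collision rate: function {func_coll/trials:.2f}, "
      f"permutation {perm_coll/trials:.2f}")
```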
2019
EUROCRYPT
Non-Malleable Codes Against Bounded Polynomial Time Tampering 📺
We construct efficient non-malleable codes (NMC) that are (computationally) secure against tampering by functions computable in any fixed polynomial time. Our construction is in the plain (no-CRS) model and requires the assumptions that (1) $$\mathbf{E}$$ is hard for $$\mathbf{NP}$$ circuits of some exponential $$2^{\beta n}$$ ($$\beta > 0$$) size (widely used in the derandomization literature), (2) sub-exponential trapdoor permutations exist, and (3) $$\mathbf{P}$$-certificates with sub-exponential soundness exist.

While it is impossible to construct NMC secure against arbitrary polynomial-time tampering (Dziembowski, Pietrzak, Wichs, ICS '10), the existence of NMC secure against $$O(n^c)$$-time tampering functions (for any fixed c) was shown (Cheraghchi and Guruswami, ITCS '14) via a probabilistic construction. An explicit construction was given (Faust, Mukherjee, Venturi, Wichs, Eurocrypt '14) assuming an untamperable CRS with length longer than the runtime of the tampering function. In this work, we show that under computational assumptions, we can bypass these limitations. Specifically, under the assumptions listed above, we obtain non-malleable codes in the plain model against $$O(n^c)$$-time tampering functions (for any fixed c), with codeword length independent of the tampering time bound.

Our new construction of NMC draws a connection with non-interactive non-malleable commitments. In fact, we show that in the NMC setting, it suffices to have a much weaker notion called quasi non-malleable commitments: these are non-interactive, non-malleable commitments in the plain model, in which the adversary runs in $$O(n^c)$$-time, whereas the honest parties may run in longer (polynomial) time. We then construct a 4-tag quasi non-malleable commitment from any sub-exponential OWF and the assumption that $$\mathbf{E}$$ is hard for some exponential size $$\mathbf{NP}$$-circuits, and use tag amplification techniques to support an exponential number of tags.
2019
EUROCRYPT
Continuous Non-Malleable Codes in the 8-Split-State Model 📺
Non-malleable codes (NMCs), introduced by Dziembowski, Pietrzak and Wichs [20], provide a useful message integrity guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. NMCs have emerged as a fundamental object at the intersection of coding theory and cryptography. In particular, progress in the study of non-malleable codes and the related notion of non-malleable extractors has led to new insights and progress on even more fundamental problems like the construction of multi-source randomness extractors. A large body of recent work has focused on various constructions of non-malleable codes in the split-state model. Many variants of NMCs have been introduced in the literature, e.g., strong NMCs, super strong NMCs and continuous NMCs. The most general, and hence also the most useful, notion among these is that of continuous non-malleable codes, which allows for continuous tampering by the adversary.

We present the first efficient information-theoretically secure continuously non-malleable code in the constant split-state model. We believe that our main technical result could be of independent interest, and some of the ideas could in future be used to make progress on other related questions.
2019
EUROCRYPT
Correlated-Source Extractors and Cryptography with Correlated-Random Tapes 📺
In this paper, we consider the setting where a party uses correlated random tapes across multiple executions of a cryptographic algorithm. We ask if the security properties could still be preserved in such a setting. As examples, we introduce the notions of correlated-tape zero knowledge and correlated-tape multi-party computation, where the zero-knowledge property and the ideal/real model security must still be preserved even if a party uses correlated random tapes in multiple executions.

Our constructions are based on a new type of randomness extractor which we call correlated-source extractors. Correlated-source extractors can be seen as a dual of non-malleable extractors, and allow an adversary to choose several tampering functions which are applied to the randomness source. Correlated-source extractors guarantee that even given the output of the extractor on the tampered sources, the output on the original source is still uniformly random. Given (seeded) correlated-source extractors and resettably-secure computation protocols, we show how to directly obtain a positive result for both correlated-tape zero-knowledge and correlated-tape multi-party computation in the CRS model. This is tight considering the known impossibility results on cryptography with imperfect randomness.

Our main technical contribution is an explicit construction of a correlated-source extractor where the length of the seed is independent of the number of tamperings. Additionally, we also provide a (non-explicit) existential result for correlated-source extractors with almost optimal parameters.
2019
EUROCRYPT
Revisiting Non-Malleable Secret Sharing 📺
A threshold secret sharing scheme (with threshold t) allows a dealer to share a secret among a set of parties such that any group of t or more parties can recover the secret, and no group of at most $$t-1$$ parties learns any information about the secret. A non-malleable threshold secret sharing scheme, introduced in the recent work of Goyal and Kumar (STOC'18), additionally protects a threshold secret sharing scheme when its shares are subject to tampering attacks. Specifically, it guarantees that the reconstructed secret from the tampered shares is either the original secret or something that is unrelated to the original secret.

In this work, we continue the study of threshold non-malleable secret sharing against the class of tampering functions that tamper each share independently. We focus on achieving greater efficiency and guaranteeing a stronger security property. We obtain the following results:

Rate Improvement. We give the first construction of a threshold non-malleable secret sharing scheme that has rate $$> 0$$. Specifically, for every $$n, t \ge 4$$, we give a construction of a t-out-of-n non-malleable secret sharing scheme with rate $$\varTheta(\frac{1}{t \log^2 n})$$. In the prior constructions, the rate was $$\varTheta(\frac{1}{n \log m})$$ where m is the length of the secret, and thus the rate tends to 0 as $$m \rightarrow \infty$$. Furthermore, we also optimize the parameters of our construction and give a concretely efficient scheme.

Multiple Tampering. We give the first construction of a threshold non-malleable secret sharing scheme secure in the stronger setting of bounded tampering, wherein the shares are tampered by multiple (but bounded in number) possibly different tampering functions. The rate of such a scheme is $$\varTheta(\frac{1}{k^3 t \log^2 n})$$ where k is an a priori bound on the number of tamperings. We complement this positive result by proving that it is impossible to have a threshold non-malleable secret sharing scheme that is secure in the presence of an a priori unbounded number of tamperings.

General Access Structures. We extend our results beyond threshold secret sharing and give constructions of rate-efficient, non-malleable secret sharing schemes for more general monotone access structures that are secure against multiple (bounded) tampering attacks.
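The baseline object here is plain Shamir t-out-of-n sharing, which is highly malleable: adding a constant to one share shifts the reconstructed secret in a related way. A minimal sketch (field size and names ours; no non-malleability):

```python
import random

P = 2**61 - 1   # prime field

def share(secret, t, n):
    """Shamir t-out-of-n: evaluations of a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(42, t=3, n=5)
assert reconstruct(shares[:3]) == 42
assert reconstruct(shares[1:4]) == 42

# Tampering a single share changes the reconstructed value in a predictable,
# related way -- exactly what non-malleable secret sharing is built to rule out.
x0, y0 = shares[0]
assert reconstruct([(x0, (y0 + 1) % P)] + shares[1:3]) != 42
```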
2019
EUROCRYPT
Multi-party Virtual State Channels 📺
Smart contracts are self-executing agreements written in program code and are envisioned to be one of the main applications of blockchain technology. While they are supported by prominent cryptocurrencies such as Ethereum, their further adoption is hindered by fundamental scalability challenges. For instance, in Ethereum contract execution suffers from a latency of more than 15 seconds, and the total number of contracts that can be executed per second is very limited. State channel networks are one of the core primitives aiming to address these challenges. They form a second layer over the slow and expensive blockchain, thereby enabling instantaneous contract processing at negligible costs.

In this work we present the first complete description of a state channel network that exhibits the following key features. First, it supports virtual multi-party state channels, i.e. state channels that can be created and closed without blockchain interaction and that allow contracts with any number of parties. Second, the worst-case time complexity of our protocol is constant for arbitrarily complex channels. This is in contrast to the existing virtual state channel construction, whose worst-case time complexity is linear in the number of involved parties. In addition to our new construction, we provide a comprehensive model for the modular design and security analysis of our construction.
2019
EUROCRYPT
Aggregate Cash Systems: A Cryptographic Investigation of Mimblewimble 📺
Mimblewimble is an electronic cash system proposed by an anonymous author in 2016. It combines several privacy-enhancing techniques initially envisioned for Bitcoin, such as Confidential Transactions (Maxwell, 2015), non-interactive merging of transactions (Saxena, Misra, Dhar, 2014), and cut-through of transaction inputs and outputs (Maxwell, 2013). As a remarkable consequence, coins can be deleted once they have been spent while maintaining public verifiability of the ledger, which is not possible in Bitcoin. This results in tremendous space savings for the ledger and efficiency gains for new users, who must verify their view of the system.

In this paper, we provide a provable-security analysis for Mimblewimble. We give a precise syntax and formal security definitions for an abstraction of Mimblewimble that we call an aggregate cash system. We then formally prove the security of Mimblewimble in this definitional framework. Our results imply in particular that two natural instantiations (with Pedersen commitments and Schnorr or BLS signatures) are provably secure against inflation and coin theft under standard assumptions.
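A toy sketch of the Pedersen-commitment mechanics behind Confidential Transactions and Mimblewimble's balance check (an illustration with an insecurely small group; the range proofs that prevent negative values, and the kernel signatures, are omitted):

```python
import random

# Toy group (far too small for security): the order-q subgroup of Z_p^*, p = 2q + 1
q, p = 1019, 2039
g, h = 4, 9   # two subgroup generators; in practice log_g(h) must be unknown

def commit(value, blind):
    """Pedersen commitment C = g^value * h^blind mod p."""
    return pow(g, value, p) * pow(h, blind, p) % p

# A toy transaction: one input of 10 coins, outputs of 7 and 3 coins.
r_in = random.randrange(q)
r_out1, r_out2 = random.randrange(q), random.randrange(q)
c_in = commit(10, r_in)
c_out1, c_out2 = commit(7, r_out1), commit(3, r_out2)

# Homomorphism: outputs minus inputs commit to (sum of values, sum of blinds).
# The transaction balances iff this "excess" commits to the value 0, i.e. it
# is a power of h alone -- which Mimblewimble's transaction kernel attests to.
excess = c_out1 * c_out2 * pow(c_in, -1, p) % p
assert excess == pow(h, (r_out1 + r_out2 - r_in) % q, p)
```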
2019
EUROCRYPT
Consensus Through Herding 📺
State Machine Replication (SMR) is an important abstraction for a set of nodes to agree on an ever-growing, linearly-ordered log of transactions. In decentralized cryptocurrency applications, we would like to design SMR protocols that (1) resist adaptive corruptions; and (2) achieve small bandwidth and small confirmation time. All past approaches towards constructing SMR fail to achieve either small confirmation time or small bandwidth under adaptive corruptions (without resorting to strong assumptions such as the erasure model or proof-of-work).

We propose a novel paradigm for reaching consensus that departs significantly from classical approaches. Our protocol is inspired by a social phenomenon called herding, where people tend to make choices considered as the social norm. In our consensus protocol, leader election and voting are coalesced into a single (randomized) process: in every round, every node tries to cast a vote for what it views as the most popular item so far; such a voting attempt is not always successful, but rather, successful with a certain probability. Importantly, the probability that the node is elected to vote for v is independent from the probability it is elected to vote for $$v' \ne v$$. We will show how to realize such a distributed, randomized election process using appropriate, adaptively secure cryptographic building blocks.

We show that, amazingly, not only can this new paradigm achieve consensus (e.g., on a batch of unconfirmed transactions in a cryptocurrency system), but it also allows us to derive the first SMR protocol which, even under adaptive corruptions, requires only polylogarithmically many rounds and polylogarithmically many honest messages to be multicast to confirm each batch of transactions; and importantly, we attain these guarantees under standard cryptographic assumptions.
2019
EUROCRYPT
Homomorphic Secret Sharing from Lattices Without FHE 📺
Homomorphic secret sharing (HSS) is an analog of somewhat- or fully homomorphic encryption (S/FHE) in the setting of secret sharing, with applications including succinct secure computation, private manipulation of remote databases, and more. While HSS can be viewed as a relaxation of S/FHE, the only constructions from lattice-based assumptions to date build atop specific forms of threshold or multi-key S/FHE. In this work, we present new techniques directly yielding efficient 2-party HSS for polynomial-size branching programs from a range of lattice-based encryption schemes, without S/FHE. More concretely, we avoid the costly key-switching and modulus-reduction steps used in S/FHE ciphertext multiplication, replacing them with a new distributed decryption procedure for performing "restricted" multiplications of an input with a partial computation value. Doing so requires new methods for handling the blowup of "noise" in ciphertexts in a distributed setting, and leverages several properties of lattice-based encryption schemes together with new tricks in share conversion.

The resulting schemes support a superpolynomial-size plaintext space and negligible correctness error, with share sizes comparable to SHE ciphertexts, but with homomorphic multiplication roughly one order of magnitude faster. Over certain rings, our HSS can further support some level of packed SIMD homomorphic operations. We demonstrate the practical efficiency of our schemes within two application settings, where we compare favorably with current best approaches: 2-server private database pattern-match queries, and secure 2-party computation of low-degree polynomials.
2019
EUROCRYPT
Improved Bootstrapping for Approximate Homomorphic Encryption 📺
Since Cheon et al. introduced a homomorphic encryption scheme for approximate arithmetic (Asiacrypt '17), it has been recognized as suitable for important real-life use cases of homomorphic encryption, including training of machine learning models over encrypted data. A follow-up work by Cheon et al. (Eurocrypt '18) described an approximate bootstrapping procedure for the scheme. In this work, we improve upon the previous bootstrapping result: we improve the amortized bootstrapping time per plaintext slot by two orders of magnitude, from $$\sim$$1 s to $$\sim$$0.01 s. To achieve this result, we adopt a smart level-collapsing technique for evaluating DFT-like linear transforms on a ciphertext. Also, we replace the Taylor approximation of the sine function with a more accurate and numerically stable Chebyshev approximation, and design a modified version of the Paterson-Stockmeyer algorithm for fast evaluation of Chebyshev polynomials over encrypted data.
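The Taylor-to-Chebyshev switch is easy to see numerically: a Taylor polynomial of sin is only accurate near 0, while a Chebyshev interpolant of the same degree spreads its error over the whole interval the ciphertext values occupy. A small numpy demo (degree and interval are illustrative, and this is plain floating point, not the paper's encrypted evaluation):

```python
import math
import numpy as np
from numpy.polynomial import chebyshev as C

d, K = 11, 4.0                  # polynomial degree, approximation interval [-K, K]
xs = np.linspace(-K, K, 10_000)

# Degree-d Taylor polynomial of sin around 0: accurate near 0, poor at the ends
taylor = sum((-1) ** k * xs ** (2 * k + 1) / math.factorial(2 * k + 1)
             for k in range((d + 1) // 2))

# Degree-d Chebyshev interpolant of sin over all of [-K, K]
coef = C.chebinterpolate(lambda t: np.sin(K * t), d)
cheb = C.chebval(xs / K, coef)

print(f"max Taylor error:    {np.abs(np.sin(xs) - taylor).max():.2e}")
print(f"max Chebyshev error: {np.abs(np.sin(xs) - cheb).max():.2e}")
```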
2019
EUROCRYPT
Minicrypt Primitives with Algebraic Structure and Applications 📺
Algebraic structure lies at the heart of Cryptomania as we know it. An interesting question is the following: instead of building (Cryptomania) primitives from concrete assumptions, can we build them from simple Minicrypt primitives endowed with some additional algebraic structure? In this work, we affirmatively answer this question by adding algebraic structure to the following Minicrypt primitives:

- One-Way Function (OWF)
- Weak Unpredictable Function (wUF)
- Weak Pseudorandom Function (wPRF)

The algebraic structure that we consider is group homomorphism over the input/output spaces of these primitives. We also consider a "bounded" notion of homomorphism where the primitive only supports an a priori bounded number of homomorphic operations in order to capture lattice-based and other "noisy" assumptions. We show that these structured primitives can be used to construct many cryptographic protocols. In particular, we prove that:

- (Bounded) Homomorphic OWFs (HOWFs) imply collision-resistant hash functions, Schnorr-style signatures and chameleon hash functions.
- (Bounded) Input-Homomorphic weak UFs (IHwUFs) imply CPA-secure PKE, non-interactive key exchange, trapdoor functions, blind batch encryption (which implies anonymous IBE, KDM-secure and leakage-resilient PKE), CCA2 deterministic PKE, and hinting PRGs (which in turn imply transformation of CPA to CCA security for ABE/1-sided PE).
- (Bounded) Input-Homomorphic weak PRFs (IHwPRFs) imply PIR, lossy trapdoor functions, OT and MPC (in the plain model).

In addition, we show how to realize any CDH/DDH-based protocol with certain properties in a generic manner using IHwUFs/IHwPRFs, and how to instantiate such a protocol from many concrete assumptions. We also consider primitives with substantially richer structure, namely Ring IHwPRFs and L-composable IHwPRFs. In particular, we show the following:

- Ring IHwPRFs with certain properties imply FHE.
- 2-composable IHwPRFs imply (black-box) IBE, and L-composable IHwPRFs imply non-interactive $$(L+1)$$-party key exchange.

Our framework allows us to categorize many cryptographic protocols based on which structured Minicrypt primitive implies them. In addition, it potentially makes showing the existence of many cryptosystems from novel assumptions substantially easier in the future.
2019
EUROCRYPT
Attacks only Get Better: How to Break FF3 on Large Domains 📺
We improve the attack of Durak and Vaudenay (CRYPTO'17) on the NIST Format-Preserving Encryption standard FF3, reducing the running time from $$O(N^5)$$ to $$O(N^{17/6})$$ for domain $$\mathbb{Z}_N \times \mathbb{Z}_N$$. Concretely, DV's attack needs about $$2^{50}$$ operations to recover encrypted 6-digit PINs, whereas ours only spends about $$2^{30}$$ operations. In realizing this goal, we provide a pedagogical example of how to use distinguishing attacks to speed up slide attacks. In addition, we improve the running time of DV's known-plaintext attack on 4-round Feistel of domain $$\mathbb{Z}_N \times \mathbb{Z}_N$$ from $$O(N^3)$$ time to just $$O(N^{5/3})$$ time. We also generalize our attacks to a general domain $$\mathbb{Z}_M \times \mathbb{Z}_N$$, allowing one to recover encrypted SSNs using about $$2^{50}$$ operations. Finally, we provide some proof-of-concept implementations to empirically validate our results.
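For context, the object under attack is a Feistel network over a small domain $$\mathbb{Z}_M \times \mathbb{Z}_N$$. Below is a toy sketch of that structure only (a SHA-256 round function and 4 rounds, ours for illustration; real FF3 uses AES-based rounds and 8 rounds, and the attack itself is not reproduced):

```python
import hashlib

def F(key, round_no, value):
    """Toy round function (real FF3 derives this from AES)."""
    h = hashlib.sha256(key + bytes([round_no]) + value.to_bytes(8, "big"))
    return int.from_bytes(h.digest(), "big")

def feistel_encrypt(key, x, M, N, rounds=4):
    """Feistel network over Z_M x Z_N: format-preserving, e.g. 6-digit PINs
    for M = N = 1000."""
    l, r = x
    for i in range(rounds):
        if i % 2 == 0:
            l = (l + F(key, i, r)) % M
        else:
            r = (r + F(key, i, l)) % N
    return l, r

def feistel_decrypt(key, y, M, N, rounds=4):
    l, r = y
    for i in reversed(range(rounds)):
        if i % 2 == 0:
            l = (l - F(key, i, r)) % M
        else:
            r = (r - F(key, i, l)) % N
    return l, r

key, M, N = b"toy-key-0123", 1000, 1000
ct = feistel_encrypt(key, (123, 456), M, N)
print(ct)                                       # again a pair of 3-digit numbers
assert feistel_decrypt(key, ct, M, N) == (123, 456)
```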
2019
EUROCRYPT
Session Resumption Protocols and Efficient Forward Security for TLS 1.3 0-RTT 📺
The TLS 1.3 0-RTT mode enables a client reconnecting to a server to send encrypted application-layer data in "0-RTT" ("zero round-trip time"), without the need for a prior interactive handshake. This fundamentally requires the server to reconstruct the previous session's encryption secrets upon receipt of the client's first message. The standard techniques to achieve this are Session Caches or, alternatively, Session Tickets. The former provides forward security and resistance against replay attacks, but requires a large amount of server-side storage. The latter requires negligible storage, but provides no forward security and is known to be vulnerable to replay attacks.

In this paper, we first formally define session resumption protocols as an abstract perspective on mechanisms like Session Caches and Session Tickets. We give a new generic construction that provably provides forward security and replay resilience, based on puncturable pseudorandom functions (PPRFs). This construction can immediately be used in TLS 1.3 0-RTT and deployed unilaterally by servers, without requiring any changes to clients or the protocol.

We then describe two new constructions of PPRFs, which are particularly suitable for use for forward-secure and replay-resilient session resumption in TLS 1.3. The first construction is based on the strong RSA assumption. Compared to standard Session Caches, for "128-bit security" it reduces the required server storage by a factor of almost 20, when instantiated in a way such that key derivation and puncturing together are cheaper on average than one full exponentiation in an RSA group. Hence, a 1 GB Session Cache can be replaced with only about 51 MB of storage, which significantly reduces the amount of secure memory required. For larger security parameters or in exchange for more expensive computations, even larger storage reductions are achieved. The second construction combines a standard binary tree PPRF with a new "domain extension" technique. For a reasonable choice of parameters, this reduces the required storage by a factor of up to 5 compared to a standard Session Cache. It employs only symmetric cryptography, is suitable for high-traffic scenarios, and can serve thousands of tickets per second.
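The binary tree PPRF in the second construction is GGM-style: puncturing hands out the co-path seeds, so every ticket key except the consumed one stays computable. A minimal sketch (SHA-256 as the length-doubling PRG; the paper's domain-extension technique is not shown):

```python
import hashlib

DEPTH = 16   # PRF domain {0, ..., 2^DEPTH - 1}

def prg(seed, bit):
    """Length-doubling PRG, split into two children by `bit` (toy instantiation)."""
    return hashlib.sha256(seed + bytes([bit])).digest()

def eval_prf(key, x):
    """GGM PRF: walk the binary tree along the bits of x, MSB first."""
    s = key
    for i in reversed(range(DEPTH)):
        s = prg(s, (x >> i) & 1)
    return s

def puncture(key, x):
    """Punctured key: one co-path (sibling subtree) seed per level. It allows
    evaluation everywhere except x, which is what forward security needs once
    the ticket key for x has been consumed. A real punctured key would also
    record x itself; we pass x explicitly below for brevity."""
    s, pkey = key, {}
    for i in reversed(range(DEPTH)):
        b = (x >> i) & 1
        pkey[i] = prg(s, b ^ 1)      # sibling subtree root at level i
        s = prg(s, b)
    return pkey

def eval_punctured(pkey, x, y):
    assert y != x, "the punctured point is no longer computable"
    # the highest bit where y diverges from x selects the stored subtree root
    i = max(j for j in range(DEPTH) if (y >> j) & 1 != (x >> j) & 1)
    s = pkey[i]
    for j in reversed(range(i)):
        s = prg(s, (y >> j) & 1)
    return s

key = b"\x00" * 32
pkey = puncture(key, 1234)
assert eval_punctured(pkey, 1234, 777) == eval_prf(key, 777)
```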
2019
EUROCRYPT
An Analysis of NIST SP 800-90A 📺
We investigate the security properties of the three deterministic random bit generator (DRBG) mechanisms in NIST SP 800-90A [2]. The standard received considerable negative attention due to the controversy surrounding the now retracted DualEC-DRBG, which appeared in earlier versions. Perhaps because of the attention paid to the DualEC, the other algorithms in the standard have received surprisingly patchy analysis to date, despite widespread deployment. This paper addresses a number of these gaps in analysis, with a particular focus on HASH-DRBG and HMAC-DRBG. We uncover a mix of positive and less positive results.

On the positive side, we prove (with a caveat) the robustness [13] of HASH-DRBG and HMAC-DRBG in the random oracle model (ROM). Regarding the caveat, we show that if an optional input is omitted, then, contrary to claims in the standard, HMAC-DRBG does not even achieve the (weaker) property of forward security.

We then conduct a more informal and practice-oriented exploration of flexibility in the standard. Specifically, we argue that these DRBGs have the property that partial state leakage may lead security to break down in unexpected ways. We highlight implementation choices allowed by the overly flexible standard that exacerbate both the likelihood, and impact, of such attacks. While our attacks are theoretical, an analysis of two open source implementations of CTR-DRBG shows that these potentially problematic implementation choices are made in the real world.
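For reference, the HMAC-DRBG core under analysis looks as follows. This is a minimal Python sketch of the instantiate/update/generate logic from SP 800-90A (reseed counters, security-strength checks and prediction resistance are omitted, so it is illustrative, not a compliant implementation); the final update call is the step whose behavior with omitted additional input matters for forward security:

```python
import hmac, hashlib

OUTLEN = 32   # SHA-256 output length

def HMAC(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

class HmacDrbg:
    """Minimal HMAC-DRBG (SP 800-90A) core: instantiate + generate only."""

    def __init__(self, entropy, nonce=b"", personalization=b""):
        self.K = b"\x00" * OUTLEN
        self.V = b"\x01" * OUTLEN
        self._update(entropy + nonce + personalization)

    def _update(self, provided_data=b""):
        self.K = HMAC(self.K, self.V + b"\x00" + provided_data)
        self.V = HMAC(self.K, self.V)
        if provided_data:                 # second pass only if data was given
            self.K = HMAC(self.K, self.V + b"\x01" + provided_data)
            self.V = HMAC(self.K, self.V)

    def generate(self, nbytes, additional_input=b""):
        if additional_input:
            self._update(additional_input)
        out = b""
        while len(out) < nbytes:
            self.V = HMAC(self.K, self.V)
            out += self.V
        # state refresh after output; without fresh additional input this
        # step does not provide the forward security the standard suggests
        self._update(additional_input)
        return out[:nbytes]

drbg = HmacDrbg(entropy=b"\x5a" * 32, nonce=b"\x01" * 16)
print(drbg.generate(48).hex())
```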
2019
EUROCRYPT
Computationally Volume-Hiding Structured Encryption 📺
We initiate the study of structured encryption schemes with computationally-secure leakage. Specifically, we focus on the design of volume-hiding encrypted multi-maps; that is, of encrypted multi-maps that hide the response length from computationally-bounded adversaries. We describe the first volume-hiding STE schemes that do not rely on naïve padding, that is, padding all tuples to the same length. Our first construction has efficient query complexity and storage but can be lossy. We show, however, that the information loss can be bounded with overwhelming probability for a large class of multi-maps (i.e., with lengths distributed according to a Zipf distribution). Our second construction is not lossy and can achieve storage overhead that is asymptotically better than naïve padding for Zipf-distributed multi-maps. We also show how to further improve the storage when the multi-map is highly concentrated in the sense that it has a large number of tuples with a large intersection.

We achieve these results by leveraging computational assumptions; not just for encryption but, more interestingly, to hide the volumes themselves. Our first construction achieves this using a pseudo-random function, whereas our second construction achieves this by relying on the conjectured hardness of the planted densest subgraph problem, which is a planted variant of the well-studied densest subgraph problem. This assumption was previously used to design public-key encryption schemes (Applebaum et al., STOC '10) and to study the computational complexity of financial products (Arora et al., ICS '10).
2019
EUROCRYPT
Locality-Preserving Oblivious RAM 📺
Oblivious RAMs, introduced by Goldreich and Ostrovsky [JACM'96], compile any RAM program into one that is "memory oblivious", i.e., the access pattern to the memory is independent of the input. All previous ORAM schemes, however, completely break the locality of data accesses (for instance, by shuffling the data to pseudorandom positions in memory).

In this work, we initiate the study of locality-preserving ORAMs: ORAMs that preserve locality of the accessed memory regions, while leaking only the lengths of contiguous memory regions accessed. Our main results demonstrate the existence of a locality-preserving ORAM with poly-logarithmic overhead both in terms of bandwidth and locality. We also study the tradeoff between locality, bandwidth and leakage, and show that any scheme that preserves locality and does not leak the lengths of the contiguous memory regions accessed suffers from prohibitive bandwidth.

To the best of our knowledge, before our work, the only works combining locality and obliviousness were for symmetric searchable encryption [e.g., Cash and Tessaro (EUROCRYPT'14), Asharov et al. (STOC'16)]. Symmetric searchable encryption ensures obliviousness if each keyword is searched only once, whereas ORAM provides obliviousness to any input program. Thus, our work generalizes that line of work to the much more challenging task of preserving locality in ORAMs.
2019
EUROCRYPT
Private Anonymous Data Access 📺
We consider a scenario where a server holds a huge database that it wants to make accessible to a large group of clients. After an initial setup phase, clients should be able to read arbitrary locations in the database while maintaining privacy (the server does not learn which locations are being read) and anonymity (the server does not learn which client is performing each read). This should hold even if the server colludes with a subset of the clients. Moreover, the run-time of both the server and the client during each read operation should be low, ideally only poly-logarithmic in the size of the database and the number of clients. We call this notion Private Anonymous Data Access (PANDA). PANDA simultaneously combines aspects of Private Information Retrieval (PIR) and Oblivious RAM (ORAM). PIR has no initial setup and allows anybody to privately and anonymously access a public database, but the server's run-time is linear in the data size. On the other hand, ORAM achieves poly-logarithmic server run-time, but requires an initial setup after which only a single client with a secret key can access the database. The goal of PANDA is to get the best of both worlds: allow many clients to privately and anonymously access the database as in PIR, while having an efficient server as in ORAM.

In this work, we construct bounded-collusion PANDA schemes, where the efficiency scales linearly with a bound on the number of corrupted clients that can collude with the server, but is otherwise poly-logarithmic in the data size and the total number of clients. Our solution relies on standard assumptions, namely the existence of fully homomorphic encryption, and combines techniques from both PIR and ORAM. We also extend PANDA to settings where clients can write to the database.
2019
EUROCRYPT
Reversible Proofs of Sequential Work 📺
Proofs of sequential work (PoSW) are proof systems where a prover, upon receiving a statement $$\chi$$ and a time parameter T, computes a proof $$\phi(\chi, T)$$ which is efficiently and publicly verifiable. The proof can be computed in T sequential steps, but not much less, even by a malicious party having large parallelism. A PoSW thus serves as a proof that T units of time have passed since $$\chi$$ was received.

PoSW were introduced by Mahmoody, Moran and Vadhan [MMV11]; a simple and practical construction was only recently proposed by Cohen and Pietrzak [CP18].

In this work we construct a new simple PoSW in the random permutation model which is almost as simple and efficient as [CP18] but conceptually very different. Whereas the structure underlying [CP18] is a hash tree, our construction is based on skip lists and has the interesting property that computing the PoSW is a reversible computation.

The fact that the construction is reversible can potentially be used for new applications like constructing proofs of replication. We also show how to "embed" the sloth function of Lenstra and Wesolowski [LW17] into our PoSW to get a PoSW where one can additionally verify correctness of the output much more efficiently than by recomputing it (though recent constructions of "verifiable delay functions" subsume most of the applications this construction was aiming at).
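As a toy model of what a PoSW certifies, consider a hash chain with checkpoints whose segments the verifier spot-checks. This conveys the sequentiality requirement and cheap verification, but not the soundness analysis of [CP18] or of the skip-list construction here (parameters and the statistical spot-check are ours):

```python
import hashlib, random

T, SEG = 2**12, 2**6          # total sequential steps, checkpoint spacing

def step(h):
    return hashlib.sha256(h).digest()

def prove(statement):
    """Prover: T sequential hash steps, recording a checkpoint every SEG steps."""
    h, checkpoints = hashlib.sha256(statement).digest(), []
    for i in range(T):
        if i % SEG == 0:
            checkpoints.append(h)
        h = step(h)
    return checkpoints + [h]

def verify(statement, chain, samples=4):
    """Verifier: recompute a few randomly chosen SEG-step segments instead of
    all T steps (statistical soundness only, unlike a real PoSW)."""
    if chain[0] != hashlib.sha256(statement).digest():
        return False
    for i in random.sample(range(len(chain) - 1), samples):
        h = chain[i]
        for _ in range(SEG):
            h = step(h)
        if h != chain[i + 1]:
            return False
    return True

proof = prove(b"some statement")
assert verify(b"some statement", proof)
```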
2019
EUROCRYPT
Incremental Proofs of Sequential Work 📺
A proof of sequential work allows a prover to convince a verifier that a certain amount of sequential steps have been computed. In this work we introduce the notion of incremental proofs of sequential work, where a prover can carry on the computation done by the previous prover incrementally, without affecting the resources of the individual provers or the size of the proofs.

To date, the most efficient instance of proofs of sequential work [Cohen and Pietrzak, Eurocrypt 2018] for N steps requires the prover to have $$\sqrt{N}$$ memory and to run for $$N + \sqrt{N}$$ steps. Using incremental proofs of sequential work we can bring down the prover's storage complexity to $$\log N$$ and its running time to N.

We propose two different constructions of incremental proofs of sequential work: our first scheme requires a single processor and introduces a poly-logarithmic factor in the proof size when compared with the proposals of Cohen and Pietrzak. Our second scheme assumes $$\log N$$ parallel processors but brings down the overhead of the proof size to a factor of 9. Both schemes are simple to implement and only rely on hash functions (modelled as random oracles).
2019
EUROCRYPT
Tight Proofs of Space and Replication 📺
We construct a concretely practical proof-of-space (PoS) with arbitrarily tight security based on stacked depth robust graphs and constant-degree expander graphs. A proof-of-space (PoS) is an interactive proof system where a prover demonstrates that it is persistently using space to store information. A PoS is arbitrarily tight if the honest prover uses exactly N space and for any $$\epsilon > 0$$ the construction can be tuned such that no adversary can pass verification using less than $$(1-\epsilon)N$$ space. Most notably, the degree of the graphs in our construction is independent of $$\epsilon$$, and the number of layers is only $$O(\log(1/\epsilon))$$. The proof size is $$O(d/\epsilon)$$. The degree d depends on the depth robust graphs, which are only required to maintain $$\varOmega(N)$$ depth in subgraphs on 80% of the nodes. Our tight PoS is also secure against parallel attacks.

Tight proofs of space are necessary for proof-of-replication (PoRep), which is a publicly verifiable proof that the prover is dedicating unique resources to storing one or more retrievable replicas of a specified file. Our main PoS construction can be used as a PoRep, but data extraction is as inefficient as replica generation. We present a second variant of our construction, called ZigZag PoRep, that has fast/parallelizable data extraction compared to replica generation and maintains the same space tightness while only increasing the number of levels by roughly a factor of two.
2019
EUROCRYPT
Founding Secure Computation on Blockchains 📺
We study the foundations of secure computation in the blockchain-hybrid model, where a blockchain, modeled as a global functionality, is available as an oracle to all the participants of a cryptographic protocol. We demonstrate both destructive and constructive applications of blockchains:

- We show that classical rewinding-based simulation techniques used in many security proofs fail against blockchain-active adversaries that have read and post access to a global blockchain. In particular, we show that zero-knowledge (ZK) proofs with black-box simulation are impossible against blockchain-active adversaries.
- Nevertheless, we show that achieving security against blockchain-active adversaries is possible if the honest parties are also blockchain active. We construct an $$\omega(1)$$-round ZK protocol with black-box simulation. We show that this result is tight by proving the impossibility of constant-round ZK with black-box simulation.
- Finally, we demonstrate a novel application of blockchains to overcome the known impossibility results for concurrent secure computation in the plain model. We construct a concurrent self-composable secure computation protocol for general functionalities in the blockchain-hybrid model based on standard cryptographic assumptions.

We develop a suite of techniques for constructing secure protocols in the blockchain-hybrid model that we hope will find applications to future research in this area.
2019
EUROCRYPT
Uncovering Algebraic Structures in the MPC Landscape 📺
A fundamental problem in the theory of secure multi-party computation (MPC) is to characterize functions with more than 2 parties which admit MPC protocols with information-theoretic security against passive corruption. This question has seen little progress since the work of Chor and Ishai (1996), which demonstrated difficulties in resolving it. In this work, we make significant progress towards resolving this question in the important case of aggregating functionalities, in which m parties $$P_1, \dots, P_m$$ hold inputs $$x_1, \dots, x_m$$ and an aggregating party $$P_0$$ must learn $$f(x_1, \dots, x_m)$$.

We uncover a rich class of algebraic structures that are closely related to secure computability, namely, "Commuting Permutations Systems" (CPS) and its variants. We present an extensive set of results relating these algebraic structures among themselves and to MPC, including new protocols, impossibility results and separations. Our results include a necessary algebraic condition and a slightly stronger sufficient algebraic condition for a function to admit information-theoretically secure MPC protocols.

We also introduce and study new models of minimally interactive MPC (called UNIMPC and ), which not only help in understanding our positive and negative results better, but also open up new avenues for studying the cryptographic complexity landscape of multi-party functionalities. Our positive results include novel protocols in these models, which may be of independent practical interest.

Finally, we extend our results to a definition that requires UC security as well as semi-honest security (which we term strong security). In this model we are able to carry out the characterization of all computable functions, except for a gap in the case of aggregating functionalities.
2019
EUROCRYPT
Quantum Circuits for the CSIDH: Optimizing Quantum Evaluation of Isogenies 📺
Choosing safe post-quantum parameters for the new CSIDH isogeny-based key-exchange system requires concrete analysis of the cost of quantum attacks. The two main contributions to attack cost are the number of queries in hidden-shift algorithms and the cost of each query. This paper analyzes algorithms for each query, introducing several new speedups while showing that some previous claims were too optimistic for the attacker. This paper includes a full computer-verified simulation of its main algorithm down to the bit-operation level.
2019
EUROCRYPT
A Quantum-Proof Non-malleable Extractor 📺
In privacy amplification, two mutually trusted parties aim to amplify the secrecy of an initial shared secret X in order to establish a shared private key K by exchanging messages over an insecure communication channel. If the channel is authenticated, the task can be solved in a single round of communication using a strong randomness extractor; choosing a quantum-proof extractor allows one to establish security against quantum adversaries.

In the case that the channel is not authenticated, this simple solution is no longer secure. Nevertheless, Dodis and Wichs (STOC'09) showed that the problem can be solved in two rounds of communication using a non-malleable extractor, a stronger pseudo-random construction than a strong extractor.

We give the first construction of a non-malleable extractor that is secure against quantum adversaries. The extractor is based on a construction by Li (FOCS'12), and is able to extract from sources of min-entropy rate larger than 1/2. Combining this construction with a quantum-proof variant of the reduction of Dodis and Wichs, due to Cohen and Vidick (unpublished), we obtain the first privacy amplification protocol secure against active quantum adversaries.
2019
EUROCRYPT
A Note on the Communication Complexity of Multiparty Computation in the Correlated Randomness Model 📺
Secure multiparty computation ($$\mathsf {MPC}$$) addresses the challenge of evaluating functions on secret inputs without compromising their privacy. A central question in multiparty computation is to understand the amount of communication needed to securely evaluate a circuit of size s. In this work, we revisit this fundamental question in the setting of information-theoretically secure $$\mathsf {MPC}$$ in the correlated randomness model, where a trusted dealer distributes correlated random coins, independent of the inputs, to all parties before the start of the protocol. This setting is of strong theoretical interest, and has led to the most practically efficient $$\mathsf {MPC}$$ protocols known to date. While it is known that protocols with optimal communication (proportional to input plus output size) can be obtained from the LWE assumption, and that protocols with sublinear communication o(s) can be obtained from the DDH assumption, the question of constructing protocols with o(s) communication remains wide open for the important case of information-theoretic $$\mathsf {MPC}$$ in the correlated randomness model; all known protocols in this model require O(s) communication in the online phase. In this work, we exhibit the first generic multiparty computation protocol in the correlated randomness model with communication sublinear in the circuit size, for a large class of circuits. More precisely, we show the following: any size-s layered circuit (whose nodes can be partitioned into layers so that any edge connects adjacent layers) can be evaluated with $$O(s/\log \log s)$$ communication. Our results hold for both boolean and arithmetic circuits, in the honest-but-curious setting, and do not assume honest majority. For boolean circuits, we extend our results to handle malicious corruption.
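The correlated randomness model is easiest to see in its classic instantiation: an offline dealer hands out Beaver multiplication triples, after which each online multiplication costs only two opened field elements. Below is a minimal two-party sketch of that textbook mechanism (an illustration of the model, not the protocol of the paper; the modulus P is an illustrative choice).

    import secrets

    P = 2**61 - 1  # public prime modulus (illustrative)

    def share(v):
        """Additively secret-share v between two parties."""
        s0 = secrets.randbelow(P)
        return s0, (v - s0) % P

    # Offline phase: trusted dealer samples a triple c = a*b and shares it.
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    a0, a1 = share(a)
    b0, b1 = share(b)
    c0, c1 = share(a * b % P)

    # Online phase: multiply shared x and y by opening just two field elements.
    x, y = 123456, 654321
    x0, x1 = share(x)
    y0, y1 = share(y)
    d = (x0 - a0 + x1 - a1) % P              # parties open d = x - a
    e = (y0 - b0 + y1 - b1) % P              # parties open e = y - b
    z0 = (c0 + d * b0 + e * a0 + d * e) % P  # party 0's share (takes the d*e term)
    z1 = (c1 + d * b1 + e * a1) % P          # party 1's share
    assert (z0 + z1) % P == x * y % P        # z = c + d*b + e*a + d*e = x*y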
2019
EUROCRYPT
Degree 2 is Complete for the Round-Complexity of Malicious MPC 📺
We show, via a non-interactive reduction, that the existence of a secure multi-party computation (MPC) protocol for degree-2 functions implies the existence of a protocol with the same round complexity for general functions. This shows that, when considering the round complexity of MPC, it is sufficient to consider very simple functions. Our completeness theorem applies in various settings: information theoretic and computational, fully malicious and malicious with various types of aborts. In fact, we give a master theorem from which all individual settings follow as direct corollaries. Our basic transformation does not require any additional assumptions and incurs communication and computation blow-up which is polynomial in the number of players and in $$S, 2^D$$, where S, D are the circuit size and depth of the function to be computed. Using one-way functions as an additional assumption, the exponential dependence on the depth can be removed. As a consequence, we are able to push the envelope on the state of the art in various settings of MPC, including the following cases: a 3-round perfectly-secure protocol (with guaranteed output delivery) against an active adversary that corrupts less than 1/4 of the parties; a 2-round statistically-secure protocol that achieves security with “selective abort” against an active adversary that corrupts less than half of the parties; and, assuming one-way functions, a 2-round computationally-secure protocol that achieves security with (standard) abort against an active adversary that corrupts less than half of the parties. The latter gives a new and conceptually simpler proof of the recent result of Ananth et al. (Crypto 2018). Technically, our non-interactive reduction draws from the encoding method of Applebaum, Brakerski and Tsabary (TCC 2018). We extend these methods to ones that can be meaningfully analyzed even in the presence of malicious adversaries.
2019
EUROCRYPT
Two Round Information-Theoretic MPC with Malicious Security 📺
We provide the first constructions of two round information-theoretic (IT) secure multiparty computation (MPC) protocols in the plain model that tolerate any $$t<n/2$$ malicious corruptions. Our protocols satisfy the strongest achievable standard notions of security in two rounds in different communication models. Previously, IT-MPC protocols in the plain model either required a larger number of rounds, or a smaller minority of corruptions.
2019
EUROCRYPT
Designated-Verifier Pseudorandom Generators, and Their Applications 📺
We provide a generic construction of non-interactive zero-knowledge (NIZK) schemes. Our construction is a refinement of Dwork and Naor’s (FOCS 2000) implementation of the hidden bits model using verifiable pseudorandom generators (VPRGs). Our refinement simplifies their construction and relaxes the necessary assumptions considerably. As a result of this conceptual improvement, we obtain interesting new instantiations. First, a designated-verifier NIZK (with unbounded soundness) based on the computational Diffie-Hellman (CDH) problem, which becomes publicly verifiable if a pairing is available. This constitutes the first fully secure CDH-based designated-verifier NIZK (and more generally, the first fully secure designated-verifier NIZK from a non-generic assumption which does not already imply publicly-verifiable NIZKs), and it answers an open problem recently raised by Kim and Wu (CRYPTO 2018). Second, a NIZK based on the learning with errors (LWE) assumption, assuming a non-interactive witness-indistinguishable (NIWI) proof system for bounded distance decoding (BDD). This simplifies and improves upon a recent NIZK from LWE that assumes a NIZK for BDD (Rothblum et al., PKC 2019).
2019
EUROCRYPT
Reusable Designated-Verifier NIZKs for all NP from CDH 📺
Non-interactive zero-knowledge proofs (NIZKs) are a fundamental cryptographic primitive. Despite a long history of research, we only know how to construct NIZKs under a few select assumptions, such as the hardness of factoring or using bilinear maps. Notably, there are no known constructions based on either the computational or decisional Diffie-Hellman (CDH/DDH) assumption without relying on a bilinear map. In this paper, we study a relaxation of NIZKs in the designated verifier setting (DV-NIZK), in which the public common-reference string is generated together with a secret key that is given to the verifier in order to verify proofs. In this setting, we distinguish between one-time and reusable schemes, depending on whether they can be used to prove only a single statement or arbitrarily many statements. For reusable schemes, the main difficulty is to ensure that soundness continues to hold even when the malicious prover learns whether various proofs are accepted or rejected by the verifier. One-time DV-NIZKs are known to exist for general NP statements assuming only public-key encryption. However, prior to this work, we did not have any construction of reusable DV-NIZKs for general NP statements from any assumption not already known to imply standard NIZKs. In this work, we construct reusable DV-NIZKs for general NP statements under the CDH assumption, without requiring a bilinear map. Our construction is based on the hidden-bits paradigm, which was previously used to construct standard NIZKs. We define a cryptographic primitive called a hidden-bits generator (HBG), along with a designated-verifier variant (DV-HBG), which modularly abstract out how to use this paradigm to get both standard NIZKs and reusable DV-NIZKs. We construct a DV-HBG scheme under the CDH assumption by relying on techniques from the Cramer-Shoup hash-proof system, and this yields our reusable DV-NIZK for general NP statements under CDH. We also consider a strengthening of DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK), where the setup consists of an honestly generated common random string and the verifier then gets to choose his own (potentially malicious) public/secret key pair to generate/verify proofs. We construct MDV-NIZKs under the “one-more CDH” assumption without relying on bilinear maps.
2019
EUROCRYPT
Designated Verifier/Prover and Preprocessing NIZKs from Diffie-Hellman Assumptions 📺
In a non-interactive zero-knowledge (NIZK) proof, a prover can non-interactively convince a verifier of a statement without revealing any additional information. Thus far, numerous constructions of NIZKs have been provided in the common reference string (CRS) model (CRS-NIZK) from various assumptions; however, it still remains a long-standing open problem to construct them from tools such as pairing-free groups or lattices. Recently, Kim and Wu (CRYPTO’18) made great progress regarding this problem and constructed the first lattice-based NIZK in a relaxed model called NIZKs in the preprocessing model (PP-NIZKs). In this model, there is a trusted statement-independent preprocessing phase where secret information is generated for the prover and verifier. Depending on whether that secret information can be made public, PP-NIZK captures CRS-NIZK, designated-verifier NIZK (DV-NIZK), and designated-prover NIZK (DP-NIZK) as special cases. It was left as an open problem by Kim and Wu whether we can construct such NIZKs from weak pairing-free group assumptions such as DDH. Furthermore, all constructions of NIZKs from Diffie-Hellman (DH) type assumptions (regardless of whether they are over a pairing-free or pairing group) require the proof size to have a multiplicative overhead $$|C| \cdot \mathsf {poly}(\kappa )$$, where |C| is the size of the circuit that computes the $$\mathbf {NP}$$ relation. In this work, we make progress on constructing (DV, DP, PP)-NIZKs with varying flavors from DH-type assumptions. Our results are summarized as follows: DV-NIZKs for $$\mathbf {NP}$$ from the CDH assumption over pairing-free groups; this is the first construction of such NIZKs on pairing-free groups and resolves the open problem posed by Kim and Wu (CRYPTO’18). DP-NIZKs for $$\mathbf {NP}$$ with short proof size from a DH-type assumption over pairing groups; here, the proof size has an additive overhead $$|C|+\mathsf {poly}(\kappa )$$ rather than a multiplicative overhead $$|C| \cdot \mathsf {poly}(\kappa )$$, and this is the first construction of such NIZKs (including CRS-NIZKs) that does not rely on the LWE assumption, fully-homomorphic encryption, indistinguishability obfuscation, or non-falsifiable assumptions. PP-NIZKs for $$\mathbf {NP}$$ with short proof size from the DDH assumption over pairing-free groups; this is the first PP-NIZK that achieves a short proof size from a weak and static DH-type assumption such as DDH. Similarly to the above DP-NIZK, the proof size is $$|C|+\mathsf {poly}(\kappa )$$. This too serves as a solution to the open problem posed by Kim and Wu (CRYPTO’18). Along the way, we construct two new homomorphic authentication (HomAuth) schemes which may be of independent interest.
2019
EUROCRYPT
Building an Efficient Lattice Gadget Toolkit: Subgaussian Sampling and More 📺
Many advanced lattice cryptography applications require efficient algorithms for inverting the so-called “gadget” matrices, which are used to formally describe a digit decomposition problem that produces an output with specific (statistical) properties. The common gadget inversion problems are the classical (often binary) digit decomposition, subgaussian decomposition, Learning with Errors (LWE) decoding, and discrete Gaussian sampling. In this work, we build and implement an efficient lattice gadget toolkit that provides a general treatment of gadget matrices and algorithms for their inversion/sampling. The main contribution of our work is a set of new gadget matrices and algorithms for efficient subgaussian sampling that have a number of major theoretical and practical advantages over previously known algorithms. Another contribution deals with efficient algorithms for LWE decoding and discrete Gaussian sampling in the Residue Number System (RNS) representation. We implement the gadget toolkit in PALISADE and evaluate the performance of our algorithms both in terms of runtime and noise growth. We illustrate the improvements due to our algorithms by implementing a concrete complex application, key-policy attribute-based encryption (KP-ABE), which was previously considered impractical for CPU systems (except for a very small number of attributes). Our runtime improvements for the main bottleneck operation based on subgaussian sampling range from 18x (for 2 attributes) to 289x (for 16 attributes; the maximum number supported by a previous implementation). Our results are applicable to a wide range of other advanced applications in lattice cryptography, such as GSW-based homomorphic encryption schemes, leveled fully homomorphic signatures, other forms of ABE, some program obfuscation constructions, and more.
2019
EUROCRYPT
Approx-SVP in Ideal Lattices with Pre-processing 📺
We describe an algorithm to solve the approximate Shortest Vector Problem for lattices corresponding to ideals of the ring of integers of an arbitrary number field K. This algorithm has a pre-processing phase, whose run-time is exponential in $$\log |\varDelta |$$, with $$\varDelta $$ the discriminant of K. Importantly, this pre-processing phase depends only on K. The pre-processing phase outputs an “advice”, whose bit-size is no more than the run-time of the query phase. Given this advice, the query phase of the algorithm takes as input any ideal I of the ring of integers, and outputs an element of I which is at most $$\exp (\widetilde{O}((\log |\varDelta |)^{\alpha +1}/n))$$ times longer than a shortest non-zero element of I (with respect to the Euclidean norm of its canonical embedding). This query phase runs in time and space $$\exp (\widetilde{O}((\log |\varDelta |)^{\max (2/3, 1-2\alpha )}))$$ in the classical setting, and $$\exp (\widetilde{O}((\log |\varDelta |)^{1-2\alpha }))$$ in the quantum setting. The parameter $$\alpha $$ can be chosen arbitrarily in [0, 1/2]. Both correctness and cost analyses rely on heuristic assumptions, whose validity is consistent with experiments. The algorithm builds upon the algorithms from Cramer et al. [EUROCRYPT 2016] and Cramer et al. [EUROCRYPT 2017]. It relies on the framework from Buchmann [Séminaire de théorie des nombres 1990], which allows us to merge them and to extend their applicability from prime-power cyclotomic fields to all number fields. The cost improvements are obtained by allowing precomputations that depend on the field only.
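For orientation, these are simple substitutions into the bounds just stated: at $$\alpha = 1/2$$ the quantum query phase runs in time $$\exp (\widetilde{O}((\log |\varDelta |)^{0})) = \mathrm{poly}$$ with approximation factor $$\exp (\widetilde{O}((\log |\varDelta |)^{3/2}/n))$$, while classically the exponent $$\max (2/3, 1-2\alpha )$$ is minimised at $$\alpha = 1/6$$, giving query time $$\exp (\widetilde{O}((\log |\varDelta |)^{2/3}))$$ with approximation factor $$\exp (\widetilde{O}((\log |\varDelta |)^{7/6}/n))$$.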
2019
EUROCRYPT
The General Sieve Kernel and New Records in Lattice Reduction 📺
We propose the General Sieve Kernel (G6K, pronounced /ʒe.si.ka/), an abstract stateful machine supporting a wide variety of lattice reduction strategies based on sieving algorithms. Using the basic instruction set of this abstract stateful machine, we first give concise formulations of previous sieving strategies from the literature and then propose new ones. We then also give a light variant of BKZ exploiting the features of our abstract stateful machine. This encapsulates several recent suggestions (Ducas at Eurocrypt 2018; Laarhoven and Mariano at PQCrypto 2018) to move beyond treating sieving as a blackbox SVP oracle and to utilise strong lattice reduction as preprocessing for sieving. Furthermore, we propose new tricks to minimise the sieving computation required for a given reduction quality with mechanisms such as recycling vectors between sieves, on-the-fly lifting and flexible insertions akin to Deep LLL and recent variants of Random Sampling Reduction. Moreover, we provide a highly optimised, multi-threaded and tweakable implementation of this machine which we make open-source. We then illustrate the performance of this implementation of our sieving strategies by applying G6K to various lattice challenges. In particular, our approach allows us to solve previously unsolved instances of the Darmstadt SVP (151, 153, 155) and LWE (e.g. (75, 0.005)) challenges. Our solution for the SVP-151 challenge was found 400 times faster than the time reported for the SVP-150 challenge, the previous record. For exact-SVP, we observe a performance crossover between G6K and FPLLL’s state-of-the-art implementation of enumeration at dimension 70.
2019
EUROCRYPT
Misuse Attacks on Post-quantum Cryptosystems 📺
Many post-quantum cryptosystems which have been proposed in the National Institute of Standards and Technology (NIST) standardization process follow the same meta-algorithm, but in different algebras or different encoding methods. They usually propose two constructions, one being weaker and the other requiring a random oracle. We focus on the weak version of nine submissions to NIST. Submitters claim no security when the secret key is used several times. In this paper, we analyze how easy it is to run a key recovery under multiple key reuse. We mount a classical key recovery under plaintext checking attacks (i.e., with a plaintext checking oracle saying if a given ciphertext decrypts well to a given plaintext) and a quantum key recovery under chosen ciphertext attacks. In the latter case, we assume quantum access to the decryption oracle.
2019
EUROCRYPT
On ELFs, Deterministic Encryption, and Correlated-Input Security 📺
We construct deterministic public key encryption secure for any constant number of arbitrarily correlated computationally unpredictable messages. Prior works required either random oracles or non-standard knowledge assumptions. In contrast, our constructions are based on the exponential hardness of DDH, which is plausible in elliptic curve groups. Our central tool is a new trapdoored extremely lossy function, which modifies extremely lossy functions by adding a trapdoor.
2019
EUROCRYPT
New Techniques for Efficient Trapdoor Functions and Applications 📺
We develop techniques for constructing trapdoor functions (TDFs) with short image size and advanced security properties. Our approach builds on the recent framework of Garg and Hajiabadi [CRYPTO 2018]. As applications of our techniques, we obtain the following. The first construction of deterministic-encryption schemes for block-source inputs (both for the CPA and CCA cases) based on the Computational Diffie-Hellman (CDH) assumption; moreover, by applying our efficiency-enhancing techniques, we obtain CDH-based schemes with ciphertext size linear in plaintext size. The first construction of lossy TDFs based on the Decisional Diffie-Hellman (DDH) assumption with image size linear in input size, while retaining the lossiness rate of [Peikert-Waters STOC 2008]. Prior to our work, all constructions of deterministic encryption based even on the stronger DDH assumption incurred a quadratic gap between the ciphertext and plaintext sizes. Moreover, all DDH-based constructions of lossy TDFs had image size quadratic in the input size. At a high level, we break the previous quadratic barriers by introducing a novel technique for encoding input bits via hardcore output bits with the use of erasure-resilient codes. All previous schemes used group elements for encoding input bits, resulting in quadratic expansions.
2019
EUROCRYPT
Symbolic Encryption with Pseudorandom Keys 📺
We give an efficient decision procedure that, on input two (acyclic) expressions making arbitrary use of common cryptographic primitives (namely, encryption and pseudorandom generators), determines (in polynomial time) if the two expressions produce computationally indistinguishable distributions for any cryptographic instantiation satisfying the standard security notions of pseudorandomness and indistinguishability under chosen plaintext attack. The procedure works by mapping each expression to a symbolic pattern that captures, in a fully abstract way, the information revealed by the expression to a computationally bounded observer. Our main result shows that if two expressions are mapped to different symbolic patterns, then there are secure pseudorandom generators and encryption schemes for which the two distributions can be distinguished with overwhelming advantage. At the same time if any two (acyclic) expressions are mapped to the same pattern, then the associated distributions are indistinguishable.
2019
EUROCRYPT
Efficient Circuit-Based PSI with Linear Communication 📺
We present a new protocol for computing a circuit which implements the private set intersection functionality (PSI). Using circuits for this task is advantageous over the usage of specific protocols for PSI, since many applications of PSI do not need to compute the intersection itself but rather functions based on the items in the intersection. Our protocol is the first circuit-based PSI protocol to achieve linear communication complexity. It is also concretely more efficient than all previous circuit-based PSI protocols. For example, for sets of size $$2^{20}$$ it improves the communication of the recent work of Pinkas et al. (EUROCRYPT’18) by more than 10 times, and improves the run time by a factor of 2.8x in the LAN setting, and by a factor of 5.8x in the WAN setting. Our protocol is based on the usage of a protocol for computing oblivious programmable pseudo-random functions (OPPRF), and more specifically on our technique to amortize the cost of batching together multiple invocations of OPPRF.
2019
EUROCRYPT
An Algebraic Approach to Maliciously Secure Private Set Intersection 📺
Private set intersection (PSI) is an important area of research and has been the focus of many works over the past decades. It describes the problem of finding an intersection between the input sets of at least two parties without revealing anything about the input sets apart from their intersection. In this paper, we present a new approach to compute the intersection between sets based on a primitive called Oblivious Linear Function Evaluation (OLE). On an abstract level, we use this primitive to efficiently add two polynomials in a randomized way while preserving the roots of the added polynomials. Setting the roots of the input polynomials to be the elements of the input sets, this directly yields an intersection protocol with optimal asymptotic communication complexity $$O(m\kappa )$$. We highlight that the protocol is information-theoretically secure against a malicious adversary assuming OLE. We also present a natural generalization of the 2-party protocol for the fully malicious multi-party case. Our protocol does away with expensive (homomorphic) threshold encryption and zero-knowledge proofs. Instead, we use simple combinatorial techniques to ensure security. As a result, we get a UC-secure protocol with asymptotically optimal communication complexity $$O((n^2+nm)\kappa )$$, where n is the number of parties, m is the set size and $$\kappa $$ is the security parameter. Apart from yielding an asymptotic improvement over previous works, our protocols are also conceptually simple and require only simple field arithmetic. Along the way we develop techniques that might be of independent interest.
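The core algebraic trick is easy to demonstrate: encode each set as the polynomial whose roots are its elements; a randomized linear combination of the two polynomials then vanishes on exactly the common roots. The sketch below uses random scalar multipliers for brevity, whereas the protocol uses random polynomial multipliers delivered obliviously via OLE; the modulus P is an illustrative choice.

    import secrets

    P = 2**31 - 1  # public prime; sets are encoded as polynomials over GF(P)

    def poly_from_roots(roots):
        """Monic polynomial (descending coefficients) whose roots are the elements."""
        coeffs = [1]
        for r in roots:
            new = [0] * (len(coeffs) + 1)
            for i, c in enumerate(coeffs):
                new[i] = (new[i] + c) % P              # c * x^(deg+1) term
                new[i + 1] = (new[i + 1] - c * r) % P  # -r * c * x^deg term
            coeffs = new
        return coeffs

    def poly_eval(coeffs, x):
        acc = 0
        for c in coeffs:  # Horner's rule on descending coefficients
            acc = (acc * x + c) % P
        return acc

    def intersect(A, B):
        p, q = poly_from_roots(A), poly_from_roots(B)
        d = max(len(p), len(q))
        p = [0] * (d - len(p)) + p  # align degrees
        q = [0] * (d - len(q)) + q
        r1 = secrets.randbelow(P - 1) + 1
        r2 = secrets.randbelow(P - 1) + 1
        s = [(r1 * cp + r2 * cq) % P for cp, cq in zip(p, q)]
        # for e in A: s(e) = r2 * q(e), which is zero iff e is also a root of q
        return {e for e in A if poly_eval(s, e) == 0}

    assert intersect({1, 2, 3, 4}, {3, 4, 5}) == {3, 4}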
2019
EUROCRYPT
On Finding Quantum Multi-collisions 📺
A k-collision for a compressing hash function H is a set of k distinct inputs that all map to the same output. In this work, we show that for any constant k, $$\varTheta \left( N^{\frac{1}{2}(1-\frac{1}{2^k-1})}\right) $$ quantum queries are both necessary and sufficient to achieve a k-collision with constant probability. This improves on the best prior upper bound (Hosoyamada et al., ASIACRYPT 2017) and provides the first non-trivial lower bound, completely resolving the problem.
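Plugging in small values of k: for $$k=2$$ the bound reads $$\varTheta (N^{\frac{1}{2}(1-\frac{1}{3})}) = \varTheta (N^{1/3})$$, recovering the known quantum collision bound, and for $$k=3$$ it reads $$\varTheta (N^{3/7})$$; as k grows, the exponent approaches the preimage-search exponent 1/2.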
2019
EUROCRYPT
On Quantum Advantage in Information Theoretic Single-Server PIR 📺
In (single-server) Private Information Retrieval (PIR), a server holds a large database $${\mathtt {DB}}$$ of size n, and a client holds an index $$i \in [n]$$ and wishes to retrieve $${\mathtt {DB}}[i]$$ without revealing i to the server. It is well known that information theoretic privacy even against an “honest but curious” server requires $$\varOmega (n)$$ communication complexity. This is true even if quantum communication is allowed and is due to the ability of such an adversarial server to execute the protocol on a superposition of databases instead of on a specific database (“input purification attack”). Nevertheless, there have been some proposals of protocols that achieve sub-linear communication and appear to provide some notion of privacy. Most notably, a protocol due to Le Gall (ToC 2012) with communication complexity $$O(\sqrt{n})$$, and a protocol by Kerenidis et al. (QIC 2016) with communication complexity $$O(\log (n))$$ and O(n) shared entanglement. We show that, in a sense, input purification is the only potent adversarial strategy, and protocols such as the two protocols above are secure in a restricted variant of the quantum honest but curious (a.k.a. specious) model. More explicitly, we propose a restricted privacy notion called anchored privacy, where the adversary is forced to execute on a classical database (i.e. the execution is anchored to a classical database). We show that for measurement-free protocols, anchored security against honest adversarial servers implies anchored privacy even against specious adversaries. Finally, we prove that even with (unlimited) pre-shared entanglement it is impossible to achieve security in the standard specious model with sub-linear communication, thus further substantiating the necessity of our relaxation. This lower bound may be of independent interest (in particular recalling that PIR is a special case of Fully Homomorphic Encryption).
2019
EUROCRYPT
Verifier-on-a-Leash: New Schemes for Verifiable Delegated Quantum Computation, with Quasilinear Resources 📺
The problem of reliably certifying the outcome of a computation performed by a quantum device is rapidly gaining relevance. We present two protocols for a classical verifier to verifiably delegate a quantum computation to two non-communicating but entangled quantum provers. Our protocols have near-optimal complexity in terms of the total resources employed by the verifier and the honest provers, with the total number of operations of each party, including the number of entangled pairs of qubits required of the honest provers, scaling as $$O(g\log g)$$ for delegating a circuit of size g. This is in contrast to previous protocols, whose overhead in terms of resources employed, while polynomial, is far beyond what is feasible in practice. Our first protocol requires a number of rounds that is linear in the depth of the circuit being delegated, and is blind, meaning neither prover can learn the circuit or its input. The second protocol is not blind, but requires only a constant number of rounds of interaction. Our main technical innovation is an efficient rigidity theorem which allows a verifier to test that two entangled provers perform measurements specified by an arbitrary m-qubit tensor product of single-qubit Clifford observables on their respective halves of m shared EPR pairs, with a robustness that is independent of m. Our two-prover classical-verifier delegation protocols are obtained by combining this rigidity theorem with a single-prover quantum-verifier protocol for the verifiable delegation of a quantum computation, introduced by Broadbent.
2019
EUROCRYPT
Ring Signatures: Logarithmic-Size, No Setup—from Standard Assumptions 📺
Ring signatures allow for creating signatures on behalf of an ad hoc group of signers, hiding the true identity of the signer among the group. A natural goal is to construct a ring signature scheme for which the signature size is short in the number of ring members. Moreover, such a construction should not rely on a trusted setup and be proven secure under falsifiable standard assumptions. Despite many years of research, this question is still open. In this paper, we present the first construction of size-optimal ring signatures which do not rely on a trusted setup or the random oracle heuristic. Specifically, our scheme can be instantiated from standard assumptions and the size of signatures grows only logarithmically in the number of ring members. We also extend our techniques to the setting of linkable ring signatures, where signatures created using the same signing key can be linked.
2019
EUROCRYPT
A Modular Treatment of Blind Signatures from Identification Schemes 📺
We propose a modular security treatment of blind signatures derived from linear identification schemes in the random oracle model. To this end, we present a general framework that captures several well known schemes from the literature and allows us to prove their security. Our modular security reduction introduces a new security notion for identification schemes called One-More-Man In the Middle Security, which we show equivalent to the classical One-More-Unforgeability notion for blind signatures. We also propose a generalized version of the Forking Lemma due to Bellare and Neven (CCS 2006) and show how it can be used to greatly improve the understandability of the classical security proofs for blind signature schemes by Pointcheval and Stern (Journal of Cryptology 2000).
2019
EUROCRYPT
Efficient Verifiable Delay Functions 📺
Best Young Researcher Paper
We construct a verifiable delay function (VDF). A VDF is a function whose evaluation requires running a given number of sequential steps, yet the result can be efficiently verified. VDFs have applications in decentralised systems, such as the generation of trustworthy public randomness in a trustless environment, or resource-efficient blockchains. To construct our VDF, we actually build a trapdoor VDF. A trapdoor VDF is essentially a VDF which can be evaluated efficiently by parties who know a secret (the trapdoor). By setting up this scheme so that the trapdoor is unknown (not even to the party running the setup, removing the need for a trusted setup environment), we obtain a simple VDF. Our construction is based on groups of unknown order such as an RSA group, or the class group of an imaginary quadratic field. The output of our construction is very short (the result and the proof of correctness are each a single element of the group), and the verification of correctness is very efficient.
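The evaluation asymmetry of a trapdoor VDF is easy to convey in a group of unknown order: computing $$y = x^{2^T}$$ takes T sequential squarings, unless one knows the group order and can reduce the exponent first. The sketch below illustrates only this asymmetry over a toy RSA modulus; it omits the succinct proof of correctness, which is the heart of the actual construction, and all parameters are illustrative.

    # Toy parameters: real instantiations use a ~2048-bit RSA modulus or the
    # class group of an imaginary quadratic field.
    p, q = 10007, 10009            # secret primes; phi(N) is the trapdoor
    N = p * q
    phi = (p - 1) * (q - 1)
    T = 100_000                    # number of sequential squarings
    x = 5                          # input, coprime to N

    def eval_sequential(x, T, N):
        """Anyone can compute y = x^(2^T) mod N with T sequential squarings."""
        y = x % N
        for _ in range(T):
            y = y * y % N
        return y

    def eval_trapdoor(x, T, N, phi):
        """With the trapdoor, reduce the exponent 2^T modulo phi(N) first."""
        return pow(x, pow(2, T, phi), N)

    assert eval_sequential(x, T, N) == eval_trapdoor(x, T, N, phi)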
2019
EUROCRYPT
Quantum Lightning Never Strikes the Same State Twice 📺
Best Paper
Public key quantum money can be seen as a version of the quantum no-cloning theorem that holds even when the quantum states can be verified by the adversary. In this work, we investigate quantum lightning, where no-cloning holds even when the adversary herself generates the quantum state to be cloned. We then study quantum money and quantum lightning, showing the following results. We demonstrate the usefulness of quantum lightning beyond quantum money by showing several potential applications, such as generating random strings with a proof of entropy and completely decentralized cryptocurrency without a blockchain, where transactions are instant and local. We give Either/Or results for quantum money/lightning, showing that either signatures/hash functions/commitment schemes meet very strong recently proposed notions of security, or they yield quantum money or lightning. Given the difficulty in constructing public key quantum money, this suggests that natural schemes do attain strong security guarantees. We show that instantiating the quantum money scheme of Aaronson and Christiano [STOC’12] with indistinguishability obfuscation that is secure against quantum computers yields a secure quantum money scheme. This construction can be seen as an instance of our Either/Or result for signatures, giving the first separation between two security notions for signatures from the literature. Finally, we give a plausible construction for quantum lightning, which we prove secure under an assumption related to the multi-collision resistance of degree-2 hash functions. Our construction is inspired by our Either/Or result for hash functions, and yields the first plausible standard-model instantiation of a non-collapsing collision-resistant hash function. This improves on a result of Unruh [Eurocrypt’16], which is relative to a quantum oracle.
2019
EUROCRYPT
Secret-Sharing Schemes for General and Uniform Access Structures 📺
A secret-sharing scheme allows some authorized sets of parties to reconstruct a secret; the collection of authorized sets is called the access structure. For over 30 years, it was known that any (monotone) collection of authorized sets can be realized by a secret-sharing scheme whose shares are of size $$2^{n-o(n)}$$, and until recently no better scheme was known. In a recent breakthrough, Liu and Vaikuntanathan (STOC 2018) have reduced the share size to $$O(2^{0.994n})$$. Our first contribution is improving the exponent of secret sharing down to 0.892. For the special case of linear secret-sharing schemes, we get an exponent of 0.942 (compared to 0.999 of Liu and Vaikuntanathan). Motivated by the construction of Liu and Vaikuntanathan, we study secret-sharing schemes for uniform access structures. An access structure is k-uniform if all sets of size larger than k are authorized, all sets of size smaller than k are unauthorized, and each set of size k can be either authorized or unauthorized. The construction of Liu and Vaikuntanathan starts from protocols for conditional disclosure of secrets, constructs secret-sharing schemes for uniform access structures from them, and combines these schemes in order to obtain secret-sharing schemes for general access structures. Our second contribution in this paper is constructions of secret-sharing schemes for uniform access structures. We achieve the following results: a secret-sharing scheme for k-uniform access structures for large secrets in which the share size is $$O(k^2)$$ times the size of the secret; a linear secret-sharing scheme for k-uniform access structures for a binary secret in which the share size is $$\tilde{O}(2^{h(k/n)n/2})$$ (where h is the binary entropy function), which by counting arguments is optimal (up to polynomial factors); and a secret-sharing scheme for k-uniform access structures for a binary secret in which the share size is $$2^{\tilde{O}(\sqrt{k \log n})}$$. Our third contribution is a construction of ad-hoc PSM protocols, i.e., PSM protocols in which only a subset of the parties will compute a function on their inputs. This result is based on ideas we used in the construction of secret-sharing schemes for k-uniform access structures for a binary secret.
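The definition of a k-uniform access structure is concrete enough to state as a predicate. A minimal sketch (names are ours, chosen for illustration, not the paper's):

    def make_k_uniform(n, k, authorized_k_sets):
        """k-uniform access structure on parties {0,...,n-1}: sets larger than k
        are authorized, smaller ones are not, and size-k sets follow the given list."""
        auth = {frozenset(s) for s in authorized_k_sets}
        def is_authorized(subset):
            s = frozenset(subset)
            assert s <= set(range(n))
            if len(s) > k:
                return True
            if len(s) < k:
                return False
            return s in auth
        return is_authorized

    gamma = make_k_uniform(5, 2, [{0, 1}, {2, 3}])
    assert gamma({0, 1}) and not gamma({0, 2})
    assert gamma({0, 2, 4}) and not gamma({4})  # monotone by construction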
2019
EUROCRYPT
Towards Optimal Robust Secret Sharing with Security Against a Rushing Adversary 📺
Robust secret sharing enables the reconstruction of a secret-shared message in the presence of up to t (out of n) incorrect shares. The most challenging case is when $$n = 2t+1$$, which is the largest t for which the task is still possible, up to a small error probability $$2^{-\kappa }$$ and with some overhead in the share size. Recently, Bishop, Pastro, Rajaraman and Wichs [3] proposed a scheme with an (almost) optimal overhead of $$\widetilde{O}(\kappa )$$. This seems to answer the open question posed by Cevallos et al. [6] who proposed a scheme with overhead of $$\widetilde{O}(n+\kappa )$$ and asked whether the linear dependency on n was necessary or not. However, a subtle issue with Bishop et al.’s solution is that it (implicitly) assumes a non-rushing adversary, and thus it satisfies a weaker notion of security compared to the scheme by Cevallos et al. [6], or to the classical scheme by Rabin and BenOr [13]. In this work, we almost close this gap. We propose a new robust secret sharing scheme that offers full security against a rushing adversary, and that has an overhead of $$O(\kappa n^\varepsilon )$$, where $$\varepsilon > 0$$ is arbitrary but fixed. This $$n^\varepsilon $$-factor is obviously worse than the $$\mathrm {polylog}(n)$$-factor hidden in the $$\widetilde{O}$$ notation of the scheme of Bishop et al. [3], but it greatly improves on the linear dependency on n of the best known scheme that features security against a rushing adversary (when $$\kappa $$ is substantially smaller than n). A small variation of our scheme has the same $$\widetilde{O}(\kappa )$$ overhead as the scheme of Bishop et al. and achieves security against a rushing adversary, but suffers from a (slightly) superpolynomial reconstruction complexity.
2019
EUROCRYPT
Simple Schemes in the Bounded Storage Model 📺
The bounded storage model promises unconditional security proofs against computationally unbounded adversaries, so long as the adversary’s space is bounded. In this work, we develop simple new constructions of two-party key agreement, bit commitment, and oblivious transfer in this model. In addition to simplicity, our constructions have several advantages over prior work, including an improved number of rounds and enhanced correctness. Our schemes are based on Raz’s lower bound for learning parities.
2019
EUROCRYPT
From Collisions to Chosen-Prefix Collisions Application to Full SHA-1 📺
A chosen-prefix collision attack is a stronger variant of a collision attack, where an arbitrary pair of challenge prefixes is turned into a collision. Chosen-prefix collisions are usually significantly harder to produce than (identical-prefix) collisions, but the practical impact of such an attack is much larger. While many cryptographic constructions rely on collision-resistance for their security proofs, collision attacks are hard to turn into a break of concrete protocols, because the adversary has limited control over the colliding messages. On the other hand, chosen-prefix collisions have been shown to break certificates (by creating a rogue CA) and many internet protocols (TLS, SSH, IPsec). In this article, we propose new techniques to turn collision attacks into chosen-prefix collision attacks. Our strategy is composed of two phases: first, a birthday search that aims at taking the random chaining variable difference (due to the chosen-prefix model) to a set of pre-defined target differences; then, using a multi-block approach and carefully analysing the clustering effect, we map this new chaining variable difference to a colliding pair of states using techniques developed for collision attacks. We apply those techniques to MD5 and SHA-1, and obtain improved attacks. In particular, we have a chosen-prefix collision attack against SHA-1 with complexity between $$2^{66.9}$$ and $$2^{69.4}$$ (depending on assumptions about the cost of finding near-collision blocks), while the best-known attack has complexity $$2^{77.1}$$. This is within a small factor of the complexity of the classical collision attack on SHA-1 (estimated as $$2^{64.7}$$). This represents yet another warning that industries and users have to move away from using SHA-1 as soon as possible.
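In concrete terms, this improves on the best-known chosen-prefix attack by a factor between $$2^{77.1-69.4} = 2^{7.7}$$ and $$2^{77.1-66.9} = 2^{10.2}$$, i.e., roughly 200x to 1200x depending on the near-collision cost assumptions.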
2019
EUROCRYPT
Preimage Attacks on Round-Reduced Keccak-224/256 via an Allocating Approach 📺
We present new preimage attacks on standard Keccak-224 and Keccak-256 that are reduced to 3 and 4 rounds. An allocating approach is used in the attacks: the whole complexity is allocated across two stages, such that fewer constraints are considered and the complexity is lowered in each stage. Specifically, we try to find a 2-block preimage, instead of a 1-block one, for a given hash value, and the first and second message blocks are found in two stages, respectively. Both message blocks are constrained by a set of newly proposed conditions on the middle state, which are weaker than those brought by the initial values and the hash values. Thus, the complexities in the two stages are both lower than that of finding a 1-block preimage directly. Together with the basic allocating approach, an improved method is given to balance the complexities of the two stages and hence obtain the optimal attacks. As a result, we present the best theoretical preimage attacks on Keccak-224 and Keccak-256 that are reduced to 3 and 4 rounds. Moreover, we practically found a (second) preimage for 3-round Keccak-224 with a complexity of $$2^{39.39}$$.
2019
EUROCRYPT
bison Instantiating the Whitened Swap-Or-Not Construction 📺
We give the first practical instance – bison – of the Whitened Swap-Or-Not construction. After clarifying inherent limitations of the construction, we point out that this way of building block ciphers allows easy and very strong arguments against differential attacks.
2019
EUROCRYPT
Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing 📺
We present a worst-case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst-case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst-case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires an extremely high noise rate of $$1/2-1/\mathrm{poly}(n)$$. Thus we can only show that “very hard” LPN is harder than some “very mildly hard” worst-case problem. We note that LPN with noise $$1/2-1/\mathrm{poly}(n)$$ already implies symmetric cryptography. Specifically, we consider the (n, m, w)-nearest codeword problem ((n, m, w)-NCP), which takes as input a generating matrix for a binary linear code in m dimensions and rank n, and a target vector which is very close to the code (Hamming distance at most w), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $$w/m \approx {\log ^2 n}/{n}$$, (n, m, w)-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $$1/2-1/\mathrm{poly}(n)$$. Our proof relies on a smoothing lemma for codes which we show to have further implications: we show that (n, m, w)-NCP with the aforementioned parameters lies in the complexity class $$\mathrm{Search}\text{-}\mathcal{BPP}^{\mathcal{SZK}}$$ (i.e., reducible to a problem that has a statistical zero knowledge protocol), implying that it is unlikely to be $$\mathcal {NP}$$-hard. We then show that the hardness of LPN with very low noise rate $$\log ^2(n)/n$$ implies the existence of collision-resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $$\mathcal {BPP}^\mathcal {SZK}$$).
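For reference, an LPN sample is a uniform vector a together with the noisy inner product $$\langle a,s\rangle + e \bmod 2$$, and the regime in the reduction above has noise rate $$1/2-1/\mathrm{poly}(n)$$. A minimal sampler with illustrative parameter choices (here the rate is $$1/2 - 1/n^2$$):

    import secrets

    def lpn_samples(s, m, noise_num, noise_den):
        """m LPN samples (a, <a,s> + e mod 2) with Pr[e = 1] = noise_num/noise_den."""
        n = len(s)
        out = []
        for _ in range(m):
            a = [secrets.randbelow(2) for _ in range(n)]
            e = 1 if secrets.randbelow(noise_den) < noise_num else 0
            b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
            out.append((a, b))
        return out

    n = 64
    s = [secrets.randbelow(2) for _ in range(n)]
    samples = lpn_samples(s, 100, n * n // 2 - 1, n * n)  # rate 1/2 - 1/n^2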
2019
EUROCRYPT
New Techniques for Obfuscating Conjunctions 📺
A conjunction is a function $$f(x_1,\dots ,x_n) = \bigwedge _{i \in S} l_i$$ where $$S \subseteq [n]$$ and each $$l_i$$ is $$x_i$$ or $$\lnot x_i$$. Bishop et al. (CRYPTO 2018) recently proposed obfuscating conjunctions by embedding them in the error positions of a noisy Reed-Solomon codeword and placing the codeword in a group exponent. They prove distributional virtual black box (VBB) security in the generic group model for random conjunctions where $$|S| \ge 0.226n$$. While conjunction obfuscation is known from LWE [31, 47], these constructions rely on substantial technical machinery. In this work, we conduct an extensive study of simple conjunction obfuscation techniques. We abstract the Bishop et al. scheme to obtain an equivalent yet more efficient “dual” scheme that can handle conjunctions over exponential size alphabets. This scheme admits a straightforward proof of generic group security, which we combine with a novel combinatorial argument to obtain distributional VBB security for |S| of any size. If we replace the Reed-Solomon code with a random binary linear code, we can prove security from standard LPN and avoid encoding in a group. This addresses an open problem posed by Bishop et al. to prove security of this simple approach in the standard model. We give a new construction that achieves information theoretic distributional VBB security and weak functionality preservation for $$|S| \ge n - n^\delta $$ and $$\delta < 1$$. Assuming discrete log and $$\delta < 1/2$$, we satisfy a stronger notion of functionality preservation for computationally bounded adversaries while still achieving information theoretic security.
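The function class itself is tiny to write down. A minimal evaluator in the abstract's notation (the encoding and obfuscation are the whole point of the paper and are not sketched here):

    def make_conjunction(n, literals):
        """f(x) = AND over (i, sign) in literals of (x_i if sign else NOT x_i);
        S is the set of indices appearing in literals."""
        def f(x):
            assert len(x) == n
            return all((x[i] == 1) == sign for i, sign in literals)
        return f

    # f(x_0..x_3) = x_0 AND (NOT x_2), so S = {0, 2}
    f = make_conjunction(4, [(0, True), (2, False)])
    assert f([1, 0, 0, 1]) and not f([1, 0, 1, 1])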
2019
EUROCRYPT
Distributional Collision Resistance Beyond One-Way Functions 📺
Distributional collision resistance is a relaxation of collision resistance that only requires that it is hard to sample a collision (x, y) where x is uniformly random and y is uniformly random conditioned on colliding with x. The notion lies between one-wayness and collision resistance, but its exact power is still not well-understood. On one hand, distributional collision resistant hash functions cannot be built from one-way functions in a black-box way, which may suggest that they are stronger. On the other hand, so far, they have not yielded any applications beyond one-way functions. Assuming distributional collision resistant hash functions, we construct a constant-round statistically hiding commitment scheme. Such commitments are not known based on one-way functions, and are impossible to obtain from one-way functions in a black-box way. Our construction relies on the reduction from inaccessible entropy generators to statistically hiding commitments by Haitner et al. (STOC ’09). In the converse direction, we show that two-message statistically hiding commitments imply distributional collision resistance, thereby establishing a loose equivalence between the two notions. A corollary of the first result is that constant-round statistically hiding commitments are implied by average-case hardness in the class $${\textsf {SZK}}$$ (which is known to imply distributional collision resistance). This implication seems to be folklore, but to the best of our knowledge has not been proven explicitly. We provide yet another proof of this implication, which is arguably more direct than the one going through distributional collision resistance.
2019
EUROCRYPT
Multi-target Attacks on the Picnic Signature Scheme and Related Protocols 📺
Picnic is a signature scheme that was presented at ACM CCS 2017 by Chase et al. and submitted to NIST’s post-quantum standardization project. Among all submissions to NIST’s project, Picnic is one of the most innovative, making use of recent progress in construction of practically efficient zero-knowledge (ZK) protocols for general circuits. In this paper, we devise multi-target attacks on Picnic and its underlying ZK protocol, ZKB++. Given access to S signatures, produced by a single or by several users, our attack can (information theoretically) recover the $$\kappa$$-bit signing key of a user in complexity of about $$2^{\kappa - 7}/S$$. This is faster than Picnic’s claimed $$2^{\kappa }$$ security against classical (non-quantum) attacks by a factor of $$2^7 \cdot S$$ (as each signature contains about $$2^7$$ attack targets). Whereas in most multi-target attacks, the attacker can easily sort and match the available targets, this is not the case in our attack on Picnic, as different bits of information are available for each target. Consequently, it is challenging to reach the information theoretic complexity in a computational model, and we had to perform cryptanalytic optimizations by carefully analyzing ZKB++ and its underlying circuit. Our best attack for $$\kappa = 128$$ has time complexity of $$T = 2^{77}$$ for $$S = 2^{64}$$. Alternatively, we can reach the information theoretic complexity of $$T = 2^{64}$$ for $$S = 2^{57}$$, given that all signatures are produced with the same signing key. Our attack exploits a weakness in the way that the Picnic signing algorithm uses a pseudo-random generator. The weakness is fixed in the recent Picnic 2.0 version. In addition to our attack on Picnic, we show that a recently proposed improvement of the ZKB++ protocol (due to Katz, Kolesnikov and Wang) is vulnerable to a similar multi-target attack.
2019
EUROCRYPT
Durandal: A Rank Metric Based Signature Scheme 📺
We describe a variation of the Schnorr-Lyubashevsky approach to devising signature schemes that is adapted to rank-based cryptography. This new approach enables us to obtain a randomization of the signature, which previously seemed difficult to derive for code-based cryptography. We provide a detailed analysis of attacks and an EUF-CMA proof for our scheme. Our scheme relies on the security of the Ideal Rank Support Learning and Ideal Rank Syndrome problems and a newly introduced problem, Product Spaces Subspaces Indistinguishability, for which we give a detailed analysis. Overall, the parameters we propose are efficient and comparable in terms of signature size to the Dilithium lattice-based scheme, with a signature size of 4 kB for a public key of size less than 20 kB.
2019
EUROCRYPT
SeaSign: Compact Isogeny Signatures from Class Group Actions 📺
We give a new signature scheme for isogenies that combines the class group actions of CSIDH with the notion of Fiat-Shamir with aborts. Our techniques allow for signatures of size less than one kilobyte at the 128-bit security level, even with a tight security reduction (to a non-standard problem) in the quantum random oracle model. Hence our signatures are potentially shorter than lattice signatures, but signing and verification are currently very expensive.
2019
EUROCRYPT
Differential Privacy and the People's Data 📺
IACR Distinguished Lecture
2019
FSE
On Invariant Attacks 📺
Invited talk
2019
TCHES
Methodology for Efficient CNN Architectures in Profiling Attacks 📺
The side-channel community recently investigated a new approach, based on deep learning, to significantly improve profiled attacks against embedded systems. Previous works have shown the benefit of using convolutional neural networks (CNN) to limit the effect of some countermeasures such as desynchronization. Compared with template attacks, deep learning techniques can deal with trace misalignment and the high dimensionality of the data, and pre-processing is no longer mandatory. However, the performance of attacks depends to a great extent on the choice of each hyperparameter used to configure a CNN architecture. Hence, we cannot perfectly harness the potential of deep neural networks without a clear understanding of the network’s inner workings. To reduce this gap, we propose to clearly explain the role of each hyperparameter during the feature selection phase using specific visualization techniques, including Weight Visualization, Gradient Visualization and Heatmaps. By highlighting which features are retained by filters, heatmaps come in handy when a security evaluator tries to interpret and understand the efficiency of a CNN. We propose a methodology for building efficient CNN architectures in terms of attack efficiency and network complexity, even in the presence of desynchronization. We evaluate our methodology using public datasets with and without desynchronization. In each case, our methodology outperforms the previous state-of-the-art CNN models while significantly reducing network complexity. Our networks are up to 25 times more efficient than the previous state-of-the-art while their complexity is up to 31810 times smaller. Our results show that CNNs do not need to be very complex to perform well in the side-channel context.
2019
TCHES
Recovering the CTR_DRBG state in 256 traces 📺
The NIST CTR_DRBG specification prescribes a maximum size on each random number request, limiting the number of encryptions in CTR mode with the same key to 4096. Jaffe’s attack on AES in CTR mode without knowledge of the nonce from CHES 2007 requires $$2^{16}$$ traces, which is safely above this recommendation. In this work, we exhibit an attack that requires only 256 traces, which is well within the NIST limits. We use simulated traces to investigate the success probability as a function of the signal-to-noise ratio. We also demonstrate its success in practice by attacking an AES-CTR implementation on a Cortex-M4, among others, and recovering both the key and nonce. Our traces and code are made openly available for reproducibility.
2019
TCHES
Lightweight Authenticated Encryption Mode of Operation for Tweakable Block Ciphers 📺
The use of a small block length is a common strategy when designing lightweight (tweakable) block ciphers (TBCs), and several 64-bit primitives have been proposed. However, when such a 64-bit primitive is used for authenticated encryption with birthday-bound security, it has only 32-bit data complexity, which is subject to practical attacks. To employ a short block length without compromising security, we propose PFB, a lightweight TBC-based authenticated encryption with associated data mode, which achieves beyond-birthday-bound security. For this purpose, we extend iCOFB, which is originally defined with a tweakable random function. Unlike iCOFB, the proposed method can be instantiated with a TBC using a fixed tweak length and can handle variable-length data. Moreover, its security bound is improved and independent of the data length; this improves the key lifetime, particularly for lightweight block ciphers with a small block size. The proposed method also covers a broader class of feedback functions because of the generalization presented in our proof. We evaluate the concrete hardware performance of PFB, which benefits from the small block length and shows particularly good performance in threshold implementations.
2019
TCHES
SITM: See-In-The-Middle Side-Channel Assisted Middle Round Differential Cryptanalysis on SPN Block Ciphers 📺
Side-channel analysis constitutes a powerful attack vector against cryptographic implementations. Techniques such as power and electromagnetic side-channel analysis have been extensively studied to provide an efficient way to recover the secret key used in cryptographic algorithms. To protect against such attacks, countermeasure designers have developed protection methods, such as masking and hiding, to make the attacks harder. However, due to significant overheads, these protections are sometimes deployed only at the beginning and the end of encryption, which are the main targets for side-channel attacks. In this paper, we present a methodology for side-channel assisted differential cryptanalysis that targets the middle rounds of block cipher implementations. Such a method presents a powerful attack vector against designs that normally only protect the beginning and end rounds of ciphers. We generalize the attack to SPN-based ciphers and calculate the effort the attacker needs to recover the secret key. We provide experimental results on 8-bit and 32-bit microcontrollers. We provide case studies on state-of-the-art symmetric block ciphers, such as AES, SKINNY, and PRESENT. Furthermore, we show how to attack shuffling-protected implementations.
2019
TCHES
Power Analysis on NTRU Prime 📺
This paper applies a variety of power analysis techniques to several implementations of NTRU Prime, a Round 2 submission to the NIST PQC Standardization Project. The techniques include vertical correlation power analysis, horizontal in-depth correlation power analysis, online template attacks, and chosen-input simple power analysis. The implementations include the reference one, the one optimized using smladx, and three protected ones. Adversaries in this study can fully recover private keys with a single trace of short observation span, with few template traces from a fully controlled device similar to the target and no a priori power model, or sometimes even with the naked eye. The techniques target the constant-time generic polynomial multiplications in the product scanning method. Though in this work they focus on the decapsulation, they also work on the key generation and encapsulation of NTRU Prime. Moreover, they apply to the ideal-lattice-based cryptosystems where each private-key coefficient comes from a small set of possibilities.
2019
TCHES
Share-slicing: Friend or Foe? 📺
Masking is a well-loved and widely deployed countermeasure against side-channel attacks, in particular in software. Under certain assumptions (w.r.t. independence and noise level), masking provably prevents attacks up to a certain security order and leads to a predictable increase in the number of required leakages for successful attacks beyond this order. The noise level in typical processors where software masking is used may not be very high, thus low masking orders are not sufficient for real-world security. Higher-order masking however comes at a great cost, and therefore a number of techniques have been published over the years that make such implementations more efficient via parallelisation in the form of bit or share slicing. We take two highly regarded schemes (ISW and Barthe et al.), and some corresponding open-source implementations that make use of share slicing, and discuss their true security on an ARM Cortex-M0 and an ARM Cortex-M3 processor (both from the LPC series). We show that micro-architectural features of the M0 and M3 undermine the independence assumptions made in masking proofs and thus their theoretical guarantees do not translate into practice (worse, it seems unpredictable at which order leaks can be expected). Our results demonstrate how difficult it is to link theoretical security proofs to practical real-world security guarantees.
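For context, first-order Boolean masking splits every secret into two shares whose XOR is the secret, and linear operations are computed share-wise; share slicing packs shares of many variables into one machine word to parallelise this, which is exactly where the word-level independence assumption discussed above can fail. A minimal first-order masking sketch (not any of the cited implementations):

    import secrets

    def mask(x, width=8):
        """Split x into two shares with x = s0 XOR s1; each share alone is uniform."""
        s0 = secrets.randbits(width)
        return s0, s0 ^ x

    def masked_xor(a, b):
        """Linear ops are computed share-wise, never recombining the secret."""
        return a[0] ^ b[0], a[1] ^ b[1]

    x, y = 0x3A, 0xC5
    sz = masked_xor(mask(x), mask(y))
    assert sz[0] ^ sz[1] == x ^ y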
2019
TCHES
CAS-Lock: A Security-Corruptibility Trade-off Resilient Logic Locking Scheme 📺
Logic locking has recently been proposed as a solution for protecting gate-level semiconductor intellectual property (IP). However, numerous attacks have been mounted on this technique, which either compromise the locking key or restore the original circuit functionality. SAT attacks leverage golden IC information to rule out all incorrect key classes, while bypass and removal attacks exploit the limited output corruptibility and/or structural traces of SAT-resistant locking schemes. In this paper, we propose a new lightweight locking technique, CAS-Lock (cascaded locking), which nullifies both SAT and bypass attacks while simultaneously maintaining nontrivial output corruptibility. This property of CAS-Lock is in stark contrast to the well-accepted notion that there is an inherent trade-off between output corruptibility and SAT resistance. We theoretically and experimentally validate the SAT resistance of CAS-Lock and show that it reduces the attack to brute force, regardless of its construction. Further, we evaluate its resistance to recently proposed approximate SAT attacks (i.e., AppSAT). We also propose a modified version of CAS-Lock (mirrored CAS-Lock, or M-CAS) to protect against removal attacks. M-CAS allows a trade-off evaluation between removal-attack and SAT-attack resiliency while incurring minimal area overhead. We also show how M-CAS parameters, such as the implemented Boolean function and the selected key, can be tuned by the designer so that a desired level of protection against all known attacks can be achieved.
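As a rough functional picture of cascaded locking, consider the toy model below (an assumption for illustration, not the paper's actual netlist): key bits drive a chain of alternating AND and OR gates seeded by a circuit signal, and a wrong key makes the cascade's output disagree with the correct key's, corrupting the locked output on some inputs.

```python
from itertools import product

KEY = [1, 0, 1, 1, 0, 1]            # correct key bits (hypothetical example)

def cas_block(signal_bit: int, key_bits) -> int:
    """Cascade of alternating AND/OR gates seeded by a circuit signal."""
    out = signal_bit
    for i, k in enumerate(key_bits):
        out = (out & k) if i % 2 == 0 else (out | k)
    return out

def locked_output(original_out: int, signal_bit: int, key_guess) -> int:
    # Functional model of locking: the correct key makes the flip signal 0,
    # so the original output is restored; wrong keys may corrupt it.
    flip = cas_block(signal_bit, key_guess) ^ cas_block(signal_bit, KEY)
    return original_out ^ flip

# Count how many of the 2^6 key guesses corrupt the output for at least one
# signal value -- a crude look at output corruptibility.
corrupting = sum(
    any(locked_output(1, s, guess) != 1 for s in (0, 1))
    for guess in product((0, 1), repeat=6)
)
print(f"{corrupting} of 64 key guesses corrupt the output")
```

The tension the paper resolves is visible even here: making few wrong keys corrupting helps SAT resistance but hurts corruptibility, and vice versa.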
2019
TCHES
Improved Heuristics for Short Linear Programs 📺
In this article, we propose new heuristics for minimising the number of XOR gates required to compute a system of linear equations over GF(2). We first revisit the well-known Boyar-Peralta strategy and argue that a proper randomisation process during the selection phases can lead to great improvements. We then propose new selection criteria and explain their rationale. Our new methods outperform state-of-the-art algorithms such as Paar or Boyar-Peralta (or open synthesis tools such as Yosys) when tested on random matrices with various densities. They can be applied to matrices of reasonable sizes (up to about 32 × 32). Notably, we provide a new implementation record for the matrix underlying the MixColumns function of the AES block cipher, requiring only 94 XORs.
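For context, here is a minimal sketch of Paar's greedy common-subexpression heuristic, one of the baselines the article compares against (this is the classic baseline, not the article's improved method). Rows are output targets over GF(2), columns are input variables; each extracted column pair becomes a new signal computed with one XOR gate.

```python
import itertools
import random

def paar_xor_count(matrix):
    """Greedy upper bound on XOR gates for computing y = M x over GF(2)."""
    rows = [set(i for i, b in enumerate(r) if b) for r in matrix]
    ncols = max((max(r) for r in rows if r), default=-1) + 1
    gates = 0
    while True:
        # Find the column pair occurring together in the most rows.
        best, best_pair = 1, None
        for i, j in itertools.combinations(range(ncols), 2):
            c = sum(1 for r in rows if i in r and j in r)
            if c > best:
                best, best_pair = c, (i, j)
        if best_pair is None:
            break
        i, j = best_pair
        for r in rows:                     # substitute the new signal x_i ^ x_j
            if i in r and j in r:
                r -= {i, j}
                r.add(ncols)
        gates += 1                         # one XOR gate creates the new signal
        ncols += 1
    # Remaining rows of weight w still need w - 1 XORs each.
    gates += sum(max(len(r) - 1, 0) for r in rows)
    return gates

random.seed(1)
M = [[random.randint(0, 1) for _ in range(8)] for _ in range(8)]
naive = sum(max(sum(r) - 1, 0) for r in M)
print(f"naive: {naive} XORs, Paar heuristic: {paar_xor_count(M)} XORs")
```

The article's point is that deterministic greedy choices like this one leave value on the table: randomising tie-breaks in the selection phase and rerunning yields noticeably shorter programs.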
2019
TCHES
Cache vs. Key-Dependency: Side Channeling an Implementation of Pilsung 📺
Over the past two decades, cache attacks have been identified as a threat to the security of cipher implementations. These attacks recover secret information by combining observations of the victim's cache accesses with knowledge of the internal structure of the cipher. So far, cache attacks have been applied to ciphers that have fixed state transformations, leaving open the question of whether using secret, key-dependent transformations enhances the security against such attacks. In this paper we investigate this question. We look at an implementation of the North Korean cipher Pilsung, as reverse-engineered by Kryptos Logic. Like AES, Pilsung is a permutation-substitution cipher, but unlike AES, both the substitution and the permutation steps in Pilsung depend on the key, and are not known to the attacker. We analyze Pilsung and design a cache-based attack. We improve the state of the art by developing techniques for reversing secret-dependent transformations. Our attack, which requires an average of eight minutes on a typical laptop computer, demonstrates that secret transformations do not necessarily protect ciphers against side-channel attacks.
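The core observation behind any such cache attack can be sketched in a few lines: a table lookup leaks which cache line it touches, i.e. the high-order bits of the (secret-dependent) index. The parameters below (64-byte lines, a 256-entry byte table) are conventional assumptions; the actual attack on Pilsung layers statistics and reverse-engineering of the key-dependent tables on top of this primitive.

```python
LINE_BYTES = 64                 # conventional cache-line size (assumption)
TABLE_ENTRIES = 256             # one byte per entry, as in a byte S-box

secret_index = 0xA7             # secret-dependent lookup index
observed_line = secret_index // LINE_BYTES   # what a cache attack reveals

# The observation narrows the secret index to one line's worth of values:
candidates = [i for i in range(TABLE_ENTRIES) if i // LINE_BYTES == observed_line]
print(f"observed line {observed_line} -> {len(candidates)} candidates "
      f"({candidates[0]:#04x}..{candidates[-1]:#04x})")
```

With key-dependent tables, the mapping from observed lines back to cipher state is itself unknown, which is exactly the extra inversion problem this paper solves.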
2019
TCHES
TEDT, a Leakage-Resist AEAD Mode for High Physical Security Applications 📺
We propose TEDT, a new Authenticated Encryption with Associated Data (AEAD) mode leveraging Tweakable Block Ciphers (TBCs). TEDT provides the following features: (i) It offers full leakage-resistance, that is, it limits the exploitability of physical leakages via side-channel attacks, even if these leakages happen during every message encryption and decryption operation. Moreover, the leakage integrity bound is asymptotically optimal in the multi-user setting. (ii) It offers nonce misuse-resilience, that is, the repetition of nonces does not impact the security of ciphertexts produced with fresh nonces. (iii) It can be implemented with a remarkably low energy cost when strong resistance to side-channel attacks is needed, supports online encryption, and handles static and incremental associated data efficiently. Concretely, TEDT encourages so-called leveled implementations, in which two TBCs are implemented: the first one needs strong and energy-demanding protections against side-channel attacks but is used in a limited way, while the other requires only weak and energy-efficient protections and performs the bulk of the computation. As a result, TEDT leads to more energy-efficient implementations than traditional AEAD schemes, whose side-channel security requires uniformly protecting every (T)BC execution.
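The leveled-implementation idea can be sketched schematically (this is not the actual TEDT mode): one strongly protected TBC call is the only operation that touches the long-term key, deriving a fresh ephemeral key per message, after which a cheaply implemented TBC performs the bulk of the work. Here `tbc` is a hypothetical stand-in modeled with a keyed hash; a real TBC and the real TEDT composition differ.

```python
import hashlib

def tbc(key: bytes, tweak: bytes, block: bytes) -> bytes:
    """Hypothetical 16-byte TBC modeled with a keyed hash (illustration only)."""
    return hashlib.blake2b(block, key=key, salt=tweak[:16], digest_size=16).digest()

protected_tbc = tbc   # heavily protected implementation: used once per message
fast_tbc = tbc        # cheap implementation: used once per block

def leveled_encrypt(long_term_key: bytes, nonce: bytes, blocks: list) -> list:
    # Level 1: the only call that sees the long-term key is strongly protected.
    ephemeral = protected_tbc(long_term_key, nonce, b"\x00" * 16)
    # Level 2: bulk processing uses the ephemeral key with the fast TBC.
    return [bytes(a ^ b for a, b in
                  zip(m, fast_tbc(ephemeral, i.to_bytes(16, "big"), nonce)))
            for i, m in enumerate(blocks)]

ct = leveled_encrypt(b"K" * 16, b"N" * 16, [b"m" * 16, b"p" * 16])
print([c.hex() for c in ct])
```

The energy argument follows directly: the expensive protections are amortized over one call per message instead of one call per block.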
2019
TCHES
Bluethunder: A 2-level Directional Predictor Based Side-Channel Attack against SGX 📺
Software Guard Extension (SGX) is a hardware-based trusted execution environment (TEE) implemented in recent Intel commodity processors. By isolating the memory of security-critical applications from untrusted software, this mechanism provides users with a strongly shielded environment called an enclave for executing programs safely. However, recent studies have demonstrated that SGX enclaves are vulnerable to side-channel attacks. In order to deal with these attacks, several protection techniques have been studied and utilized. In this paper, we explore a new pattern history table (PHT) based side-channel attack against SGX named Bluethunder, which can bypass existing protection techniques and reveal secret information inside an enclave. Compared to existing PHT-based attacks (such as BranchScope [ERAG+18]), Bluethunder abuses the 2-level directional predictor in the branch prediction unit, on top of which we develop an exploitation methodology to disclose the input-dependent control flow in an enclave. Since the cost of training the 2-level predictor is low, Bluethunder can achieve a high bandwidth during the attack. We evaluate our attacks on two case studies: extracting the format string information in the vfprintf function in the Intel SGX SDK and attacking the implementation of the RSA decryption algorithm in mbed TLS. Both attacks show that Bluethunder can recover fine-grained information inside an enclave with low training overhead, outperforming the latest PHT-based side-channel attack (BranchScope) by 52×. Specifically, in the second attack, Bluethunder can recover the RSA private key with 96.76% accuracy in a single run.
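To make the attack surface concrete, here is a minimal simulation of a 2-level directional predictor of the kind Bluethunder exploits: a shift register of recent branch outcomes (the pattern history) indexes a table of 2-bit saturating counters. The sizes are illustrative assumptions, and the real attack additionally requires co-residency with the enclave and timing measurements on actual hardware.

```python
HISTORY_BITS = 4

class TwoLevelPredictor:
    def __init__(self):
        self.history = 0                               # pattern history register
        self.counters = [1] * (1 << HISTORY_BITS)      # 2-bit counters, weakly not-taken

    def predict(self) -> bool:
        return self.counters[self.history] >= 2        # taken if counter in {2, 3}

    def update(self, taken: bool):
        c = self.counters[self.history]
        self.counters[self.history] = min(c + 1, 3) if taken else max(c - 1, 0)
        self.history = ((self.history << 1) | taken) & ((1 << HISTORY_BITS) - 1)

# An attacker sharing the predictor trains a pattern, then infers the victim's
# (secret-dependent) branch direction from whether its own prediction flips.
p = TwoLevelPredictor()
for _ in range(16):                                    # attacker training: always taken
    p.update(True)
print("prediction after training:", p.predict())       # True
p.update(False)                                        # victim's secret branch: not taken
print("prediction after victim ran:", p.predict())     # flips, leaking the outcome
```

Because only a handful of updates suffice to train the structure, the per-bit cost of probing it is small, which is where the bandwidth advantage over slower PHT probing comes from.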
2019
TCHES
A Comprehensive Study of Deep Learning for Side-Channel Analysis 📺
Recently, several studies have been published on the application of deep learning to enhance Side-Channel Attacks (SCA). These seminal works have practically validated the soundness of the approach, especially against implementations protected by masking or by jittering. Concurrently, important open issues have emerged. Among them, the relevance of machine (and thereby deep) learning based SCA has been questioned in several papers, based on the lack of relation between accuracy, a typical performance metric used in machine learning, and common SCA metrics like the guessing entropy or the key-discrimination success rate. Also, the impact of classical side-channel countermeasures on the efficiency of deep learning has been questioned, in particular by the semiconductor industry. Both questions highlight the importance of studying the theoretical soundness of deep learning in the context of side channels and of developing means to quantify its efficiency, especially with respect to the optimality bounds published so far in the literature for side-channel leakage exploitation. The first main contribution of this paper directly concerns the latter point. It is indeed proved that minimizing the Negative Log Likelihood (NLL for short) loss function during the training of deep neural networks is actually asymptotically equivalent to maximizing the Perceived Information introduced by Renauld et al. at EUROCRYPT 2011 as a lower bound of the Mutual Information between the leakage and the target secret. Hence, such training can be considered an efficient and effective estimation of the PI, and thereby of the MI (known to be complex to estimate accurately in the context of secure implementations). As a second direct consequence of our main contribution, it is argued that, in a side-channel exploitation context, choosing the NLL loss function to drive the training is sound from an information-theoretic point of view. As a third contribution, classical countermeasures like Boolean masking or execution-flow shuffling, initially dedicated to classical SCA, are proved to remain sound against deep-learning-based attacks.
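The NLL-to-PI link stated above is easy to instantiate numerically: for a uniform key byte, the Perceived Information is PI = H(K) + E[log2 p_model(k | leakage)], so the model's average NLL in bits on held-out traces directly gives a PI estimate. In the sketch below the "softmax outputs" are synthetic placeholders; in practice they would come from a trained network evaluated on attack traces.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_traces = 256, 1000

true_keys = rng.integers(0, n_classes, size=n_traces)
# Synthetic "softmax outputs", slightly biased toward the true class:
probs = rng.dirichlet(np.ones(n_classes), size=n_traces)
probs[np.arange(n_traces), true_keys] += 0.05
probs /= probs.sum(axis=1, keepdims=True)

# Average NLL (in bits) of the true class under the model:
nll_bits = -np.mean(np.log2(probs[np.arange(n_traces), true_keys]))
# PI estimate for a uniform key: H(K) - NLL = log2(256) - NLL.
perceived_information = np.log2(n_classes) - nll_bits
print(f"NLL = {nll_bits:.3f} bits, estimated PI = {perceived_information:.3f} bits")
```

Minimizing NLL during training is thus the same as pushing this PI estimate upward, which is the information-theoretic justification the paper gives for the NLL loss in a side-channel context.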