International Association for Cryptologic Research

CryptoDB

Recent videos of IACR talks

2022
CRYPTO
Oblivious Message Retrieval 📺
Anonymous message delivery systems, such as private messaging services and privacy-preserving payment systems, need a mechanism for recipients to retrieve the messages addressed to them, without leaking metadata or letting their messages be linked. Recipients could download all posted messages and scan for those addressed to them, but communication and computation costs are excessive at scale. We show how untrusted servers can detect messages on behalf of recipients, and summarize these into a compact encrypted digest that recipients can easily decrypt. These servers operate obliviously and do not learn anything about which messages are addressed to which recipients. Privacy, soundness, and completeness hold even if everyone but the recipient is adversarial and colluding (unlike in prior schemes). Our starting point is an asymptotically-efficient approach, using Fully Homomorphic Encryption and homomorphically-encoded Sparse Random Linear Codes. We then address the concrete performance using bespoke tailoring of lattice-based cryptographic components, alongside various algebraic and algorithmic optimizations. This reduces the digest size to a few bits per message scanned. Concretely, the servers' cost is ~$1 per million messages scanned, and the resulting digests can be decoded by recipients in ~20ms. Our schemes can thus practically attain the strongest form of receiver privacy for current applications such as privacy-preserving cryptocurrencies.
2022
CRYPTO
Time-Space Lower Bounds for Finding Collisions in Merkle-Damgard Hash Functions 📺
We revisit the problem of finding $B$-block-long collisions in Merkle-Damgard hash functions in the auxiliary-input random oracle model, in which an attacker gets a piece of $S$-bit advice about the random oracle and makes $T$ oracle queries. Akshima, Cash, Drucker and Wee (CRYPTO 2020), based on the work of Coretti, Dodis, Guo and Steinberger (EUROCRYPT 2018), showed a simple attack for $2\leq B\leq T$ (with respect to a random salt). The attack achieves advantage $\tilde{\Omega}(STB/2^n+T^2/2^n)$ where $n$ is the output length of the random oracle. They conjectured that this attack is optimal. However, this so-called STB conjecture was only proved for $B\approx T$ and $B=2$. Very recently, Ghoshal and Komargodski (CRYPTO 2022) confirmed the STB conjecture for all constant values of $B$, and provided an $\tilde{O}(S^4TB^2/2^n+T^2/2^n)$ bound for all choices of $B$. In this work, we prove an $\tilde{O}((STB/2^n)\cdot\max\{1,ST^2/2^n\}+T^2/2^n)$ bound for every $2<B<T$. Our bound confirms the STB conjecture for $ST^2\leq 2^n$, and is optimal up to a factor of $S$ for $ST^2>2^n$ (note that we may assume $T^2\leq 2^n$, since otherwise a collision can be found trivially by the birthday attack). Our result subsumes all previous upper bounds for all ranges of parameters except for $B=\tilde{O}(1)$ and $ST^2>2^n$. We obtain our results by adopting and refining the technique of Chung, Guo, Liu, and Qian (FOCS 2020). Our approach yields more modular proofs and sheds light on how to bypass the limitations of prior techniques. Along the way, we obtain a considerably simpler and illuminating proof for $B=2$, recovering the main result of Akshima, Cash, Drucker and Wee.
2022
CRYPTO
Universally Composable End-to-End Secure Messaging 📺
We model and analyze the Signal end-to-end messaging protocol within the UC framework. In particular:
- We formulate an ideal functionality that captures end-to-end secure messaging, in a setting with PKI and an untrusted server, against an adversary that has full control over the network and can adaptively and momentarily compromise parties at any time and obtain their entire internal states. In particular, our analysis captures the forward secrecy and recovery-of-security properties of Signal and the conditions under which they break.
- We model the main components of the Signal architecture (PKI and long-term keys, the backbone continuous-key-exchange or "asymmetric ratchet," epoch-level symmetric ratchets, authenticated encryption) as individual ideal functionalities that are realized and analyzed separately and then composed using the UC and Global-State UC theorems.
- We show how the ideal functionalities representing these components can be realized using standard cryptographic primitives under minimal hardness assumptions.
Our modeling introduces additional innovations that enable arguing about the security of Signal irrespective of the underlying communication medium, as well as secure composition of dynamically generated modules that share state. These features, together with the basic modularity of the UC framework, will hopefully facilitate the use of both Signal-as-a-whole and its individual components within cryptographic applications. Two other features of our modeling are the treatment of fully adaptive corruptions and the minimal use of random oracle abstractions. In particular, we show how to realize continuous key exchange in the plain model, while preserving security against adaptive corruptions.
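For readers unfamiliar with the components being modeled: the epoch-level symmetric ratchet is, at its core, a one-way key-derivation chain. The following is a minimal illustrative sketch of a single ratchet step, not the paper's ideal functionality; the HMAC-based derivation and the separation labels are assumptions chosen here for illustration.

```python
import hmac
import hashlib

def ratchet_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """One symmetric-ratchet step: derive a message key and the next chain key.

    The labels b"\x01" / b"\x02" are illustrative, not Signal's actual constants.
    The chain only moves forward, so compromising the current chain key reveals
    nothing about message keys derived in earlier steps (forward secrecy).
    """
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

# Example: derive three message keys from an initial epoch secret.
ck = hashlib.sha256(b"epoch secret from the asymmetric ratchet").digest()
for i in range(3):
    mk, ck = ratchet_step(ck)
    print(f"message key {i}: {mk.hex()[:16]}...")
```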
2022
CRYPTO
Correlated Pseudorandomness from Expand-Accumulate Codes 📺
A pseudorandom correlation generator (PCG) is a recent tool for securely generating useful sources of correlated randomness, such as random oblivious transfers (OT) and vector oblivious linear evaluations (VOLE), with low communication cost. We introduce a simple new design for PCGs based on so-called expand-accumulate codes, which first apply a sparse random expander graph to replicate each message entry, and then accumulate the entries by computing the sum of each prefix. Our design offers the following advantages compared to state-of-the-art PCG constructions:
- Competitive concrete efficiency backed by provable security against relevant classes of attacks;
- An offline-online mode that combines near-optimal cache-friendliness with simple parallelization;
- Concretely efficient extensions to pseudorandom correlation functions, which enable incremental generation of new correlation instances on demand, and to new kinds of correlated randomness that include circuit-dependent correlations.
To further improve the concrete computational cost, we propose a method for speeding up a full-domain evaluation of a puncturable pseudorandom function (PPRF). This is independently motivated by other cryptographic applications of PPRFs.
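To make the "expand, then accumulate" encoding step concrete, here is a toy sketch over F_2 with made-up parameters; it illustrates only the code structure described above, not the authors' optimized construction or its security-relevant parameter choices.

```python
import random

def expand_accumulate_encode(message: list[int], out_len: int, sparsity: int = 3,
                             seed: int = 0) -> list[int]:
    """Toy expand-accumulate encoding over F_2.

    Expand: each of the out_len intermediate bits is the XOR of `sparsity`
    randomly chosen message bits (a sparse random expander, fixed by `seed`).
    Accumulate: replace each bit by the XOR of the prefix ending at it.
    All parameters here are illustrative only.
    """
    rng = random.Random(seed)
    expanded = []
    for _ in range(out_len):
        bit = 0
        for idx in rng.sample(range(len(message)), sparsity):
            bit ^= message[idx]
        expanded.append(bit)
    # Accumulate: running prefix XOR.
    acc = 0
    codeword = []
    for bit in expanded:
        acc ^= bit
        codeword.append(acc)
    return codeword

msg = [random.randint(0, 1) for _ in range(16)]
print(expand_accumulate_encode(msg, out_len=32))
```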
2022
CRYPTO
Better than Advertised Security for Non-Interactive Threshold Signatures 📺
We give a unified syntax, and a hierarchy of definitions of security of increasing strength, for non-interactive threshold signature schemes. These are schemes having a single-round signing protocol, possibly with one prior round of message-independent pre-processing. We fit FROST1 and BLS, which are leading practical schemes, into our hierarchy, in particular showing they meet stronger security definitions than they have been shown to meet so far. We also fit into our hierarchy FROST2, a more efficient version of FROST1 that we give. These definitions and results, for simplicity, all assume trusted key generation. Finally, we prove the security of FROST2 with key generation performed by an efficient distributed key generation protocol.
2022
CRYPTO
Quantum Commitments and Signatures without One-Way Functions 📺
In the classical world, the existence of commitments is equivalent to the existence of one-way functions. In the quantum setting, on the other hand, commitments are not known to imply one-way functions, but all known constructions of quantum commitments use at least one-way functions. Are one-way functions really necessary for commitments in the quantum world? In this work, we show that non-interactive quantum commitments (for classical messages) with computational hiding and statistical binding exist if pseudorandom quantum states exist. Pseudorandom quantum states are sets of quantum states that are efficiently generated but their polynomially many copies are computationally indistinguishable from the same number of copies of Haar random states [Ji, Liu, and Song, CRYPTO 2018]. It is known that pseudorandom quantum states exist even if BQP = QMA (relative to a quantum oracle) [Kretschmer, TQC 2021], which means that pseudorandom quantum states can exist even if no quantum-secure classical cryptographic primitive exists. Our result therefore shows that quantum commitments can exist even if no quantum-secure classical cryptographic primitive exists. In particular, quantum commitments can exist even if no quantum-secure one-way function exists. In this work, we also consider digital signatures, which are other fundamental primitives in cryptography. We show that one-time secure digital signatures with quantum public keys exist if pseudorandom quantum states exist. In the classical setting, the existence of digital signatures is equivalent to the existence of one-way functions. Our result, on the other hand, shows that quantum signatures can exist even if no quantum-secure classical cryptographic primitive (including quantum-secure one-way functions) exists.
2022
CRYPTO
Implicit White-Box Implementations: White-Boxing ARX Ciphers 📺
Since the first white-box implementation of AES was published twenty years ago, no significant progress has been made in the design of secure implementations against an attacker with full control of the device. Designing white-box implementations of existing block ciphers is a challenging problem, as all proposals have been broken. Only two white-box design strategies have been published thus far: the CEJO framework, which can only be applied to ciphers with small S-boxes, and self-equivalence encodings, which were only applied to AES. In this work we propose implicit implementations, a new design of white-box implementations based on implicit functions, and we show that current generic attacks that break CEJO or self-equivalence implementations are not successful against implicit implementations. The generation and the security of implicit implementations are related to the self-equivalences of the non-linear layer of the cipher, and we propose a new method to obtain self-equivalences based on CCZ-equivalence. We implemented this method and many other functionalities in a new open-source tool, BoolCrypt, which we used to obtain for the first time affine, linear, and even quadratic self-equivalences of the permuted modular addition. Using the implicit framework and these self-equivalences, we describe for the first time a practical white-box implementation of a generic Addition-Rotation-XOR (ARX) cipher, and we provide an open-source tool to easily generate implicit implementations of ARX ciphers.
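For context on the target class: an ARX round mixes its state using only modular Addition, Rotation, and XOR. The sketch below is a generic toy round, not any specific cipher from the paper; the word size, rotation amounts, and key-injection point are arbitrary choices for illustration.

```python
MASK = 0xFFFF  # toy 16-bit words

def rol(x: int, r: int) -> int:
    """Rotate a 16-bit word left by r bits."""
    return ((x << r) | (x >> (16 - r))) & MASK

def arx_round(x: int, y: int, k: int) -> tuple[int, int]:
    """One generic Addition-Rotation-XOR round on a two-word state.

    Rotation amounts (7, 2) and the key injection are illustrative only.
    """
    x = (rol(x, 7) + y) & MASK   # Addition (mod 2^16) after a Rotation
    x ^= k                       # XOR the round key
    y = rol(y, 2) ^ x            # Rotation and XOR to diffuse into y
    return x, y

state = (0x1234, 0x5678)
for round_key in (0x0F0F, 0x3C3C, 0xA5A5):
    state = arx_round(*state, round_key)
print(f"state after 3 rounds: {state[0]:04x} {state[1]:04x}")
```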
2022
CRYPTO
Simplified MITM Modeling for Permutations: New (Quantum) Attacks 📺
Meet-in-the-middle (MITM) is a general paradigm where internal states are computed along two independent paths ('forwards' and 'backwards') that are then matched. Over time, MITM attacks improved using more refined techniques and exploiting additional freedoms and structure, which makes it more involved to find and optimize such attacks. This has led to the use of detailed attack models for generic solvers to automatically search for improved attacks, notably a MILP model developed by Bao et al. at EUROCRYPT 2021. In this paper, we study a simpler MILP modeling combining a greatly reduced attack representation as input to the generic solver, together with a theoretical analysis that, for any solution, proves the existence and complexity of a detailed attack. This modeling allows to find both classical and quantum attacks on a broad class of cryptographic permutations. First, Present-like constructions, with the permutations of the Spongent hash functions: we improve the MITM step in distinguishers by up to 3 rounds. Second, AES-like designs: despite being much simpler than Bao et al.'s, our model allows to recover the best previous results. The only limitation is that we do not use degrees of freedom from the key schedule. Third, we show that the model can be extended to target more permutations, like Feistel networks. In this context we give new Guess-and-determine attacks on reduced Simpira v2 and Sparkle. Finally, using our model, we find several new quantum preimage and pseudo-preimage attacks (e.g. Haraka v2, Simpira v2 ... ) targeting the same number of rounds as the classical attacks.
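To recall the basic paradigm in its simplest form, here is a textbook meet-in-the-middle key search on a toy double encryption; it illustrates only the forward/backward matching idea, not the MILP-based modeling of the paper, and the toy "cipher" is invented for the example.

```python
def toy_encrypt(block: int, key: int) -> int:
    """Toy 8-bit 'cipher': add the key, then rotate left by 3 (illustration only)."""
    x = (block + key) & 0xFF
    return ((x << 3) | (x >> 5)) & 0xFF

def toy_decrypt(block: int, key: int) -> int:
    x = ((block >> 3) | (block << 5)) & 0xFF
    return (x - key) & 0xFF

def mitm_double_encryption(plaintext: int, ciphertext: int) -> list[tuple[int, int]]:
    """Recover candidate (k1, k2) with E_k2(E_k1(p)) = c by meeting in the middle."""
    forward = {}
    for k1 in range(256):                      # forward path: tabulate middle values
        forward.setdefault(toy_encrypt(plaintext, k1), []).append(k1)
    candidates = []
    for k2 in range(256):                      # backward path: look up a match
        mid = toy_decrypt(ciphertext, k2)
        for k1 in forward.get(mid, []):
            candidates.append((k1, k2))
    return candidates

k1, k2 = 0x3A, 0xC5
p = 0x42
c = toy_encrypt(toy_encrypt(p, k1), k2)
print((k1, k2) in mitm_double_encryption(p, c))  # True, among other candidates
```

The point of the example is the cost profile: two tables of 2^8 computations each, instead of 2^16 for exhaustive search over both keys.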
2022
CRYPTO
Batch Arguments for NP and More from Standard Bilinear Group Assumptions 📺
Non-interactive batch arguments for NP provide a way to amortize the cost of NP verification across multiple instances. They enable a prover to convince a verifier of multiple NP statements with communication much smaller than the total witness length and verification time much smaller than individually checking each instance. In this work, we give the first construction of a non-interactive batch argument for NP from standard assumptions on groups with bilinear maps (specifically, from either the subgroup decision assumption in composite-order groups or from the k-Lin assumption in prime-order groups for any k >= 1). Previously, batch arguments for NP were only known from LWE, or a combination of multiple assumptions, or from non-standard/non-falsifiable assumptions. Moreover, our work introduces a new direct approach for batch verification and avoids heavy tools like correlation-intractable hash functions or probabilistically-checkable proofs common to previous approaches. As corollaries to our main construction, we obtain the first publicly-verifiable non-interactive delegation scheme for RAM programs (i.e., a succinct non-interactive argument (SNARG) for P) with a CRS of sublinear size (in the running time of the RAM program), as well as the first aggregate signature scheme (supporting bounded aggregation) from standard assumptions on bilinear maps.
2022
CRYPTO
Constructive Post-Quantum Reductions 📺
Is it possible to convert classical reductions into post-quantum ones? It is customary to argue that while this is problematic in the interactive setting, non-interactive reductions do carry over. However, when considering quantum auxiliary input, this conversion results in a *non-constructive* post-quantum reduction that requires duplicating the quantum auxiliary input, which is in general inefficient or even impossible. This violates the win-win premise of provable cryptography: an attack against a cryptographic primitive should lead to an algorithmic advantage. We initiate the study of constructive quantum reductions and present positive and negative results for converting large classes of classical reductions to the post-quantum setting in a constructive manner. We show that any non-interactive non-adaptive reduction from assumptions with a polynomial solution space (such as decision assumptions) can be made post-quantum constructive. In contrast, assumptions with super-polynomial solution space (such as general search assumptions) cannot be generally converted. Along the way, we make several additional contributions: 1. We put forth a framework for reductions (or general interaction) with *stateful* solvers for a computational problem, that may change their internal state between consecutive calls. We show that such solvers can still be utilized. This framework and our results are meaningful even in the classical setting. 2. A consequence of our negative result is that quantum auxiliary input that is useful against a problem with a super-polynomial solution space cannot be generically ``restored'' post-measurement. This shows that the novel rewinding technique of Chiesa et al.\ (FOCS 2021) is tight in the sense that it cannot be extended beyond a polynomial measurement space.
2022
CRYPTO
An Algebraic Framework for Silent Preprocessing with Trustless Setup and Active Security 📺
Recently, number-theoretic assumptions including DDH, DCR and QR have been used to build powerful tools for secure computation, in the form of homomorphic secret-sharing (HSS), which leads to secure two-party computation protocols with succinct communication, and pseudorandom correlation functions (PCFs), which allow non-interactive generation of a large quantity of correlated randomness. In this work, we present a group-theoretic framework for these classes of constructions, which unifies their approach to computing distributed discrete logarithms in various groups. We cast existing constructions in our framework, and also present new constructions, including one based on class groups of imaginary quadratic fields. This leads to the first construction of two-party homomorphic secret sharing for branching programs from class group assumptions. Using our framework, we also obtain pseudorandom correlation functions for generating oblivious transfer and vector-OLE correlations from number-theoretic assumptions. These have a trustless, public-key setup when instantiating our framework using class groups. Previously, such constructions either needed a trusted setup in the form of an RSA modulus with unknown factorisation, or relied on multi-key fully homomorphic encryption from the learning with errors assumption. We also show how to upgrade our constructions to achieve active security using appropriate zero-knowledge proofs. In the random oracle model, this leads to a one-round, actively secure protocol for setting up the PCF, as well as a 3-round, actively secure HSS-based protocol for secure two-party computation of branching programs with succinct communication.
2022
CRYPTO
Some Easy Instances of Ideal-SVP and Implications to the Partial Vandermonde Knapsack Problem 📺
In this article, we generalize the works of Pan et al. (Eurocrypt'21) and Porter et al. (ArXiv'21) and provide a simple condition under which an ideal lattice defines an easy instance of the shortest vector problem. Namely, we show that the more automorphisms stabilize the ideal, the easier it is to find a short vector in it. This observation was already made for prime ideals in Galois fields, and we generalize it to any ideal (whose prime factors are not ramified) of any number field. We then provide a cryptographic application of this result by showing that particular instances of the partial Vandermonde knapsack problem, also known as partial Fourier recovery problem, can be solved classically in polynomial time. As a proof of concept, we implemented our attack and managed to solve those particular instances for concrete parameter settings proposed in the literature. For random instances, we can halve the lattice dimension with non-negligible probability.
2022
CRYPTO
Public-Key Watermarking Schemes for Pseudorandom Functions 📺
A software watermarking scheme can embed a message into a program while preserving its functionality. The embedded message can be extracted later by an extraction algorithm, and no one could remove it without significantly changing the functionality of the program. A watermarking scheme is public key if neither the marking procedure nor the extraction procedure needs a watermarking secret key. Prior constructions of watermarking schemes mainly focus on watermarking pseudorandom functions (PRFs), and the major open problem in this direction is to construct a public-key watermarkable PRF. In this work, we solve the open problem via constructing public-key watermarkable PRFs with different trade-offs from various assumptions, ranging from standard lattice assumptions to the existence of indistinguishability obfuscation. To achieve the results, we first construct watermarking schemes in a weaker model, where the extraction algorithm is provided with a “hint” about the watermarked PRF key. Then we upgrade the constructions to standard watermarking schemes using a robust unobfuscatable PRF. We also provide the first construction of robust unobfuscatable PRF in this work, which is of independent interest.
2022
CRYPTO
Rotational Differential-Linear Distinguishers of ARX Ciphers with Arbitrary Output Linear Masks 📺
The rotational differential-linear attack, proposed at EUROCRYPT 2021, is a generalization of differential-linear attacks obtained by replacing the differential part of the attacks with rotational differentials. At EUROCRYPT 2021, Liu et al. presented a method based on Morawiecki et al.'s technique (FSE 2013) for evaluating the rotational differential-linear correlations for the special cases where the output linear masks are unit vectors. With this method, some powerful (rotational) differential-linear distinguishers with output linear masks being unit vectors were discovered against Friet, Xoodoo, and Alzette. However, how to compute the rotational differential-linear correlations for arbitrary output masks was left open. In this work, we partially solve this open problem by presenting an efficient algorithm for computing the (rotational) differential-linear correlation of modular additions for arbitrary output linear masks, based on which a technique for evaluating the (rotational) differential-linear correlation of ARX ciphers is derived. We apply the technique to Alzette, SipHash, ChaCha, and Speck. As a result, significantly improved (rotational) differential-linear distinguishers, including deterministic ones, are identified. All results of this work are practical and experimentally verified to confirm the validity of our methods. In addition, we try to explain the experimental distinguishers against ChaCha employed at FSE 2008, FSE 2016, and CRYPTO 2020. The predicted correlations are close to the experimental ones.
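For intuition about the quantity being computed, the (non-rotational) differential-linear correlation of modular addition can be estimated by brute-force sampling. The sketch below is only an empirical Monte Carlo check, not the efficient algorithm of the paper; the word size, differences, and mask are arbitrary.

```python
import random

N_BITS = 16
MASK = (1 << N_BITS) - 1

def parity(x: int) -> int:
    return bin(x).count("1") & 1

def dl_correlation(dx: int, dy: int, out_mask: int, samples: int = 200_000) -> float:
    """Empirical differential-linear correlation of (x, y) -> (x + y) mod 2^n.

    Estimates E[(-1)^{out_mask . (F(x, y) XOR F(x XOR dx, y XOR dy))}]
    over uniform x, y. Purely an illustrative sanity check.
    """
    acc = 0
    for _ in range(samples):
        x = random.getrandbits(N_BITS)
        y = random.getrandbits(N_BITS)
        z1 = (x + y) & MASK
        z2 = ((x ^ dx) + (y ^ dy)) & MASK
        acc += 1 - 2 * parity(out_mask & (z1 ^ z2))
    return acc / samples

# A difference only in the most significant bit propagates deterministically
# to the MSB of the sum, so the estimate converges to -1.
print(dl_correlation(dx=1 << (N_BITS - 1), dy=0, out_mask=1 << (N_BITS - 1)))
```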
2022
CRYPTO
Lattice-Based Zero-Knowledge Proofs and Applications: Shorter, Simpler, and More General 📺
We present a much-improved practical protocol, based on the hardness of Module-SIS and Module-LWE problems, for proving knowledge of a short vector $s$ satisfying $As=t\bmod q$. The currently most-efficient technique for constructing such a proof works by showing that the $\ell_\infty$ norm of $s$ is small. It creates a commitment to a polynomial vector $m$ whose CRT coefficients are the coefficients of $s$ and then shows that (1) $A\cdot \mathsf{CRT}(m)=t\bmod\,q$ and (2) in the case that we want to prove that the $\ell_\infty$ norm is at most $1$, the polynomial product $(m - 1)\cdot m\cdot(m+1)$ equals $0$. While these schemes are already quite efficient for practical applications, the requirement of using the CRT embedding and only being naturally adapted to proving the $\ell_\infty$-norm hinders the efficiency of this approach. In this work, we show that there is a direct and efficient way to prove that the coefficients of $s$ have a small $\ell_2$ norm which does not require an equivocation with the $\ell_\infty$ norm, nor any conversion to the CRT representation. We observe that the inner product between two vectors $r$ and $s$ can be made to appear as a coefficient of a product (or sum of products) between polynomials which are functions of $r$ and $s$. Thus, by using a polynomial product proof system and hiding all but one coefficient, we are able to prove knowledge of the inner product of two vectors modulo $q$. Using a cheap, approximate range proof, one can then lift the proof to be over $\mathbb{Z}$ instead of $\mathbb{Z}_q$. Our protocols for proving short norms work over all (interesting) polynomial rings, but are particularly efficient for rings like $\mathbb{Z}[X]/(X^n+1)$ in which the function relating the inner product of vectors and polynomial products happens to be a ``nice'' automorphism. The new proof system can be plugged into constructions of various lattice-based privacy primitives in a black-box manner. As examples, we instantiate a verifiable encryption scheme and a group signature scheme which are more than twice as compact as the previously best solutions.
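The key observation can be illustrated concretely in the ring $\mathbb{Z}_q[X]/(X^n+1)$. The following is a standard fact stated for illustration (the notation is assumed here, not taken from the paper): pairing one vector with the image of the other under the automorphism $X\mapsto X^{-1}$ makes the inner product appear as the constant coefficient of a polynomial product.

```latex
% In R_q = Z_q[X]/(X^n + 1), write sigma for the automorphism X -> X^{-1}, and
r(X) = \sum_{i=0}^{n-1} r_i X^i, \qquad
s(X) = \sum_{j=0}^{n-1} s_j X^j, \qquad
\sigma(s)(X) = s(X^{-1}).
% Then
r(X)\cdot\sigma(s)(X) \;=\; \sum_{i,j} r_i s_j X^{i-j}
  \;\equiv\; \langle r, s\rangle \;+\; \sum_{k\neq 0} c_k X^k \pmod{X^n+1},
% since X^{i-j} contributes to the constant term only when i = j; for i != j it
% reduces to +/- X^k with 1 <= k <= n-1.
```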
2022
CRYPTO
Dynamic Local Searchable Symmetric Encryption 📺
In this article, we tackle for the first time the problem of \emph{dynamic} memory-efficient Searchable Symmetric Encryption (SSE). In the term ``memory-efficient'' SSE, we encompass both the goals of \emph{local} SSE, and \emph{page-efficient} SSE. The centerpiece of our approach is a novel connection between those two goals. We introduce a map, called the Generic Local Transform, which takes as input a \emph{page-efficient} SSE scheme with certain special features, and outputs an SSE scheme with strong \emph{locality} properties. We obtain several results. (1) First, for page-efficient SSE with page size $p$, we build a \emph{dynamic} scheme with storage efficiency $O(1)$ and page efficiency $O(\log \log (N/p))$, called LayeredSSE. The main technical innovation behind LayeredSSE is a novel weighted extension of the two-choice allocation process, of independent interest. (2) Second, we introduce the Generic Local Transform, and combine it with LayeredSSE to build a \emph{dynamic} SSE scheme with storage efficiency $O(1)$, locality $O(1)$, and read efficiency $O(\log\log N)$, under the condition that the longest list is of size $O(N^{1-1/\log \log \lambda})$. This matches, in every respect, the purely \emph{static} construction of Asharov et al. presented at STOC 2016: dynamism comes at no extra cost. (3) Finally, by applying the Generic Local Transform to a variant of the Tethys scheme by Bossuat et al. from Crypto 2021, we build an unconditional static SSE with storage efficiency $O(1)$, locality $O(1)$, and read efficiency $O(\log^\varepsilon N)$, for an arbitrarily small constant $\varepsilon > 0$. To our knowledge, this is the construction that comes closest to the lower bound presented by Cash and Tessaro at Eurocrypt 2014.
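As background for the allocation process mentioned in (1), here is the classical, unweighted two-choice process; the paper's contribution is a weighted extension, which this toy sketch does not capture, and all parameters are illustrative.

```python
import random

def two_choice_allocation(n_balls: int, n_bins: int, seed: int = 1) -> int:
    """Throw n_balls into n_bins; each ball goes to the lighter of two random bins.

    Returns the maximum bin load. The classical result is that the maximum load
    exceeds the average by only about log log(n_bins), versus roughly
    log(n_bins)/log log(n_bins) for single-choice allocation.
    """
    rng = random.Random(seed)
    load = [0] * n_bins
    for _ in range(n_balls):
        i, j = rng.randrange(n_bins), rng.randrange(n_bins)
        load[i if load[i] <= load[j] else j] += 1
    return max(load)

print(two_choice_allocation(n_balls=100_000, n_bins=100_000))
```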
2022
CRYPTO
A New Approach to Efficient Non-Malleable Zero-Knowledge 📺
Non-malleable zero-knowledge, originally introduced in the context of man-in-the-middle attacks, serves as an important building block to protect against concurrent attacks where different protocols may coexist and interleave. While this primitive admits almost optimal constructions in the plain model, they are several orders of magnitude slower in practice than standalone zero-knowledge. This is in sharp contrast to non-malleable commitments where practical constructions (under the DDH assumption) have been known for a while. We present a new approach for constructing efficient non-malleable zero-knowledge for all languages in NP, based on a new primitive called instance-based non-malleable commitment (IBNMC). We show how to construct practical IBNMC by leveraging the fact that simulators of sub-linear zero-knowledge protocols can be much faster than the honest prover algorithm. With an efficient implementation of IBNMC, our approach yields the first general-purpose non-malleable zero-knowledge protocol that achieves practical efficiency in the plain model. All of our protocols can be instantiated from symmetric primitives such as block-ciphers and hash functions, have reasonable efficiency in practice, and are general-purpose. Our techniques also yield the first efficient non-malleable commitment scheme without public-key assumptions.
2022
CRYPTO
Statistically Sender-Private OT From LPN and Derandomization 📺
We construct a two-message oblivious transfer protocol with statistical sender privacy (SSP OT) based on the Learning Parity with Noise (LPN) Assumption and a standard Nisan-Wigderson style derandomization assumption. Beyond being of interest on their own, SSP OT protocols have proven to be a powerful tool toward minimizing the round complexity in a wide array of cryptographic applications, from proof systems, through secure computation protocols, to hard problems in statistical zero knowledge (SZK). The protocol is plausibly post-quantum secure. The only other constructions with plausible post-quantum security are based on the Learning with Errors (LWE) Assumption. Lacking the geometric structure of LWE, our construction and analysis rely on a different set of techniques. Technically, we first construct an SSP OT protocol in the common random string model from LPN alone, and then derandomize the common random string. Most of the technical difficulty lies in the first step. Here we prove a robustness property of the inner product randomness extractor to a certain type of linear splitting attacks. A caveat of our construction is that it relies on the so-called low-noise regime of LPN. This aligns with our current complexity-theoretic understanding of LPN, which is known to imply hardness in SZK only in the low-noise regime.
2022
CRYPTO
Efficient NIZKs and Signatures from Commit-and-Open Protocols in the QROM 📺
Commit-and-open sigma-protocols are a popular class of protocols for constructing non-interactive zero-knowledge arguments and digital-signature schemes via the Fiat-Shamir transformation. Instantiated with hash-based commitments, the resulting non-interactive schemes enjoy tight online-extractability in the random oracle model. Online extractability improves the tightness of security proofs for the resulting digital-signature schemes by avoiding lossy rewinding or forking-lemma based extraction. In this work, we prove tight online extractability in the quantum random oracle model (QROM), showing that the construction supports post-quantum security. First, we consider the default case where committing is done by element-wise hashing. In a second part, we extend our result to Merkle-tree based commitments. Our results yield a significant improvement of the provable post-quantum security of the digital-signature scheme Picnic. Our analysis makes use of a recent framework by Chung et al. [CFHL21] for analysing quantum algorithms in the QROM using purely classical reasoning. Therefore, our results can to a large extent be understood and verified without prior knowledge of quantum information science.
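To fix ideas on the "element-wise hashing" commitment described above, here is a generic illustration with SHA-256 standing in for the random oracle; it is a sketch of the commit-and-open pattern, not the Picnic-specific instantiation analyzed in the paper.

```python
import os
import hashlib

def commit(values: list[bytes]) -> tuple[list[bytes], list[bytes]]:
    """Element-wise hash commitment: one hash per entry, each with fresh randomness."""
    rands = [os.urandom(16) for _ in values]
    comms = [hashlib.sha256(r + v).digest() for r, v in zip(rands, values)]
    return comms, rands

def open_positions(values, rands, positions):
    """'Open' only the challenged positions by revealing value and randomness."""
    return [(i, values[i], rands[i]) for i in positions]

def verify(comms, openings) -> bool:
    return all(hashlib.sha256(r + v).digest() == comms[i] for i, v, r in openings)

secret_shares = [b"share-0", b"share-1", b"share-2"]
comms, rands = commit(secret_shares)
challenge = [0, 2]  # verifier's challenge: which commitments to open
print(verify(comms, open_positions(secret_shares, rands, challenge)))  # True
```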
2022
CRYPTO
(Nondeterministic) Hardness vs. Non-Malleability 📺
We present the first truly explicit constructions of \emph{non-malleable codes} against tampering by bounded polynomial size circuits. These objects imply unproven circuit lower bounds and our construction is secure provided $\mathsf{E}$ requires exponential size nondeterministic circuits, an assumption from the derandomization literature. Prior works on NMC for polysize circuits either required an untamperable CRS [Cheraghchi, Guruswami ITCS'14; Faust, Mukherjee, Venturi, Wichs EUROCRYPT'14] or very strong cryptographic assumptions [Ball, Dachman-Soled, Kulkarni, Lin, Malkin EUROCRYPT'18; Dachman-Soled, Komargodski, Pass CRYPTO'21]. Both works in the latter category only achieve non-malleability with respect to efficient distinguishers and, more importantly, utilize cryptographic objects for which no provably secure instantiations are known outside the random oracle model. In this sense, none of the prior works yields fully explicit codes from non-heuristic assumptions. Our assumption is not known to imply the existence of one-way functions, which suggests that cryptography is unnecessary for non-malleability against this class. Technically, security is shown by \emph{non-deterministically} reducing polynomial size tampering to split-state tampering. The technique is general enough that it allows us to construct the first \emph{seedless non-malleable extractors} [Cheraghchi, Guruswami TCC'14] for sources sampled by polynomial size circuits [Trevisan, Vadhan FOCS'00] (resp. recognized by polynomial size circuits [Shaltiel CC'11]) and tampered by polynomial size circuits. Our construction is secure assuming $\mathsf{E}$ requires exponential size $\Sigma_4$-circuits (resp. $\Sigma_3$-circuits); this assumption is the state-of-the-art for extracting randomness from such sources (without non-malleability). Several additional results are included in the full version of this paper [Eprint 2022/070]. First, we observe that non-malleable codes and non-malleable secret sharing [Goyal, Kumar STOC'18] are essentially equivalent with respect to polynomial size tampering. In more detail, assuming $\mathsf{E}$ is hard for exponential size nondeterministic circuits, any efficient secret sharing scheme can be made non-malleable against polynomial size circuit tampering. Second, we observe that the fact that our constructions only achieve inverse polynomial (statistical) security is inherent. Extending a result from [Applebaum, Artemenko, Shaltiel, Yang CC'16], we show it is impossible to do better using black-box reductions. However, we extend the notion of relative error from [Applebaum, Artemenko, Shaltiel, Yang CC'16] to non-malleable extractors and show that they can be constructed from similar assumptions. Third, we observe that relative-error non-malleable extractors can be utilized to render a broad class of cryptographic primitives tamper- and leakage-resilient, while preserving negligible security guarantees.
2022
CRYPTO
On Time-Space Tradeoffs for Bounded-Length Collisions in Merkle-Damgård Hashing 📺
We study the power of preprocessing adversaries in finding bounded-length collisions in the widely used Merkle-Damgard (MD) hashing in the random oracle model. Specifically, we consider adversaries which have arbitrary $S$-bit advice about the random oracle and can make at most $T$ queries to it. Our goal is to characterize the advantage of such adversaries in finding a $B$-block collision in an MD hash function constructed using the random oracle with range size $N$ as the compression function (given a random salt). The answer to this question is completely understood for very large values of $B$ (essentially $\Omega(T)$) as well as for $B=1,2$. For $B\approx T$, Coretti et al. (EUROCRYPT '18) gave matching upper and lower bounds of $\tilde{\Theta}(ST^2/N)$. Akshima et al. (CRYPTO '20) observed that the attack of Coretti et al. could be adapted to work for any value of $B>1$, giving an attack with advantage $\tilde{\Omega}(STB/N + T^2/N)$. Unfortunately, they could only prove that this attack is optimal for $B=2$. Their proof involves a compression argument with exhaustive case analysis and, as they claim, a naive attempt to generalize their bound to larger values of $B$ (even for $B=3$) would lead to an explosion in the number of cases needed to be analyzed, making it unmanageable. With the lack of a more general upper bound, they formulated the \emph{STB conjecture}, stating that the best-possible advantage is $\tilde{O}(STB/N + T^2/N)$ for any $B>1$. In this work, we confirm the STB conjecture in many new parameter settings. For instance, in one result, we show that the conjecture holds for all constant values of $B$, significantly extending the result of Akshima et al. Further, using combinatorial properties of graphs, we are able to confirm the conjecture even for super constant values of $B$, as long as some restriction is made on $S$. For instance, we confirm the conjecture for all $B \le T^{1/4}$ as long as $S \le T^{1/8}$. Technically, we develop structural characterizations for bounded-length collisions in MD hashing that allow us to give a compression argument in which the number of cases needed to be handled does not explode.
2022
CRYPTO
Lower Bound on SNARGs in the Random Oracle Model 📺
Succinct non-interactive arguments (SNARGs) have become a fundamental primitive in the cryptographic community. The focus of this work is constructions of SNARGs in the Random Oracle Model (ROM). Such SNARGs enjoy post-quantum security and can be deployed using lightweight cryptography to heuristically instantiate the random oracle. A ROM-SNARG is \emph{$(t,\varepsilon)$-sound} if no $t$-query malicious prover can convince the verifier to accept a false statement with probability larger than $\varepsilon$. Recently, Chiesa-Yogev (CRYPTO '21) presented a ROM-SNARG of length ${\Theta}(\log (t/\varepsilon) \cdot \log t)$ (ignoring $\log n$ factors, for $n$ being the instance size). This improvement, however, is still far from the (folklore) lower bound of $\Omega(\log (t/\varepsilon))$. Assuming the \textit{randomized exponential-time hypothesis}, we prove a tight lower bound of ${\Omega}(\log (t/\varepsilon) \cdot \log t)$ for the length of $(t,\varepsilon)$-sound ROM-SNARGs. Our lower bound holds for constructions with non-adaptive verifiers and a strong soundness notion called \textit{salted soundness}, restrictions that hold for \emph{all} known constructions (ignoring contrived counterexamples). We prove our lower bound by transforming any short ROM-SNARG (of the considered family) into a ROM-SNARG of the same length in which the verifier asks only a \emph{few} oracle queries, and then applying the recent lower bound of Chiesa-Yogev (TCC '20) for such SNARGs.
2022
CRYPTO
Triangulating Rebound Attack on AES-like Hashing 📺
The rebound attack was introduced by Mendel et al. at FSE 2009 to fulfill a heavy middle round of a differential path for free, utilizing the degrees of freedom from the state. The inbound phase was extended to 2 rounds by the Super-Sbox technique invented by Lamberger et al. at ASIACRYPT 2009 and Gilbert and Peyrin at FSE 2010. At ASIACRYPT 2010, Sasaki et al. further reduced the memory requirement by introducing the non-full-active Super-Sbox. In this paper, we further develop this line of research by introducing Super-Inbound, which is able to connect multiple 1-round or 2-round (non-full-active) Super-Sbox inbound phases by fully utilizing the degrees of freedom from both the state and the key, yet without the use of large memory. This essentially extends the inbound phase by up to 3 rounds. We apply this technique to find classic or quantum collisions on several AES-like hash functions, and improve the number of attacked rounds by 1 to 5 for targets including AES-128 and Skinny hashing modes, Saturnin-hash, and Grøstl-512. To demonstrate the correctness of our attacks, the semi-free-start collision on 6-round AES-128-MMO/MP with estimated time complexity $2^{24}$ in the classical setting was implemented and an example pair was found instantly on a standard PC.
2022
CRYPTO
Superposition Meet-in-the-Middle Attacks: Updates on Fundamental Security of AES-like Ciphers 📺
The Meet-in-the-Middle approach is one of the most powerful cryptanalysis techniques, demonstrated by its applications in preimage attacks on the full MD4, MD5, Tiger, HAVAL, and Haraka-512 v2 hash functions, and key recovery of the full block cipher KTANTAN. The success relies on the separation of a primitive into two independent chunks, where each active cell of the state is used to represent only one chunk or is otherwise considered unusable once mixed. We observe that some of such cells are linearly mixed and can be as useful as the independent ones. This leads to the introduction of superposition states and a whole suite of accompanied techniques, which we incorporate into the MILP-based search framework proposed by Bao et al. at EUROCRYPT 2021 and Dong et al. at CRYPTO 2021, and find applications on a wide range of AES-like hash functions and block ciphers.
2022
CRYPTO
Breaking Rainbow Takes a Weekend on a Laptop 📺
This work introduces new key recovery attacks against the Rainbow signature scheme, which is one of the three finalist signature schemes still in the NIST Post-Quantum Cryptography standardization project. The new attacks dramatically outperform previously known attacks for all the parameter sets submitted to NIST and make key recovery practical for the SL 1 parameters. Concretely, given a Rainbow public key for the SL 1 parameters of the second-round submission, our attack returns the corresponding secret key after on average 53 hours (one weekend) of computation time on a standard laptop.
2022
CRYPTO
Formalizing Delayed Adaptive Corruptions and the Security of Flooding Networks 📺
Many decentralized systems rely on flooding protocols for message dissemination. In such a protocol, the sender of a message sends it to a randomly selected set of peers. These peers again send the message to their randomly selected peers, until every network participant has received the message. Such protocols clearly fail in the face of an adaptive adversary who can simply corrupt all peers of the sender and thereby prevent the message from being delivered. Nevertheless, flooding protocols are commonly used within protocols that aim to be cryptographically secure, most notably in blockchain protocols. While it is possible to revert to static corruptions, this gives unsatisfactory security guarantees, especially in the setting of a blockchain that is supposed to run for an extended period of time. To be able to provide meaningful security guarantees in such settings, we give precise semantics to what we call $\delta$-delayed adversaries in the Universal Composability (UC) framework. Such adversaries can adaptively corrupt parties, but there is a delay of time $\delta$ from when an adversary decides to corrupt a party until they succeed in taking over control of the party. Within this model, we formally prove the intuitive result that flooding protocols are secure against $\delta$-delayed adversaries when $\delta$ is at least the time it takes to send a message from one peer to another plus the time it takes the recipient to resend the message. To this end, we show how to reduce the adaptive setting with a $\delta$-delayed adversary to a static experiment with an Erdős–Rényi graph. Using the established theory of Erdős–Rényi graphs, we provide upper bounds on the propagation time of the flooding functionality for different neighborhood sizes of the gossip network. More concretely, we show the following for security parameter $\kappa$, point-to-point channels with delay at most $\Delta$, and $n$ parties in total, with a sufficiently delayed adversary that can corrupt any constant fraction of the parties: If all parties send to $\Omega(\kappa)$ parties on average, then we can realize a flooding functionality with maximal delay $\mathcal{O}\bigl(\Delta \cdot \log (n) \bigr)$; and if all parties send to $\Omega\bigl( \sqrt{\kappa n \log (n)} \bigr)$ parties on average, we can realize a flooding functionality with maximal delay $\mathcal{O}(\Delta)$.
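A toy simulation of the static experiment gives a feel for the two parameter regimes above. This sketch is illustrative only: it measures flooding rounds on an Erdős–Rényi-style graph where every party forwards to a random subset, and it models neither corruptions nor channel delays.

```python
import math
import random

def flooding_rounds(n: int, avg_degree: float, seed: int = 0) -> int:
    """Rounds until a message floods to all n parties, each forwarding to a
    random neighbour set of expected size avg_degree (Erdős–Rényi style)."""
    rng = random.Random(seed)
    p = avg_degree / n
    neighbours = [[j for j in range(n) if j != i and rng.random() < p]
                  for i in range(n)]
    informed = {0}
    frontier = {0}
    rounds = 0
    while len(informed) < n and frontier:
        frontier = {j for i in frontier for j in neighbours[i]} - informed
        informed |= frontier
        rounds += 1
    return rounds if len(informed) == n else -1  # -1: graph not connected

n = 2000
print("avg degree ~ 2 log n :", flooding_rounds(n, 2 * math.log(n)))
print("avg degree ~ sqrt(n) :", flooding_rounds(n, math.sqrt(n)))
```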
2022
CRYPTO
Certified Everlasting Zero-Knowledge Proof for QMA 📺
In known constructions of classical zero-knowledge protocols for NP, either zero-knowledge or soundness holds only against computationally bounded adversaries. Indeed, achieving both statistical zero-knowledge and statistical soundness at the same time with a classical verifier is impossible for NP unless the polynomial-time hierarchy collapses, and it is also believed to be impossible even with a quantum verifier. In this work, we introduce a novel compromise, which we call the certified everlasting zero-knowledge proof for QMA. It is a computational zero-knowledge proof for QMA, but the verifier issues a classical certificate that shows that the verifier has deleted its quantum information. If the certificate is valid, even an unbounded malicious verifier can no longer learn anything beyond the validity of the statement. We construct a certified everlasting zero-knowledge proof for QMA. For the construction, we introduce a new quantum cryptographic primitive, which we call commitment with statistical binding and certified everlasting hiding, where the hiding property becomes statistical once the receiver has issued a valid certificate that shows that the receiver has deleted the committed information. We construct commitment with statistical binding and certified everlasting hiding from quantum encryption with certified deletion by Broadbent and Islam [TCC 2020] (in a black-box way), and then combine it with the quantum sigma-protocol for QMA by Broadbent and Grilo [FOCS 2020] to construct the certified everlasting zero-knowledge proof for QMA. Our constructions are secure in the quantum random oracle model. Commitment with statistical binding and certified everlasting hiding itself is of independent interest, and there will be many other useful applications beyond zero-knowledge.
2022
CRYPTO
Accelerating the Delfs-Galbraith algorithm with fast subfield root detection 📺
We give a new algorithm for finding an isogeny from a given supersingular elliptic curve $E/\mathbb{F}_{p^2}$ to a subfield elliptic curve $E'/\mathbb{F}_p$, which is the bottleneck step of the Delfs-Galbraith algorithm for the general supersingular isogeny problem. Our core ingredient is a novel method of rapidly determining whether a polynomial $f \in L[X]$ has any roots in a subfield $K \subset L$, while avoiding expensive root-finding algorithms. In the special case when $f=\Phi_{\ell,p}(X,j) \in \mathbb{F}_{p^2}[X]$, i.e., when $f$ is the $\ell$-th modular polynomial evaluated at a supersingular $j$-invariant, this provides a means of efficiently determining whether there is an $\ell$-isogeny connecting the corresponding elliptic curve to a subfield curve. Together with the traditional Delfs-Galbraith walk, inspecting many $\ell$-isogenous neighbours in this way allows us to search through a larger proportion of the supersingular set per unit of time. Though the asymptotic $\tilde{O}(p^{1/2})$ complexity of our improved algorithm remains unchanged from that of the original Delfs-Galbraith algorithm, our theoretical analysis and practical implementation both show a significant reduction in the runtime of the subfield search. This sheds new light on the concrete hardness of the general supersingular isogeny problem (i.e., the foundational problem underlying isogeny-based cryptography), and has immediate implications on the bit-security of schemes like B-SIDH and SQISign for which Delfs-Galbraith is the best known classical attack.
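For orientation, here is the textbook baseline for detecting subfield roots, stated as background rather than as the authors' technique: the roots of $f \in \mathbb{F}_{p^2}[X]$ that lie in the subfield $\mathbb{F}_p$ are exactly the common roots of $f$ and the subfield's Frobenius polynomial.

```latex
% Since X^p - X = \prod_{a \in \mathbb{F}_p} (X - a), we have
f \text{ has a root in } \mathbb{F}_p
  \iff \deg \gcd\bigl(f(X),\, X^p - X\bigr) \ge 1,
% which is typically decided by computing X^p mod f(X) via square-and-multiply.
% The paper's "fast subfield root detection" gives a cheaper way to make this
% yes/no decision for the polynomials arising in the Delfs-Galbraith walk.
```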
2022
CRYPTO
Verifiable Relation Sharing and Multi-Verifier Zero-Knowledge in Two Rounds: Trading NIZKs with Honest Majority 📺
We introduce the problem of Verifiable Relation Sharing (VRS) where a client (prover) wishes to share a vector of secret data items among $k$ servers (the verifiers) while proving in zero-knowledge that the shared data satisfies some properties. This combined task of sharing and proving generalizes notions like verifiable secret sharing and zero-knowledge proofs over secret-shared data. We study VRS from a theoretical perspective and focus on its round complexity. As our main contribution, we show that every efficiently-computable relation can be realized by a VRS with an optimal round complexity of two rounds where the first round is input-independent (offline round). The protocol achieves full UC-security against an active adversary that is allowed to corrupt any $t$-subset of the parties that may include the client together with some of the verifiers. For a small (logarithmic) number of parties, we achieve an optimal resiliency threshold of $t<0.5(k+1)$, and for a large (polynomial) number of parties, we achieve an almost-optimal resiliency threshold of $t<0.5(k+1)(1-\epsilon)$ for an arbitrarily small constant $\epsilon>0$. Both protocols can be based on sub-exponentially hard injective one-way functions. If the parties have access to a collision-resistant hash function, we can derive statistical everlasting security, i.e., the protocols are secure against adversaries that are computationally bounded during the protocol execution and become computationally unbounded after the protocol execution. Previous 2-round solutions achieve smaller resiliency thresholds and weaker security notions regardless of the underlying assumptions. As a special case, our protocols give rise to 2-round offline/online constructions of multi-verifier zero-knowledge proofs (MVZK). Such constructions were previously obtained under the same type of assumptions that are needed for NIZK, i.e., public-key assumptions or random-oracle type assumptions (Abe et al., Asiacrypt 2002; Groth and Ostrovsky, Crypto 2007; Boneh et al., Crypto 2019; Yang and Wang, Eprint 2022). Our work shows, for the first time, that in the presence of an honest majority these assumptions can be replaced with more conservative ``Minicrypt''-type assumptions like injective one-way functions and collision-resistant hash functions. Indeed, our MVZK protocols provide a round-efficient substitute for NIZK in settings where an honest majority is present. Additional applications are also presented.
2022
CRYPTO
Post-Quantum Simulatable Extraction with Minimal Assumptions: Black-Box and Constant-Round 📺
From the minimal assumption of post-quantum semi-honest oblivious transfers, we build the first $\epsilon$-simulatable two-party computation (2PC) against quantum polynomial-time (QPT) adversaries that is both constant-round and black-box (for both the construction and security reduction). A recent work by Chia, Chung, Liu, and Yamakawa (FOCS'21) shows that post-quantum 2PC with standard simulation-based security is impossible in constant rounds, unless either $NP \subseteq BQP$ or relying on non-black-box simulation. The $\epsilon$-simulatability we target is a relaxation of the standard simulation-based security that allows for an arbitrarily small noticeable simulation error $\epsilon$. Moreover, when quantum communication is allowed, we can further weaken the assumption to post-quantum secure one-way functions (PQ-OWFs), while maintaining the constant-round and black-box property. Our techniques also yield the following set of constant-round and black-box two-party protocols secure against QPT adversaries, only assuming black-box access to PQ-OWFs:
- extractable commitments for which the extractor is also an $\epsilon$-simulator;
- $\epsilon$-zero-knowledge commit-and-prove whose commit stage is extractable with $\epsilon$-simulation;
- $\epsilon$-simulatable coin-flipping;
- $\epsilon$-zero-knowledge arguments of knowledge for $NP$ for which the knowledge extractor is also an $\epsilon$-simulator;
- $\epsilon$-zero-knowledge arguments for $QMA$.
At the heart of the above results is a black-box extraction lemma showing how to efficiently extract secrets from QPT adversaries while disturbing their quantum state in a controllable manner, i.e., achieving $\epsilon$-simulatability of the after-extraction state of the adversary.
2022
CRYPTO
Shorter Hash-and-Sign Lattice-Based Signatures 📺
Lattice-based digital signature schemes following the hash-and-sign design paradigm of Gentry, Peikert and Vaikuntanathan (GPV) tend to offer an attractive level of efficiency, particularly when instantiated with structured compact trapdoors. In particular, NIST post-quantum finalist Falcon is both quite fast for signing and verification and quite compact: NIST notes that it has the smallest bandwidth (as measured in combined size of public key and signature) of all round 2 digital signature candidates. Nevertheless, while Falcon-512, for instance, compares favorably to ECDSA-384 in terms of speed, its signatures are well over 10 times larger. For applications that store a large number of signatures, or that require signatures to fit in prescribed packet sizes, this can be a critical limitation. In this paper, we explore several approaches to further improve the size of hash-and-sign lattice-based signatures, particularly those instantiated over NTRU lattices like Falcon and its recent variant Mitaka. In particular, while GPV signatures are usually obtained by sampling lattice points according to some *spherical* discrete Gaussian distribution, we show that it can be beneficial to sample instead according to a suitably chosen *ellipsoidal* discrete Gaussian: this is because only half of the sampled Gaussian vector is actually output as the signature, while the other half is recovered during verification. Making the half that actually occurs in signatures shorter reduces signature size at essentially no security loss (in a suitable range of parameters). Similarly, we show that reducing the modulus $q$ with respect to which signatures are computed can improve signature size as well as verification key size almost ``for free''; this is particularly true for constructions like Falcon and Mitaka that do not make substantial use of NTT-based multiplication (and rely instead on transcendental FFT). Finally, we show that the Gaussian vectors in signatures can be represented in a more compact way with appropriate coding-theoretic techniques, improving signature size by an additional 7 to 14%. All in all, we manage to reduce the size of, e.g., Falcon signatures by 30-40% at the cost of only 4-6 bits of Core-SVP security.
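To see why only half of the Gaussian vector needs to be transmitted, here is a sketch in Falcon-style notation; the notation is assumed for illustration rather than taken from the paper, but it shows how the verification equation lets the verifier recompute the omitted half.

```latex
% Hash-and-sign over an NTRU lattice, Falcon-style notation (assumed):
% public key h, salted message hash c = H(r || m), signature a short pair (s_1, s_2) with
s_1 + s_2 \cdot h \equiv c \pmod{q}.
% Only (r, s_2) is transmitted; the verifier recovers
s_1 \equiv c - s_2 \cdot h \pmod{q}
% and accepts iff (s_1, s_2) is short. Sampling from an ellipsoidal Gaussian that
% makes s_2 shorter (and s_1 correspondingly longer) therefore shrinks the
% transmitted signature without changing the verification equation.
```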
2022
CRYPTO
A New Framework For More Efficient Round-Optimal Lattice-Based (Partially) Blind Signature via Trapdoor Sampling 📺
Blind signatures, originally proposed by Chaum (CRYPTO'82), are interactive protocols between a signer and a user, where the user can obtain a signature without revealing the message to be signed. Recently, Hauck et al. (EUROCRYPT'20) observed that all efficient lattice-based blind signatures following the blueprint of the original blind signature by Rückert (ASIACRYPT'10) have a flawed security proof. This puts us in a situation where all known lattice-based blind signatures have at least two of the following drawbacks: heuristic security; signature sizes of 1 MB or more; support for only bounded polynomially many signatures; or reliance on non-standard assumptions. In this work, we construct the first __round-optimal__ (i.e., two-round) lattice-based blind signature with a signature size roughly 100 KB that supports unbounded polynomially many signatures and is provably secure under standard assumptions. Even if we allow non-standard assumptions and more rounds, ours provides the shortest signature size while also supporting unbounded polynomially many signatures. The main idea of our work is revisiting the generic blind signature construction by Fischlin (CRYPTO'06) and optimizing the __commit-then-open__ proof using techniques tailored to lattices. Our blind signature is also the first construction to have a formal security proof in the __quantum__ random oracle model. Finally, our blind signature extends naturally to __partially__ blind signatures, where the user and signer can include an agreed-upon public string in the message.
2022
CRYPTO
Collision-Resistance from Multi-Collision-Resistance 📺
Collision-resistant hash functions (CRH) are a fundamental and ubiquitous cryptographic primitive. Several recent works have studied a relaxation of CRH called t-way multi-collision-resistant hash functions (t-MCRH). These are families of functions for which it is computationally hard to find a t-way collision, even though such collisions are abundant (and even (t-1)-way collisions may be easy to find). The case of t=2 corresponds to standard CRH, but it is natural to study t-MCRH for larger values of t. Multi-collision-resistance seems to be a qualitatively weaker property than standard collision-resistance. In particular, Komargodski et al. (Eurocrypt, 2018) showed that there does not exist a blackbox transformation of MCRH into CRH. Nevertheless, in this work we show a non-blackbox transformation of any moderately shrinking t-MCRH, for t in {3,4}, into an (infinitely often secure) CRH. This transformation is non-constructive - we can prove the existence of a CRH but cannot explicitly point out a construction. Our result partially extends to larger values of t. In particular, we show that for suitable values of t>t', we can transform a t-MCRH into a t'-MCRH, at the cost of reducing the shrinkage of the resulting hash function family and settling for infinitely often security. This result utilizes the list-decodability properties of Reed-Solomon codes.
2022
CRYPTO
Quadratic Multiparty Randomized Encodings Beyond Honest Majority and Their Applications 📺
Multiparty randomized encodings (Applebaum, Brakerski, and Tsabary, SICOMP 2021) reduce the task of securely computing a complicated multiparty functionality $f$ to the task of securely computing a simpler functionality $g$. The reduction is non-interactive and preserves information-theoretic security against a passive (semi-honest) adversary, also referred to as \emph{privacy}. The special case of a degree-2 encoding $g$ (2MPRE) has recently found several applications to secure multiparty computation (MPC) with either information-theoretic security or making black-box access to cryptographic primitives. Unfortunately, as all known constructions are based on information-theoretic MPC protocols in the plain model, they can only be private with an honest majority. In this paper, we break the honest-majority barrier and present the first construction of general 2MPRE that remains secure in the presence of a dishonest majority. Our construction encodes every $n$-party functionality $f$ by a 2MPRE that tolerates at most $t=\lfloor 2n/3\rfloor$ passive corruptions. We derive several applications including: (1) The first non-interactive client-server MPC protocol with perfect privacy against any coalition of a minority of the servers and up to $t$ of the $n$ clients; (2) Completeness of 3-party functionalities under non-interactive $t$-private reductions; and (3) A single-round $t$-private reduction from general-MPC to an ideal oblivious transfer (OT). These positive results partially resolve open questions that were posed in several previous works. We also show that $t$-private 2MPREs are necessary for solving (2) and (3), thus establishing new equivalence theorems between these three notions. Finally, we present a new approach for constructing fully-private 2MPREs based on multi-round protocols in the OT-hybrid model that achieve \emph{perfect privacy} against active attacks. Moreover, by slightly restricting the power of the active adversary, we derive an equivalence between these notions. This forms a surprising, and quite unique, connection between a non-interactive passively-private primitive and an interactive actively-private primitive.
2022
CRYPTO
Simon's Algorithm and Symmetric Crypto: Generalizations and Automatized Applications 📺
In this paper we deepen our understanding of how to apply Simon's algorithm to break symmetric cryptographic primitives. On the one hand, we automate the search for new attacks. Using this approach we automatically find the first efficient key-recovery attacks against constructions like 5-round MISTY L-FK or 5-round Feistel-FK (with internal permutation) using Simon's algorithm. On the other hand, we study generalizations of Simon's algorithm using non-standard Hadamard matrices, with the aim of expanding the quantum symmetric cryptanalysis toolkit with properties other than the periods. Our main conclusion is that none of these generalizations accomplishes this, and that exploiting non-standard Hadamard matrices with quantum computers to break symmetric primitives will require fundamentally new attacks.
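To make the hidden-period structure exploited by Simon's algorithm concrete, here is a small illustrative Python sketch of a function with period s, i.e. f(x) = f(x XOR s) for all x; the toy function is an assumption of this sketch and is unrelated to the ciphers attacked in the paper.

    # Toy illustration of the hidden-period structure exploited by Simon's
    # algorithm: f(x) = f(x XOR s) for all x. Here we verify the period of a
    # small example function by brute force; a quantum computer recovers s
    # with only O(n) queries to f.
    def make_periodic_f(s: int, n: int):
        # Map each coset {x, x^s} to a common representative, then mix it.
        def f(x: int) -> int:
            rep = min(x, x ^ s)
            return (rep * 0x9E3779B1) & ((1 << n) - 1)  # arbitrary mixing
        return f

    n, s = 6, 0b010110
    f = make_periodic_f(s, n)
    assert all(f(x) == f(x ^ s) for x in range(1 << n))
    print("f has hidden period s =", bin(s))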
2022
CRYPTO
Password-Authenticated Key Exchange from Group Actions 📺
We present two provably secure password-authenticated key exchange (PAKE) protocols based on a commutative group action. To date, the most important instantiation of isogeny-based group actions is given by CSIDH. To model the properties more accurately, we extend the framework of cryptographic group actions (Alamati et al., ASIACRYPT 2020) by the ability to compute the quadratic twist of an elliptic curve. This property is always present in the CSIDH setting and turns out to be crucial in the security analysis of our PAKE protocols. Despite the resemblance, the translation of Diffie-Hellman based PAKE protocols to group actions either does not work with known techniques or is insecure (``How not to create an isogeny-based PAKE'', Azarderakhsh et al., ACNS 20). We overcome the difficulties mentioned in previous work by using a ``bit-by-bit'' approach, where each password bit is considered separately. Our first protocol X-GA-PAKE can be executed in a single round. Both parties need to send two set elements for each password bit in order to prevent offline dictionary attacks. The second protocol Com-GA-PAKE requires only one set element per password bit, but one party has to send a commitment to its message first. We also discuss different optimizations that can be used to reduce the computational cost. We provide comprehensive security proofs for our base protocols and deduce security for the optimized versions.
2022
CRYPTO
Parallel Repetition of $(k_1,\dots,k_{\mu})$-Special-Sound Multi-Round Interactive Proofs 📺
In many situations, the knowledge error $\kappa$ of an interactive proof is not small enough, and thus needs to be reduced. This can be done generically by repeating the interactive proof in parallel. While there have been many works studying the effect of parallel repetition on the {\em soundness error} of interactive proofs and arguments, the effect of parallel repetition on the {\em knowledge error} has largely remained unstudied. Only recently was it shown that the $t$-fold parallel repetition of {\em any} interactive protocol reduces the knowledge error from $\kappa$ down to $\kappa^t + \nu$ for any non-negligible term $\nu$. This generic result is suboptimal in that it does not give the knowledge error $\kappa^t$ that one would expect for typical protocols, and, worse, the knowledge error remains non-negligible. In this work we show that indeed the $t$-fold parallel repetition of any $(k_1,\dots,k_{\mu})$-special-sound multi-round public-coin interactive proof optimally reduces the knowledge error from $\kappa$ down to $\kappa^t$. At the core of our results is an alternative, in some sense more fine-grained, measure of quality of a dishonest prover than its success probability, for which we show that it characterizes when knowledge extraction is possible. This new measure then turns out to be very convenient when it comes to analyzing the parallel repetition of such interactive proofs. While parallel repetition reduces the knowledge error, it is easily seen to {\em increase} the {\em completeness error}. For this reason, we generalize our result to the case of $s$-out-of-$t$ threshold parallel repetition, where the verifier accepts if $s$ out of $t$ of the parallel instances are accepting. An appropriately chosen threshold $s$ allows both the knowledge error and completeness error to be reduced simultaneously.
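The trade-off motivating the threshold variant can be illustrated numerically. The sketch below (with assumed example values for the per-instance knowledge error kappa and completeness error rho) shows how plain t-fold repetition drives the knowledge error to kappa^t while inflating the completeness error, and how an s-out-of-t acceptance rule keeps the honest prover's failure probability small; the exact knowledge error of the threshold variant is as analyzed in the paper.

    # Numeric illustration (example values assumed): plain t-fold parallel
    # repetition sends the knowledge error kappa to kappa**t, but an honest
    # prover failing each instance independently with probability rho now
    # fails the conjunction with probability 1 - (1 - rho)**t.
    from math import comb

    kappa, rho, t = 1/3, 0.05, 20                 # per-instance errors (assumed)
    knowledge_error_all = kappa ** t              # ~ 2.9e-10
    completeness_error_all = 1 - (1 - rho) ** t   # ~ 0.64: much worse than rho

    # s-out-of-t acceptance repairs completeness: the honest prover fails only
    # if fewer than s instances accept (binomial tail).
    s = 14
    completeness_error_threshold = sum(
        comb(t, k) * (1 - rho) ** k * rho ** (t - k) for k in range(s)
    )
    print(knowledge_error_all, completeness_error_all, completeness_error_threshold)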
2022
CRYPTO
Improving Support-Minors rank attacks: applications to GeMSS and Rainbow 📺
The Support-Minors (SM) method has opened new routes to attack multivariate schemes with rank properties that were previously impossible to exploit, as shown by the recent attacks of [1] and [2] on the Round 3 NIST candidates GeMSS and Rainbow, respectively. In this paper, we study this SM approach in more depth and we propose a greatly improved attack on GeMSS based on this Support-Minors method. Even though GeMSS was already affected by [1], our attack affects it even more and makes it completely infeasible to repair the scheme by simply increasing the size of its parameters or even applying the recent projection technique from [3], whose purpose was to make GeMSS immune to [1]. For instance, our attack on the GeMSS128 parameter set has estimated time complexity $2^{72}$, and repairing the scheme by applying [3] would result in a signature scheme whose signing time is slower by an impractical factor of $2^{14}$. Another contribution is to suggest optimizations that can reduce memory access costs for an XL strategy on a large SM system using the Block-Wiedemann algorithm as a subroutine when these costs are a concern. In a memory cost model based on [4], we show that the rectangular MinRank attack from [2] may indeed reduce the security for all Round 3 Rainbow parameter sets below their targeted security strengths, contradicting the lower bound claimed by [5] using the same memory cost model. ***** [1] Improved Key Recovery of the HFEv- Signature Scheme, Chengdong Tao and Albrecht Petzoldt and Jintai Ding, CRYPTO 2021. [2] Improved Cryptanalysis of UOV and Rainbow, Ward Beullens, EUROCRYPT 2021. [3] On the Effect of Projection on Rank Attacks in Multivariate Cryptography, Morten Øygarden and Daniel Smith-Tone and Javier Verbel, PQCrypto 2021. [4] NTRU Prime: Round 3 submission. [5] Rainbow Team: Response to recent paper by Ward Beullens. https://troll.iis.sinica.edu.tw/by-publ/recent/response-ward.pdf
2022
CRYPTO
Constructing and Deconstructing Intentional Weaknesses in Symmetric Ciphers 📺
Deliberately weakened ciphers are of great interest in the political discussion on law enforcement, as in the constantly recurring crypto wars, and have been put in the spotlight of academics by recent progress. A paper at Eurocrypt 2021 showed a strong indication that the security of the widely-deployed stream cipher GEA-1 was deliberately and secretly weakened to 40 bits in order to fulfill European export restrictions that were in place in the late 1990s. However, no explanation of how this could have been constructed was given. On the other hand, we have seen the MALICIOUS design framework, published at CRYPTO 2020, which allows the construction of tweakable block ciphers with a backdoor, where the difficulty of recovering the backdoor relies on well-understood cryptographic assumptions. The constructed tweakable block cipher, however, is rather unusual and very different from, say, general-purpose ciphers like the AES. In this paper, we pick up both topics. For GEA-1 we thoroughly explain how the weakness was constructed, solving the main open question of the work mentioned above. By generalizing MALICIOUS, we construct, for the first time, backdoored tweakable block ciphers that follow modern design principles for general-purpose block ciphers, i.e., more natural-looking deliberately weakened tweakable block ciphers.
2022
CRYPTO
Time-Space Tradeoffs for Sponge Hashing: Attacks and Limitations for Short Collisions 📺
Sponge hashing is a novel alternative to the popular Merkle-Damgård hashing design. The sponge construction has become increasingly popular in various applications; perhaps most notably, it underlies the SHA-3 hashing standard. Sponge hashing is parametrized by two numbers, $r$ and $c$ (bitrate and capacity, respectively), and by a fixed-size permutation on $r+c$ bits. In this work, we study the collision resistance of sponge hashing instantiated with a random permutation by adversaries with an arbitrary $S$-bit auxiliary advice input about the random permutation and $T$ queries. Recent work by Coretti et al. (CRYPTO '18) showed that such adversaries can find collisions (with respect to a random IV) with advantage $\Theta(ST^2/2^c + T^2/2^r)$. Although the above attack formally breaks collision resistance in some range of parameters, its practical relevance is limited since the resulting collision is very long (on the order of $T$ blocks). Focusing on the task of finding \emph{short} collisions, we study the complexity of finding a $B$-block collision for a given parameter $B\ge 1$. We give several new attacks and limitations. Most notably, we give a new attack that results in a single-block collision and has advantage \begin{align*} \Omega \left(\left(\frac{S^{2}T}{2^{2c}}\right)^{2/3} + \frac{T^2}{2^r}\right). \end{align*} In some range of parameters, our attack has constant advantage of winning while the previously-known best attack has exponentially small advantage. To the best of our knowledge, this is the first natural application for which sponge hashing is \emph{provably less secure} than the corresponding instance of Merkle-Damgård hashing. Our attack relies on a novel connection between single-block collision finding in sponge hashing and the well-studied function inversion problem. We also give a general attack that works for any $B\ge 2$ and has advantage $\Omega({STB}/{2^{c}} + {T^2}/{2^{\min\{r,c\}}})$, adapting an idea of Akshima et al. (CRYPTO '20). We complement the above attacks with bounds on the best possible attacks. Specifically, we prove that there is a qualitative jump in the advantage of best possible attacks for finding unbounded-length collisions and those for finding very short collisions. Most notably, we prove (via a highly non-trivial compression argument) that the above attack is optimal for $B=2$ and in some range of parameters.
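For readers unfamiliar with the parametrization, the following toy Python sketch shows a sponge hash with bitrate r and capacity c over a fixed random permutation; the tiny parameters and the permutation are illustrative assumptions and have nothing to do with SHA-3 or the attacks above.

    # Toy sponge hash with bitrate r and capacity c over a fixed random
    # permutation of {0,1}^(r+c). Parameters are tiny and purely illustrative.
    import random

    r, c = 4, 8                      # bitrate and capacity (bits)
    b = r + c
    rng = random.Random(0)
    perm = list(range(1 << b))
    rng.shuffle(perm)                # the public "random permutation"

    def sponge_hash(blocks, out_blocks=1):
        state = 0                                   # b-bit state, starts at 0
        for m in blocks:                            # absorb r-bit blocks
            state = perm[state ^ (m << c)]          # XOR into the rate part
        out = []
        for _ in range(out_blocks):                 # squeeze r-bit blocks
            out.append(state >> c)
            state = perm[state]
        return out

    print(sponge_hash([0b1010, 0b0110]))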
2022
CRYPTO
On the Insider Security of MLS 📺
The Messaging Layer Security (MLS) protocol is an open standard for end-to-end (E2E) secure group messaging being developed by the IETF, poised for deployment to consumers, industry, and government. It is designed to provide E2E privacy and authenticity for messages in long-lived sessions whenever possible, despite the participation (at times) of malicious insiders that can adaptively interact with the PKI at will, actively deviate from the protocol, leak honest parties' states, and fully control the network. The core of the MLS protocol (from which it inherits essentially all of its efficiency and security properties) is a Continuous Group Key Agreement (CGKA) protocol. It provides asynchronous E2E group management by allowing group members to agree on a fresh independent symmetric key after every change to the group's state (e.g. when someone joins/leaves the group). In this work, we make progress towards a precise understanding of the insider security of MLS (Draft 12). On the theory side, we overcome several subtleties to formulate the first notion of insider security for CGKA (or group messaging). Next, we isolate the core components of MLS to obtain a CGKA protocol we dub Insider Secure TreeKEM (ITK). Finally, we give a rigorous security proof for ITK. In particular, this work also initiates the study of insider secure CGKA and group messaging protocols. Along the way we give three new (very practical) attacks on MLS and corresponding fixes. (Those fixes have now been included into the standard.) We also describe a second attack against MLS-like CGKA protocols proven secure under all previously considered security notions (including those designed specifically to analyze MLS). These attacks highlight the pitfalls in simplifying security notions even in the name of tractability.
2022
CRYPTO
Cryptography from Pseudorandom Quantum States 📺
Pseudorandom states, introduced by Ji, Liu and Song (Crypto'18), are efficiently-computable quantum states that are computationally indistinguishable from Haar-random states. One-way functions imply the existence of pseudorandom states, but Kretschmer (TQC'20) recently constructed an oracle relative to which there are no one-way functions but pseudorandom states still exist. Motivated by this, we study the intriguing possibility of basing interesting cryptographic tasks on pseudorandom states. We construct, assuming the existence of pseudorandom state generators that map a $\lambda$-bit seed to a $\omega(\log\lambda)$-qubit state, (a) statistically binding and computationally hiding commitments and (b) pseudo one-time encryption schemes. A consequence of (a) is that pseudorandom states are sufficient to construct maliciously secure multiparty computation protocols in the dishonest majority setting. Our constructions are derived via a new notion called pseudorandom function-like states (PRFS), a generalization of pseudorandom states that parallels the classical notion of pseudorandom functions. Beyond the above two applications, we believe our notion can effectively replace pseudorandom functions in many other cryptographic applications.
2022
CRYPTO
Securing Approximate Homomorphic Encryption using Differential Privacy 📺
Recent work of Li and Micciancio (Eurocrypt 2021) has shown that the traditional formulation of indistinguishability under chosen plaintext attack (IND-CPA) is not adequate to capture the security of approximate homomorphic encryption against passive adversaries, and identified a stronger IND-CPA^D security definition (IND-CPA with decryption oracles) as the appropriate security target for approximate encryption schemes. We show how to transform any approximate homomorphic encryption scheme achieving the weak IND-CPA security definition into one which is provably IND-CPA^D secure, offering strong guarantees against realistic passive attacks. The method works by postprocessing the output of the decryption function with a mechanism satisfying an appropriate notion of differential privacy (DP), adding an amount of noise tailored to the worst-case error growth of the homomorphic computation. We apply these results to the approximate homomorphic encryption scheme of Cheon, Kim, Kim, and Song (CKKS, Asiacrypt 2017), proving that adding Gaussian noise to the output of CKKS decryption suffices to achieve IND-CPA^D security. We precisely quantify how much Gaussian noise must be added by proving nearly matching upper and lower bounds, showing that one cannot hope to significantly reduce the amount of noise added in this post-processing step. Based on our upper and lower bounds, we propose parameters for the counter-measures recently adopted by open-source libraries implementing CKKS. Lastly, we investigate the plausible claim that smaller DP noise parameters might suffice to achieve IND-CPA^D-security for schemes supporting more accurate (dynamic, key-dependent) estimates of ciphertext noise during decryption. Perhaps surprisingly, we show that this claim is false, and that DP mechanisms with noise parameters tailored to the error present in a given ciphertext, rather than worst-case error, are vulnerable to IND-CPA^D attacks.
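Schematically, the counter-measure described above post-processes the approximate decryption with noise calibrated to a worst-case error bound. The Python sketch below only conveys that shape; the function names, the noise multiplier, and the error bound are placeholders, not the paper's concrete parameters or proof.

    # Schematic sketch only: take the (approximate) decrypted value and add
    # Gaussian noise whose standard deviation is calibrated to a WORST-CASE
    # bound on the homomorphic error, not to the error of the particular
    # ciphertext. All names and the calibration below are placeholders.
    import random

    def dp_postprocess_decryption(approx_plaintext: float,
                                  worst_case_error_bound: float,
                                  noise_multiplier: float = 8.0) -> float:
        sigma = noise_multiplier * worst_case_error_bound   # hypothetical calibration
        return approx_plaintext + random.gauss(0.0, sigma)

    # Usage: decrypt(ct) would be the scheme's approximate decryption.
    # released = dp_postprocess_decryption(decrypt(ct), worst_case_error_bound=2**-20)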
2022
CRYPTO
Tight Bounds on the Randomness Complexity of Secure Multiparty Computation 📺
We revisit the question of minimizing the randomness complexity of protocols for secure multiparty computation (MPC) in the setting of perfect information-theoretic security. Kushilevitz and Mansour (SIAM J. Discret. Math., 1997) studied the case of n-party semi-honest MPC for the XOR function with security threshold t<n, showing that O(t^2 * log(n/t)) random bits are sufficient and \Omega(t) random bits are necessary. Their positive result was obtained via a non-explicit protocol, whose existence was proved using the probabilistic method. We essentially close the question by proving an \Omega(t^2) lower bound on the randomness complexity of XOR, matching the previous upper bound up to a logarithmic factor (or constant factor when t=\Omega(n)). We also obtain an explicit protocol that uses O(t^2 * \log^2n) random bits, matching our lower bound up to a polylogarithmic factor. We extend these results from XOR to general symmetric Boolean functions and to addition over a finite Abelian group, showing how to amortize the randomness complexity over multiple additions. Finally, combining our techniques with recent randomness-efficient constructions of private circuits, we obtain an explicit protocol for evaluating a general circuit C using only O(t^2 * \log |C|) random bits, by employing additional ``helper parties'' who do not contribute any inputs. This upper bound too matches our lower bound up to a logarithmic factor.
2022
CRYPTO
Short Leakage Resilient and Non-malleable Secret Sharing Schemes 📺
Leakage-resilient secret sharing (LRSS) allows a dealer to share a secret amongst $n$ parties such that any authorized subset of the parties can recover the secret from their shares, while an adversary that obtains shares of any unauthorized subset of parties along with bounded leakage from the other shares learns no information about the secret. Non-malleable secret sharing (NMSS) provides a guarantee that even shares that are tampered by an adversary will reconstruct to either the original message or something independent of it. The most important parameter of LRSS and NMSS schemes is the size of each share. For LRSS, in the "local leakage model" (i.e., when the leakage functions on each share are independent of each other and bounded), Srinivasan and Vasudevan (CRYPTO 2019) gave a scheme for threshold access structures with a share size of approximately 3·(message length + $\mu$), where $\mu$ is the number of bits of leakage tolerated from every share. For the case of NMSS, the best-known result (again due to the above work) has a share size of 11·(message length). In this work, we build LRSS and NMSS schemes with much improved share size. Additionally, our LRSS scheme obtains optimal share and leakage size. In particular, we get the following results: - We build an information-theoretic LRSS scheme for threshold access structures with a share size of (message length + $\mu$). - As an application of the above result, we obtain an NMSS with a share size of 4·(message length). Further, for the special case of sharing random messages, we obtain a share size of 2·(message length).
2022
CRYPTO
Faster Sounder Succinct Arguments and IOPs 📺
Succinct arguments allow a prover to convince a verifier that a given statement is true, using an extremely short proof. A major bottleneck that has been the focus of a large body of work is in reducing the overhead incurred by the prover in order to prove correctness of the computation. By overhead we refer to the cost of proving correctness, divided by the cost of the original computation. In this work, for a large class of Boolean circuits C, we construct succinct arguments for satisfiability of C with soundness error 2^{-k}, and with prover overhead polylog(k). This result relies on the existence of (sub-exponentially secure) linear-size computable collision-resistant hash functions. The class of Boolean circuits that we can handle includes circuits with a repeated sub-structure, which arise in natural applications such as batch computation/verification, hashing, and related blockchain applications. The succinct argument is obtained by constructing interactive oracle proofs for the same class of languages, with polylog(k) prover overhead, and soundness error 2^{-k}. Prior to our work, the best IOPs for Boolean circuits either had prover overhead of polylog(|C|) based on efficient PCPs due to Ben-Sasson et al. (STOC, 2013) or poly(k) due to Rothblum and Ron-Zewi (STOC, 2022).
2022
CRYPTO
Maliciously Secure Massively Parallel Computation for All-but-One Corruptions 📺
The Massively Parallel Computation (MPC) model gained wide adoption over the last decade. By now, it is widely accepted as the right model for capturing the commonly used programming paradigms (such as MapReduce, Hadoop, and Spark) that utilize parallel computation power to manipulate and analyze huge amounts of data. Motivated by the need to perform large-scale data analytics in a privacy-preserving manner, several recent works have presented generic compilers that transform algorithms in the MPC model into secure counterparts, while preserving various efficiency parameters of the original algorithms. The first paper, due to Chan et al. (ITCS '20), focused on the honest majority setting. Later, Fernando et al. (TCC '20) considered the dishonest majority setting. The latter work presented a compiler that transforms generic MPC algorithms into ones which are secure against \emph{semi-honest} attackers that may control all but one of the parties involved. The security of their resulting algorithm relied on the existence of a PKI and also on rather strong cryptographic assumptions: indistinguishability obfuscation and the circular security of certain LWE-based encryption systems. In this work, we focus on the dishonest majority setting, following Fernando et al. In this setting, the known compilers do not achieve the standard security notion called \emph{malicious} security, where attackers can arbitrarily deviate from the prescribed protocol. In fact, we show that unless very strong setup assumptions are made (such as a \emph{programmable} random oracle), it is provably \emph{impossible} to withstand malicious attackers due to the stringent requirements on space and round complexity. As our main contribution, we complement the above negative result by designing the first general compiler for malicious attackers in the dishonest majority setting. The resulting protocols withstand all-but-one corruptions. Our compiler relies on a simple PKI and a (programmable) random oracle, and is proven secure assuming LWE and SNARKs. Interestingly, even with such strong assumptions, it is rather non-trivial to obtain a secure protocol.
2022
CRYPTO
Secret Can Be Public: Low-Memory AEAD Mode for High-Order Masking 📺
We propose a new AEAD mode of operation for an efficient countermeasure against side-channel attacks. Our mode achieves the smallest memory with high-order masking, by minimizing the states that are duplicated in masking. An $s$-bit key-dependent state is necessary for achieving $s$-bit security, and the conventional schemes always protect the entire $s$ bits with masking. We reduce the protected state size by introducing an unprotected state in the key-dependent state: we protect only one half and give the other half to a side-channel adversary. Ensuring independence between the unprotected and protected states is the key technical challenge since mixing these states reveals the protected state to the adversary. We propose a new mode $\mathsf{HOMA}$ that achieves $s$-bit security using a tweakable block cipher with an $s/2$-bit block size. We also propose a new primitive for instantiating $\mathsf{HOMA}$ with $s=128$ by extending the SKINNY tweakable block cipher to a 64-bit plaintext block, a 128-bit key, and a $(256+3)$-bit tweak. We evaluate hardware performance by implementing $\mathsf{HOMA}$ with high-order masking for $d \le 5$. For any $d > 0$, $\mathsf{HOMA}$ outperforms the current state-of-the-art $\mathsf{PFB\_Plus}$ by reducing the circuit area by more than the size of the entire S-box.
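As background for the memory cost being minimized, the following minimal Python sketch shows d-th order Boolean masking: a secret value is split into d+1 XOR shares so that any d of them are uniformly random. This is generic masking, not the HOMA mode itself.

    # d-th order Boolean masking of an s-bit value: split it into d+1 random
    # XOR shares; any d shares are uniformly random and reveal nothing.
    import secrets

    def mask(value: int, bits: int, d: int):
        shares = [secrets.randbits(bits) for _ in range(d)]
        last = value
        for sh in shares:
            last ^= sh
        return shares + [last]

    def unmask(shares):
        out = 0
        for sh in shares:
            out ^= sh
        return out

    x = 0xDEADBEEF
    shares = mask(x, 32, d=5)
    assert unmask(shares) == x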
2022
CRYPTO
PI-Cut-Choo and Friends: Compact Blind Signatures via Parallel Instance Cut-and-Choose and More 📺
Blind signature schemes are one of the best-studied tools for privacy-preserving authentication. Unfortunately, known constructions of provably secure blind signatures either rely on non-standard hardness assumptions, or require parameters that grow linearly with the number of concurrently issued signatures, or involve prohibitively inefficient general techniques such as general secure two-party computation. Recently, Katz, Loss and Rosenberg (ASIACRYPT'21) gave a technique that, for the security parameter n, transforms blind signature schemes secure for O(log n) concurrent executions of the blind signing protocol into ones that are secure for any poly(n) concurrent executions. This transform has two drawbacks that we eliminate in this paper: 1) the communication complexity of the resulting blind signing protocol grows linearly with the number of signing interactions; 2) the resulting schemes inherit a very loose security bound from the underlying scheme and, as a result, require impractical parameter sizes. In this work, we give an improved transform for obtaining a secure blind signing protocol tolerating any poly(n) concurrent executions from one that is secure for O(log n) concurrent executions. While preserving the advantages of the original transform, the communication complexity of our new transform only grows logarithmically with the number of interactions. Under the CDH and RSA assumptions, we improve on this generic transform in terms of concrete efficiency and give (1) a BLS-based blind signature scheme over a standard-sized group where signatures are of size roughly 3 KB and communication per signature is roughly 120 KB; and (2) an Okamoto-Guillou-Quisquater-based blind signature scheme with signatures and communication of roughly 9 KB and 8 KB, respectively.
2022
CRYPTO
Beyond the Csiszár-Korner Bound: Best-Possible Wiretap Coding via Obfuscation 📺
A wiretap coding scheme (Wyner, Bell Syst. Tech. J., 1975) enables Alice to reliably communicate a message m to an honest Bob by sending an encoding c over a noisy channel ChB while at the same time hiding m from Eve who receives c over another noisy channel ChE. Wiretap coding is clearly impossible when ChB is a degraded version of ChE, in the sense that the output of ChB can be simulated using only the output of ChE. A classic work of Csiszár and Korner (IEEE Trans. Inf. Theory, 1978) shows that the converse does not hold. This follows from their full characterization of the channel pairs (ChB, ChE) that enable information-theoretic wiretap coding. In this work, we show that in fact the converse does hold when considering computational security; that is, wiretap coding against a computationally bounded Eve is possible if and only if ChB is not a degraded version of ChE. Our construction assumes the existence of virtual black-box (VBB) obfuscation of specific classes of ``evasive'' functions that generalize fuzzy point functions, and can be heuristically instantiated using indistinguishability obfuscation. Finally, our solution has the appealing feature of being universal in the sense that Alice's algorithm depends only on ChB and not on ChE.
2022
CRYPTO
Ofelimos: Combinatorial Optimization via Proof-of-Useful-Work 📺
Minimizing the energy cost and carbon footprint of the Bitcoin blockchain and related protocols is one of the most widely identified open questions in the cryptocurrency space. Substituting the proof-of-work (PoW) primitive in Nakamoto's longest chain protocol with a {\em proof of useful work} (PoUW) has been long theorized as an ideal solution in many respects but, to this day, the concept still lacks a convincingly secure realization. In this work we put forth {\em Ofelimos}, a novel PoUW-based blockchain protocol whose consensus mechanism simultaneously realizes a decentralized optimization-problem solver. Our protocol is built around a novel local search algorithm, which we call Doubly Parallel Local Search (DPLS), that is especially crafted to suit implementation as the PoUW component of our blockchain protocol. We provide a thorough security analysis of our protocol and additionally present metrics that reflect the usefulness of the system. As an illustrative example we show how DPLS can implement a variant of WalkSAT and experimentally demonstrate its competitiveness with respect to a vanilla WalkSAT implementation. In this way, our work paves the way for safely using blockchain systems as generic optimization engines for a variety of hard optimization problems for which a publicly verifiable solution is desired.
2022
CRYPTO
Multi-Input Attribute Based Encryption and Predicate Encryption 📺
Motivated by several new and natural applications, we initiate the study of multi-input predicate encryption (${\sf miPE}$) and further develop multi-input attribute based encryption (${\sf miABE}$). Our contributions are: 1. {\it Formalizing Security:} We provide definitions for ${\sf miABE}$ and ${\sf miPE}$ in the symmetric-key setting and formalize security in the standard {\it indistinguishability} (IND) paradigm, against {\it unbounded} collusions. 2. {\it Two-input ${\sf ABE}$ for ${\sf NC}_1$ from ${\sf LWE}$ and Pairings:} We provide the first constructions for two-input {\it key-policy} ${\sf ABE}$ for ${\sf NC}_1$ from ${\sf LWE}$ and pairings. Our construction leverages a surprising connection between techniques recently developed by Agrawal and Yamada (Eurocrypt, 2020) in the context of succinct {\it single}-input {\it ciphertext-policy} ${\sf ABE}$ and the seemingly unrelated problem of {\it two}-input {\it key-policy} ${\sf ABE}$. Similarly to Agrawal-Yamada, our construction is proven secure in the bilinear generic group model. By leveraging inner product functional encryption and using (a variant of) the KOALA knowledge assumption, we obtain a construction in the standard model analogously to Agrawal, Wichs and Yamada (TCC, 2020). 3. {\it Heuristic two-input ${\sf ABE}$ for ${\sf P}$ from Lattices:} We show that techniques developed for succinct single-input ciphertext-policy ${\sf ABE}$ by Brakerski and Vaikuntanathan (ITCS 2022) can also be seen from the lens of ${\sf miABE}$ and obtain the first two-input key-policy ${\sf ABE}$ from lattices for ${\sf P}$. 4. {\it Heuristic three-input ${\sf ABE}$ and ${\sf PE}$ for ${\sf NC}_1$ from Pairings and Lattices:} We obtain the first {\it three}-input ${\sf ABE}$ for ${\sf NC}_1$ by harnessing the powers of both the Agrawal-Yamada and the Brakerski-Vaikuntanathan constructions. 5. {\it Multi-input ${\sf ABE}$ to multi-input ${\sf PE}$ via Lockable Obfuscation:} We provide a generic compiler that lifts multi-input ${\sf ABE}$ to multi-input ${\sf PE}$ by relying on the hiding properties of Lockable Obfuscation (${\sf LO}$) by Wichs-Zirdelis and Goyal-Koppula-Waters (FOCS 2018), which can be based on ${\sf LWE}$. Our compiler generalises such a compiler from the single-input setting to the much more challenging setting of multiple inputs. By instantiating our compiler with our new two- and three-input ${\sf ABE}$ schemes, we obtain the first constructions of two- and three-input ${\sf PE}$ schemes. Our constructions of multi-input ${\sf ABE}$ provide the first improvement to the compression factor of {\it non-trivially exponentially efficient Witness Encryption} defined by Brakerski et al. (SCN 2018) without relying on compact functional encryption or indistinguishability obfuscation. We believe that the unexpected connection between succinct single-input ciphertext-policy ${\sf ABE}$ and multi-input key-policy ${\sf ABE}$ may lead to a new pathway for witness encryption. We remark that our constructions provide the first candidates for a nontrivial class of ${\sf miFE}$ without needing ${\sf LPN}$ or low-depth ${\sf PRG}$. Keywords: Multi-input, Attribute Based Encryption, Predicate Encryption.
2022
CRYPTO
Moz$\mathbb{Z}_{2^k}$zarella: Efficient Vector-OLE and Zero-Knowledge Proofs Over $\mathbb{Z}_{2^k}$ 📺
Zero-knowledge proof systems are usually designed to support computations for circuits over $\mathbb{F}_2$ or $\mathbb{F}_p$ for large $p$, but not for computations over $\mathbb{Z}_{2^k}$, which all modern CPUs operate on. Although $\mathbb{Z}_{2^k}$-arithmetic can be emulated using prime moduli, this comes with an unavoidable overhead. Recently, Baum et al. (CCS 2021) suggested a candidate construction for a designated-verifier zero-knowledge proof system that natively runs over $\mathbb{Z}_{2^k}$. Unfortunately, their construction requires preprocessed random vector oblivious linear evaluation (VOLE) to be instantiated over $\mathbb{Z}_{2^k}$. Currently, it is not known how to efficiently generate such random VOLE in large quantities. In this work, we present a maliciously secure, VOLE extension protocol that can turn a short seed-VOLE over $\mathbb{Z}_{2^k}$ into a much longer, pseudorandom VOLE over the same ring. Our construction borrows ideas from recent protocols over finite fields, which we non-trivially adapt to work over $\mathbb{Z}_{2^k}$. Moreover, we show that the approach taken by the QuickSilver zero-knowledge proof system (Yang et al. CCS 2021) can be generalized to support computations over $\mathbb{Z}_{2^k}$. This new VOLE-based proof system, which we call QuarkSilver, yields better efficiency than the previous zero-knowledge protocols suggested by Baum et al. Furthermore, we implement both our VOLE extension and our zero-knowledge proof system, and show that they can generate 13-50 million VOLEs per second for 64 to 256 bit rings, and evaluate 1.3 million 64 bit multiplications per second in zero-knowledge.
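The random VOLE correlation over Z_{2^k} that the extension protocol expands can be stated in a few lines. The trusted-dealer Python sketch below is only meant to fix notation (w = Delta*u + v mod 2^k) and is not the maliciously secure protocol from the paper.

    # The (random) VOLE correlation over Z_{2^k}: the sender holds u, v and
    # the receiver holds Delta, w with w = Delta * u + v (mod 2^k),
    # componentwise. Trusted-dealer sketch only.
    import secrets

    def deal_vole(length: int, k: int = 64):
        mod = 1 << k
        delta = secrets.randbelow(mod)                     # receiver's global key
        u = [secrets.randbelow(mod) for _ in range(length)]
        v = [secrets.randbelow(mod) for _ in range(length)]
        w = [(delta * ui + vi) % mod for ui, vi in zip(u, v)]
        return (u, v), (delta, w)                          # sender, receiver shares

    (u, v), (delta, w) = deal_vole(4)
    assert all((delta * ui + vi) % (1 << 64) == wi for ui, vi, wi in zip(u, v, w))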
2022
CRYPTO
Le Mans: Dynamic and Fluid MPC for Dishonest Majority 📺
Most MPC protocols require the set of parties to be active for the entire duration of the computation. Deploying MPC for use cases such as complex and resource-intensive scientific computations increases the barrier of entry for potential participants. The model of Fluid MPC (Crypto 2021) tackles this issue by giving parties the flexibility to participate in the protocol only when their resources are free. As such, the set of parties is dynamically changing over time. In this work, we extend Fluid MPC, which only considered an honest majority, to the setting where the majority of participants at any point in the computation may be corrupt. We do this by presenting variants of the SPDZ protocol, which support dynamic participants. Firstly, we describe a \emph{universal preprocessing} for SPDZ, which allows a set of $n$ parties to compute some correlated randomness, such that later on, any subset of the parties can use this to take part in an online secure computation. We complement this with a \emph{Dynamic SPDZ} online phase, designed to work with our universal preprocessing, as well as a protocol for securely realising the preprocessing. Our preprocessing protocol is designed to efficiently use pseudorandom correlation generators, thus, the parties' storage and communication costs can be almost independent of the function being evaluated. We then extend this to support a \emph{fluid online phase}, where the set of parties can dynamically evolve during the online phase. Our protocol achieves \emph{maximal fluidity} and security with abort, similarly to the previous, honest majority construction. Achieving this requires a careful design and techniques to guarantee a small state complexity, allowing us to switch between committees efficiently.
2022
CRYPTO
Overloading the Nonce: Rugged PRPs, Nonce-Set AEAD, and Order-Resilient Channels 📺
We introduce a new security notion that lies right in between pseudorandom permutations (PRPs) and strong pseudorandom permutations (SPRPs). We call this new security notion and any (tweakable) cipher that satisfies it a rugged pseudorandom permutation (RPRP). Rugged pseudorandom permutations lend themselves to some interesting applications, have practical benefits, and lead to novel cryptographic constructions. Analogous to the encode-then-encipher paradigm first proposed by Bellare and Rogaway and later extended by Shrimpton and Terashima, we can transform a variable-length tweakable RPRP into an AEAD scheme. However, we can construct RPRPs more efficiently as they are weaker primitives than SPRPs (the notion traditionally required by the encode-then-encipher paradigm). We can construct RPRPs using two-pass schemes, whereas SPRPs typically require three passes over the input data. We also identify new transformations that yield RUP-secure AEAD schemes with more compact ciphertexts than previously known. Further extending this approach, we arrive at a new generalized notion of authenticated encryption and a matching construction, which we refer to as nonce-set AEAD. Nonce-set AEAD is particularly well-suited in the context of secure channels, like QUIC and DTLS, that operate over unreliable transports and employ a window mechanism at the receiver's end of the channel. We conclude by presenting a generic construction for transforming a nonce-set AEAD scheme into an order-resilient secure channel. Our channel construction sheds new light on order-resilient channels and additionally leads to more compact ciphertexts when instantiated from RPRPs.
2022
CRYPTO
Public Randomness Extraction with Ephemeral Roles and Worst-Case Corruptions 📺
We distill a simple information-theoretic model for randomness extraction motivated by the task of generating publicly verifiable randomness in blockchain settings and which is closely related to You-Only-Speak-Once (YOSO) protocols (CRYPTO 2021). With the goal of avoiding denial-of-service attacks, parties speak only once and in sequence by broadcasting a public value and forwarding secret values to future parties. Additionally, an unbounded adversary can corrupt any chosen subset of at most t parties. In contrast, existing YOSO protocols only handle random corruptions. As a notable example, considering worst-case corruptions allows us to reduce trust in the role assignment mechanism, which is assumed to be perfectly random in YOSO. We study the maximum corruption threshold t which allows for unconditional randomness extraction in our model: - With respect to feasibility, we give protocols for t corruptions and n=6t+1 or n=5t parties depending on whether the adversary learns secret values forwarded to corrupted parties immediately once they are sent or only once the corrupted party is executed, respectively. Both settings are motivated by practical implementations of secret value forwarding. To design such protocols, we go beyond the committee-based approach that is sufficient for random corruptions in YOSO but turns out to be sub-optimal for chosen corruptions. - To complement our protocols, we show that low-error randomness extraction is impossible with corruption threshold t and n\leq 4t parties in both settings above. This also provides a separation between chosen and random corruptions, since the latter allows for randomness extraction with close to n/2 random corruptions.
2022
CRYPTO
Lattice-Based SNARKs: Publicly Verifiable, Preprocessing, and Recursively Composable 📺
A succinct non-interactive argument of knowledge (SNARK) allows a prover to produce a short proof that certifies the veracity of a certain NP-statement. In the last decade, a large body of work has studied candidate constructions that are secure against quantum attackers. Unfortunately, no known candidate matches the efficiency and desirable features of (pre-quantum) constructions based on bilinear pairings. In this work, we make progress on this question. We propose the first lattice-based SNARK that simultaneously satisfies many desirable properties: It (i) is tentatively post-quantum secure, (ii) is publicly-verifiable, (iii) has a logarithmic-time verifier and (iv) has a purely algebraic structure making it amenable to efficient recursive composition. Our construction stems from a general technical toolkit that we develop to translate pairing-based schemes to lattice-based ones. At the heart of our SNARK is a new lattice-based vector commitment (VC) scheme supporting openings to constant-degree multivariate polynomial maps, which is a candidate solution for the open problem of constructing VC schemes with openings to beyond linear functions. However, the security of our constructions is based on a new family of lattice-based computational assumptions which naturally generalises the standard Short Integer Solution (SIS) assumption.
2022
CRYPTO
Succinct Interactive Oracle Proofs: Applications and Limitations 📺
Interactive Oracle Proofs (IOPs) are a new type of proof-system that combines key properties of interactive proofs and PCPs: IOPs enable a verifier to be convinced of the correctness of a statement by interacting with an untrusted prover while reading just a few bits of the messages sent by the prover. IOPs have become very prominent in the design of efficient proof-systems in recent years. In this work we study succinct IOPs, which are IOPs in which the communication complexity is polynomial (or even linear) in the original witness. While there are strong impossibility results for the existence of succinct PCPs (i.e., PCPs whose length is polynomial in the witness), it is known that the rich class of NP relations that are decidable in small space have succinct IOPs. In this work we show both new applications, and limitations, for succinct IOPs: 1. First, using one-way functions, we show how to compile IOPs into zero-knowledge proofs, while nearly preserving the proof length. This complements a recent line of work, initiated by Ben-Sasson et al. (TCC, 2016-B), who compile IOPs into super-succinct zero-knowledge arguments. Applying the compiler to the state-of-the-art succinct IOPs yields zero-knowledge proofs for bounded-space NP relations, with communication that is nearly equal to the original witness length. This yields the shortest known zero-knowledge proofs from the minimal assumption of one-way functions. 2. Second, we give a barrier for obtaining succinct IOPs for more general NP relations. In particular, we show that if a language has a succinct IOP, then it can be decided in space that is proportional only to the witness length, after a bounded-time probabilistic preprocessing. We use this result to show that under a simple and plausible (but to the best of our knowledge, new) complexity-theoretic conjecture, there is no succinct IOP for CSAT.
2022
CRYPTO
Formal Verification of Saber’s Public-Key Encryption Scheme in EasyCrypt 📺
In this work, we consider the formal verification of the public-key encryption scheme of Saber, one of the selected few post-quantum cipher suites currently considered for potential standardization. We formally verify this public-key encryption scheme's IND-CPA security and delta-correctness properties, i.e., the properties required to transform the public-key encryption scheme into an IND-CCA2 secure and delta-correct key encapsulation mechanism, in EasyCrypt. To this end, we initially devise hand-written proofs for these properties that are significantly more detailed and meticulous than the presently existing proofs. Subsequently, these hand-written proofs serve as a guideline for the formal verification. The results of this endeavor comprise hand-written and computer-verified proofs which demonstrate that Saber's public-key encryption scheme indeed satisfies the desired security and correctness properties.
2022
CRYPTO
On the Feasibility of Unclonable Encryption and, More 📺
Unclonable encryption, first introduced by Broadbent and Lord (TQC'20), is a one-time encryption scheme with the following security guarantee: any non-local adversary (A, B, C) cannot simultaneously distinguish encryptions of two equal-length messages. This notion is termed unclonable indistinguishability. Prior works focused on achieving a weaker notion of unclonable encryption, where we required that any non-local adversary (A, B, C) cannot simultaneously recover the entire message m. Although seemingly innocuous, understanding the feasibility of encryption schemes satisfying unclonable indistinguishability (even for 1-bit messages) has remained elusive. We make progress towards establishing the feasibility of unclonable encryption. (*) We show that encryption schemes satisfying unclonable indistinguishability exist unconditionally in the quantum random oracle model. (*) Towards understanding the necessity of oracles, we present a negative result stipulating that a large class of encryption schemes cannot satisfy unclonable indistinguishability. (*) Finally, we also establish the feasibility of another closely related primitive: copy-protection for single-bit output point functions. Prior works only established the feasibility of copy-protection for multi-bit output point functions or they achieved constant security error for single-bit output point functions.
2022
CRYPTO
Syndrome Decoding in the Head: Shorter Signatures from Zero-Knowledge Proofs 📺
Zero-knowledge proofs of knowledge are useful tools to design signature schemes. The ongoing effort to build a quantum computer motivates the cryptography community to develop new secure cryptographic protocols based on quantum-hard cryptographic problems. One of the few directions is code-based cryptography, for which the strongest problem is syndrome decoding (SD) for random linear codes. This problem is known to be NP-hard and the cryptanalysis state of the art has been stable for many years. A zero-knowledge protocol for this problem was pioneered by Stern in 1993. Since its publication, many articles proposed optimizations, implementations, or variants. In this paper, we introduce a new zero-knowledge proof for the syndrome decoding problem on random linear codes. Instead of using permutations like most of the existing protocols, we rely on the MPC-in-the-head paradigm in which we reduce the task of proving the low Hamming weight of the SD solution to proving some relations between specific polynomials. We propose a 5-round zero-knowledge protocol that proves knowledge of a vector x such that y = Hx and wt(x) <= w, and which achieves a soundness error close to 1/N for an arbitrary N. Turning this protocol into a signature scheme, we achieve a signature size of 11-12 KB for 128-bit security when relying on the hardness of the SD problem on binary fields. Using bigger fields (like F_{2^8}), we can produce fast signatures of around 8 KB. This allows us to outperform Picnic3 and to be competitive with SPHINCS+, both post-quantum signature candidates in the ongoing NIST standardization effort. Moreover, our scheme outperforms all the existing code-based signature schemes for the common ``signature size + public key size'' metric.
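The statement being proven is the syndrome decoding relation itself. The Python sketch below samples a toy instance (H, y) together with a planted low-weight witness x with Hx = y over F_2; the sizes are illustrative and the sketch says nothing about the MPC-in-the-head protocol.

    # The syndrome decoding relation over F_2: given H and y, the witness is a
    # low-weight x with H x = y. Sample a random instance with a planted
    # witness (toy sizes; not the protocol itself).
    import secrets

    def sample_sd_instance(n=24, k=12, w=3):
        m = n - k                                            # number of parity checks
        H = [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]
        support = secrets.SystemRandom().sample(range(n), w)
        x = [1 if i in support else 0 for i in range(n)]     # wt(x) = w
        y = [sum(H[r][i] & x[i] for i in range(n)) % 2 for r in range(m)]
        return H, y, x

    H, y, x = sample_sd_instance()
    assert sum(x) == 3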
2022
CRYPTO
Tight Preimage Resistance of the Sponge Construction 📺
The cryptographic sponge is a popular method for hash function design. In the ideal permutation model, the construction is proven to be indifferentiable from a random oracle up to the birthday bound in the capacity of the sponge. This result in particular implies that, as long as the attack complexity does not exceed this bound, the sponge construction achieves a level of collision, preimage, and second preimage resistance comparable to that of a random oracle. We investigate these state-of-the-art bounds in detail, and observe that while the collision and second preimage security bounds are tight, the preimage bound is not tight. We derive an improved and tight preimage security bound for the cryptographic sponge construction. The result has direct implications for various lightweight cryptographic hash functions. For example, the NIST Lightweight Cryptography finalist Ascon-Hash does not merely achieve the claimed $2^{128}$ preimage security generically, but even $2^{192}$ preimage security. Comparable improvements are obtained for the modes of Spongent, PHOTON, ACE, Subterranean 2.0, and QUARK, among others.
2022
CRYPTO
log∗-Round Game-Theoretically-Fair Leader Election 📺
It is well-known that in the presence of majority coalitions, strongly fair coin toss is impossible. A line of recent works has shown that by relaxing the fairness notion to a game-theoretic one, we can overcome this classical lower bound. In particular, Chung et al. (CRYPTO'21) showed how to achieve approximately (game-theoretically) fair leader election in the presence of majority coalitions, with round complexity as small as O(log log n). In this paper, we revisit the round complexity of game-theoretically fair leader election. We construct O(log* n)-round leader election protocols that achieve (1-o(1))-approximate fairness in the presence of (1-o(1)) n-sized coalitions. Our protocols achieve the same round-fairness trade-offs as Chung et al.'s and have the advantage of being conceptually simpler. Finally, we also obtain game-theoretically fair protocols for committee election, which might be of independent interest.
2022
CRYPTO
Partial Key Exposure Attacks on BIKE, Rainbow and NTRU 📺
In a so-called partial key exposure attack one obtains some information about the secret key, e.g. via some side-channel leakage. This information might be a certain fraction of the secret key bits (erasure model) or some erroneous version of the secret key (error model). The goal is to recover the secret key from the leaked information. There is a common belief that, as opposed to e.g. the RSA cryptosystem, most post-quantum cryptosystems are usually resistant against partial key exposure attacks. We strongly question this belief by constructing partial key exposure attacks on code-based, multivariate, and lattice-based schemes (BIKE, Rainbow and NTRU). Our attacks exploit the redundancy that modern PQ cryptosystems inherently use for efficiency reasons. The application and development of techniques from information set decoding plays a crucial role for achieving our results. On the theoretical side, we show non-trivial information leakage bounds that allow for a polynomial time key recovery attack. As an example, for all schemes the knowledge of a constant fraction of the secret key bits suffices to reconstruct the full key in polynomial time. Even if we no longer insist on polynomial time attacks, most of our attacks extend well and remain feasible up to large erasure and error rates. In the case of BIKE for example we obtain attack complexities around 60 bits when half of the secret key bits are erased, or a quarter of the secret key bits are faulty. Our results show that even highly error-prone key leakage of modern PQ cryptosystems may lead to full secret key recoveries.
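The two leakage models named above are easy to state concretely. The Python sketch below applies them to a toy bit-string key: in the erasure model a fraction of the bits is revealed and the rest are unknown, while in the error model every bit is flipped independently with some probability; the key format and rates are illustrative only.

    # The two leakage models on a toy bit-string secret key:
    # erasure model -> a random fraction of key bits is revealed, the rest unknown;
    # error model   -> every key bit is flipped independently with some probability.
    import random

    rng = random.Random(1)
    key = [rng.randint(0, 1) for _ in range(256)]

    def erasure_leakage(key, known_fraction=0.5):
        return [b if rng.random() < known_fraction else None for b in key]

    def error_leakage(key, flip_prob=0.25):
        return [b ^ 1 if rng.random() < flip_prob else b for b in key]

    partial = erasure_leakage(key)      # attacker sees ~half of the bits
    noisy   = error_leakage(key)        # attacker sees a faulty copy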
2022
CRYPTO
Practical Statistically-Sound Proofs of Exponentiation in any Group 📺
A proof of exponentiation (PoE) in a group G of unknown order allows a prover to convince a verifier that a tuple (x, q, T, y) ∈ G × N × N × G satisfies x^(q^T) = y. This primitive has recently found exciting applications in the constructions of verifiable delay functions and succinct arguments of knowledge. The most practical PoEs only achieve soundness either under computational assumptions, i.e., they are arguments (Wesolowski, Journal of Cryptology 2020), or in groups that come with the promise of not having any small subgroups (Pietrzak, ITCS 2019). The only statistically-sound PoE in general groups of unknown order is due to Block et al. (CRYPTO 2021), and can be seen as an elaborate parallel repetition of Pietrzak's PoE: to achieve λ bits of security, say λ = 80, the number of repetitions required (and thus the blow-up in communication) is as large as λ. In this work we propose a statistically-sound PoE for the case where the exponent q is the product of all primes up to some bound B. We show that, in this case, it suffices to run only λ/log(B) parallel instances of Pietrzak's PoE, which reduces the concrete proof-size compared to Block et al. by an order of magnitude. Furthermore, we show that in the known applications where PoEs are used as a building block such structured exponents are viable. Finally, we also discuss batching of our PoE, showing that many proofs (for the same G and q but different x and T) can be batched by adding only a single element to the proof per additional statement.
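The statement of a PoE and the repetition count mentioned above can be made concrete as follows. The Python sketch below naively evaluates x^(q^T) mod N for a structured exponent q (the product of all primes up to B) and computes lambda/log2(B); the modulus and parameters are toy values, and the sketch is not the protocol itself, whose point is precisely to spare the verifier this sequential computation.

    # Naive evaluation of the PoE statement x^(q^T) = y in a toy RSA-like
    # group, plus the repetition count lambda/log2(B) discussed above.
    from math import log2

    def primes_up_to(B):
        sieve = [True] * (B + 1)
        sieve[0] = sieve[1] = False
        for p in range(2, int(B ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p::p] = [False] * len(sieve[p * p::p])
        return [p for p, is_prime in enumerate(sieve) if is_prime]

    N = 101 * 9973            # toy composite modulus (stand-in for a group of unknown order)
    B, T, lam = 64, 10, 80    # exponent bound, tuple parameter T, security parameter
    q = 1
    for p in primes_up_to(B):
        q *= p                # structured exponent: product of all primes <= B

    x = 5
    y = x
    for _ in range(T):        # the prover's sequential work: y = x^(q^T) mod N
        y = pow(y, q, N)

    repetitions = lam / log2(B)   # ~13.3 parallel instances instead of lam = 80
    print(y, repetitions)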
2022
CRYPTO
New Constructions of Collapsing Hashes 📺
Collapsing is the preferred post-quantum security notion for hash functions, needed to lift many classical results to the quantum setting. Unfortunately, the only existing standard-model proofs of collapsing hashes require LWE. We construct the first collapsing hashes from the quantum hardness of any one of the following problems: - LPN in a variety of low noise or high-hardness regimes, essentially matching what is known for collision resistance from LPN. - Finding cycles on certain exponentially-large expander graphs, such as those arising from isogenies on elliptic curves. - The "optimal" hardness of finding collisions in *any* hash function. - The *polynomial* hardness of finding collisions, assuming a certain plausible regularity condition on the hash. As an immediate corollary, we obtain the first statistically hiding post-quantum commitments and post-quantum succinct arguments (of knowledge) under the same assumptions. Our results are obtained by a general theorem which shows how to construct a collapsing hash H' from a post-quantum collision-resistant hash function H, regardless of whether or not H itself is collapsing, assuming H satisfies a certain regularity condition we call "semi-regularity".
2022
CRYPTO
To Label, or Not To Label (in Generic Groups) 📺
Generic groups are an important tool for analyzing the feasibility and infeasibility of group-based cryptosystems. There are two distinct widespread versions of generic groups, Shoup's and Maurer's, the main difference being whether or not group elements are given explicit labels. The two models are often treated as equivalent. In this work, however, we demonstrate that the models are in fact quite different, and care is needed when stating generic group results: - We show that numerous textbook constructions are \emph{not} captured by Maurer, but are captured by Shoup. In the other direction, any construction captured by Maurer \emph{is} captured by Shoup. - For constructions that exist in both models, we show that security is equivalent for ``single stage'' games, but Shoup security is strictly stronger than Maurer security for some ``multi-stage'' games. - The existing generic group un-instantiability results do not apply to Maurer. We fill this gap with a new un-instantiability result. - We explain how the known black-box separations between generic groups and identity-based encryption do not fully apply to Shoup, and resolve this by providing such a separation. - We give a new un-instantiability result for the \emph{algebraic} group model.
2022
CRYPTO
Augmented Random Oracles 📺
We propose a new paradigm for justifying the security of random oracle-based protocols, which we call the Augmented Random Oracle Model (AROM). We show that the AROM captures a wide range of important random oracle impossibility results. Thus a proof in the AROM implies some resiliency to such impossibilities. We then consider three ROM transforms which are subject to impossibilities: Fiat-Shamir (FS), Fujisaki-Okamoto (FO), and Encrypt-with-Hash (EwH). We show in each case how to obtain security in the AROM by strengthening the building blocks or modifying the transform. Along the way, we give a couple other results. We improve the assumptions needed for the FO and EwH impossibilities from indistinguishability obfuscation to circularly secure LWE; we argue that our AROM still captures this improved impossibility. We also demonstrate that there is no ``best possible'' hash function, by giving a pair of security properties, both of which can be instantiated in the standard model separately, which cannot be simultaneously satisfied by a single hash function.
2022
CRYPTO
Low Communication Complexity Protocols, Collision Resistant Hash Functions and Secret Key-Agreement Protocols 📺
We study communication complexity in computational settings where bad inputs may exist, but they should be hard to find for any computationally bounded adversary. We define a model where there is a source of public randomness but the inputs are chosen by a computationally bounded adversarial participant \emph{after seeing the public randomness}. We show that breaking some known communication lower bounds in this model is closely connected to known cryptographic assumptions. We consider the simultaneous messages model and the interactive communication model and show that for any non-trivial predicate (such as equality): 1. Breaking the \Omega(\sqrt n) bound in the simultaneous message case or the \Omega(\log n) bound in the interactive communication case implies the existence of distributional collision-resistant hash functions (dCRH). Note that with a CRH the lower bounds can be broken. This is shown using techniques from Babai and Kimmel [CCC 1997]. 2. There are no protocols of constant communication in this preset-randomness setting (unlike the plain public randomness model). The other model we study is that of a stateful ``free talk'', where participants can communicate freely \emph{before} the inputs are chosen and may maintain a state, and the communication complexity is measured only afterwards. We show that efficient protocols for equality in this model imply secret key-agreement protocols (in a constructive sense). On the other hand, secret key-agreement protocols imply protocols for equality that are optimal in terms of error.
2022
CRYPTO
On Codes and Learning with Errors over Function Fields 📺
It is a long-standing open problem to find search-to-decision reductions for structured versions of the decoding problem of linear codes. Such results in the lattice-based setting have been carried out using number fields: Polynomial-LWE, Ring-LWE, Module-LWE and so on. We propose a function field version of the LWE problem. This new framework leads to another point of view on structured codes, e.g. quasi-cyclic codes, strengthening the connection between lattice-based and code-based cryptography. In particular, we obtain the first search-to-decision reduction for structured codes. Following the historical constructions in lattice-based cryptography, we instantiate our construction with function field analogues of cyclotomic fields, namely Carlitz extensions, leading to search-to-decision reductions on various versions of Ring-LPN, which have applications to secure multi-party computation and to an authentication protocol.
2022
CRYPTO
Threshold Signatures with Private Accountability 📺
Existing threshold signature schemes come in two flavors: (i) fully private, where the signature reveals nothing about the set of signers that generated the signature, and (ii) accountable, where the signature completely identifies the set of signers. In this paper we propose a new type of threshold signature, called TAPS, that is a hybrid of privacy and accountability. A TAPS signature is fully private from the public's point of view. However, an entity that has a secret tracing key can trace a signature to the threshold of signers that generated it. A TAPS makes it possible for an organization to keep its inner workings private, while ensuring that signers are accountable for their actions. We construct a number of TAPS schemes. First, we present a generic construction that builds a TAPS from any accountable threshold signature. This generic construction is not efficient, and we next focus on efficient schemes based on standard assumptions. We build two efficient TAPS schemes (in the random oracle model) based on the Schnorr signature scheme. We conclude with a number of open problems relating to efficient TAPS.
2022
CRYPTO
Sustained Space and Cumulative Complexity Trade-offs for Data-Dependent Memory-Hard Functions 📺
Memory-hard functions (MHFs) are a useful cryptographic primitive which can be used to design egalitarian proof of work puzzles and to protect low entropy secrets like passwords against brute-force attackers. Intuitively, a memory-hard function is a function whose evaluation costs are dominated by memory costs even if the attacker uses specialized hardware (FPGAs/ASICs), and several cost metrics have been proposed to quantify this intuition. For example, space-time cost looks at the product of running time and the maximum space usage over the entire execution of an algorithm. Alwen and Serbinenko (STOC 2015) observed that the space-time cost of evaluating a function multiple times may not scale linearly in the number of instances being evaluated and introduced the stricter requirement that a memory-hard function has high cumulative memory complexity (CMC) to ensure that an attacker's amortized space-time costs remain large even if the attacker evaluates the function on multiple different inputs in parallel. Alwen et al. (EUROCRYPT 2018) observed that the notion of CMC still gives the attacker undesirable flexibility in selecting space-time tradeoffs e.g., while the MHF scrypt has maximal CMC $\Omega(N^2)$, an attacker could evaluate the function with constant $O(1)$ memory in time $O(N^2)$. Alwen et al. introduced an even stricter notion of Sustained Space complexity and designed an MHF with sustained space complexity $s=\Omega(N/\log N)$ over $t=\Omega(N)$ steps, i.e., any algorithm evaluating the function in the parallel random oracle model must have at least $t=\Omega(N)$ steps where the memory usage is at least $\Omega(N/\log N)$. In this work we use dynamic pebbling games and dynamic graphs to explore tradeoffs between sustained space complexity and cumulative memory complexity for data-dependent memory-hard functions such as Argon2id and scrypt. We design our own dynamic graph (dMHF) with the property that {\em any} dynamic pebbling strategy either (1) has $\Omega(N)$ rounds with $\Omega(N)$ space, or (2) has CMC $\Omega(N^{3-\epsilon})$ --- substantially larger than $N^2$. For Argon2id we show that {\em any} dynamic pebbling strategy either (1) has $\Omega(N)$ rounds with $\Omega(N^{1-\epsilon})$ space, or (2) has CMC $\omega(N^2)$. We also present a dynamic version of DRSample (Alwen et al. 2017) for which {\em any} dynamic pebbling strategy either (1) has $\Omega(N)$ rounds with $\Omega(N/\log N)$ space, or (2) has CMC $\Omega(N^3/\log N)$.
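The following toy Python sketch (not from the paper; the trace and numbers are made up for illustration) contrasts the three cost metrics mentioned above on a single memory-usage trace, showing how an evaluation can have a large space-time product yet a much smaller cumulative memory complexity.

```python
# Toy illustration (not from the paper): comparing cost metrics on a
# hypothetical evaluation trace, where trace[i] is the memory (in blocks)
# used at step i of the computation.

def max_space(trace):
    return max(trace)

def space_time(trace):
    # product of running time and peak memory usage
    return len(trace) * max(trace)

def cumulative_memory_complexity(trace):
    # sum of memory usage over all steps (CMC of a single evaluation)
    return sum(trace)

# An attacker trace that keeps memory low for most of the computation:
trace = [1] * 900 + [100] * 100
print(max_space(trace))                      # 100
print(space_time(trace))                     # 1000 * 100 = 100000
print(cumulative_memory_complexity(trace))   # 900 + 10000 = 10900
```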
2022
CRYPTO
Orion: Zero Knowledge Proof with Linear Prover Time 📺
Zero-knowledge proof is a powerful cryptographic primitive that has found various applications in the real world. However, existing schemes suffer from a high overhead on the proof generation time that is super-linear in the size of the statement represented as an arithmetic circuit, limiting their efficiency and scalability in practice. In this paper, we present Orion, a new zero-knowledge argument system that achieves $O(N)$ prover time of field operations and hash functions and $O(\log^2 N)$ proof size. Orion is concretely efficient and our implementation shows that the prover time is 3.09s and the proof size is 1.5MB for a circuit with $2^{20}$ multiplication gates. The prover time is the fastest among all existing systems, and the proof size is an order of magnitude smaller than that of a recent scheme by Golovnev et al. (2021). In particular, we propose two new techniques that lead to the efficiency improvement. (1) We propose a new algorithm to test whether a random bipartite graph is a lossless expander graph or not, based on the densest subgraph algorithm. It allows us to sample lossless expanders with an overwhelming probability. The technique improves the efficiency and/or security of all existing zero-knowledge argument schemes with a linear prover time. The testing algorithm based on the densest subgraph may be of independent interest for other applications of expander graphs. (2) We develop an efficient proof composition scheme, code switching, to reduce the proof size from square root to polylogarithmic. The scheme is built on the encoding circuit of a linear code and shows that the witness of a second zero-knowledge argument is the same as the message in the linear code. The proof composition only introduces a small overhead on the prover time.
2022
CRYPTO
Block-Cipher-Based Tree Hashing 📺
First of all, we take a thorough look at an error in a paper by Daemen et al. (ToSC 2018) on minimal requirements for tree-based hashing based on multiple primitives, including block ciphers. This reveals that the error is more fundamental than previously shown by Gunsing et al. (ToSC 2020), whose analysis was mainly concerned with its effect on the security bounds. It turns out that the error is caused by an essential oversight in the interaction between the different oracles used in the indifferentiability proofs. In essence, it reduces the claim from the normal indifferentiability setting to the weaker sequential indifferentiability one. As a matter of fact, this error appeared in multiple earlier indifferentiability papers, including the optimal indifferentiability of the sum of permutations (EUROCRYPT 2018) and the recent ABR+ construction (EUROCRYPT 2021). We discuss in detail how this oversight is caused and how it can be avoided. We next demonstrate how the negative effects on the security bound of the construction by Daemen et al. can be resolved. Instead of only allowing a truncated output, we generalize the construction to allow for any finalization function and investigate the security of this for five different types of finalization. Our findings, among others, show that the security of the SHA-2 mode does not degrade if the feed-forward is dropped and that the modern BLAKE3 construction is secure in principle but that its use of the extendable output requires its counter used for random access to be public. Finally, we introduce the tree sponge, a generalization of the sequential sponge construction with parallel absorbing and squeezing.
2022
CRYPTO
A More Complete Analysis of the Signal Double Ratchet Algorithm 📺
Seminal works by Cohn-Gordon, Cremers, Dowling, Garratt, and Stebila [Journal of Cryptology 2020] and Alwen, Coretti, and Dodis [EUROCRYPT 2019] provided the first formal frameworks for studying the widely-used Signal Double Ratchet (DR for short) algorithm. In this work, we develop a new Universally Composable (UC) definition F_DR that we show is provably achieved by the DR protocol. Our definition captures not only the security and correctness guarantees of the DR already identified in the prior state-of-the-art analyses of Cohn-Gordon et al. and Alwen et al., but also more guarantees that are absent from one or both of these works. In particular, we construct six different modified versions of the DR protocol, all of which are insecure according to our definition F_DR, but remain secure according to one (or both) of their definitions. For example, our definition is the first to capture CCA-style attacks possible immediately after a compromise — attacks that, as we show, the DR protocol provably resists, but were not captured by prior definitions. We additionally show that multiple compromises of a party in a short time interval, which the DR should be able to withstand, as we understand from its whitepaper, nonetheless introduce a new non-trivial (albeit minor) weakness of the DR. Since the definitions in the literature (including our F_DR above) do not capture security against this more nuanced scenario, we define a new stronger definition F_TR that does. Finally, we provide a minimalistic modification to the DR (that we call the Triple Ratchet, or TR for short) and show that the resulting protocol securely realizes the stronger functionality F_TR. Remarkably, the modification incurs no additional communication cost and virtually no additional computational cost. We also show that these techniques can be used to improve communication costs in other scenarios, e.g. practical Updatable Public Key Encryption schemes and the re-randomized TreeKEM protocol of Alwen et al. [CRYPTO 2020] for Secure Group Messaging.
2022
CRYPTO
Structure-Aware Private Set Intersection, With Applications to Fuzzy Matching 📺
In two-party private set intersection (PSI), Alice holds a set $X$, Bob holds a set $Y$, and they learn (only) the contents of $X \cap Y$. We introduce \textbf{structure-aware PSI} protocols, which take advantage of situations where Alice's set $X$ is publicly known to have a certain structure. The goal of structure-aware PSI is to have communication that scales with the \emph{description size} of Alice's set, rather than its \emph{cardinality}. We introduce a new generic paradigm for structure-aware PSI based on function secret-sharing (FSS). In short, if there exists compact FSS for a class of structured sets, then there exists a semi-honest PSI protocol that supports this class of input sets, with communication cost proportional only to the FSS share size. Several prior protocols for efficient (plain) PSI can be viewed as special cases of our new paradigm, with an implicit FSS for unstructured sets. Our PSI protocol can be instantiated from a significantly weaker flavor of FSS, which has not been previously studied. We develop several improved FSS techniques that take advantage of these relaxed requirements, and which are in some cases exponentially better than existing FSS. Finally, we explore in depth a natural application of structure-aware PSI. If Alice's set $X$ is the union of many radius-$\delta$ balls in some metric space, then an intersection between $X$ and $Y$ corresponds to \textbf{fuzzy PSI}, in which the parties learn which of their points are within distance $\delta$. In structure-aware PSI, the communication cost scales with the number of balls in Alice's set, rather than their total volume. Our techniques lead to efficient fuzzy PSI for $\ell_\infty$ and $\ell_1$ metrics (and approximations of $\ell_2$ metric) in high dimensions. We implemented this fuzzy PSI protocol for 2-dimensional $\ell_\infty$ metrics. For reasonable input sizes, our protocol requires 45--60\% less time and 85\% less communication than competing approaches that simply reduce the problem to plain PSI.
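As a point of reference, the following toy Python sketch (illustrative only, with made-up points; it implements none of the privacy machinery) spells out the fuzzy-matching functionality described above for the $\ell_\infty$ metric: which of Bob's points fall inside some radius-$\delta$ ball around one of Alice's centers.

```python
# Toy sketch of the *functionality* of fuzzy PSI (no privacy): Alice's set is
# a union of radius-delta balls in the l_infinity metric, described by their
# centers; Bob learns which of his points fall inside some ball.

def linf_distance(p, q):
    return max(abs(a - b) for a, b in zip(p, q))

def fuzzy_matches(centers, points, delta):
    return [pt for pt in points
            if any(linf_distance(pt, c) <= delta for c in centers)]

centers = [(0, 0), (100, 100)]
points = [(3, -2), (50, 50), (98, 104)]
print(fuzzy_matches(centers, points, delta=5))   # [(3, -2), (98, 104)]
```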
2022
CRYPTO
Nearly Optimal Property Preserving Hashing 📺
Property-preserving hashing (PPH) consists of a family of compressing hash functions $h$ such that, for any two inputs $x,y$, we can correctly identify whether some property $P(x,y)$ holds given only the digests $h(x),h(y)$. In a basic PPH, correctness should hold with overwhelming probability over the choice of $h$ when $x,y$ are worst-case values chosen a-priori and independently of $h$. In an adversarially robust PPH (RPPH), correctness must hold even when $x,y$ are chosen adversarially and adaptively depending on $h$. Here, we study (R)PPH for the property that the Hamming distance between $x$ and $y$ is at most $t$. The notion of (R)PPH was introduced by Boyle, LaVigne and Vaikuntanathan (ITCS '19), and further studied by Fleischhacker, Simkin (Eurocrypt '21) and Fleischhacker, Larsen, Simkin (Eurocrypt '22). In this work, we obtain improved constructions that are conceptually simpler, have nearly optimal parameters, and rely on more general assumptions than prior works. Our results are: * We construct information-theoretic non-robust PPH for Hamming distance via syndrome list-decoding of linear error-correcting codes. We provide a lower bound showing that this construction is essentially optimal. * We make the above construction robust with little additional overhead, by relying on homomorphic collision-resistant hash functions, which can be constructed from either the discrete-logarithm or the short-integer-solution assumptions. The resulting RPPH achieves improved compression compared to prior constructions, and is nearly optimal. * We also show an alternate construction of RPPH for Hamming distance under the minimal assumption that standard collision-resistant hash functions exist. The compression is slightly worse than our optimized construction using homomorphic collision-resistance, but essentially matches the prior state of the art constructions from specific algebraic assumptions. * Lastly, we study a new notion of randomized robust PPH (R2P2H) for Hamming distance, which relaxes RPPH by allowing the hashing algorithm itself to be randomized. We give an information-theoretic construction with optimal parameters.
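The following toy Python sketch (an illustration of the syndrome idea only, not the paper's actual construction or parameters) shows why syndromes of a linear code act as a non-robust PPH for Hamming distance: digests are linear, so the predicate reduces to syndrome decoding of the difference. Here the matrix is random and decoding is brute force, so the "far" case may yield false positives; a real construction avoids this by using a code of minimum distance greater than 2t.

```python
# Toy sketch (not the paper's construction): a non-robust PPH for
# "Hamming distance <= t" from syndromes of a linear code.  The digest of x
# is the syndrome H*x over GF(2); since H*x xor H*y = H*(x xor y), deciding
# the predicate reduces to syndrome decoding of the difference.

import itertools, random

n, m, t = 12, 8, 2           # input length, digest length, distance threshold
random.seed(0)
H = [[random.randrange(2) for _ in range(n)] for _ in range(m)]

def syndrome(x):
    return tuple(sum(h_i * x_i for h_i, x_i in zip(row, x)) % 2 for row in H)

def close(hx, hy):
    # is there an error vector of weight <= t with syndrome hx xor hy?
    diff = tuple(a ^ b for a, b in zip(hx, hy))
    for w in range(t + 1):
        for pos in itertools.combinations(range(n), w):
            e = [1 if i in pos else 0 for i in range(n)]
            if syndrome(e) == diff:
                return True
    return False

x = [random.randrange(2) for _ in range(n)]
y = x[:]; y[0] ^= 1; y[5] ^= 1                  # Hamming distance 2
print(close(syndrome(x), syndrome(y)))          # True
```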
2022
CRYPTO
Sharing Transformation and Dishonest Majority MPC with Packed Secret Sharing 📺
In the last few years, the efficiency of secure multi-party computation (MPC) in the dishonest majority setting has increased by several orders of magnitude, starting with the SPDZ protocol family which offers a speedy information-theoretic online phase in the preprocessing model. However, state-of-the-art n-party MPC protocols in the dishonest majority setting incur online communication complexity per multiplication gate which is linear in the number of parties, i.e. O(n), per gate across all parties. In this work, we construct the first MPC protocols in the preprocessing model for dishonest majority with sublinear communication complexity per gate in the number of parties n. To achieve our results, we extend the use of packed secret sharing to the dishonest majority setting. For a constant fraction of corrupted parties (e.g., if 99 percent of the parties are corrupt), we can achieve a communication complexity of O(1) field elements per multiplication gate across all parties. At the crux of our techniques lies a new technique called sharing transformation. The sharing transformation technique allows us to transform shares under one type of linear secret sharing scheme into another, and even perform arbitrary linear maps on the secrets of (packed) secret sharing schemes with optimal communication complexity. This technique can be of independent interest since transferring shares from one type of scheme into another (e.g., for degree reduction) is ubiquitous in MPC. Furthermore, we introduce what we call sparsely packed Shamir sharing which allows us to address the issue of network routing efficiently, and packed Beaver triples which is an extension of the widely used technique of Beaver triples for packed secret sharing (for dishonest majority).
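For intuition, here is a toy Python sketch of packed Shamir secret sharing (illustrative only; the field size, evaluation points, and parameters are arbitrary choices, and none of the paper's MPC machinery is shown): k secrets are embedded in a single polynomial of degree t+k-1, so one sharing carries k secrets while remaining private against t colluding parties.

```python
# Toy sketch of packed Shamir secret sharing over a prime field.  k secrets
# are embedded as evaluations of one random polynomial of degree t + k - 1;
# any t + k shares reconstruct all of them, and t shares reveal nothing.

import random

P = 2**61 - 1                     # a prime modulus (toy choice)

def interpolate(points, x):
    # Lagrange interpolation of the unique polynomial through `points`, at x
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def share(secrets, n, t):
    k = len(secrets)
    # secret j lives at evaluation point -(j+1); parties sit at points 1..n
    fixed = [(-(j + 1) % P, s) for j, s in enumerate(secrets)]
    # t extra random evaluations fix a uniformly random admissible polynomial
    randpts = [(P - 10**6 - i, random.randrange(P)) for i in range(t)]
    base = fixed + randpts            # t + k points define a degree t+k-1 poly
    return [interpolate(base, x) for x in range(1, n + 1)]

def reconstruct(shares, idxs, k):
    pts = [(i, shares[i - 1]) for i in idxs]
    return [interpolate(pts, -(j + 1) % P) for j in range(k)]

secrets = [11, 22, 33]
sh = share(secrets, n=7, t=2)                        # degree-4 poly, 7 parties
print(reconstruct(sh, idxs=[1, 2, 3, 4, 5], k=3))    # [11, 22, 33]
```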
2022
CRYPTO
Public-Coin 3-Round Zero-Knowledge from Learning with Errors and Keyless Multi-Collision-Resistant Hash 📺
We construct a public-coin 3-round zero-knowledge argument for NP assuming (i) the sub-exponential hardness of the learning with errors (LWE) problem and (ii) the existence of keyless multi-collision-resistant hash functions against slightly super-polynomial-time adversaries. These assumptions are almost identical to those that were used recently to obtain a private-coin 3-round zero-knowledge argument [Bitansky et al., STOC 2018]. (The difference is that we assume sub-exponential hardness instead of quasi-polynomial hardness for the LWE problem.)
2022
CRYPTO
Succinct Classical Verification of Quantum Computation 📺
We construct a classically verifiable succinct interactive argument for quantum computation (BQP) with communication complexity and verifier runtime that are poly-logarithmic in the runtime of the BQP computation (and polynomial in the security parameter). Our protocol is secure assuming the post-quantum security of indistinguishability obfuscation (iO) and Learning with Errors (LWE). This is the first succinct argument for quantum computation in the plain model; prior work (Chia-Chung-Yamakawa, TCC ’20) requires both a long common reference string and non-black-box use of a hash function modeled as a random oracle. At a technical level, we revisit the framework for constructing classically verifiable quantum computation (Mahadev, FOCS ’18). We give a self-contained, modular proof of security for Mahadev’s protocol, which we believe is of independent interest. Our proof readily generalizes to a setting in which the verifier’s first message (which consists of many public keys) is compressed. Next, we formalize this notion of compressed public keys; we view the object as a generalization of constrained/programmable PRFs and instantiate it based on indistinguishability obfuscation. Finally, we compile the above protocol into a fully succinct argument using a (sufficiently composable) succinct argument of knowledge for NP. Using our framework, we achieve several additional results, including – Succinct arguments for QMA (given multiple copies of the witness), – Succinct non-interactive arguments for BQP (or QMA) in the quantum random oracle model, and – Succinct batch arguments for BQP (or QMA) assuming post-quantum LWE (without iO).
2022
CRYPTO
Nova: Recursive Zero-Knowledge Arguments from Folding Schemes 📺
We introduce a new approach to realize incrementally verifiable computation (IVC), in which the prover recursively proves the correct execution of incremental computations of the form y=F^{(\ell)}(x), where F is a (potentially non-deterministic) computation, x is the input, y is the output, and \ell > 0. Unlike prior approaches to realize IVC, our approach avoids succinct non-interactive arguments of knowledge (SNARKs) entirely (and arguments of knowledge in general). Instead, we introduce and employ folding schemes, a weaker, simpler, and more efficiently-realizable primitive, which reduces the task of checking two instances in some relation to the task of checking a single instance. We construct a folding scheme for NP and show that it implies an IVC scheme with improved efficiency characteristics: (1) the "recursion overhead" (i.e., the number of steps that the prover proves in addition to proving the execution of F) is a constant and it is dominated by two group scalar multiplications expressed as a circuit (this is the smallest recursion overhead in the literature) and (2) the prover's work at each step is dominated by two multiexponentiations of size O(|F|), providing the fastest prover in the literature. The size of a proof is O(|F|) group elements, but we show that using a variant of an existing zkSNARK, the prover can prove the knowledge of a valid proof succinctly and in zero-knowledge with O(\log{|F|}) group elements. Finally, our approach neither requires a trusted setup nor FFTs, so it can be instantiated efficiently with any cycles of elliptic curves where DLOG is hard.
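The following toy Python sketch illustrates only the flavor of a folding step, for a purely linear relation rather than the relaxed R1CS that Nova actually folds (so this is a deliberate simplification, not the paper's scheme): two instances of A·w = b are folded into one by a random linear combination, and a witness for the folded instance follows from witnesses for the originals.

```python
# Toy sketch of the folding idea for a *linear* relation only (Nova folds a
# relaxed form of R1CS; this just shows the flavor).  An instance is a vector
# b, a witness is w with A*w = b; two instances fold into one by a random
# linear combination, and w1 + r*w2 witnesses the folded instance.

import random

P = 101                                           # toy prime field

def matvec(A, w):
    return [sum(a * x for a, x in zip(row, w)) % P for row in A]

A  = [[1, 2, 3], [4, 5, 6]]
w1, w2 = [1, 0, 2], [3, 3, 1]
b1, b2 = matvec(A, w1), matvec(A, w2)             # two instances in the relation

r = random.randrange(1, P)                        # verifier's folding challenge
b_folded = [(x + r * y) % P for x, y in zip(b1, b2)]
w_folded = [(x + r * y) % P for x, y in zip(w1, w2)]

print(matvec(A, w_folded) == b_folded)            # True: one check instead of two
```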
2022
CRYPTO
Authenticated garbling from simple correlations 📺
We revisit the problem of constant-round malicious secure two-party computation by considering the use of simple correlations, namely sources of correlated randomness that can be securely generated with sublinear communication complexity and good concrete efficiency. The current state-of-the-art protocol of Katz et al. (Crypto 2018) achieves malicious security by realizing a variant of the authenticated garbling functionality of Wang et al. (CCS 2017). Given oblivious transfer correlations, the communication cost of this protocol (with 40 bits of statistical security) is comparable to roughly 10 garbled circuits (GCs). This protocol inherently requires more than 2 rounds of interaction. In this work, we use other kinds of simple correlations to realize the authenticated garbling functionality with better efficiency. Concretely, we get the following reduced costs in the random oracle model: - Using variants of both vector oblivious linear evaluation (VOLE) and multiplication triples (MT), we reduce the cost to 1.31 GCs. - Using only variants of VOLE, we reduce the cost to 2.25 GCs. - Using only variants of MT, we obtain a non-interactive (i.e., 2-message) protocol with cost comparable to 7.47 GCs. Finally, we show that by using recent constructions of pseudorandom correlation generators (Boyle et al., CCS 2018, Crypto 2019, 2020), the simple correlations consumed by our protocols can be securely realized without forming an efficiency bottleneck.
2022
CRYPTO
MuSig-L: Lattice-Based Multi-Signature With Single-Round Online Phase 📺
Multi-signatures are protocols that allow a group of signers to jointly produce a single signature on the same message. In recent years, a number of practical multi-signature schemes have been proposed in the discrete-log setting, such as MuSig2 (CRYPTO'21) and DWMS (CRYPTO'21). The main technical challenge in constructing a multi-signature scheme is to achieve a set of several desirable properties, such as (1) security in the plain public-key (PPK) model, (2) concurrent security, (3) low online round complexity, and (4) key aggregation. However, previous lattice-based, post-quantum counterparts to Schnorr multi-signatures fail to satisfy these properties. In this paper, we introduce MuSig-L, a lattice-based multi-signature scheme simultaneously achieving these design goals for the first time. Unlike the recent, round-efficient proposal of Damgård et al. (PKC'21), which had to rely on lattice-based trapdoor commitments, we do not require any additional primitive in the protocol, while being able to prove security from the standard module-SIS and LWE assumptions. The resulting output signature of our scheme therefore looks closer to the usual Fiat--Shamir-with-abort signatures.
2022
CRYPTO
Differential Cryptanalysis in the Fixed-Key Model 📺
A systematic approach to the fixed-key analysis of differential probabilities is proposed. It is based on the propagation of 'quasidifferential trails', which keep track of probabilistic linear relations on the values satisfying a differential characteristic in a theoretically sound way. It is shown that the fixed-key probability of a differential can be expressed as the sum of the correlations of its quasidifferential trails. The theoretical foundations of the method are based on an extension of the difference-distribution table, which we call the quasidifferential transition matrix. The role of these matrices is analogous to that of correlation matrices in linear cryptanalysis. This puts the theory of differential and linear cryptanalysis on an equal footing. The practical applicability of the proposed methodology is demonstrated by analyzing several differentials for RECTANGLE, KNOT, Speck and Simon. The analysis is automated and applicable to other SPN and ARX designs. Several attacks are shown to be invalid; most others turn out to work only for some keys, but can be improved for weak keys.
2022
CRYPTO
Practical Sublinear Proofs for R1CS from Lattices 📺
We propose a practical sublinear-size zero-knowledge proof system for Rank-1 Constraint Satisfaction (R1CS) based on lattices. The proof size scales asymptotically with the square root of the witness size. Concretely, the size becomes 2-3 times smaller than Ligero (ACM CCS 2017), which also exhibits square root scaling, for large instances of R1CS. At the core lies an interactive variant of the Schwartz-Zippel Lemma that might be of independent interest.
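For readers unfamiliar with the underlying lemma, here is a minimal Python sketch of the plain (non-interactive) Schwartz-Zippel principle; the paper's interactive variant is not shown: a polynomial identity is tested by a single evaluation at a random field point, with failure probability at most the degree over the field size.

```python
# Minimal sketch of the plain Schwartz-Zippel lemma (the paper uses an
# interactive variant; this only illustrates the basic principle): a nonzero
# polynomial of degree d vanishes at a uniformly random field point with
# probability at most d / |F|, so one random evaluation tests an identity.

import random

P = 2**31 - 1                                     # toy prime field

def f(x):  return (x + 1) * (x + 2) % P           # claimed equal ...
def g(x):  return (x * x + 3 * x + 2) % P         # ... to this expansion

r = random.randrange(P)
print(f(r) == g(r))       # True; if the identity were false, a single random
                          # evaluation catches it except with prob. <= deg / P
```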
2022
CRYPTO
Multimodal Private Signatures 📺
We introduce Multimodal Private Signature (MPS) - an anonymous signature system that offers a novel accountability feature: it allows a designated opening authority to learn \emph{some partial information}~$\mathsf{op}$ about the signer's identity $\mathsf{id}$, and nothing beyond. Such partial information can flexibly be defined as $\mathsf{op} = \mathsf{id}$ (as in group signatures), or as $\mathsf{op} = \mathbf{0}$ (like in ring signatures), or more generally, as $\mathsf{op} = G_j(\mathsf{id})$, where $G_j(\cdot)$ is a certain disclosing function. Importantly, the value of $\mathsf{op}$ is known in advance by the signer, who can hence decide whether she/he wants to disclose that piece of information. The concept of MPS significantly generalizes the notion of tracing in traditional anonymity-oriented signature primitives, and can enable various new and appealing privacy-preserving applications. We formalize the definitions and security requirements for MPS. We next present a generic construction to demonstrate the feasibility of designing MPS in a modular manner and from commonly used cryptographic building blocks (ordinary signatures, public-key encryption and NIZKs). We also provide an efficient construction in the standard model based on pairings, and a lattice-based construction in the random oracle model.
2022
CRYPTO
Provably Secure Reflection Ciphers 📺
This paper provides the first analysis of reflection ciphers such as PRINCE from a provable security viewpoint. As a first contribution, we initiate the study of key-alternating reflection ciphers in the ideal permutation model. Specifically, we prove the security of the two-round case and give matching attacks. The resulting security bound takes the form $O(qp^2/2^{2n}+q^2/2^n)$, where q is the number of construction evaluations and p is the number of direct adversarial queries to the underlying permutation. Since the two-round construction already achieves an interesting security lower bound, this result can also be of interest for the construction of reflection ciphers based on a single public permutation. Our second contribution is a generic key-length extension method for reflection ciphers. It provides an attractive alternative to the FX construction, which is used by PRINCE and other concrete key-alternating reflection ciphers. We show that our construction leads to better security with minimal changes to existing designs. The security proof is in the ideal cipher model and relies on a reduction to the two-round Even-Mansour cipher with a single round key. In order to obtain the desired result, we sharpen the bad-transcript analysis and consequently improve the best-known bounds for the single-key Even-Mansour cipher with two rounds. This improvement is enabled by a new sum-capture theorem that is of independent interest.
2022
CRYPTO
On the Impossibility of Key Agreements from Quantum Random Oracles 📺
We study the following question, first publicly posed by Hosoyamada and Yamakawa in 2018. Can parties A, B with local quantum computing power rely (only) on a random oracle and classical communication to agree on a key? (Note that A, B can now query the random oracle at quantum superpositions.) We make the first progress on the question above and prove the following. – When only one of the parties A is classical and the other party B is quantum powered, as long as they ask a total of d oracle queries and agree on a key with probability 1, then there is always a way to break the key agreement by asking O(d^2) number of classical oracle queries. – When both parties can make quantum queries to the random oracle, we introduce a natural conjecture, which if true would imply attacks with poly(d) classical queries to the random oracle. Our conjecture, roughly speaking, states that the multiplication of any two degree-d real-valued polynomials over the Boolean hypercube of influence at most δ = 1/ poly(d) is nonzero. We then prove our conjecture for exponentially small influences, which leads to an (unconditional) classical 2^O(md)-query attack on any such key agreement protocol, where m is the random oracle’s output length. – Since our attacks are classical, we then ask whether it is possible to find such classical attacks in general. We prove a barrier for this approach, by showing that if the folklore “simulation conjecture” about the possibility of simulating efficient-query quantum algorithms classically is false, then that implies a possible quantum protocol that cannot be broken by classical adversaries.
2022
CRYPTO
Programmable Distributed Point Functions 📺
A distributed point function (DPF) is a cryptographic primitive that enables compressed additive sharing of a secret unit vector across two or more parties. Despite growing ubiquity within applications and notable research efforts, the best 2-party DPF construction to date remains the tree-based construction from (Boyle et al, CCS’16), with no significantly new approaches since. We present a new framework for 2-party DPF construction, which applies in the setting of feasible (polynomial-size) domains. This captures in particular all DPF applications in which the keys are expanded to the full domain. Our approach is motivated by a strengthened notion we put forth, of programmable DPF (PDPF): in which a short, input-independent “offline” key can be reused for sharing many point functions. – PDPF from OWF. We construct a PDPF for feasible domains from the minimal assumption that one-way functions exist, where the second “online” key size is polylogarithmic in the domain size N. Our approach offers multiple new efficiency features and applications: – Privately puncturable PRFs. Our PDPF gives the first OWF-based privately puncturable PRFs (for feasible domains) with sublinear keys. – O(1)-round distributed DPF Gen. We obtain a (standard) DPF with polylog-size keys that admits an analog of Doerner-shelat (CCS’17) distributed key generation, requiring only O(1) rounds (versus log N). – PCG with 1 short key. Compressing useful correlations for secure computation, where one key size is of minimal size. This provides up to exponential communication savings in some application scenarios.
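To make the shared object concrete, here is a toy Python sketch of a naive, uncompressed DPF (illustrative only; the entire point of the constructions above is that real keys are exponentially shorter than these full-domain vectors): the two keys are additive shares of a vector that is $\beta$ at index $\alpha$ and zero elsewhere, and each key on its own is uniformly distributed.

```python
# Toy sketch of what a DPF shares, without any compression (real DPF keys are
# much shorter than the full domain): additive shares of the point function
# f_{alpha,beta}, i.e. of a vector that is beta at index alpha and 0 elsewhere.

import random

def naive_dpf_gen(alpha, beta, domain, M):
    key0 = [random.randrange(M) for _ in range(domain)]
    key1 = key0[:]                                # start from the same mask
    key1[alpha] = (key1[alpha] + beta) % M        # shift only the target slot
    return key0, key1

def naive_dpf_eval(key, x):
    return key[x]

M, domain, alpha, beta = 2**16, 8, 5, 42
k0, k1 = naive_dpf_gen(alpha, beta, domain, M)
# Shares of f(x) reconstruct by subtraction; each key alone is uniform.
print([(naive_dpf_eval(k1, x) - naive_dpf_eval(k0, x)) % M for x in range(domain)])
# [0, 0, 0, 0, 0, 42, 0, 0]
```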
2022
CRYPTO
Gossiping for Communication-Efficient Broadcast 📺
Byzantine Broadcast is crucial for many cryptographic protocols such as secret sharing, multiparty computation and blockchain consensus. In this paper we apply \emph{gossiping} (propagating a message by sending to a few random parties who in turn do the same, until the message is delivered) and propose new communication-efficient protocols, under dishonest majority, for Single-Sender Broadcast (BC) and Parallel Broadcast (PBC), improving the state-of-the-art in several ways. As our first warm-up result, we give a randomized protocol for BC which achieves $O(n^2\kappa^2)$ communication complexity from plain public key setup assumptions. This is the first protocol with subcubic communication in this setting, but does so only against static adversaries. Using some ideas from our BC protocol, we then move to our central contribution and present two protocols for PBC that are secure against adaptive adversaries. To the best of our knowledge we are the first to study PBC \emph{specifically}: All previous approaches for parallel BC (PBC) naively run $n$ instances of single-sender Broadcast, increasing the communication complexity by an undesirable factor of $n$. Our insight of avoiding black-box invocations of BC is particularly crucial for achieving our asymptotic improvements. In particular: \begin{enumerate} \item Our first PBC protocol achieves $\tilde{O}(n^3\kappa^2)$ communication complexity and relies only on plain public key setup assumptions. \item Our second PBC protocol uses trusted setup and achieves nearly optimal communication complexity $\tilde{O}(n^2\kappa^4)$. \end{enumerate} Both PBC protocols yield an almost linear improvement over the best known solutions involving $n$ parallel invocations of the respective BC protocols such as those of Dolev and Strong (SIAM Journal on Computing, 1983) and Chan et al. (Public Key Cryptography, 2020). Central to our PBC protocols is a new problem that we define and solve, that we call ``{Converge}''. In {Converge}, parties must run an adaptively-secure and \emph{efficient} protocol such that by the end of the protocol, the honest parties that remain possess a superset of the union of the inputs of the initial honest parties.
2022
EUROCRYPT
Batch-OT with Optimal Rate 📺
We show that it is possible to perform $n$ independent copies of $1$-out-of-$2$ oblivious transfer in two messages, where the communication complexity of the receiver and sender (each) is $n(1+o(1))$ for sufficiently large $n$. Note that this matches the information-theoretic lower bound. Prior to this work, this was only achievable by using the heavy machinery of rate-$1$ fully homomorphic encryption (Rate-$1$ FHE, Brakerski et al., TCC 2019). To achieve rate-$1$ both on the receiver's and sender's end, we use the LPN assumption, with slightly sub-constant noise rate $1/m^{\epsilon}$ for any $\epsilon>0$ together with either the DDH, QR or LWE assumptions. In terms of efficiency, our protocols only rely on linear homomorphism, as opposed to the FHE-based solution which inherently requires an expensive ``bootstrapping'' operation. We believe that in terms of efficiency we compare favorably to existing batch-OT protocols, while achieving superior communication complexity. We show similar results for Oblivious Linear Evaluation (OLE). For our DDH-based solution we develop a new technique that may be of independent interest. We show that it is possible to ``emulate'' the binary group $\mathbb{Z}_2$ (or any other small-order group) inside a prime-order group $\mathbb{Z}_p$ \emph{in a function-private manner}. That is, $\mathbb{Z}_2$ operations are mapped to $\mathbb{Z}_p$ operations such that the outcome of the latter does not reveal additional information beyond the $\mathbb{Z}_2$ outcome. Our encoding technique uses the discrete Gaussian distribution, which to our knowledge was not done before in the context of DDH.
2022
EUROCRYPT
A Correlation Attack on Full SNOW-V and SNOW-Vi 📺
In this paper, a method for searching for correlations between the binary stream of the Linear Feedback Shift Register (LFSR) and the keystream of SNOW-V and SNOW-Vi is presented, based on the technique of approximating composite functions. With the aid of the linear relationship between the four taps of the LFSR input into the Finite State Machine (FSM) at three consecutive clocks, we present an automatic search model based on the SAT/SMT technique and find a series of linear approximation trails with high correlation. By exhausting the intermediate masks, we find a binary linear approximation with a correlation $-2^{-47.76}$. Using such approximation, we propose a correlation attack on SNOW-V with an expected time complexity $2^{246.53}$, a memory complexity $2^{238.77}$ and $2^{237.5}$ keystream words generated by the same key and Initial Vector (IV). For SNOW-Vi, we provide a binary linear approximation with the same correlation and mount a correlation attack with the same complexity as that of SNOW-V. To the best of our knowledge, this is the first known efficient attack on full SNOW-V and SNOW-Vi, which is better than the exhaustive key search. The results indicate that neither SNOW-V nor SNOW-Vi can guarantee the 256-bit security level if we ignore the design constraint that the maximum length of keystream for a single pair of key and IV is less than $2^{64}$.
2022
EUROCRYPT
On the security of ECDSA with additive key derivation and presignatures 📺
Two common variations of ECDSA signatures are {\em additive key derivation} and presignatures. Additive key derivation is a simple mechanism for deriving many subkeys from a single master key, and is already widely used in cryptocurrency applications with the Hierarchical Deterministic Wallet mechanism standardized in Bitcoin Improvement Proposal 32 (BIP32). Because of its linear nature, additive key derivation is also amenable to efficient implementation in the threshold setting. With presignatures, the secret and public nonces used in the ECDSA signing algorithm are precomputed. In the threshold setting, using presignatures along with other precomputed data allows for an extremely efficient "online phase" of the protocol. Recent works have advocated for both of these variations, sometimes combined together. However, somewhat surprisingly, we are aware of no prior security proof for additive key derivation, let alone for additive key derivation in combination with presignatures. In this paper, we provide a thorough analysis of these variations, both in isolation and in combination. Our analysis is in the generic group model (GGM). Importantly, we do not modify ECDSA or weaken the standard notion of security in any way. Of independent interest, we also present a version of the GGM that is specific to elliptic curves. This EC-GGM better models some of the idiosyncrasies (such as the conversion function and malleability) of ECDSA. In addition to this analysis, we report security weaknesses in these variations that apparently have not been previously reported. For example, we show that when both variations are combined, there is a cube-root attack on ECDSA, which is much faster than the best known, square-root attack on plain ECDSA. We also present two mitigations against these weaknesses: re-randomized presignatures and homogeneous key derivation. Each of these mitigations is very lightweight, and when used in combination, the security is essentially the same as that of plain ECDSA (in the EC-GGM).
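As background, the following toy Python sketch shows the linearity that makes additive (BIP32-style, non-hardened) key derivation possible, in a multiplicative group modulo a prime rather than on the secp256k1 curve, with made-up values; it illustrates nothing of the paper's GGM analysis: the child public key can be computed from the parent public key and the tweak alone.

```python
# Toy sketch of the linearity behind additive key derivation (BIP32-style
# non-hardened derivation), in a multiplicative group mod p instead of an
# elliptic curve: sk_child = sk + tweak (mod the exponent modulus), and the
# matching public key can be derived from pk alone, without the secret key.

p = 2**127 - 1            # toy prime; exponents are reduced mod p - 1 (toy only)
g = 3
order = p - 1

sk = 123456789
pk = pow(g, sk, p)

tweak = 987654321          # in BIP32 this comes from hashing pk and a child index
sk_child = (sk + tweak) % order
pk_child_from_sk = pow(g, sk_child, p)
pk_child_from_pk = (pk * pow(g, tweak, p)) % p   # public-only derivation

print(pk_child_from_sk == pk_child_from_pk)       # True
```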
2022
EUROCRYPT
Key Guessing Strategies for Linear Key-Schedule Algorithms in Rectangle Attacks 📺
When generating quartets for rectangle attacks on ciphers with linear key schedules, we find that the right quartets which may suggest key candidates have to satisfy some nonlinear relationships. However, some of the generated quartets always violate these relationships, so that they will never suggest any key candidates. Inspired by previous rectangle frameworks, we find that guessing certain key cells before generating quartets may reduce the number of these invalid quartets. However, guessing many key cells at once may lose the benefit of the early abort technique, which may lead to a higher overall complexity. To get a better tradeoff, we build a new rectangle attack framework for ciphers with linear key schedules with the purpose of reducing the overall complexity or attacking more rounds. In the tradeoff model, many parameters affect the overall complexity, especially the choice of the number and positions of key cells guessed before generating quartets. To identify optimal parameters, we build a uniform automatic tool on SKINNY as an example, which covers the optimal rectangle distinguishers for the key-recovery phase, the number and positions of key cells guessed before generating quartets, the size of the key counters to build (which affects the exhaustive search step), etc. Based on the automatic tool, we identify a 32-round key-recovery attack on SKINNY-128-384 in the related-key setting, which extends the best previous attack by 2 rounds. For other versions with n-2n or n-3n, we also achieve one more round than before. In addition, using the previous rectangle distinguishers, we achieve better attacks on round-reduced ForkSkinny, Deoxys-BC-384 and GIFT-64. Finally, we discuss the conversion of our rectangle framework from the related-key setting into the single-key setting and give a new single-key rectangle attack on 10-round Serpent.
2022
EUROCRYPT
Group Signature and More from Isogenies and Lattices: Generic, Simple, and Efficient 📺
We construct an efficient dynamic group signature (or more generally an accountable ring signature) from isogeny and lattice assumptions. Our group signature is based on a simple generic construction that can be instantiated by cryptographically hard group actions such as the CSIDH group action or an MLWE-based group action. The signature is of size $O(\log N)$, where $N$ is the number of users in the group. Our idea builds on the recent efficient OR-proof by Beullens, Katsumata, and Pintore (Asiacrypt'20), where we efficiently add a proof of valid ciphertext to their OR-proof and further show that the resulting non-interactive zero-knowledge proof system is \emph{online extractable}. Our group signatures satisfy more ideal security properties compared to previously known constructions, while simultaneously having an attractive signature size. The signature size of our isogeny-based construction is an order of magnitude smaller than all previously known post-quantum group signatures (e.g., 6.6 KB for 64 members). In comparison, our lattice-based construction has a larger signature size (e.g., either 126 KB or 89 KB for 64 members depending on the satisfied security property). However, since the $O(\cdot)$-notation hides a very small constant factor, it remains small even for very large group sizes, say $2^{20}$.
2022
EUROCRYPT
A Greater GIFT: Strengthening GIFT against Statistical Cryptanalysis 📺
GIFT-64 is a 64-bit block cipher with a 128-bit key that is more lightweight than PRESENT. This paper provides a detailed analysis of GIFT-64 against differential and linear attacks. Our work complements automatic search methods for the best differential and linear characteristics with a careful manual analysis. This hybrid approach leads to new insights. In the differential setting, we theoretically explain the existence of differential characteristics with two active S-boxes per round and derive some novel properties of these characteristics. Furthermore, we prove that all optimal differential characteristics of GIFT-64 covering more than seven rounds must activate two S-boxes per round. We can construct all optimal characteristics by hand. In parallel to the work in the differential setting, we conduct a similar analysis in the linear setting. However, unlike the clear view in the differential setting, the optimal linear characteristics of GIFT-64 must have at least one round activating only one S-box. Moreover, with the assistance of automatic searching methods, we identify 24 GIFT-64 variants achieving better resistance against differential attacks while maintaining a similar security level against linear attacks. Since the new variants strengthen GIFT-64 against statistical cryptanalysis, we claim that the number of rounds could be reduced from 28 to 26 for the variants. This observation enables us to create a cipher with lower energy consumption than GIFT-64. Similarly to the case in GIFT-64, we do not claim any related-key security for the round-reduced variant as this is not relevant for most applications.
2022
EUROCRYPT
Private Circuits with Quasilinear Randomness 📺
A {\em $t$-private} circuit for a function $f$ is a randomized Boolean circuit $C$ that maps a randomized encoding of an input $x$ to an encoding of the output $f(x)$, such that probing $t$ wires anywhere in $C$ reveals nothing about $x$. Private circuits can be used to protect embedded devices against side-channel attacks. Motivated by the high cost of generating fresh randomness in such devices, several works have studied the question of minimizing the randomness complexity of private circuits. The best known upper bound, due to Coron et al. (Eurocrypt 2020), is $O(t^2\cdot\log s)$ random bits, where $s$ is the circuit size of $f$. We improve this to $O(t\cdot \log s)$, including the randomness used by the input encoder, and extend this bound to the stateful variant of private circuits. Our constructions are semi-explicit in the sense that there is an efficient randomized algorithm that generates the private circuit $C$ from a circuit for $f$ with negligible failure probability.
2022
EUROCRYPT
Orientations and the supersingular endomorphism ring problem 📺
We study two important families of problems in isogeny-based cryptography and how they relate to each other: computing the endomorphism ring of supersingular elliptic curves, and inverting the action of class groups on oriented supersingular curves. We prove that these two families of problems are closely related through polynomial-time reductions, assuming the generalized Riemann hypothesis. We identify two classes of essentially equivalent problems. The first class corresponds to the problem of computing the endomorphism ring of oriented curves. The security of a large family of cryptosystems (such as CSIDH) reduces to (and sometimes from) this class, for which there are heuristic quantum algorithms running in subexponential time. The second class corresponds to computing the endomorphism ring of orientable curves. The security of essentially all isogeny-based cryptosystems reduces to (and sometimes from) this second class, for which the best known algorithms are still exponential. Some of our reductions not only generalise, but also strengthen previously known results. For instance, it was known that in the particular case of curves defined over $\mathbb F_p$, the security of CSIDH reduces to the endomorphism ring problem in subexponential time. Our reductions imply that the security of CSIDH is actually equivalent to the endomorphism ring problem, under polynomial time reductions (circumventing arguments that proved such reductions unlikely).
2022
EUROCRYPT
Property-Preserving Hash Functions for Hamming Distance from Standard Assumptions 📺
Property-preserving hash functions allow for compressing long inputs $x_0$ and $x_1$ into short hashes $h(x_0)$ and $h(x_1)$ in a manner that allows for computing a predicate $P(x_0, x_1)$ given only the two hash values without having access to the original data. Such hash functions are said to be adversarially robust if an adversary that gets to pick $x_0$ and $x_1$ after the hash function has been sampled, cannot find inputs for which the predicate evaluated on the hash values outputs the incorrect result. In this work we construct robust property-preserving hash functions for the Hamming-distance predicate, which distinguishes inputs with a Hamming distance at least some threshold $t$ from those with distance less than $t$. The security of the construction is based on standard lattice hardness assumptions. Our construction has several advantages over the best known previous construction by Fleischhacker and Simkin (Eurocrypt 2021). Our construction relies on a single well-studied hardness assumption from lattice cryptography whereas the previous work relied on a newly introduced family of computational hardness assumptions. In terms of computational effort, our construction only requires a small number of modular additions per input bit, whereas the work of Fleischhacker and Simkin required several exponentiations per bit as well as the interpolation and evaluation of high-degree polynomials over large fields. An additional benefit of our construction is that the description of the hash function can be compressed to $\lambda$ bits assuming a random oracle. Previous work has descriptions of length $O(\ell \lambda)$ bits for input bit-length $\ell$. We prove a lower bound on the output size of any property-preserving hash function for the Hamming distance predicate. The bound shows that the size of our hash value is not far from optimal.
2022
EUROCRYPT
Embedding the UC Model into the IITM Model 📺
Universal Composability is a widely used concept for the design and analysis of protocols. Since Canetti's original UC model and the model by Pfitzmann and Waidner, several different models for universal composability have been proposed, including, for example, the IITM model, GNUC, CC, but also extensions and restrictions of the UC model, such as JUC, GUC, and SUC. These were motivated by the lack of expressivity of existing models, ease of use, or flaws in previous models. Cryptographers choose between these models based on their needs at hand (e.g., support for joint state and global state) or simply their familiarity with a specific model. While all models follow the same basic idea, there are huge conceptual differences, which raise fundamental and practical questions: (How) do the concepts and results proven in one model relate to those in another model? Do the different models and the security notions formulated therein capture the same classes of attacks? Most importantly, can cryptographers re-use results proven in one model in another model, and if so, how? In this paper, we initiate a line of research with the aim to address this lack of understanding, consolidate the space of models, and enable cryptographers to re-use results proven in other models. As a start, here we focus on Canetti's prominent UC model and the IITM model proposed by K{\"u}sters et al. The latter is an interesting candidate for comparison with the UC model since it has been used to analyze a wide variety of protocols, supports a very general protocol class and provides, among others, seamless treatment of protocols with shared state, including joint and global state. Our main technical contribution is an embedding of the UC model into the IITM model showing that all UC protocols, security and composition results carry over to the IITM model. Hence, protocol designers can profit from the features of the IITM model while being able to use all their results proven in the UC model. We also show that, in general, one cannot embed the full IITM model into the UC model.
2022
EUROCRYPT
Beyond quadratic speedups in quantum attacks on symmetric schemes 📺
In this paper, we report the first quantum key-recovery attack on a symmetric block cipher design, using classical queries only, with a more than quadratic time speedup compared to the best classical attack. We study the 2XOR-Cascade construction of Ga{\v{z}}i and Tessaro (EUROCRYPT~2012). It is a key length extension technique which provides an n-bit block cipher with 5n/2 bits of security out of an n-bit block cipher with 2n bits of key, with a security proof in the ideal model. We show that the offline-Simon algorithm of Bonnetain et al. (ASIACRYPT~2019) can be extended to, in particular, attack this construction in quantum time $\widetilde{\mathcal{O}}(2^n)$, providing a quantum speedup by a factor of 2.5 in the exponent over the best classical attack. Regarding post-quantum security of symmetric ciphers, it is commonly assumed that doubling the key sizes is a sufficient precaution. This is because Grover's quantum search algorithm, and its derivatives, can only reach a quadratic speedup at most. Our attack shows that the structure of some symmetric constructions can be exploited to overcome this limit. In particular, the 2XOR-Cascade cannot be used to generically strengthen block ciphers against quantum adversaries, as it would offer only the same security as the block cipher itself.
2022
EUROCRYPT
Universally Composable Subversion-Resilient Cryptography 📺
Subversion attacks undermine security of cryptographic protocols by replacing a legitimate honest party's implementation with one that leaks information in an undetectable manner. An important limitation of all currently known techniques for designing cryptographic protocols with security against subversion attacks is that they do not automatically guarantee security in the realistic setting where a protocol session may run concurrently with other protocols. We remedy this situation by providing a foundation of reverse firewalls (Mironov and Stephens-Davidowitz, EUROCRYPT'15) in the universal composability (UC) framework (Canetti, FOCS'01 and J. ACM'20). In more detail, our contributions are threefold: - We generalize the UC framework to the setting where each party consists of a core (which has secret inputs and is in charge of generating protocol messages) and a firewall (which has no secrets and sanitizes the outgoing/incoming communication from/to the core). Both the core and the firewall can be subject to different flavors of corruption, modeling different kinds of subversion attacks. For instance, we capture the setting where a subverted core looks like the honest core to any efficient test, yet it may leak secret information via covert channels (which we call specious subversion). - We show how to sanitize UC commitments and UC coin tossing against specious subversion, under the DDH assumption. - We show how to sanitize the classical GMW compiler (Goldreich, Micali and Wigderson, STOC 1987) for turning MPC with security in the presence of semi-honest adversaries into MPC with security in the presence of malicious adversaries. This yields a completeness theorem for maliciously secure MPC in the presence of specious subversion. Additionally, all our sanitized protocols are transparent, in the sense that communicating with a sanitized core looks indistinguishable from communicating with an honest core. Thanks to the composition theorem, our methodology allows, for the first time, to design subversion-resilient protocols by sanitizing different sub-components in a modular way.
2022
EUROCRYPT
Sine Series Approximation of the Mod Function for Bootstrapping of Approximate HE 📺
While it is well known that the sawtooth function has a point-wise convergent Fourier series, the rate of convergence is not the best possible for the application of approximating the mod function in small intervals around multiples of the modulus. We show a different sine series, such that the sine series of order $n$ has error $O(\epsilon^{2n+1})$ for approximating the mod function in $\epsilon$-sized intervals around multiples of the modulus. Moreover, the resulting polynomial, after Taylor series approximation of the sine function, has small coefficients, and the whole polynomial can be computed at a precision that is only slightly larger than $-(2n+1)\log \epsilon$, the precision of approximation being sought. This polynomial can then be used to approximate the mod function to almost arbitrary precision, and hence allows practical CKKS-HE bootstrapping with arbitrary precision. We validate our approach by an implementation and obtain $100$ bit precision bootstrapping as well as improvements over prior work even at lower precision.
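For intuition, here is a small numerical Python sketch of the standard first-order sine approximation used in approximate-HE bootstrapping (the paper's contribution is a higher-order sine series with error $O(\epsilon^{2n+1})$, which is not reproduced here): near multiples of $q$, $(q/2\pi)\sin(2\pi x/q)$ closely tracks the centered value of $x \bmod q$.

```python
# Numerical sketch of the basic first-order sine approximation to the mod
# function (the paper builds a higher-order sine series; this only shows the
# basic effect): near multiples of q, (q / 2*pi) * sin(2*pi*x / q) ~ x mod q.

import math

q = 1024.0

def centered_mod(x, q):
    # representative of x mod q in (-q/2, q/2]
    return x - q * round(x / q)

def sine_mod(x, q):
    return (q / (2 * math.pi)) * math.sin(2 * math.pi * x / q)

for x in [5 * q + 3.0, 17 * q - 7.5, 100 * q + 12.25]:
    exact, approx = centered_mod(x, q), sine_mod(x, q)
    print(f"x = {x:10.2f}  exact = {exact:7.2f}  error = {abs(exact - approx):.2e}")
```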
2022
EUROCRYPT
A PCP Theorem for Interactive Proofs and Applications 📺
The celebrated PCP Theorem states that any language in NP can be decided via a verifier that reads O(1) bits from a polynomially long proof. Interactive oracle proofs (IOP), a generalization of PCPs, allow the verifier to interact with the prover for multiple rounds while reading a small number of bits from each prover message. While PCPs are relatively well understood, the power captured by IOPs (beyond NP) has yet to be fully explored. We present a generalization of the PCP theorem for interactive languages. We show that any language decidable by a k(n)-round IP has a k(n)-round public-coin IOP, where the verifier makes its decision by reading only O(1) bits from each (polynomially long) prover message and $O(1)$ bits from each of its own (random) messages to the prover. Our result and the underlying techniques have several applications. We get a new hardness of approximation result for a stochastic satisfiability problem, we show IOP-to-IOP transformations that previously were known to hold only for IPs, and we formulate a new notion of PCPs (index-decodable PCPs) that enables us to obtain a commit-and-prove SNARK in the random oracle model for nondeterministic computations.
2022
EUROCRYPT
Quantum Algorithms for Variants of Average-Case Lattice Problems via Filtering 📺
We show polynomial-time quantum algorithms for the following problems: (*) Short integer solution (SIS) problem under the infinity norm, where the public matrix is very wide, the modulus is a polynomially large prime, and the bound of infinity norm is set to be half of the modulus minus a constant. (*) Learning with errors (LWE) problem given LWE-like quantum states with polynomially large moduli and certain error distributions, including bounded uniform distributions and Laplace distributions. (*) Extrapolated dihedral coset problem (EDCP) with certain parameters. The SIS, LWE, and EDCP problems in their standard forms are as hard as solving lattice problems in the worst case. However, the variants that we can solve are not in the parameter regimes known to be as hard as solving worst-case lattice problems. Still, no classical or quantum polynomial-time algorithms were known for the variants of SIS and LWE we consider. For EDCP, our quantum algorithm slightly extends the result of Ivanyos et al. (2018). Our algorithms for variants of SIS and EDCP use the existing quantum reductions from those problems to LWE, or more precisely, to the problem of solving LWE given LWE-like quantum states. Our main contribution is solving LWE given LWE-like quantum states with interesting parameters using a filtering technique.
2022
EUROCRYPT
Highly Efficient OT-Based Multiplication Protocols 📺
We present a new OT-based two-party multiplication protocol that is almost as efficient as Gilboa's semi-honest protocol (Crypto '99), but has a high level of security without further compilation. The achieved security suffices for many applications, and, assuming DDH, can be cheaply compiled into full security.
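For orientation, here is a textbook reconstruction (not the paper's protocol) of the OT-based multiplication pattern behind Gilboa's semi-honest protocol referenced above: Alice holds a, Bob holds b, and one 1-out-of-2 OT per bit of a yields additive shares of a*b. The OT is simulated in the clear and the ring size is illustrative.

    import secrets

    MOD = 2**32      # toy ring Z_{2^32}
    NBITS = 32

    def ot_choose(pair, bit):
        # idealized 1-out-of-2 OT: the receiver learns pair[bit] and nothing else
        return pair[bit]

    def ot_multiply(a, b):
        """Return additive shares (sa, sb) with sa + sb = a*b (mod MOD).
        Alice (OT receiver) holds a; Bob (OT sender) holds b."""
        sa, sb = 0, 0
        for i in range(NBITS):
            s = secrets.randbelow(MOD)             # Bob's fresh mask for bit i
            pair = (s, (s + b) % MOD)              # OT messages: s or s + b
            t = ot_choose(pair, (a >> i) & 1)      # Alice obtains s + a_i * b
            sa = (sa + (t << i)) % MOD             # Alice accumulates 2^i * t_i
            sb = (sb - (s << i)) % MOD             # Bob accumulates -2^i * s_i
        return sa, sb

    a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
    sa, sb = ot_multiply(a, b)
    assert (sa + sb) % MOD == (a * b) % MOD
    print("shares reconstruct a*b")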
2022
EUROCRYPT
Practical Non-interactive Publicly Verifiable Secret Sharing with Thousands of Parties 📺
Non-interactive publicly verifiable secret sharing (PVSS) schemes enable (re-)sharing of secrets in a decentralized setting in the presence of malicious parties. A recently proposed application of PVSS schemes is to enable permissionless proof-of-stake blockchains to ``keep a secret'' via a sequence of committees that share that secret. These committees can use the secret to produce signatures on the blockchain's behalf, or to disclose hidden data conditioned on consensus that some event has occurred. That application needs very large committees with thousands of parties, so the PVSS scheme in use must be efficient enough to support such large committees, in terms of both computation and communication. Yet, previous PVSS schemes have large proofs and/or require many exponentiations over large groups. We present a non-interactive PVSS scheme in which the underlying encryption scheme is based on the learning with errors (LWE) problem. While lattice-based encryption schemes are very fast, they often have long ciphertexts and public keys. We use the following two techniques to conserve bandwidth: First, we adapt the Peikert-Vaikuntanathan-Waters (PVW) encryption scheme to the multi-receiver setting, so that the bulk of the parties' keys is a common random string. The resulting scheme yields $\Omega(1)$ amortized plaintext/ciphertext rate, where concretely the rate is $\approx 1/60$ for 100 parties, $\approx 1/8$ for 1000 parties, and approaching 1/2 as the number of parties grows. Second, we use bulletproofs over a DL-group of order about 256 bits to get compact proofs of correct encryption/decryption of shares. Alternating between the lattice and DL settings is relatively painless, as we equate the LWE modulus with the order of the group. We also show how to reduce the number of exponentiations in the bulletproofs by applying Johnson-Lindenstrauss-like compression to reduce the dimension of the vectors whose properties must be verified. An implementation of our PVSS with 1000 parties showed that it is feasible even at that size, and should remain so even with one or two orders of magnitude increase in the committee size.
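As a toy illustration of the Johnson-Lindenstrauss-like compression mentioned above (only the generic dimension-reduction idea, not the paper's concrete use inside bulletproofs): a random ±1 projection scaled by $1/\sqrt{m}$ approximately preserves Euclidean norms, so a norm claim about a long vector can be checked in a much smaller dimension. The dimensions and the distribution of the test vector are illustrative.

    import math
    import random

    random.seed(0)
    n, m = 1024, 64                     # original and compressed dimensions

    v = [random.gauss(0, 1) for _ in range(n)]                           # long vector
    R = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(m)]   # random +/-1 matrix

    proj = [sum(r * x for r, x in zip(row, v)) / math.sqrt(m) for row in R]

    norm_v = math.sqrt(sum(x * x for x in v))
    norm_proj = math.sqrt(sum(x * x for x in proj))
    print(f"original norm {norm_v:.2f}, projected norm {norm_proj:.2f}")  # close with high probability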
2022
EUROCRYPT
Garbled Circuits With Sublinear Evaluator 📺
A recent line of work, Stacked Garbled Circuit (SGC), showed that Garbled Circuit (GC) can be improved for functions that include conditional behavior. SGC relieves the communication bottleneck of 2PC by only sending enough garbled material for a single branch out of the $b$ total branches. Hence, communication is sublinear in the circuit size. However, both the evaluator and the generator pay in computation and perform at least factor $\log b$ extra work as compared to standard GC evaluation. We extend the sublinearity of SGC to also include the work performed by the GC Evaluator E; thus we achieve a fully sublinear E, which is essential when optimizing for the online phase. We formalize our approach as a garbling scheme called GCWise: GC WIth Sublinear Evaluator. We show one attractive and immediate application, Garbled PIR, a primitive that marries GC with Private Information Retrieval. Garbled PIR allows the GC to non-interactively and sublinearly access a privately indexed element from a publicly known database, and then use this element in continued GC evaluation.
2022
EUROCRYPT
Practical Post-Quantum Signature Schemes from Isomorphism Problems of Trilinear Forms 📺
In this paper, we propose a practical signature scheme based on the alternating trilinear form equivalence problem. Our scheme is inspired by the Goldreich-Micali-Wigderson zero-knowledge protocol for graph isomorphism, and can serve as an alternative candidate for NIST's post-quantum digital signatures. First, we present theoretical evidence to support its security, especially in the post-quantum cryptography context. The evidence is drawn from several research lines, including hidden subgroup problems, multivariate cryptography, cryptography based on group actions, the quantum random oracle model, and recent advances on isomorphism problems for algebraic structures in algorithms and complexity. Second, we demonstrate its potential for practical uses. Based on algorithm studies, we propose concrete parameter choices, and then implement a prototype. One concrete scheme achieves 128-bit security with public key size ~4100 bytes, signature size ~6800 bytes, and running times (key generation, sign, verify) ~0.8ms on a common laptop computer.
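Since the scheme is modeled on the Goldreich-Micali-Wigderson zero-knowledge protocol for graph isomorphism, a toy single round of that base protocol is sketched below (graphs stand in for alternating trilinear forms; repetition for soundness, commitments, and the Fiat-Shamir step are omitted).

    import random

    def permute(graph, perm):
        # graph: a set of undirected edges over vertices 0..n-1
        return {frozenset({perm[u], perm[v]}) for (u, v) in map(tuple, graph)}

    def random_perm(n):
        p = list(range(n))
        random.shuffle(p)
        return p

    def compose(p, q):
        # (p o q)(i) = p[q[i]]
        return [p[q[i]] for i in range(len(p))]

    # Setup: public graphs G0 and G1 = sigma(G0); the prover's secret is sigma.
    n = 6
    G0 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]}
    sigma = random_perm(n)
    G1 = permute(G0, sigma)

    # One round: commit to a re-randomization, answer a random challenge.
    tau = random_perm(n)
    H = permute(G1, tau)                  # prover's commitment
    challenge = random.randint(0, 1)      # verifier's coin
    if challenge == 1:
        assert permute(G1, tau) == H                   # open H as a copy of G1
    else:
        assert permute(G0, compose(tau, sigma)) == H   # map G0 to H without revealing sigma
    print("round accepted")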
2022
EUROCRYPT
Round-Optimal Black-Box Protocol Compilers 📺
We give black-box, round-optimal protocol compilers from semi-honest security to malicious security in the Random Oracle Model (ROM) and in the 1-out-of-2 OT correlations model. We use our compilers to obtain the following results: - A two-round, two-party protocol secure against malicious adversaries in the random oracle model making black-box use of a two-round semi-honest secure protocol. Prior to our work, such a result was not known even considering special functionalities such as a two-round oblivious transfer. This result also implies the first constructions of two-round malicious (batch) OT/OLE in the random oracle model based on the black-box use of two-round semi-honest (batch) OT/OLE. - A three-round multiparty secure computation protocol in the random oracle model secure against malicious adversaries that is based on the black-box use of two-round semi-honest OT. This protocol matches a known round complexity lower bound due to Applebaum et al. (ITCS'20) and is based on a minimal cryptographic hardness assumption. - A two-round, multiparty secure computation protocol in the $1$-out-of-$2$ OT correlations model that is secure against malicious adversaries and makes black-box use of cryptography. This gives new round-optimal protocols for computing arithmetic branching programs that are statistically secure and makes black-box use of the underlying field. As a contribution of independent interest, we provide a new variant of the IPS compiler (Ishai, Prabhakaran and Sahai, Crypto 2008) in the two-round setting, where we relax requirements on the IPS ``inner protocol'' by strengthening the ``outer protocol''.
2022
EUROCRYPT
Secure Non-Interactive Reduction and Spectral Analysis of Correlations 📺
Correlated pairs of random variables are a central concept in information-theoretically secure cryptography. Secure reductions between different correlations have been studied, and completeness results are known. Further, the complexity of such reductions is intimately connected with circuit complexity and efficiency of locally decodable codes. As such, making progress on these complexity questions faces strong barriers. Motivated by this, in this work, we study a restricted form of secure reductions --- namely, Secure Non-Interactive Reductions (SNIR) --- which is still closely related to the original problem, and establish several fundamental results and relevant techniques for it. We uncover striking connections between SNIR and linear algebraic properties of correlations. Specifically, we define the spectrum of a correlation, and show that a target correlation has a SNIR to a source correlation only if the spectrum of the latter contains the entire spectrum of the former. We also establish a `mirroring lemma' that shows an unexpected symmetry between the two parties in a SNIR, when viewed through the lens of spectral analysis. We also use cryptographic insights and elementary linear algebraic analysis to fully characterize the role of common randomness as well as local randomness in SNIRs. We employ these results to resolve several fundamental questions about SNIRs, and to define future directions.
2022
EUROCRYPT
Approximate Divisor Multiples - Factoring with Only a Third of the Secret CRT-Exponents 📺
We address Partial Key Exposure attacks on CRT-RSA on secret exponents $d_p, d_q$ with small public exponent $e$. For constant $e$ it is known that the knowledge of half of the bits of one of $d_p, d_q$ suffices to factor the RSA modulus $N$ by Coppersmith's famous {\em factoring with a hint} result. We extend this setting to non-constant $e$. Somewhat surprisingly, our attack shows that RSA with $e$ of size $N^{\frac 1 {12}}$ is most vulnerable to Partial Key Exposure, since in this case only a third of the bits of both $d_p, d_q$ suffices to factor $N$ in polynomial time, knowing either most significant bits (MSB) or least significant bits (LSB). Let $ed_p = 1 + k(p-1)$ and $ed_q = 1 + \ell(q-1)$. On the technical side, we find the factorization of $N$ in a novel two-step approach. In a first step we recover $k$ and $\ell$ in polynomial time, in the MSB case completely elementary and in the LSB case using Coppersmith's lattice-based method. We then obtain the prime factorization of $N$ by computing the root of a univariate polynomial modulo $kp$ for our known $k$. This can be seen as an extension of Howgrave-Graham's {\em approximate divisor} algorithm to the case of {\em approximate divisor multiples} for some known multiple $k$ of an unknown divisor $p$ of $N$. The point of {\em approximate divisor multiples} is that the unknown that is recoverable in polynomial time grows linearly with the size of the multiple $k$. Our resulting Partial Key Exposure attack with known MSBs is completely rigorous, whereas in the LSB case we rely on a standard Coppersmith-type heuristic. We experimentally verify our heuristic, thereby showing that in practice we reach our asymptotic bounds already using small lattice dimensions. Thus, our attack is highly practical.
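A worked reading of the identity behind the two-step approach (this is just the elementary algebra implied by the relations stated above; the interpretation as an approximate-divisor-multiple instance is a hedged paraphrase of the abstract): from $ed_p = 1 + k(p-1)$ it follows that $ed_p + k - 1 = kp$. Hence, once $k$ is recovered, an approximation $\tilde d_p$ of $d_p$ (known MSBs or LSBs) yields the approximation $e\tilde d_p + k - 1$ of the multiple $kp$ of the unknown prime $p$, and the unknown part $x$ of $d_p$ is a small root of the univariate polynomial $f(x) = e(\tilde d_p + x) + k - 1$ modulo $kp$, where $kp$ divides the known quantity $kN$.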
2022
EUROCRYPT
One-Shot Fiat-Shamir-based NIZK Arguments of Composite Residuosity and Logarithmic-Size Ring Signatures in the Standard Model 📺
The standard model security of the Fiat-Shamir transform has been an active research area for many years. In breakthrough results, Canetti {\it et al.} (STOC'19) and Peikert-Shiehian (Crypto'19) showed that, under the Learning-With-Errors (LWE) assumption, it provides soundness by applying correlation-intractable (CI) hash functions to so-called {\it trapdoor} $\Sigma$-protocols. In order to be compatible with CI hash functions based on standard LWE assumptions with polynomial approximation factors, all known such protocols have been obtained via parallel repetitions of a basic protocol with binary challenges. In this paper, we consider languages related to Paillier's composite residuosity assumption (DCR) for which we give the first trapdoor $\Sigma$-protocols providing soundness in one shot, via exponentially large challenge spaces. This improvement is analogous to the one enabled by Schnorr over the original Fiat-Shamir protocol in the random oracle model. Using the correlation-intractable hash function paradigm, we then obtain simulation-sound NIZK arguments showing that an element of $\mathbb{Z}_{N^2}^\ast$ is a composite residue, which opens the door to space-efficient applications in the standard model. As a concrete example, we build logarithmic-size ring signatures (assuming a common reference string) with the shortest signature length among schemes based on standard assumptions in the standard model. We prove security under the DCR and LWE assumptions, while keeping the signature size comparable with that of random-oracle-based schemes.
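To make the "one shot via an exponentially large challenge space" idea concrete, here is a toy run of the textbook Sigma-protocol for proving that $y$ is an $N$-th residue in $\mathbb{Z}_{N^2}^\ast$ (this is the classical protocol, not the paper's trapdoor Sigma-protocol, and the parameters are tiny and insecure).

    import secrets
    from math import gcd

    # toy parameters (insecure sizes, for illustration only)
    p, q = 10007, 10009
    N = p * q
    N2 = N * N

    # statement: y is an N-th residue mod N^2; witness: w with w^N = y (mod N^2)
    w = secrets.randbelow(N2)
    while gcd(w, N) != 1:
        w = secrets.randbelow(N2)
    y = pow(w, N, N2)

    # single-shot sigma protocol: one large challenge instead of many binary ones
    r = secrets.randbelow(N2)
    while gcd(r, N) != 1:
        r = secrets.randbelow(N2)
    a = pow(r, N, N2)                 # prover's first message
    c = secrets.randbelow(min(p, q))  # challenge; with full-size primes this space is exponentially large
    z = (r * pow(w, c, N2)) % N2      # prover's response

    # verifier checks z^N == a * y^c (mod N^2)
    assert pow(z, N, N2) == (a * pow(y, c, N2)) % N2
    print("accepting transcript")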
2022
EUROCRYPT
On Building Fine-Grained One-Way Functions from Strong Average-Case Hardness 📺
Constructing one-way functions from average-case hardness is a long-standing open problem. A positive result would exclude Pessiland (Impagliazzo '95) and establish a highly desirable win-win situation: either (symmetric) cryptography exists unconditionally, or all NP problems can be solved efficiently on the average. Motivated by the lack of progress on this seemingly very hard question, we initiate the investigation of weaker yet meaningful candidate win-win results of the following type: either there are *fine-grained* one-way functions (FGOWF), or nontrivial speedups can be obtained for all NP problems on the average. FGOWFs only require a fixed polynomial gap (as opposed to superpolynomial) between the running time of the function and the running time of an inverter. We obtain three main results: Construction. We show that if there is an NP language having a very strong form of average-case hardness, which we call *block finding hardness*, then FGOWF exist. We provide heuristic support for this very strong average-case hardness notion by showing that it holds for a random language. Then, we study whether weaker (and more natural) forms of average-case hardness could already suffice to obtain FGOWF, and obtain two negative results: Separation I. We provide a strong oracle separation for the implication (exponentially average-case hard languages exist => FGOWF exist). Separation II. We provide a second strong negative result for an even weaker candidate win-win result. Namely, we rule out a black-box proof for the implication (exponentially average-case hard languages *whose hardness amplifies optimally through parallel repetitions* exist => FGOWF exist). This separation forms the core technical contribution of our work.
2022
EUROCRYPT
Cryptanalysis of Candidate Obfuscators for Affine Determinant Programs 📺
At ITCS 2020, Bartusek et al. proposed a candidate indistinguishability obfuscator (iO) for affine determinant programs (ADPs). The candidate is special since it is the only unbroken candidate iO to date that does not rely on the hardness of traditional cryptographic assumptions like discrete-log or learning with errors. Instead, it directly applies specific randomization techniques to the underlying ADP. It is relatively efficient compared to the rest of the iO candidates. However, the obfuscation scheme requires further cryptanalysis since it was not known to be based on any well-formed mathematical assumptions. In this paper, we show cryptanalytic attacks on the iO candidate provided by Bartusek et al. Our attack exploits the weakness of one of the randomization steps in the candidate. The attack applies to a fairly general class of programs. At the end of the paper we discuss plausible countermeasures to defend against our attacks.
2022
EUROCRYPT
A Fast and Simple Partially Oblivious PRF, with Applications 📺
We build the first construction of a partially oblivious pseudorandom function (POPRF) that does not rely on bilinear pairings. Our construction can be viewed as combining elements of the 2HashDH OPRF of Jarecki, Kiayias, and Krawczyk with the Dodis-Yampolskiy PRF. We analyze our POPRF’s security in the random oracle model via reduction to a new one-more gap strong Diffie-Hellman inversion assumption. The most significant technical challenge is establishing confidence in the new assumption, which requires new proof techniques that enable us to show that its hardness is implied by the q-DL assumption in the algebraic group model. Our new construction is as fast as the current, standards-track OPRF 2HashDH protocol, yet provides a new degree of flexibility useful in a variety of applications. We show how POPRFs can be used to prevent token hoarding attacks against Privacy Pass, reduce key management complexity in the OPAQUE password authenticated key exchange protocol, and ensure stronger security for password breach alerting services.
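A minimal sketch of the blinded-exponentiation pattern of the 2HashDH OPRF that the construction builds on (the actual POPRF additionally binds public metadata into the evaluation in the style of Dodis-Yampolskiy; the group, hashes, and parameter sizes below are illustrative and insecure).

    import hashlib
    import secrets

    # toy group: squares modulo a small safe prime p = 2q + 1
    q = 5003
    p = 2 * q + 1

    def hash_to_group(msg):
        h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % p
        return pow(h, 2, p)            # square to land in the order-q subgroup

    k = secrets.randbelow(q - 1) + 1   # server's PRF key

    # client: blind the input x
    x = b"some private input"
    r = secrets.randbelow(q - 1) + 1
    blinded = pow(hash_to_group(x), r, p)      # H1(x)^r hides x from the server

    # server: evaluate on the blinded element
    evaluated = pow(blinded, k, p)             # (H1(x)^r)^k

    # client: unblind and hash down to the final output
    r_inv = pow(r, -1, q)                      # inverse of r modulo the group order
    unblinded = pow(evaluated, r_inv, p)       # = H1(x)^k
    output = hashlib.sha256(x + unblinded.to_bytes(4, "big")).hexdigest()

    assert unblinded == pow(hash_to_group(x), k, p)   # sanity: matches direct evaluation
    print("OPRF output:", output[:16], "...")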
2022
EUROCRYPT
Rubato: Noisy Ciphers for Approximate Homomorphic Encryption 📺
A transciphering framework converts a symmetric ciphertext into a homomorphic ciphertext on the server side, reducing computational and communication overhead on the client side. In Asiacrypt 2021, Cho et al. proposed the RtF framework that supports approximate computation. In this paper, we propose a family of noisy ciphers, dubbed Rubato, with a novel design strategy of introducing noise to a symmetric cipher of a low algebraic degree. With this strategy, the multiplicative complexity of the cipher is significantly reduced, compared to existing HE-friendly ciphers, without degrading the overall security. More precisely, given a moderate block size (16 to 64), Rubato enjoys a low multiplicative depth (2 to 5) and a small number of multiplications per encrypted word (2.1 to 6.25) at the cost of slightly larger ciphertext expansion (1.26 to 1.31). The security of Rubato is supported by comprehensive analysis including symmetric and LWE cryptanalysis. Compared to HERA within the RtF framework, client-side and server-side throughput is improved by 22.9% and 32.2%, respectively, at the cost of only 1.6% larger ciphertext expansion.
2022
EUROCRYPT
A Complete Characterization of Game-Theoretically Fair, Multi-Party Coin Toss 📺
Cleve's celebrated lower bound (STOC'86) showed that a de facto strong fairness notion is impossible in 2-party coin toss, i.e., the corrupt party always has a strategy of biasing the honest party's outcome by a noticeable amount. Nonetheless, Blum's famous coin-tossing protocol (CRYPTO'81) achieves a strictly weaker "game-theoretic" notion of fairness: specifically, it is a 2-party coin toss protocol in which neither party can bias the outcome towards its own preference; and thus the honest protocol forms a Nash equilibrium in which neither party would want to deviate. Surprisingly, an n-party analog of Blum's famous coin toss protocol was not studied until recently. The work by Chung et al. (TCC'18) was the first to explore the feasibility of game-theoretically fair n-party coin toss in the presence of a corrupt majority. We may assume that each party has a publicly stated preference for either the bit 0 or 1, and if the outcome agrees with the party's preference, it obtains utility 1; else it obtains nothing. A natural game-theoretic formulation is to require that the honest protocol form a coalition-resistant Nash equilibrium, i.e., no coalition should have incentive to deviate from the honest behavior. Chung et al. phrased this game-theoretic notion as "cooperative-strategy-proofness" or "CSP-fairness" for short. Unfortunately, Chung et al. showed that under (n-1)-sized coalitions, it is impossible to design such a CSP-fair coin toss protocol, unless all parties except one prefer the same bit. In this paper, we show that the impossibility of Chung et al. is in fact not as broad as it may seem. When coalitions are of majority size but smaller than $n-1$, we can indeed get feasibility results in some meaningful parameter regimes. We give a complete characterization of the regime in which CSP-fair coin toss is possible, by providing matching upper and lower bounds. Our complete characterization theorem also shows that the mathematical structure of game-theoretic fairness is starkly different from the de facto strong fairness notion in the multi-party computation literature.
2022
EUROCRYPT
Single-Server Private Information Retrieval with Sublinear Amortized Time 📺
We construct new private-information-retrieval protocols in the single-server setting. Our schemes allow a client to privately fetch a sequence of database records from a server, while the server answers each query in average time sublinear in the database size. Specifically, we introduce the first single-server private-information-retrieval schemes that have sublinear amortized server time, require sublinear additional storage, and allow the client to make her queries adaptively. Our protocols rely only on standard cryptographic assumptions (decision Diffie-Hellman, quadratic residuosity, learning with errors, etc.). They work by having the client first fetch a small "hint" about the database contents from the server. Generating this hint requires server time linear in the database size. Thereafter, the client can use the hint to make a bounded number of adaptive queries to the server, which the server answers in sublinear time--yielding sublinear amortized cost. Finally, we give lower bounds proving that our most efficient scheme is optimal with respect to the trade-off it achieves between server online time and client storage.
2022
EUROCRYPT
McEliece needs a Break -- Solving McEliece-1284 and Quasi-Cyclic-2918 with Modern ISD 📺
With the recent shift to post-quantum algorithms it becomes increasingly important to provide precise bit-security estimates for code-based cryptography such as McEliece and quasi-cyclic schemes like BIKE and HQC. While there has been significant progress on information set decoding (ISD) algorithms within the last decade, it is still unclear to which extent this affects current cryptographic security estimates. We provide the first concrete implementations for representation-based ISD, such as May-Meurer-Thomae (MMT) or Becker-Joux-May-Meurer (BJMM), that are parameter-optimized for the McEliece and quasi-cyclic setting. Although MMT and BJMM consume more memory than naive ISD algorithms like Prange, we demonstrate that these algorithms lead to significant speedups for practical cryptanalysis already for cryptographic instances of medium security level (around 60 bit). More concretely, we provide data for the record computations of McEliece-1223 and McEliece-1284 (old record: 1161), and for the quasi-cyclic setting up to dimension 2918 (before: 1938). Based on our record computations we extrapolate to the bit-security level of the proposed BIKE, HQC and McEliece parameters in NIST's standardization process. For BIKE/HQC, we also show how to transfer the Decoding-One-Out-of-Many (DOOM) technique to MMT/BJMM. Although we achieve significant DOOM speedups, our estimates confirm the bit-security levels of BIKE and HQC. For the proposed McEliece round-3 parameter sets of 192 and 256 bit, however, our extrapolation indicates a security level overestimate by roughly 20 and 10 bits, respectively, i.e., the high-security McEliece instantiations may be a bit less secure than desired.
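As background for the record computations, here is a toy sketch of plain (Prange-style) information set decoding, the baseline that MMT/BJMM refine: repeatedly guess an information set, Gaussian-eliminate, and hope the error avoids it. Everything below (dimensions, weight, iteration cap) is illustrative and far from cryptographic size.

    import random

    def gauss_gf2(H, s):
        """Row-reduce [H | s] over GF(2); return (H, s, pivot column indices)."""
        H = [row[:] for row in H]
        s = s[:]
        r, n = len(H), len(H[0])
        pivots, row = [], 0
        for col in range(n):
            if row == r:
                break
            sel = next((i for i in range(row, r) if H[i][col]), None)
            if sel is None:
                continue
            H[row], H[sel] = H[sel], H[row]
            s[row], s[sel] = s[sel], s[row]
            for i in range(r):
                if i != row and H[i][col]:
                    H[i] = [a ^ b for a, b in zip(H[i], H[row])]
                    s[i] ^= s[row]
            pivots.append(col)
            row += 1
        return H, s, pivots

    def matvec_gf2(H, e):
        return [sum(h & x for h, x in zip(row, e)) & 1 for row in H]

    def prange_decode(H, s, w, max_iters=20000):
        """Find e of weight w with H e = s over GF(2), by guessing random information sets."""
        r, n = len(H), len(H[0])
        for _ in range(max_iters):
            cols = list(range(n))
            random.shuffle(cols)                        # guess an information set
            Hp = [[row[c] for c in cols] for row in H]  # permute columns accordingly
            Hr, sr, piv = gauss_gf2(Hp, s)
            if len(piv) < r:
                continue
            e_perm = [0] * n
            for i, c in enumerate(piv):
                e_perm[c] = sr[i]                       # error supported on pivot columns only
            if sum(e_perm) != w:
                continue
            e = [0] * n
            for j, c in enumerate(cols):
                e[c] = e_perm[j]                        # undo the column permutation
            if matvec_gf2(H, e) == s:
                return e
        return None

    random.seed(1)
    n, r, w = 24, 12, 2
    H = [[random.randint(0, 1) for _ in range(n)] for _ in range(r)]
    e_true = [0] * n
    for pos in random.sample(range(n), w):
        e_true[pos] = 1
    s = matvec_gf2(H, e_true)
    e_found = prange_decode(H, s, w)
    assert e_found is not None and sum(e_found) == w
    print("found a weight-%d error matching the syndrome:" % w, matvec_gf2(H, e_found) == s)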
2022
EUROCRYPT
Field Instruction Multiple Data 📺
Fully homomorphic encryption (FHE) has flourished since it was first constructed by Gentry (STOC 2009). Single instruction multiple data (SIMD) gave rise to efficient homomorphic operations on vectors in \((\mathbb{F}_{t^d})^\ell\), for prime \(t\). RLWE instantiated with cyclotomic polynomials of the form \(X^{2^N}+1\) dominates implementations of FHE due to highly efficient fast Fourier transforms. However, this choice yields very short SIMD plaintext vectors and high degree extension fields, e.g. \(\ell < 100, d > 100\) for small primes (\(t = 3, 5, \dots\)). In this work, we describe a method to encode more data on top of SIMD, \emph{Field Instruction Multiple Data}, applying reverse multiplication friendly embedding (RMFE) to FHE. With RMFE, length-\(k\) \(\mathbb{F}_{t}\) vectors can be encoded into \(\mathbb{F}_{t^d}\) and multiplied once. The results have to be recoded (decoded and then re-encoded) before further multiplications can be done. We introduce an FHE-specific technique to additionally evaluate arbitrary linear transformations on encoded vectors for free during the FHE recode operation. On top of that, we present two optimizations to unlock high degree extension fields with small \(t\) for homomorphic computation: \(r\)-fold RMFE, which allows products of up to \(2^r\) encoded vectors before recoding, and a three-stage recode process for RMFEs obtained by composing two smaller RMFEs. Experiments were performed to evaluate the effectiveness of FIMD from various RMFEs compared to standard SIMD operations. Overall, we found that FIMD generally had \(>2\times\) better (amortized) multiplication times compared to FHE for the same amount of data, while using almost \(k/2\times\) fewer ciphertexts.
2022
EUROCRYPT
Mitaka: A Simpler, Parallelizable, Maskable Variant of Falcon 📺
This work describes the Mitaka signature scheme: a new hash-and-sign signature scheme over NTRU lattices which can be seen as a variant of NIST finalist Falcon. It achieves comparable efficiency but is considerably simpler, online/offline, and easier to parallelize and protect against side-channels, thus offering significant advantages from an implementation standpoint. It is also much more versatile in terms of parameter selection. We obtain this signature scheme by replacing the FFO lattice Gaussian sampler in Falcon by the “hybrid” sampler of Ducas and Prest, for which we carry out a detailed and corrected security analysis. In principle, such a change can result in a substantial security loss, but we show that this loss can be largely mitigated using new techniques in key generation that allow us to construct much higher quality lattice trapdoors for the hybrid sampler relatively cheaply. This new approach can also be instantiated on a wide variety of base fields, in contrast with Falcon's restriction to power-of-two cyclotomics. We also introduce a new lattice Gaussian sampler with the same quality and efficiency, but which is moreover compatible with the integral matrix Gram root technique of Ducas et al., allowing us to avoid floating point arithmetic. This makes it possible to realize the same signature scheme as Mitaka efficiently on platforms with poor support for floating point numbers. Finally, we describe a provably secure masking of Mitaka. More precisely, we introduce novel gadgets that allow provable masking at any order at much lower cost than previous masking techniques for Gaussian sampling-based signature schemes, for cheap and dependable side-channel protection.
2022
EUROCRYPT
On IND-qCCA security in the ROM and its applications: CPA security is sufficient for TLS 1.3 📺
Bounded IND-CCA security (IND-qCCA) is a notion similar to the traditional IND-CCA security, except the adversary is restricted to a constant number q of decryption/decapsulation queries. We show in this work that IND-qCCA is easily obtained from any passively secure PKE in the (Q)ROM. That is, simply adding a confirmation hash or computing the key as the hash of the plaintext and ciphertext yields an IND-qCCA KEM. In particular, there is no need for derandomization or re-encryption as in the Fujisaki-Okamoto (FO) transform (JoC 2013). This makes the decapsulation process of such an IND-qCCA KEM much more efficient than its FO-derived counterpart. In addition, IND-qCCA KEMs could be used in the recently proposed KEMTLS protocol (ACM CCS 2020) that requires IND-1CCA ephemeral key-exchange mechanisms or in TLS 1.3. Then, using similar proof techniques, we show that CPA-secure KEMs are sufficient for the TLS 1.3 handshake to be secure, solving an open problem in the ROM. In turn, this implies that the PRF-ODH assumption used to prove the security of TLS 1.3 is not necessary in the ROM. We also highlight and briefly discuss several use cases of IND-1CCA KEMs in protocols and ratcheting primitives.
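A minimal sketch of the "key = hash of plaintext and ciphertext" transform described above, layered on a stand-in public-key encryption scheme (the stub PKE below is NOT secure and only serves to make the KEM structure concrete; the hash and domain-separation choices are illustrative).

    import hashlib
    import secrets

    # stand-in PKE (insecure!): coins-dependent one-time pad keyed off pk
    def pke_keygen():
        sk = secrets.token_bytes(32)
        pk = hashlib.sha256(sk).digest()
        return pk, sk

    def pke_encrypt(pk, m, coins):
        pad = hashlib.sha256(pk + coins).digest()
        return coins + bytes(a ^ b for a, b in zip(m, pad))

    def pke_decrypt(sk, ct):
        pk = hashlib.sha256(sk).digest()
        coins, body = ct[:32], ct[32:]
        pad = hashlib.sha256(pk + coins).digest()
        return bytes(a ^ b for a, b in zip(body, pad))

    # KEM: encapsulate a random message m, derive the key as H(m, ct)
    def encaps(pk):
        m = secrets.token_bytes(32)
        ct = pke_encrypt(pk, m, secrets.token_bytes(32))
        return ct, hashlib.sha256(b"key" + m + ct).digest()

    def decaps(sk, ct):
        m = pke_decrypt(sk, ct)
        return hashlib.sha256(b"key" + m + ct).digest()   # no re-encryption check, unlike FO

    pk, sk = pke_keygen()
    ct, K = encaps(pk)
    assert decaps(sk, ct) == K
    print("shared key established")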
2022
EUROCRYPT
Online-Extractability in the Quantum Random-Oracle Model 📺
We show the following generic result: Whenever a quantum query algorithm in the quantum random-oracle model outputs a classical value t that is promised to be in some tight relation with H(x) for some x, then x can be efficiently extracted with almost certainty. The extraction is by means of a suitable simulation of the random oracle and works online, meaning that it is straightline, i.e., without rewinding, and on-the-fly, i.e., during the protocol execution and without disturbing it. The technical core of our result is a new commutator bound that bounds the operator norm of the commutator of the unitary operator that describes the evolution of the compressed oracle (which is used to simulate the random oracle above) and of the measurement that extracts x. We show two applications of our generic online extractability result. We show tight online extractability of commit-and-open Σ-protocols in the quantum setting, and we offer the first complete post-quantum security proof of the textbook Fujisaki-Okamoto transformation, i.e., without adjustments to facilitate the proof, including concrete security bounds.
2022
EUROCRYPT
A Novel Completeness Test for Leakage Models and its Application to Side Channel Attacks and Responsibly Engineered Simulators 📺
Today’s side channel attack targets are often complex devices in which instructions are processed in parallel and work on 32-bit data words. Consequently, the state that is involved in producing leakage in these modern devices is large, and basing evaluations (i.e. worst case attacks) and simulators on a potentially incomplete state can lead to wrong conclusions. We put forward a novel notion for the “completeness” of an assumed state, together with an efficient statistical test that is based on “collapsed models”. Our novel test can be used to recover a state that contains multiple 32-bit variables in a grey box setting. We illustrate how our novel test can help to guide side channel attacks and we reveal new attack vectors for existing implementations. We then demonstrate the application of this test in the context of leakage modelling for leakage simulators and confirm that even the most recent leakage simulators do not capture all available leakage of their respective target devices. Our new test enables finding nominal models that capture all available leakage but do not give a helping hand to adversaries. Thereby we make a first step towards leakage simulators that are responsibly engineered.
2022
EUROCRYPT
Towards Micro-Architectural Leakage Simulators: Reverse Engineering Micro-Architectural Leakage Features is Practical 📺
Leakage simulators offer the tantalising promise of easy and quick testing of software with respect to the presence of side channel leakage. The quality of their built-in leakage models is therefore crucial; this includes the faithful inclusion of micro-architectural leakage. Micro-architectural leakage is a reality even on low- to mid-range commercial processors, such as the ARM Cortex M series. Dealing with it seems initially infeasible in a grey box setting: how should we describe it if micro-architectural elements are not publicly known? We demonstrate, for the first time, that it is feasible, using a recent leakage modelling technique, to reverse engineer significant elements of the micro-architectural leakage of a commercial processor. Our approach first recovers the micro-architectural leakage of each stage in the pipeline, and the leakage of elements that are known to produce glitches. Using the reverse-engineered leakage features, we build an enhanced version of the popular leakage simulator ELMO.
2022
EUROCRYPT
Fiat-Shamir Bulletproofs are Non-Malleable (in the Algebraic Group Model) 📺
Bulletproofs (B{\"u}nz et al.~IEEE S\&P 2018) are a celebrated ZK proof system that allows for short and efficient proofs, and have been implemented and deployed in several real-world systems. In practice, they are most often implemented in their \emph{non-interactive} version obtained using the Fiat-Shamir transform, despite the lack of a formal proof of security for this setting. Prior to this work, there was no evidence that \emph{malleability attacks} were not possible against Fiat-Shamir Bulletproofs. Malleability attacks can lead to very severe vulnerabilities, as they allow an adversary to forge proofs re-using or modifying parts of the proofs provided by the honest parties. In this paper, we show for the first time that Bulletproofs (or any other similar multi-round proof system satisfying some form of \emph{weak unique response} property) achieve \emph{simulation-extractability} in the \emph{algebraic group model}. This implies that Fiat-Shamir Bulletproofs are \emph{non-malleable}.
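For context, the Fiat-Shamir transform of a multi-round protocol derives each verifier challenge by hashing the statement together with the transcript so far, so tampering with any earlier prover message changes every later challenge. The sketch below is schematic (placeholder messages, an assumed domain-separation tag), not Bulletproofs themselves.

    import hashlib

    def fs_challenge(statement, transcript):
        """Derive the next challenge from the statement and all prior messages."""
        h = hashlib.sha256()
        h.update(b"FS-v1" + statement)           # domain separation + statement binding
        for msg in transcript:
            h.update(hashlib.sha256(msg).digest())
        return int.from_bytes(h.digest(), "big")

    statement = b"public statement / commitments being proven"
    transcript = []
    for round_msg in [b"prover message 1", b"prover message 2", b"prover message 3"]:
        transcript.append(round_msg)
        c = fs_challenge(statement, transcript)
        transcript.append(c.to_bytes(32, "big"))  # the challenge joins the transcript

    # modifying an earlier prover message changes the derived challenge
    assert fs_challenge(statement, [b"prover message 1"]) != \
           fs_challenge(statement, [b"prover message 1 (modified)"])
    print("challenges are bound to the full transcript")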
2022
EUROCRYPT
On the Lattice Isomorphism Problem, Quadratic Forms, Remarkable Lattices, and Cryptography 📺
A natural and recurring idea in the knapsack/lattice cryptography literature is to start from a lattice with remarkable decoding capability as your private key, and hide it somehow to make a public key. This is also how the code-based encryption scheme of McEliece (1978) proceeds. This idea has never worked out very well for lattices: ad-hoc approaches have been proposed, but they have been subject to ad-hoc attacks, using tricks beyond lattice reduction algorithms. On the other hand the framework offered by the Short Integer Solution (SIS) and Learning With Errors (LWE) problems, while convenient and well founded, remains frustrating from a coding perspective: the underlying decoding algorithms are rather trivial, with poor decoding performance. In this work, we provide generic realizations of this natural idea (independently of the chosen remarkable lattice) by basing cryptography on the lattice isomorphism problem (LIP). More specifically, we provide: - a worst-case to average-case reduction for search-LIP and distinguish-LIP within an isomorphism class, by extending techniques of Haviv and Regev (SODA 2014). - a zero-knowledge proof of knowledge (ZKPoK) of an isomorphism. This implies an identification scheme based on search-LIP. - a key encapsulation mechanism (KEM) scheme and a hash-then-sign signature scheme, both based on distinguish-LIP. The purpose of this approach is for remarkable lattices to improve the security and performance of lattice-based cryptography. For example, decoding within poly-logarithmic factor from Minkowski's bound in a remarkable lattice would lead to a KEM resisting lattice attacks down to poly-logarithmic approximation factor, provided that the dual lattice is also close to Minkowski's bound. Recent works have indeed reached such decoders for certain lattices (Chor-Rivest, Barnes-Sloan), but these do not perfectly fit our need as their duals have poor minimal distance.
2022
EUROCRYPT
Limits of Polynomial Packings for $\mathbb{Z}_{p^k}$ and $\mathbb{F}_{p^k}$ 📺
We formally define polynomial packing methods and initiate a unified study of related concepts in various contexts of cryptography. This includes homomorphic encryption (HE) packing and reverse multiplication-friendly embedding (RMFE) in information-theoretically secure multi-party computation (MPC). We prove several upper bounds and impossibility results on packing methods for $\mathbb{Z}_{p^k}$ or $\mathbb{F}_{p^k}$-messages into $\mathbb{Z}_{p^t}[x]/f(x)$ in terms of (i) packing density, (ii) level-consistency, and (iii) surjectivity. These results have implications on recent development of HE-based MPC over $\mathbb{Z}_{2^k}$ secure against actively corrupted majority and provide new proofs for upper bounds on RMFE.
2022
EUROCRYPT
High-Precision Bootstrapping for Approximate Homomorphic Encryption by Error Variance Minimization 📺
The Cheon-Kim-Kim-Song (CKKS) scheme (Asiacrypt'17) is one of the most promising homomorphic encryption (HE) schemes as it enables privacy-preserving computing over real (or complex) numbers. It is known that bootstrapping is the most challenging part of the CKKS scheme. Further, homomorphic evaluation of modular reduction is the core of the CKKS bootstrapping. As modular reduction is not represented by the addition and multiplication of complex numbers, approximate polynomials for modular reduction should be used. The best-known techniques (Eurocrypt'21) use a polynomial approximation for trigonometric functions and their composition. However, all the previous methods are based on an indirect approximation, and thus it requires lots of multiplicative depth to achieve high accuracy. This paper proposes a direct polynomial approximation of modular reduction for CKKS bootstrapping, which is optimal in error variance and depth. Further, we propose an efficient algorithm, namely the lazy baby-step giant-step (BSGS) algorithm, to homomorphically evaluate the approximate polynomial, utilizing the lazy relinearization/rescaling technique. The lazy-BSGS reduces the computational complexity by half compared to the ordinary BSGS algorithm. The performance improvement for the CKKS scheme by the proposed algorithm is verified by implementation using HE libraries. The implementation results show that the proposed method has a multiplicative depth of 10 for modular reduction to achieve the state-of-the-art accuracy, while the previous methods have depths of 11 to 12. Moreover, we achieve higher accuracy within a small multiplicative depth, for example, 93-bit within multiplicative depth 11.
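A plaintext sketch of the baby-step giant-step (Paterson-Stockmeyer-style) evaluation pattern that the lazy-BSGS algorithm optimizes homomorphically: the polynomial is split into chunks so that only on the order of bs + gs full products are needed rather than one per coefficient. Nothing here is encrypted, and the modulus, degree, and chunk size are illustrative; the lazy relinearization/rescaling of the paper is not modeled.

    def poly_eval_naive(coeffs, x, mod):
        acc, power = 0, 1
        for c in coeffs:
            acc = (acc + c * power) % mod
            power = (power * x) % mod
        return acc

    def poly_eval_bsgs(coeffs, x, mod, bs):
        """Evaluate sum_i coeffs[i] * x^i with baby steps x^0..x^(bs-1) and giant steps x^(j*bs)."""
        baby = [1]
        for _ in range(bs - 1):
            baby.append((baby[-1] * x) % mod)
        giant_step = (baby[-1] * x) % mod          # x^bs
        result, giant = 0, 1                       # giant tracks x^(j*bs)
        for j in range(0, len(coeffs), bs):
            chunk = coeffs[j:j + bs]
            inner = sum(c * b for c, b in zip(chunk, baby)) % mod   # cheap "scalar" part
            result = (result + inner * giant) % mod                 # one full product per chunk
            giant = (giant * giant_step) % mod
        return result

    mod = (1 << 61) - 1                               # toy prime modulus
    coeffs = [(i * i + 3) % mod for i in range(64)]   # degree-63 toy polynomial
    x = 123456789
    assert poly_eval_bsgs(coeffs, x, mod, bs=8) == poly_eval_naive(coeffs, x, mod)
    print("BSGS evaluation matches naive evaluation")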
2022
EUROCRYPT
Dynamic Collusion Bounded Functional Encryption from Identity-Based Encryption 📺
Functional Encryption is a powerful notion of encryption in which each decryption key is associated with a function f such that decryption recovers the function evaluation f(m). Informally, security states that a user with access to function keys sk_f1, sk_f2, ... can only learn f1(m), f2(m), ... but nothing more about the message. The system is said to be q-bounded collusion resistant if the security holds as long as an adversary gets access to at most q = q(λ) function keys. A major drawback of such statically bounded collusion systems is that the collusion bound q must be declared at setup time and is fixed for the entire lifetime of the system. We initiate the study of dynamically bounded collusion resistant functional encryption systems which provide more flexibility in terms of selecting the collusion bound, while reaping the benefits of statically bounded collusion FE systems (such as quantum resistance, simulation security, and general assumptions). Briefly, the virtues of a dynamically bounded scheme can be summarized as: -Fine-grained individualized selection: It lets each encryptor select the collusion bound by weighing the trade-off between performance overhead and the amount of collusion resilience. -Evolving encryption strategies: Since the system is no longer tied to a single collusion bound, it allows dynamically adjusting the desired collusion resilience based on any number of evolving factors such as the age of the system, or the number of active users, etc. -Ease and simplicity of updatability: None of the system parameters have to be updated when adjusting the collusion bound. That is, the same key sk_f can be used to decrypt ciphertexts for collusion bound q = 2 as well as q = 2^λ. We construct such a dynamically bounded functional encryption scheme for the class of all polynomial-size circuits under the general assumption of Identity-Based Encryption.
2022
EUROCRYPT
Round-Optimal Byzantine Agreement 📺
Byzantine agreement is a fundamental primitive in cryptography and distributed computing, and minimizing its round complexity is of paramount importance. It is long known that any randomized $r$-round protocol must fail with probability at least $(c\cdot r)^{-r}$, for some constant $c$, when the number of corruptions is linear in the number of parties, $t = \theta(n)$. On the other hand, current protocols fail with probability at least $2^{-r}$. Whether we can match the lower bound agreement probability remains unknown. In this work, we resolve this long-standing open question. We present a protocol that matches the lower bound up to constant factors. Our results hold under a (strongly rushing) adaptive adversary that can corrupt up to $t = (1-\epsilon)n/2$ parties, and our protocols use a public-key infrastructure and a trusted setup for unique threshold signatures. This is the first protocol that decreases the failure probability (overall) by a \emph{super-constant} factor per round.
2022
EUROCRYPT
Post-Quantum Security of the Even-Mansour Cipher 📺
The Even-Mansour cipher is a simple method for constructing a (keyed) pseudorandom permutation $E$ from a public random permutation $P:\{0,1\}^n \rightarrow \{0,1\}^n$. It is a core ingredient in a wide array of symmetric-key constructions, including several lightweight cryptosystems presently under consideration for standardization by NIST. It is secure against classical attacks, with optimal attacks requiring $q_E$ queries to $E$ and $q_P$ queries to $P$ such that $q_P \cdot q_E \approx 2^n$. If the attacker is given \emph{quantum} access to both $E$ and $P$, however, the cipher is completely insecure, with attacks using $q_P = q_E = O(n)$ queries known. In any plausible real-world setting, however, a quantum attacker would have only \emph{classical} access to the keyed permutation $E$ implemented by honest parties, while retaining quantum access to $P$. Attacks in this setting with $q_P^2 \cdot q_E \approx 2^n$ are known, showing that security degrades as compared to the purely classical case, but leaving open the question as to whether the Even-Mansour cipher can still be proven secure in this natural ``post-quantum'' setting. We resolve this open question, showing that any attack in this post-quantum setting requires $q^2_P \cdot q_E + q_P \cdot q_E^2 \approx 2^n$. Our results apply to both the two-key and single-key variants of Even-Mansour. Along the way, we establish several generalizations of results from prior work on quantum-query lower bounds that may be of independent interest.
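The Even-Mansour construction itself fits in a few lines: whitening keys around a public permutation, $E_{k_1,k_2}(m) = P(m \oplus k_1) \oplus k_2$ (the single-key variant sets $k_2 = k_1$). The permutation P below is a toy stand-in for the public random permutation.

    import secrets

    N_BITS = 64
    MASK = (1 << N_BITS) - 1
    CONST = 0x9E3779B97F4A7C15          # odd constant => multiplication mod 2^64 is invertible

    def P(x):
        # toy public permutation: multiply by an odd constant, then rotate
        x = (x * CONST) & MASK
        return ((x << 23) | (x >> (N_BITS - 23))) & MASK

    def P_inv(y):
        y = ((y >> 23) | (y << (N_BITS - 23))) & MASK
        return (y * pow(CONST, -1, 1 << N_BITS)) & MASK

    def em_encrypt(k1, k2, m):
        return P(m ^ k1) ^ k2

    def em_decrypt(k1, k2, c):
        return P_inv(c ^ k2) ^ k1

    k1, k2 = secrets.randbits(N_BITS), secrets.randbits(N_BITS)
    m = 0x0123456789ABCDEF
    c = em_encrypt(k1, k2, m)
    assert em_decrypt(k1, k2, c) == m
    print("ciphertext:", hex(c))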
2022
EUROCRYPT
Round-Optimal and Communication-Efficient Multiparty Computation 📺
Typical approaches for minimizing the round complexity of multi-party computation (MPC) come at the cost of increased communication complexity (CC) or the reliance on setup assumptions. A notable exception is the recent work of Ananth et al. [TCC 2019], which used Functional Encryption (FE) combiners to obtain a round optimal (two-round) semi-honest MPC in the plain model with CC proportional to the depth and input-output length of the circuit being computed---we refer to such protocols as circuit scalable. This leaves open the question of obtaining communication efficient protocols that are secure against malicious adversaries in the plain model, which our work solves. Concretely, our two main contributions are: 1) We provide a round-preserving black-box compiler that compiles a wide class of MPC protocols into circuit-scalable maliciously secure MPC protocols in the plain model, assuming (succinct) FE combiners. 2) We provide a round-preserving black-box compiler that compiles a wide class of MPC protocols into circuit-independent --- i.e., with CC that depends only on the input-output length of the circuit---maliciously secure MPC protocols in the plain model, assuming Multi-Key Fully-Homomorphic Encryption (MFHE). Our constructions are based on a new compiler that turns a wide class of MPC protocols into k-delayed-input function MPC protocols (a notion we introduce), where the functions to be computed is specified only in the k-th round of the protocol. As immediate corollaries of our two compilers, we derive (1) the first round-optimal and circuit-scalable maliciously secure MPC, and (2) the first round-optimal and circuit-independent maliciously secure MPC in the plain model. The latter MPC achieves the best to-date CC for a round-optimal malicious MPC protocol. In fact, it is even communication-optimal when the output size of the function being evaluated is smaller than its input size (e.g., for boolean functions). All of our results are based on standard polynomial time assumptions.
2022
EUROCRYPT
Incompressible Cryptography 📺
Incompressible encryption allows us to make the ciphertext size flexibly large and ensures that an adversary learns nothing about the encrypted data, even if the decryption key later leaks, unless she stores essentially the entire ciphertext. Incompressible signatures can be made arbitrarily large and ensure that an adversary cannot produce a signature on any message, even one she has seen signed before, unless she stores one of the signatures essentially in its entirety. In this work, we give simple constructions of both incompressible public-key encryption and signatures under minimal assumptions. Furthermore, large incompressible ciphertexts (resp. signatures) can be decrypted (resp. verified) in a streaming manner with low storage. In particular, these notions strengthen the related concepts of disappearing encryption and signatures, recently introduced by Guan and Zhandry (TCC 2021), whose previous constructions relied on sophisticated techniques and strong, non-standard assumptions. We extend our constructions to achieve an optimal "rate", meaning the large ciphertexts (resp. signatures) can contain almost equally large messages, at the cost of stronger assumptions.
2022
EUROCRYPT
Multi-Designated Receiver Signed Public Key Encryption 📺
This paper introduces a new type of public-key encryption scheme, called Multi-Designated Receiver Signed Public Key Encryption (MDRS-PKE), which allows a sender to select a set of designated receivers and both encrypt and sign a message that only these receivers will be able to read and authenticate (confidentiality and authenticity). An MDRS-PKE scheme provides several additional security properties which allow for a fundamentally new type of communication not considered before. Namely, it satisfies consistency---a dishonest sender cannot make different receivers receive different messages---off-the-record---a dishonest receiver cannot convince a third party of what message was sent (e.g., by selling their secret key), because dishonest receivers have the ability to forge signatures---and anonymity---parties that are not in the set of designated receivers cannot identify who the sender and designated receivers are. We give a construction of an MDRS-PKE scheme from standard assumptions. At the core of our construction lies yet another new type of public-key encryption scheme, which is of independent interest: Public Key Encryption for Broadcast (PKEBC) which provides all the security guarantees of MDRS-PKE schemes, except authenticity. We note that MDRS-PKE schemes give strictly more guarantees than Multi-Designated Verifier Signature (MDVS) schemes with privacy of identities. This in particular means that our MDRS-PKE construction yields the first MDVS scheme with privacy of identities from standard assumptions. The only prior construction of such schemes was based on Verifiable Functional Encryption for general circuits (Damgard et al., TCC '20).
2022
EUROCRYPT
On Succinct Non-Interactive Arguments in Relativized Worlds 📺
Succinct non-interactive arguments of knowledge (SNARKs) are cryptographic proofs with strong efficiency properties. Applications of SNARKs often involve proving computations that include the SNARK verifier, a technique called recursive composition. Unfortunately, SNARKs with desirable features such as a transparent (public-coin) setup are known only in the random oracle model (ROM). In applications this oracle must be heuristically instantiated and used in a non-black-box way. In this paper we identify a natural oracle model, the low-degree random oracle model, in which there exist transparent SNARKs for all NP computations *relative to this oracle*. Informally, letting $O$ be a low-degree encoding of a random oracle, and assuming the existence of (standard-model) collision-resistant hash functions, there exist SNARKs relative to $O$ for all languages in $NP^{O}$. Such a SNARK can directly prove a computation about its own verifier. To analyze this model, we introduce a more general framework, the *linear code random oracle model* (LCROM). We show how to obtain SNARKs in the LCROM for computations that query the oracle, given an *accumulation scheme* for oracle queries. Then we construct such an accumulation scheme for the special case of a low degree random oracle.
2022
EUROCRYPT
Refined Cryptanalysis of the GPRS Ciphers GEA-1 and GEA-2 📺
At EUROCRYPT~2021, Beierle et al. presented the first public analysis of the GPRS ciphers GEA-1 and GEA-2. They showed that although GEA-1 uses a 64-bit session key, it can be recovered with the knowledge of only 65 bits of keystream in time $2^{40}$ using $44$ GiB of memory. The attack exploits a weakness in the initialization process of the cipher that was presumably hidden intentionally by the designers to reduce its security. While no such weakness was found for GEA-2, the authors presented an attack on this cipher with time complexity of about $2^{45}$. The main practical obstacle is the required knowledge of 12800 bits of keystream used to encrypt a full GPRS frame. Variants of the attack are applicable (but more expensive) when given less consecutive keystream bits, or when the available keystream is fragmented (it contains no long consecutive block). In this paper, we improve and complement the previous analysis of GEA-1 and GEA-2. For GEA-1, we devise an attack in which the memory complexity is reduced by a factor of about $2^{13} = 8192$ from $44$ GiB to about 4 MiB, while the time complexity remains $2^{40}$. Our implementation recovers the GEA-1 session key in average time of 2.5~hours on a modern laptop. For GEA-2, we describe two attacks that complement the analysis of Beierle et al. The first attack obtains a linear tradeoff between the number of consecutive keystream bits available to the attacker (denoted by $\ell$) and the time complexity. It improves upon the previous attack in the range of (roughly) $\ell \leq 7000$. Specifically, for $\ell = 1100$ the complexity of our attack is about $2^{54}$, while the previous one is not faster than the $2^{64}$ brute force complexity. In case the available keystream is fragmented, our second attack reduces the memory complexity of the previous attack by a factor of $512$ from 32 GiB to 64 MiB with no time complexity penalty. Our attacks are based on new combinations of stream cipher cryptanalytic techniques and algorithmic techniques used in other contexts (such as solving the $k$-XOR problem).
2022
EUROCRYPT
Families of SNARK-friendly 2-chains of elliptic curves 📺
At CANS'20, El Housni and Guillevic introduced a new 2-chain of pairing-friendly elliptic curves for recursive zero-knowledge Succinct Non-interactive ARguments of Knowledge (zk-SNARKs) made of the former BLS12-377 curve (a Barreto--Lynn--Scott curve over a 377-bit prime field) and the new BW6-761 curve (a Brezing--Weng curve of embedding degree 6 over a 761-bit prime field). First we generalise the curve construction, the pairing formulas ($e \colon \mathbb{G}_1 \times \mathbb{G}_2 \to \mathbb{G}_T$) and the group operations to any BW6 curve defined on top of any BLS12 curve, forming a family of 2-chain pairing-friendly curves. Second, we investigate other possible 2-chain families made on top of the BLS12 and BLS24 curves. We compare BW6 to Cocks--Pinch curves of higher embedding degrees 8 and 12 (CP8, CP12) at the 128-bit security level. We derive formulas for efficient optimal ate and optimal Tate pairings on our new CP curves. We show that for both BLS12 and BLS24, the BW6 construction always gives the fastest pairing and curve arithmetic compared to Cocks-Pinch curves. Finally, we suggest a short list of curves suitable for Groth16 and KZG-based universal SNARKs and present an optimized implementation of these curves. Based on Groth16 and PlonK (a KZG-based SNARK) implementations in the \texttt{gnark} ecosystem, we obtain that the BLS12-377/BW6-761 pair is optimized for the former while the BLS24-315/BW6-672 pair is optimized for the latter.
2022
EUROCRYPT
Zero-Knowledge IOPs with Linear-Time Prover and Polylogarithmic-Time Verifier 📺
Interactive oracle proofs (IOPs) are a multi-round generalization of probabilistically checkable proofs that play a fundamental role in the construction of efficient cryptographic proofs. We present an IOP that simultaneously achieves the properties of zero knowledge, linear-time proving, and polylogarithmic-time verification. We construct a zero-knowledge IOP where, for the satisfiability of an $N$-gate arithmetic circuit over any field of size $\Omega(N)$, the prover uses $O(N)$ field operations and the verifier uses $\mathrm{polylog}(N)$ field operations (with proof length $O(N)$ and query complexity $\mathrm{polylog}(N)$). Polylogarithmic verification is achieved in the holographic setting for every circuit (the verifier has oracle access to a linear-time-computable encoding of the circuit whose satisfiability is being proved). Our result implies progress on a basic goal in the area of efficient zero knowledge. Via a known transformation, we obtain a zero knowledge argument system where the prover runs in linear time and the verifier runs in polylogarithmic time; the construction is plausibly post-quantum and only makes a black-box use of lightweight cryptography (collision-resistant hash functions).
2022
EUROCRYPT
Anamorphic Encryption: Private Communication against a Dictator 📺
Cryptosystems have been developed over the years under the typical prevalent setting which assumes that the receiver’s key is kept secure from the adversary, and that the choice of the message to be sent is freely performed by the sender and is kept secure from the adversary as well. Under these fundamental and basic operational assumptions, modern Cryptography has flourished over the last half a century or so, with amazing achievements: New systems (including public-key Cryptography), beautiful and useful models (including security definitions such as semantic security), and new primitives (such as zero-knowledge proofs) have been developed. Furthermore, these fundamental achievements have been translated into actual working systems, and span many of the daily human activities over the Internet. However, in recent years, there has been ever-growing pressure from many governments to allow the government itself access to keys and messages of encryption systems (under various names: escrow encryption, emergency access, communication decency acts, etc.). Numerous non-direct arguments against such policies have been raised, such as “the bad guys can utilize other encryption systems” so all other cryptosystems have to be declared illegal, or that “allowing the government access is an ill-advised policy since it creates a natural weak systems security point, which may attract others (to masquerade as the government).” It remains a fundamental open issue to show directly that the above-mentioned efforts by a government (called here “a dictator” for brevity) which mandate breaking of the basic operational assumption (and disallowing other cryptosystems), are, in fact, a futile exercise. This is a direct technical point which needs to be made and has not been made to date. In this work, as a technical demonstration of the futility of the dictator’s demands, we invent the notion of “Anamorphic Encryption” which shows that even if the dictator gets the keys and the messages used in the system (before anything is sent) and no other system is allowed, there is a covert way within the context of well-established public-key cryptosystems for an entity to send secure messages which are, in spite of the stringent dictator conditions, hidden from the dictator itself! We feel that this may be an important direct technical argument against the nature of governments’ attempts to police the use of strong cryptographic systems, and we hope to stimulate further work in this direction.
2022
EUROCRYPT
Watermarking PRFs against Quantum Adversaries 📺
We initiate the study of software watermarking against quantum adversaries. A quantum adversary generates a quantum state as pirate software that potentially removes an embedded message from classical marked software. Extracting an embedded message from quantum pirate software is difficult since measurement could irreversibly alter the quantum state. In software watermarking against classical adversaries, a message extraction algorithm crucially uses the (input-output) behavior of classical pirate software to extract an embedded message. Even if we instantiate existing watermarking PRFs with quantum-safe building blocks, it is not clear whether they are secure against quantum adversaries due to the quantum-specific property above. Thus, we need entirely new techniques to achieve software watermarking against quantum adversaries. In this work, we define secure watermarking PRFs for quantum adversaries (unremovability against quantum adversaries). We also present two watermarking PRFs as follows. - We construct a privately extractable watermarking PRF against quantum adversaries from the quantum hardness of the learning with errors (LWE) problem. The marking and extraction algorithms use a public parameter and a private extraction key, respectively. The watermarking PRF is unremovable even if adversaries have (the public parameter and) access to the extraction oracle, which returns a result of extraction for a queried quantum circuit. - We construct a publicly extractable watermarking PRF against quantum adversaries from indistinguishability obfuscation (IO) and the quantum hardness of the LWE problem. The marking and extraction algorithms use a public parameter and a public extraction key, respectively. The watermarking PRF is unremovable even if adversaries have the extraction key (and the public parameter). We develop a quantum extraction technique to extract information (a classical string) from a quantum state without destroying the state too much. We also introduce the notion of extraction-less watermarking PRFs as a crucial building block for the results above; combining this tool with our quantum extraction technique yields both constructions.
2022
EUROCRYPT
Non-Interactive Zero-Knowledge Proofs with Fine-Grained Security 📺
We construct the first non-interactive zero-knowledge (NIZK) proof systems in the fine-grained setting where adversaries' resources are bounded and honest users have no more resources than an adversary. More concretely, our setting is the NC1-fine-grained setting, namely, all parties (including adversaries and honest participants) are in NC1. Our NIZK systems are for circuit satisfiability (SAT) under the worst-case assumption $NC1 \subsetneq L/poly$. As technical contributions, we propose two approaches to construct NIZK in the NC1-fine-grained setting. In stark contrast to the classical Fiat-Shamir transformation, both our approaches start with a simple Sigma-protocol and transform it into NIZKs for circuit SAT without random oracles. Additionally, our second approach first constructs a fully homomorphic encryption (FHE) scheme in the fine-grained setting, which was not known before, as a building block. Compared with the first approach, the resulting NIZK only supports circuits with constant multiplicative depth, while its proof size is independent of the statement circuit size. Extending our approaches, we obtain two NIZK systems in the uniform reference string model and two non-interactive zaps (namely, non-interactive witness-indistinguishability proof systems in the plain model). While the previous constructions from Ball, Dachman-Soled, and Kulkarni (CRYPTO 2020) require provers to run in polynomial time, our constructions are the first with provers in NC1.
2022
EUROCRYPT
Anonymity of NIST PQC Round 3 KEMs 📺
This paper investigates \emph{anonymity} of all NIST PQC Round 3 KEMs: Classic McEliece, Kyber, NTRU, Saber, BIKE, FrodoKEM, HQC, NTRU Prime (Streamlined NTRU Prime and NTRU LPRime), and SIKE. We show the following results: * NTRU is anonymous in the quantum random oracle model (QROM) if the underlying deterministic PKE is strongly disjoint-simulatable. NTRU is collision-free in the QROM. A hybrid PKE scheme constructed from NTRU as KEM and an appropriate DEM is anonymous and robust. (Similar results hold for BIKE, FrodoKEM, HQC, NTRU LPRime, and SIKE, except for one of the three parameter sets of HQC.) * Classic McEliece is anonymous in the QROM if the underlying PKE is strongly disjoint-simulatable, and a hybrid PKE scheme constructed from it as KEM and an appropriate DEM is anonymous. * Grubbs, Maram, and Paterson pointed out that Kyber and Saber have a gap in the current IND-CCA security proof in the QROM (EUROCRYPT 2022). We found that Streamlined NTRU Prime has another technical obstacle for the IND-CCA security proof in the QROM. These results answer the open problem of investigating the anonymity and robustness of NIST PQC Round 3 KEMs posed by Grubbs, Maram, and Paterson (EUROCRYPT 2022). We use strong disjoint-simulatability of the KEM's underlying PKE, together with strong pseudorandomness and smoothness/sparseness of the KEM, as the main tools, which will be of independent interest.
2022
EUROCRYPT
EpiGRAM: Practical Garbled RAM 📺
Best Paper Award
Garbled RAM (GRAM) is a powerful technique introduced by Lu and Ostrovsky that equips Garbled Circuit (GC) with a sublinear cost RAM without adding rounds of interaction. While multiple GRAM constructions are known, none are suitable for practice, due to costs that have high constants and poor scaling. We present the first GRAM suitable for practice. For computational security parameter $\kappa$ and for a size-$n$ RAM that stores blocks of size $w = \Omega(\log^2 n)$ bits, our GRAM incurs only amortized $O(w \cdot \log^2 n \cdot \kappa)$ communication and computation per access. We evaluate the concrete cost of our GRAM; our approach outperforms trivial linear-scan-based RAM for as few as $512$ $128$-bit elements.
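To give a rough feel for the asymptotics quoted above, the following sketch (Python; all hidden big-O constants set to 1, so the absolute numbers are meaningless and only the scaling in n is illustrated, not the paper's concrete benchmarks) compares the amortized per-access GRAM cost O(w · log^2 n · κ) with the n · w cost of a trivial linear scan:

    import math

    def gram_cost(n, w, kappa):
        # Amortized per-access cost O(w * log^2(n) * kappa); hidden constant assumed to be 1.
        return w * math.log2(n) ** 2 * kappa

    def linear_scan_cost(n, w):
        # Trivial RAM: touch all n blocks of w bits on every access.
        return n * w

    w, kappa = 128, 128   # 128-bit blocks, computational security parameter 128
    for n in [2**9, 2**15, 2**20, 2**30]:
        print(n, linear_scan_cost(n, w) / gram_cost(n, w, kappa))

The ratio grows roughly like n / (log^2 n · κ), which is why the linear scan loses for large memories; the concrete crossover point (512 elements in the paper's evaluation) depends on the real constants measured there.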
2022
EUROCRYPT
Efficient Schemes for Committing Authenticated Encryption 📺
This paper provides efficient authenticated-encryption (AE) schemes in which a ciphertext is a commitment to the key. These are extended, at minimal additional cost, to schemes where the ciphertext is a commitment to all encryption inputs, meaning key, nonce, associated data and message. Our primary schemes are modifications of GCM (for basic, unique-nonce AE security) and AES-GCM-SIV (for misuse-resistant AE security) and add both forms of commitment without any increase in ciphertext size. We also give more generic, but somewhat more costly, solutions.
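As a point of reference for what "the ciphertext is a commitment to the key" means operationally, here is a minimal, folklore-style sketch (Python; illustrative names, not the paper's GCM or AES-GCM-SIV modifications, and unlike the paper's primary schemes it does increase ciphertext size): a toy encrypt-then-MAC AE is wrapped so the ciphertext also carries a hash commitment to the key and nonce, which the receiver checks before decrypting.

    import hashlib
    import hmac
    import os

    def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
        # Toy counter-mode keystream from SHA-256; illustrative only, not a vetted cipher.
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def committing_encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
        body = bytes(a ^ b for a, b in zip(msg, _keystream(key, nonce, len(msg))))
        tag = hmac.new(key, nonce + body, hashlib.sha256).digest()
        # Key commitment: binds the ciphertext to (key, nonce) via a collision-resistant hash.
        com = hashlib.sha256(b"commit" + key + nonce).digest()
        return nonce + com + body + tag

    def committing_decrypt(key: bytes, ct: bytes) -> bytes:
        nonce, com, body, tag = ct[:16], ct[16:48], ct[48:-32], ct[-32:]
        if com != hashlib.sha256(b"commit" + key + nonce).digest():
            raise ValueError("ciphertext does not commit to this key")
        if not hmac.compare_digest(tag, hmac.new(key, nonce + body, hashlib.sha256).digest()):
            raise ValueError("authentication failure")
        return bytes(a ^ b for a, b in zip(body, _keystream(key, nonce, len(body))))

    key, nonce = os.urandom(32), os.urandom(16)
    assert committing_decrypt(key, committing_encrypt(key, nonce, b"hello")) == b"hello"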
2022
EUROCRYPT
Authentication in the Bounded Storage Model 📺
We consider the streaming variant of the Bounded Storage Model (BSM), where the honest parties can stream large amounts of data to each other, while only maintaining a small memory of size $n$. The adversary also operates as a streaming algorithm, but has a much larger memory size $m \gg n$. The goal is to construct unconditionally secure cryptographic schemes in the BSM, and prior works did so for symmetric-key encryption, key agreement, oblivious transfer and multiparty computation. In this work, we construct message authentication and signatures in the BSM. First, we consider the symmetric-key setting, where Alice and Bob share a small secret key. Alice can authenticate arbitrarily many messages to Bob by streaming long authentication tags of size $k \gg m$, while ensuring that the tags can be generated and verified using only $n$ bits of memory. We show a solution using local extractors (Vadhan; JoC '04), which allows for up to exponentially large adversarial memory $m = 2^{O(n)}$, and has tags of size $k= O(m)$. Second, we consider the same setting as above, but now additionally require each individual tag to be small, of size $k \leq n$. We show a solution is still possible when the adversary's memory is $m = O(n^2)$, which is optimal. Our solution relies on a space lower bound for learning parities (Raz; FOCS '16). Third, we consider the public-key signature setting. A signer Alice initially streams a long verification key over an authentic channel, while only keeping a short signing key in her memory. A verifier Bob receives the streamed verification key and generates some short verification digest that he keeps in his memory. Later, Alice can sign arbitrarily many messages using her signing key by streaming the signatures to Bob, who can verify them using his verification digest. We show a solution for $m= O(n^2)$, which we show to be optimal. Our solution relies on a novel entropy lemma, of independent interest. We show that, if a sequence of blocks has sufficiently high min-entropy, then a large fraction of individual blocks must have high min-entropy. Naive versions of this lemma are false, but we show how to patch the statement to make it hold.
2022
EUROCRYPT
Short Pairing-Free Blind Signatures with Exponential Security 📺
This paper proposes the first practical pairing-free three-move blind signature schemes that (1) are concurrently secure, (2) produce short signatures (i.e., {\em three} or {\em four} group elements/scalars), and (3) are provably secure either in the generic group model (GGM) or the algebraic group model (AGM) under the (plain or one-more) discrete logarithm assumption (beyond additionally assuming random oracles). We also propose a partially blind version of one of our schemes. Our schemes do not rely on the hardness of the ROS problem (which can be broken in polynomial time) or of the mROS problem (which admits sub-exponential attacks). The only prior work with these properties is Abe's signature scheme (EUROCRYPT '02), which was recently proved to be secure in the AGM by Kastner et al. (PKC '22), but which also produces signatures twice as long as those from our scheme. The core of our proofs of security is a new problem, called {\em weighted} {\em fractional} ROS (WFROS), for which we prove (unconditional) exponential lower bounds.
2022
EUROCRYPT
Secure Non-interactive Simulation: Feasibility \& Rate 📺
A natural approach to increasing the efficiency of secure computation is to non-interactively and securely transform diverse, inexpensive-to-generate correlated randomness, such as joint samples from noise sources, into correlations useful for secure computation protocols. Motivated by this general application for secure computation, our work introduces the notion of {\em secure non-interactive simulation} (\snis). Parties receive samples of correlated randomness, and they, without any interaction, securely convert them into samples from another correlated randomness. Our work presents a simulation-based security definition for \snis and initiates the study of the feasibility and efficiency of \snis. We also study \snis among fundamental correlated randomnesses like random samples from the binary symmetric and binary erasure channels, represented by \BSC and \BEC, respectively. We show the impossibility of interconversion between \BSC and \BEC samples. Next, we prove that a \snis of a $\BEC(\eps')$ sample (a \BEC with noise characteristic $\eps'$) from $\BEC(\eps)$ is feasible if and only if $(1-\eps') = (1-\eps)^k$, for some $k\in\NN$. In this context, we prove that all \snis constructions must be linear. Furthermore, if $(1-\eps') = (1-\eps)^k$, then the rate of simulating multiple independent $\BEC(\eps')$ samples is at most $1/k$, which is also achievable using (block) linear constructions. Finally, we show that a \snis of a $\BSC(\eps')$ sample from $\BSC(\eps)$ samples is feasible if and only if $(1-2\eps')=(1-2\eps)^k$, for some $k\in\NN$. Interestingly, there are linear as well as non-linear \snis constructions. When $(1-2\eps')=(1-2\eps)^k$, we prove that the rate of a {\em perfectly secure} \snis is at most $1/k$, which is achievable using linear and non-linear constructions. Our technical approach algebraizes the definition of \snis and proceeds via Fourier analysis. Our work develops general analysis methodologies for Boolean functions, explicitly incorporating cryptographic security constraints. Our work also proves strong forms of {\em statistical-to-perfect security} transformations: one can error-correct a statistically secure \snis to make it perfectly secure. We show a connection of our research with {\em homogeneous Boolean functions} and {\em distance-invariant codes}, which may be of independent interest.
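The feasibility characterizations above are easy to evaluate numerically; the sketch below (Python; the tolerance handling is ours) checks whether a target BEC or BSC noise level is reachable and reports the corresponding rate upper bound 1/k stated in the abstract.

    def bec_feasible(eps, eps_target, max_k=64, tol=1e-9):
        # SNIS of BEC(eps') from BEC(eps) is feasible iff (1 - eps') = (1 - eps)^k for some integer k.
        for k in range(1, max_k + 1):
            if abs((1 - eps) ** k - (1 - eps_target)) < tol:
                return k, 1.0 / k   # witness k and the rate upper bound 1/k
        return None

    def bsc_feasible(eps, eps_target, max_k=64, tol=1e-9):
        # SNIS of BSC(eps') from BSC(eps) is feasible iff (1 - 2 eps') = (1 - 2 eps)^k for some integer k.
        for k in range(1, max_k + 1):
            if abs((1 - 2 * eps) ** k - (1 - 2 * eps_target)) < tol:
                return k, 1.0 / k
        return None

    print(bec_feasible(0.5, 0.75))   # (2, 0.5): erasure 0.75 is reachable from BEC(0.5) with k = 2
    print(bsc_feasible(0.1, 0.18))   # (2, 0.5): 1 - 2*0.18 = 0.64 = 0.8^2
    print(bsc_feasible(0.1, 0.2))    # None: no integer k satisfies the condition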
2022
EUROCRYPT
SNARGs for P from Sub-exponential DDH and QR 📺
We obtain publicly verifiable Succinct Non-Interactive Arguments (SNARGs) for arbitrary deterministic computations and bounded-space non-deterministic computation from well-studied group-based assumptions. In particular, assuming the sub-exponential hardness of the Decisional Diffie-Hellman (DDH) and Quadratic Residuosity (QR) assumptions, we obtain the following results, where n denotes the length of the instance: 1. A SNARG for any language that can be decided in non-deterministic time T and space S with communication complexity and verifier runtime (n+S)·T^{o(1)}. 2. A SNARG for any language that can be decided in deterministic time T with communication complexity n·T^{o(1)} and verifier runtime n·T^{o(1)}.
2022
EUROCRYPT
Hiding in Plain Sight: Memory-tight Proofs via Randomness Programming 📺
This paper continues the study of {\em memory-tight reductions} (Auerbach et al, CRYPTO '17). These are reductions that only incur minimal memory costs over those of the original adversary, allowing precise security statements for memory-bounded adversaries (under appropriate assumptions expressed in terms of adversary time and memory usage). Despite its importance, only a few techniques to achieve memory-tightness are known and impossibility results in prior works show that even basic, textbook reductions cannot be made memory-tight. This paper introduces a new class of memory-tight reductions which leverage random strings in the interaction with the adversary to hide state information, thus shifting the memory costs to the adversary. We exhibit this technique with several examples. We give memory-tight proofs for digital signatures allowing many forgery attempts when considering randomized message distributions or probabilistic RSA-FDH signatures specifically. We prove security of the authenticated encryption scheme Encrypt-then-PRF with a memory-tight reduction to the underlying encryption scheme. By considering specific schemes or restricted definitions we avoid generic impossibility results of Auerbach et al.~(CRYPTO '17) and Ghoshal et al.~(CRYPTO '20). As a further case study, we consider the textbook equivalence of CCA-security for public-key encryption for one or multiple encryption queries. We show two qualitatively different memory-tight versions of this result, depending on the considered notion of CCA security.
2022
EUROCRYPT
On the Multi-User Security of Short Schnorr Signatures with Preprocessing 📺
The Schnorr signature scheme is an efficient digital signature scheme with short signature lengths, i.e., $4k$-bit signatures for $k$ bits of security. A Schnorr signature $\sigma$ over a group of size $p\approx 2^{2k}$ consists of a tuple $(s,e)$, where $e \in \{0,1\}^{2k}$ is a hash output and $s\in \mathbb{Z}_p$ must be computed using the secret key. While the hash output $e$ requires $2k$ bits to encode, Schnorr proposed that it might be possible to truncate the hash value without adversely impacting security. In this paper, we prove that \emph{short} Schnorr signatures of length $3k$ bits provide $k$ bits of multi-user security in the (Shoup's) generic group model and the programmable random oracle model. We further analyze the multi-user security of key-prefixed short Schnorr signatures against preprocessing attacks, showing that it is possible to obtain secure signatures of length $3k + \log S + \log N$ bits. Here, $N$ denotes the number of users and $S$ denotes the size of the hint generated by our preprocessing attacker, e.g., if $S=2^{k/2}$, then we would obtain secure $3.75k$-bit signatures for groups of up to $N \leq 2^{k/4}$ users. Our techniques easily generalize to several other Fiat-Shamir-based signature schemes, allowing us to establish analogous results for Chaum-Pedersen signatures and Katz-Wang signatures. As a building block, we also analyze the $1$-out-of-$N$ discrete-log problem in the generic group model, with and without preprocessing.
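The signature-length arithmetic above can be checked directly; this small sketch (Python; variable names are ours) reproduces the quoted 3.75k-bit example.

    import math

    def short_schnorr_sig_bits(k, S, N):
        # Secure signature length 3k + log2(S) + log2(N) bits, where S is the size of the
        # preprocessing hint and N the number of users (formula as quoted in the abstract).
        return 3 * k + math.log2(S) + math.log2(N)

    k = 128
    S = 2 ** (k // 2)   # hint of size 2^(k/2)
    N = 2 ** (k // 4)   # up to 2^(k/4) users
    print(short_schnorr_sig_bits(k, S, N))       # 480.0 bits
    print(short_schnorr_sig_bits(k, S, N) / k)   # 3.75, i.e. 3.75k-bit signatures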
2022
EUROCRYPT
Stacking Sigmas: A Framework to Compose Sigma-Protocols for Disjunctions 📺
Zero-Knowledge (ZK) Proofs for disjunctive statements have been a focus of a long line of research. Classical results such as Cramer {\em et al.} [CRYPTO'94] and Abe {\em et al.} [AC'02] design generic compilers that transform certain classes of ZK proofs into ZK proofs for disjunctive statements. However, the communication complexity of the resulting protocols in these results ends up being proportional to the total size of all the proofs in the disjunction. More recently, a series of works (e.g. Heath {\em et al.} [EC'20]) has exploited special properties of garbled circuits to construct efficient ZK proofs for disjunctions, where the proof size is only proportional to the length of the largest clause in the disjunction. However, these techniques do not appear to generalize beyond garbled circuits. In this work, we focus on achieving the best of both worlds. We design a \textit{general framework} that compiles a large class of {unmodified} $\Sigma$-protocols, each for an individual statement, into a new $\Sigma$-protocol that proves a disjunction of these statements. Our framework can be used both when each clause is proved with the same $\Sigma$-protocol and when different $\Sigma$-protocols are used for different clauses. The resulting $\Sigma$-protocol is concretely efficient and has communication complexity proportional to the communication required by the largest clause, with additive terms that are only logarithmic in the number of clauses. We show that our compiler can be applied to many well-known $\Sigma$-protocols, including classical protocols (\emph{e.g.} Schnorr and Guillou-Quisquater) and modern MPC-in-the-head protocols such as the recent work of Katz, Kolesnikov and Wang [CCS'18] and the Ligero protocol of Ames {\em et al.} [CCS'17]. Finally, since all of the protocols in our class can be made non-interactive in the random oracle model using the Fiat-Shamir transform, our result yields the first generic non-interactive zero-knowledge protocol for disjunctions where the communication only depends on the size of the largest clause.
2022
EUROCRYPT
Gemini: elastic SNARKs for diverse environments 📺
We introduce a new class of succinct arguments, which we call elastic. Elastic SNARKs allow the prover to allocate different resources (such as memory and time) depending on the execution environment and the statement to prove. The resulting output is independent of the prover’s configuration. To study elastic SNARKs, we extend the streaming paradigm of [Block et al., TCC’20]. We provide a definitional framework for elastic polynomial interactive oracle proofs for R1CS instances and design a compiler which transforms an elastic PIOP into a preprocessing argument system that supports streaming or random access to its inputs. Depending on the configuration, the prover will choose different trade-offs for time (either linear, or quasilinear) and memory (either linear, or logarithmic). We prove the existence of elastic SNARKs by presenting Gemini, a novel FFT-free preprocessing argument. We prove its security and develop a proof-of-concept implementation in Rust based on the arkworks framework. We provide benchmarks for large R1CS instances of tens of billions of gates on a single machine.
2022
EUROCRYPT
Lightweight, Maliciously Secure Verifiable Function Secret Sharing 📺
In this work, we present a lightweight construction of verifiable two-party function secret sharing (FSS) for point functions and multi-point functions. Our verifiability method is lightweight in two ways. Firstly, it is concretely efficient, making use of only symmetric-key operations; no public-key or MPC techniques are involved. Our performance is comparable with the state-of-the-art non-verifiable DPF constructions, and we outperform all prior DPF verification techniques in both computation and communication complexity, which we demonstrate with an implementation of our scheme. Secondly, our verification procedure is essentially unconstrained. It will verify that distributed point function (DPF) shares correspond to some point function irrespective of the output group size, the structure of the DPF output, or the set of points on which the DPF must be evaluated. This is in stark contrast with prior works, which depended on at least one and often all three of these constraints. In addition, our construction is the first DPF verification protocol that can verify general DPFs while remaining secure even if one server is malicious. Prior work on maliciously secure DPF verification could only verify DPFs where the non-zero output is binary and the output space is a large field. As an additional feature, our verification procedure can be batched so that verifying a polynomial number of DPF shares requires the exact same amount of communication as verifying one pair of DPF shares. We combine this packed DPF verification with a novel method for packing DPFs into shares of a multi-point function where the evaluation time, verification time, and verification communication are independent of the number of non-zero points in the function. Immediate corollaries of our results are two-server protocols for PIR and PSI that remain secure when any one of the three parties is malicious (either the client or one of the servers).
2022
EUROCRYPT
Indistinguishability Obfuscation from LPN over F_p, DLIN, and PRGs in NC^0 📺
In this work, we study what minimal sets of assumptions suffice for constructing indistinguishability obfuscation ($\iO$). We prove: {\bf Theorem}(Informal): {\em Assume sub-exponential security of the following assumptions: - the Learning Parity with Noise ($\mathsf{LPN}$) assumption over general prime fields $\mathbb{F}_p$ with polynomially many $\mathsf{LPN}$ samples and error rate $1/k^\delta$, where $k$ is the dimension of the $\mathsf{LPN}$ secret, and $\delta>0$ is any constant; - the existence of a Boolean Pseudo-Random Generator ($\mathsf{PRG}$) in $\mathsf{NC}^0$ with stretch $n^{1+\tau}$, where $n$ is the length of the $\mathsf{PRG}$ seed, and $\tau>0$ is any constant; - the Decision Linear ($\mathsf{DLIN}$) assumption on symmetric bilinear groups of prime order. Then, (subexponentially secure) indistinguishability obfuscation for all polynomial-size circuits exists. Further, assuming only polynomial security of the aforementioned assumptions, there exists collusion resistant public-key functional encryption for all polynomial-size circuits. This removes the reliance on the Learning With Errors (LWE) assumption from the recent work of [Jain, Lin, Sahai STOC'21]. As a consequence, we obtain the first fully homomorphic encryption scheme that does not rely on any lattice-based hardness assumption. Our techniques feature a new notion of randomized encoding called Preprocessing Randomized Encoding (PRE), that essentially can be computed in the exponent of pairing groups. When combined with other new techniques, PRE gives a much more streamlined construction of $\iO$ while still maintaining reliance only on well-studied assumptions.
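One of the three ingredients above is a Boolean PRG in NC^0 with polynomial stretch. Purely for intuition about what such an object looks like (this is the standard Goldreich-style local-PRG candidate with the 5-local XOR-AND predicate, not a construction from this paper), each output bit reads only a constant number of seed bits:

    import random

    def local_prg(seed_bits, out_len, wiring_rng):
        # Goldreich-style local PRG candidate: every output bit applies the predicate
        # x1 ^ x2 ^ x3 ^ (x4 & x5) to a public, randomly chosen 5-subset of the seed.
        n = len(seed_bits)
        out = []
        for _ in range(out_len):
            i1, i2, i3, i4, i5 = (wiring_rng.randrange(n) for _ in range(5))
            out.append(seed_bits[i1] ^ seed_bits[i2] ^ seed_bits[i3]
                       ^ (seed_bits[i4] & seed_bits[i5]))
        return out

    n = 256
    wiring = random.Random(0)                       # fixes the public input-output wiring
    seed = [random.randrange(2) for _ in range(n)]  # the secret seed
    stretch = int(n ** 1.1)                         # polynomial stretch n^{1+tau} with tau = 0.1
    print(len(local_prg(seed, stretch, wiring)), "output bits from", n, "seed bits")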
2022
EUROCRYPT
Asymmetric PAKE with low computation and communication 📺
In Crypto'21, Gu, Jarecki, and Krawczyk [20] showed an asymmetric password authenticated key exchange protocol (aPAKE) whose computational cost matches (symmetric) password authenticated key exchange (PAKE) and plain (i.e. unauthenticated) key exchange (KE). However, this minimal-cost aPAKE did not match prior aPAKEs in round complexity, using 4 rounds assuming the client initiates, compared to 2 rounds in an aPAKE of Bradley et al. In this paper we show two aPAKE protocols that achieve optimal computational cost and optimal round complexity. Our protocols can be seen as applications of the Encrypted Key Exchange (EKE) compiler of Bellovin and Merritt [6], which creates password-authenticated key exchange by password-encrypting messages in a key exchange protocol. Whereas Bellovin and Merritt used this method to construct a PAKE by applying password-encryption to KE messages, we construct an aPAKE by applying password-encryption to messages of a unilaterally authenticated Key Exchange (ua-KE). We present two versions of this compiler. The first uses a salted password hash and takes 3 rounds if the client initiates. The second uses an unsalted password hash and takes a single simultaneous flow (it is the first aPAKE to do so), thus simultaneously matching the minimal computational cost and the minimal round complexity of PAKE and KE. Assuming an Ideal Cipher (IC) on a group, we analyze our aPAKE protocols as modular constructions from ua-KE, realized via a (universally composable) Authenticated Key Exchange in which the server uses one-time keys (otk-AKE). We then show that one-pass variants of 3DH and HMQV securely realize otk-AKE in ROM. Interestingly, the two resulting concrete aPAKEs use the exact same protocol messages as two natural variants of EKE, and the only difference between the symmetric PAKE (EKE) and asymmetric PAKE (our protocols) is in the key derivation equation used to derive the final session key output.
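To make the Bellovin-Merritt EKE idea referenced above concrete, here is a toy sketch of the message flow: each party password-encrypts its Diffie-Hellman message before sending it. This is purely illustrative (a hash-derived one-time pad over a toy group stands in for the Ideal Cipher on a group assumed in the paper's analysis, and the unilateral-authentication layer that turns the compiler into an aPAKE is omitted).

    import hashlib
    import secrets

    P = 2**255 - 19   # prime modulus, used here only for illustration
    G = 5

    def pw_pad(password: bytes, label: bytes) -> int:
        # Password-derived pad standing in for the ideal cipher of EKE.
        return int.from_bytes(hashlib.sha512(label + password).digest(), "big")

    def eke_round(password: bytes) -> bool:
        a, b = secrets.randbelow(P), secrets.randbelow(P)
        A, B = pow(G, a, P), pow(G, b, P)
        # Each side password-encrypts its DH message before sending it.
        msg_client = A ^ pw_pad(password, b"client")
        msg_server = B ^ pw_pad(password, b"server")
        # Each side password-decrypts what it received and completes DH.
        k_client = pow(msg_server ^ pw_pad(password, b"server"), a, P)
        k_server = pow(msg_client ^ pw_pad(password, b"client"), b, P)
        return k_client == k_server

    print(eke_round(b"correct horse battery staple"))   # True

In the real construction, decryption under any candidate password yields a plausible-looking group element, which is what blocks offline dictionary attacks; the XOR pad above does not have this property and is shown only to illustrate the message flow.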
2022
EUROCRYPT
Anonymous, Robust Post-Quantum Public Key Encryption 📺
A core goal of the NIST PQC competition is to produce PKE schemes which, even if attacked with a large-scale quantum computer, maintain the security guarantees needed by applications. The main security focus in the NIST PQC context has been IND-CCA security, but other applications demand that PKE schemes provide 'anonymity' (Bellare et al., ASIACRYPT 2001), and 'robustness' (Abdalla et al., TCC 2010). Examples of such applications include anonymous cryptocurrencies, searchable encryption, and auction protocols. However, almost nothing is known about how to build post-quantum PKE schemes offering these security properties. In particular, the status of the NIST PQC candidates with respect to anonymity and robustness is unknown. This paper initiates a systematic study of anonymity and robustness for post-quantum PKE schemes. Firstly, we identify implicit rejection as a crucial design choice shared by most post-quantum KEMs, show that implicit rejection renders prior results on anonymity and robustness for KEM-DEM PKEs inapplicable, and transfer prior results to the implicit-rejection setting where possible. Secondly, since they are widely used to build post-quantum PKEs, we examine how the Fujisaki-Okamoto (FO) transforms (Fujisaki and Okamoto, Journal of Cryptology 2013) confer robustness and enhance weak anonymity of a base PKE. We then leverage our theoretical results to study the anonymity and robustness of three NIST KEM finalists---Saber, Kyber, and Classic McEliece---and one alternate, FrodoKEM. Overall, our findings for robustness are definitive: we provide positive robustness results for Saber, Kyber, and FrodoKEM, and a negative result for Classic McEliece. Our negative result stems from a striking property of KEM-DEM PKE schemes built with the Classic McEliece KEM: for any message 'm', we can construct a single hybrid ciphertext 'c' which decrypts to the chosen 'm' under any Classic McEliece private key. Our findings for anonymity are more mixed: we identify barriers to proving anonymity for Saber, Kyber, and Classic McEliece. We also found that in the case of Saber and Kyber, these barriers lead to issues with their IND-CCA security claims. We have worked with the Saber and Kyber teams to fix these issues, but they remain unresolved. On the positive side, we were able to prove anonymity for FrodoKEM and a variant of Saber introduced by D'Anvers et al. (AFRICACRYPT 2018). Our analyses of these two schemes also identified technical gaps in their IND-CCA security claims, but we were able to fix them.
2022
EUROCRYPT
Secure Multiparty Computation with Free Branching 📺
We study secure multi-party computation (MPC) protocols for branching circuits that contain multiple sub-circuits (i.e., branches), where the output of the circuit is that of a single ``active'' branch. Crucially, the identity of the active branch must remain hidden from the protocol participants. While such circuits can be securely computed by evaluating each branch and then multiplexing the output, such an approach incurs a communication cost linear in the size of the entire circuit. To alleviate this, a series of recent works have investigated the problem of reducing the communication cost of branching executions inside MPC (without relying on fully homomorphic encryption). Most notably, the stacked garbling paradigm [Heath and Kolesnikov, CRYPTO'20] yields garbled circuits for branching circuits whose size only depends on the size of the largest branch. Presently, however, it is not known how to obtain similar communication improvements for secure computation involving {\em more than two parties}. In this work, we provide a generic framework for branching multi-party computation that supports {\em any number of parties}. The communication complexity of our scheme is proportional to the size of the largest branch and the computation is linear in the size of the entire circuit. We provide an implementation and benchmarks to demonstrate the practicality of our approach.
2022
EUROCRYPT
Adaptively Secure Computation for RAM Programs 📺
We obtain the first two-round two-party computation protocol in the plain model that is secure against passive adversaries who can adaptively corrupt all parties. Assuming the existence of non-committing encryption, the communication complexity of our protocol is proportional to the square of the RAM complexity of the function, up to polylogarithmic factors.
2022
EUROCRYPT
Round-Optimal Multi-Party Computation with Identifiable Abort 📺
Secure multi-party computation (MPC) protocols that are resilient to a dishonest majority allow the adversary to get the output of the computation while, at the same time, forcing the honest parties to abort. Aumann and Lindell introduced the enhanced notion of security with identifiable abort, which still allows the adversary to trigger an abort but, at the same time, it enables the honest parties to agree on the identity of the party that led to the abort. More recently, in Eurocrypt 2016, Garg et al. showed that, assuming access to a simultaneous message exchange channel for all the parties, at least four rounds of communication are required to securely realize non-trivial functionalities in the plain model. Following Garg et al., a sequence of works has matched this lower bound, but none of them achieved security with identifiable abort. In this work, we close this gap and show that four rounds of communication are also sufficient to securely realize any functionality with identifiable abort using standard and generic polynomial-time assumptions. To achieve this result we introduce the new notion of bounded-rewind secure MPC that guarantees security even against an adversary that performs a mild form of reset attacks. We show how to instantiate this primitive starting from any MPC protocol and by assuming trapdoor-permutations. The notion of bounded-rewind secure MPC allows for easier parallel composition of MPC protocols with other (interactive) cryptographic primitives. Therefore, we believe that this primitive can be useful in other contexts in which it is crucial to combine multiple primitives with MPC protocols while keeping the round complexity of the final protocol low.
2022
EUROCRYPT
Non-malleable Commitments Against Quantum Attacks 📺
We construct, under standard hardness assumptions, the first non-malleable commitments secure against quantum attacks. Our commitments are statistically binding and satisfy the standard notion of {\em non-malleability with respect to commitment}. We obtain a $\log^\star(\lambda)$-round classical protocol, assuming the existence of post-quantum one-way functions. Previously, non-malleable commitments with quantum security were only known against a restricted class of adversaries known as {\em synchronizing adversaries.} At the heart of our results is a new general technique that allows us to modularly obtain non-malleable commitments from any extractable commitment protocol, regardless of the underlying extraction strategy (black-box or non-black-box) or round complexity. The transformation may also be of interest in the classical setting.
2022
EUROCRYPT
Secure Multiparty Computation with Sublinear Preprocessing 📺
A common technique for enhancing the efficiency of secure multiparty computation (MPC) with dishonest majority is via {\em preprocessing}: In an offline phase, parties engage in an input-independent protocol to securely generate correlated randomness. Once inputs are known, the correlated randomness is consumed by a ``non-cryptographic'' and highly efficient online protocol. The correlated randomness in such protocols traditionally comes in two flavors: multiplication triples (Beaver, Crypto '91), which suffice for security against semi-honest parties, and {\em authenticated} multiplication triples (Bendlin et al., Eurocrypt '11, Damg{\aa}rd et al., Crypto '12) that yield efficient protocols against malicious parties. Recent constructions of pseudorandom correlation generators (Boyle et al., Crypto '19, '20) enable concretely efficient secure generation of multiplication triples with {\em sublinear communication complexity}. However, these techniques do not efficiently apply to authenticated triples, except in the case of secure two-party computation of arithmetic circuits over large fields. In this work, we propose the first {\em concretely efficient} approach for (malicious) MPC with preprocessing in which the offline communication is {\em sublinear} in the circuit size. More specifically, the offline communication scales with the {\em square root} of the circuit size. From a feasibility point of view, our protocols can make use of any secure protocol for generating (unauthenticated) multiplication triples together with any {\em additive} homomorphic encryption. We propose concretely efficient instantiations (based on strong but plausible ``linear-only'' assumptions) from existing homomorphic encryption schemes and pseudorandom correlation generators. Our technique is based on a variant of a recent protocol of Boyle et al. (Crypto '21) for MPC with preprocessing. As a result, our protocols inherit the succinct correlated randomness feature of the latter protocol.
2022
EUROCRYPT
Information-Combining Differential Fault Attacks on DEFAULT 📺
Differential fault analysis (DFA) is a very powerful attack vector on implementations of symmetric cryptography. Most countermeasures are applied at the implementation level. At ASIACRYPT 2021, Baksi et al. proposed a design strategy that aims to provide inherent cipher-level resistance against DFA by using S-boxes with linear structures. They argue that in their instantiation, the block cipher DEFAULT, a DFA adversary can learn at most 64 of the 128 key bits, so the remaining brute-force complexity of 2^64 is impractical. In this paper, we show that a DFA adversary can combine information across rounds to recover the full key, invalidating their security claim. In particular, we observe that such ciphers exhibit large classes of equivalent keys that can be represented efficiently in normalized form using linear equations. We exploit this in combination with the specifics of DEFAULT's strong key schedule to recover the key using fewer than 100 faulty computations and negligible time complexity. Moreover, we show that even an idealized version of DEFAULT with independent round keys is vulnerable to our information-combining attacks based on normalized keys.
2022
EUROCRYPT
Revamped Differential-Linear Cryptanalysis on Reduced Round ChaCha 📺
In this paper, we provide several improvements over the existing differential-linear attacks on ChaCha. ChaCha is a stream cipher which has $20$ rounds. At CRYPTO $2020$, Beierle et al. observed a differential in the $3.5$-th round if the right pairs are chosen. They produced an improved attack using this, but showed that to achieve a right pair, we need $2^5$ iterations on average. In this direction, we provide a technique to find the right pairs with the help of listing. Also, we provide a strategic improvement in PNB construction, a modification of the complexity calculation, and an alternative attack method using two input-output pairs. Using these, we improve the time complexity, reducing it to $2^{221.95}$ from the $2^{230.86}$ reported by Beierle et al. for the $256$-bit version of ChaCha. Also, after a decade, we improve the existing complexity (Shi et al., ICISC 2012) for the $6$-round $128$-bit version of ChaCha by a factor of more than 11 million, and produce the first-ever attack on 6.5-round ChaCha$128$ with time complexity $2^{123.04}$.
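For reference, the quoted complexities for the 256-bit version correspond to a speed-up factor of $2^{230.86-221.95} \approx 2^{8.91}$, i.e. roughly 480x, as a quick check shows (Python):

    old_bits, new_bits = 230.86, 221.95
    print(2 ** (old_bits - new_bits))   # ~481: speed-up over the attack of Beierle et al.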
2022
EUROCRYPT
CoCoA: Concurrent Continuous Group Key Agreement 📺
Messaging platforms like Signal are widely deployed and provide strong security in an asynchronous setting. It is a challenging problem to construct a protocol with similar security guarantees that can \emph{efficiently} scale to large groups. A major bottleneck is the frequent key rotations users need to perform to achieve post-compromise forward security. In current proposals -- most notably in TreeKEM (which is part of the IETF's Messaging Layer Security (MLS) protocol draft) -- for users in a group of size $n$ to rotate their keys, they must each craft a message of size $\log(n)$ to be broadcast to the group using an (untrusted) delivery server. In larger groups, having users sequentially rotate their keys requires too much bandwidth (or takes too long), so variants allowing any $T \leq n$ users to simultaneously rotate their keys in just $2$ communication rounds have been suggested (e.g.\ ``Propose and Commit" by MLS). Unfortunately, $2$-round concurrent updates are either damaging or expensive (or both); i.e.\ they either result in future operations being more costly (e.g.\ via ``blanking'' or ``tainting'') or are costly themselves, requiring $\Omega(T)$ communication for each user [Bienstock et al., TCC'20]. In this paper we propose CoCoA: a new scheme that allows for $T$ concurrent updates that are neither damaging nor costly. That is, they add no cost to future operations yet they only require $O(\log^2(n))$ communication per user. To circumvent the [Bienstock et al.] lower bound, CoCoA increases the number of rounds needed to complete all updates from $2$ up to (at most) $\log(n)$; though typically fewer rounds are needed. The key insight of our protocol is the following: in the (non-concurrent version of) TreeKEM, a delivery server which gets $T$ concurrent update requests will approve one and reject the remaining $T-1$. In contrast, our server attempts to apply all of them. If more than one user requests to rotate the same key during a round, the server arbitrarily picks a winner. Surprisingly, we prove that regardless of how the server chooses the winners, all previously compromised users will recover after at most $\log(n)$ such update rounds. To keep the communication complexity low, CoCoA is a server-aided CGKA. That is, the delivery server no longer blindly forwards packets, but instead actively computes individualized packets tailored to each user. As the server is untrusted, this change requires us to develop new mechanisms ensuring robustness of the protocol.
2022
EUROCRYPT
On the Concrete Security of TLS 1.3 PSK Mode 📺
The pre-shared key (PSK) handshake modes of TLS 1.3 allow for the performant, low-latency resumption of previous connections and are widely used on the Web and by resource-constrained devices, e.g., in the Internet of Things. Taking advantage of these performance benefits with optimal and theoretically-sound parameters requires tight security proofs. We give the first tight security proofs for the TLS 1.3 PSK handshake modes. Our main technical contribution is to address a gap in prior tight security proofs of TLS 1.3 which modeled either the entire key schedule or components thereof as independent random oracles to enable tight proof techniques. These approaches ignore existing interdependencies in TLS 1.3's key schedule, arising from the fact that the same cryptographic hash function is used in several components of the key schedule and the handshake more generally. We overcome this gap by proposing a new abstraction for the key schedule and carefully arguing its soundness via the indifferentiability framework. Interestingly, we observe that for one specific configuration, PSK-only mode with hash function SHA-384, it seems difficult to argue indifferentiability due to a lack of domain separation between the various hash function usages. We view this as an interesting insight for the design of protocols, such as future TLS versions. For all other configurations however, our proofs significantly tighten the security of the TLS 1.3 PSK modes, confirming standardized parameters (for which prior bounds provided subpar or even void guarantees) and enabling a theoretically-sound deployment.
2022
EUROCRYPT
Distributed (Correlation) Samplers: How to Remove a Trusted Dealer in One Round 📺
Structured random strings (SRSs) and correlated randomness are important for many cryptographic protocols. In settings where interaction is expensive, it is desirable to obtain such randomness in as few rounds of communication as possible; ideally, simply by exchanging one reusable round of messages which can be considered public keys. In this paper, we describe how to generate any SRS or correlated randomness in such a single round of communication, using, among other things, indistinguishability obfuscation. We introduce what we call a distributed sampler, which enables n parties to sample a single public value (SRS) from any distribution. We construct a semi-malicious distributed sampler in the plain model, and use it to build a semi-malicious public-key PCF (Boyle et al., FOCS 2020) in the plain model. A public-key PCF can be thought of as a distributed correlation sampler; instead of producing a public SRS, it gives each party a private random value (where the values satisfy some correlation). We introduce a general technique called an anti-rusher which compiles any one-round protocol with semi-malicious security without inputs to a similar one-round protocol with active security by making use of a programmable random oracle. This gets us actively secure distributed samplers and public-key PCFs in the random oracle model. Finally, we explore some tradeoffs. Our first PCF construction is limited to reverse-sampleable correlations (where the random outputs of honest parties must be simulatable given the random outputs of corrupt parties); we additionally show a different construction without this limitation, but which does not allow parties to hold secret parameters of the correlation. We also describe how to avoid the use of a random oracle at the cost of relying on sub-exponentially secure indistinguishability obfuscation.
2022
EUROCRYPT
COA-Secure Obfuscation and Applications 📺
We put forth a new paradigm for program obfuscation, where obfuscated programs are endowed with proofs of ``well-formedness.'' In addition to asserting existence of an underlying plaintext program with an attested structure, these proofs also prevent mauling attacks, whereby an adversary surreptitiously creates an obfuscated program based on secrets which are embedded in other obfuscated programs. We call this new guarantee Chosen Obfuscation Attacks (COA) security. We show how to enhance a large class of obfuscation mechanisms to be COA-secure, assuming subexponentially secure IO for circuits and subexponentially secure one-way functions. To demonstrate the power of the new notion, we also use it to realize: - A new form of software watermarking, which provides significantly broader protection than current schemes against counterfeits that pass a keyless, public verification process. - Completely CCA encryption, which is a strengthening of completely non-malleable encryption.
2022
EUROCRYPT
Unclonable Polymers and Their Cryptographic Applications 📺
We propose a mechanism for generating and manipulating protein polymers to obtain a new type of *consumable storage* that exhibits intriguing cryptographic "self-destruct" properties, assuming the hardness of certain polymer-sequencing problems. To demonstrate the cryptographic potential of this technology, we first develop a formalism that captures (in a minimalistic way) the functionality and security properties provided by the technology. Next, using this technology, we construct and prove security of two cryptographic applications that are currently obtainable only via trusted hardware that implements logical circuitry (either classical or quantum). The first application is a password-controlled *secure vault* where the stored data is irrecoverably erased once a threshold of unsuccessful access attempts is reached. The second is (a somewhat relaxed version of) *one time programs*, namely a device that allows evaluating a secret function only a limited number of times before self-destructing, where each evaluation is made on a fresh user-chosen input. Finally, while our constructions, modeling, and analysis are designed to capture the proposed polymer-based technology, they are sufficiently general to be of potential independent interest.
2022
EUROCRYPT
Asymptotically Quasi-Optimal Cryptography 📺
The question of minimizing the {\em computational overhead} of cryptography was put forward by the work of Ishai, Kushilevitz, Ostrovsky and Sahai (STOC 2008). The main conclusion was that, under plausible assumptions, most cryptographic primitives can be realized with {\em constant} computational overhead. However, this ignores an additive term that may depend polynomially on the (concrete) computational security parameter $\lambda$. In this work, we study the question of obtaining optimal efficiency, up to polylogarithmic factors, for {\em all} choices of $n$ and $\lambda$, where $n$ is the size of the given task. In particular, when $n=\lambda$, we would like the computational cost to be only $\tilde O(\lambda)$. We refer to this goal as {\em asymptotically quasi-optimal} (AQO) cryptography. We start by realizing the first AQO semi-honest batch oblivious linear evaluation (BOLE) protocol. Our protocol applies to OLE over small fields and relies on the near-exponential security of the ring learning with errors (RLWE) assumption. Building on the above and on known constructions of AQO PCPs, we design the first AQO zero-knowledge (ZK) argument system for Boolean circuit satisfiability. Our construction combines a new AQO ZK-PCP construction that respects the AQO property of the underlying PCP along with a technique for converting statistical secrecy into soundness via OLE reversal. Finally, combining the above results, we get AQO secure computation protocols for Boolean circuits with security against malicious parties under RLWE.
2022
EUROCRYPT
Guaranteed Output in O(sqrt(n)) Rounds for Round-Robin Sampling Protocols 📺
We introduce a notion of round-robin secure sampling that captures several protocols in the literature, such as the "powers-of-tau" setup protocol for pairing-based polynomial commitments and zk-SNARKs, and certain verifiable mixnets. Due to their round-robin structure, protocols of this class inherently require n sequential broadcast rounds, where n is the number of participants. We describe how to compile them generically into protocols that require only O(sqrt(n)) broadcast rounds. Our compiled protocols guarantee output delivery against any dishonest majority. This stands in contrast to prior techniques, which require Omega(n) sequential broadcasts in most cases (and sometimes many more). Our compiled protocols permit a certain amount of adversarial bias in the output, as all sampling protocols with guaranteed output must, due to Cleve's impossibility result (STOC'86). We show that in the context of the aforementioned applications, this bias is harmless.
2022
EUROCRYPT
Constant-round Blind Classical Verification of Quantum Sampling 📺
In a recent breakthrough, Mahadev constructed a classical verification of quantum computation (CVQC) protocol for a classical client to delegate decision problems in BQP to an untrusted quantum prover under computational assumptions. In this work, we explore further the feasibility of CVQC with the more general sampling problems in BQP and with the desirable blindness property. We contribute affirmative solutions to both as follows. * Motivated by the sampling nature of many quantum applications (e.g., quantum algorithms for machine learning and quantum supremacy tasks), we initiate the study of CVQC for quantum sampling problems (denoted by SampBQP). More precisely, in a CVQC protocol for a SampBQP problem, the prover and the verifier are given an input $x \in \{0,1\}^n$ and a quantum circuit $C$, and the goal of the classical client is to learn a sample from the output $z \leftarrow C(x)$ up to a small error, from its interaction with an untrusted prover. We demonstrate its feasibility by constructing a four-message CVQC protocol for SampBQP based on the quantum Learning With Errors assumption. * The blindness of CVQC protocols refers to a property of the protocol where the prover learns nothing, and hence is blind, about the client’s input. It is a highly desirable property that has been intensively studied for the delegation of quantum computation. We provide a simple yet powerful generic compiler that transforms any CVQC protocol to a blind one while preserving its completeness and soundness errors as well as the number of rounds. Applying our compiler to (a parallel repetition of) Mahadev’s CVQC protocol for BQP and our CVQC protocol for SampBQP yields the first constant-round blind CVQC protocol for BQP and SampBQP respectively, with negligible and inverse polynomial soundness errors respectively, and negligible completeness errors.
2022
EUROCRYPT
Optimal Broadcast Encryption and CP-ABE from Evasive Lattice Assumptions 📺
We present a new, simple candidate broadcast encryption scheme for N users with parameter size poly(log N). We prove security of our scheme under a non-standard variant of the LWE assumption where the distinguisher additionally receives short Gaussian pre-images while avoiding zeroizing attacks. This yields the first candidate optimal broadcast encryption that is plausibly post-quantum secure, and enjoys a security reduction to a simple assumption. As a secondary contribution, we present a candidate ciphertext-policy attribute-based encryption (CP-ABE) scheme for circuits of a-priori bounded polynomial depth where the parameter size is independent of the circuit size, and prove security under an additional non-standard assumption.
2022
PKC
Polynomial IOPs for Linear Algebra Relations 📺
This paper proposes new Polynomial IOPs for arithmetic circuits. They rely on the monomial coefficient basis to represent the matrices and vectors arising from the arithmetic constraint satisfaction system, and build on new protocols for establishing the correct computation of linear algebra relations such as matrix-vector products and Hadamard products. Our protocols give rise to concrete proof systems with succinct verification when compiled down with a cryptographic compiler whose role is abstracted away in this paper. Depending only on the compiler, the resulting SNARKs are either transparent or rely on a trusted setup.
2022
PKC
Efficient Lattice-Based Inner-Product Functional Encryption 📺
In recent years, many research lines on Functional Encryption (FE) have been suggested and studied, regarding functionality, security, or efficiency. Nevertheless, an open problem on a basic functionality, single-input inner-product functional encryption (IPFE), remains: can IPFE be instantiated based on the Ring Learning With Errors (RLWE) assumption? The RLWE assumption provides quantum-resistant security and, in comparison with the LWE assumption, gives significant performance and compactness gains. In this paper we present the first RLWE-based IPFE scheme. We carefully choose strategies in the security proofs to optimize the size of parameters. More precisely, we develop two new results on ideal lattices. The first result is a variant of Ring-LWE, which we call multi-hint extended Ring-LWE, where some hints on the secret and the noise are given. We present a reduction from the RLWE problem to this variant. The second tool is a special form of the Leftover Hash Lemma (LHL) over rings, known as Ring-LHL. To demonstrate the efficiency of our scheme we provide an optimized implementation of our RLWE-based IPFE scheme and show its performance on a practical use case. We further present new compilers that, combined with some existing ones, can transform a single-input FE into its (identity-based, decentralized) multi-client variant with ciphertext size linear in the number of clients.
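For readers unfamiliar with the primitive, the single-input inner-product functionality instantiated here is, written generically (standard IPFE syntax, not the paper's RLWE construction): $\mathsf{Dec}(\mathsf{sk}_{\vec{y}}, \mathsf{Enc}(\mathsf{mpk}, \vec{x})) = \langle \vec{x}, \vec{y} \rangle$, where $\mathsf{sk}_{\vec{y}} \leftarrow \mathsf{KeyGen}(\mathsf{msk}, \vec{y})$, and the ciphertext is required to hide everything about $\vec{x}$ beyond this inner product.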
2022
PKC
A Note on the Post-Quantum Security of (Ring) Signatures 📺
This work revisits the security of classical signatures and ring signatures in a quantum world. For (ordinary) signatures, we focus on the arguably preferable security notion of {\em blind-unforgeability} recently proposed by Alagic et al.\ (Eurocrypt'20). We present two {\em short} signature schemes achieving this notion: one is in the quantum random oracle model, assuming quantum hardness of SIS; and the other is in the plain model, assuming quantum hardness of LWE with super-polynomial modulus. Prior to this work, the only known blind-unforgeable schemes are Lamport's one-time signature and the Winternitz one-time signature, and both of them are in the quantum random oracle model. For ring signatures, the recent work by Chatterjee et al.\ (Crypto'21) proposes a definition trying to capture adversaries with quantum access to the signer. However, it is unclear if their definition, when restricted to the classical world, is as strong as the standard security notion for ring signatures. They also present a construction that only {\em partially} achieves (even) this seeming weak definition, in the sense that the adversary can only conduct superposition attacks over the messages, but not the rings. We propose a new definition that does not suffer from the above issue. Our definition is an analog to the blind-unforgeability in the ring signature setting. Moreover, assuming the quantum hardness of LWE, we construct a compiler converting any blind-unforgeable (ordinary) signatures to a ring signature satisfying our definition.
2022
PKC
Multitarget decryption failure attacks and their application to Saber and Kyber 📺
Many lattice-based encryption schemes are subject to a very small probability of decryption failures. It has been shown that an adversary can efficiently recover the secret key using a number of ciphertexts that cause such a decryption failure. In PKC 2019, D'Anvers et al. introduced `failure boosting', a technique to speed up the search for decryption failures. In this work we first improve the state-of-the-art multitarget failure boosting attacks. We then improve the cost calculation of failure boosting and extend the applicability of these calculations to permit cost calculations of real-world schemes. Using our newly developed methodologies we determine the multitarget decryption failure attack cost for all parameter sets of Saber and Kyber, showing among others that the quantum security of Saber can theoretically be reduced from 172 bits to 145 bits in specific circumstances. We then discuss the applicability of decryption failure attacks in real-world scenarios, showing that an attack might not be practical to execute.
2022
PKC
On the Isogeny Problem with Torsion Point Information 📺
It has recently been rigorously proven (and was previously known under certain heuristics) that the general supersingular isogeny problem reduces to the supersingular endomorphism ring computation problem. However, in order to attack SIDH-type schemes, one requires a particular isogeny which is usually not returned by the general reduction. At Asiacrypt 2016, Galbraith, Petit, Shani and Ti presented a polynomial-time reduction of the problem of finding the secret isogeny in SIDH to the problem of computing the endomorphism ring of a supersingular elliptic curve. Their method exploits the fact that secret isogenies in SIDH are of degree approximately $p^{1/2}$. The method does not extend to other SIDH-type schemes, where secret isogenies of larger degree are used and this condition is not fulfilled. We present a more general reduction algorithm that generalises to all SIDH-type schemes. The main idea of our algorithm is to exploit available torsion point images together with the KLPT algorithm to obtain a linear system of equations over a certain residue class ring. We show that this system will have a unique solution that can be lifted to the integers if some mild conditions on the parameters are satisfied. This lift then yields the secret isogeny. One consequence of this work is that the choice of the prime $p$ in \mbox{B-SIDH} is tight.
2022
PKC
Financially Backed Covert Security 📺
The security notion of covert security introduced by Aumann and Lindell (TCC'07) allows the adversary to successfully cheat and break security with a fixed probability $1-\epsilon$, while with probability $\epsilon$ honest parties detect the cheating attempt. Asharov and Orlandi (ASIACRYPT'12) extend covert security so that parties can create publicly verifiable evidence of misbehavior that can be transferred to any third party. This notion is called publicly verifiable covert security (PVC) and has been investigated by multiple works. While these two notions work well in settings with known identities in which parties care about their reputation, they fall short in Internet-like settings where there are only digital identities that can provide some form of anonymity. In this work, we propose the notion of financially backed covert security (FBC), which ensures that the adversary is financially punished if cheating is detected. Next, we present three transformations that turn PVC protocols into FBC protocols. Our protocols provide highly efficient judging, thereby enabling practical judge implementations via smart contracts deployed on a blockchain. In particular, the judge only needs to non-interactively validate a single protocol message, whereas previous PVC protocols required the judge to emulate the whole protocol. Furthermore, by allowing an interactive punishment procedure, we can reduce the amount of validation to a single program instruction, e.g., a gate in a circuit. An interactive punishment procedure additionally enables us to create financially backed covert secure protocols without any form of common public transcript, a property that has not been achieved by prior PVC protocols.
2022
PKC
Efficient Lattice-Based Blind Signatures via Gaussian One-Time Signatures 📺
Lattice-based blind signature schemes have been receiving increased attention lately. Earlier efficient 3-round schemes (Asiacrypt 2010, Financial Cryptography 2020) were recently shown to have mistakes in their proofs, and fixing them turned out to be extremely inefficient and limited the number of signatures that a signer could issue to less than a dozen (Crypto 2020). In this work we propose a round-optimal, 2-round lattice-based blind signature scheme which produces signatures of length 150KB. The running time of the signing protocol is linear in the maximum number of signatures that can be given out, and this limits the number of signatures that can be signed per public key. Nevertheless, the scheme is still quite efficient when the number of signatures is limited to a few tens of thousands, and it appears to currently be the most efficient lattice-based candidate.
2022
PKC
Post-quantum Asynchronous Deniable Key Exchange and the Signal Handshake 📺
The key exchange protocol that establishes initial shared secrets in the handshake of the Signal end-to-end encrypted messaging protocol has several important characteristics: (1) it runs asynchronously (without both parties needing to be simultaneously online), (2) it provides implicit mutual authentication while retaining deniability (transcripts cannot be used to prove either party participated in the protocol), and (3) it retains security even if some keys are compromised (forward secrecy and beyond). All of these properties emerge from clever use of the highly flexible Diffie--Hellman protocol. While quantum-resistant key encapsulation mechanisms (KEMs) can replace Diffie--Hellman key exchange in some settings, there is no KEM-based replacement for the Signal handshake that achieves all three aforementioned properties, in part due to the inherent asymmetry of KEM operations. In this paper, we show how to construct asynchronous deniable key exchange by combining KEMs and designated verifier signature (DVS) schemes. There are several candidates for post-quantum DVS schemes, either direct constructions or via ring signatures. This yields a template for an efficient post-quantum realization of the Signal handshake with the same asynchronicity and security properties as the original Signal protocol.
2022
PKC
Towards a Simpler Lattice Gadget Toolkit 📺
As building blocks, gadgets and their associated algorithms are widely used in advanced lattice cryptosystems. The gadget algorithms for power-of-base moduli are very efficient and simple; however, the current algorithms for arbitrary moduli are still complicated and practically more costly despite several efforts. Considering the necessity of arbitrary moduli, developing simpler and more practical gadget algorithms for arbitrary moduli is crucial to improving the practical performance of lattice-based applications. In this work, we propose two new gadget sampling algorithms for arbitrary moduli. Our first algorithm is for gadget Gaussian sampling. It is simple and efficient. One distinguishing feature of our Gaussian sampler is that it does not need floating-point arithmetic, which makes it more suitable for constrained environments. Our second algorithm is for gadget subgaussian sampling. Compared with the existing algorithm, it is simpler, faster, and requires asymptotically less randomness. In addition, our subgaussian sampler achieves an almost equal quality for different practical parameters. Overall these two algorithms provide simpler options for gadget algorithms and enhance the practicality of the gadget toolkit.
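For readers unfamiliar with lattice gadgets, the sketch below shows the standard base-b gadget and its deterministic digit decomposition, which is the structure that gadget Gaussian and subgaussian samplers randomize. It only illustrates the gadget itself (with arbitrary base, modulus and dimension), not the algorithms proposed in this paper.

```python
# A minimal sketch: the "gadget" g = (1, b, b^2, ..., b^{k-1}) and the deterministic
# digit decomposition g^{-1}(v), which writes v as a small-entry vector x with
# <g, x> = v (mod q). Gadget Gaussian/subgaussian sampling randomizes this step.

def gadget_decompose(v: int, b: int, k: int, q: int) -> list[int]:
    """Return digits x_0..x_{k-1} in [0, b) with sum(x_i * b^i) == v mod q (assuming b^k >= q)."""
    v %= q
    digits = []
    for _ in range(k):
        digits.append(v % b)
        v //= b
    return digits

# Example: q = 12289, base b = 4, k = 7 digits (4^7 = 16384 >= q); all values illustrative.
q, b, k = 12289, 4, 7
v = 9876
x = gadget_decompose(v, b, k, q)
assert sum(d * b**i for i, d in enumerate(x)) % q == v
```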
2022
PKC
The Direction of Updatable Encryption Does Matter 📺
We introduce a new definition for key updates, called backward-leak uni-directional key updates, in updatable encryption (UE). This notion is a variant of uni-directional key updates for UE. We show that existing secure UE schemes in the bi-directional key updates setting are not secure in the backward-leak uni-directional key updates setting. Thus, security in the backward-leak uni-directional key updates setting is strictly stronger than security in the bi-directional key updates setting. This result is in sharp contrast to the equivalence theorem by Jiang (Asiacrypt 2020), which says security in the bi-directional key updates setting is equivalent to security in the existing uni-directional key updates setting. We call the existing uni-directional key updates ``forward-leak uni-directional'' key updates to distinguish two types of uni-directional key updates in this paper. We also present two UE schemes with the following features. - The first scheme is post-quantum secure in the backward-leak uni-directional key updates setting under the learning with errors assumption. - The second scheme is secure in the no-directional key updates setting and based on indistinguishability obfuscation and one-way functions. This result solves the open problem left by Jiang (Asiacrypt 2020).
2022
PKC
Leakage-Resilient IBE/ABE with Optimal Leakage Rates from Lattices 📺
We derive the first adaptively secure IBE and ABE for t-CNF, and selectively secure ABE for general circuits from lattices, with $1-o(1)$ leakage rates, in both the relative leakage model and the bounded retrieval model (BRM). To achieve this, we first identify a new fine-grained security notion for ABE -- partially adaptive/selective security -- and instantiate this notion from LWE. Then, using this notion, we design a new key compressing mechanism for identity-based/attribute-based weak hash proof systems (IB/AB-wHPS) for various policy classes, achieving (1) succinct secret keys and (2) adaptive/selective security matching the existing non-leakage-resilient lattice-based designs. Using the existing connection between weak hash proof systems and leakage-resilient encryption, the succinct-key IB/AB-wHPS yield the desired leakage-resilient IBE/ABE schemes with optimal leakage rates in the relative leakage model. Finally, by further improving the prior analysis of the compatible locally computable extractors, we achieve optimal leakage rates in the BRM.
2022
PKC
Syndrome Decoding Estimator 📺
The selection of secure parameter sets requires an estimation of the attack cost to break the respective cryptographic scheme instantiated under these parameters. The current NIST standardization process for post-quantum schemes makes this an urgent task, especially considering the announcement to select final candidates by the end of 2021. For code-based schemes, recent estimates seemed to contradict the claimed security of most proposals, leading to a certain doubt about the correctness of those estimates. Furthermore, none of the available estimates includes the most recent algorithmic improvements on decoding linear codes, which are based on information set decoding (ISD) in combination with nearest neighbor search. In this work we observe that \emph{all} major ISD improvements are built on nearest neighbor search, explicitly or implicitly. This allows us to derive a framework from which we obtain \emph{practical} variants of all relevant ISD algorithms, including the most recent improvements. We derive formulas for the practical attack costs and make them available online in an easy-to-use estimator tool written in Python and C. Eventually, we provide classical and quantum estimates for the bit security of all parameter sets of current code-based NIST proposals.
2022
PKC
On the Bottleneck Complexity of MPC with Correlated Randomness 📺
At ICALP 2018, Boyle et al. introduced the notion of the \emph{bottleneck complexity} of a secure multi-party computation (MPC) protocol. This measures the maximum communication complexity of any one party in the protocol, aiming to improve load-balancing among the parties. In this work, we study the bottleneck complexity of MPC in the preprocessing model, where parties are given correlated randomness ahead of time. We present two constructions of \emph{bottleneck-efficient} MPC protocols, whose bottleneck complexity is independent of the number of parties: 1. A protocol for computing abelian programs, based only on one-way functions. 2. A protocol for selection functions, based on any linearly homomorphic encryption scheme. Compared with previous bottleneck-efficient constructions, our protocols can be based on a wider range of assumptions, and avoid the use of fully homomorphic encryption.
2022
PKC
Traceable PRFs: Full Collusion Resistance and Active Security 📺
The main goal of traceable cryptography is to protect against unauthorized redistribution of cryptographic functionalities. Such schemes provide a way to embed identities (i.e., a "mark") within cryptographic objects (e.g., decryption keys in an encryption scheme, signing keys in a signature scheme). In turn, the tracing guarantee ensures that any "pirate device" that successfully replicates the underlying functionality can be successfully traced to the set of identities used to build the device. In this work, we study traceable pseudorandom functions (PRFs). As PRFs are the workhorses of symmetric cryptography, traceable PRFs are useful for augmenting symmetric cryptographic primitives with strong traceable security guarantees. However, existing constructions of traceable PRFs either rely on strong notions like indistinguishability obfuscation or satisfy weak security guarantees like single-key security (i.e., tracing only works against adversaries that possess a single marked key). In this work, we show how to use fingerprinting codes to upgrade a single-key traceable PRF into a fully collusion resistant traceable PRF, where security holds regardless of how many keys the adversary possesses. We additionally introduce a stronger notion of security where tracing security holds even against active adversaries that have oracle access to the tracing algorithm. In conjunction with known constructions of single-key traceable PRFs, we obtain the first fully collusion resistant traceable PRF from standard lattice assumptions. Our traceable PRFs directly imply new lattice-based secret-key traitor tracing schemes that are CCA-secure and where tracing security holds against active adversaries that have access to the tracing oracle.
2022
PKC
Post-Quantum Anonymous One-Sided Authenticated Key Exchange without Random Oracles 📺
Authenticated Key Exchange (AKE) is a cryptographic protocol to share a common session key among multiple parties. Usually, PKI-based AKE schemes are designed to guarantee secrecy of the session key and mutual authentication. However, in practice, there are many cases where mutual authentication is undesirable, such as in anonymous networks like Tor and Riffle, or difficult to achieve due to certificate management at the user level, as on the Internet. Goldberg et al. formulated a model of anonymous one-sided AKE, which guarantees the anonymity of the client by allowing only the client to authenticate the server, and proposed a concrete scheme. However, existing anonymous one-sided AKE schemes are only known to be secure in the random oracle model. In this paper, we propose generic constructions of anonymous one-sided AKE in the random oracle model and in the standard model, respectively. Our constructions allow us to construct the first post-quantum anonymous one-sided AKE scheme from isogenies in the standard model.
2022
PKC
Efficient Verifiable Partially-Decryptable Commitments from Lattices and Applications 📺
We introduce verifiable partially-decryptable commitments (VPDC), as a building block for constructing efficient privacy-preserving protocols supporting auditability by a trusted party. A VPDC is an extension of a commitment along with an accompanying proof, convincing a verifier that (i) the given commitment is well-formed and (ii) a certain part of the committed message can be decrypted using a (secret) trapdoor known to a trusted party. We first formalize VPDCs and then introduce a general decryption feasibility result that overcomes the challenges in relaxed proofs arising in the lattice setting. Our general result can be applied to a wide class of Fiat-Shamir based protocols and may be of independent interest. Next, we show how to extend the commonly used lattice-based `Hashed-Message Commitment' (HMC) scheme into a succinct and efficient VPDC. In particular, we devise a novel `gadget'-based Regev-style (partial) decryption method, compatible with efficient relaxed lattice-based zero-knowledge proofs. We prove the soundness of our VPDC in the setting of adversarial proofs, where a prover tries to create a valid VPDC output that fails in decryption. To demonstrate the effectiveness of our results, we extend a private blockchain payment protocol, MatRiCT, by Esgin et al. (ACM CCS '19) into a formally auditable construction, which we call MatRiCT-Au, with very low communication and computation overheads over MatRiCT.
2022
PKC
On Pairing-Free Blind Signature Schemes in the Algebraic Group Model 📺
Studying the security and efficiency of blind signatures is an important goal for privacy-sensitive applications. In particular, for large-scale settings (e.g., cryptocurrency tumblers), it is important for schemes to scale well with the number of users in the system. Unfortunately, all practical schemes either 1) rely on (very strong) number-theoretic hardness assumptions and/or computationally expensive pairing operations over bilinear groups, or 2) support only a polylogarithmic number of concurrent (i.e., arbitrarily interleaved) signing sessions per public key. In this work, we revisit the security of two pairing-free blind signature schemes in the Algebraic Group Model (AGM) + Random Oracle Model (ROM). Concretely, 1. We consider the security of Abe's scheme (EUROCRYPT '01), which is known to have a flawed proof in the plain ROM. We adapt the scheme to allow a partially blind variant and give a proof of the new scheme under the discrete logarithm assumption in the AGM+ROM, even for (polynomially many) concurrent signing sessions. 2. We then prove that the popular blind Schnorr scheme is secure under the one-more discrete logarithm assumption if the signatures are issued sequentially. While the work of Fuchsbauer et al. (EUROCRYPT '20) proves the security of the blind Schnorr scheme for concurrent signing sessions in the AGM+ROM, its underlying assumption, ROS, is proven false by Benhamouda et al. (EUROCRYPT '21) when more than polylogarithmically many signatures are issued. Given the recent progress, we present the first security analysis of the blind Schnorr scheme in the slightly weaker sequential setting. We also show that our security proof reduces from the weakest possible assumption, with respect to known reduction techniques.
2022
PKC
A New Security Notion for PKC in the Standard Model: Weaker, Simpler, and Still Realizing Secure Channels 📺
Encryption satisfying CCA2 security is commonly known to be unnecessarily strong for realizing secure channels. Moreover, CCA2 constructions in the standard model are far from being competitive practical alternatives to constructions via random oracle. A promising research area to alleviate this problem is weaker security notions—like IND-RCCA secure encryption or IND-atag-wCCA secure tag-based encryption—which are still able to facilitate secure message transfer (SMT) via authenticated channels. In this paper we introduce the concept of sender-binding encryption (SBE), unifying prior approaches of SMT construction in the universal composability (UC) model. We furthermore develop the corresponding non-trivial security notion of IND-SB-CPA and formally prove that it suffices for realizing SMT in conjunction with authenticated channels. Our notion is the weakest so far in the sense that it can be generically constructed from the weakest prior notions—RCCA and atag-wCCA—without additional assumptions, while the reverse is not true. A direct consequence is that IND-stag-wCCA, which is strictly weaker than IND-atag-wCCA but stronger than our IND-SB-CPA, can be used to construct a secure channel. Finally, we give an efficient IND-SB-CPA secure construction in the standard model from IND-CPA secure double receiver encryption (DRE) based on McEliece. This shows that IND-SB-CPA security yields simpler and more efficient constructions in the standard model than the weakest prior notions, i.e., IND-atag-wCCA and IND-stag-wCCA.
2022
PKC
Post-quantum Security of Plain OAEP Transform 📺
In this paper, we show that the OAEP transform is indistinguishable under chosen ciphertext attack in the quantum random oracle model if the underlying trapdoor permutation is quantum partial-domain one-way. The existing post-quantum security analysis of OAEP (TCC 2016-B) requires a modification to the OAEP transform using an extra hash function. We prove the security of the OAEP transform without any modification, which affirmatively answers an open question posed in the NTRU submission, one of the finalists of the NIST competition.
2022
PKC
Rational Modular Encoding in the DCR Setting: Non-Interactive Range Proofs and Paillier-Based Naor-Yung in the Standard Model 📺
Range proofs allow a sender to convince a verifier that committed integers belong to an interval without revealing anything else. So far, all known non-interactive range proofs in the standard model rely on groups endowed with a bilinear map. Moreover, they either require the group order to be larger than the range of any proven statement or they suffer from a wasteful rate. Recently (Eurocrypt'21), Couteau et al. introduced a new approach to efficiently prove range membership by encoding integers as a modular ratio between small integers. We show that their technique can be transposed to the standard model under the Decisional Composite Residuosity (DCR) assumption. Interestingly, with this modification, the size of ranges is not a priori restricted by the common reference string. It also gives a constant ratio between the size of ranges and proofs. Moreover, we show that their technique of encoding messages as bounded rationals provides a secure standard-model instantiation of the Naor-Yung CCA2 encryption paradigm under the DCR assumption. Keywords: Range proofs, NIZK, standard model, Naor-Yung.
2022
PKC
Lockable Obfuscation from Circularly Insecure Fully Homomorphic Encryption 📺
In a lockable obfuscation scheme, a party called the obfuscator takes as input a circuit C, a lock value y, and a message m, and outputs an obfuscated circuit. Given the obfuscated circuit, an evaluator can run it on an input x and learn the message if C(x) = y. For security, we require that the obfuscation reveals no information about the circuit as long as the lock y has high entropy even given the circuit C. The only known constructions of lockable obfuscation schemes require indistinguishability obfuscation (iO) or the learning with errors (LWE) assumption. Furthermore, in terms of technique, all known constructions, excluding the iO-based ones, are built from provably secure variations of graph-induced multilinear maps. We show a generic construction of a lockable obfuscation scheme built from a (leveled) fully homomorphic encryption scheme that is circularly insecure. Specifically, we need a fully homomorphic encryption scheme that is secure under chosen-plaintext attack (IND-CPA) but for which there is an efficient cycle tester that can detect encrypted key cycles. Our finding sheds new light on how to construct lockable obfuscation schemes and shows why cycle tester constructions were helpful in the design of lockable obfuscation schemes. One of the many use cases for lockable obfuscation schemes is constructions of IND-CPA secure but circularly insecure encryption schemes. Our work shows that the connection between circularly insecure encryption and lockable obfuscation goes in both directions.
2022
PKC
Improved Constructions of Anonymous Credentials From Structure-Preserving Signatures on Equivalence Classes 📺
Anonymous attribute-based credentials (ABCs) are a powerful tool allowing users to authenticate while maintaining privacy. When instantiated from structure-preserving signatures on equivalence classes (SPS-EQ) we obtain a controlled form of malleability, and hence increased functionality and privacy for the user. Existing constructions consider equivalence classes on the message space, allowing the joint randomization of credentials and the corresponding signatures on them. In this work, we additionally consider equivalence classes on the signing-key space. In this regard, we obtain a \emph{signer hiding} notion, where the issuing organization is not revealed when a user shows a credential. To achieve this, we instantiate the ABC framework of Fuchsbauer, Hanser, and Slamanig (FHS, Journal of Cryptology '19) with a recent SPS-EQ scheme (ASIACRYPT '19) modified to support a fully adaptive NIZK from the framework of Couteau and Hartmann (CRYPTO '20). We also show how to obtain Mercurial Signatures (CT-RSA, 2019), extending the application of our construction to anonymous delegatable credentials. To further increase functionality and efficiency, we augment the set-commitment scheme of FHS19 to support openings on attribute sets disjoint from those possessed by the user, while integrating a proof of exponentiation to allow for a more efficient verifier. Instantiating in the CRS model, we obtain an efficient credential system, anonymous under malicious organization keys, with increased expressiveness and privacy, proven secure in the standard model.
2022
PKC
Reusable Two-Round MPC from LPN 📺
We present a new construction of maliciously-secure, two-round multiparty computation (MPC) in the CRS model, where the first message is reusable an unbounded number of times. The security of the protocol relies on the Learning Parity with Noise (LPN) assumption with inverse polynomial noise rate $1/n^{1-\epsilon}$ for small enough constant $\epsilon$, where $n$ is the LPN dimension. Prior works on reusable two-round MPC required assumptions such as DDH or LWE that imply some flavor of homomorphic computation. We obtain our result in two steps: - In the first step, we construct a two-round MPC protocol in the {\it silent pre-processing model} (Boyle et al., Crypto 2019). Specifically, the parties engage in a computationally inexpensive setup procedure that generates some correlated random strings. Then, the parties commit to their inputs. Finally, each party sends a message depending on the function to be computed, and these messages can be decoded to obtain the output. Crucially, the complexity of the pre-processing phase and the input commitment phase do not grow with the size of the circuit to be computed. We call this {\it multiparty silent NISC} (msNISC), generalizing the notion of two-party silent NISC of Boyle et al. (CCS 2019). We provide a construction of msNISC from LPN in the random oracle model. - In the second step, we give a transformation that removes the pre-processing phase and use of random oracle from the previous protocol. This transformation additionally adds (unbounded) reusability of the first round message, giving the first construction of reusable two-round MPC from the LPN assumption. This step makes novel use of randomized encoding of circuits (Applebaum et al., FOCS 2004) and a variant of the ``tree of MPC messages" technique of Ananth et al. and Bartusek et al. (TCC 2020).
2022
PKC
Radical Isogenies on Montgomery Curves 📺
We work on some open problems concerning radical isogenies. Radical isogenies are formulas, proposed by Castryck, Decru, and Vercauteren at Asiacrypt 2020, for computing chains of N-isogenies for small N. These formulas do not require generating a point of order N that generates the kernel, and they accelerate some isogeny-based cryptosystems like CSIDH. On the other hand, since these formulas use Tate normal forms, one needs to transform Tate normal forms into curves with efficient arithmetic, e.g., Montgomery curves. In this paper, we propose radical-isogeny formulas of degrees 3 and 4 on Montgomery curves. Our formulas compute values that determine Montgomery curves, from which one can efficiently recover Montgomery coefficients, and they are more efficient for some cryptosystems than the original radical isogenies. In addition, we prove a conjecture left open by Castryck et al. relating to radical isogenies of degree 4.
2022
PKC
Lattice-based Signatures with Tight Adaptive Corruptions and More 📺
We construct the first tightly secure signature schemes in the multi-user setting with adaptive corruptions from lattices. In stark contrast to the previous tight constructions whose security is solely based on number-theoretic assumptions, our schemes are based on the Learning with Errors (LWE) assumption which is supposed to be post-quantum secure. The security of our scheme is independent of the numbers of users and signing queries, and it is in the non-programmable random oracle model. Our LWE-based scheme is compact, namely, its signatures contain only a constant number of lattice vectors. At the core of our construction are a new abstraction of the existing lossy identification (ID) schemes using dual-mode commitment schemes and a refinement of the framework by Diemert et al. (PKC 2021) which transforms a lossy ID scheme to a signature using sequential OR proofs. In combination, we obtain a tight generic construction of signatures from dual-mode commitments in the multi-user setting. Improving the work of Diemert et al., our new approach can be instantiated using not only the LWE assumption, but also an isogeny-based assumption. We stress that our LWE-based lossy ID scheme in the intermediate step uses a conceptually different idea than the previous lattice-based ones. Of independent interest, we formally rule out the possibility that the aforementioned ``ID-to-Signature'' methodology can work tightly using parallel OR proofs. In addition to the results of Fischlin et al. (EUROCRYPT 2020), our impossibility result shows a qualitative difference between both forms of OR proofs in terms of tightness.
2022
PKC
Logarithmic-Size (Linkable) Threshold Ring Signatures in the Plain Model 📺
A $1$-out-of-$N$ ring signature scheme, introduced by Rivest, Shamir, and Tauman-Kalai (ASIACRYPT '01), allows a signer to sign a message as part of a set of $N$ parties (the so-called ``ring''), remaining anonymous to any verifier, including other members of the ring. Threshold ring (or ``thring'') signatures generalize ring signatures to $t$-out-of-$N$ parties, with $t \geq 1$, who anonymously sign messages and show that they are distinct signers (Bresson et al., CRYPTO'02). Until recently, there was no construction of ring signatures that both $(i)$ had logarithmic signature size in $N$, and $(ii)$ was secure in the plain model. The work of Backes et al. (EUROCRYPT'19) resolved both these issues. However, threshold ring signatures have their own particular problem: with a threshold $t \geq 1$, signers must often reveal their identities to the other signers as part of the signing process. This is an issue in situations where a ring member has something controversial to sign; he may feel uncomfortable requesting that other members join the threshold, as this reveals his identity. Building on the Backes et al. template, in this work we present the first construction of a thring signature that is logarithmic-sized in $N$, is secure in the plain model, and does not require signers to interact with each other to produce the thring signature. We also present a linkable counterpart to our construction, which supports a fine-grained control of linkability. Moreover, our thring signatures can easily be adapted to achieve the recent notions of claimability and repudiability (Park and Sealfon, CRYPTO'19).
2022
PKC
Low-Communication Multiparty Triple Generation for SPDZ from Ring-LPN 📺
The SPDZ protocol for multi-party computation relies on a correlated randomness setup consisting of authenticated multiplication triples. A recent line of work by Boyle et al. (Crypto 2019, Crypto 2020) has investigated the possibility of producing this correlated randomness in a \emph{silent preprocessing} phase, which involves a ``small'' setup protocol with less communication than the total size of the triples being produced. These works do this using a tool called a \emph{pseudorandom correlation generator} (PCG), which allows a large batch of correlated randomness to be compressed into a set of smaller, correlated seeds. However, existing methods for compressing SPDZ triples only apply to the 2-party setting. In this work, we construct a PCG for producing SPDZ triples over large prime fields in the multi-party setting. The security of our PCG is based on the ring-LPN assumption over fields, similar to the work of Boyle et al. (Crypto 2020) in the 2-party setting. We also present a corresponding, actively secure setup protocol, which can be used to generate the PCG seeds and instantiate SPDZ with a silent preprocessing phase. As a building block, which may be of independent interest, we construct a new type of 3-party distributed point function supporting outputs over arbitrary groups (including large prime order), as well as an efficient protocol for setting up our DPF keys with active security.
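As background for the correlated randomness mentioned above, the sketch below shows the shape of a SPDZ-style authenticated multiplication triple over a prime field. The field size, party count and variable names are illustrative, and the sketch says nothing about how the PCG compresses such triples.

```python
# A minimal sketch of an authenticated multiplication triple: additive shares of (a, b, c)
# with c = a*b, plus additive shares of the MACs alpha*a, alpha*b, alpha*c under a global
# MAC key alpha (which is itself secret-shared in SPDZ). Illustrative only.
import random

p = 2**61 - 1  # prime field modulus (illustrative choice)

def share(x, n):
    """Split x into n additive shares modulo p."""
    shares = [random.randrange(p) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % p)
    return shares

n = 3                        # number of parties
alpha = random.randrange(p)  # global MAC key
a, b = random.randrange(p), random.randrange(p)
c = a * b % p

triple = {name: share(val, n) for name, val in
          [("a", a), ("b", b), ("c", c),
           ("ma", alpha * a % p), ("mb", alpha * b % p), ("mc", alpha * c % p)]}

# Sanity check: reconstructed values satisfy c = a*b and the MAC relations.
rec = {k: sum(v) % p for k, v in triple.items()}
assert rec["c"] == rec["a"] * rec["b"] % p
assert rec["mc"] == alpha * rec["c"] % p
```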
2022
PKC
CNF-FSS and its Applications 📺
Function Secret Sharing (FSS), introduced by Boyle, Gilboa and Ishai~\cite{BGI15}, extends the classical notion of secret-sharing a \textit{value} to secret sharing a \textit{function}. Namely, for a secret function $f$ (from a class $\cal F$), FSS provides a sharing of $f$ whereby {\em succinct} shares (``keys'') are distributed to a set of parties, so that later the parties can non-interactively compute an additive sharing of $f(x)$, for any input $x$ in the domain of $f$. Previous work on FSS concentrated mostly on the two-party case, where highly efficient schemes are obtained for some simple, yet extremely useful, classes $\cal F$ (in particular, FSS for the class of point functions, a task referred to as DPF~--~Distributed Point Functions~\cite{GI14,BGI15}). In this paper, we concentrate on the multi-party case, with $p\ge 3$ parties and $t$-security ($1\le t<p$). First, we introduce the notion of \textsf{CNF-DPF} (or, more generally, \textsf{CNF-FSS}), where the scheme uses the CNF version of secret sharing (rather than additive sharing) to share each value $f(x)$. We then demonstrate the utility of CNF-DPF by providing several applications. Our main result shows how CNF-DPF can be used to achieve substantial asymptotic improvement in communication complexity when using it as a building block for constructing {\em standard} $(t,p)$-DPF protocols that tolerate $t>1$ (semi-honest) corruptions (of the $p$ parties). For example, we build a 2-out-of-5 secure (standard) DPF scheme of communication complexity $O(N^{1/4})$, where $N$ is the domain size of $f$ (compared with the current best-known of $O(N^{1/2})$ for $(2,5)$-DPF). More generally, with $p>dt$ parties, we give a $(t,p)$-DPF whose communication grows as $O(N^{1/2d})$ (rather than $O(\sqrt{N})$ that follows from the $(p-1,p)$-DPF scheme of \cite{BGI15}). We also present a 1-out-of-3 secure CNF-DPF scheme, in which each party holds two of the three keys, with poly-logarithmic communication complexity. These results have immediate implications to scenarios where (multi-server) DPF was shown to be applicable. For example, we show how to use such a scheme to obtain asymptotic improvement ($O(\log^2N)$ versus $O(\sqrt{N})$) in communication complexity over the 3-party protocol of~\cite{BKKO20}.
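To make the distinction the paper builds on concrete, the toy sketch below contrasts additive sharing with CNF (replicated) sharing of a single value among three parties with threshold t = 1; CNF-FSS applies the same idea share-wise to the outputs f(x). The modulus and values are arbitrary, and this is not the paper's construction.

```python
# A minimal sketch: additive sharing vs. CNF (replicated) sharing for p = 3 parties, t = 1.
import random

q = 2**32           # share values modulo q (illustrative)
secret = 1234567

# Additive sharing: secret = s1 + s2 + s3 (mod q); party i holds only s_i.
s = [random.randrange(q) for _ in range(2)]
s.append((secret - sum(s)) % q)
additive_view = {i: [s[i]] for i in range(3)}

# CNF (replicated) sharing for t = 1: party i holds every share except s_i,
# so any single party misses exactly one additive share and learns nothing.
cnf_view = {i: [s[j] for j in range(3) if j != i] for i in range(3)}

assert sum(s) % q == secret % q
assert all(len(v) == 1 for v in additive_view.values())
assert all(len(v) == 2 for v in cnf_view.values())
```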
2022
PKC
Count Me In! Extendablity for Threshold Ring Signatures 📺
Ring signatures enable a signer to sign a message on behalf of a group anonymously, without revealing her identity. Similarly, threshold ring signatures allow several signers to sign the same message on behalf of a group; while the combined signature reveals that some threshold t of group members signed the message, it does not leak anything else about the signers’ identities. Anonymity is a central feature in threshold ring signature applications, such as whistleblowing, e-voting and privacy-preserving cryptocurrencies: it is often crucial for signers to remain anonymous even from their fellow signers. When the generation of a signature requires interaction, this is difficult to achieve. There exist threshold ring signatures with non-interactive signing — where signers locally produce partial signatures which can then be aggregated — but a limitation of existing threshold ring signature constructions is that all of the signers must agree on the group on whose behalf they are signing, which implicitly assumes some coordination amongst them. The need to agree on a group before generating a signature also prevents others — from outside that group — from endorsing a message by adding their signature to the statement post-factum. We overcome this limitation by introducing extendability for ring signatures, same-message linkable ring signatures, and threshold ring signatures. Extendability allows an untrusted third party to take a signature and extend it by enlarging its anonymity set. In the extendable threshold ring signature, two signatures on the same message which have been extended to the same anonymity set can then be combined into one signature with a higher threshold. This enhances signers’ anonymity, and enables new signers to anonymously support a statement already made by others. For each of those primitives, we formalize the syntax and provide a meaningful security model which includes different flavors of anonymous extendability. In addition, we present concrete realizations of each primitive and formally prove their security relying on signatures of knowledge and the hardness of the discrete logarithm problem. We also describe a generic transformation to obtain extendable threshold ring signatures from same-message-linkable extendable ring signatures. Finally, we implement and benchmark our constructions.
2022
PKC
Encapsulated Search Index : Public-Key, Sub-linear, Distributed, and Delegatable 📺
We build the first *sub-linear* (in fact, potentially constant-time) *public-key* searchable encryption system: - server can publish a public key $PK$. - anybody can build an encrypted index for document $D$ under $PK$. - client holding the index can obtain a token $z_w$ from the server to check if a keyword $w$ belongs to $D$. - search using $z_w$ is almost as fast (e.g., sub-linear) as the non-private search. - server granting the token does not learn anything about the document $D$, beyond the keyword $w$. - yet, the token $z_w$ is specific to the pair $(D,w)$: the client does not learn if other keywords $w'\neq w$ belong to $D$, or if $w$ belongs to other, freshly indexed documents $D'$. - server cannot fool the client by giving a wrong token $z_w$. We call such a primitive *encapsulated search index* (ESI). Our ESI scheme can be made $(t,n)$-distributed among $n$ servers in the best possible way: *non-interactive*, verifiable, and resilient to any coalition of up to $(t-1)$ malicious servers. We also introduce the notion of *delegatable* ESI and show how to extend our construction to this setting. Our solution --- including public indexing, sub-linear search, delegation, and distributed token generation --- is deployed as a commercial application by a real-world company.
2022
PKC
KDM Security for the Fujisaki-Okamoto Transformations in the QROM 📺
Key dependent message (KDM) security is a security notion that guarantees confidentiality of communication even if secret keys are encrypted. KDM security has found a number of applications in practical situations such as hard-disk encryption systems, anonymous credentials, and bootstrapping of fully homomorphic encryption. Recently, it also found an application in quantum delegation protocols, as shown by Zhang (TCC 2019). In this work, we investigate the KDM security of existing practical public-key encryption (PKE) schemes proposed in the quantum random oracle model (QROM). Concretely, we study a PKE scheme whose KEM is constructed by using Fujisaki-Okamoto (FO) transformations in the QROM. FO transformations are applied to an IND-CPA secure PKE scheme and yield an IND-CCA secure key encapsulation mechanism (KEM). Then, we show the following results. - We can reduce the KDM-CPA security in the QROM of a PKE scheme whose KEM is derived from any of the FO transformations proposed by Hofheinz et al. (TCC 2017) to the IND-CPA security of the underlying PKE scheme, without square root security loss. For this result we use a one-time pad (OTP) as the DEM to convert the KEM into a PKE scheme. - We can reduce the KDM-CCA security in the QROM of a PKE scheme whose KEM is derived from a single variant of the FO transformation proposed by Hofheinz et al. (TCC 2017) to the IND-CPA security of the underlying PKE scheme, without square root security loss. For this result, we use an OTP-then-MAC construction as the DEM to convert the KEM into a PKE scheme. Also, we require a mild injectivity assumption for the underlying IND-CPA secure PKE scheme. In order to avoid square root security loss, we use a double-sided one-way to hiding (O2H) lemma proposed by Kuchta et al. (EUROCRYPT 2020). In the context of KDM security, there is a technical hurdle for using the double-sided O2H lemma due to the circularity issue. Our main technical contribution is to overcome this hurdle.
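As a reference point for the KEM-to-PKE step mentioned above, the following toy sketch shows the generic KEM/DEM combination with a one-time pad as the DEM. The ToyKEM below is a deliberately insecure placeholder used only to make the plumbing runnable; it is not the FO-derived KEM from the paper.

```python
# A minimal, runnable sketch of the KEM/DEM paradigm with a one-time pad as the DEM.
# ToyKEM has no security whatsoever; it only illustrates the interface an IND-CCA KEM provides.
import os

class ToyKEM:
    KEYLEN = 32
    def keygen(self):
        sk = os.urandom(self.KEYLEN)
        pk = sk                      # insecure toy: pk equals sk
        return pk, sk
    def encaps(self, pk):
        k = os.urandom(self.KEYLEN)  # fresh session key
        ct = bytes(a ^ b for a, b in zip(k, pk))
        return ct, k
    def decaps(self, sk, ct):
        return bytes(a ^ b for a, b in zip(ct, sk))

def pke_encrypt(kem, pk, msg: bytes):
    kem_ct, k = kem.encaps(pk)
    assert len(msg) <= len(k)        # OTP: message no longer than the session key
    dem_ct = bytes(m ^ ki for m, ki in zip(msg, k))
    return kem_ct, dem_ct

def pke_decrypt(kem, sk, kem_ct, dem_ct):
    k = kem.decaps(sk, kem_ct)
    return bytes(c ^ ki for c, ki in zip(dem_ct, k))

kem = ToyKEM()
pk, sk = kem.keygen()
ct = pke_encrypt(kem, pk, b"hello KDM")
assert pke_decrypt(kem, sk, *ct) == b"hello KDM"
```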
2022
PKC
Making Private Function Evaluation Safer, Faster, and Simpler 📺
In the problem of two-party \emph{private function evaluation} (PFE), one party $P_A$ holds a \emph{private function} $f$ and (optionally) a private input $x_A$, while the other party $P_B$ possesses a private input $x_B$. Their goal is to evaluate $f$ on $x_A$ and $x_B$, and one or both parties may obtain the evaluation result $f(x_A, x_B)$ while no other information beyond $f(x_A, x_B)$ is revealed. In this paper, we revisit the two-party PFE problem and provide several enhancements. We propose the \emph{first} constant-round actively secure PFE protocol with linear complexity. Based on this result, we further provide the \emph{first} constant-round publicly verifiable covertly (PVC) secure PFE protocol with linear complexity to gain better efficiency. For instance, when the deterrence factor is $\epsilon = 1/2$, compared to the passively secure protocol, its communication cost is very close and its computation cost is around $2.6\times$. In our constructions, as a by-product, we design a specific protocol for proving that a list of ElGamal ciphertexts is derived from an \emph{extended permutation} performed on a given list of elements. It should be noted that this protocol greatly improves the previous result and may be of independent interest. In addition, a reusability property is added to our two PFE protocols. Namely, if the same function $f$ is involved in multiple executions of the protocol between $P_A$ and $P_B$, then the protocol could be executed more efficiently from the second execution. Moreover, we further extend this property to be \emph{global}, such that it supports multiple executions for the same $f$ in a reusable fashion between $P_A$ and \emph{arbitrary} parties playing the role of $P_B$.
2022
PKC
Lifting Standard Model Reductions to Common Setup Assumptions 📺
In this paper we show that standard model black-box reductions naturally lift to various setup assumptions, such as the random oracle (ROM) or ideal cipher model. Concretely, we prove that a black-box reduction from a security notion $P$ to a security notion $Q$ in the standard model can be turned into a non-programmable black-box reduction from $P_{\mathcal{O}}$ to $Q_{\mathcal{O}}$ in a model with a setup assumption $\mathcal{O}$, where $P_{\mathcal{O}}$ and $Q_{\mathcal{O}}$ are the natural extensions of $P$ and $Q$ to a model with the setup assumption $\mathcal{O}$. Our results rely on a generalization of the recent framework by Hofheinz and Nguyen (PKC 2019) to support primitives which make use of a trusted setup. Our framework encompasses standard idealized settings like the random oracle and the ideal cipher model. At the core of our main result lie novel properties of negligible functions that can be of independent interest.
2022
PKC
ECLIPSE: Enhanced Compiling method for Pedersen-committed zkSNARK Engines 📺
We advance the state of the art for zero-knowledge commit-and-prove SNARKs (CP-SNARKs). CP-SNARKs are an important class of SNARKs which, using commitments as ``glue'', allow one to efficiently combine proof systems---e.g., general-purpose SNARKs (an efficient way to prove statements about circuits) and $\Sigma$-protocols (an efficient way to prove statements about group operations). Thus, CP-SNARKs make it possible to efficiently provide zero-knowledge proofs for composite statements such as $h=H(g^{x})$ for some hash function $H$. Our main contribution is providing the first construction of CP-SNARKs where the proof size is succinct in the number of commitments. We achieve our result by providing a general technique to compile Algebraic Holographic Proofs (AHP) (an underlying abstraction used in many modern SNARKs) with special ``decomposition'' properties into an efficient CP-SNARK. We then show that some of the most efficient AHP constructions---Marlin, PLONK, and Sonic---satisfy our compilation requirements. Our resulting SNARKs achieve universal and updatable reference strings, which are highly desirable features as they greatly reduce the trust needed in the SNARK setup phase.
2022
PKC
Time-Memory tradeoffs for large-weight syndrome decoding in ternary codes 📺
We propose new algorithms for solving a class of large-weight syndrome decoding problems in random ternary codes. This is the main generic problem underlying the security of the recent Wave signature scheme (Debris-Alazard et al., 2019), and it has so far received limited attention. At SAC 2019 Bricout et al. proposed a reduction to a binary subset sum problem requiring many solutions, and used it to obtain the fastest known algorithm. However — as is often the case in the coding theory literature — its memory cost is proportional to its time cost, which makes it unattractive in most applications. In this work we propose a range of memory-efficient algorithms for this problem, which describe a near-continuous time-memory tradeoff curve. Those are obtained by using the same reduction as Bricout et al. and carefully instantiating the derived subset sum problem with exhaustive-search algorithms from the literature, in particular dissection (Dinur et al., 2012) and dissection in tree (Dinur, 2019). We also spend significant effort adapting those algorithms to decrease their granularity, thereby allowing them to be smoothly used in a syndrome decoding context when not all the solutions to the subset sum problem are required. For a proposed parameter set for Wave, one of our best instantiations is estimated to cost 2^177 bit operations and to require 2^88.5 bits of storage, while we estimate these to be 2^152 and 2^144 for the best algorithm from Bricout et al.
2022
PKC
Two-Round Oblivious Linear Evaluation from Learning with Errors 📺
Oblivious Linear Evaluation (OLE) is the arithmetic analogue of the well-known oblivious transfer primitive. It allows a sender, holding an affine function $f(x)=a+bx$ over a finite field or ring, to let a receiver learn $f(w)$ for a $w$ of the receiver's choice. In terms of security, the sender remains oblivious of the receiver's input $w$, whereas the receiver learns nothing beyond $f(w)$ about $f$. In recent years, OLE has emerged as an essential building block for constructing efficient, reusable and maliciously-secure two-party computation. In this work, we present efficient two-round protocols for OLE over large fields based on the Learning with Errors (LWE) assumption, providing a full arithmetic generalization of the oblivious transfer protocol of Peikert, Vaikuntanathan and Waters (CRYPTO 2008). At the technical core of our work is a novel extraction technique which allows us to determine whether a non-trivial multiple of some vector is close to a $q$-ary lattice.
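As a reference for the functionality itself (not the LWE-based protocol of this paper), the following sketch shows what an ideal OLE computes over a prime field; the modulus and inputs are arbitrary illustrative choices.

```python
# A minimal sketch of the OLE functionality: the sender inputs (a, b) defining
# f(x) = a + b*x over a prime field, the receiver inputs w and learns only f(w);
# the sender learns nothing about w.
p = 2**31 - 1  # prime field modulus (illustrative)

def ideal_ole(sender_input, receiver_input):
    a, b = sender_input
    w = receiver_input
    receiver_output = (a + b * w) % p   # f(w), the only thing the receiver learns
    sender_output = None                # the sender learns nothing about w
    return sender_output, receiver_output

# Example run
a, b = 17, 42
w = 1000003
_, fw = ideal_ole((a, b), w)
assert fw == (a + b * w) % p
```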
2022
PKC
On the security of OSIDH 📺
The Oriented Supersingular Isogeny Diffie-Hellman (OSIDH) is a post-quantum key exchange scheme recently introduced by Colò and Kohel. It is based on the group action of an ideal class group of an imaginary quadratic order on a subset of supersingular elliptic curves, and in this sense it can be viewed as a generalization of the popular isogeny-based key exchange CSIDH. From an algorithmic standpoint, however, OSIDH is quite different from CSIDH. In a sense, OSIDH uses class groups which are more structured than in CSIDH, creating a potential weakness that was already recognized by Colò and Kohel. To circumvent the weakness, they proposed an ingenious way to realize a key exchange by exchanging partial information on how the class group acts in the neighborhood of the public curves, and conjectured that this additional information would not impact security. In this work we revisit the security of OSIDH by presenting a new attack, building upon previous work of Onuki. Our attack has exponential complexity, but, unlike Onuki's attack, it practically breaks Colò and Kohel's parameters. We also discuss countermeasures to our attack, and analyze their impact on OSIDH, both from an efficiency and a functionality point of view.
2022
PKC
Storing and Retrieving Secrets on a Blockchain 📺
A secret sharing scheme enables one party to distribute shares of a secret to n parties and ensures that an adversary in control of t out of n parties will learn no information about the secret. However, traditional secret sharing schemes are often insufficient, especially for applications in which the set of parties who hold the secret shares might change over time. To achieve security in this setting, dynamic proactive secret sharing (DPSS) is used. DPSS schemes proactively update the secret shares held by the parties and allow changes to the set of parties holding the secrets. We propose FaB-DPSS (FAst Batched DPSS) -- a new and highly optimized batched DPSS scheme. While previous work on batched DPSS focuses on a single client submitting a batch of secrets and does not allow storing and releasing secrets independently, we allow multiple different clients to dynamically share and release secrets. FaB-DPSS is the most efficient robust DPSS scheme that supports the highest possible adversarial threshold of 1/2. We prove FaB-DPSS secure and implement it. All operations complete in seconds, and we outperform a prior state-of-the-art DPSS scheme by over 6 times. Additionally, we propose new applications of DPSS in the context of blockchains. Specifically, we propose a protocol that uses blockchains and FaB-DPSS to provide conditional secret storage. The protocol allows parties to store secrets along with a release condition, and once a (possibly different) party satisfies this release condition, the secret is privately released to that party. This functionality is similar to extractable witness encryption. While there are numerous compelling applications (e.g., time-lock encryption, one-time programs, and fair multi-party computation) which rely on extractable witness encryption, there are no known efficient constructions (or even constructions based on any well-studied assumptions) of extractable witness encryption. However, by utilizing blockchains and FaB-DPSS, we can easily build all those applications. We provide an implementation of our conditional secret storage protocol as well as several applications building on top of it.
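For context, the sketch below shows plain (t, n) Shamir secret sharing, the classical primitive that DPSS schemes proactivize and make dynamic; it is illustrative only and does not reflect the batched or proactive machinery of FaB-DPSS.

```python
# A minimal sketch of (t, n) Shamir secret sharing over a prime field: sharing via a
# random degree-t polynomial and reconstruction via Lagrange interpolation at 0.
import random

p = 2**61 - 1  # prime field modulus (illustrative)

def shamir_share(secret, t, n):
    """Shares for n parties; any t+1 of them reconstruct, any t learn nothing."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t)]
    def poly(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(i, poly(i)) for i in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation at 0 over F_p."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, p - 2, p)) % p
    return secret

shares = shamir_share(31337, t=2, n=5)
assert shamir_reconstruct(random.sample(shares, 3)) == 31337
```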
2022
TOSC
The Legendre Symbol and the Modulo-2 Operator in Symmetric Schemes over F_p^n: Preimage Attack on Full Grendel 📺
Motivated by modern cryptographic use cases such as multi-party computation (MPC), homomorphic encryption (HE), and zero-knowledge (ZK) protocols, several symmetric schemes that are efficient in these scenarios have recently been proposed in the literature. Some of these schemes are instantiated with low-degree nonlinear functions, for example low-degree power maps (e.g., MiMC, HadesMiMC, Poseidon) or the Toffoli gate (e.g., Ciminion). Others (e.g., Rescue, Vision, Grendel) are instead instantiated via high-degree functions which are easy to evaluate in the target application. A recent example for the latter case is the hash function Grendel, whose nonlinear layer is constructed using the Legendre symbol. In this paper, we analyze high-degree functions such as the Legendre symbol or the modulo-2 operation as building blocks for the nonlinear layer of a cryptographic scheme over F_p^n. Our focus is the security analysis rather than the efficiency in the mentioned use cases. For this purpose, we present several new invertible functions that make use of the Legendre symbol or of the modulo-2 operation. Even though these functions often provide strong statistical properties and ensure a high degree after a few rounds, the main problem is their small number of possible outputs, that is, only three for the Legendre symbol and only two for the modulo-2 operation. By fixing them, it is possible to reduce the overall degree of the function significantly. We exploit this behavior by describing the first preimage attack on full Grendel, and we verify it in practice.
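Since the attack hinges on the Legendre symbol having only three possible outputs, the sketch below shows the map computed via Euler's criterion over a small illustrative prime; it is not Grendel's actual nonlinear layer, just the underlying function.

```python
# A minimal sketch of the Legendre symbol over F_p via Euler's criterion:
# L_p(x) = x^((p-1)/2) mod p, which is 1 for nonzero squares, -1 (i.e., p-1) for
# non-squares, and 0 for x = 0 — only three possible outputs, the small-image
# property exploited by the preimage attack. Illustrative only.

def legendre(x: int, p: int) -> int:
    """Euler's criterion; returns 0, 1, or -1."""
    r = pow(x % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

# Small prime for easy checking (illustrative choice).
assert legendre(5, 11) == 1    # 5 = 4^2 mod 11 is a square
assert legendre(2, 11) == -1   # 2 is a non-square mod 11
assert legendre(0, 11) == 0
```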
2022
TOSC
Weak Tweak-Keys for the CRAFT Block Cipher 📺
CRAFT is a lightweight tweakable Substitution-Permutation-Network (SPN) block cipher optimized for efficient protection of its implementations against Differential Fault Analysis (DFA) attacks. In this paper, we present an equivalent description of CRAFT up to a simple mapping on the plaintext, ciphertext and round tweakeys. We show that the new representation, for a sub-class of keys, leads to a new structure which is a Feistel network, with non-linear operation and key addition only on half the state. Consequently, it reveals a class of weak keys for which CRAFT is less resistant against differential and linear cryptanalyses. As a result, we present one weak-key single-tweak differential attack on 23 rounds (with time complexity of 2^94 encryptions and data complexity of 2^74 chosen plaintext/tweak/ciphertext tuples, working for 2^112 weak keys) and one weak-key related-tweak attack on 26 rounds of the cipher (with time complexity of 2^105 encryptions and data complexity of 2^73 chosen plaintext/tweak/ciphertext tuples, working for 2^108 weak keys). Note that these attacks do not break the security claim of the CRAFT block cipher.
2022
TOSC
Bounds for the Security of Ascon against Differential and Linear Cryptanalysis 📺
The NIST Lightweight Cryptography project aims to standardize symmetric cryptographic designs, including authenticated encryption and hashing, suitable for constrained devices. One essential criterion for the evaluation of the 10 finalists is the evidence for their security against attacks like linear and differential cryptanalysis. For Ascon, one of the finalists and previous winner of the CAESAR competition in the ‘lightweight’ category, there is a large gap between the proven bounds and the best known characteristics found with heuristic tools: The bounds only cover up to 3 rounds with 15 differentially and 13 linearly active S-boxes, insufficient for proving a level of security for the full constructions. In this paper, we propose a new modeling strategy for SAT solvers and derive strong bounds for the round-reduced Ascon permutation. We prove that 4 rounds already ensure that any single characteristic has a differential probability or squared correlation of at most 2^{-72}, and 6 rounds at most 2^{-108}. This is significantly below the bound that could be exploited within the query limit for keyed Ascon modes. These bounds are probably not tight. To achieve this result, we propose a new search strategy of dividing the search space into a large number of subproblems based on ‘girdle patterns’, and show how to exploit the rotational symmetry of Ascon using necklace theory. Additionally, we evaluate and optimize several aspects of the pure SAT model, including the counter implementation and parallelizability, which we expect to be useful for future applications to other models.
2022
TOSC
A Formal Analysis of Boomerang Probabilities 📺
In the past 20 years since their conception, boomerang attacks have become an important tool in the cryptanalysis of block ciphers. In the classical estimate of their success probability, assumptions are made about the independence of the underlying differential trails that are not well-founded. We underline the problems inherent in these independence assumptions by using them to prove that for any boomerang there exists a differential trail over the entire cipher with a higher probability than the boomerang. While cryptanalysts today have a clear understanding that the trails can be dependent, the focus of previous research has mostly gone into using these dependencies to improve attacks but little effort has been put into giving boomerangs and their success probabilities a stronger theoretical underpinning. With this publication, we provide such a formalization. We provide a framework which allows us to formulate and prove rigorous statements about the probabilities involved in boomerang attacks without relying on independence assumptions of the trails. Among these statements is a proof that two-round boomerangs on SPNs with differentially 4-uniform S-boxes always deviate from the classical probability estimate to the largest degree possible. We applied the results of this formalization to analyze the validity of some of the first boomerang attacks. We show that the boomerang constructed in the amplified boomerang attack on Serpent by Kelsey, Kohno, and Schneier has probability zero. For the rectangle attack on Serpent by Dunkelman, Biham, and Keller, we demonstrate that a minuscule fraction of only 2^{-43.4} of all differential trail combinations used in the original attack have a non-zero probability. In spite of this, the probability of the boomerang is in fact a little higher than the original estimate suggests as the non-zero trails have a vastly higher probability than the classical estimate predicts.
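For reference, the classical estimate questioned in this abstract is the standard textbook heuristic (not anything specific to this paper): for a cipher split as E = E_1 ∘ E_0, with a differential trail of probability p over E_0 and one of probability q over E_1, the independence assumptions give the following.

```latex
% Classical boomerang probability estimate under independence assumptions (textbook heuristic):
\Pr[\text{boomerang returns}] \;\approx\; p^2 q^2 ,
% and summing over all compatible trails yields the amplified estimate
\hat{p}^2 \hat{q}^2, \qquad \hat{p}^2 = \sum_i p_i^2, \quad \hat{q}^2 = \sum_j q_j^2 .
```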
2022
TOSC
Influence of the Linear Layer on the Algebraic Degree in SP-Networks 📺
We consider SPN schemes, i.e., schemes whose non-linear layer is defined as the parallel application of t ≥ 1 independent S-Boxes over F_{2^n} and whose linear layer is defined by the multiplication with a (n · t) × (n · t) matrix over F_2. Even if the algebraic representation of a scheme depends on all its components, upper bounds on the growth of the algebraic degree in the literature usually only consider the details of the non-linear layer. Hence a natural question arises: (how) do the details of the linear layer influence the growth of the algebraic degree? We show that the linear layer plays a crucial role in the growth of the algebraic degree and present a new upper bound on the algebraic degree in SP-networks. As main results, we prove that in the case of low-degree round functions with large S-Boxes: (a) an initial exponential growth of the algebraic degree can be followed by a linear growth until the maximum algebraic degree is reached; (b) the rate of the linear growth is proportional to the degree of the linear layer over F_{2^n}^t. Besides providing a theoretical insight, our analysis is particularly relevant for assessing the security of cryptographic permutations designed to be competitive in applications like MPC, FHE, SNARKs, and STARKs, including permutations based on the Hades design strategy. We have verified our findings on small-scale instances and we have compared them against the currently best results in the literature, showing a substantial improvement of upper bounds on the algebraic degree in case of low-degree round functions with large S-Boxes.
2022
TOSC
Security of COFB against Chosen Ciphertext Attacks 📺
COFB is a lightweight Authenticated Encryption with Associated Data (AEAD) mode based on block ciphers. It was proposed in CHES 2017 and is the basis for GIFT-COFB, a finalist in the NIST lightweight standardization project. It comes with provable security results that guarantee its security up to the birthday bound in the nonce-respecting model. However, the designers offer multiple versions of the analysis with different details and the implications of attacks against the scheme are not discussed deeply. In this article, we look at a group of possible forgery and privacy attacks against COFB. We show that the security for both forgery and privacy is bounded by the number of forgery attempts. We show the existence of forgery and privacy attacks with success probability $q_d/2^{n/2}$, given $q_d$ forgery attempts. In particular, we show an attack with $2^{n/2}$ attempts using only a single known-plaintext encryption query against COFB. While these attacks do not contradict the claims made by the designers of GIFT-COFB, they show its limitations in terms of the number of forgery attempts. They also show that, while COFB generates a 128-bit tag, it behaves in a very similar manner to an AEAD scheme with a 64-bit tag. As a result of independent interest, our analysis provides a contradiction to the main theorem of Journal of Cryptology volume 33, pages 703–741 (2020), which includes an improved security proof of COFB compared to the CHES 2017 version. Finally, we discuss the term $nq_d/2^{n/2}$ that appears in the security proof of GIFT-COFB and CHES 2017, showing why there is a security gap between the provable results and the attacks. We emphasize that the results in this article do not threaten the security of GIFT-COFB in the scope of the NIST lightweight cryptography requirements or the claims made by the designers in the specification document of the design.
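To make the stated bound concrete, here is a one-line instantiation; the value n = 128 is the standard block size in this setting, not an additional claim from the abstract.

```latex
% For n = 128 the attack success probability from the abstract becomes
\Pr[\text{forgery}] \;\approx\; \frac{q_d}{2^{n/2}} \;=\; \frac{q_d}{2^{64}},
% i.e. about 2^{64} forgery attempts suffice even though the tag is 128 bits long,
% which is the sense in which COFB behaves like a scheme with a 64-bit tag.
```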
2022
TOSC
Towards Low-Latency Implementation of Linear Layers 📺
Lightweight cryptography features a small footprint and/or low computational complexity. Low-cost implementations of linear layers usually play an important role in lightweight cryptography. Although it has been shown by Boyar et al. that finding the optimal implementation of a linear layer is a Shortest Linear Program (SLP) problem and NP-hard, there exist a variety of heuristic methods to search for near-optimal solutions. This paper considers the low-latency criteria and focuses on the heuristic search of lightweight implementations for linear layers. Most of the prior approaches iteratively combine the inputs (of linear layers) to reach the output, which can be regarded as a forward search. To better adapt to the low-latency criteria, we propose a new framework of backward search that attempts to iteratively split every output (into an XOR of two bits) until all inputs appear. By bounding the time of splitting, the new framework can find a sub-optimal solution with a minimized depth of circuits. We apply our new search algorithm to linear layers of block ciphers and find many low-latency candidates for implementations. Notably, for AES MixColumns, we provide an implementation with 103 XOR gates with a depth of 3, which is among the best hardware implementations of the AES linear layer. Besides, we obtain better implementations in XOR gates for 54.3% of 4256 Maximum Distance Separable (MDS) matrices proposed by Li et al. at FSE 2019. We also achieve an involutory MDS matrix (in $\mathcal{M}_4(\mathrm{GL}(8, \mathbb{F}_2))$) whose implementation uses the lowest number (i.e., 86, saving 2 from the state-of-the-art result) of XORs with the minimum depth.
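As a point of comparison for the search heuristics discussed above, the snippet below only computes the cost and depth of the naive row-by-row implementation of a binary matrix; it is an illustrative baseline with a made-up toy matrix, not the paper's backward-search algorithm.

```python
# Baseline only: naive XOR count and depth for implementing a binary matrix M,
# where each output bit is the XOR of the input bits selected by one row.
# The heuristics in the paper try to beat this baseline; the matrix is a toy example.
import math

def naive_xor_cost_and_depth(matrix):
    """Return (#XOR gates, circuit depth) of the row-by-row implementation."""
    xors, depth = 0, 0
    for row in matrix:
        weight = sum(row)                 # number of input bits feeding this output
        xors += max(weight - 1, 0)        # w - 1 XORs to sum w bits
        if weight > 1:
            depth = max(depth, math.ceil(math.log2(weight)))  # balanced XOR tree
    return xors, depth

toy_matrix = [
    [1, 1, 0, 1],
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 1, 1],
]
print(naive_xor_cost_and_depth(toy_matrix))   # -> (8, 2)
```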
2022
TOSC
Quantum Period Finding is Compression Robust 📺
We study quantum period finding algorithms such as Simon and Shor (and its variant Ekerå-Håstad). For a periodic function f these algorithms produce – via some quantum embedding of f – a quantum superposition $\sum_x |x\rangle |f(x)\rangle$, which requires a certain number of output qubits that represent $|f(x)\rangle$. We show that one can lower this number to a single output qubit by hashing f down to a single bit in an oracle setting. Namely, we replace the embedding of f in quantum period finding circuits by oracle access to several embeddings of hashed versions of f. We show that in expectation this modification only doubles the required number of quantum measurements, while significantly reducing the total number of qubits. For example, for Simon’s algorithm that finds periods in $f : \mathbb{F}_2^n \to \mathbb{F}_2^n$, our hashing technique reduces the required output qubits from n down to 1, and therefore the total number of qubits from 2n to n + 1. We also show that Simon’s algorithm admits real-world applications with only n + 1 qubits by giving a concrete realization of a hashed version of the cryptographic Even-Mansour construction. Moreover, for a variant of Simon’s algorithm on Even-Mansour that requires only classical queries to Even-Mansour, we save a factor of (roughly) 4 in the qubits. Our oracle-based hashed version of the Ekerå-Håstad algorithm for factoring n-bit RSA reduces the required qubits from (3/2 + o(1))n down to (1/2 + o(1))n.
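The core observation behind the hashing trick can be checked classically: if f has period s, then any post-hash of f's output keeps that period (possibly adding spurious collisions, which is why the expected number of measurements roughly doubles). The following toy simulation, with parameters invented for illustration, only verifies this period-preservation property; it does not implement the quantum algorithm.

```python
# Classical sanity check of the hashing idea: a period of f survives hashing the
# output of f down to a single bit. Toy parameters; not code from the paper.
import random

n = 6
s = 0b101101                        # hidden period
_table = {}

def f(x):
    """A random function that is constant on each coset {x, x ^ s}."""
    key = min(x, x ^ s)
    if key not in _table:
        _table[key] = random.randrange(2 ** n)
    return _table[key]

def h(y):
    """A 1-bit 'hash' of the n-bit output (here: parity of the bits)."""
    return bin(y).count("1") & 1

assert all(f(x) == f(x ^ s) for x in range(2 ** n))          # f has period s
assert all(h(f(x)) == h(f(x ^ s)) for x in range(2 ** n))    # so does h(f(.))
print("period preserved after hashing the output down to one bit")
```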
2021
ASIACRYPT
Generic Framework for Key-Guessing Improvements 📺
We propose a general technique to improve the key-guessing step of several attacks on block ciphers. This is achieved by defining and studying some new properties of the associated S-boxes and by representing them as a special type of decision trees that are crucial for finding fine-grained guessing strategies for various attack vectors. We propose and implement an algorithm that efficiently finds such trees, and use it to provide several applications of this approach, including the best known attacks on NOEKEON, GIFT, and RECTANGLE.
2021
ASIACRYPT
A Systematic Approach and Analysis of Key Mismatch Attacks on Lattice-Based NIST Candidate KEMs 📺
Research on key mismatch attacks against lattice-based KEMs is an important part of the cryptographic assessment of the ongoing NIST standardization of post-quantum cryptography. There have been a number of these attacks to date. However, a unified method to evaluate these KEMs' resilience under key mismatch attacks is still missing. Since the key measure of efficiency is the number of queries needed to successfully mount such an attack, in this paper, we propose and develop a systematic approach to find lower bounds on the minimum average number of queries needed for such attacks. Our basic idea is to transform the problem of finding the lower bound of queries into finding an optimal binary recovery tree (BRT), where the computations of the lower bounds become essentially the computations of a certain Shannon entropy. The optimal BRT approach also enables us to understand why, for some lattice-based NIST candidate KEMs, there is a big gap between the theoretical bounds and bounds observed in practical attacks, in terms of the number of queries needed. This further leads us to propose a generic improvement method for these existing attacks, which is confirmed by our experiments. Moreover, our proposed method could be directly used to improve the side-channel attacks against CCA-secure NIST candidate KEMs.
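The entropy-based viewpoint mentioned above can be illustrated with a small, self-contained computation: the Shannon entropy of the distribution of a single secret coefficient is a natural information-theoretic yardstick for the number of binary-answer mismatch queries needed per coefficient. The centered binomial distribution below (parameter eta = 2) is merely an example of the kind used by lattice KEMs, not a reproduction of any bound from the paper.

```python
# Illustrative entropy computation: H(D) bits of information are needed on average
# to pin down a coefficient drawn from D, so H(D) lower-bounds the average number
# of yes/no mismatch queries per coefficient. Example distribution only.
from math import log2
from collections import Counter
from itertools import product

def centered_binomial(eta):
    """Distribution of sum(a_i) - sum(b_i) over 2*eta fair coin flips."""
    counts = Counter(sum(a) - sum(b)
                     for a in product(range(2), repeat=eta)
                     for b in product(range(2), repeat=eta))
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

def shannon_entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

D = centered_binomial(2)   # values -2..2 with probabilities 1/16, 4/16, 6/16, 4/16, 1/16
print(f"H(D) = {shannon_entropy(D):.3f} bits per coefficient")   # ~2.031
```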
2021
ASIACRYPT
DEFAULT: Cipher Level Resistance Against Differential Fault Attack 📺
Differential Fault Analysis (DFA) is a well known cryptanalytic technique that exploits faulty outputs of an encryption device. Despite its popularity and similarity with the classical Differential Analysis (DA), a thorough analysis explaining DFA from a designer's point of view is missing in the literature. To the best of our knowledge, no DFA immune cipher at an algorithmic level has been proposed so far. Furthermore, all known DFA countermeasures somehow depend on the device/protocol or on the implementation such as duplication/comparison. As all of these are outside the scope of the cipher designer, we focus on designing a primitive which can protect from DFA on its own. We present the first concept of cipher level DFA resistance which does not rely on any device/protocol related assumption, nor does it depend on any form of duplication. Our construction is simple, software/hardware friendly and DFA security scales up with the state size. It can be plugged before and/or after (almost) any symmetric key cipher and will ensure a non-trivial search complexity against DFA. One key component in our DFA protection layer is an SBox with linear structures. Such SBoxes have never been used in cipher design as they generally perform poorly against differential attacks. We argue that they in fact represent an interesting trade-off between good cryptographic properties and DFA resistance. As a proof of concept, we construct a DFA protecting layer, named DEFAULT-LAYER, as well as a full-fledged block cipher DEFAULT. Our solutions compare favourably to the state-of-the-art, offering advantages over the sophisticated duplication based solutions like impeccable circuits/CRAFT or infective countermeasures.
2021
ASIACRYPT
Massive Superpoly Recovery with Nested Monomial Predictions 📺
Determining the exact algebraic structure or some partial information of the superpoly for a given cube is a necessary step in the cube attack -- a generic cryptanalytic technique for symmetric-key primitives with some secret and public tweakable inputs. Currently, the division property based approach is the most powerful tool for exact superpoly recovery. However, as the algebraic normal form (ANF) of the targeted output bit gets increasingly complicated as the number of rounds grows, existing methods for superpoly recovery quickly hit their bottlenecks. For example, previous methods were stuck at rounds 842, 190, and 892 for Trivium, Grain-128AEAD, and Kreyvium, respectively. In this paper, we propose a new framework for recovering the exact ANFs of massive superpolies based on the monomial prediction technique (ASIACRYPT 2020, an alternative language for the division property). In this framework, the targeted output bit is first expressed as a polynomial of the bits of some intermediate states. For each term appearing in the polynomial, the monomial prediction technique is applied to determine its superpoly if the corresponding MILP model can be solved within a preset time limit. Terms unresolved within the time limit are further expanded as polynomials of the bits of some deeper intermediate states with symbolic computation, whose terms are again processed with monomial predictions. The above procedure is iterated until all terms are resolved. Finally, all the sub-superpolies are collected and assembled into the superpoly of the targeted bit. We apply the new framework to Trivium, Grain-128AEAD, and Kreyvium. As a result, the exact ANFs of the superpolies for 843-, 844- and 845-round Trivium, 191-round Grain-128AEAD and 894-round Kreyvium are recovered. Moreover, with the help of the Möbius transform, we present a novel key-recovery technique based on superpolies involving all key bits by exploiting the sparse structures, which leads to the best key-recovery attacks on the targets considered.
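For readers unfamiliar with the Möbius transform used in the key-recovery step, the snippet below shows the standard fast Möbius (truth table to ANF) transform on a tiny 3-variable function; this is generic background, unrelated to the specific superpolies of Trivium, Grain-128AEAD, or Kreyvium.

```python
# Standard fast Moebius transform: truth table (length 2^n, entries 0/1) -> ANF
# coefficients, i.e. the representation in which superpolies are expressed.
def moebius_transform(truth_table):
    anf = list(truth_table)
    n = len(anf).bit_length() - 1
    step = 1
    for _ in range(n):
        for start in range(0, len(anf), 2 * step):
            for i in range(start, start + step):
                anf[i + step] ^= anf[i]       # butterfly over one variable
        step *= 2
    return anf

# f(x0, x1, x2) = x0*x1 XOR x2, truth table indexed by x0 + 2*x1 + 4*x2
tt = [((x & 1) * ((x >> 1) & 1)) ^ ((x >> 2) & 1) for x in range(8)]
print(moebius_transform(tt))   # -> [0, 0, 0, 1, 1, 0, 0, 0]: monomials x0*x1 and x2
```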
2021
ASIACRYPT
Gentry-Wichs Is Tight: A Falsifiable Non-Adaptively Sound SNARG 📺
By the impossibility result of Gentry and Wichs, non-falsifiable assumptions are needed to construct (even non-zero-knowledge) adaptively sound succinct non-interactive arguments (SNARGs) for hard languages. It is important to understand whether this impossibility result is tight. While it is known how to construct adaptively sound non-succinct non-interactive arguments for $\mathsf{NP}$ from falsifiable assumptions, adaptively sound SNARGs for $\mathsf{NP}$ from non-falsifiable assumptions, and adaptively sound SNARGs for $\mathsf{P}$ from falsifiable assumptions, there are no known non-adaptively sound SNARGs for $\mathsf{NP}$ from falsifiable assumptions. We show that Gentry-Wichs is tight by constructing the latter. In addition, we prove it is non-adaptively knowledge-sound in the algebraic group model and Sub-ZK (i.e., zero-knowledge even if the CRS is subverted) under a non-falsifiable assumption.
2021
ASIACRYPT
Fine-tuning the ISO/IEC Standard LightMAC 📺
LightMAC, by Luykx et al., is a block cipher based message authentication code (MAC). The simplicity of design and low overhead allows it to have very compact implementations. As a result, it has been recently chosen as an ISO/IEC standard MAC for lightweight applications. LightMAC has been shown to achieve a query-length independent security bound of $O(q^2/2^n)$ when instantiated with two independently keyed $n$-bit block ciphers, where $q$ denotes the number of MAC queries and the query-length is upper bounded by $(n-s)2^s$ bits for a fixed counter size $s$. In this paper, we aim to minimize the number of block cipher keys in LightMAC. First, we show that the original LightMAC instantiated with a single block cipher key, referred to as 1k-LightMAC, achieves a security bound of $O(q^2/2^n)$ while the query-length is at least $(n-s)$ bits and at most $(n-s)\min\{2^{n/4},2^s\}$ bits. Second, we show that a minor variant of 1k-LightMAC, dubbed LightMAC-ds, achieves a security bound of $O(q^2/2^n)$ while the query-length is upper bounded by $(n-s)2^{s-1}$ bits. Of independent interest, our security proof of 1k-LightMAC employs a novel sampling approach, called reset-sampling, as a subroutine within the H-coefficient proof setup.
2021
ASIACRYPT
Private Join and Compute from PIR with Default 📺
The private join and compute (PJC) functionality enables secure computation over data distributed across different databases, and is applicable to a wide range of applications, many of which address settings where the input databases are of significantly different sizes. We introduce the notion of private information retrieval (PIR) with default, which enables two-party PJC functionalities in a way that hides the size of the intersection of the two databases and incurs sublinear communication cost in the size of the bigger database. We provide two constructions for this functionality, one of which requires offline linear communication, which can be amortized across queries, and one that provides sublinear cost for each query but relies on more computationally expensive tools. We construct inner-product PJC, which has applications to ads conversion measurement and contact tracing, relying on an extension of PIR with default. We evaluate the efficiency of our constructions, which can enable $2^{8}$ PIR-with-default lookups on a database of size $2^{25}$ (or inner-product PJC on databases with such sizes) with communication of 44 MB, which costs less than 0.17 cents for the client and 26.48 cents for the server.
2021
ASIACRYPT
A Geometric Approach to Linear Cryptanalysis 📺
A new interpretation of linear cryptanalysis is proposed. This 'geometric approach' unifies all common variants of linear cryptanalysis, reveals links between various properties, and suggests additional generalizations. For example, new insights into invariants corresponding to non-real eigenvalues of correlation matrices and a generalization of the link between zero-correlation and integral attacks are obtained. Geometric intuition leads to a fixed-key motivation for the piling-up principle, which is illustrated by explaining and generalizing previous results relating invariants and linear approximations. Rank-one approximations are proposed to analyze cell-oriented ciphers, and used to resolve an open problem posed by Beierle, Canteaut and Leander at FSE 2019. In particular, it is shown how such approximations can be analyzed automatically using Riemannian optimization.
2021
ASIACRYPT
Gladius: LWR based efficient hybrid public key encryption with distributed decryption 📺
Standard hybrid encryption schemes based on the KEM-DEM framework are hard to implement efficiently in a distributed manner whilst maintaining the CCA security property of the scheme. This is because the DEM needs to be decrypted under the key encapsulated by the KEM, before the whole ciphertext is declared valid. In this paper we present a new variant of the KEM-DEM framework, closely related to Tag-KEMs, which sidesteps this issue. We then present a post-quantum KEM for this framework based on Learning-with-Rounding, which is designed specifically to have fast distributed decryption. Our combined construction of a hybrid encryption scheme with Learning-with-Rounding based KEM, called Gladius, is closely related to the NIST Round 3 candidate called Saber. Finally, we give a prototype distributed implementation that achieves a decapsulation time of 4.99 seconds for three parties.
2021
ASIACRYPT
Public Key Encryption with Flexible Pattern Matching 📺
Many interesting applications of pattern matching (e.g. deep-packet inspection or medical data analysis) target very sensitive data. In particular, spotting illegal behaviour in internet traffic conflicts with legitimate privacy requirements, which usually forces users (e.g. children, employees) to blindly trust an entity that fully decrypts their traffic in the name of security. The compromise between traffic analysis and privacy can be achieved through searchable encryption. However, as the traffic data is a stream and as the patterns to search are bound to evolve over time (e.g. new virus signatures), these applications require a kind of searchable encryption that provides more flexibility than the classical schemes. We indeed need to be able to search for patterns of variable sizes in an arbitrarily long stream that has potentially been encrypted prior to pattern identification. To stress these specificities, we call such a scheme a stream encryption supporting pattern matching. Recent papers use bilinear groups to provide public key constructions supporting these features. These solutions are lighter than more generic ones (e.g. fully homomorphic encryption) while retaining the adequate expressivity to support pattern matching without harming privacy more than needed. However, all existing solutions in this family have weaknesses with respect to efficiency and security that need to be addressed. Regarding efficiency, their public key has a size linear in the size of the alphabet, which can be quite large, in particular for applications that naturally process data as bytestrings. Regarding security, they all rely on a very strong computational assumption that is both interactive and specially tailored for this kind of scheme. In this paper, we tackle these problems by providing two new constructions using bilinear groups to support pattern matching on encrypted streams. Our first construction shares the same strong assumption but dramatically reduces the size of the public key by removing the dependency on the size of the alphabet, while nearly halving the size of the ciphertext. On a typical application with large patterns, our public key is two orders of magnitude smaller than that of previous schemes, which demonstrates the practicality of our approach. Our second construction manages to retain most of the good features of the first one while exclusively relying on a simple (static) variant of DDH, which solves the security problem of previous works.
2021
ASIACRYPT
Algebraic Attacks on Rasta and Dasta Using Low-Degree Equations 📺
Rasta and Dasta are two fully homomorphic encryption friendly symmetric-key primitives proposed at CRYPTO 2018 and ToSC 2020, respectively. We point out that the designers of Rasta and Dasta neglected an important property of the $\chi$ operation. Combined with the special structure of Rasta and Dasta, this property directly leads to significantly improved algebraic cryptanalysis. Especially, it enables us to theoretically break 2 out of 3 instances of full Agrasta, which is the aggressive version of Rasta with the block size only slightly larger than the security level in bits. We further reveal that Dasta is more vulnerable against our attacks than Rasta for its usage of a linear layer composed of an ever-changing bit permutation and a deterministic linear transform. Based on our cryptanalysis, the security margins of Dasta and Rasta parameterized with $(n,\kappa,r)\in\{(327,80,4),(1877,128,4),(3545,256,5)\}$ are reduced to only 1 round, where $n$, $\kappa$ and $r$ denote the block size, the claimed security level and the number of rounds, respectively. These parameters are of particular interest as the corresponding ANDdepth is the lowest among those that can be implemented in reasonable time and target the same claimed security level.
2021
ASIACRYPT
Generalized Channels from Limited Blockchain Scripts and Adaptor Signatures 📺
Decentralized and permissionless ledgers offer an inherently low transaction rate, as a result of their consensus protocol demanding the storage of each transaction on-chain. A prominent proposal to tackle this scalability issue is to utilize off-chain protocols, where parties only need to post a limited number of transactions on-chain. Existing solutions can roughly be categorized into: (i) application-specific channels (e.g., payment channels), offering strictly weaker functionality than the underlying blockchain; and (ii) state channels, supporting arbitrary smart contracts at the cost of being compatible only with the few blockchains having Turing-complete scripting languages (e.g., Ethereum). In this work, we introduce and formalize the notion of generalized channels allowing users to perform any operation supported by the underlying blockchain in an off-chain manner. Generalized channels thus extend the functionality of payment channels and relax the definition of state channels. We present a concrete construction compatible with any blockchain supporting transaction authorization, time-locks, and a constant number of Boolean AND and OR operations -- requirements fulfilled by many (non-Turing-complete) blockchains including the popular Bitcoin. To this end, we leverage adaptor signatures -- a cryptographic primitive already used in the cryptocurrency literature but formalized as a standalone primitive in this work for the first time. We formally prove the security of our generalized channel construction in the Universal Composability framework. As an important practical contribution, our generalized channel construction outperforms the state-of-the-art payment channel construction, the Lightning Network, in efficiency. Concretely, it halves the off-chain communication complexity and reduces the on-chain footprint in case of disputes from linear to constant in the number of off-chain applications funded by the channel. Finally, we evaluate the practicality of our construction via a prototype implementation and discuss various applications including financially secured fair two-party computation.
2021
ASIACRYPT
Key Encapsulation Mechanism with Tight Enhanced Security in the Multi-User Setting: Impossibility Result and Optimal Tightness 📺
For Key Encapsulation Mechanism (KEM) deployed in a multi-user setting, an adversary may corrupt some users to learn their secret keys, and obtain some encapsulated keys due to careless key management of users. To resist such attacks, we formalize Enhanced security against Chosen Plaintext/Ciphertext Attack (ECPA/ECCA), which asks for the pseudorandomness of unrevealed encapsulated keys under uncorrupted users. This enhanced security for KEM serves well for the security of a class of Authenticated Key Exchange protocols built from KEM. In this paper, we study the achievability of tight ECPA and ECCA security for KEM in the multi-user setting, and present an impossibility result and an optimal security loss factor that can be obtained. The existing meta-reduction technique due to Bader et al. (EUROCRYPT 2016) rules out some KEMs, but many well-known KEMs, e.g., the Cramer-Shoup KEM (SIAM J. Comput. 2003) and the Kurosawa-Desmedt KEM (CRYPTO 2004), fall outside its reach. To solve this problem, we develop a new technical tool named the rank of a KEM and a new secret key partitioning strategy for meta-reduction. With this new tool and new strategy, we prove that KEM schemes with polynomially-bounded ranks have no tight ECPA and ECCA security from non-interactive complexity assumptions, and the security loss is at least linear in the number n of users. This impossibility result covers lots of well-known KEMs, including the Cramer-Shoup KEM, Kurosawa-Desmedt KEM and many others. Moreover, we show that the linear security loss is optimal by presenting concrete KEMs with security loss Θ(n). This is justified by a non-trivial security reduction with linear loss factor from ECPA/ECCA security to the traditional multi-challenge CPA/CCA security.
2021
ASIACRYPT
Balanced Non-Adjacent Forms 📺
Integers can be decomposed in multiple ways. The choice of a recoding technique is generally dictated by performance considerations. The usual metric for optimizing the decomposition is the Hamming weight. In this work, we consider a different metric and propose new modified forms (i.e., integer representations using signed digits) that satisfy minimality requirements under the new metric. Specifically, we introduce what we call balanced non-adjacent forms and prove that they feature a minimal Euclidean weight. We also present efficient algorithms to produce these new minimal forms. We analyze their asymptotic and exact distributions. We extend the definition to modular integers and show similar optimality results. The balanced non-adjacent forms find natural applications in fully homomorphic encryption as they optimally reduce the noise variance in LWE-type ciphertexts.
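For readers who want the baseline the abstract contrasts with, the snippet below computes the classical non-adjacent form (NAF), which is minimal for the Hamming-weight metric, and prints both the Hamming and the Euclidean weight of the result; the balanced NAFs of the paper, which minimize the Euclidean weight, are not reproduced here.

```python
# Classical NAF recoding (signed digits, no two adjacent non-zeros); shown only as
# the Hamming-weight-minimal baseline that the paper's balanced NAFs improve on
# with respect to the Euclidean weight.
def naf(n):
    """Signed digits of n, least significant first."""
    digits = []
    while n != 0:
        if n & 1:
            d = 2 - (n % 4)        # d in {-1, +1}, chosen so that (n - d) % 4 == 0
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

d = naf(478)
print(d)                                               # [0, -1, 0, 0, 0, -1, 0, 0, 0, 1]
print(sum(x != 0 for x in d), sum(x * x for x in d))   # Hamming weight 3, Euclidean weight 3
print(sum(x * 2**i for i, x in enumerate(d)))          # 478 (sanity check)
```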
2021
ASIACRYPT
Garbling, Stacked and Staggered: Faster k-out-of-n Garbled Function Evaluation 📺
Stacked Garbling (SGC) is a Garbled Circuit (GC) improvement that efficiently and securely evaluates programs with conditional branching. SGC reduces bandwidth consumption such that communication is proportional to the size of the single longest program execution path, rather than to the size of the entire program. Crucially, the parties expend increased computational effort compared to classic GC. Motivated by procuring a subset in a menu of computational services or tasks, we consider GC evaluation of k-out-of-n branches, whose indices are known (or eventually revealed) to the GC evaluator E. Our stack-and-stagger technique amortizes GC computation in this setting. We retain the communication advantage of SGC, while significantly improving computation and wall-clock time. Namely, each GC party garbles (or evaluates) the total of n branches, a significant improvement over the O(nk) garblings/evaluations needed by standard SGC. We present our construction as a garbling scheme. Our technique brings significant overall performance improvement in various settings, including those typically considered in the literature: e.g. on a 1Gbps LAN we evaluate 16-out-of-128 functions ~7.68x faster than standard stacked garbling.
2021
ASIACRYPT
PrORAM: Fast O(log n) Authenticated Shares ZK ORAM 📺
We construct a concretely efficient Zero Knowledge (ZK) Oblivious RAM (ORAM) for ZK Proof (ZKP) systems based on authenticated sharings of arithmetic values. It consumes $2\log n$ oblivious transfers (OTs) of length-$2\sigma$ secrets per access of an arithmetic value, for statistical security parameter $\sigma$ and array size $n$. This is an asymptotic and concrete improvement over the previous best (concretely efficient) ZK ORAM BubbleRAM of Heath and Kolesnikov ([HK20a], CCS 2020), whose access cost is $\frac{1}{2}\log^2 n$ OTs of length-$2\sigma$ secrets. ZK ORAM is essential for proving statements that are best expressed as RAM programs, rather than Boolean or arithmetic circuits. Our construction is private-coin ZK. We integrate it with [HK20a]'s ZKP protocol and prove the resulting ZKP system secure. We implemented PrORAM in C++. Compared to the state-of-the-art BubbleRAM, our PrORAM is ~10x faster for arrays of size $2^{20}$ of 40-bit values.
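Plugging the experiment's array size into the two stated access costs makes the gap explicit; the arithmetic below only restates numbers from the abstract.

```latex
% Per-access OT counts for n = 2^{20} (the array size used in the comparison):
\text{PrORAM: } 2\log_2 n = 40 \ \text{OTs}, \qquad
\text{BubbleRAM: } \tfrac{1}{2}\log_2^2 n = 200 \ \text{OTs},
% i.e. a 5x reduction in OT count per access; the reported end-to-end speedup is ~10x.
```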
2021
ASIACRYPT
Redeeming Reset Indifferentiability and Applications to Post-Quantum Security 📺
Indifferentiability is used to analyze the security of constructions of idealized objects, such as random oracles or ideal ciphers. Reset indifferentiability is a strengthening of plain indifferentiability which is applicable in far more scenarios, but has largely been abandoned due to significant impossibility results and a lack of positive results. Our main results are: - Under weak reset indifferentiability, ideal ciphers imply (fixed size) random oracles, and domain shrinkage is possible. We thus show reset indifferentiability is more useful than previously thought. - We lift our analysis to the quantum setting, showing that ideal ciphers imply random oracles under quantum indifferentiability. - Despite Shor's algorithm, we observe that generic groups are still meaningful quantumly, showing that they are quantumly (reset) indifferentiable from ideal ciphers; combined with the above, cryptographic groups yield post-quantum symmetric key cryptography. In particular, we obtain a plausible post-quantum random oracle that is a subset-product followed by two modular reductions.
2021
ASIACRYPT
Bounded Collusion ABE for TMs from IBE 📺
We give an attribute-based encryption system for Turing Machines that is provably secure assuming only the existence of identity- based encryption (IBE) for large identity spaces. Currently, IBE is known to be realizable from most mainstream number theoretic assumptions that imply public key cryptography including factoring, the search Diffie-Hellman assumption, and the Learning with Errors assumption. Our core construction provides security against an attacker that makes a single key query for a machine T before declaring a challenge string w∗ that is associated with the challenge ciphertext. We build our construction by leveraging a Garbled RAM construction of Gentry, Halevi, Raykova and Wichs; however, to prove security we need to introduce a new notion of security called iterated simulation security. We then show how to transform our core construction into one that is secure for an a-priori bounded number q = q(\lambda) of key queries that can occur either before or after the challenge ciphertext. We do this by first showing how one can use a special type of non-committing encryption to transform a system that is secure only if a single key is chosen before the challenge ciphertext is declared into one where the single key can be requested either before or after the challenge ciphertext. We give a simple construction of this non-committing encryption from public key encryption in the Random Oracle Model. Next, one can apply standard combinatorial techniques to lift from single-key adaptive security to q-key adaptive security.
2021
ASIACRYPT
Adaptive Security via Deletion in Attribute-Based Encryption: Solutions from Search Assumptions in Bilinear Groups 📺
One of the primary research challenges in Attribute-Based Encryption (ABE) is constructing and proving cryptosystems that are adaptively secure. To date the main paradigm for achieving adaptive security in ABE is dual system encryption. However, almost all such solutions in bilinear groups rely on (variants of) either the subgroup decision problem over composite order groups or the decision linear assumption. Both of these assumptions are decisional rather than search assumptions and the target of the assumption is a source or bilinear group element. This is in contrast to earlier selectively secure ABE systems which can be proven secure from either the decisional or search Bilinear Diffie-Hellman assumption. In this work we make progress on closing this gap by giving a new ABE construction for the subset functionality and prove security under the Search Bilinear Diffie-Hellman assumption. We first provide a framework for proving adaptive security in Attribute-Based Encryption systems. We introduce a concept of ABE with deletable attributes where any party can take a ciphertext encrypted under the attribute string $x \in \{0,1\}^n$ and modify it into a ciphertext encrypted under any string $x' \in \{0,1,\bot\}^n$ where $x'$ is derived by replacing any bits of $x$ with $\bot$ symbols (i.e., "deleting" attributes of $x$). The semantics of the system are that any private key for a circuit C can be used to decrypt a ciphertext associated with $x'$ if none of the input bits read by circuit C are $\bot$ symbols and C(x') = 1. We show a pathway for combining ABE with deletable attributes with constrained pseudorandom functions to obtain adaptively secure ABE building upon the recent work of [Tsabary19]. Our new ABE system will be adaptively secure and be a ciphertext-policy ABE that supports the same functionality as the underlying constrained PRF as long as the PRF is "deletion conforming". Here we also provide a simple constrained PRF construction that gives subset functionality. Our approach enables us to access a broader array of Attribute-Based Encryption schemes that support deletion of attributes. For example, we show that both the [GPSW06] and [Boyen13] ABE schemes can trivially handle a deletion operation. And, by using a hardcore bit variant of the GPSW scheme we obtain an adaptively secure ABE scheme under the Search Bilinear Diffie-Hellman assumption in addition to pseudorandom functions in NC1. This gives the first adaptively secure ABE from a search assumption as all prior work relied on decision assumptions over source group elements.
2021
ASIACRYPT
Homomorphic Secret Sharing for Multipartite and General Adversary Structures Supporting Parallel Evaluation of Low-Degree Polynomials 📺
Homomorphic secret sharing (HSS) for a function $f$ allows input parties to distribute shares for their private inputs and then locally compute output shares from which the value of $f$ is recovered. HSS can be directly used to obtain a two-round multiparty computation (MPC) protocol for possibly non-threshold adversary structures whose communication complexity is independent of the size of $f$. In this paper, we propose two constructions of HSS schemes supporting parallel evaluation of a single low-degree polynomial and tolerating multipartite and general adversary structures. Our multipartite scheme tolerates a wider class of adversary structures than the previous multipartite one in the particular case of a single evaluation and has exponentially smaller share size than the general construction. While restricting the range of tolerable adversary structures (but still applicable to non-threshold ones), our schemes perform $\ell$ parallel evaluations with communication complexity approximately $\ell/\log\ell$ times smaller than simply using $\ell$ independent instances. We also formalize two classes of adversary structures taking into account real-world situations to which the previous threshold schemes are inapplicable. Our schemes then perform $O(m)$ parallel evaluations with almost the same communication cost as a single evaluation, where $m$ is the number of parties.
2021
ASIACRYPT
Onion Routing with Replies 📺
Onion routing (OR) protocols are a crucial tool for providing anonymous internet communication. An OR protocol enables a user to anonymously send requests to a server. A fundamental problem of OR protocols is how to deal with replies: ideally, we would want the server to be able to send a reply back to the anonymous user without knowing or disclosing the user's identity. Existing OR protocols do allow for such replies, but do not provably protect the payload (i.e., message) of replies against manipulation. Kuhn et al. (IEEE S&P 2020) show that such manipulations can in fact be leveraged to break anonymity of the whole protocol. In this work, we close this gap and provide the first framework and protocols for OR with protected replies. We define security in the sense of an ideal functionality in the universal composability model, and provide corresponding (less complex) game-based security notions for the individual properties. We also provide two secure instantiations of our framework: one based on updatable encryption, and one based on succinct non-interactive arguments (SNARGs) to authenticate payloads both in requests and replies. In both cases, our central technical handle is an implicit authentication of the transmitted payload data, as opposed to an explicit, but insufficient authentication (with MACs) in previous solutions. Our results exhibit a new and surprising application of updatable encryption outside of long-term data storage.
2021
ASIACRYPT
Partial Key Exposure Attack on Short Secret Exponent CRT-RSA 📺
Let $(N,e)$ be an RSA public key, where $N=pq$ is the product of equal bitsize primes $p,q$. Let $d_p, d_q$ be the corresponding secret CRT-RSA exponents. Using a Coppersmith-type attack, Takayasu, Lu and Peng (TLP) recently showed that one obtains the factorization of $N$ in polynomial time, provided that $d_p, d_q \leq N^{0.122}$. Building on the TLP attack, we show the first {\em Partial Key Exposure} attack on short secret exponent CRT-RSA. Namely, let $N^{0.122} \leq d_p, d_q \leq N^{0.5}$. Then we show that a constant known fraction of the least significant bits (LSBs) of both $d_p, d_q$ suffices to factor $N$ in polynomial time. Naturally, the larger $d_p,d_q$, the more LSBs are required. E.g. if $d_p, d_q$ are of size $N^{0.13}$, then we have to know roughly a $\frac 1 5$-fraction of their LSBs, whereas for $d_p, d_q$ of size $N^{0.2}$ we require already knowledge of a $\frac 2 3$-LSB fraction. Eventually, if $d_p, d_q$ are of full size $N^{0.5}$, we have to know all of their bits. Notice that as a side-product of our result we obtain a heuristic deterministic polynomial time factorization algorithm on input $(N,e,d_p,d_q)$.
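For orientation, the CRT-RSA key equations that Coppersmith-type attacks of this kind start from are recalled below; these relations are standard and not specific to the new attack.

```latex
% Standard CRT-RSA relations for exponents d_p, d_q:
e\,d_p \equiv 1 \pmod{p-1}, \qquad e\,d_q \equiv 1 \pmod{q-1},
% equivalently, for some integers k_p, k_q,
e\,d_p = 1 + k_p\,(p-1), \qquad e\,d_q = 1 + k_q\,(q-1).
% Knowing LSBs of d_p and d_q shrinks the unknown parts of these equations, which is
% what makes the lattice-based (Coppersmith) machinery applicable.
```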
2021
ASIACRYPT
Beyond Software Watermarking: Traitor-Tracing for Pseudorandom Functions 📺
Software watermarking schemes allow a user to embed an identifier into a piece of code such that the resulting program is nearly functionally-equivalent to the original program, and yet, it is difficult to remove the identifier without destroying the functionality of the program. Such schemes are often considered for proving software ownership or for digital rights management. Existing constructions of watermarking have focused primarily on watermarking pseudorandom functions (PRFs). In this work, we revisit the definitional foundations of watermarking, and begin by highlighting a major flaw in existing security notions. Existing security notions for watermarking only require that the identifier be successfully extracted from programs that preserve the exact input/output behavior of the original program. In the context of PRFs, this means that an adversary that constructs a program which computes a quarter of the output bits of the PRF or that is able to distinguish the outputs of the PRF from random are considered to be outside the threat model. However, in any application (e.g., watermarking a decryption device or an authentication token) that relies on PRF security, an adversary that manages to predict a quarter of the bits or distinguishes the PRF outputs from random would be considered to have defeated the scheme. Thus, existing watermarking schemes provide very little security guarantee against realistic adversaries. None of the existing constructions of watermarkable PRFs would be able to extract the identifier from a program that only outputs a quarter of the bits of the PRF or one that perfectly distinguishes. To address the shortcomings in existing watermarkable PRF definitions, we introduce a new primitive called a traceable PRF. Our definitions are inspired by similar definitions from public-key traitor tracing, and aim to capture a very robust set of adversaries: namely, any adversary that produces a useful distinguisher (i.e., a program that can break PRF security), can be traced to a specific identifier. We provide a general framework for constructing traceable PRFs via an intermediate primitive called private linear constrained PRFs. Finally, we show how to construct traceable PRFs from a similar set of assumptions previously used to realize software watermarking. Namely, we obtain a single-key traceable PRF from standard lattice assumptions and a fully collusion-resistant traceable PRF from indistinguishability obfuscation (together with injective one-way functions).
2021
ASIACRYPT
Chain Reductions for Multi-Signatures and the HBMS Scheme 📺
Existing proofs for Discrete Log (DL) based multi-signature schemes give only weak guarantees if the schemes are implemented, as they are in practice, in 256-bit groups. This is because the underlying reductions, which are mostly in the standard model and from DL, are loose. We show that relaxing either the model or the assumption suffices to obtain tight reductions. Namely we give (1) tight proofs from DL in the Algebraic Group Model, and (2) tight, standard-model proofs from well-founded assumptions other than DL. We first do this for the classical 3-round schemes, namely BN and MuSig. Then we give a new 2-round multi-signature scheme, HBMS, as efficient as prior ones, for which we do the same. These multiple paths to security for a single scheme are made possible by a framework of chain reductions, in which a reduction is broken into a chain of sub-reductions involving intermediate problems. Overall our results improve the security guarantees for DL-based multi-signature schemes in the groups in which they are implemented in practice.
2021
ASIACRYPT
Transciphering Framework for Approximate Homomorphic Encryption 📺
Homomorphic encryption (HE) is a promising cryptographic primitive that enables computation over encrypted data, with a variety of applications including medical, genomic, and financial tasks. In Asiacrypt 2017, Cheon et al. proposed the CKKS scheme to efficiently support approximate computation over encrypted data of real numbers. HE schemes including CKKS, nevertheless, still suffer from slow encryption speed and large ciphertext expansion compared to symmetric cryptography. In this paper, we propose a novel hybrid framework, dubbed RtF (Real-to-Finite-field) framework, that supports CKKS. The main idea behind this construction is to combine the CKKS and the FV homomorphic encryption schemes, and use a stream cipher using modular arithmetic in between. As a result, real numbers can be encrypted without significant ciphertext expansion or computational overhead on the client side. As an instantiation of the stream cipher in our framework, we propose a new HE-friendly cipher, dubbed HERA, and extensively analyze its security and efficiency. The main feature of HERA is that it uses a simple randomized key schedule. Compared to recent HE-friendly ciphers such as FLIP and Rasta using randomized linear layers, HERA requires a smaller number of random bits. For this reason, HERA significantly outperforms existing HE-friendly ciphers on both the client and the server sides. With the RtF transciphering framework combined with HERA at the 128-bit security level, we achieve a small ciphertext expansion ratio, ranging from 1.23 to 1.54, which is at least 23 times smaller than using (symmetric) CKKS-only, assuming the same precision bits and the same level of ciphertexts at the end of the framework. We also achieve 1.6 $\mu$s and 21.7 MB/s for latency and throughput on the client side, which are 9085 times and 17.8 times faster than the CKKS-only environment, respectively.
2021
ASIACRYPT
A New Variant of Unbalanced Oil and Vinegar Using Quotient Ring: QR-UOV 📺
The unbalanced oil and vinegar signature scheme (UOV) is a multivariate signature scheme that has essentially not been broken for over 20 years. However, it requires the use of a large public key; thus, various methods have been proposed to reduce its size. In this paper, we propose a new variant of UOV with a public key represented by block matrices whose components correspond to an element of a quotient ring. We discuss how the security of our proposed scheme is affected by whether or not the quotient ring is a field. Furthermore, we discuss its security against currently known and newly possible attacks and propose parameters for our scheme. We demonstrate that our proposed scheme can achieve a small public key size without significantly increasing the signature size compared with other UOV variants. For example, the public key size of our proposed scheme is 85.8 KB for NIST's Post-Quantum Cryptography Project (security level 3), whereas that of compressed Rainbow is 252.3 KB, where Rainbow is a variant of UOV and is one of the third-round finalists of the NIST PQC project.
2021
ASIACRYPT
FAST: Secure and High Performance Format-Preserving Encryption and Tokenization 📺
We propose a new construction for format-preserving encryption. Our design provides the flexibility for use in format-preserving encryption (FPE) and for static table-driven tokenization. Our algorithm is a substitution-permutation network based on random Sboxes. Using pseudorandom generators and pseudorandom functions, we prove strong adaptive security based on the super-pseudorandom permutation assumption of our core design. We obtain empirical parameters to reach this assumption. We suggest parameters for quantum security. Our design accommodates very small domains, with a radix $a$ from 4 to the Unicode alphabet size and a block length $l$ starting from 2. The number of Sbox evaluations per encryption is asymptotically $l^{\frac32}$, which is also the number of bytes we need to generate using AES in CTR mode for each tweak setup. For instance, we tokenize 10 decimal digits using 29 (parallel) AES computations to be done only once, when the tweak changes.
2021
ASIACRYPT
Bit Security as Computational Cost for Winning Games with High Probability 📺
We introduce a novel framework for quantifying the bit security of security games. Our notion is defined with an operational meaning that a $\lambda$-bit secure game requires a total computational cost of $2^\lambda$ for winning the game with high probability, e.g., 0.99. We define the bit security both for search-type and decision-type games. Since we identify that these two types of games should be structurally different, we treat them differently but define the bit security using the unified framework to guarantee the same operational interpretation. The key novelty of our notion of bit security is to employ two types of adversaries: inner adversary and outer adversary. While the inner adversary plays a "usual" security game, the outer adversary invokes the inner adversary many times to amplify the winning probability for the security game. We find from our framework that the bit security for decision games can be characterized by the information measure called the Rényi divergence of order $1/2$ of the inner adversary. The conventional "advantage," defined as the probability of winning the game, characterizes our bit security for search-type games. We present several security reductions in our framework for justifying our notion of bit security. Many of our results quantitatively match the results for the bit security notion proposed by Micciancio and Walter in 2018. In this sense, our bit security strengthens the previous notion of bit security by adding an operational meaning. A difference from their work is that, in our framework, the Goldreich-Levin theorem gives an optimal reduction only for "balanced" adversaries who output binary values in a balanced manner.
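For completeness, one common definition of the information measure named above is recalled below (conventions for the logarithm base vary, and the paper's exact normalization may differ):

```latex
% Renyi divergence of order \alpha, and its specialization to \alpha = 1/2:
D_{\alpha}(P\,\|\,Q) \;=\; \frac{1}{\alpha-1}\,\log \sum_x P(x)^{\alpha}\,Q(x)^{1-\alpha},
\qquad
D_{1/2}(P\,\|\,Q) \;=\; -2\,\log \sum_x \sqrt{P(x)\,Q(x)} .
```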
2021
ASIACRYPT
Fault-Injection Attacks against NIST’s Post-Quantum Cryptography Round 3 KEM Candidates 📺
We investigate all NIST PQC Round 3 KEM candidates from the viewpoint of fault-injection attacks: Classic McEliece, Kyber, NTRU, Saber, BIKE, FrodoKEM, HQC, NTRU Prime, and SIKE. All KEM schemes use variants of the Fujisaki-Okamoto transformation, so the equality test with re-encryption in decapsulation is critical. We survey effective key-recovery attacks when we can skip the equality test. We found existing key-recovery attacks against Kyber, NTRU, Saber, FrodoKEM, HQC, one of two KEM schemes in NTRU Prime, and SIKE. We propose a new key-recovery attack against the other KEM scheme in NTRU Prime. We also report an attack against BIKE that leads to leakage of information about secret keys. The open-source pqm4 library contains all KEM schemes except Classic McEliece and HQC. We show that giving a single instruction-skipping fault in the decapsulation processes leads to skipping the equality test virtually for Kyber, NTRU, Saber, BIKE, and SIKE. We also report the experimental attacks against them. We also report that the implementation of NTRU Prime allows chosen-ciphertext attacks freely and that the timing side-channel of FrodoKEM reported by Guo, Johansson, and Nilsson (CRYPTO 2020) remains, while there are no such bugs in their NIST PQC Round 3 submissions.
2021
ASIACRYPT
Toward a Fully Secure Authenticated Encryption Scheme From a Pseudorandom Permutation 📺
In this paper, we propose a new block cipher-based authenticated encryption scheme, dubbed the Synthetic Counter with Masking (SCM) mode. SCM follows the NSIV paradigm proposed by Peyrin and Seurin (CRYPTO 2016), where a keyed hash function accepts a nonce N with associated data and a message, yielding an authentication tag T, and then the message is encrypted by a counter-like mode using both T and N. Here we move one step further by encrypting nonces; in the encryption part, the inputs to the block cipher are determined by T, counters, and an encrypted nonce, and all its outputs are also masked by an (additional) encrypted nonce, yielding keystream blocks. As a result, we obtain, for the first time, a block cipher-based authenticated encryption scheme of rate 1/2 that provides n-bit security with respect to the query complexity (ignoring the influence of message length) in the nonce-respecting setting, and at the same time guarantees graceful security degradation in the faulty nonce model, when the underlying n-bit block cipher is modeled as a secure pseudorandom permutation. Seen as a slight variant of GCM-SIV, SCM is also parallelizable and inverse-free, and its performance is still comparable to GCM-SIV.
2021
ASIACRYPT
Strong and Tight Security Guarantees against Integral Distinguishers 📺
Integral attacks belong to the classical attack vectors against block ciphers. However, providing arguments that a given cipher is resistant against those attacks is notoriously difficult. In this paper, based solely on the assumption of independent round keys, we develop significantly stronger arguments than what was possible before: our main result is that we show how to argue that the sum of ciphertexts over any possible subset of plaintexts is key-dependent, i.e., the non-existence of integral distinguishers.
2021
ASIACRYPT
Secure and Efficient Software Masking on Superscalar Pipelined Processors 📺
Physical side-channel attacks like power analysis pose a serious threat to cryptographic devices in real-world applications. Consequently, devices implement algorithmic countermeasures like masking. In the past, works on the design and verification of masked software implementations have mostly focused on simple microprocessors that find usage on smart cards. However, many other applications such as in the automotive industry require side-channel protected cryptographic computations on much more powerful CPUs. In such situations, the security loss due to complex architectural side-effects, the corresponding performance degradation, as well as discussions of suitable probing models and verification techniques are still vastly unexplored research questions. We answer these questions and perform a comprehensive analysis of more complex processor architectures in the context of masking-related side effects. First, we analyze the RISC-V SweRV core — featuring a 9-stage pipeline, two execution units, and load/store buffers — and point out a significant gap between security in a simple software probing model and practical security on such CPUs. More concretely, we show that architectural side effects of complex CPU architectures can significantly reduce the protection order of masked software, both via formal analysis in the hardware probing model, as well as empirically via gate-level timing simulations. We then discuss the options of fixing these problems in hardware or leaving them as constraints to software. Based on these software constraints, we formulate general rules for the design of masked software on more complex CPUs. Finally, we compare several implementation strategies for masking schemes and present in a case study that designing secure masked software for complex CPUs is still possible with overhead as low as 13%.
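As a reminder of what "masked software" means at its simplest, the sketch below shows generic first-order Boolean masking of 32-bit values; it is an illustration only, not the paper's countermeasure, and the architectural share-recombination effects the paper studies are of course invisible at this Python level.

```python
# Generic first-order Boolean masking: a secret is split into two shares whose XOR
# equals the secret, and each share on its own is uniformly random. Linear
# operations (XOR) can be computed share-wise without ever recombining the shares.
import secrets

def mask(secret: int, bits: int = 32):
    r = secrets.randbits(bits)            # fresh uniform mask
    return r, secret ^ r                  # share0 ^ share1 == secret

def masked_xor(a_shares, b_shares):
    return (a_shares[0] ^ b_shares[0], a_shares[1] ^ b_shares[1])

a, b = 0xDEADBEEF, 0x01234567
c_shares = masked_xor(mask(a), mask(b))
assert c_shares[0] ^ c_shares[1] == a ^ b
print("share-wise XOR recombines to the correct value")
```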
2021
ASIACRYPT
NTRU Fatigue: How Stretched is Overstretched? 📺
Until recently, lattice reduction attacks on NTRU lattices were thought to behave similarly to attacks on (ring-)LWE lattices with the same parameters. However, several works (Albrecht-Bai-Ducas 2016, Kirchner-Fouque 2017) showed a significant gap for large moduli $q$, the so-called overstretched regime of NTRU. With the NTRU scheme being a finalist to the NIST PQC competition it is important to understand ---both asymptotically and concretely--- where the fatigue point lies exactly, i.e. at which $q$ the overstretched regime begins. Unfortunately the analysis by Kirchner and Fouque is based on an impossibility argument, which only results in an asymptotic upper bound on the fatigue point. It also does not really explain how lattice reduction actually recovers secret-key information. We propose a new analysis that asymptotically improves on that of Kirchner and Fouque, narrowing down the fatigue point for ternary NTRU from $q \leq n^{2.783+o(1)}$ to $q=n^{2.484+o(1)}$, and finally explaining the mechanism behind this phenomenon. We push this analysis further to a concrete one, settling the fatigue point at $q \approx 0.004 \cdot n^{2.484}$, and allowing precise hardness predictions in the overstretched regime. These predictions are backed by extensive experiments.
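The concrete fatigue-point estimate quoted above is easy to evaluate; the snippet below simply plugs a few illustrative ring dimensions into the formula from the abstract (the dimensions are example inputs, not a statement about particular parameter sets).

```python
# Evaluate the concrete fatigue-point estimate q_fatigue ~ 0.004 * n^2.484 from the
# abstract for a few example dimensions n.
import math

for n in (509, 677, 821):
    q_fatigue = 0.004 * n ** 2.484
    print(f"n = {n:4d}: fatigue point ~ 2^{math.log2(q_fatigue):.1f}")
```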
2021
ASIACRYPT
On Time-Lock Cryptographic Assumptions in Abelian Hidden-Order Groups 📺
In this paper we study cryptographic finite abelian groups of unknown order and hardness assumptions in these groups. Abelian groups necessitate multiple group generators, which may be chosen at random. We formalize this setting and hardness assumptions therein. Furthermore, we generalize the algebraic group model and strong algebraic group model from cyclic groups to arbitrary finite abelian groups of unknown order. Building on these formalizations, we present techniques to deal with this new setting, and prove new reductions. These results are relevant for class groups of imaginary quadratic number fields and time-lock cryptography built upon them.
2021
ASIACRYPT
Quantum Linearization Attacks 📺
Recent works have shown that quantum period-finding can be used to break many popular constructions (some block ciphers such as Even-Mansour, multiple MACs and AEs...) in the superposition query model. So far, all the constructions broken exhibited a strong algebraic structure, which enables one to craft a periodic function of a single input block. The recovery of the secret period allows one to recover a key, distinguish, or break the confidentiality or authenticity of these modes. In this paper, we introduce the quantum linearization attack, a new way of using Simon's algorithm to target MACs in the superposition query model. Specifically, we use inputs of multiple blocks as an interface to a function hiding a linear structure. The recovery of this structure allows us to perform forgeries. We also present some variants of this attack that use other quantum algorithms, which are much less common in quantum symmetric cryptanalysis: Deutsch's, Bernstein-Vazirani's, and Shor's. To the best of our knowledge, this is the first time these algorithms have been used in quantum forgery or key-recovery attacks. Our attack breaks many parallelizable MACs such as LightMAC, PMAC, and numerous variants with (classical) beyond-birthday-bound security (LightMAC+, PMAC+) or using tweakable block ciphers (ZMAC). More generally, it shows that constructing parallelizable quantum-secure PRFs might be a challenging task.
2021
ASIACRYPT
Giving an Adversary Guarantees (Or: How to Model Designated Verifier Signatures in a Composable Framework) 📺
When defining a security notion, one typically specifies what dishonest parties cannot achieve. For example, communication is confidential if a third party cannot learn anything about the messages being transmitted, and it is authentic if a third party cannot impersonate the real (honest) sender. For certain applications, however, security crucially relies on giving dishonest parties certain capabilities. As an example, in Designated Verifier Signature (DVS) schemes, one captures that only the designated verifier can be convinced of the authenticity of a message by guaranteeing that any dishonest party can forge signatures which look indistinguishable (to a third party) from original ones created by the sender. However, composable frameworks cannot typically model such guarantees as they are only designed to bound what a dishonest party can do. In this paper we show how to model such guarantees---that dishonest parties must have some capability---in the Constructive Cryptography (CC) framework (Maurer and Renner, ICS 2011). More concretely, we give the first composable security definitions for Multi-Designated Verifier Signature (MDVS) schemes---a generalization of DVS schemes. The ideal world is defined as the intersection of two worlds. The first captures authenticity in the usual way. The second provides the guarantee that a dishonest party can forge signatures. By taking the intersection we have an ideal world with the desired properties. We also compare our composable definitions to existing security notions for MDVS schemes from the literature. We find that only recently, 23 years after the introduction of MDVS schemes, sufficiently strong security notions were introduced capturing the security of MDVS schemes (Damgård et al., TCC 2020). As we prove, however, these notions are still strictly stronger than necessary.
2021
ASIACRYPT
Tardigrade: An Atomic Broadcast Protocol for Arbitrary Network Conditions 📺
We study the problem of \emph{atomic broadcast}---the underlying problem addressed by blockchain protocols---in the presence of a malicious adversary who corrupts some fraction of the $n$ parties running the protocol. Existing protocols are either robust for any number of corruptions in a \emph{synchronous} network (where messages are delivered within some known time~$\Delta$) but fail if the synchrony assumption is violated, or tolerate fewer than $n/3$ corrupted parties in an \emph{asynchronous} network (where messages can be delayed arbitrarily) and cannot tolerate more corruptions even if the network happens to be well behaved. We design an atomic broadcast protocol (TARDIGRADE) that, for any $t_s \geq t_a$ with $2t_s + t_a < n$, provides security against $t_s$ corrupted parties if the network is synchronous, while remaining secure when $t_a$ parties are corrupted even in an asynchronous network. We show that TARDIGRADE achieves optimal tradeoffs between $t_s$ and~$t_a$. Finally, we show a second protocol (UPGRADE) with similar (but slightly weaker) guarantees that achieves per-transaction communication complexity linear in~$n$.
2021
ASIACRYPT
Revisiting Homomorphic Encryption Schemes for Finite Fields 📺
The Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski/Fan-Vercauteren (BFV) schemes are the two main homomorphic encryption (HE) schemes to perform exact computations over finite fields and integers. Although the schemes work with the same plaintext space, there are significant differences in their noise management, algorithms for the core homomorphic multiplication operation, message encoding, and practical usability. The main goal of our work is to revisit both schemes, focusing on closing the gap between them by improving their noise growth, the computational complexity of the core algorithms, and usability. The other goal of our work is to provide both a theoretical and an experimental performance comparison of BGV and BFV. More precisely, we propose an improved variant of BFV where the encryption operation is modified to significantly reduce the noise growth, which makes the BFV noise growth somewhat better than for BGV (in contrast to prior results showing that BGV has smaller noise growth for larger plaintext moduli). We also modify the homomorphic multiplication procedure, which is the main bottleneck in BFV, to reduce its algorithmic complexity. Our work introduces several other novel optimizations, including lazy scaling in BFV homomorphic multiplication and an improved BFV decryption procedure in the Residue Number System (RNS) representation. We also develop a usable variant of BGV as a more efficient alternative to BFV for common practical scenarios. We implement our improved variants of BFV and BGV in PALISADE and evaluate their experimental performance for several benchmark computations. The experimental results suggest that our BGV implementation is faster for intermediate and large plaintext moduli, which are often used in practical scenarios with ciphertext packing, while our BFV implementation is faster for small plaintext moduli.
2021
ASIACRYPT
Quantum Computationally Predicate-Binding Commitments with Application in Quantum Zero-Knowledge Arguments for NP 📺
A quantum bit commitment scheme realizes bit (rather than qubit) commitment by exploiting quantum communication and quantum computation. In this work, we study the binding property of the quantum string commitment scheme obtained by composing a generic quantum perfectly (resp. statistically) hiding, computationally binding bit commitment scheme (which can be realized based on quantum-secure one-way permutations (resp. functions)) in parallel. We show that the resulting scheme satisfies a stronger quantum computational binding property, which we call predicate-binding, than the trivial honest-binding. Intuitively and very roughly, the predicate-binding property guarantees that given any inconsistent pair of predicates over a set of strings (i.e. no string in this set can satisfy both predicates), if a (claimed) quantum commitment can be opened so that the revealed string satisfies one predicate with certainty, then the same commitment cannot be opened so that the revealed string satisfies the other predicate (except with negligible probability). As an application, we plug a generic quantum perfectly (resp. statistically) hiding, computationally binding bit commitment scheme into Blum's zero-knowledge protocol for the NP-complete language Hamiltonian Cycle. This gives rise to the first quantum perfect (resp. statistical) zero-knowledge argument system (with soundness error 1/2) for all NP languages based solely on quantum-secure one-way permutations (resp. functions). The quantum computational soundness of this system follows immediately from the quantum computational predicate-binding property of the commitments.
2021
ASIACRYPT
(Compact) Adaptively Secure FE for Attribute-Weighted Sums from k-Lin 📺
This paper presents the first adaptively simulation secure functional encryption (FE) schemes for attribute-weighted sums. In such an FE scheme, encryption takes as input N pairs of attribute {(x_i, z_i )}_{i \in [N]} for some N \in \mathbb{N} where the attributes {x_i}_{i \in [N]} are public while the attributes {z_i}_{i \in [N]} are private. The indices i \in [N] are referred to as the slots. A secret key corresponds to some weight function f, and decryption recovers the weighted sum \sum_{i \in [N]} f(x_i)z_i. This is an important functionality with a wide range of potential real life applications. In the proposed FE schemes attributes are viewed as vectors and weight functions are arithmetic branching programs (ABP). We present two schemes with varying parameters and levels of adaptive security. (a) We first present a one-slot scheme that achieves adaptive security in the simulation-based security model against a bounded number of ciphertext queries and an arbitrary polynomial number of secret key queries both before and after the ciphertext queries. This is the best possible level of security one can achieve in the adaptive simulation-based framework. From the relations between the simulation-based and indistinguishability-based security frameworks for FE, it follows that the proposed FE scheme also achieves indistinguishability- based adaptive security against an a-priori unbounded number of ciphertext queries and an arbitrary polynomial number of secret key queries both before and after the ciphertext queries. Moreover, the scheme enjoys compact ciphertexts that do not grow with the number of appearances of the attributes within the weight functions. (b) Next, bootstrapping from the one-slot scheme, we present an unbounded-slot scheme that achieves simulation-based adaptive security against a bounded number of ciphertext and pre-ciphertext secret key queries while supporting an a-priori unbounded number of post-ciphertext secret key queries. The scheme achieves public parameters and secret key sizes independent of the number of slots N and a secret key can decrypt a ciphertext for any a-priori unbounded N. Further, just like the one-slot scheme, this scheme also has the ciphertext size independent of the number of appearances of the attributes within the weight functions. However, all the parameters of the scheme, namely, the master public key, ciphertexts, and secret keys scale linearly with the bound on the number of pre-ciphertext secret key queries. Our schemes are built upon asymmetric bilinear groups of prime order and the security is derived under the standard (bilateral) k-Linear (k-Lin) assumption. Our work resolves an open problem posed by Abdalla, Gong, and Wee in CRYPTO 2020, where they presented an unbounded-slot FE scheme for attribute-weighted sum achieving only semi-adaptive simulation security. At a technical level, our work extends the recent adaptive security framework of Lin and Luo [EUROCRYPT 2020], devised to achieve compact ciphertexts in the context of indistinguishability-based payload-hiding security, into the setting of simulation-based adaptive attribute-hiding security.
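As a quick aid to parsing the functionality above, the following is a minimal Python sketch of the attribute-weighted-sum computation itself (the plaintext functionality only, not the FE scheme); the function f here is an arbitrary Python callable standing in for an arithmetic branching program, and all names and values are illustrative assumptions.

# Toy illustration of the attribute-weighted-sum functionality only (no encryption):
# public attributes x_i, private attributes z_i, and a weight function f. Decryption
# with a key for f is meant to reveal exactly sum_i f(x_i) * z_i and nothing more.
def attribute_weighted_sum(f, pairs):
    return sum(f(x) * z for x, z in pairs)

pairs = [((1, 2), 5), ((0, 3), 7), ((4, 1), 2)]   # (x_i, z_i): x_i public, z_i private
f = lambda x: 2 * x[0] + x[1]                     # stands in for an arithmetic branching program
print(attribute_weighted_sum(f, pairs))           # 4*5 + 3*7 + 9*2 = 59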
2021
ASIACRYPT
Compressed Sigma-Protocols for Bilinear Group Arithmetic Circuits and Application to Logarithmic Transparent Threshold Signatures 📺
Lai et al. (CCS 2019) have shown how the Bulletproofs arithmetic circuit zero-knowledge protocol (Bootle et al., EUROCRYPT 2016 and Bünz et al., S\&P 2018) can be generalized to work for bilinear group arithmetic circuits directly, i.e., without requiring these circuits to be translated into arithmetic circuits. In a nutshell, a bilinear group arithmetic circuit is a standard arithmetic circuit augmented with special gates capturing group exponentiations or pairings. Such circuits are highly relevant, e.g., in the context of zero-knowledge statements over pairing-based languages. As expressing these special gates in terms of a standard arithmetic circuit results in a significant overhead in circuit size, an approach to zero-knowledge via standard arithmetic circuits may incur substantial additional costs. The approach due to Lai et al. shows how to avoid this by integrating additional zero-knowledge techniques into the Bulletproofs framework so as to handle the special gates very efficiently. We take a different approach by generalizing {\em Compressed $\Sigma$-Protocol Theory} (CRYPTO 2020) from arithmetic circuit relations to bilinear group arithmetic circuit relations. Besides its conceptual simplicity, our approach has the practical advantage of reducing the communication costs of Lai et al.'s protocol by roughly a multiplicative factor $3$. Finally, we show an application of our results which may be of independent interest. We construct the first $k$-out-of-$n$ threshold signature scheme (TSS) that allows for transparent setup {\em and} that yields threshold signatures of size logarithmic in $n$. The threshold signature hides the identities of the $k$ signers and the threshold $k$ can be dynamically chosen at aggregation time.
2021
ASIACRYPT
Luby-Rackoff Backwards with More Users and More Security 📺
It is known, from the work of Dai \textit{et al.} (in CRYPTO'17), that the PRF advantage of $\xorp$ (the bitwise xor of two outputs of $n$-bit random permutations on domain-separated inputs), against an adversary making $q$ queries, is about $q/2^n$ for $q \leq 2^{n-5}$. The same bound can easily be shown to hold for $\xorp[k]$ (the bitwise xor of $k$ outputs of $n$-bit pseudorandom permutations on domain-separated inputs), for $k \geq 3$. In this work, we first consider the multi-user security of $\xorp[3]$. We show that the multi-user PRF advantage of $\xorp[3]$ is about $\sqrt{uq_{\max}}/2^n$ for all $q_{\max} \leq 2^{n}/12$, where $u$ is the number of users and $q_{\max}$ is the maximum number of queries the adversary can make to each user. In the multi-user setup, this implies that $\xorp[3]$ remains secure for $O(2^n)$ users even when almost $O(2^n)$ queries are allowed to each user. This also indicates a significant improvement in the single-user setup ({\em i.e.,} when $u = 1$), where the distinguishing advantage of the adversary, even after making $O(2^n)$ queries, is $O({1 \over \sqrt{2^n}})$, {\em i.e.,} negligible. Subsequently, we consider a simple and efficient variant of $\xorp[3]$ in which we use five calls to produce a $2n$-bit output (instead of six calls in the case of $\xorp[3]$). This variant achieves a similar level of security. As an immediate application, we can construct a variant of block cipher based counter mode which provides much higher security (both in the single-user and the multi-user setup) compared to the security of the encryption part of GCM, at some cost in efficiency.
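To make the $\xorp[k]$ construction above concrete, the toy Python sketch below XORs $k$ outputs of a random permutation on domain-separated inputs (one common way to read the construction); the block size, the way domain separation reserves the top input bits, and the names used are illustrative assumptions rather than the paper's exact formalization.

import random

N_BITS = 8                       # toy block size; real instantiations would use e.g. n = 128
DOMAIN = 1 << N_BITS

def random_permutation(seed):
    """A toy n-bit random permutation, standing in for a block cipher with a fixed key."""
    rng = random.Random(seed)
    table = list(range(DOMAIN))
    rng.shuffle(table)
    return table

def xorp_k(perm, x, k=3):
    """XOR of k permutation outputs on domain-separated inputs, i.e. XOR_i P(i || x).

    Domain separation is done here by reserving the top bits of the permutation
    input for the call index i, so x must fit in the remaining bits.
    """
    index_bits = max((k - 1).bit_length(), 1)
    assert x < (1 << (N_BITS - index_bits)), "input too large for this toy domain"
    out = 0
    for i in range(k):
        out ^= perm[(i << (N_BITS - index_bits)) | x]
    return out

perm = random_permutation(seed=42)
print([xorp_k(perm, x) for x in range(4)])   # a few PRF outputs under one toy key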
2021
ASIACRYPT
Lattice-Based Group Encryption with Full Dynamicity and Message Filtering Policy 📺
Group encryption (GE) is a fundamental privacy-preserving primitive, the encryption analogue of group signatures, which allows users to decrypt specific ciphertexts while hiding themselves within a crowd. Since its introduction, numerous constructions have been proposed, among which the schemes constructed by Libert et al. (Asiacrypt 2016) over lattices and by Nguyen et al. (PKC 2021) over coding theory are post-quantum secure. Although the latter scheme was the first to achieve full dynamicity (allowing group users to join or leave the group at their ease) and a message filtering policy, which greatly improved the state of affairs of GE systems, its practical applications are still limited due to its rather complicated design, inefficiency, and weaker security (it is only proven secure in the random oracle model). In contrast, the scheme of Libert et al. enjoys solid security (in the standard model), but it lacks these functionalities and still suffers from inefficiency because of its extensive use of lattice trapdoors. In this work, we re-formalize the model and security definitions of fully dynamic group encryption (FDGE) so that they are essentially equivalent to, but more succinct than, Nguyen et al.'s. Then, we provide a generic and efficient zero-knowledge proof method for proving that a binary vector is non-zero over lattices, which yields the first, and simple, lattice-based proof for the prohibitive message filtering policy. Finally, by combining appropriate cryptographic materials with our zero-knowledge proofs, we obtain the first lattice-based FDGE schemes, constructed in a simpler manner, which need no lattice trapdoor and are proved secure in the standard model (assuming interaction during the proof phase), outweighing the existing post-quantum secure GE systems in terms of functionality, efficiency, and security.
2021
ASIACRYPT
On the hardness of the NTRU problem 📺
The 25-year-old NTRU problem is an important computational assumption in public-key cryptography. However, from a reduction perspective, its relative hardness compared to other problems on Euclidean lattices is not well understood. Its decision version reduces to the search Ring-LWE problem, but this only provides a hardness upper bound. We provide two answers to the long-standing open problem of providing reduction-based evidence of the hardness of the NTRU problem. First, we reduce the worst-case approximate Shortest Vector Problem over ideal lattices to an average-case search variant of the NTRU problem. Second, we reduce another average-case search variant of the NTRU problem to the decision NTRU problem.
2021
ASIACRYPT
Snarky Ceremonies 📺
Succinct non-interactive arguments of knowledge (SNARKs) have found numerous applications in the blockchain setting and elsewhere. The most efficient SNARKs require a distributed ceremony protocol to generate public parameters, also known as a structured reference string (SRS). Our contributions are two-fold: - We give a security framework for non-interactive zero-knowledge arguments with a ceremony protocol. - We revisit the ceremony protocol of Groth's SNARK [Bowe et al., 2017]. We show that the original construction can be simplified and optimized, and then prove its security in our new framework. Importantly, our construction avoids the random beacon model used in the original work.
2021
ASIACRYPT
Reverse Firewalls for Adaptively Secure MPC without Setup 📺
We study multi-party computation (MPC) in the setting of subversion, where the adversary tampers with the machines of honest parties. Our goal is to construct actively secure MPC protocols where parties are corrupted adaptively by an adversary (as in the standard adaptive security setting), and in addition, honest parties' machines are compromised. The idea of reverse firewalls (RF) was introduced at EUROCRYPT'15 by Mironov and Stephens-Davidowitz as an approach to protecting protocols against corruption of honest parties' devices. Intuitively, an RF for a party $\mathcal{P}$ is an external entity that sits between $\mathcal{P}$ and the outside world and whose scope is to sanitize $\mathcal{P}$'s incoming and outgoing messages in the face of subversion of their computer. Mironov and Stephens-Davidowitz constructed a protocol for passively-secure two-party computation. At CRYPTO'20, Chakraborty, Dziembowski and Nielsen constructed a protocol for secure computation with firewalls that improved on this result, both by extending it to \textit{multi}-party computation protocols, and by considering \textit{active} security in the presence of \textit{static} corruptions. In this paper, we initiate the study of RF for MPC in the \textit{adaptive} setting. We put forward a definition for adaptively secure MPC in the reverse firewall setting, explore relationships among the security notions, and then construct reverse firewalls for MPC in this stronger setting of adaptive security. We also resolve the open question of Chakraborty, Dziembowski and Nielsen by removing the need for a trusted setup in constructing RF for MPC. Towards this end, we construct reverse firewalls for adaptively secure augmented coin tossing and adaptively secure zero-knowledge protocols and obtain a constant round adaptively secure MPC protocol in the reverse firewall setting without setup. Along the way, we propose a new multi-party adaptively secure coin tossing protocol in the plain model, which is of independent interest.
2021
ASIACRYPT
Algebraic Adversaries in the Universal Composability Framework 📺
The algebraic-group model (AGM), which lies between the generic group model and the standard model of computation, provides a means by which to analyze the security of cryptosystems against so-called algebraic adversaries. We formalize the AGM within the framework of universal composability, providing formal definitions for this setting and proving an appropriate composition theorem. This extends the applicability of the AGM to more-complex protocols, and lays the foundations for analyzing algebraic adversaries in a composable fashion. Our results also clarify the meaning of composing proofs in the AGM with other proofs and they highlight a natural form of independence between idealized groups that seems inherent to the AGM and has not been made formal before---these insights also apply to the composition of game-based proofs in the AGM. We show the utility of our model by proving several important protocols universally composable for algebraic adversaries, specifically: (1) the Chou-Orlandi protocol for oblivious transfer, and (2) the SPAKE2 and CPace protocols for password-based authenticated key exchange.
2021
ASIACRYPT
Categorization of Faulty Nonce Misuse Resistant Message Authentication 📺
A growing number of lightweight block ciphers are proposed for environments such as the Internet of Things. An important contribution to their reduced implementation cost is a block length n of 64 or 96 bits rather than 128 bits. As a consequence, encryption modes and message authentication code (MAC) algorithms require security beyond the 2^{n/2} birthday bound. This paper provides an extensive treatment of MAC algorithms that offer beyond-birthday-bound PRF security for both nonce-respecting and nonce-misusing adversaries. We study constructions that use two block cipher calls, one universal hash function call, and an arbitrary number of XOR operations. We start with the separate problem of generically identifying all possible secure n-to-n-bit pseudorandom functions (PRFs) based on two block cipher calls. The analysis shows that the existing constructions EDM, SoP, and EDMD are the only constructions of this kind that achieve beyond-birthday-bound security. Subsequently we deliver an exhaustive treatment of MAC algorithms, where the outcome of a universal hash function evaluation on the message may be entered at any point in the computation of the PRF. We conclude that there are a total of nine schemes that achieve beyond-birthday-bound security, and a tenth construction that cannot be proven using currently known proof techniques. Of these nine MAC algorithms, three constructions achieve optimal n-bit security in the nonce-respecting setting, but are completely insecure if the nonce is reused. The remaining six constructions have 3n/4-bit security in the nonce-respecting setting, and only four out of these six constructions still achieve beyond-birthday-bound security in the case of nonce misuse.
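For readers unfamiliar with the three n-to-n-bit PRF candidates named above, the sketch below spells out SoP, EDM, and EDMD over toy random permutations standing in for the two block cipher calls; the toy block size and the stand-in permutations are illustrative assumptions, and none of the paper's security analysis is reproduced here.

import random

N_BITS = 16
DOMAIN = 1 << N_BITS

def toy_permutation(seed):
    """Toy n-bit random permutation standing in for one block-cipher call under a fixed key."""
    rng = random.Random(seed)
    table = list(range(DOMAIN))
    rng.shuffle(table)
    return table

P1, P2 = toy_permutation(1), toy_permutation(2)

def sop(x):    # Sum of Permutations:     P1(x) xor P2(x)
    return P1[x] ^ P2[x]

def edm(x):    # Encrypted Davies-Meyer:  P2(P1(x) xor x)
    return P2[P1[x] ^ x]

def edmd(x):   # EDM Dual:                P2(P1(x)) xor P1(x)
    return P2[P1[x]] ^ P1[x]

print(sop(5), edm(5), edmd(5))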
2021
ASIACRYPT
Better Security-Efficiency Trade-Offs in Permutation-Based Two-Party Computation 📺
We improve upon the security of (tweakable) correlation-robust hash functions, which are essential components of garbling schemes and oblivious-transfer extension schemes. We in particular focus on constructions from permutations, and improve upon the work by Guo et al. (IEEE S\&P '20) in terms of security and efficiency. We present a tweakable one-call construction which matches the security of the most secure two-call construction -- the resulting security bound takes the form O((p+q)q/2^n), where q is the number of construction evaluations and p is the number of direct adversarial queries to the underlying n-bit permutation, which is modeled as random. Moreover, we present a new two-call construction with much better security degradation -- in particular, for applications of interest, where only a constant number of evaluations per tweak are made, the security degrades as O((\sqrt{q} p+q^2)/2^n). Our security proof relies on the sum-capture theorems (Babai '02; Steinberger '12; Cogliati and Seurin '18), as well as on new balls-into-bins combinatorial lemmas for limited-independence ball throws. Of independent interest, we also provide a self-contained concrete security treatment of oblivious transfer extension.
2021
ASIACRYPT
A Practical Key-Recovery Attack on 805-Round Trivium 📺
The cube attack is one of the most important cryptanalytic techniques against Trivium. Many key-recovery attacks based on cube attacks have been established. However, few of them can recover the 80-bit full key information practically. In particular, the previous best practical key-recovery attack was on 784-round Trivium, proposed by Fouque and Vannet at FSE 2013. Mounting practical key-recovery attacks requires a sufficient number of low-degree superpolies, which is difficult both for experimental cube attacks and for division-property-based cube attacks with randomly selected cubes, due to their lack of efficiency. In this paper, we give a new algorithm to construct candidate cubes targeting linear superpolies. Our experiments show that the success probability is 100% for finding linear superpolies using the constructed cubes. We obtain over 1000 linear superpolies for 805-round Trivium. With 42 independent linear superpolies, we mount a practical key-recovery attack on 805-round Trivium, which increases the number of attacked rounds by 21. The complexity of our attack is $2^{41.40}$, which can be carried out on a PC with a GTX-1080 GPU in several hours.
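As background on the cube-attack machinery used above, the following toy Python sketch shows the offline step: XOR-summing an output bit over all assignments of the cube variables yields the superpoly of that cube, which, if linear in the key, leaks key information. The toy output-bit function, variable names, and cube choice are illustrative assumptions; Trivium itself is not implemented.

from itertools import product

def toy_cipher_bit(key, iv):
    """A toy output-bit function in key bits k0..k2 and IV bits v0..v3 (not Trivium)."""
    k0, k1, k2 = key
    v0, v1, v2, v3 = iv
    return (v0 & v1 & v2 & k0) ^ (v0 & v1 & k1 & k2) ^ (v3 & k2) ^ v2 ^ k1

def cube_sum(key, cube_indices, iv_len=4):
    """XOR the output over all assignments of the cube variables; the remaining IV bits
    are fixed to 0. The result is the superpoly of the cube evaluated on this key."""
    acc = 0
    for bits in product((0, 1), repeat=len(cube_indices)):
        iv = [0] * iv_len
        for idx, b in zip(cube_indices, bits):
            iv[idx] = b
        acc ^= toy_cipher_bit(key, iv)
    return acc

# Cube {v0, v1, v2}: for this toy function the superpoly is the linear polynomial k0,
# so the cube sum directly leaks a key bit -- exactly the situation the attack exploits.
for key in product((0, 1), repeat=3):
    assert cube_sum(key, cube_indices=[0, 1, 2]) == key[0]
print("superpoly of cube {v0,v1,v2} equals k0 for all keys")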
2021
ASIACRYPT
Automatic Classical and Quantum Rebound Attacks on AES-like Hashing by Exploiting Related-key Differentials 📺
Collision attacks on AES-like hashing (hash functions constructed by plugging AES-like ciphers or permutations into the famous PGV modes or their variants) can be reduced to the problem of finding a pair of inputs respecting a differential of the underlying AES-like primitive whose input and output differences are the same. The rebound attack due to Mendel et al. is a powerful tool for achieving this goal, and its quantum version was first considered by Hosoyamada and Sasaki at EUROCRYPT 2020. In this work, we automate the process of searching for rebound-attack configurations by taking related-key differentials of the underlying block cipher into account, using an MILP-based approach. In the quantum setting, our model guides the search towards characteristics that minimize the resources (e.g., QRAM) and complexities of the resulting rebound attacks. We apply our method to Saturnin-hash, Skinny, and Whirlpool and obtain improved results.
2021
ASIACRYPT
On the non-tightness of measurement-based reductions for key encapsulation mechanism in the quantum random oracle model 📺
Key encapsulation mechanism (KEM) variants of the Fujisaki-Okamoto (FO) transformation (TCC 2017), which turn a weakly-secure public-key encryption (PKE) scheme into an IND-CCA-secure KEM, were widely used among the KEM submissions to the NIST Post-Quantum Cryptography Standardization Project. Under the standard CPA security assumptions, i.e., OW-CPA and IND-CPA, the security of these variants in the quantum random oracle model (QROM) has been proved by black-box reductions, e.g., Jiang et al. (CRYPTO 2018), and by non-black-box reductions (EUROCRYPT 2020). The non-black-box reductions (EUROCRYPT 2020) have a linear security loss, but can only apply to specific \emph{reversible} adversaries with strictly \emph{reversible} implementations. In contrast, the existing black-box reductions in the literature can apply to an arbitrary adversary with an arbitrary implementation, but suffer a quadratic security loss. In this paper, for KEM variants of the FO transformation, we first show the tightness limits of the black-box reductions, and prove that a \emph{measurement-based} reduction in the QROM from breaking the standard OW-CPA (or IND-CPA) security of the underlying PKE to breaking the IND-CCA security of the resulting KEM will \emph{inevitably} incur a quadratic loss in security, where ``measurement-based" means the reduction measures a hash query from the adversary and uses the measurement outcome to break the underlying security of the PKE. In particular, most black-box reductions for these FO-like KEM variants are of this type, and our results suggest an explanation for the lack of progress in improving the tightness of these reductions in terms of the degree of security loss. Then, we further show that the quadratic loss is also unavoidable when one turns a search problem into a decision problem using the one-way to hiding technique in a black-box manner, which has been recognized as an essential technique to prove the security of cryptosystems involving quantum random oracles.
2021
ASIACRYPT
Simulation-Based Bi-Selective Opening Security for Public Key Encryption 📺
Selective opening attacks (SOA) on public-key encryption (PKE) concern a multi-user scenario, where an adversary adaptively corrupts some fraction of the users to break into a subset of honestly created ciphertexts, and tries to learn information about the messages of some unopened (but potentially related) ciphertexts. Until now, the notion of selective opening attacks has only been considered in two settings: sender selective opening (SSO), where some of the senders are corrupted and messages together with the randomness used for encryption are revealed; and receiver selective opening (RSO), where some of the receivers are corrupted and messages together with the secret keys for decryption are revealed. In this paper, we consider a more natural and general setting for selective opening security. In this setting, the adversary may adaptively corrupt some of the senders and receivers \emph{simultaneously}, and gets the plaintext messages together with the internal randomness for encryption and the secret keys for decryption, while it is hoped that the messages of uncorrupted parties remain protected. We denote it as Bi-SO security since it is reminiscent of Bi-Deniability for PKE. We first formalize the requirement of Bi-SO security in the simulation-based (SIM) style, and prove that some practical PKE schemes achieve SIM-Bi-$\text{SO}$-CCA security in the random oracle model. Then, we suggest a weaker model of Bi-SO security, denoted as SIM-wBi-$\text{SO}$-CCA security, and argue that it is still meaningful and useful. We propose a generic construction of PKE schemes that achieve SIM-wBi-$\text{SO}$-CCA security in the standard model and instantiate it from various standard assumptions. Our generic construction is built on a newly presented primitive, namely, a universal$_{\kappa}$ hash proof system with key equivocability, which may be of independent interest.
2021
ASIACRYPT
Modular Design of Role-Symmetric Authenticated Key Exchange Protocols 📺
Authenticated Key Exchange (AKE) is an important primitive in applied cryptography. Several strong models of AKE have previously been introduced, e.g., CK, CK+, eCK, and their extended versions considering perfect forward secrecy (PFS) (denoted by a "-PFS" suffix). These models provide different security guarantees and they are incomparable. Hence, we still lack a systematic understanding of the prerequisites for secure AKE and a modular design of AKE protocols. In this paper, we investigate this issue in the context of One-Round Authenticated Key Exchange (ORKE), which is role-symmetric for the players and needs only a single round to establish a session key. Our treatment is as follows: First, we reformulate the CK, CK-PFS, CK+, CK+-PFS, eCK and eCK-PFS models in the context of ORKE, some of which are formulated for the first time in the literature. Next, we introduce a new tool, the Key-wise Recoverable Function (KRF). With merely black-box calls to KRFs, we build modular constructions for ORKE. As an immediate application, many previous protocols can be explained naturally by the construction. We further build a protocol with CK, CK+, eCK, CK-PFS, CK+-PFS and eCK-PFS security simultaneously, by properly instantiating the underlying KRF. As a by-product, we obtain simplified proofs for a few known protocols, avoiding non-standard assumptions.
2021
ASIACRYPT
A formula for disaster: a unified approach to elliptic curve special-point-based attacks 📺
The Refined Power Analysis, Zero-Value Point, and Exceptional Procedure attacks introduced side-channel techniques against specific cases of elliptic curve cryptography. The three attacks recover bits of a static ECDH key adaptively, collecting information on whether a certain multiple of the input point was computed. We unify and generalize these attacks in a common framework, and solve the corresponding problem for a broader class of inputs. We also introduce a version of the attack against windowed scalar multiplication methods, recovering the full scalar instead of just a part of it. Finally, we systematically analyze elliptic curve point addition formulas from the Explicit-Formulas Database, classify all non-trivial exceptional points, and find them in new formulas. These results indicate the usefulness of our tooling, which we released publicly, for unrolling formulas and finding special points, and potentially for independent future work.
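To illustrate what an exceptional input of an addition formula looks like in practice, the sketch below implements the textbook affine addition law on a toy short Weierstrass curve and flags the inputs on which this particular formula breaks down (the point at infinity, doubling, or adding inverse points); the curve parameters and point values are illustrative assumptions, and this is not the paper's tooling.

# Toy short Weierstrass curve y^2 = x^3 + a*x + b over F_p (parameters chosen for illustration).
p, a, b = 97, 2, 3

def inv(x):
    return pow(x, p - 2, p)

def on_curve(P):
    if P is None:               # None encodes the point at infinity
        return True
    x, y = P
    return (y * y - (x * x * x + a * x + b)) % p == 0

def add_affine(P, Q):
    """Textbook 'generic' affine addition formula.

    It is exceptional (division by zero / wrong result) when P or Q is the point at
    infinity, when P == Q (doubling needs the tangent formula), or when P == -Q.
    Special-point side-channel attacks exploit exactly such inputs."""
    if P is None or Q is None or P[0] == Q[0]:
        raise ValueError("exceptional input for the generic addition formula")
    x1, y1 = P
    x2, y2 = Q
    lam = (y2 - y1) * inv(x2 - x1) % p
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

P, Q = (3, 6), (0, 10)
assert on_curve(P) and on_curve(Q)
print(add_affine(P, Q))         # a regular addition works fine
try:
    add_affine(P, P)            # doubling hits the exceptional case of this formula
except ValueError as e:
    print("caught:", e)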
2021
ASIACRYPT
Security Analysis of CPace 📺
In response to standardization requests regarding password-authenticated key exchange (PAKE) protocols, the IRTF working group CFRG set up a PAKE selection process in 2019, which led to the selection of the CPace protocol in the balanced setting, in which parties share a common password. In subsequent standardization efforts, the CPace protocol was further developed, yielding a protocol family whose actual security guarantees in practical settings are not well understood. In this paper, we provide a comprehensive security analysis of CPace in the universal composability framework. Our analysis is realistic in the sense that it captures adaptive corruptions and refrains from modeling CPace's MapToPoint function, which maps field elements to curve points, as an idealized function. In order to extend our proofs to different CPace variants optimized for specific elliptic-curve ecosystems, we employ a new approach which represents the assumptions required by the proof as libraries accessed by a simulator. By allowing for the modular replacement of assumptions used in the proof, this new approach avoids a repeated analysis of unchanged protocol parts and lets us efficiently analyze the security guarantees of all the different CPace variants. As a result of our analysis, all of the investigated practical CPace variants enjoy adaptive UC security.
2021
ASIACRYPT
Cryptanalysis of an oblivious PRF from supersingular isogenies 📺
We cryptanalyse the SIDH-based oblivious pseudorandom function from supersingular isogenies proposed at Asiacrypt'20 by Boneh, Kogan and Woo. To this end, we give an attack on an assumption, the auxiliary one-more assumption, that was introduced by Boneh et al. and we show that this leads to an attack on the oblivious PRF itself. The attack breaks the pseudorandomness as it allows adversaries to evaluate the OPRF without further interactions with the server after some initial OPRF evaluations and some offline computations. More specifically, we first propose a polynomial-time attack. Then, we argue it is easy to change the OPRF protocol to include some countermeasures, and present a second subexponential attack that succeeds in the presence of said countermeasures. Both attacks break the security parameters suggested by Boneh et al. Furthermore, we provide a proof of concept implementation as well as some timings of our attack. Finally, we examine the generation of one of the OPRF parameters and argue that a trusted third party is needed to guarantee provable security.
2021
ASIACRYPT
Boosting the Security of Blind Signature Schemes 📺
Existing blind signature schemes that are secure for polynomially many concurrent executions of the signing protocol are either inefficient or rely on non-standard assumptions (even in the random-oracle model). We show the first efficient blind signature schemes achieving this level of security based on the RSA, quadratic residuosity, and discrete logarithm assumptions (in the random-oracle model). Our core technique involves an extension and generalization of a transform due to Pointcheval (Eurocrypt~'98) that allows us to convert certain blind signature schemes that are secure for (concurrently) issuing logarithmically many signatures into ones secure for (concurrently) issuing polynomially many signatures.
2021
ASIACRYPT
The One-More Discrete Logarithm Assumption in the Generic Group Model 📺
The one-more discrete logarithm assumption (OMDL) underlies the security analysis of identification protocols, blind signature and multi-signature schemes, such as blind Schnorr signatures and the recent MuSig2 multi-signatures. As these schemes produce standard Schnorr signatures, they are compatible with existing systems, e.g. in the context of blockchains. OMDL is moreover assumed for many results on the impossibility of certain security reductions. Despite its wide use, OMDL surprisingly lacks any rigorous analysis; there is not even a proof that it holds in the generic group model (GGM). (We show that a claimed proof is flawed.) In this work we give a formal proof of OMDL in the GGM. We also prove a related assumption, the one-more computational Diffie-Hellman assumption, in the GGM. Our proofs deviate from prior GGM proofs and replace the use of the Schwartz-Zippel Lemma by a new argument.
2021
ASIACRYPT
New Attacks on LowMC instances with a Single Plaintext/Ciphertext pair 📺
Cryptanalysis of the LowMC block cipher when the attacker has access to a single known plaintext/ciphertext pair is a mathematically challenging problem. This is because the attacker is unable to employ most of the standard techniques in symmetric cryptography like linear and differential cryptanalysis. This scenario is particularly relevant while arguing the security of the Picnic digital signature scheme in which the plaintext/ciphertext pair generated by the LowMC block cipher serves as the public (verification) key and the corresponding LowMC encryption key also serves as the secret (signing) key of the signature scheme. In the paper by Banik et al. (IACR ToSC 2020:4), the authors used a linearization technique of the LowMC S-box to mount attacks on some instances of the block cipher. In this paper, we first make a more precise complexity analysis of the linearization attack. Then, we show how to perform a 2-stage MITM attack on LowMC. The first stage reduces the key candidates corresponding to a fraction of key bits of the master key. The second MITM stage between this reduced candidate set and the remaining fraction of key bits successfully recovers the master key. We show that the combined computational complexity of both these stages is significantly lower than those reported in the ToSC paper by Banik et al.
2021
ASIACRYPT
Tight Security for Key-Alternating Ciphers with Correlated Sub-Keys 📺
A substantial effort has been devoted to proving optimal bounds for the security of key-alternating ciphers with independent sub-keys in the random permutation model (e.g., Chen and Steinberger, EUROCRYPT '14; Hoang and Tessaro, CRYPTO '16). While common in the study of multi-round constructions, the assumption that sub-keys are truly independent is not realistic, as they are generally highly correlated and generated from shorter keys. In this paper, we show the existence of non-trivial distributions of limited independence for which a t-round key-alternating cipher achieves optimal security. Our work is a natural continuation of the work of Chen et al. (CRYPTO '14), which considered the case of t = 2 when all sub-keys are identical. Here, we show that key-alternating ciphers remain secure for a large class of (t-1)-wise and (t-2)-wise independent distributions of sub-keys. Our proofs proceed by generalizations of the so-called Sum-Capture Theorem, which we prove using Fourier-analytic techniques.
2021
ASIACRYPT
Astrolabous: A Universally Composable Time Lock Encryption Scheme 📺
In this work, we study the cryptographic primitive called time-lock encryption (TLE). TLE involves a party encrypting a message that can only be decrypted after a certain amount of time has elapsed. Following the universal composability (UC) paradigm introduced by Canetti [IEEE FOCS 2001], we formally abstract the concept of TLE into an ideal functionality in a flexible way. In addition, we provide a standalone definition for secure TLE schemes in a game-based style, and we devise a hybrid protocol that relies on such a secure TLE scheme. We show that if the underlying TLE scheme satisfies the standalone game-based security definition, then our hybrid protocol UC-realises the TLE functionality in the random oracle model. Finally, we present \emph{Astrolabous}, a TLE construction that satisfies our security definition, leading to the first UC realization of the TLE functionality. Interestingly, it is hard to prove any of the TLE constructions proposed in the literature UC-secure. The reason behind this difficulty relates to the UC framework itself. Intuitively, to capture semantic security, no information should be leaked about the plaintext in the ideal world, so the ciphertext should not contain any information relating to the message. On the other hand, all ciphertexts will eventually open, resulting in a trivial distinction between the real and the ideal world in the standard model. We overcome this limitation by extending any secure TLE construction, adopting the techniques of Nielsen [CRYPTO 2002] in the random oracle model. Specifically, the description of the extended TLE algorithms includes calls to the random oracle, allowing our simulator to equivocate. This extension can be applied to any TLE algorithm that satisfies our standalone game-based security definition, and in particular to Astrolabous.
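As background on the "decrypt only after a delay" guarantee that TLE formalizes, the sketch below shows the classic Rivest-Shamir-Wagner time-lock puzzle based on repeated squaring; this is a pedagogical stand-in under toy parameters, not the Astrolabous construction or its UC treatment.

import hashlib, math, random

p, q = 10007, 10009                      # toy RSA primes; real puzzles need ~1024-bit primes
N, phi = p * q, (p - 1) * (q - 1)
T = 100_000                              # number of sequential squarings needed to open

def derive_key(x: int) -> int:
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big")

def create_puzzle(payload: bytes = b"hello"):
    g = random.randrange(2, N)
    while math.gcd(g, N) != 1:
        g = random.randrange(2, N)
    shortcut = pow(g, pow(2, T, phi), N)  # the creator shortcuts the T squarings using phi(N)
    ct = int.from_bytes(payload, "big") ^ derive_key(shortcut)
    return g, ct

def solve_puzzle(g: int, ct: int) -> bytes:
    x = g
    for _ in range(T):                    # anyone else must do T sequential squarings
        x = x * x % N
    return (ct ^ derive_key(x)).to_bytes(5, "big")   # payload assumed to be 5 bytes here

g, ct = create_puzzle()
print(solve_puzzle(g, ct))                # b'hello', recovered only after ~T squarings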
2021
ASIACRYPT
Two-Round Adaptively Secure MPC from Isogenies, LPN, or CDH 📺
We present a new framework for building round-optimal (two-round) adaptively secure MPC. We show that a relatively weak notion of OT that we call indistinguishability OT with receiver oblivious sampleability (r-iOT) is enough to build two-round, adaptively secure MPC against malicious adversaries in the CRS model. We then show how to construct r-iOT from CDH, LPN, or isogeny-based assumptions that can be viewed as group actions (such as CSIDH and CSI-FiSh). This yields the first concrete constructions of two-round adaptively secure MPC against malicious adversaries from CDH, LPN, or isogeny-based assumptions. We further extend our non-isogeny results to the plain model, achieving (to the best of our knowledge) the first construction of two-round adaptively secure MPC against semi-honest adversaries in the plain model from LPN. Our results allow us to build two-round adaptively secure MPC against malicious adversaries from essentially all of the well-studied assumptions in cryptography. In addition, our constructions from isogenies or LPN provide the first post-quantum alternatives to LWE-based constructions for round-optimal adaptively secure MPC. Along the way, we show that r-iOT also implies non-committing encryption (NCE), thereby yielding the first constructions of NCE from isogenies or LPN.
2021
ASIACRYPT
Improved single-round secure multiplication using regenerating codes 📺
In 2016, Guruswami and Wootters showed that Shamir's secret-sharing scheme defined over an extension field has a regenerating property. Namely, we can compress each share to an element of the base field by applying a linear form, such that the secret is determined by a linear combination of the compressed shares. It immediately seemed that an application improving the complexity of unconditionally secure multiparty computation must be imminent; however, thus far, no such result has been published. We present the first application of regenerating codes to MPC, and show that its utility lies in reducing the number of rounds. Concretely, we present a protocol that obliviously evaluates a depth-$d$ arithmetic circuit in $d + O(1)$ rounds, in the amortized setting of parallel evaluations, with $o(n^2)$ ring elements communicated per multiplication. Our protocol makes use of function-dependent preprocessing, and is secure against the maximal adversary corrupting $t < n/2$ parties. All existing approaches in this setting have complexity $\Omega(n^2)$. Moreover, we extend some of the theory on regenerating codes to Galois rings. It was already known that the repair property of MDS codes over fields can be fully characterized in terms of their dual codes. We show that this characterization extends to linear codes over Galois rings, and use it to show that the result of Guruswami and Wootters also holds true for Shamir's scheme over Galois rings.
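For reference, the sketch below shows plain Shamir secret sharing over a prime field with Lagrange reconstruction, i.e., only the base scheme whose regenerating property (share compression via linear forms over an extension field) the work above builds on; the field modulus and parameters are toy assumptions and the compression step is not implemented.

import random

P = 2**31 - 1                     # a prime field modulus (toy choice)

def share(secret, n, t):
    """Shamir sharing: secret is the constant term of a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=12345, n=7, t=4)    # any 4 of the 7 shares recover the secret
assert reconstruct(shares[:4]) == 12345
assert reconstruct(shares[2:6]) == 12345
print("reconstructed:", reconstruct(random.sample(shares, 4)))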
2021
ASIACRYPT
Identity-Based Encryption for Fair Anonymity Applications: Defining, Implementing, and Applying Rerandomizable RCCA-secure IBE 📺
Our context is anonymous encryption schemes hiding their receiver, but in a setting which allows authorities to reveal the receiver when needed. While anonymous Identity-Based Encryption (IBE) is a natural candidate for such fair anonymity (it gives trusted-authority access by design), the {\it de facto} security standard (a.k.a. IND-ID-CCA) is incompatible with the ciphertext rerandomizability which is crucial to anonymous communication. Thus, we seek to extend IND-ID-CCA security for IBE to a notion that can be meaningfully relaxed for rerandomizability while still protecting against active adversaries. To this end, inspired by the notion of replayable adaptive chosen-ciphertext attack (RCCA) security (Canetti {\it et al.}, Crypto'03), we formalize a new security notion called Anonymous Identity-Based RCCA (ANON-ID-RCCA) security for rerandomizable IBE and propose the first construction with a rigorous security analysis. The core of our scheme is a novel extension of the double-strand paradigm, which was originally proposed by Golle {\it et al.} (CT-RSA'04) and later extended by Prabhakaran and Rosulek (Crypto'07), to the well-known Gentry-IBE (Eurocrypt'06). Notably, our scheme is the first IBE that simultaneously satisfies adaptive security, rerandomizability, and recipient-anonymity to date. As an application of our new notion, we design a new universal mixnet in the identity-based setting that does not require public key distribution (with fair anonymity). More generally, our new notion is also applicable to most existing rerandomizable RCCA-secure applications to eliminate the need for a public key distribution infrastructure while allowing fairness.
2021
ASIACRYPT
Double-Block-Length Hash Function for Minimum Memory Size 📺
Sharing a common primitive for multiple functionalities is essential for lightweight cryptography, and NIST's lightweight cryptography competition (LWC) considers the integration of hashing into AEAD. While permutations are natural primitive choices for such a goal, for design diversity, it is interesting to investigate how small block-cipher (BC) based and tweakable block-cipher (TBC) based schemes can be. Double-block-length (DBL) hash function modes are suitable for ensuring the same security level for AEAD and hashing, but it is hard to achieve a small memory size with them. Romulus, a TBC-based finalist in NIST LWC, introduced the DBL hashing scheme Romulus-H, but it requires $3n+k$ bits of memory using an underlying primitive with an $n$-bit block and a $k$-bit (twea)key. Even the smallest DBL modes in the literature require $2n+k$ bits of memory. Addressing this issue, we present new DBL modes EXEX-NI and EXEX-I achieving an $(n+k)$-bit state size, i.e., no extra memory in addition to the $n+k$ bits needed within the primitive. EXEX-NI is indifferentiable from a random oracle up to $n - \log n$ bits. By instantiating it with SKINNY, we can provide hashing to Romulus with zero memory overhead. EXEX-I is an optimized mode with collision resistance. We finally compare the hardware performance of EXEX-NI, EXEX-I, and Romulus-H with SKINNY-128-384. EXEX-NI and EXEX-I reduce the circuit area by more than 2,000 GE, yielding total areas smaller than 70% of that of Romulus-H.
2021
ASIACRYPT
Proofs for Inner Pairing Products and Applications 📺
We present a generalized inner product argument and demonstrate its applications to pairing-based languages. We apply our generalized argument to prove that an inner pairing product is correctly evaluated with respect to committed vectors of $n$ source group elements. With a structured reference string (SRS), we achieve a logarithmic-time verifier whose work is dominated by $6 \log n$ target group exponentiations. Proofs are of size $6 \log n$ target group elements, computed using $6n$ pairings and $4n$ exponentiations in each source group. We apply our inner product arguments to build the first polynomial commitment scheme with succinct (logarithmic) verification, $O(\sqrt{d})$ prover complexity for degree $d$ polynomials (not including the cost to evaluate the polynomial), and a SRS of size $O(\sqrt{d})$. Concretely, this means that for $d=2^{28}$, producing an evaluation proof in our protocol is $76\times$ faster than doing so in the KZG commitment scheme, and the CRS in our protocol is $1000\times$ smaller: $13$MB vs $13$GB for KZG. As a second application, we introduce an argument for aggregating $n$ Groth16 zkSNARKs into an $O(\log n)$ sized proof. Our protocol is significantly faster ($>1000\times$) than aggregating SNARKs via recursive composition: we aggregate $\sim 130,000$ proofs in $25$ minutes, versus $90$ proofs via recursive composition. Finally, we further apply our aggregation protocol to construct a low-memory SNARK for machine computations that does not rely on recursive composition. For a computation that requires time $T$ and space $S$, our SNARK produces proofs in space $\tilde{\mathcal{O}}(S+T)$, which is significantly more space efficient than a monolithic SNARK, which requires space $\tilde{\mathcal{O}}(S \cdot T)$.
2021
ASIACRYPT
Batching Base Oblivious Transfers 📺
Protocols that make use of oblivious transfer (OT) rarely require just one instance. Usually a batch of OTs is required — notably, when generating base OTs for OT extension. There is a natural way to optimize 2-round OT protocols when generating a batch, by reusing certain protocol messages across all instances. In this work we show that this batch optimization is error-prone. We catalog many implementations and papers that have an incorrect treatment of this batch optimization, some of them leading to catastrophic leakage in OT extension protocols. We provide a full treatment of how to properly optimize recent 2-round OT protocols for the batch setting. Along the way we show several performance improvements to the OT protocol of McQuoid, Rosulek, and Roy (ACM CCS 2020). In particular, we show an extremely simple OT construction that may be of pedagogical interest.
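Since the abstract mentions an extremely simple 2-round OT of pedagogical interest, here is a rough stand-in in that spirit: the classic Diffie-Hellman-style OT of Chou and Orlandi over a toy group, without the batching optimization discussed above and not the McQuoid-Rosulek-Roy protocol itself; the group parameters and hash truncation are illustrative assumptions.

import hashlib, random

# Toy group: the order-q subgroup of Z_p^*, with p = 2q + 1 a safe prime (illustrative sizes only).
p, q, g = 1019, 509, 4

def H(x):
    return hashlib.sha256(str(x).encode()).hexdigest()[:16]

# Sender, message 1: a random exponent a, announce A = g^a.
a = random.randrange(1, q)
A = pow(g, a, p)

# Receiver, message 2: with choice bit c, send B = g^b (c = 0) or B = A * g^b (c = 1).
c = 1
b = random.randrange(1, q)
B = pow(g, b, p) if c == 0 else (A * pow(g, b, p)) % p

# Sender derives both message keys; only one matches what the receiver can compute.
k0 = H(pow(B, a, p))                              # key for message 0
k1 = H(pow(B * pow(A, p - 2, p) % p, a, p))       # key for message 1 (uses B / A)

# Receiver derives the key for its choice bit.
kc = H(pow(A, b, p))

assert kc == (k1 if c else k0)
print("receiver learned key", c, ":", kc)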
2021
ASIACRYPT
Lunar: a Toolbox for More Efficient Universal and Updatable zkSNARKs and Commit-and-Prove Extensions 📺
We study how to construct zkSNARKs whose SRS is universal and updatable, i.e., valid for all relations within a size bound and to which a dynamic set of participants can indefinitely add secret randomness. Our focus is on efficient universal updatable zkSNARKs with a linear-size SRS and their commit-and-prove variants. We both introduce new formal frameworks and techniques and systematize existing ones. We achieve a collection of zkSNARKs with different tradeoffs. One of our schemes achieves the smallest proof size and proving time compared to the state of the art for proofs for arithmetic circuits. The language supported by this scheme is a variant of R1CS that we introduce, called R1CS-lite. Another of our constructions directly supports standard R1CS and achieves the fastest proving time for this type of constraints. These results stem from different contributions: (1) a new algebraically-flavored variant of IOPs that we call Polynomial Holographic IOPs (PHPs); (2) a new compiler that combines our PHPs with commit-and-prove zkSNARKs (CP-SNARKs) for committed polynomials; (3) pairing-based realizations of these CP-SNARKs for polynomials; (4) constructions of PHPs for R1CS and R1CS-lite. Finally, we extend the compiler in item (2) to yield commit-and-prove universal zkSNARKs.
2021
ASIACRYPT
Clustering Effect in Simon and Simeck 📺
Simon and Simeck are two lightweight block ciphers with a simple round function using only word rotations and a bit-wise AND operation. Previous work has shown a strong clustering effect for differential and linear cryptanalysis, due to the existence of many trails with the same inputs and outputs. In this paper, we explore this clustering effect by exhibiting a class of high probability differential and linear trails where the active bits stay in a fixed window of w bits. Instead of enumerating a set of good trails contributing to a differential or a linear approximation, we compute the probability distribution over this space, including all trails in the class. This results in stronger distinguishers than previously proposed, and we describe key recovery attacks against Simon and Simeck improving the previous results by up to 7 rounds. In particular, we obtain an attack against 42-round Simeck-64, leaving only two rounds of security margin, and an attack against 45-round Simon-96/144, reducing the security margin from 16 rounds to 9 rounds.
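For concreteness, the sketch below implements the Simon round function (two rotations combined with a bitwise AND plus a third rotation, per the public Simon specification) and traces an input difference through a few rounds; the word size, round keys, and starting difference are illustrative assumptions, and the differential/linear search from the paper is not reproduced.

WORD = 16                                 # word size in bits (Simon 32/64 uses 16-bit words)
MASK = (1 << WORD) - 1

def rol(x, r):
    return ((x << r) | (x >> (WORD - r))) & MASK

def simon_round(x, y, k):
    """One Simon round: (x, y) -> (y ^ f(x) ^ k, x) with f(x) = (x<<<1 & x<<<8) ^ (x<<<2)."""
    fx = (rol(x, 1) & rol(x, 8)) ^ rol(x, 2)
    return (y ^ fx ^ k) & MASK, x

# Trace a difference through a few rounds with the same round keys on both states
# (the AND is the only non-linear operation, which is what drives the clustering effect).
state_a, state_b = (0x1234, 0x5678), (0x1234 ^ 0x0040, 0x5678)
for k in (0x0011, 0x0022, 0x0033):
    state_a = simon_round(*state_a, k)
    state_b = simon_round(*state_b, k)
    print(f"difference: {state_a[0] ^ state_b[0]:04x} {state_a[1] ^ state_b[1]:04x}")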
2021
ASIACRYPT
Quantum Encryption with Certified Deletion, Revisited: Public Key, Attribute-Based, and Classical Communication 📺
Broadbent and Islam (TCC '20) proposed a quantum cryptographic primitive called quantum encryption with certified deletion. In this primitive, a receiver in possession of a quantum ciphertext can generate a classical certificate that the encrypted message has been deleted. Although their construction is information-theoretically secure, it is limited to the setting of one-time symmetric key encryption (SKE), where a sender and receiver have to share a common key in advance and the key can be used only once. Moreover, in their construction the sender has to generate a quantum state and send it to the receiver over a quantum channel. In the definition by Broadbent and Islam, deletion certificates are privately verifiable, which means the verification key for a certificate must be kept secret. However, one can also consider public verifiability. In this work, we present various constructions of encryption with certified deletion. - Quantum communication case: We achieve (reusable-key) public key encryption (PKE) and attribute-based encryption (ABE) with certified deletion. Our PKE scheme with certified deletion is constructed assuming the existence of IND-CPA secure PKE, and our ABE scheme with certified deletion is constructed assuming the existence of indistinguishability obfuscation and one-way functions. These two schemes are privately verifiable. - Classical communication case: We also achieve interactive encryption with certified deletion that uses only classical communication. We give two schemes, a privately verifiable one and a publicly verifiable one. The former is constructed assuming the LWE assumption in the quantum random oracle model. The latter is constructed assuming the existence of one-shot signatures and extractable witness encryption.
2021
ASIACRYPT
Faster Dual Lattice Attacks for Solving LWE -- with applications to CRYSTALS 📺
Cryptosystems based on the learning with errors (LWE) problem are assigned a security level that relates to the cost of generic algorithms for solving the LWE problem. This includes at least the so-called primal and dual lattice attacks. In this paper, we present an improvement of the dual lattice attack using an idea that can be traced back to work by Bleichenbacher. We present an improved distinguisher that, in combination with a guessing step, reduces the overall complexity of the dual attack on all schemes. Our second contribution is a new two-step lattice reduction strategy that allows the new dual lattice attack to exploit two recent techniques in lattice reduction algorithms, i.e., the "dimensions for free" trick and the trick of producing many short vectors in one sieving. Since the incompatibility of these two tricks was believed to be the main reason why dual attacks are less interesting, our new reduction strategy allows more efficient dual approaches than primal attacks for important cryptographic parameter sets. We apply the proposed attacks to CRYSTALS-Kyber and CRYSTALS-Dilithium, two of the finalists in the NIST post-quantum cryptography project, and present new, lower complexity numbers, both classically and quantumly, in the core-SVP model. Most importantly, for the proposed security parameters, our new dual attack with refined lattice reduction strategy greatly improves upon the state-of-the-art primal attack in the classical gate-count metric, i.e., the classical Random Access Machine (RAM) model, indicating that some parameters are really on the edge for their claimed security level. Specifically, the improvement factor can be as large as 15 bits for Kyber1024 with an extrapolation model (Albrecht et al. at Eurocrypt 2019). Also, we show that Kyber768 could be solved with classical gate complexity below its claimed security level. Last, we apply the new attack to the proposed parameters in a draft version of the Homomorphic Encryption Standard (see https://homomorphicencryption.org) and obtain significant gains. For instance, we could solve a parameter set aiming for 192-bit security in $2^{187.0}$ operations in the classical RAM model. Note that these parameters are deployed in well-known Fully Homomorphic Encryption libraries.
2021
ASIACRYPT
Symmetric Key Exchange with Full Forward Security and Robust Synchronization 📺
We construct lightweight authenticated key exchange protocols based on pre-shared keys, which achieve full forward security and rely only on simple and efficient symmetric-key primitives. All of our protocols have rigorous security proofs in a strong security model, all have low communication complexity, and all are particularly suitable for resource-constrained devices. We describe three protocols that apply linear key evolution to provide different performance and security properties. Correctness in parallel and concurrent protocol sessions is difficult to achieve for linearly key-evolving protocols, emphasizing the need for assurance of availability alongside the usual confidentiality and authentication security goals. We introduce synchronization robustness as a new formal security goal, which essentially guarantees that parties can re-synchronize efficiently. All of our new protocols achieve this property. Since protocols based on linear key evolution cannot guarantee that all concurrently initiated sessions successfully derive a key, we also propose two constructions with non-linear key evolution based on puncturable PRFs. These are instantiable from standard hash functions and require O(C log |CTR|) memory, where C is the number of concurrent sessions and |CTR| is an upper bound on the total number of sessions per party. These are the first protocols to simultaneously achieve full forward security, synchronization robustness, and concurrent correctness.
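A minimal sketch of the linear key-evolution idea mentioned above, assuming a hash-based PRF as the symmetric primitive: each party derives a session key for the current epoch and then irreversibly evolves (and overwrites) its chain key, so compromising a party later does not reveal earlier session keys. The class and label names are illustrative, and the paper's synchronization-robustness machinery is not modeled.

import hashlib

def prf(key: bytes, label: bytes) -> bytes:
    """Toy PRF via SHA-256 (a stand-in for the symmetric primitive in the paper)."""
    return hashlib.sha256(key + b"|" + label).digest()

class Party:
    def __init__(self, psk: bytes):
        self.chain = psk          # current epoch key of the linear chain
        self.epoch = 0

    def next_session_key(self) -> bytes:
        # Derive the session key for this epoch, then evolve (and overwrite) the chain key.
        session_key = prf(self.chain, b"session")
        self.chain = prf(self.chain, b"evolve")   # forward security: old chain key is erased
        self.epoch += 1
        return session_key

alice, bob = Party(b"pre-shared-key"), Party(b"pre-shared-key")
for _ in range(3):
    assert alice.next_session_key() == bob.next_session_key()
print("parties stay in sync at epoch", alice.epoch)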
2021
ASIACRYPT
Séta: Supersingular Encryption from Torsion Attacks 📺
We present Séta, a new family of public-key encryption schemes with post-quantum security based on isogenies of supersingular elliptic curves. It is constructed from a new family of trapdoor one-way functions, where the inversion algorithm uses Petit's so-called \emph{torsion attacks} on SIDH to compute an isogeny between supersingular elliptic curves given an endomorphism of the starting curve and images of torsion points. We prove the OW-CPA security of Séta and present an IND-CCA variant using the post-quantum OAEP transformation. Several variants for key generation are explored together with their impact on the selection of parameters, such as the base prime of the scheme. We furthermore formalise an ``uber'' isogeny assumption framework which aims to generalize computational isogeny problems encountered in schemes including SIDH, CSIDH, OSIDH and ours. Finally, we carefully select parameters to achieve a balance between security and run-times and present experimental results from our implementation.
2021
ASIACRYPT
How to Build a Trapdoor Function from an Encryption Scheme 📺
In this work we ask the following question: Can we transform any encryption scheme into a trapdoor function (TDF)? Alternatively stated, can we make any encryption scheme randomness recoverable? We propose a generic compiler that takes as input any encryption scheme with pseudorandom ciphertexts and adds a trapdoor to invert the encryption, also recovering the random coins. This universal TDFier only assumes, in addition, the existence of a hinting pseudorandom generator (PRG). Despite its simplicity, our transformation is quite general and we establish a series of new feasibility results: - The first identity-based TDF [Bellare et al. EUROCRYPT 2012] from the CDH assumption in pairing-free groups (or from factoring), thus matching the state of the art for identity-based encryption schemes. Prior works required pairings or LWE. - The first collusion-resistant attribute-based TDF (AB-TDF) for all ($NC^1$, resp.) circuits from LWE (bilinear maps, resp.). Moreover, the first single-key AB-TDF from CDH. To the best of our knowledge, no AB-TDF was known in the literature (not even for a single key) from any assumption. We obtain the same results for predicate encryption. As an additional contribution, we define and construct a trapdoor garbling scheme: A simulation-secure garbling scheme with a hidden ``trigger'' that allows the evaluator to fully recover the randomness used by the garbling algorithm. We show how to construct trapdoor garbling from the DDH or LWE assumption with an interplay of key-dependent message (KDM) and randomness-dependent message (RDM) techniques. Trapdoor garbling allows us to obtain alternative constructions of (single-key) AB-TDFs with additional desirable properties, such as adaptive security (in the choice of the attribute) and projective keys. We expect trapdoor garbling to be useful in other contexts, e.g., in cases where, upon successful execution, the evaluator needs to immediately verify that the garbled circuit was well-formed.
2021
ASIACRYPT
Lattice sieving via quantum random walks 📺
Lattice-based cryptography is one of the leading proposals for post-quantum cryptography. The Shortest Vector Problem (SVP) is arguably the most important problem for the cryptanalysis of lattice-based cryptography, and many lattice-based schemes have security claims based on its hardness. The best quantum algorithm for the SVP is due to Laarhoven [Laa16 PhD] and runs in (heuristic) time $2^{0.2653d + o(d)}$. In this article, we present an improvement over Laarhoven's result and present an algorithm that has a (heuristic) running time of $2^{0.2570 d + o(d)}$ where $d$ is the lattice dimension. We also present time-memory trade-offs where we quantify the amount of quantum memory and quantum random access memory of our algorithm. The core idea is to replace Grover's algorithm used in [Laa16 PhD] in a key part of the sieving algorithm by a quantum random walk in which we add a layer of locality-sensitive filtering.
2021
ASIACRYPT
Shorter Lattice-Based Group Signatures via ``Almost Free'' Encryption and Other Optimizations 📺
We present an improved lattice-based group signature scheme whose parameter sizes and running times are independent of the group size. The signature length in our scheme is around $200$KB, which is approximately a $3$X reduction over the previously most compact such scheme, based on any quantum-safe assumption, of del Pino et al. (CCS 2018). The improvement comes via several optimizations of some basic cryptographic components that make up group signature schemes, and we think that they will find other applications in privacy-based lattice cryptography.
2021
ASIACRYPT
Divided We Stand, United We Fall: Security Analysis of Some SCA+SIFA Countermeasures Against SCA-Enhanced Fault Template Attacks 📺
Protection against side-channel (SCA) and fault attacks (FA) requires two classes of countermeasures to be simultaneously embedded in a cryptographic implementation. It has already been shown that a straightforward combination of SCA and FA countermeasures is vulnerable to FAs, such as Statistical Ineffective Fault Analysis (SIFA) and Fault Template Attacks (FTA). Consequently, new classes of countermeasures have been proposed which protect against SIFA and also include masking for SCA protection. While they are secure against SIFA and SCA individually, one important question is whether the security claim still holds in the presence of a combined SCA and FA adversary. Security against combined attacks is, however, desired, as countermeasures for both threats are included in such implementations. In this paper, we show that some of the recently proposed combined SIFA and SCA countermeasures fall prey to combined attacks. To this end, we enhance the FTA attacks by considering side-channel information during fault injection. The success of the proposed attacks stems from some non-trivial fault propagation properties of S-Boxes, which remained unexplored in the original FTA proposal. The proposed attacks are validated on an open-source software implementation of Keccak with a SIFA-protected χ5 S-Box with laser fault injection and power measurement, and a hardware implementation of a SIFA-protected χ3 S-Box through gate-level power trace simulation. Finally, we discuss some mitigation strategies to strengthen existing countermeasures.
2021
ASIACRYPT
Improved Programmable Bootstrapping with Larger Precision and Efficient Arithmetic Circuits for TFHE 📺
Fully Homomorphic Encryption (FHE) schemes enable computation over encrypted data. Among them, TFHE [CGGI17] has the great advantage of offering an efficient method for bootstrapping noisy ciphertexts, i.e., reducing the noise. Indeed, homomorphic computation increases the noise in ciphertexts and might compromise the encrypted message. TFHE bootstrapping, in addition to reducing the noise, also evaluates (for free) univariate functions expressed as look-up tables. It however requires the most significant bit of the plaintext to be known a priori, resulting in the loss of one bit of space to store messages. Furthermore, it represents a non-negligible overhead in terms of computation in many use cases. In this paper, we propose a solution to overcome this limitation, which we call Programmable Bootstrapping Without Padding (WoP-PBS). This approach relies on two building blocks. The first one is the multiplication à la BFV [FV12] that we incorporate into TFHE. This is possible thanks to a thorough noise analysis showing that correct multiplications can be computed using practical TFHE parameters. The second building block is the generalization of TFHE bootstrapping introduced in this paper. It offers the flexibility to select any chunk of bits in an encrypted plaintext during a bootstrap. It also makes it possible to evaluate many LUTs at the same time when working with small enough precision. All these improvements are particularly helpful in some applications such as the evaluation of Boolean circuits (where a bootstrap is no longer required in each evaluated gate) and, more generally, in the efficient evaluation of arithmetic circuits even with large integers. Those results improve TFHE circuit bootstrapping as well. Moreover, we show that bootstrapping large-precision integers is now possible using much smaller parameters than those obtained by scaling TFHE ones.
2021
ASIACRYPT
SHealS and HealS: isogeny-based PKEs from a key validation method for SIDH 📺
In 2016, Galbraith et al. presented an adaptive attack on the SIDH key exchange protocol. In SIKE, one applies a variant of the Fujisaki-Okamoto transform to force Bob to reveal his encryption key to Alice, which Alice then uses to re-encrypt Bob's ciphertext and verify its validity. Therefore, Bob cannot reuse his encryption keys. There have been two other proposed countermeasures enabling static-static private keys: k-SIDH and its variant by Jao and Urbanik. These countermeasures are relatively expensive since they consist of running multiple parallel instances of SIDH. In this paper, firstly, we propose a new countermeasure to the GPST adaptive attack on SIDH. Our countermeasure does not require key disclosure as in SIKE, nor multiple parallel instances as in k-SIDH. We translate our countermeasure into a key validation method for SIDH-type schemes. Secondly, we use our key validation to design HealSIDH, an efficient SIDH-type static-static interactive key exchange protocol. Thirdly, we derive a PKE scheme SHealS using HealSIDH. SHealS uses larger primes compared to SIKE, has larger keys and ciphertexts, but only $4$ isogenies are computed in a full execution of the scheme, as opposed to $5$ isogenies in SIKE. We prove that SHealS is IND-CPA secure relying on a new assumption we introduce and we conjecture its IND-CCA security. We suggest HealS, a variant of SHealS using a smaller prime, providing smaller keys and ciphertexts. As a result, HealSIDH is a practically efficient SIDH-based (interactive) key exchange incorporating a "direct" countermeasure to the GPST adaptive attack.
2021
ASIACRYPT
Digital Signatures with Memory-Tight Security in the Multi-Challenge Setting 📺
The standard security notion for digital signatures is "single-challenge" (SC) EUF-CMA security, where the adversary outputs a single message-signature pair and "wins" if it is a forgery. Auerbach et al. (CRYPTO 2017) introduced memory-tightness of reductions and argued that the right security goal in this setting is actually a stronger "multi-challenge" (MC) definition, where an adversary may output many message-signature pairs and "wins" if at least one is a forgery. Currently, no construction from simple standard assumptions is known to achieve full tightness with respect to time, success probability, and memory simultaneously. Previous works showed that memory-tight signatures cannot be achieved via certain natural classes of reductions (Auerbach et al., CRYPTO 2017; Wang et al., EUROCRYPT 2018). These impossibility results may give the impression that the construction of memory-tight signatures is difficult or even impossible. We show that this impression is false, by giving the first constructions of signature schemes with full tightness in all dimensions in the MC setting. To circumvent the known impossibility results, we first introduce the notion of canonical reductions in the SC setting. We prove a general theorem establishing that every signature scheme with a canonical reduction is already memory-tightly secure in the MC setting, provided that it is strongly unforgeable, the adversary receives only one signature per message, and assuming the existence of a tightly-secure pseudorandom function. We then achieve memory-tight many-signatures-per-message security in the MC setting by a simple additional generic transformation. This yields the first memory-tightly, strongly EUF-CMA-secure signature schemes in the MC setting. Finally, we show that standard security proofs often already can be viewed as canonical reductions. Concretely, we show this for signatures from lossy identification schemes (Abdalla et al., EUROCRYPT 2012), two variants of RSA Full-Domain Hash (Bellare and Rogaway, EUROCRYPT 1996), and two variants of BLS signatures (Boneh et al., ASIACRYPT 2001).
2021
ASIACRYPT
Dynamic Random Probing Expansion with Quasi Linear Asymptotic Complexity 📺
The masking countermeasure is widely used to protect cryptographic implementations against side-channel attacks. While many masking schemes are shown to be secure in the widely deployed probing model, the latter has raised a number of concerns regarding its relevance in practice. Offering the adversary the knowledge of a fixed number of intermediate variables, it does not capture the so-called horizontal attacks which exploit the repeated manipulation of sensitive variables. Therefore, recent works have focused on the random probing model in which each computed variable leaks with some given probability p. This model benefits from better fitting the reality of embedded devices. In particular, Belaïd, Coron, Prouff, Rivain, and Taleb (CRYPTO 2020) introduced a framework to generate random probing circuits. Their compiler somehow extends base gadgets as soon as they satisfy a notion called random probing expandability (RPE). A subsequent work from Belaïd, Rivain, and Taleb (EUROCRYPT 2021) went a step forward with tighter properties and improved complexities. In particular, their construction reaches a complexity of O(κ^{3.9}), for a κ-bit security, while tolerating a leakage probability of p = 2^{−7.5}. In this paper, we generalize the random probing expansion approach by considering a dynamic choice of the base gadgets at each step in the expansion. This approach makes it possible to use gadgets with a high number of shares (which enjoy better asymptotic complexity in the expansion framework) while still tolerating the best leakage rate usually obtained for small gadgets. We investigate strategies for the choice of the sequence of compilers and show that it can reduce the complexity of an AES implementation by a factor of 10. We also significantly improve the asymptotic complexity of the expanding compiler by exhibiting new asymptotic gadget constructions. Specifically, we introduce RPE gadgets for linear operations featuring a quasi-linear complexity, as well as an RPE multiplication gadget with a linear number of multiplications. These new gadgets drop the complexity of the expanding compiler from quadratic to quasi-linear.
2021
ASIACRYPT
Verifiably-Extractable OWFs and Their Applications to Subversion Zero-Knowledge 📺
An extractable one-way function (EOWF), introduced by Canetti and Dakdouk (ICALP 2008) and generalized by Bitansky et al. (SIAM Journal on Computing vol. 45), is an OWF that allows for efficient extraction of a preimage for the function. We study (generalized) EOWFs that have a public image verification algorithm. We call such OWFs verifiably-extractable and show that several previously known constructions satisfy this notion. We study how such OWFs relate to subversion zero-knowledge (Sub-ZK) NIZKs by using them to generically construct a Sub-ZK NIZK from a NIZK satisfying certain additional properties, and conversely show how to obtain them from any Sub-ZK NIZK. Prior to our work, the Sub-ZK property of NIZKs was achieved using concrete knowledge assumptions.
2021
ASIACRYPT
QCB: Efficient Quantum-secure Authenticated Encryption 📺
It was long thought that symmetric cryptography was only mildly affected by quantum attacks, and that doubling the key length was sufficient to restore security. However, recent works have shown that Simon's quantum period finding algorithm breaks a large number of MAC and authenticated encryption algorithms when the adversary can query the MAC/encryption oracle with a quantum superposition of messages. In particular, the OCB authenticated encryption mode is broken in this setting, and no quantum-secure mode is known with the same efficiency (rate-one and parallelizable). In this paper we generalize the previous attacks, show that a large class of OCB-like schemes is unsafe against superposition queries, and discuss the quantum security notions for authenticated encryption modes. We propose a new rate-one parallelizable mode named QCB inspired by TAE and OCB and prove its security against quantum superposition queries.
2021
ASIACRYPT
Efficient NIZKs for Algebraic Sets 📺
Significantly extending the framework of (Couteau and Hartmann, Crypto 2020), we propose a general methodology to construct NIZKs for showing that an encrypted vector $\vec{\chi}$ belongs to an algebraic set, i.e., is in the zero locus of an ideal $\mathscr{I}$ of a polynomial ring. In the case where $\mathscr{I}$ is principal, i.e., generated by a single polynomial $F$, we first construct a matrix that is a ``quasideterminantal representation'' of $F$ and then a NIZK argument to show that $F (\vec{\chi}) = 0$. This leads to compact NIZKs for general computational structures, such as polynomial-size algebraic branching programs. We extend the framework to the case where $\mathscr{I}$ is non-principal, obtaining efficient NIZKs for R1CS, arithmetic constraint satisfaction systems, and thus for $\mathsf{NP}$. As an independent result, we explicitly describe the corresponding language of ciphertexts as an algebraic language, with smaller parameters than in previous constructions that were based on the disjunction of algebraic languages. This results in an efficient GL-SPHF for algebraic branching programs.
2021
ASIACRYPT
ConTra Corona: Contact Tracing against the Coronavirus by Bridging the Centralized–Decentralized Divide for Stronger Privacy 📺
Contact tracing, usually performed manually, is among the most important interventions to mitigate the spread of any pandemic. Smartphone-facilitated digital contact tracing may help to increase tracing capabilities and extend the coverage to those contacts one does not know in person. Most implemented protocols use local Bluetooth Low Energy (BLE) communication to detect contagion-relevant proximity, together with cryptographic protections where necessary to improve the privacy of the users of such a system. However, current decentralized protocols, including DP3T, do not sufficiently protect infected users from having their status revealed to their contacts, which raises fear of stigmatization. We alleviate this by proposing a new and practical solution with stronger privacy guarantees against active adversaries. It is based on the upload-what-you-observed paradigm and includes a separation of duties on the server side as well as a mechanism to ensure that users cannot deduce, with high time resolution, which encounter caused a warning. Finally, we present a simulation-based security notion of digital contact tracing in the real–ideal setting, and prove the security of our protocol in this framework.
2021
ASIACRYPT
Franchised Quantum Money 📺
The construction of public key quantum money based on standard cryptographic assumptions is a longstanding open question. Here we introduce franchised quantum money, an alternative form of quantum money that is easier to construct. Franchised quantum money retains the features of a useful quantum money scheme, namely unforgeability and local verification: anyone can verify banknotes without communicating with the bank. In franchised quantum money, every user gets a unique secret verification key, and the scheme is secure against counterfeiting and sabotage, a new security notion that appears in the franchised model. Finally, we construct franchised quantum money and prove security assuming one-way functions.
2021
ASIACRYPT
Cryptographic Analysis of the Bluetooth Secure Connection Protocol Suite 📺
We give a cryptographic analysis of the Bluetooth Secure Connection Protocol Suite. Bluetooth supports several subprotocols, such as numeric comparison, passkey entry, and just works, in order to match the devices' different input/output capabilities. Previous analyses (e.g., Lindell, CT-RSA'09, or Troncoso and Hale, NDSS'21) often considered (and confirmed) the security of single subprotocols only. Recent practically verified attacks, however, such as the Method Confusion Attack (von Tschirschnitz et al., S&P 21) against Bluetooth's authentication and key secrecy property, often exploit the bad interplay of different subprotocols. Even worse, some of these attacks show that one cannot prove the Bluetooth protocol suite to be a secure authenticated key exchange protocol. We therefore aim at the best we can hope for, and show that the protocol still matches the common key secrecy requirements of a key exchange protocol if one assumes a trust-on-first-use relationship. This means that the adversary needs to mount an active attack during the first connection; otherwise the subsequent reconnections remain secure. Investigating the cryptographic strength of the Bluetooth protocol, we also look into the privacy mechanism of address randomization in Bluetooth (which is only available in the Low Energy version). We show that the cryptography indeed provides a decent level of address privacy, although this does not rule out identification of devices via other means, such as physical characteristics.
2021
ASIACRYPT
Efficient Boolean Search over Encrypted Data with Reduced Leakage 📺
Encrypted multi-maps enable outsourcing the storage of a multi-map to an untrusted server while maintaining the ability to query privately. We focus on encrypted Boolean multi-maps that support arbitrary Boolean queries over the multi-map. Kamara and Moataz [Eurocrypt’17] presented the first encrypted multi-map, BIEX, that supports CNF queries with optimal communication, worst-case sublinear search time and non-trivial leakage. We improve on previous work by presenting a new construction CNFFilter for CNF queries with significantly less leakage than BIEX, while maintaining both optimal communication and worst-case sublinear search time. As a direct consequence our construction shows additional resistance to leakage-abuse attacks in comparison to prior works. For most CNF queries, CNFFilter avoids leaking the result sets for any singleton queries for labels appearing in the CNF query. As an example, for the CNF query of the form (l1 ∨ l2) ∧ l3, our scheme does not leak the result sizes of queries to l1, l2 or l3 individually. On the other hand, BIEX does leak some of this information. This is just an example of the reduced leakage obtained by CNFFilter. The core of CNFFilter is a new filtering algorithm that performs set intersections with significantly less leakage compared to prior works. We implement CNFFilter and show that CNFFilter achieves faster search times and similar communication overhead compared to BIEX at the cost of a small increase in server storage.
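To fix the query semantics referred to above, here is a plaintext sketch of what a CNF query computes over a multi-map: each clause is the union of the result sets of its labels, and the clauses are intersected. The data is hypothetical and the sketch deliberately ignores the encrypted machinery of CNFFilter.

    # Plaintext CNF semantics over a multi-map (illustrative only; CNFFilter
    # evaluates this over encrypted data with reduced leakage).
    def cnf_query(mm: dict, cnf: list) -> set:
        result = None
        for clause in cnf:                       # a clause is a list of labels (a disjunction)
            clause_vals = set().union(*(mm.get(label, set()) for label in clause))
            result = clause_vals if result is None else result & clause_vals
        return result if result is not None else set()

    mm = {"l1": {1, 2}, "l2": {2, 3}, "l3": {2, 3, 4}}
    print(cnf_query(mm, [["l1", "l2"], ["l3"]]))  # (l1 OR l2) AND l3 -> {2, 3}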
2021
ASIACRYPT
Efficient Leakage-Resilient MACs without Idealized Assumptions 📺
The security proofs of leakage-resilient MACs based on symmetric building blocks currently rely on idealized assumptions that hardly translate into interpretable guidelines for the cryptographic engineers implementing these schemes. In this paper, we first present a leakage-resilient MAC that is both efficient and secure under standard and easily interpretable black box and physical assumptions. It only requires a collision resistant hash function and a single call per message authentication to a Tweakable Block Cipher (TBC) that is unpredictable with leakage. This construction leverages two design twists: large tweaks for the TBC and a verification process that checks the inverse TBC against a constant. It enjoys beyond birthday security bounds. We then discuss the cost of getting rid of these design twists. We show that security can be proven without them as well. Yet, a construction without large tweaks requires stronger (non idealized) assumptions and inevitably incurs performance overheads if specialized TBCs can be exploited, and a construction without twisted verification requires even stronger assumptions (still non idealized) and leads to more involved bounds. The combination of these results makes a case for our first pragmatic construction and suggests the design of TBCs with large tweaks and good properties for side-channel countermeasures as an interesting challenge.
2021
ASIACRYPT
Tight adaptive reprogramming in the QROM 📺
The random oracle model (ROM) enjoys widespread popularity, mostly because it tends to allow for tight and conceptually simple proofs where provable security in the standard model is elusive or costly. While being the adequate replacement of the ROM in the post-quantum security setting, the quantum-accessible random oracle model (QROM) has thus far failed to provide these advantages in many settings. In this work, we focus on adaptive reprogrammability, a feature of the ROM enabling tight and simple proofs in many settings. We show that the straightforward quantum-accessible generalization of adaptive reprogramming is feasible by proving a bound on the adversarial advantage in distinguishing whether a random oracle has been reprogrammed or not. We show that our bound is tight by providing a matching attack. We go on to demonstrate that our technique recovers the mentioned advantages of the ROM in three QROM applications: 1) We give a tighter proof of security of the message compression routine as used by XMSS. 2) We show that the standard ROM proof of chosen-message security for Fiat-Shamir signatures can be lifted to the QROM, straightforwardly, achieving a tighter reduction than previously known. 3) We give the first QROM proof of security against fault injection and nonce attacks for the hedged Fiat-Shamir transform.
2021
ASIACRYPT
Convexity of division property transitions: theory, algorithms and compact models 📺
Integral cryptanalysis is a powerful tool for attacking symmetric primitives, and division property is a state-of-the-art framework for finding integral distinguishers. This work describes new theoretical and practical insights into traditional bit-based division property. We focus on analyzing and exploiting monotonicity/convexity of division property and its relation to the graph indicator. In particular, our investigation leads to a new compact representation of propagation, which allows CNF/MILP modeling for larger S-Boxes, such as 16-bit Super-Sboxes of lightweight block ciphers or even 32-bit random S-boxes. This solves the challenge posed by Derbez and Fouque (ToSC 2020), who questioned the possibility of SAT/SMT/MILP modeling of 16-bit Super-Sboxes. As a proof-of-concept, we model the Super-Sboxes of the 8-round LED by CNF formulas, which was not feasible by any previous approach. Our analysis is further supported by an elegant algorithmic framework. We describe simple algorithms for computing the division property of a set of $n$-bit vectors in time $O(n2^n)$, reducing such sets to minimal/maximal elements in time $O(n2^n)$, and computing the division property propagation table of an $n\times m$-bit S-box and its compact representation in time $O((n+m)2^{n+m})$. In addition, we develop an advanced algorithm tailored to "heavy" bijections, allowing us to model, for example, a randomly generated 32-bit S-box.
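As a point of reference for the set-reduction step listed above, the following deliberately naive sketch (quadratic in the set size, unlike the paper's $O(n2^n)$ algorithms) reduces a set of n-bit vectors to its maximal elements under the bitwise partial order.

    # Keep only the maximal vectors under u <= v iff every bit set in u is also set in v.
    # Quadratic-time illustration of the notion; the paper gives O(n*2^n) algorithms.
    def maximal_elements(vectors: set) -> set:
        def leq(u: int, v: int) -> bool:
            return u & v == u                    # support of u contained in support of v
        return {v for v in vectors
                if not any(leq(v, w) and v != w for w in vectors)}

    print(maximal_elements({0b0011, 0b0001, 0b0111, 0b1000}))  # {0b0111, 0b1000}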
2021
ASIACRYPT
Hierarchical Integrated Signature and Encryption 📺
In this work, we introduce the notion of hierarchical integrated signature and encryption (HISE), wherein a single public key is used for both signature and encryption, and one can derive a secret key used only for decryption from the signing key, which enables secure delegation of decryption capability. HISE enjoys the benefit of key reuse, and admits individual key escrow. We present two generic constructions of HISE. One is from (constrained) identity-based encryption. The other is from uniform one-way function, public-key encryption, and general-purpose public-coin zero-knowledge proof of knowledge. To further attain global key escrow, we take a little detour to revisit global escrow PKE, an object both of independent interest and with many applications. We formalize the syntax and security model of global escrow PKE, and provide two generic constructions. The first embodies a generic approach to compile any PKE into one with global escrow property. The second establishes a connection between three-party non-interactive key exchange and global escrow PKE. Combining the results developed above, we obtain HISE schemes that support both individual and global key escrow. We instantiate our generic constructions of (global escrow) HISE and implement all the resulting concrete schemes for 128-bit security. Our schemes have performance that is comparable to the best Cartesian product combined public-key scheme, and exhibit advantages in terms of richer functionality and public key reuse. As a byproduct, we obtain a new global escrow PKE scheme that outperforms the best prior work in speed by several orders of magnitude, which might be of independent interest.
2021
ASIACRYPT
Lattice Enumeration for Tower NFS: a 521-bit Discrete Logarithm Computation 📺
The Tower variant of the Number Field Sieve (TNFS) is known to be asymptotically the most efficient algorithm to solve the discrete logarithm problem in finite fields of medium characteristic, when the extension degree is composite. A major obstacle to an efficient implementation of TNFS is the collection of algebraic relations, as it happens in dimension greater than 2. This requires the construction of new sieving algorithms which remain efficient as the dimension grows. In this article, we overcome this difficulty by considering a lattice enumeration algorithm which we adapt to this specific context. We also consider a new sieving area, a high-dimensional sphere, whereas previous sieving algorithms for the classical NFS considered an orthotope. Our new sieving technique leads to a much smaller running time, despite the larger dimension of the search space, and even when considering a larger target, as demonstrated by a record computation we performed in a 521-bit finite field GF(p^6). The target finite field is of the same form as finite fields used in recent zero-knowledge proofs in some blockchains. This is the first reported implementation of TNFS.
2021
ASIACRYPT
Promise $\Sigma$-protocol: How to Construct Efficient Threshold ECDSA from Encryptions Based on Class Groups 📺
Threshold Signatures allow $n$ parties to share the ability of issuing digital signatures so that any coalition of size at least $t+1$ can sign, whereas groups of $t$ or fewer players cannot. The currently known class-group-based threshold ECDSA constructions are either inefficient (requiring parallel repetition of the underlying zero-knowledge proof with small challenge space) or rely on rather non-standard assumptions. In this paper, under \emph{standard assumptions} we present efficient threshold ECDSA protocols from encryption schemes based on class groups \emph{without parallel repetition of the underlying zero-knowledge proof}, yielding a significant efficiency improvement in the key generation over previous constructions (even those based on non-standard assumptions). Along the way we introduce a new notion of \emph{promise} $\Sigma$-protocol that satisfies only a weaker soundness called \emph{promise extractability}. An accepting \emph{promise} $\Sigma$-proof for statements related to class-group-based encryptions does not establish the truth of the statement but provides security guarantees (promise extractability) that are sufficient for our applications. We also show how to simulate homomorphic operations on a (possibly invalid) class-group-based encryption whose correctness has been proven via our \emph{promise} $\Sigma$-protocol. We believe that these techniques are of independent interest and applicable to other scenarios where efficient zero-knowledge proofs for statements related to class groups are required.
2021
CRYPTO
The $t$-wise Independence of Substitution-Permutation Networks 📺
Block ciphers such as the Advanced Encryption Standard (Rijndael) are used extensively in practice, yet our understanding of their security continues to be highly incomplete. This paper promotes and continues a research program aimed at {\em proving} the security of block ciphers against important and well-studied classes of attacks. In particular, we initiate the study of (almost) $t$-wise independence of concrete block-cipher construction paradigms such as substitution-permutation networks and key-alternating ciphers. Sufficiently strong (almost) pairwise independence already suffices to resist (truncated) differential attacks and linear cryptanalysis, and hence this is a relevant and meaningful target. Our results are two-fold. Our first result concerns substitution-permutation networks (SPNs) that model ciphers such as AES. We prove the almost pairwise-independence of an SPN instantiated with concrete S-boxes together with an appropriate linear mixing layer, given sufficiently many rounds and independent sub-keys. Our proof relies on a {\em characterization} of S-box computation on input differences in terms of sampling output differences from certain subspaces, and a new randomness extraction lemma (which we prove with Fourier-analytic techniques) that establishes when such sampling yields uniformity. We use our techniques in particular to prove almost pairwise-independence for sufficiently many rounds of both the AES block cipher (which uses a variant of the patched inverse function $x \mapsto x^{-1}$ as the $S$-box) and the MiMC block cipher (which uses the cubing function $x \mapsto x^3$ as the $S$-box), assuming independent sub-keys. Secondly, we show that instantiating a key-alternating cipher (which can be thought of as a degenerate case of SPNs) with most permutations gives us (almost) $t$-wise independence in $t + o(t)$ rounds. In order to do this, we use the probabilistic method to develop two new lemmas, an {\em independence-amplification lemma} and a {\em distance amplification lemma}, that allow us to reason about the evolution of key-alternating ciphers.
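For reference, the target notion named above is the standard one; the following is the textbook formulation of almost $t$-wise independence for a keyed permutation family, restated here for convenience rather than taken from the paper.

    % A family {E_k} of permutations of {0,1}^n is eps-almost t-wise independent if,
    % for every t distinct inputs x_1, ..., x_t and a uniformly random key k,
    $$ \Delta\bigl((E_k(x_1),\dots,E_k(x_t)),\ (u_1,\dots,u_t)\bigr) \le \varepsilon, $$
    % where (u_1, ..., u_t) are uniform distinct values in {0,1}^n and \Delta denotes
    % statistical distance; as noted above, sufficiently strong pairwise independence
    % (t = 2, small eps) already rules out standard differential and linear distinguishers.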
2021
CRYPTO
Time- and Space-Efficient Arguments from Groups of Unknown Order 📺
We construct public-coin time- and space-efficient zero-knowledge arguments for NP. For every time T and space S non-deterministic RAM computation, the prover runs in time T * polylog(T) and space S * polylog(T), and the verifier runs in time n * polylog(T), where n is the input length. Our protocol relies on hidden order groups, which can be instantiated with a trusted setup from the hardness of factoring (products of safe primes), or without a trusted setup using class groups. The argument system can heuristically be made non-interactive using the Fiat-Shamir transform. Our proof builds on DARK (Bunz et al., Eurocrypt 2020), a recent succinct and efficiently verifiable polynomial commitment scheme. We show how to implement a variant of DARK in a time- and space-efficient way. Along the way we: 1. Identify a significant gap in the proof of security of DARK. 2. Give a non-trivial modification of the DARK scheme that overcomes the aforementioned gap. The modified version also relies on significantly weaker cryptographic assumptions than those in the original DARK scheme. Our proof utilizes ideas from the theory of integer lattices in a novel way. 3. Generalize Pietrzak's (ITCS 2019) proof of exponentiation (PoE) protocol to work with general groups of unknown order (without relying on any cryptographic assumption). In proving these results, we develop general-purpose techniques for working with (hidden order) groups, which may be of independent interest.
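Item 3 refers to Pietrzak's proof of exponentiation; as background, here is a compact sketch of the basic halving recursion, written non-interactively with a hash-derived challenge and over the integers modulo N as a stand-in for a group of unknown order. The toy modulus, the 128-bit challenge, and the restriction to a power-of-two T are illustrative simplifications, not the paper's generalization.

    import hashlib

    # Claim: y = x^(2^T) mod N, with T a power of two. Each round halves T using the
    # midpoint mu = x^(2^(T/2)) and a pseudorandom challenge r.
    def challenge(*vals: int) -> int:
        h = hashlib.sha256(b"|".join(str(v).encode() for v in vals)).digest()
        return int.from_bytes(h, "big") >> 128   # 128-bit challenge (illustrative)

    def prove(x: int, y: int, T: int, N: int) -> list:
        proof = []
        while T > 1:
            mu = pow(x, pow(2, T // 2), N)
            proof.append(mu)
            r = challenge(x, y, mu, T)
            x = pow(x, r, N) * mu % N            # x' = x^r * mu
            y = pow(mu, r, N) * y % N            # y' = mu^r * y, so y' = x'^(2^(T/2))
            T //= 2
        return proof

    def verify(x: int, y: int, T: int, N: int, proof: list) -> bool:
        for mu in proof:
            r = challenge(x, y, mu, T)
            x = pow(x, r, N) * mu % N
            y = pow(mu, r, N) * y % N
            T //= 2
        return T == 1 and y == pow(x, 2, N)

    N, x, T = 101 * 103, 5, 8                    # toy parameters, illustrative only
    y = pow(x, pow(2, T), N)
    assert verify(x, y, T, N, prove(x, y, T, N))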
2021
CRYPTO
Three Halves Make a Whole? Beating the Half-Gates Lower Bound for Garbled Circuits 📺
We describe a garbling scheme for boolean circuits, in which XOR gates are free and AND gates require communication of $1.5\kappa + 5$ bits. This improves over the state-of-the-art ``half-gates'' scheme of Zahur, Rosulek, and Evans (Eurocrypt 2015), in which XOR gates are free and AND gates cost $2\kappa$ bits. The half-gates paper proved a lower bound of $2\kappa$ bits per AND gate, in a model that captured all known garbling techniques at the time. We bypass this lower bound with a novel technique that we call \textbf{slicing and dicing}, which involves slicing wire labels in half and operating separately on those halves. Ours is the first to bypass the lower bound while being fully compatible with free-XOR, making it a drop-in replacement for half-gates. Our construction is proven secure from a similar assumption to prior free-XOR garbling (circular correlation-robust hash), and uses only slightly more computation than half-gates.
2021
CRYPTO
On the Round Complexity of Black-Box Secure MPC 📺
We consider the question of minimizing the round complexity of secure multiparty computation (MPC) protocols that make a black-box use of simple cryptographic primitives in the setting of security against any number of malicious parties. In the plain model, previous black-box protocols required a high constant number of rounds (>15). This is far from the known lower bound of 4 rounds for protocols with black-box simulators. When allowing a random oblivious transfer (OT) correlation setup, 2-round protocols making a black-box use of a pseudorandom generator were previously known. However, such protocols were obtained via a round-collapsing ``protocol garbling'' technique that has poor concrete efficiency and makes a non-black-box use of an underlying malicious-secure protocol. We improve this state of affairs by presenting the following types of black-box protocols. a. 4-round ``pairwise MPC'' in the plain model. This round-optimal protocol enables each ordered pair of parties to compute a function of both inputs whose output is delivered to the second party. The protocol makes black-box use of any public-key encryption (PKE) with pseudorandom public keys. As a special case, we get a black-box round-optimal realization of secure (copies of) OT between every ordered pair of parties. b. 2-round MPC from OT correlations. This round-optimal protocol makes a black-box use of any general 2-round MPC protocol satisfying an augmented notion of semi-honest security. In the two-party case, this yields new kinds of 2-round black-box protocols. c. 5-round MPC in the plain model. This protocol makes a black-box use of PKE with pseudorandom public keys, and 2-round oblivious transfer with ``semi-malicious'' security. A key technical tool for the first result is a novel combination of split-state non-malleable codes (Dziembowski, Pietrzak, and Wichs, JACM '18) with standalone secure {\em two-party} protocols. The second result is based on a new round-optimized variant of the ``IPS compiler'' (Ishai, Prabhakaran and Sahai, Crypto '08). The third result is obtained via a specialized combination of these two techniques.
2021
CRYPTO
Fine-grained Secure Attribute-based Encryption 📺
Fine-grained cryptography constructs cryptosystems in a setting where an adversary's resources are a priori bounded and an honest party has fewer resources than the adversary. Currently, only simple forms of encryption schemes, such as secret-key and public-key encryption, have been constructed in this setting. In this paper, we enrich the available tools in fine-grained cryptography by proposing the first fine-grained secure attribute-based encryption (ABE) scheme. Our construction is adaptively secure under the widely accepted worst-case assumption, $NC^1 \subsetneq \oplus L/poly$, and it is presented in a generic manner using the notion of predicate encodings (Wee, TCC'14). By properly instantiating the underlying encoding, we can obtain different types of ABE schemes, including identity-based encryption. Previously, all of these schemes were unknown in fine-grained cryptography. Our main technical contribution is constructing ABE schemes without using pairings or the Diffie-Hellman assumption. Hence, our results show that, even if one-way functions do not exist, we still have ABE schemes with meaningful security. As a further application of our techniques, we construct an efficient (quasi-adaptive) non-interactive zero-knowledge (QA-NIZK) proof system.
2021
CRYPTO
Computational Hardness of Optimal Fair Computation: Beyond Minicrypt 📺
Secure multi-party computation allows mutually distrusting parties to compute securely over their private data. However, guaranteeing output delivery to honest parties when the adversarial parties may abort the protocol has been a challenging objective. As a representative task, this work considers two-party coin-tossing protocols with guaranteed output delivery, a.k.a., fair coin-tossing. In the information-theoretic plain model, as in two-party zero-sum games, one of the parties can force an output with certainty. In the commitment-hybrid, any $r$-message coin-tossing protocol is ${1/\sqrt r}$-unfair, i.e., the adversary can change the honest party's output distribution by $1/\sqrt r$ in the statistical distance. Moran, Naor, and Segev (TCC--2009) constructed the first $1/r$-unfair protocol in the oblivious transfer-hybrid. No further security improvement is possible because Cleve (STOC--1986) proved that $1/r$-unfairness is unavoidable. Therefore, Moran, Naor, and Segev's coin-tossing protocol is optimal. However, is oblivious transfer necessary for optimal fair coin-tossing? Maji and Wang (CRYPTO--2020) proved that any coin-tossing protocol using one-way functions in a black-box manner is at least $1/\sqrt r$-unfair. That is, optimal fair coin-tossing is impossible in Minicrypt. Our work focuses on tightly characterizing the hardness of computation assumption necessary and sufficient for optimal fair coin-tossing within Cryptomania, outside Minicrypt. Haitner, Makriyannis, Nissim, Omri, Shaltiel, and Silbak (FOCS--2018 and TCC--2018) proved that better than $1/\sqrt r$-unfairness, for any constant $r$, implies the existence of a key-agreement protocol. We prove that any coin-tossing protocol using public-key encryption (or, multi-round key agreement protocols) in a black-box manner must be $1/\sqrt r$-unfair. Next, our work entirely characterizes the additional power of secure function evaluation functionalities for optimal fair coin-tossing. We augment the model with an idealized secure function evaluation of $f$, a.k.a., the $f$-hybrid. If $f$ is complete, that is, oblivious transfer is possible in the $f$-hybrid, then optimal fair coin-tossing is also possible in the $f$-hybrid. On the other hand, if $f$ is not complete, then a coin-tossing protocol using public-key encryption in a black-box manner in the $f$-hybrid is at least $1/\sqrt r$-unfair.
2021
CRYPTO
Constructing Locally Leakage-resilient Linear Secret-sharing Schemes 📺
Innovative side-channel attacks have repeatedly falsified the assumption that cryptographic implementations are opaque black-boxes. Therefore, it is essential to ensure cryptographic constructions' security even when information leaks via unforeseen avenues. One such fundamental cryptographic primitive is secret sharing, which underlies nearly all threshold cryptography. Our understanding of the leakage-resilience of secret-sharing schemes is still in its preliminary stage. This work studies locally leakage-resilient linear secret-sharing schemes. An adversary can obtain $m$ bits of arbitrary local leakage from each of the $n$ secret shares. However, in a locally leakage-resilient secret-sharing scheme, the leakage's joint distribution reveals no additional information about the secret. For every constant $m$, we prove that the Massey secret-sharing scheme corresponding to a random linear code of dimension $k$ (over sufficiently large prime fields) is locally leakage-resilient, where $k/n > 1/2$ is a constant. The previous best construction by Benhamouda, Degwekar, Ishai, Rabin (CRYPTO--2018) needed $k/n > 0.907$. A technical challenge arises because the number of all possible $m$-bit local leakage functions is exponentially larger than the number of random linear codes. Our technical innovation begins with identifying an appropriate pseudorandomness-inspired family of tests; passing them suffices to ensure leakage-resilience. We show that most linear codes pass all tests in this family. This Monte-Carlo construction of a locally leakage-resilient linear secret-sharing scheme has applications to leakage-resilient secure computation. Furthermore, we highlight a crucial bottleneck for all the analytical approaches in this line of work. Benhamouda et al. introduced an analytical proxy to study the leakage-resilience of secret-sharing schemes; if the proxy is small, then the scheme is leakage-resilient. However, we present a one-bit local leakage function demonstrating that the converse is false, motivating the need for new analytically well-behaved functions that capture leakage-resilience more accurately. Technically, the analysis involves probabilistic and combinatorial techniques and (discrete) Fourier analysis. The family of new ``tests'' capturing local leakage functions, we believe, is of independent and broader interest.
2021
CRYPTO
No Time to Hash: On Super-Efficient Entropy Accumulation 📺
Real-world random number generators (RNGs) cannot afford to use (slow) cryptographic hashing every time they refresh their state R with a new entropic input X. Instead, they use ``super-efficient'' simple entropy-accumulation procedures, such as R <- rot_{alpha, n}(R) XOR X where rot_{alpha,n} rotates an n-bit state R by some fixed number alpha. For example, Microsoft's RNG uses alpha=5 for n=32 and alpha=19 for n=64. Where do these numbers come from? Are they good choices? Should rotation be replaced by a better permutation pi of the input bits? In this work we initiate a rigorous study of these pragmatic questions, by modeling the sequence of successive entropic inputs X_1,X_2, ... as independent (but otherwise adversarial) samples from some natural distribution family D. We show a simple but surprisingly powerful connection between entropy accumulation and understanding the Fourier spectrum of distributions in D. Our contribution is as follows. - We define 2-monotone distributions as a rich family D that includes relevant real-world distributions (Gaussian, exponential, etc.), but avoids trivial impossibility results. - For any alpha with gcd(alpha,n)=1, we show that rotation accumulates Omega(n) bits of entropy from n independent samples X_1,...,X_n from any (unknown) 2-monotone distribution with entropy k > 1. - However, we also show some choices of alpha perform much better than others for a given n. E.g., we show alpha=19 is one of the best choices for n=64; in contrast, alpha=5 is good, but generally worse than alpha=7, for n=32. - More generally, given a permutation pi and k > 1, we define a simple parameter, the covering number C_{pi,k}, and show that it characterizes the number of steps before the rule (R_1,...,R_n) <- (R_{pi(1)},..., R_{pi(n)}) XOR X accumulates nearly n bits of entropy from independent, 2-monotone samples of min-entropy k each. - We build a simple permutation pi^*, which achieves nearly optimal C_{pi^*,k} \approx n/k for all values of k simultaneously, and experimentally validate that it compares favorably with all rotations rot_{alpha,n}.
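The accumulation rule above is short enough to write out; this sketch implements R <- rot_{alpha,n}(R) XOR X with the rotation amounts quoted in the abstract (alpha = 5 for n = 32, alpha = 19 for n = 64), and uses os.urandom purely as a stand-in for the entropic samples X_1, X_2, ...

    import os

    def rot(r: int, alpha: int, n: int) -> int:
        # Rotate an n-bit word left by alpha positions.
        return ((r << alpha) | (r >> (n - alpha))) & ((1 << n) - 1)

    def accumulate(state: int, sample: int, alpha: int = 19, n: int = 64) -> int:
        # The "super-efficient" rule discussed above: R <- rot_{alpha,n}(R) XOR X.
        return rot(state, alpha, n) ^ (sample & ((1 << n) - 1))

    state = 0
    for _ in range(8):                                # placeholder entropy source
        state = accumulate(state, int.from_bytes(os.urandom(8), "little"))
    print(hex(state))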
2021
CRYPTO
White Box Traitor Tracing 📺
Traitor tracing aims to identify the source of leaked decryption keys. Since the ``traitor'' can try to hide their key within obfuscated code in order to evade tracing, the tracing algorithm should work for general, potentially obfuscated, decoder \emph{programs}. In the setting of such general decoder programs, prior work uses \emph{black box} tracing: the tracing algorithm ignores the implementation of the decoder, and instead traces just by making queries to the decoder and observing the outputs. We observe that, in some settings, such black box tracing leads to consistency and user privacy issues. On the other hand, these issues do not appear inherent to \emph{white box} tracing, where the tracing algorithm actually inspects the decoder implementation. We therefore develop new white box traitor tracing schemes providing consistency and/or privacy. Our schemes can be instantiated under various assumptions ranging from public key encryption to indistinguishability obfuscation, with different trade-offs. To the best of our knowledge, ours is the first work to consider white box tracing in the general decoder setting.
2021
CRYPTO
Oblivious Key-Value Stores and Amplification for Private Set Intersection 📺
Many recent private set intersection (PSI) protocols encode input sets as polynomials. We consider the more general notion of an oblivious key-value store (OKVS), which is a data structure that compactly represents a desired mapping from keys $k_i$ to values $v_i$. When the $v_i$ values are random, the OKVS data structure hides the $k_i$ values that were used to generate it. The simplest (and size-optimal) OKVS is a polynomial $p$ that is chosen using interpolation such that $p(k_i)=v_i$. We initiate the formal study of oblivious key-value stores, and show new constructions resulting in the fastest OKVS to date. Similarly to cuckoo hashing, current analysis techniques are insufficient for finding *concrete* parameters to guarantee a small failure probability for our OKVS constructions. Moreover, it would cost too much to run experiments to validate a small upper bound on the failure probability. We therefore show novel techniques to amplify an OKVS construction which has failure probability $p$ into an OKVS with similar overhead and failure probability $p^c$. Setting $p$ to be moderately small makes it possible to validate it by running a relatively small number, $O(1/p)$, of experiments. This validates a $p^c$ failure probability for the amplified OKVS. Finally, we describe how OKVS can significantly improve the state of the art of essentially all variants of PSI. This leads to the fastest two-party PSI protocols to date, for both the semi-honest and the malicious settings. Specifically, in networks with moderate bandwidth (e.g., 30 - 300 Mbps) our malicious two-party PSI protocol has 40% less communication and is 20-40% faster than the previous state-of-the-art protocol, even though the latter only has heuristic confidence.
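The size-optimal polynomial OKVS mentioned above can be sketched directly: Encode interpolates a polynomial with $p(k_i)=v_i$ over a prime field and publishes its coefficient vector, and Decode simply evaluates it. The modulus and the quadratic-time Lagrange interpolation below are illustrative simplifications; the paper's fast OKVS constructions are different data structures.

    P = 2**61 - 1   # field modulus (an arbitrary Mersenne prime, for illustration)

    def poly_mul(a: list, b: list) -> list:
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] = (out[i + j] + ai * bj) % P
        return out

    def okvs_encode(pairs: dict) -> list:
        # Lagrange interpolation: returns the coefficients of p with p(k) = v for all pairs.
        keys = list(pairs)
        coeffs = [0] * len(keys)
        for ki in keys:
            num, den = [1], 1
            for kj in keys:
                if kj != ki:
                    num = poly_mul(num, [(-kj) % P, 1])    # multiply by (x - kj)
                    den = den * ((ki - kj) % P) % P
            scale = pairs[ki] * pow(den, -1, P) % P
            for d, c in enumerate(num):
                coeffs[d] = (coeffs[d] + scale * c) % P
        return coeffs

    def okvs_decode(coeffs: list, k: int) -> int:
        acc = 0
        for c in reversed(coeffs):                          # Horner evaluation
            acc = (acc * k + c) % P
        return acc

    store = okvs_encode({11: 7, 22: 9, 33: 4})
    assert okvs_decode(store, 22) == 9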
2021
CRYPTO
MPC-Friendly Symmetric Cryptography from Alternating Moduli: Candidates, Protocols, and Applications 📺
We study new candidates for symmetric cryptographic primitives that leverage alternation between linear functions over $\mathbb{Z}_2$ and $\mathbb{Z}_3$ to support fast protocols for secure multiparty computation (MPC). This continues the study of weak pseudorandom functions of this kind initiated by Boneh et al. (TCC 2018) and Cheon et al. (PKC 2021). We make the following contributions. (Candidates). We propose new designs of symmetric primitives based on alternating moduli. These include candidate one-way functions, pseudorandom generators, and weak pseudorandom functions. We propose concrete parameters based on cryptanalysis. (Protocols). We provide a unified approach for securely evaluating modulus-alternating primitives in different MPC models. For the original candidate of Boneh et al., our protocols obtain at least 2x improvement in all performance measures. We report efficiency benchmarks of an optimized implementation. (Applications). We showcase the usefulness of our candidates for a variety of applications. This includes short ``Picnic-style'' signature schemes, as well as protocols for oblivious pseudorandom functions, hierarchical key derivation, and distributed key generation for function secret sharing.
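To make the alternating-moduli idea concrete, here is a minimal sketch in the style of the Boneh et al. weak-PRF candidate referenced above: a secret binary matrix is applied to the input over Z_2, and the resulting bits are then summed over Z_3. The dimensions and the sampling below are placeholders, not the concrete parameters or the new candidates proposed in the paper.

    import secrets

    n = m = 256                                     # illustrative dimensions

    def keygen() -> list:
        # K is an m x n binary matrix; each row is stored as an n-bit integer.
        return [secrets.randbits(n) for _ in range(m)]

    def f(K: list, x: int) -> int:
        # y = K*x over Z_2 (bit i is the parity of <row_i, x>), then sum the bits mod 3.
        y_bits = [bin(row & x).count("1") & 1 for row in K]
        return sum(y_bits) % 3

    K = keygen()
    print(f(K, secrets.randbits(n)))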
2021
CRYPTO
Counterexamples to New Circular Security Assumptions Underlying iO 📺
We study several strengthenings of classical circular security assumptions which were recently introduced in four new lattice-based constructions of indistinguishability obfuscation: Brakerski-D\"ottling-Garg-Malavolta (Eurocrypt 2020), Gay-Pass (STOC 2021), Brakerski-D\"ottling-Garg-Malavolta (Eprint 2020) and Wee-Wichs (Eprint 2020). We provide explicit counterexamples to the {\em $2$-circular shielded randomness leakage} assumption w.r.t.\ the Gentry-Sahai-Waters fully homomorphic encryption scheme proposed by Gay-Pass, and the {\em homomorphic pseudorandom LWE samples} conjecture proposed by Wee-Wichs. Our work suggests a separation between classical circular security of the kind underlying un-levelled fully-homomorphic encryption and the strengthened versions underlying recent iO constructions, showing that they are not (yet) on the same footing. Our counterexamples exploit the flexibility to choose specific implementations of circuits, which is explicitly allowed in the Gay-Pass assumption and unspecified in the Wee-Wichs assumption. Their indistinguishability obfuscation schemes are still unbroken. Our work shows that the assumptions, at least, need refinement. In particular, generic leakage-resilient circular security assumptions are delicate, and their security is sensitive to the specific structure of the leakages involved.
2021
CRYPTO
Halo Infinite: Proof-Carrying Data from Additive Polynomial Commitments 📺
Polynomial commitment schemes (PCS) have recently been in the spotlight for their key role in building SNARKs. A PCS provides the ability to commit to a polynomial over a finite field and prove its evaluation at points. A *succinct* PCS has commitment size and evaluation proof size sublinear in the degree of the polynomial. An *efficient* PCS has sublinear proof verification. Any efficient and succinct PCS can be used to construct a SNARK with similar security and efficiency characteristics (in the random oracle model). Proof-carrying data (PCD) enables a set of parties to carry out an indefinitely long distributed computation where every step along the way is accompanied by a proof of correctness. It generalizes *incrementally verifiable computation* and can even be used to construct SNARKs. Until recently, however, the only known method for constructing PCD required expensive SNARK recursion. A system called *Halo* first demonstrated a new methodology for building PCD without SNARKs, exploiting an aggregation property of the *Bulletproofs* inner-product argument. The construction was *heuristic* because it makes non-black-box use of a concrete instantiation of the Fiat-Shamir transform. We expand upon this methodology to show that PCD can be (heuristically) built from any homomorphic polynomial commitment scheme (PCS), even if the PCS evaluation proofs are neither succinct nor efficient. In fact, the Halo methodology extends to any PCS that has an even more general property, namely the ability to aggregate linear combinations of commitments into a new succinct commitment that can later be opened to this linear combination. Our results thus imply new constructions of SNARKs and PCD that were not previously described in the literature and serve as a blueprint for future constructions as well.
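The aggregation property the construction relies on can be stated in one line; the schematic identity below is the additive homomorphism of the commitment scheme (notation ours, for illustration), which is what lets a verifier combine many commitments into one and check a single opening of the combination.

    % Given commitments C_i = Com(p_i) to polynomials p_i and public scalars alpha_i,
    % additivity lets anyone compute a commitment to their linear combination:
    $$ \mathrm{Com}\Bigl(\sum_i \alpha_i\, p_i\Bigr) \;=\; \sum_i \alpha_i\, C_i , $$
    % so a single evaluation proof for the right-hand side vouches for a random
    % linear combination of the underlying evaluation claims.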
2021
CRYPTO
Large Message Homomorphic Secret Sharing from DCR and Applications 📺
We present the first homomorphic secret sharing (HSS) construction that simultaneously (1) has negligible correctness error, (2) supports integers from an exponentially large range, and (3) relies on an assumption not known to imply FHE --- specifically, the Decisional Composite Residuosity (DCR) assumption. This resolves an open question posed by Boyle, Gilboa, and Ishai (Crypto 2016). Homomorphic secret sharing is analogous to fully-homomorphic encryption, except the ciphertexts are shared across two non-colluding evaluators. Previous constructions of HSS either had non-negligible correctness error and polynomial-size plaintext space or were based on the stronger LWE assumption. We also present two applications of our technique: a multi-server ORAM with constant bandwidth overhead, and a rate-$1$ trapdoor hash function with negligible error rate.
2021
CRYPTO
Towards faster polynomial-time lattice reduction 📺
The LLL algorithm is a polynomial-time algorithm for reducing a d-dimensional lattice with an exponential approximation factor. Currently, the most efficient variant of LLL, by Neumaier and Stehl\'e, has a theoretical running time of $d^4\cdot B^{1+o(1)}$ where $B$ is the bitlength of the entries, but has never been implemented. This work introduces new asymptotically fast, parallel, yet heuristic, reduction algorithms with their optimized implementations. Our algorithms are recursive and fully exploit fast block matrix multiplication. We experimentally demonstrate that by carefully controlling the floating-point precision during the recursion steps, we can reduce Euclidean lattices of rank d in time $\tilde{O}(d^\omega\cdot C)$, i.e., almost a constant number of matrix multiplications, where $\omega$ is the exponent of matrix multiplication and C is the log of the condition number of the matrix. For cryptographic applications, C is close to B, while it can be up to d times larger in the worst case. This improves the running time of the state-of-the-art implementation fplll by a multiplicative factor of order $d^2\cdot B$. Further, we show that we can reduce structured lattices, the so-called knapsack lattices, in time $\tilde{O}(d^{\omega-1}\cdot C)$ with a progressive reduction strategy. Besides allowing the reduction of huge lattices, our implementation can break several instances of Fully Homomorphic Encryption schemes based on large integers in dimension 2,230 with 4 million bits.
2021
CRYPTO
Composition with Knowledge Assumptions 📺
Zero-knowledge succinct non-interactive arguments (zk-SNARKs) rely on knowledge assumptions for their security. Meanwhile, as the complexity and scale of cryptographic systems continue to grow, the composition of secure protocols is of vital importance. The current gold standards of composable security, the Universal Composability and Constructive Cryptography frameworks, cannot capture knowledge assumptions, as their core proofs of composition prohibit white-box extraction. In this paper, we present a formal model allowing the composition of knowledge assumptions. Despite showing impossibility for the general case, we demonstrate the model's usefulness when limiting knowledge assumptions to a few instances of protocols at a time. We finish by providing the first instance of a simultaneously succinct and composable zk-SNARK, by using existing results within our framework.
2021
CRYPTO
Low-Complexity Weak Pseudorandom Functions in AC0[MOD2] 📺
A *weak pseudorandom function* (WPRF) is a keyed function $f_k:\{0,1\}^n\to\{0,1\}$ such that, for a random key $k$, a collection of samples $(x, f_k(x))$, for {\em uniformly random} inputs $x$, cannot be efficiently distinguished from totally random input-output pairs $(x,y)$. We study WPRFs in AC0[MOD2], the class of functions computable by AC0 circuits with parity gates, making the following contributions. - *Between Lapland and Cryptomania.* We show that WPRFs in AC0[MOD2] imply a variant of the Learning Parity with Noise (LPN) assumption. This gives an unconditional version of an earlier conditional result of Akavia et al. (ITCS 2014). We further show that WPRFs in a subclass of AC0[MOD2] that includes a recent WPRF candidate by Boyle et al. (FOCS 2020) imply, under a seemingly weak additional conjecture, public-key encryption. - *WPRF by sparse polynomials.* We propose the first WPRF candidate that can be computed by sparse multivariate polynomials over $\mathbb{F}_2$. We prove that it has subexponential security against linear and algebraic attacks. - *WPRF in AC0 ◦ MOD2.* We study the existence of WPRFs computed by AC0 circuits \emph{over} parity gates. We propose a modified version of a previous WPRF candidate of Akavia et al., and prove that it resists the algebraic attacks that were used by Bogdanov and Rosen (ECCC 2017) to break the original candidate in quasipolynomial time. We give evidence against the possibility of using {\em public} parity gates and relate this question to other conjectures.
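To make the object concrete, here is a toy evaluation of a WPRF-style candidate computed by a sparse multivariate polynomial over F_2: the key is a small set of monomials, and the output is their sum on the input bits. This only illustrates the function class (sparse F_2 polynomials with the security game played on random inputs); it is not the specific candidate or parameter choice proposed in the paper.

    import secrets

    def keygen(n, num_monomials, monomial_width):
        # Key = a sparse set of monomials, each a small subset of the n input positions.
        rng = secrets.SystemRandom()
        return [frozenset(rng.sample(range(n), monomial_width))
                for _ in range(num_monomials)]

    def eval_wprf(key, x):
        # f_k(x) = sum over the key's monomials of the product of selected bits, mod 2.
        return sum(all(x[i] for i in mono) for mono in key) % 2

    n = 64
    key = keygen(n, num_monomials=16, monomial_width=3)   # illustrative sizes only
    x = [secrets.randbits(1) for _ in range(n)]           # WPRF security is w.r.t. random x
    print(eval_wprf(key, x))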
2021
CRYPTO
Asymptotically-Good Arithmetic Secret Sharing over Z/p^{\ell}Z with Strong Multiplication and Its Applications to Efficient MPC 📺
The current paper studies information-theoretically secure multiparty computation (MPC) over rings $\mathbb{Z}/p^{\ell}\mathbb{Z}$. It is a follow-up to recent work on MPC over rings $\mathbb{Z}/p^{\ell}\mathbb{Z}$. In the work of \cite[TCC2019]{tcc}, a protocol based on Shamir secret sharing over $\mathbb{Z}/p^{\ell}\mathbb{Z}$ was presented. As in the field case, its limitation is that the share size has to grow as the number of players increases. Several MPC protocols were then developed in \cite[Asiacrypt 2020]{asiacrypt} to overcome this limitation. However, the MPC protocols in \cite[Asiacrypt 2020]{asiacrypt} suffer from several drawbacks: (i) the offline multiplication gate has super-linear communication complexity; (ii) the share size is doubled for the most important case, namely over $\mathbb{Z}/2^{\ell}\mathbb{Z}$, due to the infeasibility of lifting self-orthogonal codes from fields to rings; (iii) most importantly, the BGW model could not be applied via the secret sharing given in \cite[Asiacrypt 2020]{asiacrypt} due to the lack of strong multiplication. Our contribution in this paper is threefold. Firstly, we overcome all the drawbacks in \cite{tcc,asiacrypt} mentioned above. Secondly, we establish an arithmetic secret sharing with strong multiplication, which is the most important primitive in the BGW model. Thirdly, we lift Reverse Multiplication Friendly Embeddings (RMFE) from fields to rings, with the same (linear) complexity. Note that RMFE has become a standard technique for amortized communication complexity in MPC, as in \cite[CRYPTO'18]{crypto2018} and \cite[CRYPTO'19]{dn19}. To obtain our theoretical results, we use the existence of lifts of curves over rings, then use the known results stating that Riemann-Roch spaces are free modules. To make our scheme practical, we start from good algebraic geometry codes over finite fields obtained from existing computational techniques. Then we present, and implement, an efficient algorithm to Hensel-lift the generating matrix of the code, such that the multiplicative conditions are preserved over rings. Existence of this specific lift is guaranteed by the previous theory. On the other hand, a random lifting of codes from fields to Galois rings does not preserve multiplicativity in general. (Our indirect method is motivated by the fact that following the theory directly would require ``preprocessing'' the curve into a form with ``smooth'' equations, in particular with many variables, before lifting it; computing on such objects over rings is beyond the scope of existing research.) Finally, we provide efficient elementary methods for sharing and (robust) reconstruction of secrets over rings. As a result, arithmetic secret sharing over $\mathbb{Z}/p^{\ell}\mathbb{Z}$ with strong multiplication can be efficiently constructed and practically applied.
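The limitation mentioned above, that basic Shamir sharing over $\mathbb{Z}/p^{\ell}\mathbb{Z}$ supports only as many players as there are evaluation points that stay distinct modulo $p$, can be seen in the following minimal sketch. This is plain Shamir sharing over $\mathbb{Z}/p^{\ell}\mathbb{Z}$ for illustration, not the strongly multiplicative scheme constructed in the paper.

    import secrets

    def share(secret, n, t, p, l):
        # Shamir sharing over Z/p^l with threshold t. Evaluation points 1..n must be
        # distinct modulo p so that their pairwise differences are units in Z/p^l;
        # this is why the basic scheme supports at most p - 1 players.
        assert n < p
        q = p ** l
        coeffs = [secret % q] + [secrets.randbelow(q) for _ in range(t - 1)]
        return [(i, sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q)
                for i in range(1, n + 1)]

    def reconstruct(shares, p, l):
        # Lagrange interpolation at 0; every inverted difference is a unit mod p^l.
        q = p ** l
        s = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if j != i:
                    num = num * (-xj) % q
                    den = den * (xi - xj) % q
            s = (s + yi * num * pow(den, -1, q)) % q
        return s

    p, l, t, n = 7, 4, 3, 5                       # toy parameters: ring Z/7^4
    shares = share(1234, n, t, p, l)
    assert reconstruct(shares[:t], p, l) == 1234 % p ** l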
2021
CRYPTO
Separating Adaptive Streaming from Oblivious Streaming using the Bounded Storage Model 📺
Streaming algorithms are algorithms for processing large data streams, using only a limited amount of memory. Classical streaming algorithms typically work under the assumption that the input stream is chosen independently from the internal state of the algorithm. Algorithms that utilize this assumption are called oblivious algorithms. Recently, there has been growing interest in studying streaming algorithms that maintain utility even when the input stream is chosen by an adaptive adversary, possibly as a function of previous estimates given by the streaming algorithm. Such streaming algorithms are said to be adversarially-robust. By combining techniques from learning theory with cryptographic tools from the bounded storage model, we separate the oblivious streaming model from the adversarially-robust streaming model. Specifically, we present a streaming problem for which every adversarially-robust streaming algorithm must use polynomial space, while there exists a classical (oblivious) streaming algorithm that uses only polylogarithmic space. This is the first general separation between the capabilities of these two models, resolving one of the central open questions in adversarially robust streaming.
2021
CRYPTO
Sublinear GMW-Style Compiler for MPC with Preprocessing 📺
We consider the efficiency of protocols for secure multiparty computation (MPC) with a dishonest majority. A popular approach for the design of such protocols is to employ {\em preprocessing}. Before the inputs are known, the parties generate correlated secret randomness, which is consumed by a fast and ``information-theoretic'' online protocol. A powerful technique for securing such protocols against malicious parties uses {\em homomorphic MACs} to authenticate the values produced by the online protocol. Compared to a baseline protocol, which is only secure against semi-honest parties, this involves a significant increase in the size of the correlated randomness, by a factor of up to a statistical security parameter. Different approaches for partially mitigating this extra storage cost come at the expense of increasing the online communication. In this work we propose a new technique for protecting MPC with preprocessing against malicious parties. We show that for circuit evaluation protocols that satisfy mild security and structural requirements, that are met by almost all standard protocols with semi-honest security, the extra {\em additive} storage and online communication costs are both {\em logarithmic} in the circuit size. This applies to Boolean circuits and to arithmetic circuits over fields or rings, and to both information-theoretic and computationally secure protocols. Our protocol can be viewed as a sublinear information-theoretic variant of the celebrated ``GMW compiler'' that applies to MPC with preprocessing. Our compiler makes a novel use of the techniques of Boneh et al. (Crypto 2019) for sublinear distributed zero knowledge, which were previously only used in the setting of {\em honest-majority} MPC.
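The homomorphic-MAC idea referred to above can be illustrated with the standard information-theoretic MAC tag(x) = α·x over a field: tags of a linear combination of values equal the same linear combination of the tags, so sums and public-scalar products of authenticated values need no new key material. This is a generic illustration of that style of MAC, not the specific authentication mechanism or parameters of the compiler in the paper.

    # Information-theoretic homomorphic MAC over F_p: tag(x) = alpha * x mod p.
    import secrets

    p = 2 ** 61 - 1                 # a Mersenne prime, used here only for illustration
    alpha = secrets.randbelow(p)    # global MAC key (secret-shared in a real protocol)

    def tag(x):
        return alpha * x % p

    x, y, c = 42, 1000, 7
    # Linearity: the MAC of x + y and of c * x follows from the existing tags.
    assert (tag(x) + tag(y)) % p == tag((x + y) % p)
    assert c * tag(x) % p == tag(c * x % p)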
2021
CRYPTO
You Only Speak Once: Secure MPC with Stateless Ephemeral Roles 📺
The inherent difficulty of maintaining stateful environments over long periods of time gave rise to the paradigm of serverless computing, where mostly-stateless components are deployed on demand to handle computation tasks, and are torn down once their task is complete. Serverless architecture could offer the added benefit of improved resistance to targeted denial-of-service attacks, by hiding from the attacker the physical machines involved in the protocol until after they complete their work. Realizing such protection, however, requires that the protocol only uses stateless parties, where each party sends only one message and never needs to speak again. Perhaps the most famous example of this style of protocols is the Nakamoto consensus protocol used in Bitcoin: A peer can win the right to produce the next block by running a local lottery (mining), all while staying covert. Once the right has been won, it is executed by sending a single message. After that, the physical entity never needs to send more messages. We refer to this as the You-Only-Speak-Once (YOSO) property, and initiate the formal study of it within a new model that we call the YOSO model. Our model is centered around the notion of roles, which are stateless parties that can only send a single message. Crucially, our modelling separates the protocol design, which only uses roles, from the role-assignment mechanism, which assigns roles to actual physical entities. This separation enables studying these two aspects separately, and our YOSO model in this work only deals with the protocol-design aspect. We describe several techniques for achieving YOSO MPC, both computational and information-theoretic. Our protocols are synchronous and provide guaranteed output delivery (which is important for application domains such as blockchains), assuming an honest majority of roles in every time step. We describe a practically efficient computationally-secure protocol, as well as a proof-of-concept information-theoretically secure protocol.
2021
CRYPTO
Two-Round Trip Schnorr Multi-Signatures via Delinearized Witnesses 📺
We construct a two-round Schnorr-based signature scheme (DWMS) by delinearizing two pre-witnesses supplied by each signer. DWMS is a secure signature scheme in the algebraic group model (AGM) and the random oracle model (ROM) under the assumption of the hardness of the one-more discrete logarithm problem and the 2-entwined sum problem that we introduce in this paper. Our new m-entwined sum problem tweaks the k-sum problem in a scalar field using the associated group. We prove the hardness of our new problem in the AGM assuming the hardness of the discrete logarithm problem in the associated group. We believe that our new problem simplifies the security proofs of multi-signature schemes that use the delinearization of witnesses.
2021
CRYPTO
Compact Ring Signatures from Learning With Errors 📺
Ring signatures allow a user to sign a message on behalf of a ``ring'' of signers, while hiding the true identity of the signer. As the degree of anonymity guaranteed by a ring signature is directly proportional to the size of the ring, an important goal in cryptography is to study constructions that minimize the size of the signature as a function of the number of ring members. In this work, we present the first compact ring signature scheme (i.e., where the size of the signature grows logarithmically with the size of the ring) from the (plain) learning with errors (LWE) problem. The construction is in the standard model and it does not rely on a trusted setup or on the random oracle heuristic. In contrast with the prior work of Backes \etal~[EUROCRYPT'2019], our scheme does not rely on bilinear pairings, which allows us to show that the scheme is post-quantum secure assuming the quantum hardness of LWE. At the heart of our scheme is a new construction of compact and statistically witness-indistinguishable ZAP arguments for NP $\cap$ coNP, that we show to be sound based on the plain LWE assumption. Prior to our work, statistical ZAPs (for all of NP) were known to exist only assuming \emph{sub-exponential} LWE. We believe that this scheme might find further applications in the future.
2021
CRYPTO
Threshold Schnorr with Stateless Deterministic Signing from Standard Assumptions 📺
Schnorr's signature scheme permits an elegant threshold signing protocol due to its linear signing equation. However each new signature consumes fresh randomness, which can be a major attack vector in practice. Sources of randomness in deployments are frequently either unreliable, or require state continuity, i.e. reliable fresh state resilient to rollbacks. State continuity is a notoriously difficult guarantee to achieve in practice, due to system crashes caused by software errors, malicious actors, or power supply interruptions (Parno et al., S&P '11). This is a non-issue for Schnorr variants such as EdDSA, which is specified to derive nonces deterministically as a function of the message and the secret key. However, it is challenging to translate these benefits to the threshold setting, specifically to construct a threshold Schnorr scheme where signing neither requires parties to consume fresh randomness nor update long-term secret state. In this work, we construct a dishonest majority threshold Schnorr protocol that enables such stateless deterministic nonce derivation using standardized block ciphers. Our core technical ingredients are new tools for the zero-knowledge from garbled circuits (ZKGC) paradigm to aid in verifying correct nonce derivation: - A mechanism based on UC Commitments that allows a prover to commit once to a witness, and prove an unbounded number of statements online with only cheap symmetric key operations. - A garbling gadget to translate intermediate garbled circuit wire labels to arithmetic encodings. A proof per our scheme requires only a small constant number of exponentiations.
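The stateless, deterministic nonce derivation referred to above is easiest to see in the single-signer setting, where EdDSA-style schemes derive the nonce as a hash of the secret key and the message. The sketch below is a toy single-party Schnorr signature with such a derived nonce (illustrative small parameters; it is not the paper's threshold protocol, whose contribution is precisely achieving this property distributively).

    import hashlib

    # Toy Schnorr group: p = 2q + 1, g generates the subgroup of prime order q.
    p, q, g = 1019, 509, 4

    def H(*parts):
        data = b"|".join(str(x).encode() for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big")

    def sign(sk, msg):
        k = H("nonce", sk, msg) % q          # deterministic nonce: no RNG, no state
        R = pow(g, k, p)
        e = H("chal", R, msg) % q
        s = (k + e * sk) % q
        return R, s

    def verify(pk, msg, sig):
        R, s = sig
        e = H("chal", R, msg) % q
        return pow(g, s, p) == R * pow(pk, e, p) % p

    sk = 123
    pk = pow(g, sk, p)
    assert verify(pk, "hello", sign(sk, "hello"))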
2021
CRYPTO
Cryptanalysis of Full LowMC and LowMC-M with Algebraic Techniques 📺
In this paper, we revisit the difference enumeration techniques for LowMC and develop new algebraic techniques to achieve efficient key-recovery attacks with negligible memory complexity. Benefiting from our technique to reduce the memory complexity, we could significantly improve the attacks on LowMC when the block size is much larger than the key size and even break LowMC instances with such parameters. On the other hand, with our new key-recovery technique, we could significantly improve the time to retrieve the full key if given only a single pair of input and output messages together with the difference trail that they take, which was stated as an interesting question by Rechberger et al. in ToSC 2018. Combining both techniques, with only 2 chosen plaintexts, we could break 4 rounds of LowMC adopting a full S-Box layer with block sizes of 129, 192 and 255 bits, respectively, which are the 3 recommended parameters for Picnic3, an alternative third-round candidate in NIST's Post-Quantum Cryptography competition. We emphasize that our attacks do not indicate that Picnic3 is broken, as the Picnic use-case is very different and an attacker cannot even freely choose 2 plaintexts to encrypt for a concrete LowMC instance. However, such parameters are deemed secure in the latest LowMC. Moreover, many more rounds of the seven instances of the backdoor cipher LowMC-M proposed by Peyrin and Wang at CRYPTO 2020 can be broken without finding the backdoor by making full use of the allowed $2^{64}$ data. The above-mentioned attacks are all achieved with negligible memory.
2021
CRYPTO
Mac'n'Cheese: Zero-Knowledge Proofs for Boolean and Arithmetic Circuits with Nested Disjunctions 📺
Zero knowledge proofs are an important building block in many cryptographic applications. Unfortunately, when the proof statements become very large, existing zero-knowledge proof systems easily reach their limits: either the computational overhead, the memory footprint, or the required bandwidth exceed levels that would be tolerable in practice. We present an interactive zero-knowledge proof system for boolean and arithmetic circuits, called Mac'n'Cheese, with a focus on supporting large circuits. Our work follows the commit-and-prove paradigm instantiated using information-theoretic MACs based on vector oblivious linear evaluation to achieve high efficiency. We additionally show how to optimize disjunctions, with a general OR transformation for proving the disjunction of $m$ statements that has communication complexity proportional to the longest statement (plus an additive term logarithmic in $m$). These disjunctions can further be \emph{nested}, allowing efficient proofs about complex statements with many levels of disjunctions. We also show how to make Mac'n'Cheese non-interactive (after a preprocessing phase) using the Fiat-Shamir transform, and with only a small degradation in soundness. We have implemented the online phase of Mac'n'Cheese and achieve a runtime of 144~ns per AND gate and 1.5~$\mu$s per multiplication gate in $\mathbb{F}_{2^{61}-1}$ when run over a network with a 95~ms latency and a bandwidth of 31.5~Mbps. In addition, we show that the disjunction optimization improves communication as expected: when proving a boolean circuit with eight branches and each branch containing roughly 1 billion multiplications, Mac'n'Cheese requires only 75 more bytes to communicate than in the single branch case.
2021
CRYPTO
Thinking Outside the Superbox 📺
Designing a block cipher or cryptographic permutation can be approached in many different ways. One such approach, popularized by AES, consists in grouping the bits along the S-box boundaries, e.g., in bytes, and in consistently processing them in these groups. This aligned approach leads to hierarchical structures like superboxes that make it possible to reason about the differential and linear propagation properties using combinatorial arguments. In contrast, an unaligned approach avoids any such grouping in the design of transformations. However, without hierarchical structure, sophisticated computer programs are required to investigate the differential and linear propagation properties of the primitive. In this paper, we formalize this notion of alignment and study four primitives that are exponents of different design strategies. We propose a way to analyze the interactions between the linear and the nonlinear layers w.r.t. the differential and linear propagation, and we use it to systematically compare the four primitives using non-trivial computer experiments. We show that alignment naturally leads to different forms of clustering, e.g., of active bits in boxes, of two-round trails in activity patterns, and of trails in differentials and linear approximations.
2021
CRYPTO
Silver: Silent VOLE and Oblivious Transfer from Hardness of Decoding Structured LDPC Codes 📺
We put forth new protocols for oblivious transfer extension and vector OLE, called \emph{Silver}, for SILent Vole and oblivious transfER. Silver offers extremely high performance: generating 10 million random OTs on one core of a standard laptop requires only 300ms of computation and 122KB of communication. This represents 37% less computation and ~1300x less communication than the standard IKNP protocol, as well as ~4x less computation and ~4x less communication than the recent protocol of Yang et al. (CCS 2020). Silver is \emph{silent}: after a one-time cheap interaction, two parties can store small seeds, from which they can later \emph{locally} generate a large number of OTs \emph{while remaining offline}. Neither IKNP nor Yang et al. enjoys this feature; compared to the best known silent OT extension protocol of Boyle et al. (CCS 2019), upon which we build, Silver has 19x less computation and the same communication. Due to its attractive efficiency features, Silver yields major efficiency improvements in numerous MPC protocols. Our approach is a radical departure from the standard paradigm for building MPC protocols, in that we do \emph{not} attempt to base our constructions on a well-studied assumption. Rather, we follow an approach closer in spirit to the standard paradigm in the design of symmetric primitives: we identify a set of fundamental structural properties that allow us to withstand all known attacks, and put forth a candidate design, guided by our analysis. We also rely on extensive experimentation to analyze our candidate and experimentally validate its properties. In essence, our approach boils down to constructing new families of linear codes with (plausibly) high minimum distance and extremely low encoding time. While further analysis is of course warranted to confidently assess the security of Silver, we hope and believe that initiating this approach to the design of MPC primitives will pave the way to new secure primitives with extremely attractive efficiency features.
2021
CRYPTO
A World of SNARKS 📺
Invited talk
2021
CRYPTO
Subtractive Sets over Cyclotomic Rings: Limits of Schnorr-like Arguments over Lattices 📺
We study when (dual) Vandermonde systems of the form `V_T ⋅ z = s⋅w` admit a solution `z` over a ring `R`, where `V_T` is the Vandermonde matrix defined by a set `T` and where the “slack” `s` is a measure of the quality of solutions. To this end, we propose the notion of `(s,t)`-subtractive sets over a ring `R`, with the property that if `S` is `(s,t)`-subtractive then the above (dual) Vandermonde systems defined by any `t`-subset `T ⊆ S` are solvable over `R`. The challenge is then to find large sets `S` while minimising (the norm of) `s` when given a ring `R`. By constructing families of `(s,t)`-subtractive sets `S` of size `n = poly(λ)` over cyclotomic rings `R = ZZ[ζ_{p^ℓ}]` for prime `p`, we construct Schnorr-like lattice-based proofs of knowledge for the SIS relation `A ⋅ x = s ⋅ y mod q` with `O(1/n)` knowledge error, and `s=1` in case `p = poly(λ)`. Our technique slots naturally into the lattice Bulletproof framework from Crypto’20, producing lattice-based succinct arguments for NP with better parameters. We then give matching impossibility results constraining `n` relative to `s`, which suggest that our Bulletproof-compatible protocols are optimal unless fundamentally new techniques are discovered. Noting that the knowledge error of lattice Bulletproofs is `Ω(log k/n)` for witnesses in `R^k` and subtractive set size `n`, our result represents a barrier to practically efficient lattice-based succinct arguments in the Bulletproof framework. Beyond these main results, the concept of `(s,t)`-subtractive sets bridges group-based threshold cryptography to the lattice settings, which we demonstrate by relating it to distributed pseudorandom functions.
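As a concrete illustration of the notation, for a three-element set T = {t_1, t_2, t_3} ⊆ R one natural form of the Vandermonde system in question is the following (a sketch of the standard Vandermonde shape; the paper's exact convention for V_T and its dual may differ):

\[
V_T \cdot z =
\begin{pmatrix}
1 & t_1 & t_1^2 \\
1 & t_2 & t_2^2 \\
1 & t_3 & t_3^2
\end{pmatrix}
\begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}
= s \cdot w ,
\]

and (s,3)-subtractiveness of a set S ⊇ T guarantees that such systems have a solution z over R for every 3-subset T of S.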
2021
CRYPTO
Lower bounds on lattice sieving and information set decoding 📺
In two of the main areas of post-quantum cryptography, based on lattices and codes, nearest neighbor techniques have been used to speed up state-of-the-art cryptanalytic algorithms, and to obtain the lowest asymptotic cost estimates to date [May--Ozerov, Eurocrypt'15; Becker--Ducas--Gama--Laarhoven, SODA'16]. These upper bounds are useful for assessing the security of cryptosystems against known attacks, but to guarantee long-term security one would like to have closely matching lower bounds, showing that improvements on the algorithmic side will not drastically reduce the security in the future. As existing lower bounds from the nearest neighbor literature do not apply to the nearest neighbor problems appearing in this context, one might wonder whether further speedups to these cryptanalytic algorithms can still be found by only improving the nearest neighbor subroutines. We derive new lower bounds on the costs of solving the nearest neighbor search problems appearing in these cryptanalytic settings. For the Euclidean metric we show that for random data sets on the sphere, the locality-sensitive filtering approach of [Becker--Ducas--Gama--Laarhoven, SODA 2016] using spherical caps is optimal, and hence within a broad class of lattice sieving algorithms covering almost all approaches to date, their asymptotic time complexity of $2^{0.292d + o(d)}$ is optimal. Similar conditional optimality results apply to lattice sieving variants, such as the $2^{0.265d + o(d)}$ complexity for quantum sieving [Laarhoven, PhD thesis 2016] and previously derived complexity estimates for tuple sieving [Herold--Kirshanova--Laarhoven, PKC 2018]. For the Hamming metric we derive new lower bounds for nearest neighbor searching which almost match the best upper bounds from the literature [May--Ozerov, Eurocrypt 2015]. As a consequence we derive conditional lower bounds on decoding attacks, showing that also here one should search for improvements elsewhere to significantly undermine security estimates from the literature.
2021
CRYPTO
Non-Interactive Secure Multiparty Computation for Symmetric Functions, Revisited: More Efficient Constructions and Extensions 📺
Non-interactive secure multiparty computation (NIMPC) is a variant of secure computation which allows each of $n$ players to send only a single message depending on his input and correlated randomness. Abelian programs, which can realize any symmetric function, are defined as functions on the sum of the players' inputs over an abelian group and provide useful functionalities for real-world applications. We improve and extend the previous results in the following ways: - We present NIMPC protocols for abelian programs that improve the best known communication complexity. If inputs take any value of an abelian group $\mathbb{G}$, our protocol achieves the communication complexity $O(|\mathbb{G}|(\log|\mathbb{G}|)^2)$ improving $O(|\mathbb{G}|^2n^2)$ of Beimel et al. (Crypto 2014). If players are limited to inputs from subsets of size at most $d$, our protocol achieves $|\mathbb{G}|(\log|\mathbb{G}|)^2(\max\{n,d\})^{(1+o(1))t}$ where $t$ is a corruption threshold. This result improves $|\mathbb{G}|^3(nd)^{(1+o(1))t}$ of Beimel et al. (Crypto 2014), and even $|\mathbb{G}|^{\log n+O(1)}n$ of Benhamouda et al. (Crypto 2017) if $t=o(\log n)$ and $|\mathbb{G}|=n^{\Theta(1)}$. - We propose for the first time NIMPC protocols for linear classifiers that are more efficient than those obtained from the generic construction. - We revisit a known transformation of Benhamouda et al. (Crypto 2017) from Private Simultaneous Messages (PSM) to NIMPC, which we repeatedly use in the above results. We reveal that a sub-protocol used in the transformation does not satisfy the specified security. We also fix their protocol with only constant overhead in the communication complexity. As a byproduct, we obtain an NIMPC protocol for indicator functions with asymptotically optimal communication complexity with respect to the input length.
2021
CRYPTO
Linear Cryptanalysis of FF3-1 and FEA 📺
Improved attacks on generic small-domain Feistel ciphers with alternating round tweaks are obtained using linear cryptanalysis. This results in practical distinguishing and message-recovery attacks on the United States format-preserving encryption standard FF3-1 and the South-Korean standards FEA-1 and FEA-2. The data-complexity of the proposed attacks on FF3-1 and FEA-1 is $O(N^{r/2 - 1.5})$, where $N^2$ is the domain size and $r$ is the number of rounds. For example, FF3-1 with $N = 10^3$ can be distinguished from an ideal tweakable block cipher with advantage $\ge 1/10$ using $2^{23}$ encryption queries. Recovering the left half of a message with similar advantage requires $2^{24}$ data. The analysis of FF3-1 serves as an interesting real-world application of (generalized) linear cryptanalysis over the group $\mathbb{Z}/N\mathbb{Z}$.
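As a sanity check on the stated data complexity (assuming r = 8, the round count of FF3-1, which the abstract does not restate), the formula gives for N = 10^3:

\[
O\!\left(N^{r/2 - 1.5}\right) = O\!\left(N^{2.5}\right) = O\!\left((10^3)^{2.5}\right) = O\!\left(10^{7.5}\right) \approx 2^{24.9},
\]

which is consistent, up to the hidden constant, with the quoted figures of $2^{23}$ queries for distinguishing and $2^{24}$ for recovering half a message.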
2021
CRYPTO
Multi-Input Quadratic Functional Encryption from Pairings 📺
We construct the first multi-input functional encryption (MIFE) scheme for quadratic functions from pairings. Our construction supports a polynomial number of users, where user $i$, for $i \in [n]$, encrypts input $\mathbf{x}_i \in \mathbb{Z}^m$ to obtain ciphertext $\mathsf{ct}_i$, the key generator provides a key $\mathsf{sk}_{\mathbf{c}}$ for vector $\mathbf{c} \in \mathbb{Z}^{(mn)^2}$, and decryption, given $\mathsf{ct}_1,\ldots,\mathsf{ct}_n$ and $\mathsf{sk}_{\mathbf{c}}$, recovers $\langle \mathbf{c}, \mathbf{x} \otimes \mathbf{x} \rangle$ (where $\mathbf{x}$ denotes the concatenation of $\mathbf{x}_1,\ldots,\mathbf{x}_n$) and nothing else. We achieve indistinguishability-based (selective) security against unbounded collusions under the standard bilateral matrix Diffie-Hellman assumption. All previous MIFE schemes either support only inner products (linear functions) or rely on strong cryptographic assumptions such as indistinguishability obfuscation or multi-linear maps.
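For clarity, the plaintext functionality recovered at decryption (not the pairing-based construction itself) is simply the inner product of the key vector with the tensor square of the concatenated inputs, as in this sketch:

    def mife_quadratic_functionality(xs, c):
        # xs: list of n input vectors, each of length m; c: key vector of length (m*n)**2.
        # Returns <c, x (x) x> where x is the concatenation of all users' inputs.
        x = [v for xi in xs for v in xi]
        tensor = [a * b for a in x for b in x]
        assert len(c) == len(tensor)
        return sum(ci * ti for ci, ti in zip(c, tensor))

    # Two users with m = 2: x = (1, 2, 3, 4); c selects the single product x[1] * x[2] = 6.
    xs = [[1, 2], [3, 4]]
    c = [0] * 16
    c[1 * 4 + 2] = 1          # coefficient of x[1] * x[2] in row-major order
    print(mife_quadratic_functionality(xs, c))   # 6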
2021
CRYPTO
Deniable Fully Homomorphic Encryption from Learning With Errors 📺
We define and construct {\it Deniable Fully Homomorphic Encryption} based on the Learning With Errors (LWE) polynomial hardness assumption. Deniable FHE enables storing encrypted data in the cloud to be processed securely without decryption, maintaining deniability of the encrypted data, as well as the prevention of vote-buying in electronic voting schemes where encrypted votes can be tallied without decryption. Our constructions achieve {\it compactness} independently of the level of deniability: both the size of the public key and the size of the ciphertexts are bounded by a fixed polynomial, independent of the faking probability achieved by the scheme. This is in contrast to all previous constructions of deniable encryption schemes (even without requiring homomorphisms) which are based on polynomial hardness assumptions, originating with the seminal work of Canetti, Dwork, Naor and Ostrovsky (CRYPTO 1997) in which the ciphertext size grows with the inverse of the faking probability. Canetti {\it et al.} argued that this dependence ``seems inherent'', but our constructions illustrate this is not the case. We note that the Sahai-Waters (STOC 2013) construction of deniable encryption from indistinguishability obfuscation achieves compactness and can be easily modified to achieve deniable FHE as well, but it requires multiple, stronger sub-exponential hardness assumptions, which are furthermore not post-quantum secure. In contrast, our constructions rely only on the LWE polynomial hardness assumption, as currently required for FHE even without deniability. The running time of our encryption algorithm depends on the inverse of the faking probability, thus the scheme falls short of achieving simultaneously compactness, negligible deniability probability {\it and} polynomial encryption time. Yet, we believe that achieving compactness is a fundamental step on the way to achieving all properties simultaneously, as has been the historical journey for other primitives such as functional encryption. Interestingly, we note that our constructions support large message spaces, whereas previous constructions were bit-by-bit, and can be run in an online-offline model of encryption, where the bulk of computation is independent of the message and may be performed in an offline pre-processing phase. The running time of the online phase is independent of the faking probability, whereas the offline encryption run-time grows with the inverse of the faking probability. At the heart of our constructions is a new way to use bootstrapping to obliviously generate FHE ciphertexts so that it supports faking under coercion.
2021
CRYPTO
Revisiting the Security of DbHtS MACs: Beyond-Birthday-Bound in the Multi-User Setting 📺
Double-block Hash-then-Sum (\textsf{DbHtS}) MACs are a class of MACs that aim to achieve beyond-birthday-bound security, including \textsf{SUM-ECBC}, \textsf{PMAC\_Plus}, \textsf{3kf9} and \textsf{LightMAC\_Plus}. Recently, Datta et al. (FSE'19) and then Kim et al. (Eurocrypt'20) proved that \textsf{DbHtS} constructions are secure beyond the birthday bound in the single-user setting. However, by a generic reduction, their results degrade to (or even worse than) the birthday bound in the multi-user setting. In this work, we revisit the security of \textsf{DbHtS} MACs in the multi-user setting. We propose a generic framework to prove beyond-birthday-bound security for \textsf{DbHtS} constructions. We demonstrate the usability of this framework with applications to key-reduced variants of \textsf{DbHtS} MACs, including \textsf{2k-SUM-ECBC}, \textsf{2k-PMAC\_Plus} and \textsf{2k-LightMAC\_Plus}. Our results show that the security of these constructions will not degrade as the number of users grows. On the other hand, our results also indicate that these constructions are secure beyond the birthday bound in both the single-user and multi-user settings without additional domain separation, which is used in the prior work to simplify the analysis. Moreover, we find a critical flaw in \textsf{2kf9}, which was proven secure beyond the birthday bound by Datta et al. (FSE'19). We can successfully forge a tag with probability 1 without making any queries. We go further to show attacks with birthday-bound complexity on several variants of \textsf{2kf9}.
2021
CRYPTO
A New Simple Technique to Bootstrap Various Lattice Zero-Knowledge Proofs to QROM Secure NIZKs 📺
Many of the recent advanced lattice-based Sigma-/public-coin honest-verifier zero-knowledge (HVZK) interactive protocols based on the techniques developed by Lyubashevsky (Asiacrypt'09, Eurocrypt'12) can be transformed into a non-interactive zero-knowledge (NIZK) proof in the random oracle model (ROM) using the Fiat-Shamir transform. Unfortunately, although they are known to be secure in the __classical__ ROM, existing proof techniques are incapable of proving them secure in the __quantum__ ROM (QROM). Alternatively, while we could instead rely on the Unruh transform (Eurocrypt'15), the resulting QROM-secure NIZK incurs a large overhead compared to the underlying interactive protocol. In this paper, we present a new simple semi-generic transform that compiles many existing lattice-based Sigma-/public-coin HVZK interactive protocols into QROM-secure NIZKs. Our transform builds on a new primitive called an __extractable linear homomorphic commitment__ protocol. The resulting NIZK has several appealing features: it is not only a proof of knowledge but also straight-line extractable; the proof overhead is smaller compared to the Unruh transform; it enjoys a relatively small reduction loss; and it requires minimal background on quantum computation. To illustrate the generality of our technique, we show how to transform the recent 5-round protocol with an exact sound proof by Bootle et al. (Crypto'19) into a QROM-secure NIZK by increasing the proof size by a factor of 2.6. This compares favorably to the Unruh transform, which requires a factor of more than 50.
2021
CRYPTO
Subquadratic SNARGs in the Random Oracle Model 📺
In a seminal work, Micali (FOCS 1994) gave the first succinct non-interactive argument (SNARG) in the random oracle model (ROM). The construction combines a PCP and a cryptographic commitment, and has several attractive features: it is plausibly post-quantum; it can be heuristically instantiated via lightweight cryptography; and it has a transparent (public-coin) parameter setup. However, it also has a significant drawback: a large argument size. In this work, we provide a new construction that achieves a smaller argument size. This is the first progress on the Micali construction since it was introduced over 25 years ago. A SNARG in the ROM is (t,ε)-secure if every t-query malicious prover can convince the verifier of a false statement with probability at most ε. For (t,ε)-security, the argument size of all known SNARGs in the ROM (including Micali's) is Õ((log (t/ε))^2) bits, *even* if one were to rely on conjectured probabilistic proofs well beyond current techniques. In practice, these costs lead to SNARGs that are much larger than constructions based on other (pre-quantum and costly) tools. This has led many to believe that SNARGs in the ROM are inherently quadratic. We show that this is not the case. We present a SNARG in the ROM with a sub-quadratic argument size: Õ(log (t/ε) * log t). Our construction relies on a strong soundness notion for PCPs and a weak binding notion for commitments. We hope that our work paves the way for understanding if a linear argument size, that is O(log (t/ε)), is achievable in the ROM.
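To get a feel for the improvement, consider (as a hypothetical parameter choice, not one taken from the paper) t = 2^64 queries and ε = 2^-128, so that log(t/ε) = 192:

\[
\tilde{O}\big((\log(t/\varepsilon))^2\big) \;\sim\; 192^2 = 36{,}864
\qquad \text{vs.} \qquad
\tilde{O}\big(\log(t/\varepsilon)\cdot \log t\big) \;\sim\; 192 \cdot 64 = 12{,}288 ,
\]

roughly a factor-3 reduction in this regime, ignoring the hidden polylogarithmic factors.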
2021
CRYPTO
Lattice Reduction with Approximate Enumeration Oracles: Practical Algorithms and Concrete Performance 📺
This work provides a systematic investigation of the use of approximate enumeration oracles in BKZ, building on recent technical progress on speeding-up lattice enumeration: relaxing (the search radius of) enumeration and extended preprocessing which preprocesses in a larger rank than the enumeration rank. First, we heuristically justify that relaxing enumeration with certain extreme pruning asymptotically achieves an exponential speed-up for reaching the same root Hermite factor (RHF). Second, we perform simulations/experiments to validate this and the performance for relaxed enumeration with numerically optimised pruning for both regular and extended preprocessing. Upgrading BKZ with such approximate enumeration oracles gives rise to our main result, namely a practical and faster (compared to previous work) polynomial-space lattice reduction algorithm for reaching the same RHF in practical and cryptographic parameter ranges. We assess its concrete time/quality performance with extensive simulations and experiments. As a consequence, we update the extrapolation of the crossover rank between a square-root cost estimate for quantum enumeration using our algorithm and the Core-SVP cost estimate for quantum sieving to 547.
2021
CRYPTO
Secure Wire Shuffling in the Probing Model 📺
In this paper, we describe the first improvement of the wire shuffling countermeasure against side-channel attacks described by Ishai, Sahai and Wagner at Crypto 2003. More precisely, we show how to get worst-case statistical security against t probes with running time O(t) instead of O(t log t); our construction is also much simpler. Recall that the classical masking countermeasure achieves perfect security but with running time O(t^2). We also describe a practical implementation for AES that outperforms the masking countermeasure for t ≥ 6000.
2021
CRYPTO
SSE and SSD: Page-Efficient Searchable Symmetric Encryption 📺
Searchable Symmetric Encryption (SSE) enables a client to outsource a database to an untrusted server, while retaining the ability to securely search the data. The performance bottleneck of classic SSE schemes typically does not come from their fast, symmetric cryptographic operations, but rather from the cost of memory accesses. To address this issue, many works in the literature have considered the notion of locality, a simple design criterion that helps capture the cost of memory accesses in traditional storage media, such as Hard Disk Drives. A common thread among many SSE schemes aiming to improve locality is that they are built on top of new memory allocation schemes, which form the technical core of the constructions. The starting observation of this work is that for newer storage media such as Solid State Drives (SSDs), which have become increasingly common, locality is not a good predictor of practical performance. Instead, SSD performance mainly depends on page efficiency, that is, reading as few pages as possible. We define this notion, and identify a simple allocation problem, Data-Independent Packing, that captures the main technical challenge required to build page-efficient SSE. As our main result, we build a page-efficient and storage-efficient data-independent packing scheme, and deduce an SSE scheme with the same properties. The technical core of the result is a new generalization of cuckoo hashing to items of variable size. Practical experiments show that this approach achieves excellent performance.
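As background for the "generalization of cuckoo hashing" mentioned above, here is a textbook two-table cuckoo hash for fixed-size items (a random-walk insertion variant). The paper's technical contribution is extending this kind of allocation to items of variable size, which this sketch does not attempt.

    import random

    class CuckooTable:
        # Textbook two-table cuckoo hashing for fixed-size items (background only).
        def __init__(self, size, max_kicks=500):
            self.size = size
            self.tables = [[None] * size, [None] * size]
            self.max_kicks = max_kicks

        def _pos(self, key, i):
            return hash((i, key)) % self.size

        def insert(self, key):
            for _ in range(self.max_kicks):
                for i in (0, 1):                      # use a free slot if one exists
                    p = self._pos(key, i)
                    if self.tables[i][p] is None:
                        self.tables[i][p] = key
                        return True
                i = random.randrange(2)               # otherwise evict a random occupant
                p = self._pos(key, i)
                key, self.tables[i][p] = self.tables[i][p], key
            return False                              # give up; a real table would rehash

        def lookup(self, key):
            # Every stored key sits in one of its two candidate positions.
            return any(self.tables[i][self._pos(key, i)] == key for i in (0, 1))

    t = CuckooTable(64)
    for w in ["alpha", "beta", "gamma"]:
        t.insert(w)
    assert t.lookup("beta") and not t.lookup("delta")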
2021
CRYPTO
ATLAS: Efficient and Scalable MPC in the Honest Majority Setting 📺
In this work, we address communication, computation, and round efficiency of unconditionally secure multi-party computation for arithmetic circuits in the honest majority setting. We achieve both algorithmic and practical improvements: - The best known result in the semi-honest setting is due to Damgard and Nielsen (CRYPTO 2007). Over the last decade, their construction has played an important role in the progress of efficient secure computation. However, despite a number of follow-up works, any significant improvements to the basic semi-honest protocol have been hard to come by. We show a 33% improvement in the communication complexity of this protocol. We show how to generalize this result to the malicious setting, leading to the best known unconditional honest majority MPC with malicious security. - We focus on the round complexity of the Damgard and Nielsen protocol and improve it by a factor of 2. Our improvement relies on a novel observation relating to an interplay between Damgard and Nielsen multiplication and Beaver triple multiplication. An implementation of our constructions shows a run-time improvement of 30% to 50% compared to the state of the art.
2021
CRYPTO
Unconditional Communication-Efficient MPC via Hall's Marriage Theorem 📺
The best known n-party unconditional multiparty computation protocols with an optimal corruption threshold communicate O(n) field elements per gate. This has been the case even in the semi-honest setting despite over a decade of research on communication complexity in this setting. Going to the slightly sub-optimal corruption setting, the work of Damgard, Ishai, and Kroigaard (EUROCRYPT 2010) provided the first protocol for a single circuit achieving communication complexity of O(log |C|) elements per gate. While a number of works have improved upon this result, obtaining a protocol with O(1) field elements per gate has been an open problem. In this work, we construct the first unconditional multi-party computation protocol evaluating a single arithmetic circuit with amortized communication complexity of O(1) elements per gate.
2021
CRYPTO
Puncturable Pseudorandom Sets and Private Information Retrieval with Near-Optimal Online Bandwidth and Time 📺
Imagine one or more non-colluding servers each holding a large public database, e.g., the repository of DNS entries. Clients would like to access entries in this database without disclosing their queries to the servers. Classical private information retrieval (PIR) schemes achieve polylogarithmic bandwidth per query, but require the server to perform linear computation per query, which is a significant barrier towards deployment. Several recent works showed, however, that by introducing a one-time, per-client, off-line preprocessing phase, an \emph{unbounded} number of client queries can be subsequently served with sublinear online computation time per query (and the cost of the preprocessing can be amortized over the unboundedly many queries). Existing preprocessing PIR schemes (supporting unbounded queries), unfortunately, make undesirable tradeoffs to achieve sublinear online computation: they are either significantly non-optimal in online time or bandwidth, or require the servers to store a linear amount of state per client or even per query, or require polylogarithmically many non-colluding servers. We propose a novel 2-server preprocessing PIR scheme that achieves $\widetilde{O}(\sqrt{n})$ online computation per query and $\widetilde{O}(\sqrt{n})$ client storage, while preserving the polylogarithmic online bandwidth of classical PIR schemes. Both the online bandwidth and computation are optimal up to a poly-logarithmic factor. In our construction, each server stores only the original database and nothing extra, and each online query is served within a single round trip. Our construction relies on the standard LWE assumption. As an important stepping stone, we propose new, more generalized definitions for a cryptographic object called a Privately Puncturable Pseudorandom Set, and give novel constructions that depart significantly from prior approaches.
2021
CRYPTO
On the Concurrent Composition of Quantum Zero-Knowledge 📺
We study the notion of zero-knowledge secure against quantum polynomial-time verifiers (referred to as quantum zero-knowledge) in the concurrent composition setting. While concurrent composition has been studied extensively in the classical setting, it has hardly been studied in the quantum setting. We initiate a formal study of concurrent quantum zero-knowledge. Our results are as follows: - Bounded Concurrent QZK for NP and QMA: Assuming post-quantum one-way functions, there exists a quantum zero-knowledge proof system for NP in the bounded concurrent setting. In this setting, we fix a priori the number of verifiers that can simultaneously interact with the prover. Under the same assumption, we also show that there exists a quantum zero-knowledge proof system for QMA in the bounded concurrency setting. - Quantum Proofs of Knowledge: Assuming quantum hardness of learning with errors (QLWE), there exists a bounded concurrent zero-knowledge proof system for NP satisfying the quantum proof-of-knowledge property. Our extraction mechanism simultaneously allows the extraction probability to be negligibly close to the acceptance probability (extractability) and also ensures that the prover's state after extraction is statistically close to the prover's state after interacting with the verifier (simulatability). Even in the standalone setting, the seminal work of [Unruh EUROCRYPT'12], and all its follow-ups, satisfied a weaker version of the extractability property and, moreover, did not achieve simulatability. Our result yields a proof of {\em quantum knowledge} system for QMA with better parameters than prior works.
2021
CRYPTO
SMILE: Set Membership from Ideal Lattices with Applications to Ring Signatures and Confidential Transactions 📺
In a set membership proof, the public information consists of a set of elements and a commitment. The prover then produces a zero-knowledge proof showing that the commitment is indeed to some element from the set. This primitive is closely related to concepts like ring signatures and ``one-out-of-many'' proofs that underlie many anonymity and privacy protocols. The main result of this work is a new succinct lattice-based set membership proof whose size is logarithmic in the size of the set. We also give transformations of our set membership proof to a ring signature scheme and to a confidential transaction payment system. The ring signature size is also logarithmic in the size of the public key set and has size $16$~KB for a set of $2^5$ elements, and $22$~KB for a set of size $2^{25}$. At an approximately $128$-bit security level, these outputs are between 1.5X and 7X smaller than the current state of the art succinct ring signatures of Beullens et al. (Asiacrypt 2020) and Esgin et al. (CCS 2019). We then show that our ring signature, combined with a few other techniques and optimizations, can be turned into a fairly efficient Monero-like confidential transaction system based on the MatRiCT framework of Esgin et al. (CCS 2019). With our new techniques, we are able to reduce the transaction proof size by factors of about 4X - 10X over the aforementioned work. For example, a transaction with two inputs and two outputs, where each input is hidden among $2^{15}$ other accounts, requires approximately $30$KB in our protocol.
2021
CRYPTO
Functional Encryption for Turing Machines with Dynamic Bounded Collusion from LWE 📺
The classic work of Gorbunov, Vaikuntanathan and Wee (CRYPTO 2012) and follow-ups provided constructions of bounded collusion Functional Encryption (FE) for circuits from mild assumptions. In this work, we improve the state of affairs for bounded collusion FE in several ways: 1. {\it New Security Notion.} We introduce the notion of {\it dynamic} bounded collusion FE, where the declaration of collusion bound is delayed to the time of encryption. This enables the encryptor to dynamically choose the collusion bound for different ciphertexts depending on their individual level of sensitivity. Hence, the ciphertext size grows linearly with its own collusion bound and the public key size is independent of collusion bound. In contrast, all prior constructions have public key and ciphertext size that grow at least linearly with a fixed bound $Q$. 2. {\it CPFE for circuits with Dynamic Bounded Collusion.} We provide the first CPFE schemes for circuits enjoying dynamic bounded collusion security. By assuming identity based encryption (IBE), we construct CPFE for circuits of {\it unbounded} size satisfying {\it non-adaptive} simulation based security. By strengthening the underlying assumption to IBE with receiver selective opening security, we obtain CPFE for circuits of {\it bounded} size enjoying {\it adaptive} simulation based security. Moreover, we show that IBE is a necessary assumption for these primitives. Furthermore, by relying on the Learning With Errors (LWE) assumption, we obtain the first {\it succinct} CPFE for circuits, i.e. supporting circuits with unbounded size, but fixed output length and depth. This scheme achieves {\it adaptive} simulation based security. 3. {\it KPFE for circuits with dynamic bounded collusion.} We provide the first KPFE for circuits of unbounded size, but bounded depth and output length satisfying dynamic bounded collusion security from LWE. Our construction achieves {\it adaptive} simulation security improving security of \cite{GKPVZ13a}. 4. {\it KP and CP FE for TM/NL with dynamic bounded collusion.} We provide the first KPFE and CPFE constructions of bounded collusion functional encryption for Turing machines in the public key setting from LWE. Our constructions achieve non-adaptive simulation based security. Both the input and the machine in our construction can be of {\it unbounded} polynomial length. We provide a variant of the above scheme that satisfies {\it adaptive} security, but at the cost of supporting a smaller class of computation, namely Nondeterministic Logarithmic-space (NL). Since NL contains Nondeterministic Finite Automata (NFA), this result subsumes {\it all} prior work of bounded collusion FE for uniform models from standard assumptions \cite{AMY19,AS17}.
2021
CRYPTO
Game-Theoretic Fairness Meets Multi-Party Protocols: The Case of Leader Election 📺
Suppose that $n$ players want to elect a random leader and they communicate by posting messages to a common broadcast channel. This problem is called leader election, and it is fundamental to the distributed systems and cryptography literature. Recently, it has attracted renewed interest due to its promised applications in decentralized environments. In a game-theoretically fair leader election protocol, roughly speaking, we want that even a majority coalition can neither increase its own chance of getting elected nor hurt the chance of any honest individual. The folklore tournament-tree protocol, which completes in logarithmically many rounds, can easily be shown to satisfy game-theoretic security. To the best of our knowledge, no sub-logarithmic round protocol was known in the setting that we consider. We show that by adopting an appropriate notion of approximate game-theoretic fairness, and under standard cryptographic assumptions, we can achieve $(1-1/2^{\Theta(r)})$-fairness in $r$ rounds for $\Theta(\log \log n) \leq r \leq \Theta(\log n)$, where $n$ denotes the number of players. In particular, this means that we can approximately match the fairness of the tournament-tree protocol using as few as $O(\log \log n)$ rounds. We also prove a lower bound showing that logarithmically many rounds are necessary if we restrict ourselves to ``perfect'' game-theoretic fairness and protocols that are ``very similar in structure'' to the tournament-tree protocol. Although leader election is a well-studied problem in other contexts in distributed computing, our work is the first exploration of the round complexity of {\it game-theoretically fair} leader election in the presence of a possibly majority coalition. As a by-product of our exploration, we suggest a new, approximate game-theoretic fairness notion, called ``approximate sequential fairness'', which provides a more desirable solution concept than some previously studied approximate fairness notions.
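The round structure of the folklore tournament-tree protocol referred to above can be sketched as follows; each pairwise winner is decided here by a local coin flip purely for illustration, whereas the actual protocol decides it with a commit-then-reveal coin toss between the two players.

    import secrets

    def tournament_tree_leader(players):
        # Pair up the surviving players each round and keep one winner per pair,
        # so a single leader remains after ceil(log2 n) rounds.
        survivors, rounds = list(players), 0
        while len(survivors) > 1:
            nxt = [survivors[i] if secrets.randbits(1) else survivors[i + 1]
                   for i in range(0, len(survivors) - 1, 2)]
            if len(survivors) % 2:          # an odd player out advances unopposed
                nxt.append(survivors[-1])
            survivors = nxt
            rounds += 1
        return survivors[0], rounds

    leader, rounds = tournament_tree_leader(range(16))
    assert rounds == 4                      # log2(16) rounds for 16 players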
2021
CRYPTO
The Cost to Break SIKE: A Comparative Hardware-Based Analysis with AES and SHA-3 📺
This work presents a detailed study of the classical security of the post-quantum supersingular isogeny key encapsulation (SIKE) protocol using a realistic budget-based cost model that considers the actual computing and memory costs that are needed for cryptanalysis. In this effort, we design especially-tailored hardware accelerators for the time-critical multiplication and isogeny computations that we use to model an ASIC-powered instance of the van Oorschot-Wiener (vOW) parallel collision search algorithm. We then extend the analysis to AES and SHA-3 in the context of the NIST post-quantum cryptography standardization process to carry out a parameter analysis based on our cost model. This analysis, together with the state-of-the-art quantum security analysis of SIKE, indicates that the current SIKE parameters offer higher practical security than currently believed, closing an open issue on the suitability of the parameters to match NIST's security levels. In addition, we explore the possibility of using significantly smaller primes to enable more efficient and compact implementations with reduced bandwidth. Our improved cost model and analysis can be applied to other cryptographic settings and primitives, and can have implications for other post-quantum candidates in the NIST process.
2021
CRYPTO
Upslices, Downslices, and Secret-Sharing with Complexity of $1.5^n$ 📺
A secret-sharing scheme allows a dealer to distribute a secret $s$ among $n$ parties such that only some predefined ``authorized'' sets of parties can reconstruct the secret, and all other ``unauthorized'' sets learn nothing about $s$. The collection of authorized/unauthorized sets is captured by a monotone function $f:\{0,1\}^n\rightarrow \{0,1\}$. In this paper, we focus on monotone functions all of whose min-terms are sets of size $a$, and on their duals -- monotone functions whose max-terms are of size $b$. We refer to these classes as $(a,n)$-\emph{upslices} and $(b,n)$-\emph{downslices}, and note that these natural families correspond to monotone $a$-regular DNFs and monotone $(n-b)$-regular CNFs. We derive the following results. 1. (General downslices) Every downslice can be realized with total share size of $1.5^{n+o(n)}<2^{0.585 n}$. Since every monotone function can be cheaply decomposed into $n$ downslices, we obtain a similar result for general access structures, improving the previously known $2^{0.637n+o(n)}$ complexity of Applebaum, Beimel, Nir and Peter (STOC 2020). We also achieve a minor improvement in the exponent of linear secret-sharing schemes. 2. (Random mixture of upslices) Following Beimel and Farràs (TCC 2020), who studied the complexity of random DNFs with constant-size terms, we consider the following general distribution $F$ over monotone DNFs: For each width value $a\in [n]$, uniformly sample $k_a$ monotone terms of size $a$, where $\vec{k}=(k_1,\ldots,k_n)$ is an arbitrary vector of non-negative integers. We show that, except with exponentially small probability, $F$ can be realized with share size of $2^{0.5 n+o(n)}$ and can be linearly realized with an exponent strictly smaller than $2/3$. Our proof also provides a candidate distribution for the ``exponentially-hard'' access structure. We use our results to explore connections between several seemingly unrelated questions about the complexity of secret-sharing schemes such as worst-case vs. average-case, linear vs. non-linear, and primal vs. dual access structures. We prove that, in at least one of these settings, there is a significant gap in secret-sharing complexity.
2021
CRYPTO
Impossibility of Quantum Virtual Black-Box Obfuscation of Classical Circuits 📺
Virtual black-box obfuscation is a strong cryptographic primitive: it encrypts a circuit while maintaining its full input/output functionality. A remarkable result by Barak et al. (Crypto 2001) shows that a general obfuscator that obfuscates classical circuits into classical circuits cannot exist. A promising direction that circumvents this impossibility result is to obfuscate classical circuits into quantum states, which would potentially be better capable of hiding information about the obfuscated circuit. We show that, under the assumption that Learning With Errors (LWE) is hard for quantum computers, this quantum variant of virtual black-box obfuscation of classical circuits is generally impossible. On the way, we show that in the presence of dependent classical auxiliary input, even the small class of classical point functions cannot be quantum virtual black-box obfuscated.
2021
CRYPTO
Towards a Unified Approach to Black-Box Constructions of Zero-Knowledge Proofs 📺
General-purpose zero-knowledge proofs for all $\mathsf{NP}$ languages greatly simplify secure protocol design. However, they inherently require the code of the underlying relation. If the relation contains black-box calls to a cryptographic function, the code of that function must be known to use the ZK proof, even if both the relation and the proof require only black-box access to the function. Rosulek (Crypto'12) shows that non-trivial proofs for even simple statements, such as membership in the range of a one-way function, require non-black-box access. We propose an alternative approach to bypass Rosulek's impossibility result. Instead of asking for a ZK proof directly for the given one-way function $f$, we seek to construct a {\em new} one-way function $F$ given only black-box access to $f$, {\em and} an associated ZK protocol for proving non-trivial statements, such as range membership, over its output. We say that $F$, along with its proof system, is a {\em proof-based} one-way function. We similarly define proof-based versions of other primitives, specifically pseudo-random generators and collision-resistant hash functions. We show how to construct proof-based versions of each of the primitives mentioned above from their ordinary counterparts under mild but necessary restrictions over the input. More specifically, \begin{itemize} \item We first show that if the prover entirely chooses the input, then proof-based pseudo-random generators cannot be constructed from ordinary ones in a black-box manner, thus establishing that some restrictions over the input are necessary. \item We next present black-box constructions handling inputs of the form $(x,r)$ where $r$ is chosen uniformly by the verifier. This is similar to the restrictions in the widely used Goldreich-Levin theorem. The associated ZK proofs support range membership over the output as well as arbitrary predicates over prefixes of the input. \end{itemize} Our results open up the possibility that general-purpose ZK proofs for relations that require black-box access to the primitives above may be possible in the future without violating their black-box nature by instantiating them using proof-based primitives instead of ordinary ones.
2021
CRYPTO
Does Fiat-Shamir Require a Cryptographic Hash Function? 📺
The Fiat-Shamir transform is a general method for reducing interaction in public-coin protocols by replacing the random verifier messages with deterministic hashes of the protocol transcript. The soundness of this transformation is usually heuristic and lacks a formal security proof. Instead, to argue security, one can rely on the random oracle methodology, which informally states that whenever a random oracle soundly instantiates Fiat-Shamir, a hash function that is ``sufficiently unstructured'' (such as fixed-length SHA-2) should suffice. Finally, for some special interactive protocols, it is known how to (1) isolate a concrete security property of a hash function that suffices to instantiate Fiat-Shamir and (2) build a hash function satisfying this property under a cryptographic assumption such as Learning with Errors. In this work, we abandon this methodology and ask whether Fiat-Shamir truly requires a cryptographic hash function. Perhaps surprisingly, we show that in two of its most common applications --- building signature schemes as well as (general-purpose) non-interactive zero-knowledge arguments --- there are sound Fiat-Shamir instantiations using extremely simple and non-cryptographic hash functions such as sum-mod-$p$ or bit decomposition. In some cases, we make idealized assumptions (i.e., we invoke the generic group model), while in others, we prove soundness in the plain model. On the negative side, we also identify important cases in which a cryptographic hash function is provably necessary to instantiate Fiat-Shamir. We hope this work leads to an improved understanding of the precise role of the hash function in the Fiat-Shamir transformation.
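As a purely illustrative sketch of the transform being discussed (and emphatically not the paper's constructions), the following Python snippet applies Fiat-Shamir to Schnorr identification over a toy group, deriving the challenge from the transcript with a deliberately non-cryptographic "sum mod q" hash of the kind the abstract alludes to; whether and when such hashes suffice is precisely the paper's question. The group parameters and message encoding are arbitrary example values.

import secrets

p, q, g = 2039, 1019, 4          # toy Schnorr group: q | p - 1, g has order q

def sum_mod_q(*elements):
    """Toy 'sum mod q' transcript hash (NOT collision resistant)."""
    return sum(elements) % q

def keygen():
    x = secrets.randbelow(q)      # secret key
    return x, pow(g, x, p)        # (sk, pk)

def sign(x, msg_int):
    r = secrets.randbelow(q)
    a = pow(g, r, p)              # prover's first message (commitment)
    c = sum_mod_q(a, msg_int)     # Fiat-Shamir: challenge = hash(transcript)
    z = (r + c * x) % q           # prover's response
    return a, z

def verify(pk, msg_int, sig):
    a, z = sig
    c = sum_mod_q(a, msg_int)
    return pow(g, z, p) == (a * pow(pk, c, p)) % p

sk, pk = keygen()
sig = sign(sk, msg_int=42)
assert verify(pk, 42, sig)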
2021
CRYPTO
A Logarithmic Lower Bound for Oblivious RAM (for all parameters) 📺
An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (J. ACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e., for every input the observed locations accessed are similarly distributed. In recent years there has been great progress both in terms of upper bounds as well as in terms of lower bounds, essentially pinning down the smallest overhead possible in various settings of parameters. We observe that there is a very natural setting of parameters in which \emph{no} non-trivial lower bound is known, not even ones in restricted models of computation (like the so-called balls and bins model). Let $N$ and $w$ be the number of cells and bit-size of cells, respectively, in the RAM that we wish to simulate obliviously. Denote by $b$ the cell bit-size of the ORAM. \emph{All} previous ORAM lower bounds have a multiplicative $w/b$ factor which makes them trivial in many settings of parameters of interest. In this work, we prove a new ORAM lower bound that captures this setting (and in all other settings it is at least as good as previous ones, quantitatively). We show that any ORAM must make (amortized) $$ \Omega\left(\log \left(\frac{Nw}{m}\right)/\log\left(\frac{b}{w}\right)\right) $$ memory probes for every logical operation. Here, $m$ denotes the bit-size of the local storage of the ORAM. Our lower bound implies that logarithmic overhead in accesses is necessary, even if $ b \gg w$. Our lower bound is tight for \emph{all} settings of parameters, up to the $\log(b/w)$ factor. Our bound also extends to the non-colluding multi-server setting. As an application, we derive the first (unconditional) separation between the overhead needed for ORAMs in the \emph{online} vs.\ \emph{offline} models. Specifically, we show that when $w=\log N$ and $b,m \in \mathrm{poly}\log N$, there exists an offline ORAM that makes (on average) $o(1)$ memory probes per logical operation while every online one must make $\Omega(\log N/\log\log N)$ memory probes per logical operation. No such previous separation was known for any setting of parameters, not even in the balls and bins model.
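For concreteness, instantiating the lower bound at the parameter point highlighted at the end of the abstract, $w = \log N$ and $b, m \in \mathrm{poly}\log N$, gives
$$ \Omega\left(\log\left(\frac{Nw}{m}\right)\Big/\log\left(\frac{b}{w}\right)\right) = \Omega\left(\frac{\log N}{\log\log N}\right), $$
since the numerator is $\Theta(\log N)$ and the denominator is $\Theta(\log\log N)$; this is exactly the online overhead appearing in the stated online/offline separation.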
2021
CRYPTO
Meet-in-the-Middle Attacks Revisited: Key-recovery, Collision, and Preimage Attacks 📺
At EUROCRYPT 2021, Bao et al. proposed an automatic method for systematically exploring the configuration space of meet-in-the-middle (MITM) preimage attacks. We further extend it into a constraint-based framework for finding exploitable MITM characteristics in the context of key-recovery and collision attacks by taking the subtle peculiarities of both scenarios into account. Moreover, to perform attacks based on MITM characteristics with nonlinear constrained neutral words, which have not been seen before, we present a procedure for deriving the solution spaces of neutral words without solving the corresponding nonlinear equations or increasing the overall time complexities of the attack. We apply our method to concrete symmetric-key primitives, including SKINNY, ForkSkinny, Romulus-H, Saturnin, Grostl, Whirlpool, and hashing modes with AES-256. As a result, we identify the first 23-round key-recovery attack on SKINNY-$n$-$3n$ and the first 24-round key-recovery attack on ForkSkinny-$n$-$3n$ in the single-key model. Moreover, improved (pseudo) preimage or collision attacks on round-reduced Whirlpool, Grostl, and hashing modes with AES-256 are obtained. In particular, employing the new representation of the AES key schedule due to Leurent and Pernot (EUROCRYPT 2021), we identify the first preimage attack on 10-round AES-256 hashing.
2021
CRYPTO
Tighter Security for Schnorr Identification and Signatures: A High-Moment Forking Lemma for ${\Sigma}$-Protocols 📺
The Schnorr identification and signature schemes have been amongst the most influential cryptographic protocols of the past three decades. Unfortunately, although the best-known attacks on these two schemes are via discrete-logarithm computation, the known approaches for basing their security on the hardness of the discrete logarithm problem encounter the ``square-root barrier''. In particular, in any group of order $p$ where Shoup's generic hardness result for the discrete logarithm problem is believed to hold (and is thus used for setting concrete security parameters), the best-known $t$-time attacks on the Schnorr identification and signature schemes have success probability $t^2/p$, whereas existing proofs of security only rule out attacks with success probabilities $(t^2/p)^{1/2}$ and $(q_{\mathsf{hash}} \cdot t^2/p)^{1/2}$, respectively, where $q_{\mathsf{hash}}$ denotes the number of random-oracle queries issued by the attacker. We establish tighter security guarantees for identification and signature schemes which result from $\Sigma$-protocols with special soundness based on the hardness of their underlying relation, and in particular for Schnorr's schemes based on the hardness of the discrete logarithm problem. We circumvent the square-root barrier by introducing a high-moment generalization of the classic forking lemma, relying on the assumption that the underlying relation is ``$d$-moment hard'': The success probability of any algorithm in the task of producing a witness for a random instance is dominated by the $d$-th moment of the algorithm's running time. In the concrete context of the discrete logarithm problem, already Shoup's original proof shows that the discrete logarithm problem is $2$-moment hard in the generic-group model, and thus our assumption can be viewed as a highly-plausible strengthening of the discrete logarithm assumption in any group where no better-than-generic algorithms are currently known. Applying our high-moment forking lemma in this context shows that, assuming the $2$-moment hardness of the discrete logarithm problem, any $t$-time attacker breaks the security of the Schnorr identification and signature schemes with probabilities at most $(t^2/p)^{2/3}$ and $(q_{\mathsf{hash}} \cdot t^2/p)^{2/3}$, respectively.
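For intuition about what any forking-style argument (including the high-moment variant above) ultimately relies on, here is a tiny Python illustration of the special-soundness extractor for Schnorr: two accepting transcripts sharing the first message but carrying different challenges reveal the discrete logarithm. The group parameters and witness below are toy values chosen only for the example.

p, q, g = 2039, 1019, 4                     # toy Schnorr group: q | p - 1, g of order q

x = 777                                     # the witness (a discrete logarithm)
pk = pow(g, x, p)

r = 555
a = pow(g, r, p)                            # common first message (commitment)
c1, c2 = 12, 345                            # two distinct challenges
z1, z2 = (r + c1 * x) % q, (r + c2 * x) % q # the two accepting responses

# Both transcripts verify:
assert pow(g, z1, p) == a * pow(pk, c1, p) % p
assert pow(g, z2, p) == a * pow(pk, c2, p) % p

# Extraction: x = (z1 - z2) / (c1 - c2) mod q.
extracted = (z1 - z2) * pow((c1 - c2) % q, -1, q) % q
assert extracted == x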
2021
CRYPTO
Three-Round Secure Multiparty Computation from Black-Box Two-Round Oblivious Transfer 📺
We give constructions of three-round secure multiparty computation (MPC) protocols for general functions that make {\it black-box} use of a two-round oblivious transfer (OT). For the case of semi-honest adversaries, we make use of a two-round, semi-honest secure OT in the plain model. This resolves the round-complexity of black-box (semi-honest) MPC protocols from minimal assumptions and answers an open question of Applebaum et al. (ITCS 2020). For the case of malicious adversaries, we make use of a two-round maliciously-secure OT in the common random/reference string model that satisfies a (mild) variant of adaptive security for the receiver.
2021
CRYPTO
On Tight Quantum Security of HMAC and NMAC in the Quantum Random Oracle Model 📺
HMAC and NMAC are the most basic and important constructions to convert Merkle-Damg{\aa}rd hash functions into message authentication codes (MACs) or pseudorandom functions (PRFs). In the quantum setting, at CRYPTO~2017, Song and Yun showed that HMAC and NMAC are quantum pseudorandom functions (qPRFs) under the standard assumption that the underlying compression function is a qPRF. Their proof guarantees security up to $O(2^{n/5})$ or $O(2^{n/8})$ quantum queries when the output length of HMAC and NMAC is $n$ bits. However, there is a gap between the provable security bound and a simple distinguishing attack that uses $O(2^{n/3})$ quantum queries. This paper settles the problem of closing the gap. We show that the tight bound of the number of quantum queries to distinguish HMAC or NMAC from a random function is $\Theta(2^{n/3})$ in the quantum random oracle model, where compression functions are modeled as quantum random oracles. To give the tight quantum bound, based on an alternative formalization of Zhandry's compressed oracle technique, we introduce a new proof technique focusing on the symmetry of quantum query records.
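For reference, this is the (entirely standard) HMAC construction whose quantum security the paper pins down, written out in Python and checked against the standard-library implementation; the paper's results concern query-complexity bounds in the quantum random oracle model, not this code.

import hashlib, hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    block = 64                                   # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()       # long keys are hashed first
    key = key.ljust(block, b"\x00")              # then padded to the block size
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest() # H((K xor opad) || H((K xor ipad) || m))

assert hmac_sha256(b"k", b"m") == hmac.new(b"k", b"m", hashlib.sha256).digest()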
2021
CRYPTO
Quantum Collision Attacks on Reduced SHA-256 and SHA-512 📺
In this paper, we study dedicated quantum collision attacks on SHA-256 and SHA-512 for the first time. The attacks reach 38 and 39 steps, respectively, which significantly improve the classical attacks for 31 and 27 steps. Both attacks adopt the framework of the previous work that converts many semi-free-start collisions into a 2-block collision, and are faster than the generic attack in the cost metric of time-space tradeoff. We observe that the number of required semi-free-start collisions can be reduced in the quantum setting, which allows us to convert the previous classical 38 and 39 step semi-free-start collisions into a collision. The idea behind our attacks is simple and will also be applicable to other cryptographic hash functions.
2021
CRYPTO
Efficient Information-Theoretic Multi-Party Computation over Non-Commutative Rings 📺
We construct the first efficient MPC protocol that only requires black-box access to a non-commutative ring $R$. Previous results in the same setting were efficient only either for a constant number of corruptions or when computing branching programs and formulas. Our techniques are based on a generalization of Shamir's secret sharing to non-commutative rings, which we derive from the work on Reed-Solomon codes by Quintin, Barbier and Chabot (\textit{IEEE Transactions on Information Theory, 2013}). When the center of the ring contains a set $A = \{\alpha_0, \ldots, \alpha_n\}$ such that $\forall i \neq j, \alpha_i - \alpha_j \in R^*$, the resulting secret sharing scheme is strongly multiplicative and we can generalize existing constructions over finite fields without much trouble. Most of our work is devoted to the case where the elements of $A$ do not commute with all of $R$, but they just commute with each other. For such rings, the secret sharing scheme cannot be linear ``on both sides'' and furthermore it is not multiplicative. Nevertheless, we are still able to build MPC protocols with a concretely efficient online phase and black-box access to $R$. As an example we consider the ring $\mathcal{M}_{m\times m}(\mathbb{Z}/2^k\mathbb{Z})$, for which when $m > \log(n+1)$, we obtain protocols that require around $\lceil\log(n+1)\rceil/2$ less communication and $2\lceil\log(n+1)\rceil$ less computation than the state-of-the-art protocol based on Circuit Amortization Friendly Encodings (Dalskov, Lee and Soria-Vazquez, \textit{ASIACRYPT 2020}). In this setting with a ``less commutative'' $A$, our black-box preprocessing phase has a less practical complexity of $\mathrm{poly}(n)$. Due to this, we additionally provide specialized, concretely efficient preprocessing protocols for $R = \mathcal{M}_{m\times m}(\mathbb{Z}/2^k\mathbb{Z})$ that exploit the structure of the matrix ring.
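Since the construction generalizes Shamir's scheme, a minimal Python sketch of standard Shamir secret sharing over a prime field may help fix ideas about what is being lifted to non-commutative rings (where the evaluation points must come from an exceptional commuting set $A$); the field, threshold, and party count below are toy choices, not parameters from the paper.

import secrets

P = 2**61 - 1                                     # toy prime field

def shamir_share(secret, t, n):
    """Share `secret` with a random degree-(t-1) polynomial evaluated at 1..n."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, poly(i)) for i in range(1, n + 1)]

def shamir_reconstruct(points):
    """Lagrange-interpolate the polynomial at 0 from any t points."""
    secret = 0
    for j, (xj, yj) in enumerate(points):
        num = den = 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * -xm % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = shamir_share(42, t=2, n=4)
assert shamir_reconstruct(shares[:2]) == 42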
2021
CRYPTO
Adaptive Extractors and their Application to Leakage Resilient Secret Sharing 📺
We introduce Adaptive Extractors, which unlike traditional randomness extractors, guarantee security even when an adversary obtains leakage on the source \textit{after} observing the extractor output. We make a compelling case for the study of such extractors by demonstrating their use in obtaining adaptive leakage in secret sharing schemes. Specifically, at FOCS 2020, Chattopadhyay, Goodman, Goyal, Kumar, Li, Meka, Zuckerman, built an adaptively secure leakage resilient secret sharing scheme (LRSS) with both rate and leakage rate being $\mathcal{O}(1/n)$, where $n$ is the number of parties. In this work, we build an adaptively secure LRSS that offers an interesting trade-off between rate, leakage rate, and the total number of shares from which an adversary can obtain leakage. As a special case, when considering $t$-out-of-$n$ secret sharing schemes for threshold $t = \alpha n$ (constant $0<\alpha<1$), we build a scheme with constant rate, constant leakage rate, and allow the adversary leakage from all but $t-1$ of the shares, while giving her the remaining $t-1$ shares completely in the clear. (Prior to this, constant rate LRSS scheme tolerating adaptive leakage was unknown for \textit{any} threshold.) Finally, we show applications of our techniques to both non-malleable secret sharing and secure message transmission.
2021
CRYPTO
Pushing the Limits of Valiant's Universal Circuits: Simpler, Tighter and More Compact 📺
A universal circuit (UC) is a general-purpose circuit that can simulate arbitrary circuits (up to a certain size $n$). Valiant provides a $k$-way recursive construction of UCs (STOC 1976), where $k$ tunes the complexity of the recursion. More concretely, Valiant gives theoretical constructions of 2-way and 4-way UCs of asymptotic (multiplicative) sizes $5n\log n$ and $4.75 n\log n$ respectively, which match the asymptotic lower bound $\Omega(n\log n)$ up to some constant factor. Motivated by various privacy-preserving cryptographic applications, Kiss et al. (Eurocrypt 2016) validated the practicality of $2$-way universal circuits by giving example implementations for private function evaluation. G{\"{u}}nther et al. (Asiacrypt 2017) and Alhassan et al. (J. Cryptology 2020) implemented the 2-way/4-way hybrid UCs with various optimizations in place towards making universal circuits more practical. Zhao et al. (Asiacrypt 2019) optimized Valiant's 4-way UC to asymptotic size $4.5 n\log n$ and proved a lower bound $3.64 n\log n$ for UCs under the Valiant framework. As the scale of computation goes beyond 10-million-gate ($n=10^7$) or even billion-gate level ($n=10^9$), the constant factor in UC size plays an increasingly important role in application performance. In this work, we investigate Valiant's universal circuits and present an improved framework for constructing universal circuits with the following advantages. [Simplicity.] Parameterization is no longer needed. Whereas previous implementations resorted to a hybrid construction combining $k=2$ and $k=4$ for a tradeoff between fine granularity and asymptotic size-efficiency, our construction gets the best of both worlds when configured at the lowest complexity (i.e., $k=2$). [Compactness.] Our universal circuits have asymptotic size $3n\log n$, improving upon the best previously known $4.5n\log n$ by 33\% and beating the $3.64n\log n$ lower bound for UCs constructed under Valiant's framework (Zhao et al., Asiacrypt 2019). [Tightness.] We show that under our new framework the UC size is lower bounded by $2.95 n\log n$, which almost matches the $3n\log n$ circuit size of our $2$-way construction. We implement the 2-way universal circuits and evaluate their performance against other implementations, which confirms our theoretical analysis.
2021
CRYPTO
Smoothing Out Binary Linear Codes and Worst-case Sub-exponential Hardness for LPN 📺
Learning parity with noise (LPN) is a notoriously hard (average-case) problem that has been well studied in learning theory, coding theory and cryptography since the early 90's. It further inspires the Learning with Errors (LWE) problem [Regev, STOC 2005], which has become one of the central building blocks for post-quantum cryptography and advanced cryptographic constructions. Unlike LWE, whose hardness can be reduced from worst-case lattice problems, no corresponding worst-case hardness results were known for LPN until very recently. At Eurocrypt 2019, Brakerski et al. [BLVW19] established the first feasibility result that the worst-case hardness of the nearest codeword problem (NCP) (on balanced linear codes) at the extremely low noise rate $\frac{\log^2 n}{n}$ implies the quasi-polynomial hardness of LPN at the extremely high noise rate $1/2-1/\mathrm{poly}(n)$. It remained open whether a worst-case to average-case reduction can be established for standard (constant-noise) LPN, ideally with sub-exponential hardness. We start with a simple observation that the hardness of high-noise LPN over large fields is implied by that of LWE with the same modulus, and is thus reducible from worst-case hardness of lattice problems. We then revisit [BLVW19], which is the main focus of this work. We first expand the underlying binary linear codes (of the NCP) to not only the balanced code considered in [BLVW19] but also to another code (in some sense dual to the balanced code). At the core of our reduction is a new variant of the smoothing lemma (for both binary codes) that circumvents the barriers (inherent in the underlying worst-case randomness extraction) and admits tradeoffs for a wider spectrum of parameter choices. In addition to the worst-case hardness result obtained in [BLVW19], we show that for any constant $0<c<1$ the constant-noise LPN problem is ($T=2^{\Omega(n^{1-c})}$, $\epsilon=2^{-\Omega(n^{\min(c,1-c)})}$, $q=2^{\Omega(n^{\min(c,1-c)})}$)-hard assuming that the NCP at the low-noise rate $\tau=n^{-c}$ is ($T'={2^{\Omega(\tau n)}}$, $\epsilon'={2^{-\Omega(\tau n)}}$, $m={2^{\Omega(\tau n)}}$)-hard in the worst case, where $T$, $\epsilon$, $q$ and $m$ are time complexity, success rate, sample complexity, and codeword length respectively. Moreover, refuting the worst-case hardness assumption would imply arbitrary polynomial speedups over the current state-of-the-art algorithms for solving the NCP (and LPN), which is a win-win result. Unfortunately, public-key encryption and collision-resistant hash functions need constant-noise LPN with ($T={2^{\omega(\sqrt{n})}}$, $\epsilon={2^{-\omega(\sqrt{n})}}$, $q={2^{\sqrt{n}}}$)-hardness (Yu et al., CRYPTO 2016 \& ASIACRYPT 2019), which is almost (up to an arbitrary $\omega(1)$ factor in the exponent) what is reducible from the worst-case NCP when $c= 0.5$. We leave it as an open problem whether the gap can be closed or there is a separation in place.
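Just to fix notation for the problem discussed above, here is a toy Python generator of LPN samples $(a, \langle a, s\rangle + e \bmod 2)$ at noise rate $\tau$; the dimension and rate are illustrative, and nothing about the paper's worst-case-to-average-case reduction is captured by this snippet.

import secrets

def lpn_samples(s, num, tau):
    """Return `num` samples (a, <a, s> + e mod 2) with Bernoulli(tau) noise e."""
    n, out = len(s), []
    for _ in range(num):
        a = [secrets.randbelow(2) for _ in range(n)]
        e = 1 if secrets.randbelow(1_000_000) < tau * 1_000_000 else 0
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
        out.append((a, b))
    return out

samples = lpn_samples(s=[1, 0, 1, 1], num=8, tau=0.125)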
2021
CRYPTO
Improved Computational Extractors and their Applications 📺
Recent exciting breakthroughs have achieved the first two-source extractors that operate in the low min-entropy regime. Unfortunately, these constructions suffer from non-negligible error, and reducing the error to negligible remains an important open problem. In recent work, Garg, Kalai, and Khurana (GKK, Eurocrypt 2020) investigated a meaningful relaxation of this problem to the computational setting, in the presence of a common random string (CRS). In this relaxed model, their work built explicit two-source extractors for a restricted class of unbalanced sources with min-entropy $n^{\gamma}$ (for some constant $\gamma$) and negligible error, under the sub-exponential DDH assumption. In this work, we investigate whether computational extractors in the CRS model can be applied to more challenging environments. Specifically, we study network extractor protocols (Kalai et al., FOCS 2008) and extractors for adversarial sources (Chattopadhyay et al., STOC 2020) in the CRS model. We observe that these settings require extractors that work well for balanced sources, making the GKK results inapplicable. We remedy this situation by obtaining the following results, all of which are in the CRS model and assume the sub-exponential hardness of DDH. - We obtain ``optimal'' computational two-source and non-malleable extractors for balanced sources: requiring both sources to have only poly-logarithmic min-entropy, and achieving negligible error. To obtain this result, we perform a tighter and arguably simpler analysis of the GKK extractor. - We obtain a single-round network extractor protocol for poly-logarithmic min-entropy sources that tolerates an optimal number of adversarial corruptions. Prior work in the information-theoretic setting required sources with high min-entropy rates, and in the computational setting had round complexity that grew with the number of parties, required sources with linear min-entropy, and relied on exponential hardness (albeit without a CRS). - We obtain an ``optimal'' adversarial source extractor for poly-logarithmic min-entropy sources, where the number of honest sources is only 2 and each corrupted source can depend on either one of the honest sources. Prior work in the information-theoretic setting had to assume a large number of honest sources.
2021
CRYPTO
Broadcast-Optimal Two Round MPC with an Honest Majority 📺
This paper closes the question of the possibility of two-round MPC protocols achieving different security guarantees with and without the availability of broadcast in any given round. Cohen et al. (Eurocrypt 2020) study this question in the dishonest majority setting; we complete the picture by studying the honest majority setting. In the honest majority setting, given broadcast in both rounds, it is known that the strongest guarantee — guaranteed output delivery — is achievable (Gordon et al. Crypto 2015). We show that, given broadcast in the first round only, guaranteed output delivery is still achievable. Given broadcast in the second round only, we give a new construction that achieves identifiable abort, and we show that fairness — and thus guaranteed output delivery — are not achievable in this setting. Finally, if only peer-to-peer channels are available, we show that the weakest guarantee — selective abort — is the only one achievable for corruption thresholds t > 1 and for t = 1 and n = 3. On the other hand, it is already known that selective abort can be achieved in these cases. In the remaining cases, i.e., t = 1 and n > 3, it is known (from the work of Ishai et al. at Crypto 2010, and Ishai et al. at Crypto 2015) that guaranteed output delivery (and thus all weaker guarantees) are possible.
2021
CRYPTO
Compressing Proofs of k-Out-Of-n Partial Knowledge 📺
In a proof of partial knowledge, introduced by Cramer, Damg{\aa}rd and Schoenmakers (CRYPTO 1994), a prover knowing witnesses for some $k$-subset of $n$ given public statements can convince the verifier of this claim without revealing which $k$-subset. Their solution combines $\Sigma$-protocol theory and linear secret sharing, and achieves linear communication complexity for general $k,n$. The ``one-out-of-$n$'' case $k=1$ in particular has seen myriad applications during the last decades, e.g., in electronic voting, ring signatures, and confidential transaction systems. In this paper we focus on the discrete logarithm (DL) setting, where the prover claims knowledge of DLs of $k$-out-of-$n$ given elements. Groth and Kohlweiss (EUROCRYPT 2015) have shown how to solve the special case $k=1$, for arbitrary~$n$, with {\em logarithmic} (in $n$) communication, instead of linear as in prior work. However, their method takes explicit advantage of $k=1$ and does not generalize to $k>1$. Alternatively, an {\em indirect} approach for solving the considered problem is by translating the $k$-out-of-$n$ relation into a circuit and then applying communication-efficient circuit ZK. Indeed, for the $k=1$ case this approach has been highly optimized, e.g., in ZCash. Our main contribution is a new, simple honest-verifier zero-knowledge proof protocol for proving knowledge of $k$ out of $n$ DLs with {\em logarithmic} communication and {\em for general $k$ and $n$}, without requiring any generic circuit ZK machinery. Our solution puts forward a novel extension of the {\em compressed} $\Sigma$-protocol theory (CRYPTO 2020), which we then utilize to compress a new $\Sigma$-protocol for proving knowledge of $k$-out-of-$n$ DLs down to logarithmic size. The latter $\Sigma$-protocol is inspired by the CRYPTO 1994 approach, but a careful re-design of the original protocol is necessary for the compression technique to apply. Interestingly, {\em even for $k=1$ and general $n$} our approach improves on prior {\em direct} approaches, as it reduces prover complexity without increasing the communication complexity. Besides the conceptual simplicity, we also identify regimes of practical relevance where our approach achieves asymptotic and concrete improvements, e.g., in proof size and prover complexity, over the generic approach based on circuit-ZK. Finally, we show various extensions and generalizations of our core result. For instance, we extend our protocol to proofs of partial knowledge of Pedersen (vector) commitment openings, and/or to include a proof that the witness satisfies some additional constraint, and we show how to extend our results to non-threshold access structures.
2021
CRYPTO
A Compressed Sigma-Protocol Theory for Lattices 📺
We show a \emph{lattice-based} solution for commit-and-prove transparent circuit zero-knowledge (ZK) with \emph{polylog-communication}, the \emph{first} not depending on PCPs. We start from \emph{compressed $\Sigma$-protocol theory} (CRYPTO 2020), which is built around basic $\Sigma$-protocols for opening an arbitrary linear form on a long secret vector that is compactly committed to. These protocols are first compressed using a recursive ``folding-technique'' adapted from Bulletproofs, at the expense of logarithmic rounds. Proving in ZK that the secret vector satisfies a given constraint -- captured by a circuit -- is then by (blackbox) reduction to the linear case, via arithmetic secret-sharing techniques adapted from MPC. Commit-and-prove is also facilitated, i.e., when commitment(s) to the secret vector are created ahead of any circuit-ZK proof. On several platforms (incl.\ DL) this leads to logarithmic communication. Non-interactive versions follow from Fiat-Shamir. This abstract modular theory strongly suggests that it should somehow be supported by a lattice-platform \emph{as well}. However, when going through the motions and trying to establish low communication (on a SIS-platform), a certain significant lack in current understanding of multi-round protocols is exposed. Namely, as opposed to the DL-case, the basic $\Sigma$-protocol in question typically has \emph{poly-small challenge} space. Taking into account the compression-step -- which yields \emph{non-constant} rounds -- and the necessity for parallelization to reduce error, there is no known tight result that the compound protocol admits an efficient knowledge extractor. We resolve the state of affairs here by a combination of two novel results which are fully general and of independent interest. The first gives a tight analysis of efficient knowledge extraction in case of non-constant rounds combined with poly-small challenge space, whereas the second shows that parallel repetition indeed forces rapid decrease of knowledge error. Moreover, in our present context, arithmetic secret sharing is not defined over a large finite field but over a quotient of a number ring and this forces our careful adaptation of how the linearization techniques are deployed. We develop our protocols in an abstract framework that is conceptually simple and can be flexibly instantiated. In particular, the framework applies to arbitrary rings and norms.
2021
CRYPTO
Authenticated Key Exchange and Signatures with Tight Security in the Standard Model 📺
We construct the first authenticated key exchange protocols that achieve tight security in the standard model. Previous works either relied on techniques that seem to inherently require a random oracle, or achieved only “Multi-Bit-Guess” security, which is not known to compose tightly, for instance, to build a secure channel. Our constructions are generic, based on digital signatures and key encapsulation mechanisms (KEMs). The main technical challenge we resolve is to determine suitable KEM security notions which on the one hand are strong enough to yield tight security, but at the same time weak enough to be efficiently instantiable in the standard model, based on standard techniques such as universal hash proof systems. Digital signature schemes with tight multi-user security in the presence of adaptive corruptions are a central building block, which is used in all known constructions of tightly-secure AKE with full forward security. We identify a subtle gap in the security proof of the only previously known efficient standard model scheme by Bader et al. (TCC 2015). We develop a new variant, which yields the currently most efficient signature scheme that achieves this strong security notion without random oracles and based on standard hardness assumptions.
2021
CRYPTO
Witness Authenticating NIZKs and Applications 📺
We initiate the study of witness authenticating NIZK proof systems (waNIZKs), in which one can use a witness $w$ of a statement $x$ to identify whether a valid proof for $x$ is indeed generated using $w$. Such a new identification functionality enables more diverse applications, and it also puts new requirements on soundness: (1) no adversary can generate a valid proof that will not be identified by any witness; and (2) no adversary can forge a proof using her valid witness to frame others. To work around the obvious obstacle towards conventional zero-knowledgeness, we define entropic zero-knowledgeness, which requires the proof to leak no partial information if the witness has sufficient computational entropy. We give a formal treatment of this new primitive. The modeling turns out to be quite involved: multiple subtle points arise and particular care is required. We present general constructions from standard assumptions. We also demonstrate three applications in non-malleable (perfect one-way) hash, group signatures with verifier-local revocation and plaintext-checkable public-key encryption. Our waNIZK provides a new tool to advance the state of the art in all these applications.
2021
CRYPTO
Receiver-Anonymity in Rerandomizable RCCA-Secure Cryptosystems Resolved 📺
In this work, we resolve the open problem raised by Prabhakaran and Rosulek at CRYPTO 2007, and present the first anonymous, rerandomizable, Replayable-CCA (RCCA) secure public key encryption scheme. This solution opens the door to numerous privacy-oriented applications with a highly desired RCCA security level. At the core of our construction is a non-trivial extension of smooth projective hash functions (Cramer and Shoup, EUROCRYPT 2002), and a modular generic framework developed for constructing Rand-RCCA-secure encryption schemes with receiver-anonymity. The framework gives an enhanced abstraction of the original Prabhakaran and Rosulek’s scheme (which was the first construction of Rand-RCCA-secure encryption in the standard model), where the most crucial enhancement is the first realization of the desirable property of receiver-anonymity, essential to privacy settings. It also serves as a conceptually more intuitive and generic understanding of RCCA security, which leads, for example, to new implementations of the notion. Finally, note that (since CCA security is not applicable to the privacy applications motivating our work) the concrete results and the conceptual advancement presented here, seem to substantially expand the power and relevance of the notion of Rand-RCCA-secure encryption.
2021
CRYPTO
Traceable Secret Sharing and Applications 📺
Consider a scenario where Alice stores some secret data $s$ on $n$ servers using a $t$-out-of-$n$ secret sharing scheme. Trudy (the collector) is interested in the secret data of Alice and is willing to pay for it. Trudy publishes an advertisement on the internet which describes an elaborate cryptographic scheme to collect the shares from the $n$ servers. Each server who decides to submit its share is paid a hefty monetary reward and is guaranteed ``immunity'' from being caught or prosecuted in a court for violating its service agreement with Alice. Bob is one of the servers and sees this advertisement. On examining the collection scheme closely, Bob concludes that there is no way for Alice to prove in a court that he submitted his share. Indeed, if Bob is rational, he might use the cryptographic scheme in the advertisement and submit his share since there are no penalties and no fear of being caught and prosecuted. Can we design a secret sharing scheme which Alice can use to avoid such a scenario? We introduce a new primitive called \textit{Traceable Secret Sharing} to tackle this problem. In particular, a traceable secret sharing scheme guarantees that a cheating server always runs the risk of getting traced and prosecuted by providing valid evidence (which can be examined in a court of law) implicating its dishonest behavior. We explore various definitional aspects and show that such schemes are highly non-trivial to construct (even ignoring efficiency). We then give an efficient construction of traceable secret sharing assuming the existence of a secure two-party computation protocol. We also show an application of this primitive in constructing traceable protocols for multi-server delegation of computation.
2021
CRYPTO
On the Possibility of Basing Cryptography on $\mathsf{EXP} \neq \mathsf{BPP}$ 📺
Liu and Pass (FOCS'20) recently demonstrated an equivalence between the existence of one-way functions and mild average-case hardness of the time-bounded Kolmogorov complexity problem. In this work, we establish a similar equivalence but to a different form of time-bounded Kolmogorov Complexity---namely, Levin's notion of Kolmogorov Complexity---whose hardness is closely related to the problem of whether $\mathsf{EXP} \neq \mathsf{BPP}$. In more detail, let $Kt(x)$ denote the Levin-Kolmogorov Complexity of the string $x$; that is, $Kt(x) = \min_{\Pi \in \{0,1\}^*, t \in \mathbb{N}}\{|\Pi| + \lceil \log t \rceil: U(\Pi, 1^t) = x\}$, where $U$ is a universal Turing machine, and let $\mathsf{MKtP}$ denote the language of pairs $(x,k)$ having the property that $Kt(x) \leq k$. We demonstrate that: - $\mathsf{MKtP}$ is \emph{two-sided error} mildly average-case hard (i.e., $\mathsf{MKtP} \notin \mathsf{Heur}_{p}\mathsf{BPP}$) iff infinitely-often one-way functions exist. - $\mathsf{MKtP}$ is \emph{errorless} mildly average-case hard (i.e., $\mathsf{MKtP} \notin \mathsf{Avg}_{p}\mathsf{BPP}$) iff $\mathsf{EXP} \neq \mathsf{BPP}$. Thus, the only ``gap'' towards getting (infinitely-often) one-way functions from the assumption that $\mathsf{EXP} \neq \mathsf{BPP}$ is the seemingly ``minor'' technical gap between two-sided error and errorless average-case hardness of the $\mathsf{MKtP}$ problem. As a corollary of this result, we additionally demonstrate that any reduction from errorless to two-sided error average-case hardness for $\mathsf{MKtP}$ implies (unconditionally) that $\mathsf{NP} \neq \mathsf{P}$. We finally consider other alternative notions of Kolmogorov complexity---including space-bounded Kolmogorov complexity and conditional Kolmogorov complexity---and show how average-case hardness of problems related to them characterizes log-space computable one-way functions, or one-way functions in $\mathsf{NC}^0$.
2021
CRYPTO
Oblivious RAM with Worst-Case Logarithmic Overhead 📺
We present the first Oblivious RAM (ORAM) construction that for $N$ memory blocks supports accesses with \emph{worst-case} $O(\log N)$ overhead for any block size $\Omega(\log N)$ while requiring a client memory of only a constant number of memory blocks. We rely on the existence of one-way functions and guarantee computational security. Our result closes a long line of research on fundamental feasibility results for ORAM constructions as logarithmic overhead is necessary. The previous best logarithmic overhead construction only guarantees it in an \emph{amortized} sense, i.e., logarithmic overhead is achieved only for long enough access sequences, where some of the individual accesses incur $\Theta(N)$ overhead. The previously best ORAM in terms of \emph{worst-case} overhead achieves $O(\log^2 N/\log\log N)$ overhead. Technically, we design a novel de-amortization framework for modern ORAM constructions that use the ``shuffled inputs'' assumption. Our framework significantly departs from all previous de-amortization frameworks, originating from Ostrovsky and Shoup (STOC~'97), that seem to be fundamentally too weak to be applied on modern ORAM constructions.
2021
CRYPTO
Provable Security Analysis of FIDO2 📺
We carry out the first provable security analysis of the new FIDO2 protocols, the FIDO Alliance’s promising proposal for a passwordless user-authentication standard. Our analysis covers the core components of FIDO2: the W3C’s Web Authentication (WebAuthn) specification and the new Client-to-Authenticator Protocol (CTAP2). Our analysis is modular. For WebAuthn and CTAP2, in turn, we propose appropriate security models that aim to capture their intended security goals and use the models to analyze their security. First, our proof confirms the authentication security of WebAuthn. Then, we show that CTAP2 can only be proved secure in a weak sense; meanwhile, we identify a series of its design flaws and provide suggestions for improvement. To withstand stronger yet realistic adversaries, we propose a generic protocol called sPACA and prove its strong security; with proper instantiations, sPACA is also more efficient than CTAP2. Finally, we analyze the overall security guarantees provided by FIDO2 and WebAuthn+sPACA based on the security of their components. We expect that our models and provable security results will help clarify the security guarantees of the FIDO2 protocols. In addition, we advocate the adoption of our sPACA protocol as a substitute for CTAP2 for both stronger security and better performance.
2021
CRYPTO
MoSS: Modular Security Specifications Framework 📺
Applied cryptographic protocols have to meet a rich set of security requirements under diverse environments and against diverse adversaries. However, currently used security specifications, based on either simulation (e.g., `ideal functionality' in UC) or games, are monolithic, combining together different aspects of protocol requirements, environment and assumptions. Such security specifications are complex, error-prone, and foil reusability, modular analysis and incremental design. We present the Modular Security Specifications (MoSS) framework, which cleanly separates the security requirements (goals) which a protocol should achieve, from the models (assumptions) under which each requirement should be ensured. This modularity allows us to reuse individual models and requirements across different protocols and tasks, and to compare protocols for the same task, either under different assumptions or satisfying different sets of requirements. MoSS is flexible and extendable, e.g., it can support both asymptotic and concrete definitions for security. So far, we confirmed the applicability of MoSS to two applications: secure broadcast protocols and PKI schemes.
2021
CRYPTO
MHz2k: MPC from HE over $\mathbb{Z}_{2^k}$ with New Packing, Simpler Reshare, and Better ZKP 📺
We propose a multi-party computation (MPC) protocol over $\mathbb{Z}_{2^k}$ secure against an actively corrupted majority from somewhat homomorphic encryption. The main technical contributions are: (i) a new efficient packing method for $\mathbb{Z}_{2^k}$-messages in lattice-based somewhat homomorphic encryption schemes, (ii) a simpler reshare protocol for level-dependent packings, (iii) a more efficient zero-knowledge proof of plaintext knowledge on cyclotomic rings $\mathbb{Z}[X]/\Phi_M(X)$ with $M$ being a prime. Integrating them, our protocol shows 2.2x to 4.8x improvements in amortized communication costs compared to the previous best results. Our techniques not only improve the efficiency of MPC over $\mathbb{Z}_{2^k}$ considerably, but also provide a toolkit that can be leveraged when designing other cryptographic primitives over $\mathbb{Z}_{2^k}$.
2021
CRYPTO
DualRing: Generic Construction of Ring Signatures with Efficient Instantiations 📺
We introduce a novel generic ring signature construction, called DualRing, which can be built from several canonical identification schemes (such as Schnorr identification). DualRing differs from the classical ring signatures by its formation of two rings: a ring of commitments and a ring of challenges. It has a structural difference from the common ring signature approaches based on accumulators or zero-knowledge proofs of the signer index. Comparatively, DualRing has a number of unique advantages. Considering the DL-based setting by using Schnorr identification scheme, our DualRing structure allows the signature size to be compressed into logarithmic size via an argument of knowledge system such as Bulletproofs. We further improve on the Bulletproofs argument system to eliminate about half of the computation while maintaining the same proof size. We call this Sum Argument and it can be of independent interest. This DL-based construction, named DualRing-EC, using Schnorr identification with Sum Argument has the shortest ring signature size in the literature without using trusted setup. Considering the lattice-based setting, we instantiate DualRing by a canonical identification based on M-LWE and M-SIS. In practice, we achieve the shortest lattice-based ring signature, named DualRing-LB, when the ring size is between 4 and 2000. DualRing-LB is also 5x faster in signing and verification than the fastest lattice-based scheme by Esgin et al. (CRYPTO'19).
2021
CRYPTO
A Black-Box Approach to Post-Quantum Zero-Knowledge in Constant Rounds 📺
In a recent seminal work, Bitansky and Shmueli (STOC '20) gave the first construction of a constant round zero-knowledge argument for NP secure against quantum attacks. However, their construction has several drawbacks compared to the classical counterparts. Specifically, their construction only achieves computational soundness, requires strong assumptions of quantum hardness of learning with errors (QLWE assumption) and the existence of quantum fully homomorphic encryption (QFHE), and relies on non-black-box simulation. In this paper, we resolve these issues at the cost of weakening the notion of zero-knowledge to what is called ϵ-zero-knowledge. Concretely, we construct the following protocols: - We construct a constant round interactive proof for NP that satisfies statistical soundness and black-box ϵ-zero-knowledge against quantum attacks assuming the existence of collapsing hash functions, which is a quantum counterpart of collision-resistant hash functions. Interestingly, this construction is just an adapted version of the classical protocol by Goldreich and Kahan (JoC '96) though the proof of ϵ-zero-knowledge property against quantum adversaries requires novel ideas. - We construct a constant round interactive argument for NP that satisfies computational soundness and black-box ϵ-zero-knowledge against quantum attacks only assuming the existence of post-quantum one-way functions. At the heart of our results is a new quantum rewinding technique that enables a simulator to extract a committed message of a malicious verifier while simulating verifier's internal state in an appropriate sense.
2021
CRYPTO
A Rational Protocol Treatment of 51% Attacks 📺
Game-theoretic analysis of cryptocurrencies and, more generally, blockchain-based decentralized ledgers offers insight on their economic robustness, and their behavior when even the cryptographic assumptions that underpin their security fail. In this work we utilize the recently proposed blockchain adaptation of the rational protocol design (RPD) framework [EUROCRYPT~'18] to analyze 51\% double-spending attacks against Nakamoto-style cryptocurrencies. We observe a property of the originally proposed utility class that yields an unnatural behavior against such attacks, and show how to devise a utility that avoids this pitfall and makes predictions that match the observable behavior---i.e., that renders attacking a dominant strategy in settings where an attack was indeed observed. We then propose a generic modification to the underlying protocol which deters attacks on consistency by adversaries controlling a majority of the system's resources, including the 51\% double-spending attack. This can be used as guidance to patch systems that have suffered such attacks, e.g., Ethereum Classic and Bitcoin Cash, and serves as a demonstration of the power of game-theoretic analyses.
2021
CRYPTO
Differential-Linear Cryptanalysis from an Algebraic Perspective 📺
The differential-linear cryptanalysis is an important cryptanalytic tool in cryptography, and has been extensively researched since its discovery by Langford and Hellman in 1994. There are nevertheless very few methods to study the middle part where the differential and linear trail connect, besides the Differential-Linear Connectivity Table (Bar-On et al., EUROCRYPT 2019) and the experimental approach. In this paper, we study differential-linear cryptanalysis from an algebraic perspective. We first introduce a technique called Differential Algebraic Transitional Form (DATF) for differential-linear cryptanalysis, then develop a new theory of estimation of the differential-linear bias and techniques for key recovery in differential-linear cryptanalysis. The techniques are applied to the CAESAR finalist ASCON, the AES finalist SERPENT, and the eSTREAM finalist Grain v1. The bias of the differential-linear approximation is estimated for ASCON and SERPENT. The theoretical estimates of the bias are more accurate than that obtained by the DLCT, and the techniques can be applied with more rounds. Our general techniques can also be used to estimate the bias of Grain v1 in differential cryptanalysis, and have a markedly better performance than the Differential Engine tool tailor-made for the cipher. The improved key recovery attacks on round-reduced variants of these ciphers are then proposed. To the best of our knowledge, they are thus far the best known cryptanalysis of SERPENT, as well as the best differential-linear cryptanalysis of ASCON and the best initialization analysis of Grain v1. The results have been fully verified by experiments. Notably, security analysis of SERPENT is one of the most important applications of differential-linear cryptanalysis in the last two decades. The results in this paper update the differential-linear cryptanalysis of SERPENT-128 and SERPENT-256 with one more round after the work of Biham, Dunkelman and Keller in 2003.
2021
CRYPTO
Round Efficient Secure Multiparty Quantum Computation with Identifiable Abort 📺
A recent result by Dulek et al. (EUROCRYPT 2020) showed a secure protocol for computing any quantum circuit even without the presence of an honest majority. Their protocol, however, is susceptible to a ``denial of service'' attack and allows even a single corrupted party to force an abort. We propose the first quantum protocol that admits security-with-identifiable-abort, which allows the honest parties to agree on the identity of a corrupted party in case of an abort. Additionally, our protocol is the first to have the property that the number of rounds where quantum communication is required is independent of the circuit complexity. Furthermore, if there exists a post-quantum secure classical protocol whose round complexity is independent of the circuit complexity, then our protocol has this property as well. Our protocol is secure under the assumption that classical quantum-resistant fully homomorphic encryption schemes with decryption circuit of logarithmic depth exist. Interestingly, our construction also admits a reduction from quantum fair secure computation to classical fair secure computation.
2021
CRYPTO
How to Meet Ternary LWE Keys 📺
The LWE problem with its ring variants is today the most prominent candidate for building efficient public key cryptosystems resistant to quantum computers. NTRU-type cryptosystems use an LWE-type variant with small max-norm secrets, usually with ternary coefficients from the set $\{-1,0,1\}$. The presumably best attack on these schemes is a hybrid attack that combines lattice reduction techniques with Odlyzko's Meet-in-the-Middle approach. Odlyzko's algorithm is a classical combinatorial attack that for key space size $S$ runs in time $S^{0.5}$. We substantially improve on this Meet-in-the-Middle approach, using the representation technique developed for subset sum algorithms. Asymptotically, our heuristic Meet-in-the-Middle attack runs in time roughly $S^{0.25}$, which also beats the $S^{1/3}$ complexity of the best known quantum algorithm. For the round-3 NIST post-quantum encryption schemes NTRU and NTRU Prime we obtain non-asymptotic instantiations of our attack with complexity roughly $S^{0.3}$. As opposed to other combinatorial attacks, our attack benefits from larger LWE field sizes $q$, as they are often used in modern lattice-based signatures. For example, for BLISS and GLP signatures we obtain non-asymptotic combinatorial attacks around $S^{0.28}$. Our attacks do not invalidate the security claims of the aforementioned schemes. However, they establish improved combinatorial upper bounds for their security. We leave it as an open question whether our new Meet-in-the-Middle attack in combination with lattice reduction can be used to speed up the hybrid attack.
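To illustrate the $S^{0.5}$ meet-in-the-middle baseline that the representation technique improves on, here is a toy Python sketch that recovers a ternary secret of a small noiseless linear system $A s = b \pmod q$ by splitting the secret into halves and meeting in the middle. Odlyzko's actual attack additionally copes with noisy equations (via sign-pattern hashing), and the paper's $S^{0.25}$-type algorithms go well beyond this; all parameters below are illustrative.

from itertools import product
import random

q, n = 257, 8
s_true = [random.choice((-1, 0, 1)) for _ in range(n)]
A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
b = [sum(a * s for a, s in zip(row, s_true)) % q for row in A]

half = n // 2
A_left = [row[:half] for row in A]
A_right = [row[half:] for row in A]

# Enumerate left halves and index them by their contribution A_left * s1.
table = {}
for s1 in product((-1, 0, 1), repeat=half):
    key = tuple(sum(a * x for a, x in zip(row, s1)) % q for row in A_left)
    table[key] = s1

# Enumerate right halves; a match means A * s = b for the combined candidate.
for s2 in product((-1, 0, 1), repeat=half):
    key = tuple((bi - sum(a * x for a, x in zip(row, s2))) % q
                for bi, row in zip(b, A_right))
    if key in table:
        s_found = list(table[key]) + list(s2)
        assert all(sum(a * x for a, x in zip(row, s_found)) % q == bi
                   for row, bi in zip(A, b))
        break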
2021
CRYPTO
Quadratic Secret Sharing and Conditional Disclosure of Secrets 📺
There is a huge gap between the upper and lower bounds on the share size of secret-sharing schemes for arbitrary $n$-party access structures, and consistent with our current knowledge the optimal share size can be anywhere between polynomial in $n$ and exponential in $n$. For linear secret-sharing schemes, we know that the share size for almost all $n$-party access structures must be exponential in $n$. Furthermore, most constructions of efficient secret-sharing schemes are linear. We would like to study larger classes of secret-sharing schemes with two goals. On one hand, we want to prove lower bounds for larger classes of secret-sharing schemes, possibly shedding some light on the share size of general secret-sharing schemes. On the other hand, we want to construct efficient secret-sharing schemes for access structures that do not have efficient linear secret-sharing schemes. Given this motivation, Paskin-Cherniavsky and Radune (ITC'20) defined and studied a new class of secret-sharing schemes in which the shares are generated by applying degree-$d$ polynomials to the secret and some random field elements. The special case $d=1$ corresponds to linear and multi-linear secret-sharing schemes. We define and study two additional classes of polynomial secret-sharing schemes: (1) schemes in which for every authorized set the reconstruction of the secret is done using polynomials and (2) schemes in which both sharing and reconstruction are done by polynomials. For linear secret-sharing schemes, schemes with linear sharing and schemes with linear reconstruction are equivalent. We give evidence that for polynomial secret-sharing schemes, schemes with polynomial sharing are probably stronger than schemes with polynomial reconstruction. We also prove lower bounds on the share size for schemes with polynomial reconstruction. On the positive side, we provide constructions of secret-sharing schemes and conditional disclosure of secrets (CDS) protocols with quadratic sharing and reconstruction. We extend a construction of Liu et al. (CRYPTO'17) and construct optimal quadratic $k$-server CDS protocols for functions $f:[N]^k\rightarrow \{0,1\}$ with message size $O(N^{(k-1)/3})$. We show how to transform our quadratic $k$-server CDS protocol into a robust CDS protocol, and use the robust CDS protocol to construct quadratic secret-sharing schemes for arbitrary access structures with share size $O(2^{0.705n})$; this is better than the best known share size of $O(2^{0.7576n})$ for linear secret-sharing schemes and worse than the best known share size of $O(2^{0.585n})$ for general secret-sharing schemes.
2021
CRYPTO
Non-Malleable Codes for Bounded Parallel-Time Tampering 📺
Non-malleable codes allow one to encode data in such a way that once a codeword is tampered with, the modified codeword is either an encoding of the original message, or a completely unrelated one. Since the introduction of this notion by Dziembowski, Pietrzak, and Wichs (ICS '10 and J. ACM '18), there has been a large body of work realizing such coding schemes secure against various classes of tampering functions. It is well known that there is no efficient non-malleable code secure against all polynomial size tampering functions. Nevertheless, no code which is non-malleable for \emph{bounded} polynomial size attackers is known and obtaining such a code has been a major open problem. We present the first construction of a non-malleable code secure against all polynomial size tampering functions that have \emph{bounded} parallel time. This is an even larger class than all bounded polynomial size functions. In particular, this class includes all functions in non-uniform $\mathbf{NC}$ (and much more). Our construction is in the plain model (i.e., no trusted setup) and relies on several cryptographic assumptions such as keyless hash functions, time-lock puzzles, as well as other standard assumptions. Additionally, our construction has several appealing properties: the complexity of encoding is independent of the class of tampering functions and we can obtain (sub-)exponentially small error.
2021
CRYPTO
Sumcheck Arguments and their Applications 📺
We introduce a class of interactive protocols, which we call *sumcheck arguments*, that establishes a novel connection between the sumcheck protocol (Lund et al. JACM 1992) and folding techniques for Pedersen commitments (Bootle et al. EUROCRYPT 2016). Informally, we consider a general notion of bilinear commitment over modules, and show that the sumcheck protocol applied to a certain polynomial associated with the commitment scheme yields a succinct argument of knowledge for openings of the commitment. Building on this, we additionally obtain succinct arguments for the NP-complete language R1CS over certain rings. Sumcheck arguments enable us to recover as a special case numerous prior works in disparate cryptographic settings (such as discrete logarithms, pairings, RSA groups, lattices), providing one abstract framework to understand them all. Further, we answer open questions raised in prior works, such as obtaining a lattice-based succinct argument from the SIS assumption for satisfiability problems over rings.
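As background for the connection above, the plain sumcheck protocol of Lund et al. can be run in a few lines; the toy below illustrates only the interactive pattern, for a small multilinear polynomial over a prime field, and is not the commitment-folding argument introduced in the paper.

# Minimal sketch of the plain sumcheck protocol (Lund et al.), illustration only.
import random

P = 101       # small prime field F_P, for illustration
N_VARS = 3

def g(x):
    """An example multilinear polynomial in 3 variables over F_P."""
    x1, x2, x3 = x
    return (3 * x1 * x2 + 5 * x2 * x3 + 7 * x1 + 11) % P

def boolean_points(k):
    """All points of {0,1}^k (the single empty point when k = 0)."""
    return [[(i >> j) & 1 for j in range(k)] for i in range(2 ** k)]

def round_poly(prefix, x_i):
    """g summed over the boolean cube, with the first variables fixed to
    prefix and the current variable fixed to x_i."""
    fixed = prefix + [x_i]
    return sum(g(fixed + tail) for tail in boolean_points(N_VARS - len(fixed))) % P

# Prover's claim: the sum of g over the whole boolean cube {0,1}^3.
claim = sum(g(pt) for pt in boolean_points(N_VARS)) % P

challenges = []
for _ in range(N_VARS):
    # Prover sends the round polynomial; g is multilinear, so it has degree
    # <= 1 in the current variable and two evaluations describe it fully.
    g0, g1 = round_poly(challenges, 0), round_poly(challenges, 1)
    # Verifier checks consistency with the running claim ...
    assert (g0 + g1) % P == claim
    # ... picks a random challenge, and updates the claim to g_i(r).
    r = random.randrange(P)
    claim = (g0 + (g1 - g0) * r) % P  # the line through (0, g0) and (1, g1)
    challenges.append(r)

# Final check: a single evaluation of g at the random point.
assert g(challenges) % P == claim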
2021
CRYPTO
Secure Computation from One-Way Noisy Communication, or: Anti-Correlation via Anti-Concentration 📺
Can a sender encode a pair of messages (m_0,m_1) jointly, and send their encoding over (say) a binary erasure channel, so that the receiver can decode exactly one of the two messages and the sender does not know which one? Garg et al. (Crypto 2015) showed that this is information-theoretically impossible. We show how to circumvent this impossibility by assuming that the receiver is computationally bounded, settling for an inverse-polynomial security error (which is provably necessary), and relying on ideal obfuscation. Our solution creates a ``computational anti-correlation'' between the events of receiving m_0 and receiving m_1 by exploiting the anti-concentration of the binomial distribution. The ideal obfuscation primitive in our construction can either be directly realized using (stateless) tamper-proof hardware, yielding an unconditional result, or heuristically instantiated using existing indistinguishability obfuscation schemes. We put forward a new notion of obfuscation that suffices to securely instantiate our construction. As a corollary, we get similar feasibility results for general secure computation of sender-receiver functionalities by leveraging the completeness of the above ``random oblivious transfer'' functionality.
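The anti-concentration property being exploited is the elementary fact that a Binomial($n$, 1/2) variable puts only $O(1/\sqrt{n})$ probability mass on any single value. The snippet below is a quick numerical illustration of that fact and is not part of the construction itself.

# Numerical illustration only: Bin(n, 1/2) is anti-concentrated -- its largest
# point mass, attained at n/2, is Theta(1/sqrt(n)), so no single outcome is
# very likely.
from math import comb, sqrt, pi

for n in [100, 1_000, 10_000]:
    peak = comb(n, n // 2) / 2 ** n          # P[Bin(n, 1/2) = n/2]
    print(f"n={n:6d}  max point mass={peak:.5f}  "
          f"sqrt(2/(pi*n))={sqrt(2 / (pi * n)):.5f}")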
2021
CRYPTO
Proof-Carrying Data without Succinct Arguments 📺
Proof-carrying data (PCD) is a powerful cryptographic primitive that enables mutually distrustful parties to perform distributed computations that run indefinitely. Known approaches to construct PCD are based on succinct non-interactive arguments of knowledge (SNARKs) that have a succinct verifier or a succinct accumulation scheme. In this paper we show how to obtain PCD without relying on SNARKs. We construct a PCD scheme given any non-interactive argument of knowledge (e.g., with linear-size arguments) that has a *split accumulation scheme*, which is a weak form of accumulation that we introduce. Moreover, we construct a transparent non-interactive argument of knowledge for R1CS whose split accumulation is verifiable via a (small) *constant number of group and field operations*. Our construction is proved secure in the random oracle model based on the hardness of discrete logarithms, and it leads, via the random oracle heuristic and our result above, to concrete efficiency improvements for PCD. Along the way, we construct a split accumulation scheme for Hadamard products under Pedersen commitments and for a simple polynomial commitment scheme based on Pedersen commitments. Our results are supported by a modular and efficient implementation.
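For context on the building block: Pedersen commitments are additively homomorphic, which is what folding and accumulation techniques exploit. The toy below illustrates only that homomorphism, over a small Schnorr group with made-up parameters; in practice such commitments are instantiated over elliptic-curve groups and commit to whole vectors.

# Toy Pedersen commitment, illustration of the homomorphic property only.
import random

# A small Schnorr group: p = 2q + 1 with p, q prime; the squares mod p form
# the order-q subgroup in which we commit. Toy parameters, not secure.
q = 1019
p = 2 * q + 1             # 2039
g = pow(2, 2, p)          # 4, a square mod p, hence of order q
h = pow(3, 2, p)          # 9; in a real scheme h is derived so log_g(h) is unknown

def commit(m, r):
    """Pedersen commitment C = g^m * h^r mod p (hiding via r, binding under DLOG)."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

m1, r1 = 123, random.randrange(q)
m2, r2 = 456, random.randrange(q)

# Additive homomorphism: the product of commitments to m1 and m2 is a
# commitment to m1 + m2 under randomness r1 + r2 -- the property that
# folding / accumulation arguments rely on.
assert commit(m1, r1) * commit(m2, r2) % p == commit(m1 + m2, r1 + r2)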
2021
CRYPTO
Efficient Key Recovery for all HFE Signature Variants 📺
The HFE cryptosystem is one of the best known multivariate schemes. Especially in the area of digital signatures, the HFEv- variant offers short signatures and high performance. Recently, an instance of the HFEv- signature scheme called GeMSS was selected as one of the alternative candidates for signature schemes in the third round of the NIST Post Quantum Crypto (PQC) Standardization Project. In this paper, we propose a new key recovery attack on the HFEv- signature scheme. Our attack shows that neither the Minus nor the Vinegar modification significantly enhances the security of the basic HFE scheme. This suggests that it is very difficult to build a secure and efficient signature scheme on the basis of HFE. In particular, we use our attack to show that the proposed parameters of the GeMSS scheme are not as secure as claimed.
2021
CRYPTO
New Approaches for Quantum Copy-Protection 📺
Quantum copy protection uses the unclonability of quantum states to construct quantum software that provably cannot be pirated. Copy protection would be immensely useful, but unfortunately little is known about how to achieve it in general. In this work, we make progress on this goal, by giving the following results: * We show how to copy protect any program that cannot be learned from its input-output behavior, relative to a classical oracle. This improves on Aaronson (CCC 2009), which achieves the same relative to a quantum oracle. By instantiating the oracle with post-quantum candidate obfuscation schemes, we obtain a heuristic construction of copy protection. * We show, roughly, that any program which can be watermarked can be copy detected, a weaker version of copy protection that does not prevent copying, but guarantees that any copying can be detected. Our scheme relies on the security of the assumed watermarking, plus the assumed existence of public key quantum money. Our construction is general, applicable to many recent watermarking schemes.
2021
CRYPTO
Multi-theorem Designated-Verifier NIZK for QMA 📺
We present a designated-verifier non-interactive zero-knowledge argument system for QMA with multi-theorem security under the Learning with Errors Assumption. All previous such protocols for QMA are only single-theorem secure. We also relax the setup assumption required in previous works. We prove security in the malicious designated-verifier (MDV-NIZK) model (Quach, Rothblum, and Wichs, EUROCRYPT 2019), where the setup consists of a mutually trusted random string and an untrusted verifier public key. Our main technical contribution is a general compiler that, given a NIZK for NP and a quantum sigma protocol for QMA, generates an MDV-NIZK protocol for QMA.
2021
CRYPTO
Hidden Cosets and Applications to Unclonable Cryptography 📺
In 2012, Aaronson and Christiano introduced the idea of hidden subspace states to build public-key quantum money [STOC '12]. Since then, this idea has been applied to realize several other cryptographic primitives which enjoy some form of unclonability. In this work, we propose a generalization of hidden subspace states to hidden coset states. We study different unclonable properties of coset states and several applications: * We show that, assuming indistinguishability obfuscation (iO), hidden coset states possess a certain direct product hardness property, which immediately implies a tokenized signature scheme in the plain model. Previously, a tokenized signature scheme was known only relative to an oracle, from a work of Ben-David and Sattath [QCrypt '17]. * Combining a tokenized signature scheme with extractable witness encryption, we give a construction of an unclonable decryption scheme in the plain model. The latter primitive was recently proposed by Georgiou and Zhandry [ePrint '20], who gave a construction relative to a classical oracle. * We conjecture that coset states satisfy a certain natural monogamy-of-entanglement property. Assuming this conjecture is true, we remove the requirement for extractable witness encryption in our unclonable decryption construction. As potential evidence in support of the conjecture, we prove a weaker version of this monogamy property, which we believe will still be of independent interest. * Finally, we give the first construction of a copy-protection scheme for pseudorandom functions (PRFs) in the plain model. Our scheme is secure either assuming iO and extractable witness encryption, or iO, LWE and the conjectured monogamy property mentioned above. This is the first example of a copy-protection scheme with provable security in the plain model for a class of functions that is not evasive.
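For readers who have not seen them, hidden coset states generalize the Aaronson-Christiano subspace states by adding a coordinate shift and a phase shift. A standard way to write such a state for a subspace $A \subseteq \mathbb{F}_2^n$ and vectors $s, s'$ (recalled here only as background, in what we believe matches the paper's setting) is
\[
  |A_{s,s'}\rangle \;=\; \frac{1}{\sqrt{|A|}} \sum_{a \in A} (-1)^{\langle a,\, s' \rangle}\, |a + s\rangle ,
\]
with the subspace states of Aaronson and Christiano recovered as the special case $s = s' = 0$.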
2021
CRYPTO