International Association for Cryptologic Research


CryptoDB

Recent videos of IACR talks

Year
Venue
Title
2022
ASIACRYPT
SNACKs: Leveraging Proofs of Sequential Work for Blockchain Light Clients 📺
The success of blockchains has led to ever-growing ledgers that are stored by all participating full nodes. In contrast, light clients only store small amounts of blockchain-related data and rely on the mediation of full nodes when interacting with the ledger. A broader adoption of blockchains calls for protocols that make this interaction trustless. We revisit the design of light-client blockchain protocols from the perspective of classical proof-system theory, and explain the role that proofs of sequential work (PoSWs) can play in it. To this end, we define a new primitive called succinct non-interactive argument of chain knowledge (SNACK), a non-interactive proof system that provides clear security guarantees to a verifier (a light client) even when interacting only with a single dishonest prover (a full node). We show how augmenting any blockchain with any graph-labeling PoSW (GL-PoSW) enables SNACK proofs for this blockchain. We also provide a unified and extended definition of GL-PoSWs covering all existing constructions, and describe two new variants. We then show how SNACKs can be used to construct light-client protocols, and highlight some deficiencies of existing designs, along with mitigations. Finally, we introduce incremental SNACKs which could potentially provide a new approach to light mining.
2022
ASIACRYPT
Continuously Non-Malleable Codes against Bounded-Depth Tampering 📺
Non-malleable codes (Dziembowski, Pietrzak and Wichs, ICS 2010 & JACM 2018) allow protecting arbitrary cryptographic primitives against related-key attacks (RKAs). Even when using codes that are guaranteed to be non-malleable against a single tampering attempt, one obtains RKA security against poly-many tampering attacks at the price of assuming perfect memory erasures. In contrast, continuously non-malleable codes (Faust, Mukherjee, Nielsen and Venturi, TCC 2014) do not suffer from this limitation, as the non-malleability guarantee holds against poly-many tampering attempts. Unfortunately, there are only a handful of constructions of continuously non-malleable codes, while standard non-malleable codes are known for a large variety of tampering families including, e.g., $\mathsf{NC}^0$ and decision-tree tampering, $\mathsf{AC}^0$, and recently even bounded polynomial-depth tampering. We change this state of affairs by providing the first constructions of continuously non-malleable codes in the following natural settings: – Against decision-tree tampering, where, in each tampering attempt, every bit of the tampered codeword can be set arbitrarily after adaptively reading up to $d$ locations within the input codeword. Our scheme is in the plain model, can be instantiated assuming the existence of one-way functions, and tolerates tampering by decision trees of depth $d = O(n^{1/8})$, where $n$ is the length of the codeword. Notably, this class includes $\mathsf{NC}^0$. – Against bounded polynomial-depth tampering, where in each tampering attempt the adversary can select any tampering function that can be computed by a circuit of bounded polynomial depth (and unbounded polynomial size). Our scheme is in the common reference string model, and can be instantiated assuming the existence of time-lock puzzles and simulation-extractable (succinct) non-interactive zero-knowledge proofs.
2022
ASIACRYPT
Full Quantum Equivalence of Group Action DLog and CDH, and More 📺
Best Paper Award
Cryptographic group actions are a relaxation of standard cryptographic groups that have less structure. This lack of structure allows them to be plausibly quantum resistant despite Shor's algorithm, while still having a number of applications. The most famous examples of group actions are built from isogenies on elliptic curves. Our main result is that CDH for abelian group actions is quantumly equivalent to discrete log. Galbraith et al. (Mathematical Cryptology) previously showed perfectly solving CDH to be equivalent to discrete log quantumly; our result works for any non-negligible advantage. We also explore several other questions about group action and isogeny protocols.
2022
ASIACRYPT
Revisiting Related-Key Boomerang attacks on AES using computer-aided tool 📺
In recent years, several MILP models were introduced to search automatically for boomerang distinguishers and boomerang attacks on block ciphers. However, they can only be used when the key schedule is linear. Here, a new model is introduced to deal with nonlinear key schedules, as is the case for AES. This model is more complex, and in fact it is too slow for exhaustive search. However, when some hints are added to the solver, it finds the current best related-key boomerang attack on AES-192, with $2^{124}$ time, $2^{124}$ data, and $2^{79.8}$ memory complexities, which is better than the one presented by Biryukov and Khovratovich at ASIACRYPT 2009 with complexities $2^{176}/2^{123}/2^{152}$ respectively. This represents a huge improvement in time and memory complexity, illustrating the power of MILP in cryptanalysis.
2022
ASIACRYPT
Random Sources in Private Computation 📺
We consider multi-party information-theoretic private computation. Such computation inherently requires the use of local randomness by the parties, and the question of minimizing the total number of random bits used for given private computations has received considerable attention in the literature. In this work we are interested in another question: given a private computation, we ask how many of the players need to have access to a random source, and how many of them can be deterministic parties. We are further interested in the possible interplay between the number of random sources in the system and the total number of random bits necessary for the computation. We give a number of results. We first show that, perhaps surprisingly, t players (rather than t+1) with access to a random source are sufficient for the information-theoretic t-private computation of any deterministic functionality over n players for any t<n/2; by a result of Kushilevitz and Mansour (PODC '96), this is best possible. This means that, counterintuitively, while private computation is impossible without randomness, it is possible to have a private computation even when the adversary can control *all* parties who can toss coins (and therefore sees all random coins). For randomized functionalities we show that t+1 random sources are necessary (and sufficient). We then turn to the question of the possible interplay between the number of random sources and the necessary number of random bits. Since meaningful bounds on the number of necessary random bits are known for only very few settings in private computation, we consider the AND function, for which some such bounds are known. We give a new protocol to 1-privately compute the n-player AND function, which uses a single random source and 6 random bits tossed by that source. This improves upon the best previously known results (Kushilevitz et al., TCC 2019) in both the number of sources and the number of random bits (that work gives a 2-source, 8-bit protocol). This result gives some evidence that for 1-privacy, using the minimum necessary number of sources, one can also achieve the necessary minimum number of random bits. We believe, however, that our protocol is of independent interest for the study of randomness in private computation.
2022
ASIACRYPT
Jammin' on the deck 📺
Currently, the vast majority of symmetric-key cryptographic schemes are built as block cipher modes. The block cipher is designed to be hard to distinguish from a random permutation and this is supported by cryptanalysis, while (good) modes can be proven secure if a random permutation takes the place of the block cipher. As such, block ciphers form an abstraction level that marks the border between cryptanalysis and security proofs. In this paper, we investigate a re-factored version of symmetric-key cryptography built not around block ciphers but rather around the deck function: a keyed function with arbitrary input and output length and incrementality properties. This allows for modes of use that are simpler to analyze and still very efficient thanks to the excellent performance of currently proposed deck functions. We focus on authenticated encryption (AE) modes with varying levels of robustness. Our modes have built-in support for sessions, but are also efficient without them. As a by-product, we define a new ideal model for AE dubbed the jammin cipher. Unlike the OAE2 security models, the jammin cipher is both an operational ideal scheme and a security reference, and addresses real-world use cases such as bi-directional communication and multi-key security.
2022
ASIACRYPT
Anonymous Public Key Encryption under Corruptions 📺
Anonymity of public key encryption (PKE) requires that, in a multi-user scenario, the PKE ciphertexts do not leak information about which public keys are used to generate them. Corruptions are common threats in the multi-user scenario but anonymity of PKE under corruptions is less studied in the literature. In TCC 2020, Benhamouda et al. first provide a formal characterization for anonymity of PKE under a specific type of corruption. However, no known PKE scheme is proved to meet their characterization. To the best of our knowledge, all the PKE application scenarios which require anonymity also require confidentiality. However, in the work by Benhamouda et al., different types of corruptions for anonymity and confidentiality are considered, which can cause security pitfalls. What's worse, we are not aware of any PKE scheme which can provide both anonymity and confidentiality under the same types of corruptions. In this work, we introduce a new security notion for PKE called ANON-RSO$_{k}\&$C security, capturing anonymity under corruptions. We also introduce SIM-RSO$_{k}\&$C security, which captures confidentiality under the same types of corruptions. We provide a generic framework for constructing PKE schemes which can achieve the above two security goals simultaneously, based on a new primitive called key and message non-committing encryption (KM-NCE). Then we give a general construction of KM-NCE utilizing a variant of hash proof system (HPS) called Key-Openable HPS. We also provide Key-Openable HPS instantiations based on the matrix decisional Diffie-Hellman assumption. Therefore, we can obtain various concrete PKE instantiations achieving the two security goals in the standard model with compact ciphertexts. Furthermore, for some PKE instantiations, the security reduction is tight.
2022
ASIACRYPT
Counting Vampires: From Univariate Sumcheck to Updatable ZK-SNARK 📺
We propose a univariate sumcheck argument $\mathfrak{Count}$ with essentially optimal communication of one group element. While the previously most efficient univariate sumcheck argument, that of Aurora, is based on polynomial commitments, $\mathfrak{Count}$ is based on inner-product commitments. We use $\mathfrak{Count}$ to construct a new pairing-based updatable and universal zk-SNARK $\mathfrak{Vampire}$ with the shortest known argument length (four group and two finite field elements) for $\mathsf{NP}$. In addition, $\mathfrak{Vampire}$ uses the aggregated polynomial commitment scheme of Boneh et al.
2022
ASIACRYPT
Efficient Zero-Knowledge Arguments in Discrete Logarithm Setting: Sublogarithmic Proof or Sublinear Verifier 📺
We propose three interactive zero-knowledge arguments for arithmetic circuits of size N in the common random string model, which can be made non-interactive via the Fiat-Shamir heuristic in the random oracle model. The first argument features $O(\sqrt{\log N})$ communication and round complexities and $O(N)$ computational complexity for the verifier. The second argument features $O(\log N)$ communication and $O(\sqrt{N})$ computational complexity for the verifier. The third argument features $O(\log N)$ communication and $O(\sqrt{N}\log N)$ computational complexity for the verifier. Contrary to the first and second arguments, the third argument is free of reliance on pairing-friendly elliptic curves. The soundness of the three arguments is proven under the standard discrete logarithm and/or the double pairing assumption, which is at least as reliable as the decisional Diffie-Hellman assumption.
2022
ASIACRYPT
YOLO YOSO: Fast and Simple Encryption and Secret Sharing in the YOSO Model 📺
Achieving adaptive (or proactive) security in cryptographic protocols is notoriously difficult due to the adversary's power to dynamically corrupt parties as the execution progresses. Inspired by the work of Benhamouda et al. in TCC 2020, Gentry et al. in CRYPTO 2021 introduced the YOSO (You Only Speak Once) model for constructing adaptively (or proactively) secure protocols in massively distributed settings (e.g. blockchains). In this model, instead of having all parties execute an entire protocol, smaller anonymous committees are randomly chosen to execute each individual round of the protocol. After playing their role, parties encrypt protocol messages towards the next anonymous committee and erase their internal state before publishing their ciphertexts. However, a big challenge remains in realizing YOSO protocols: efficiently encrypting messages towards anonymous parties selected at random without learning their identities, while proving the encrypted messages are valid with respect to the protocol. In particular, the protocols of Benhamouda et al. and of Gentry et al. require showing ciphertexts contain valid shares of secret states. We propose concretely efficient methods for encrypting a protocol's secret state towards a random anonymous committee. We start by proposing a very simple and efficient scheme for encrypting messages towards randomly and anonymously selected parties. We then show constructions of publicly verifiable secret (re-)sharing (PVSS) schemes with concretely efficient proofs of (re-)share validity that can be generically instantiated from encryption schemes with certain linear homomorphic properties. In addition, we introduce a new PVSS with proof of sharing consisting of just two field elements, which as far as we know is the first achieving this, and may be of independent interest. Finally, we show that our PVSS schemes can be efficiently realized from our encryption scheme.
2022
ASIACRYPT
Group Action Key Encapsulation and Non-Interactive Key Exchange in the QROM 📺
In the context of quantum-resistant cryptography, cryptographic group actions offer an abstraction of isogeny-based cryptography in the Commutative Supersingular Isogeny Diffie-Hellman (CSIDH) setting. In this work, we revisit the security of two previously proposed natural protocols: the Group Action Hashed ElGamal key encapsulation mechanism (GA-HEG KEM) and the Group Action Hashed Diffie-Hellman non-interactive key-exchange (GA-HDH NIKE) protocol. The latter protocol has already been considered for use in practical protocols such as Post-Quantum WireGuard (S&P '21) and OPTLS (CCS '20). We prove that active security of the two protocols in the Quantum Random Oracle Model (QROM) inherently relies on very strong variants of the Group Action Strong CDH problem, where the adversary is given arbitrary quantum access to a DDH oracle. That is, quantum-accessible Strong CDH assumptions are not only sufficient but also necessary to prove active security of the GA-HEG KEM and the GA-HDH NIKE protocols. Furthermore, we propose variants of the protocols with QROM security from the classical Strong CDH assumption, i.e., CDH with classical access to the DDH oracle. Our first variant uses key confirmation and can therefore only be applied in the KEM setting. Our second, but considerably less efficient, variant is based on the twinning technique by Cash et al. (EUROCRYPT '08) and in particular yields the first actively secure isogeny-based NIKE with QROM security from the standard CDH assumption.
2022
ASIACRYPT
On Rejection Sampling in Lyubashevsky's Signature Scheme 📺
Lyubashevsky’s signatures are based on the Fiat-Shamir with aborts paradigm, whose central ingredient is the use of rejection sampling to transform secret-dependent signature samples into samples from (or close to) a secret-independent target distribution. Several choices for the underlying distributions and for the rejection sampling strategy can be considered. In this work, we study Lyubashevsky’s signatures through the lens of rejection sampling, and aim to minimize signature size given signing runtime requirements. Several of our results concern rejection sampling itself and could have other applications. We prove lower bounds for compactness of signatures given signing runtime requirements, and for expected runtime of perfect rejection sampling strategies. We also propose a Rényi-divergence-based analysis of Lyubashevsky’s signatures which allows for larger deviations from the target distribution, and show hyperball uniforms to be a good choice of distributions: they asymptotically reach our compactness lower bounds and offer interesting features for practical deployment. Finally, we propose a different rejection sampling strategy which circumvents the expected runtime lower bound and provides a worst-case runtime guarantee.
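To make the rejection step concrete, here is a minimal Python sketch of the uniform-distribution flavour of rejection sampling behind Fiat-Shamir with aborts; the bounds B and T are illustrative placeholders, not parameters from the paper.

```python
# Minimal sketch of uniform rejection sampling (illustrative parameters).
import random

B = 1 << 20   # the mask y is uniform on [-B, B]
T = 1 << 10   # bound on the secret-dependent shift, |shift| <= T

def sample_z(shift):
    """Output z = y + shift, rejecting values that could leak the shift."""
    assert abs(shift) <= T
    while True:
        y = random.randint(-B, B)
        z = y + shift
        # Accept only z in [-(B-T), B-T]: every such z is reachable from
        # *any* admissible shift, so accepted outputs are uniform on that
        # interval independently of the secret.
        if abs(z) <= B - T:
            return z

# Two different secret-dependent shifts give identically distributed outputs.
print(sample_z(+900), sample_z(-900))
```

With these toy parameters the expected number of restarts is about (2B+1)/(2(B-T)+1) ≈ 1.001; shrinking B relative to T makes outputs more compact but raises the abort rate, which is precisely the compactness/runtime trade-off the paper studies.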
2022
ASIACRYPT
New Algorithms and Analyses for Sum-Preserving Encryption 📺
We continue the study of sum-preserving encryption schemes, in which the plaintext and ciphertext are both integer vectors with the same sum. Such encryption schemes were recently constructed and analyzed by Tajik, Gunasekaran, Dutta, Ellia, Bobba, Rosulek, Wright, and Feng (NDSS 2019) in the context of image encryption. Our first main result is to prove a mixing-time bound for the construction given by Tajik et al. using path coupling. We then provide new sum-preserving encryption schemes by describing two practical ways to rank and unrank the values involved in sum-preserving encryption, which can then be combined with the rank-encipher-unrank technique from format-preserving encryption. Finally, we compare the efficiency of the Tajik et al. construction and our new ranking constructions based on performance tests we conducted on prototype implementations.
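To fix ideas, the hedged Python sketch below implements rank/unrank for nonnegative integer vectors with a fixed sum, the combinatorial core of the rank-encipher-unrank technique mentioned above; per-coordinate bounds and the cipher applied between rank and unrank are omitted.

```python
# Hedged sketch: lexicographic rank/unrank of length-n nonnegative integer
# vectors summing to s ("stars and bars" counting).
from math import comb

def count(n, s):
    """Number of length-n nonnegative integer vectors summing to s."""
    return comb(s + n - 1, n - 1)

def rank(v):
    """Lexicographic rank of v among vectors of the same length and sum."""
    s, r = sum(v), 0
    for i, x in enumerate(v[:-1]):
        n_rest = len(v) - i - 1
        # count vectors whose i-th coordinate is strictly smaller than x
        r += sum(count(n_rest, s - t) for t in range(x))
        s -= x
    return r

def unrank(r, n, s):
    """Inverse of rank: the r-th vector of length n summing to s."""
    v = []
    for i in range(n - 1):
        x = 0
        while r >= (c := count(n - 1 - i, s - x)):
            r -= c
            x += 1
        v.append(x)
        s -= x
    v.append(s)
    return v

v = [3, 0, 2, 1]                    # plaintext vector with sum 6
assert unrank(rank(v), 4, 6) == v   # rank/unrank round-trips
# A sum-preserving cipher would apply a PRP on [0, count(4, 6)) in between.
```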
2022
ASIACRYPT
Statistical Decoding 2.0: Reducing Decoding to LPN 📺
The security of code-based cryptography relies primarily on the hardness of generic decoding with linear codes. The best generic decoding algorithms are all improvements of an old algorithm due to Prange: they are known under the name of information set decoders (ISD). A while ago, a generic decoding algorithm which does not belong to this family was proposed: statistical decoding. It is a randomized algorithm that requires the computation of a large set of parity-checks of moderate weight, and uses some kind of majority voting on these equations to recover the error. This algorithm was long forgotten because even the best variants of it performed poorly when compared to the simplest ISD algorithm. We revisit this old algorithm by using parity-check equations in a more general way. Here the parity-checks are used to get LPN samples with a secret which is part of the error, and the LPN noise is related to the weight of the parity-checks we produce. The corresponding LPN problem is then solved by standard Fourier techniques. By properly choosing the method of producing these low-weight equations and the size of the LPN problem, we are able in this way to significantly outperform information set decoding at code rates smaller than 0.3. This gives, for the first time in 60 years, a better decoding algorithm for a significant range of rates which does not belong to the ISD family.
2022
ASIACRYPT
Algebraic Meet-in-the-Middle Attack on LowMC 📺
By exploiting the feature of partial nonlinear layers, we propose a new technique called algebraic meet-in-the-middle (MITM) attack to analyze the security of LowMC, which can reduce the memory complexity of the simple difference enumeration attack over the state-of-the-art. Moreover, while an efficient algebraic technique to retrieve the full key from a differential trail of LowMC has been proposed at CRYPTO 2021, its time complexity is still exponential in the key size. In this work, we show how to reduce it to constant time when there are a sufficiently large number of active S-boxes in the trail. With the above new techniques, the attacks on LowMC and LowMC-M published at CRYPTO 2021 are further improved, and some LowMC instances could be broken for the first time. Our results seem to indicate that partial nonlinear layers are still not well-understood.
2022
ASIACRYPT
Rotatable Zero Knowledge Sets: Post Compromise Secure Auditable Dictionaries with application to Key Transparency 📺
Recently, the area of Key Transparency (KT) has received a lot of attention, as it allows the service provider to provide auditable and verifiable proofs regarding the authenticity of public keys used by various participants. Moreover, it is highly preferable to do so in a privacy-preserving way, so that users and auditors do not learn anything beyond what is necessary to keep the service provider accountable. Abstractly, the problem of building such systems reduces to constructing so-called append-only Zero-Knowledge Sets (aZKS). Unfortunately, none of the previous aZKS constructions adequately addressed the problem of key rotation, which would provide Post-Compromise Security (PCS) in case the server is compromised. In this work we address this concern, and refine an extension of aZKS called Rotatable ZKS (RZKS). In addition to addressing the PCS concern, our notion of RZKS has several other attractive features, such as a stronger soundness notion (called extractability), and the ability for a stale communication party to quickly catch up with the current epoch while ensuring that the server did not erase any of the past data. Of independent interest, we also introduce and build a new primitive called a Rotatable Verifiable Random Function (VRF), and show how to build RZKS in a modular fashion from a rotatable VRF, ordered accumulators, and append-only vector commitment schemes.
2022
ASIACRYPT
Efficient NIZKs from LWE via Polynomial Reconstruction and ``MPC in the Head'' 📺
All existing works building non-interactive zero-knowledge (NIZK) arguments for NP from the Learning With Errors (LWE) assumption have studied instantiating the Fiat-Shamir paradigm on a parallel repetition of an underlying honest-verifier zero knowledge (HVZK) sigma protocol, via an appropriately built correlation-intractable (CI) hash function from LWE. This technique has inherent efficiency losses that arise from parallel repetition. In this work, we show how to make use of the more efficient ``MPC in the Head'' technique for building an underlying honest-verifier protocol upon which to apply the Fiat-Shamir paradigm. To make this possible, we provide a new and more efficient construction of CI hash functions from LWE, using efficient algorithms for polynomial reconstruction as the main technical tool. We stress that our work provides a new and more efficient ``base construction'' for building LWE-based NIZK arguments for NP. Our protocol can be the building block around which other efficiency-focused bootstrapping techniques can be applied, such as the bootstrapping technique of Gentry et al. (Journal of Cryptology 2015).
2022
ASIACRYPT
Efficient Adaptively-Secure Byzantine Agreement for Long Messages 📺
We investigate the communication complexity of Byzantine agreement protocols for long messages against an adaptive adversary. In this setting, prior $n$-party protocols either achieved a communication complexity of $O(nl\cdot\mathrm{poly}(\kappa))$ or $O(nl + n^2 \cdot \mathrm{poly}(\kappa))$ for $l$-bit long messages and security parameter $\kappa$. We improve the state of the art by presenting protocols with communication complexity $O(nl + n \cdot \mathrm{poly}(\kappa))$ in both the synchronous and asynchronous communication models. The synchronous protocol tolerates $t \le (1-\epsilon) \frac{n}{2}$ corruptions and assumes a VRF setup, while the asynchronous protocol tolerates $t \le (1-\epsilon) \frac{n}{3}$ corruptions under further cryptographic assumptions. Our protocols are very simple and combine subcommittee election with the recent approach of Nayak et al. (DISC '20). Surprisingly, the analysis of our protocols is 'all but simple' and involves an interesting new application of McDiarmid's inequality to obtain 'almost optimal' corruption thresholds.
2022
ASIACRYPT
Functional Encryption with Secure Key Leasing 📺
Secure software leasing is a quantum cryptographic primitive that enables us to lease software to a user by encoding it into a quantum state. Secure software leasing has a mechanism that verifies whether returned software is valid or not. The security notion guarantees that once a user returns the software in a valid form, the user can no longer use it. In this work, we introduce the notion of secret-key functional encryption (SKFE) with secure key leasing, where a decryption key can be securely leased in the sense of secure software leasing. We also instantiate it with standard cryptographic assumptions. More specifically, our contribution is as follows. - We define the syntax and security definitions for SKFE with secure key leasing. - We achieve a transformation from standard SKFE into SKFE with secure key leasing without using additional assumptions. In particular, we obtain bounded collusion-resistant SKFE for P/poly with secure key leasing based on post-quantum one-way functions, since bounded collusion-resistant SKFE for P/poly can be instantiated from that assumption. Some previous secure software leasing schemes capture only pirate software that runs on an honest evaluation algorithm (on a legitimate platform). However, our secure key leasing notion captures arbitrary attack strategies and does not have such a limitation. As an additional contribution, we introduce the notion of single-decryptor FE (SDFE), where each functional decryption key is copy-protected. Since copy-protection is a stronger primitive than secure software leasing, this notion can be seen as a stronger cryptographic primitive than FE with secure key leasing. More specifically, our additional contribution is as follows. - We define the syntax and security definitions for SDFE. - We achieve collusion-resistant single-decryptor PKFE for P/poly from post-quantum indistinguishability obfuscation and the quantum hardness of the learning with errors problem.
2022
ASIACRYPT
Short-lived zero-knowledge proofs and signatures 📺
We introduce the short-lived proof, a non-interactive proof of knowledge with a novel feature: after a specified period of time, the proof is no longer convincing. This time-delayed loss of soundness happens "naturally" without further involvement from the prover or any third party. We propose definitions for short-lived proofs as well as the special case of short-lived signatures. We show several practical constructions built using verifiable delay functions (VDFs). The key idea in our approach is to allow any party to forge any proof by executing a large sequential computation. Some constructions achieve a stronger property called reusable forgeability in which one sequential computation allows forging an arbitrary number of proofs of different statements. We also introduce two novel types of VDFs, re-randomizable VDFs and zero-knowledge VDFs, which may be of independent interest. Our constructions for short-lived Sigma-protocols and signatures are practically efficient for provers and verifiers, adding a few hundred bytes of overhead and tens to hundreds of milliseconds of proving/verification time.
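The disjunction at the heart of this idea can be illustrated with a deliberately simplified sketch: a proof verifies if it is either a valid signature or the output of a long sequential computation on a recent public beacon. In the toy Python below, iterated SHA-256 stands in for a VDF and a MAC stands in for a signature; unlike the paper's Sigma-protocol OR-composition, the two branches here are distinguishable, so this only illustrates the time-delayed loss of soundness.

```python
# Toy sketch of a "short-lived" proof: signature-or-slow-computation.
import hashlib, hmac

T = 2**20                      # sequentiality parameter (forgery delay)
KEY = b"signer-secret"         # toy MAC key standing in for a signing key

def slow_eval(beacon):
    """VDF stand-in: T sequential hash iterations on the beacon."""
    x = beacon
    for _ in range(T):
        x = hashlib.sha256(x).digest()
    return x

def prove(msg, beacon):        # honest prover: the fast branch
    return ("sig", hmac.new(KEY, msg + beacon, "sha256").digest())

def forge(msg, beacon):        # anyone, after ~T sequential steps
    return ("vdf", slow_eval(beacon))

def verify(msg, beacon, proof):
    kind, val = proof
    if kind == "sig":
        return hmac.compare_digest(val, hmac.new(KEY, msg + beacon, "sha256").digest())
    # Real VDFs come with fast verification; recomputing here is toy-only.
    return val == slow_eval(beacon)

beacon = b"public-randomness-epoch-42"
assert verify(b"hello", beacon, prove(b"hello", beacon))
assert verify(b"hello", beacon, forge(b"hello", beacon))  # sound only briefly
```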
2022
ASIACRYPT
Threshold Linearly Homomorphic Encryption on Z/2^kZ 📺
A threshold public key encryption protocol is a public key system where the private key is distributed among n different servers. It offers high security since no single server is entrusted to perform the decryption in its entirety. It is the core component of many multiparty computation protocols which involve mutually distrusting parties with common goals. It is even more useful when it is homomorphic, which means that public operations on ciphertexts translate into operations on the underlying plaintexts. In particular, Cramer, Damgård and Nielsen at Eurocrypt 2001 provided a new approach to multiparty computation from linearly homomorphic threshold encryption schemes. On the other hand, there has been recent interest in developing multiparty computation modulo 2^k for a certain integer k, to closely match data manipulated by a CPU. Multiparty computation would therefore benefit from an encryption scheme with such a message space that supports a distributed decryption. In this work, we provide the first threshold linearly homomorphic encryption whose message space is Z/2^kZ for any k. It is inspired by Castagnos and Laguillaumie's encryption scheme from CT-RSA 2015, but works with a class group of a discriminant whose factorisation is unknown. Its natural structure à la Elgamal makes it possible to distribute the decryption among servers using linear integer secret sharing, allowing any access structure for the decryption policy. Furthermore, its efficiency and its flexibility in the choice of the message space make it a good candidate for applications to multiparty computation.
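As a toy illustration, emphatically not the paper's class-group construction, the sketch below combines the two properties discussed above, linear homomorphism and distributed decryption under an additively shared key, using exponential ElGamal over a small prime field (so decryption needs a brute-force discrete log, which the paper's scheme avoids).

```python
# Toy threshold linearly homomorphic encryption (illustrative only).
import random

p = 2**61 - 1                        # toy prime; the group is Z_p^*
g = 3

sk = random.randrange(1, p - 1)
sk1 = random.randrange(1, p - 1)     # server 1's additive key share
sk2 = (sk - sk1) % (p - 1)           # server 2's share: sk1 + sk2 = sk
pk = pow(g, sk, p)

def enc(m):
    r = random.randrange(1, p - 1)
    return (pow(g, r, p), pow(g, m, p) * pow(pk, r, p) % p)

def add(c1, c2):
    # Componentwise product encrypts the sum of the plaintexts.
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def partial_dec(c, sk_share):        # each server uses only its own share
    return pow(c[0], sk_share, p)

def combine(c, d1, d2):
    gm = c[1] * pow(d1 * d2 % p, -1, p) % p   # g^m = c2 / c1^(sk1+sk2)
    m = 0
    while pow(g, m, p) != gm:        # brute-force dlog: fine for tiny m
        m += 1
    return m

c = add(enc(5), enc(7))
assert combine(c, partial_dec(c, sk1), partial_dec(c, sk2)) == 12
```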
2022
ASIACRYPT
Classically Verifiable NIZK for QMA with Preprocessing 📺
We propose three constructions of classically verifiable non-interactive zero-knowledge proofs and arguments (CV-NIZK) for QMA in various preprocessing models. 1. We construct a CV-NIZK for QMA in the quantum secret parameter model where a trusted setup sends a quantum proving key to the prover and a classical verification key to the verifier. It is information-theoretically sound and zero-knowledge. 2. Assuming the quantum hardness of the learning with errors problem, we construct a CV-NIZK for QMA in a model where a trusted party generates a CRS and the verifier sends an instance-independent quantum message to the prover as preprocessing. This model is the same as the one considered in the recent work by Coladangelo, Vidick, and Zhang (CRYPTO '20). Our construction has the so-called dual-mode property, which means that there are two computationally indistinguishable modes of generating the CRS, and we have information-theoretical soundness in one mode and information-theoretical zero-knowledge in the other. This answers an open problem left by Coladangelo et al., which is to achieve either soundness or zero-knowledge information-theoretically. To the best of our knowledge, ours is the first dual-mode NIZK for QMA in any kind of model. 3. We construct a CV-NIZK for QMA with quantum preprocessing in the quantum random oracle model. This quantum preprocessing is one where the verifier sends random Pauli-basis states to the prover. Our construction uses the Fiat-Shamir transformation. The quantum preprocessing can be replaced with a setup that distributes Bell pairs between the prover and the verifier, and therefore we solve the open problem by Broadbent and Grilo (FOCS '20) about the possibility of NIZK for QMA in the shared Bell pair model via the Fiat-Shamir transformation.
2022
ASIACRYPT
A Modular Approach to the Incompressibility of Block-Cipher-Based AEADs 📺
Incompressibility is one of the most fundamental security goals in white-box cryptography. Given recent advances in the design of efficient and incompressible block ciphers such as SPACE, SPNbox and WhiteBlock, we demonstrate the feasibility of reducing incompressible AEAD modes to incompressible block ciphers. We first observe that several existing AEAD modes of operation, including CCM, GCM(-SIV), and OCB, would all be insecure against white-box adversaries even when used with an incompressible block cipher. This motivates us to revisit and formalize incompressibility-based security definitions for AEAD schemes and for block ciphers, so that we become able to design modes and reduce their security to that of the underlying ciphers. Our new security notion for AEAD, which we name whPRI, is an extension of pseudo-random injection security in the black-box setting. Similar security notions are also defined for other cryptosystems such as privacy-only encryption schemes. We emphasize that whPRI ensures quite strong authenticity against white-box adversaries: existential unforgeability beyond leakage. This contrasts sharply with previous notions, which ensured either no authenticity or only universal unforgeability. For the underlying ciphers we introduce a new notion of whPRP, which extends that of a PRP in the black-box setting. Interestingly, our incompressibility reductions follow from a variant of public indifferentiability. In particular, we show that a practical whPRI-secure AEAD mode can be built from a whPRP-secure block cipher: we present a SIV-like composition of the sponge construction (utilizing a block cipher as its underlying primitive) with the counter mode and prove that such a construction is (in the variant sense) public indifferentiable from a random injection. To instantiate such an AEAD scheme, we propose a 256-bit variant of SPACE, based on our conjecture that SPACE should be a whPRP-secure cipher.
2022
ASIACRYPT
Identity-Based Matchmaking Encryption from Standard Assumptions 📺
In this work, we propose the first identity-based matchmaking encryption (IB-ME) scheme under standard assumptions in the standard model. The scheme is proven secure under the symmetric external Diffie-Hellman (SXDH) assumption in prime-order bilinear pairing groups. In our IB-ME scheme, all parameters have a constant number of group elements and are simpler than those of previous constructions. Previous works are either in the random oracle model or based on q-type assumptions, while ours is built directly in the standard model, based on static assumptions, and does not rely on other crypto tools. More concretely, our IB-ME scheme is constructed from a variant of two-level anonymous IBE. We observe that this two-level IBE with anonymity and unforgeability provides the same functionality as IB-ME, and its security properties cleverly meet the two requirements of IB-ME (privacy and authenticity). The privacy property of IB-ME relies on the anonymity of this two-level IBE, while the authenticity property corresponds to the unforgeability in the second level. This variant of two-level IBE is built from dual pairing vector spaces, and both security reductions rely on dual system encryption.
2022
ASIACRYPT
Towards Tight Security Bounds for OMAC, XCBC and TMAC 📺
OMAC --- a single-keyed variant of CBC-MAC by Iwata and Kurosawa --- is a widely used and standardized (NIST FIPS 800-38B, ISO/IEC 29167-10:2017) message authentication code (MAC) algorithm. The best security bound for OMAC is due to Nandi, who proved that OMAC's pseudorandom function (PRF) advantage is upper bounded by $ O(q^2\ell/2^n) $, where $ q $, $ \ell $, and $ n $ denote the number of queries, maximum permissible query length (in terms of $ n $-bit blocks), and block size of the underlying block cipher, respectively. In contrast, there is no attack with a matching lower bound. Indeed, the best known attack on OMAC is the folklore birthday attack achieving a lower bound of $ \Omega(q^2/2^n) $. In this work, we close this gap for a large range of message lengths. Specifically, we show that OMAC's PRF security is upper bounded by $ O(q^2/2^n + q\ell^2/2^n)$. In practical terms, this means that for a $ 128 $-bit block cipher and message lengths up to $ 64 $ gigabytes, OMAC can process up to $ 2^{64} $ messages before rekeying (same as the birthday bound). In comparison, the previous bound only allows $ 2^{48} $ messages. As a side-effect of our proof technique, we also derive similar tight security bounds for XCBC (by Black and Rogaway) and TMAC (by Kurosawa and Iwata). As a direct consequence of this work, we have established tight security bounds (in a wide range of $\ell$) for all the CBC-MAC variants, except for the original CBC-MAC.
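The quoted message budgets follow from back-of-the-envelope arithmetic on the stated bounds; the short Python check below reproduces them for a 128-bit block cipher and 64-gigabyte messages.

```python
# Sanity-check the message budgets implied by the old and new PRF bounds.
from math import isqrt

n = 128
ell = (64 * 2**30) // 16          # 64 GB in 128-bit blocks = 2^32

# Old bound O(q^2 * ell / 2^n): advantage ~ 1 once q ~ sqrt(2^n / ell).
q_old = isqrt(2**n // ell)        # = 2^48

# New bound O(q^2 / 2^n + q * ell^2 / 2^n): both terms hit ~1 near q = 2^64.
q_new = min(isqrt(2**n), 2**n // ell**2)   # = 2^64

print(f"old budget ~ 2^{q_old.bit_length() - 1}, "
      f"new budget ~ 2^{q_new.bit_length() - 1}")
```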
2022
ASIACRYPT
Practical Provably Secure Flooding for Blockchains 📺
In recent years, permissionless blockchains have received a lot of attention both from industry and academia, where substantial effort has been spent to develop consensus protocols that are secure under the assumption that less than half (or a third) of a given resource (e.g., stake or computing power) is controlled by corrupted parties. The security proofs of these consensus protocols usually assume the availability of a network functionality guaranteeing that a block sent by an honest party is received by all honest parties within some bounded time. To obtain an overall protocol that is secure under the same corruption assumption, it is therefore necessary to combine the consensus protocol with a network protocol that achieves this property under that assumption. In practice, however, the underlying network is typically implemented by flooding protocols that are not proven to be secure in the setting where a fraction of the considered total weight can be corrupted. This has led to many so-called eclipse attacks on existing protocols and tailor-made fixes against specific attacks. To close this apparent gap, we present the first practical flooding protocol that provably delivers sent messages to all honest parties after a logarithmic number of steps. We prove security in the setting where all parties are publicly assigned a positive weight and the adversary can corrupt parties accumulating up to a constant fraction of the total weight. This can directly be used in the proof-of-stake setting, but is not limited to it. To prove the security of our protocol, we combine known results about the diameter of Erdős–Rényi graphs with reductions between different types of random graphs. We further show that the efficiency of our protocol is asymptotically optimal. The practicality of our protocol is supported by extensive simulations for different numbers of parties, weight distributions, and corruption strategies. The simulations confirm our theoretical results and show that messages are delivered quickly regardless of the weight distribution, whereas protocols that are oblivious of the parties' weights completely fail if the weights are unevenly distributed. Furthermore, the average message complexity per party of our protocol is within a small constant factor of such a protocol.
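A toy simulation conveys the flavour of the result (an illustrative sketch, not the paper's protocol or its exact neighbor-selection rule): every party, upon first receiving the message, forwards it to a few targets sampled with probability proportional to their weight, and delivery completes within a few hops even under a skewed weight distribution.

```python
# Toy weight-aware flooding simulation (illustrative parameters).
import random
from collections import deque

random.seed(1)
n = 2000
weights = [random.choice([1, 1, 1, 10]) for _ in range(n)]  # skewed weights
fanout = 24                          # Theta(log n) forwards per party

def flood(source):
    """Returns (#parties reached, hops until the last receipt)."""
    hop = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        # Forward to weight-proportional targets, so heavy parties are
        # contacted often enough regardless of the distribution.
        for v in random.choices(range(n), weights=weights, k=fanout):
            if v not in hop:
                hop[v] = hop[u] + 1
                queue.append(v)
    return len(hop), max(hop.values())

reached, hops = flood(0)
print(f"reached {reached}/{n} parties in {hops} hops")   # ~log(n) hops
```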
2022
ASIACRYPT
A Modular Approach to the Security Analysis of Two-Permutation Constructions 📺
Constructions based on two public permutation calls are very common in today's cryptographic community. However, each time a new construction is introduced, a dedicated proof must be carried out to study the security of the construction. In this work, we propose a new tool to analyze the security of these constructions in a modular way. This tool is built on the idea of the classical mirror theory for block cipher based constructions, such that it can be used for security proofs in the ideal permutation model. We present different variants of this public permutation mirror theory such that it is suitable for different security notions. We also present a framework to use the new techniques, which provides the bad events that need to be excluded in order to apply the public permutation mirror theory. Furthermore, we showcase the new technique on three examples: the Tweakable Even-Mansour cipher by Cogliati et al. (CRYPTO '15), the two permutation variant of the pEDM PRF by Dutta et al. (ToSC '21(2)), and the two permutation variant of the nEHtM_p MAC algorithm by Dutta and Nandi (AFRICACRYPT '20). With this new tool we prove the multi-user security of these constructions in a considerably simplified way.
2022
ASIACRYPT
GUC-Secure Commitments via Random Oracles: New Impossibility and Feasibility 📺
In the UC framework, protocols must be subroutine-respecting; therefore, shared trusted setup might cause security issues. To address this drawback, the Generalized UC (GUC) framework was introduced by Canetti et al. (TCC 2007). In this work, we investigate the impossibility and feasibility of GUC-secure commitments using global random oracles (GRO) as the trusted setup. In particular, we show that it is impossible to have a 2-round (1-round committing and 1-round opening) GUC-secure commitment in the global observable RO model of Canetti et al. (CCS 2014). We then give a new round-optimal GUC-secure commitment that uses only Minicrypt assumptions (i.e. the existence of one-way functions) in the global observable RO model. Furthermore, we also examine the complete picture of the round complexity of GUC-secure commitments in various global RO models.
2022
ASIACRYPT
Concurrently Composable Non-Interactive Secure Computation 📺
We consider the feasibility of non-interactive secure two-party computation (NISC) in the plain model satisfying the notion of superpolynomial-time simulation (SPS). While stand-alone secure SPS-NISC protocols are known from standard assumptions (Badrinarayanan et al., Asiacrypt 2017), it has remained an open problem to construct a concurrently composable SPS-NISC. Prior to our work, the best protocols require 5 rounds (Garg et al., Eurocrypt 2017), or 3 simultaneous-message rounds (Badrinarayanan et al., TCC 2017). In this work, we demonstrate the first concurrently composable SPS-NISC. Our construction assumes the existence of (i) a non-interactive (weakly) CCA-secure commitment and (ii) a stand-alone secure SPS-NISC with subexponential security, and it satisfies the notion of “angel-based” UC security (i.e., UC with a superpolynomial-time helper) with perfect correctness. We additionally demonstrate that both of the primitives we use (albeit only with polynomial security) are necessary for such concurrently composable SPS-NISC with perfect correctness. As such, our work identifies essentially necessary and sufficient primitives for concurrently composable SPS-NISC with perfect correctness in the plain model.
2022
ASIACRYPT
Linear-map Vector Commitments and their Practical Applications 📺
Vector commitments (VC) are a cryptographic primitive that allows one to commit to a vector and then “open” some of its positions efficiently. Vector commitments are increasingly recognized as a central tool to scale highly decentralized networks of large size and whose content is dynamic. In this work, we examine the demands on the properties that an ideal vector commitment should satisfy in light of the emerging plethora of practical applications, and we propose new constructions that improve the state-of-the-art in several dimensions and offer new tradeoffs. We also propose a unifying framework that captures several constructions and show how to generically achieve some properties from more basic ones.
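For readers new to the primitive, the following minimal hash-based vector commitment (a plain Merkle tree) illustrates the commit/open/verify interface; the paper's linear-map VCs are pairing-based and support far richer openings than single positions.

```python
# Minimal Merkle-tree vector commitment (power-of-two vector length).
import hashlib

def H(*parts):
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

def commit(vec):
    """Returns the root plus all tree levels (kept around for opening)."""
    level = [H(x) for x in vec]                  # leaf hashes
    levels = [level]
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels[-1][0], levels

def open_at(levels, i):
    """Sibling path authenticating position i."""
    path = []
    for level in levels[:-1]:
        path.append(level[i ^ 1])                # sibling at this level
        i //= 2
    return path

def verify(root, i, value, path):
    node = H(value)
    for sib in path:
        node = H(node, sib) if i % 2 == 0 else H(sib, node)
        i //= 2
    return node == root

vec = [bytes([b]) * 4 for b in range(8)]         # committed vector, 8 entries
root, levels = commit(vec)
assert verify(root, 5, vec[5], open_at(levels, 5))
```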
2022
ASIACRYPT
Memory-Tight Multi-Challenge Security of Public-Key Encryption 📺
We give the first examples of public-key encryption schemes which can be proven to achieve multi-challenge, multi-user CCA security via reductions that are tight in time, advantage, and memory. Our constructions are obtained by applying the KEM-DEM paradigm to variants of Hashed ElGamal and the Fujisaki-Okamoto transformation that are augmented by adding uniformly random strings to their ciphertexts. The reductions carefully combine recent proof techniques introduced by Bhattacharyya'20 and Ghoshal-Ghosal-Jaeger-Tessaro'22. Our proofs for the augmented ECIES version of Hashed-ElGamal make use of a new computational Diffie-Hellman assumption wherein the adversary is given access to a pairing to a random group, which we believe may be of independent interest.
2022
ASIACRYPT
A Non-heuristic Approach to Time-space Tradeoffs and Optimizations for BKW 📺
Blum, Kalai and Wasserman (JACM 2003) gave the first sub-exponential algorithm to solve the Learning Parity with Noise (LPN) problem. In particular, consider the LPN problem with constant noise and dimension $n$. BKW solves it with space complexity $2^{\frac{(1+\epsilon)n}{\log n}}$ and time/sample complexity $2^{\frac{(1+\epsilon)n}{\log n}}\cdot 2^{\Omega(n^{\frac{1}{1+\epsilon}})}$ for small constant $\epsilon\to 0^+$. We propose a variant of BKW by tweaking Wagner's generalized birthday problem (Crypto 2002) and adapting the technique to a $c$-ary tree structure. In summary, our algorithm achieves the following: 1. (Time-space tradeoff) We obtain the same time-space tradeoffs for LPN and LWE as those given by Esser et al. (Crypto 2018), but without resorting to any heuristics. For any $2\leq c\in\mathbb{N}$, our algorithm solves the LPN problem with time complexity $2^{\frac{\log c(1+\epsilon)n}{\log n}}\cdot 2^{\Omega(n^{\frac{1}{1+\epsilon}})}$ and space complexity $2^{\frac{\log c(1+\epsilon)n}{(c-1)\log n}}$, where one can use Grover's quantum algorithm or Dinur et al.'s dissection technique (Crypto 2012) to further accelerate/optimize the time complexity. 2. (Time/sample optimization) A further adjusted variant of our algorithm solves the LPN problem with sample, time and space complexities all kept at $2^{\frac{(1+\epsilon)n}{\log n}}$ for $\epsilon\to 0^+$, saving a factor $2^{\Omega(n^{\frac{1}{1+\epsilon}})}$ in time/sample compared to the original BKW and the variant of Devadas et al. (TCC 2017). 3. (Sample reduction) Our algorithm provides an alternative to Lyubashevsky's BKW variant (RANDOM 2005) for LPN with a restricted amount of samples. In particular, given $Q=n^{1+\epsilon}$ (resp., $Q=2^{n^{\epsilon}}$) samples, our algorithm saves a factor of $2^{\Omega(n)/(\log n)^{1-\kappa}}$ (resp., $2^{\Omega(n^{\kappa})}$) for constant $\kappa \to 1^-$ in running time while consuming roughly the same space, compared with Lyubashevsky's algorithm. In particular, the time/sample optimization benefits from a careful analysis of the error distribution among the correlated candidates, which was not studied by previous rigorous approaches such as the analysis of Minder and Sinclair (J. Cryptology 2012) or Devadas et al. (TCC 2017).
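For intuition, here is a minimal Python sketch of the plain BKW reduce-then-majority-vote loop on toy parameters; it includes none of the paper's c-ary tree, time-space, or sample optimizations.

```python
# Toy BKW for LPN: XOR colliding samples to zero out blocks, then vote.
import random
from collections import defaultdict

random.seed(0)
n, block, eps = 16, 4, 0.05            # toy dimension, block width, noise
s = [random.randrange(2) for _ in range(n)]

def sample():
    a = [random.randrange(2) for _ in range(n)]
    e = 1 if random.random() < eps else 0
    return a, (sum(x * y for x, y in zip(a, s)) + e) % 2

samples = [sample() for _ in range(100000)]

# Reduction: zero out one block of coordinates per round.
for blk in range(n // block - 1):
    lo, hi = blk * block, (blk + 1) * block
    buckets = defaultdict(list)
    for a, b in samples:
        buckets[tuple(a[lo:hi])].append((a, b))
    samples = []
    for group in buckets.values():
        ref_a, ref_b = group[0]
        for a, b in group[1:]:         # XOR against one representative
            samples.append(([x ^ y for x, y in zip(a, ref_a)], b ^ ref_b))

# Majority vote on samples hitting a single coordinate of the last block.
votes = defaultdict(lambda: [0, 0])
for a, b in samples:
    if sum(a) == 1:
        votes[a.index(1)][b] += 1
guess = [int(v1 > v0) for _, (v0, v1) in sorted(votes.items())]
print("recovered:", guess, "true:", s[-block:])
```

Each surviving sample is the XOR of up to 8 originals, so its noise rate grows to roughly (1 - (1 - 2·eps)^8)/2 ≈ 0.28, still far enough from 1/2 for the vote to succeed; this blow-up of noise with each merging level is exactly what the tradeoffs above manage.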
2022
ASIACRYPT
Stretching Cube Attacks: Improved Methods to Recover Massive Superpolies 📺
Cube attacks exploit the algebraic properties of symmetric ciphers by recovering a special polynomial, the superpoly, and subsequently the secret key. When the algebraic normal forms of the corresponding Boolean functions are not available, the division-property-based approach allows recovering the exact superpoly in a clever way. However, the computational cost to recover the superpoly becomes prohibitive as the number of rounds of the cipher increases. For example, the nested monomial predictions (NMP) technique proposed at ASIACRYPT 2021 got stuck at round 845 for Trivium. To alleviate the bottleneck of the NMP technique, i.e., the unsolvable model due to the excessive number of monomial trails, we shift our focus to the so-called valuable terms of a specific middle round that contribute to the superpoly. Two new techniques are introduced, namely, Non-zero Bit-based Division Property (NBDP) and Core Monomial Prediction (CMP), both of which result in a simpler MILP model compared to the MILP model of monomial prediction (MP). It can be shown that the CMP technique offers a substantial improvement over the monomial prediction technique in terms of the computational complexity of recovering valuable terms. Combining the divide-and-conquer strategy with these two new techniques, we catch the valuable terms more effectively and thus avoid wasting computational resources on intermediate terms contributing nothing to the superpoly. As an illustration of the power of our techniques, we apply our framework to Trivium, Grain-128AEAD, Kreyvium and ACORN. As a result, the computational cost of earlier attacks can be significantly reduced, and the exact ANFs of the superpolies for 846-, 847- and 848-round Trivium, 192-round Grain-128AEAD, 895-round Kreyvium and 776-round ACORN can be recovered in practical time, even though the superpoly of 848-round Trivium contains over 500 million terms; this corresponds to 3, 1, 1 and 1 rounds more than the previous best results, respectively. Moreover, by investigating the internal properties of the Möbius transformation, we show how to perform key recovery using superpolies involving full key bits, which leads to the best key recovery attacks on the targeted ciphers.
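The basic cube-attack step that all of these techniques serve to scale up can be shown in a few lines: summing a black-box Boolean function over a cube isolates the superpoly of that cube. The function f below is a made-up toy, not one of the ciphers attacked in the paper.

```python
# Toy cube attack: the cube sum over {x1, x2} equals the superpoly k1 + k2*k3.
from itertools import product

def f(x1, x2, x3, k1, k2, k3):
    # The coefficient of the monomial x1*x2 is k1 ^ (k2 & k3): the superpoly.
    return (x1 & x2 & (k1 ^ (k2 & k3))) ^ (x1 & k3) ^ (x2 & k1) ^ x3 ^ 1

def cube_sum(key, x3=0):
    acc = 0
    for x1, x2 in product((0, 1), repeat=2):   # sum over all cube corners
        acc ^= f(x1, x2, x3, *key)             # lower-degree terms cancel
    return acc

for key in product((0, 1), repeat=3):
    k1, k2, k3 = key
    assert cube_sum(key) == k1 ^ (k2 & k3)     # matches the superpoly
```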
2022
ASIACRYPT
Exploring SAT for Cryptanalysis: (Quantum) Collision Attacks against 6-Round SHA-3 📺
In this work, we focus on collision attacks against instances of the SHA-3 hash family in both classical and quantum settings. Since the 5-round collision attacks on SHA3-256 and other variants proposed by Guo et al. at JoC 2020, no other essential progress has been published. With a thorough investigation, we identify that the challenges of extending such collision attacks on SHA-3 to more rounds lie in the inefficiency of differential trail search. To overcome this obstacle, we develop a SAT automatic search toolkit. The tool is used in multiple intermediate steps of the collision attacks and exhibits surprisingly high efficiency in differential trail search and other optimization problems encountered in the process. As a result, we present the first 6-round classical collision attack on SHAKE-128, which also yields a quantum collision attack, and the first 6-round quantum collision attacks on SHA3-224 and SHA3-256, whose quantum time complexities depend on the hardware resources $S$ of the quantum computer. The fact that classical collision attacks do not apply to 6-round SHA3-224 and SHA3-256 shows the higher coverage of quantum collision attacks, which is consistent with that on SHA-2 observed by Hosoyamada and Sasaki at CRYPTO 2021.
2022
ASIACRYPT
Zero-Knowledge Protocols for the Subset Sum Problem from MPC-in-the-Head with Rejection 📺
We propose (honest-verifier) zero-knowledge arguments for the modular subset sum problem. Previous combinatorial approaches, notably one due to Shamir, yield arguments with cubic communication complexity (in the security parameter). More recent methods, based on the MPC-in-the-head technique, also produce arguments with cubic communication complexity. We improve this approach by using a secret-sharing over small integers (rather than modulo q) to reduce the size of the arguments and remove the prime modulus restriction. Since this sharing may reveal information on the secret subset, we introduce the idea of rejection to the MPC-in-the-head paradigm. Special care has to be taken to balance completeness and soundness and to preserve the zero-knowledge property of our arguments. We combine this idea with two techniques to prove that the secret vector (which selects the subset) indeed consists of binary coordinates. Our new protocols achieve an asymptotic improvement by producing arguments of quadratic size. This improvement is also practical: for a 256-bit modulus q, the best variant of our protocols yields 13 KB arguments, while previous proposals gave 1180 KB arguments for the best general protocol and 122 KB for the best protocol restricted to a prime modulus. Our techniques can also be applied to vectorial variants of the subset sum problem, in particular the inhomogeneous short integer solution (ISIS) problem, for which they provide an efficient alternative to state-of-the-art protocols when the underlying ring is not small and NTT-friendly. We also show the application of our protocol to build efficient zero-knowledge arguments of plaintext and/or key knowledge in the context of fully-homomorphic encryption. When applied to the TFHE scheme, the obtained arguments are more than 20 times smaller than those obtained with previous protocols. Finally, we use our technique to construct an efficient digital signature scheme based on a pseudo-random function due to Boneh, Halevi, and Howgrave-Graham.
2022
ASIACRYPT
SwiftEC: Shallue--van de Woestijne Indifferentiable Function to Elliptic Curves 📺
Runner-up Best Paper
Hashing arbitrary values to points on an elliptic curve is a required step in many cryptographic constructions, and a number of techniques have been proposed to do so over the years. One of the first ones was due to Shallue and van de Woestijne (ANTS-VII), and it had the interesting property of applying to essentially all elliptic curves over finite fields. It did not, however, have the desirable property of being *indifferentiable from a random oracle* when composed with a random oracle to the base field. Various approaches have since been considered to overcome this limitation, starting with the foundational work of Brier et al. (CRYPTO 2011). For example, if f: F_q→E(F_q) is the Shallue--van de Woestijne (SW) map and H, H' are *two* independent random oracles, we now know that m↦f(H(m))+f(H'(m)) is indifferentiable from a random oracle. Unfortunately, this approach has the drawback of being twice as expensive to compute as the straightforward, but not indifferentiable, m↦f(H(m)). Most other solutions so far have had the same issue: they are at least as costly as two base field exponentiations, whereas plain encoding maps like f cost only one exponentiation. Recently, Koshelev (DCC 2022) provided the first construction of indifferentiable hashing at the cost of one exponentiation, but only for a very specific class of curves (some of those with j-invariant 0), and using techniques that are unlikely to apply more broadly. In this work, we revisit this long-standing open problem, and observe that the SW map actually fits in a one-parameter family (f_u)_{u∈F_q} of encodings, such that for independent random oracles H, H', F: m↦f_{H'(m)}(H(m)) is indifferentiable from a random oracle. Moreover, on a very large class of curves (essentially those that are either of odd order or of order divisible by 4), the one-parameter family admits a rational parametrization, which lets us compute F at almost the same cost as f itself, and finally achieve indifferentiable hashing to most curves with a single exponentiation. Our new approach also yields an improved variant of the Elligator Squared technique of Tibouchi (FC 2014) that represents points of arbitrary elliptic curves as close-to-uniform random strings.
2022
ASIACRYPT
Efficient Searchable Symmetric Encryption for Join Queries 📺
The Oblivious Cross-Tags (OXT) protocol due to Cash et al. (CRYPTO 2013) is a highly scalable searchable symmetric encryption (SSE) scheme that allows fast processing of conjunctive and more general Boolean queries over encrypted relational databases. A longstanding open question has been to extend OXT to also support queries over joins of tables without pre-computing the joins. In this paper, we solve this open question without compromising on the nice properties of OXT with respect to both security and efficiency. We propose Join Cross-Tags (JXT) - a purely symmetric-key solution that supports efficient conjunctive queries over (equi-) joins of encrypted tables without any pre-computation at setup. JXT is fully compatible with OXT, and can be used in conjunction with OXT to support a wide class of SQL queries directly over encrypted relational databases. JXT incurs a storage cost (over OXT) of a factor equal to the number of potential join-attributes in a table, which is usually compensated by the fact that JXT is a fully symmetric-key solution (as opposed to OXT which relies on discrete-log hard groups). We prove the (adaptive) simulation-based security of JXT with respect to a rigorously defined leakage profile.
2022
ASIACRYPT
Horizontal racewalking using radical isogenies 📺
We address three main open problems concerning the use of radical isogenies, as presented by Castryck, Decru and Vercauteren at Asiacrypt 2020, in the computation of long chains of isogenies of fixed, small degree between elliptic curves over finite fields. Firstly, we present an interpolation method for finding radical isogeny formulae in a given degree N, which bypasses the need for factoring division polynomials over large function fields. Using this method, we are able to push the range for which we have formulae at our disposal from N ≤ 13 to N ≤ 37. Secondly, using a combination of known techniques and ad-hoc manipulations, we derive optimized versions of these formulae for N ≤ 19, with some instances performing more than twice as fast as their counterparts from 2020. Thirdly, we solve the problem of understanding the correct choice of radical when walking along the surface between supersingular elliptic curves over F_p with p ≡ 7 mod 8; this is non-trivial for even N and was only settled for N = 4 by Onuki and Moriya at PKC 2022. We give a conjectural statement for all even N and prove it for N ≤ 14. The speed-ups obtained from these techniques are substantial: using 16-isogenies, the computation of long chains of 2-isogenies over 512-bit prime fields can be improved by a factor of 3, and the previous implementation of CSIDH using radical isogenies can be sped up by about 12%.
2022
ASIACRYPT
FINAL: Faster FHE instantiated with NTRU and LWE 📺
The NTRU problem is a promising candidate for building efficient Fully Homomorphic Encryption (FHE). However, all the existing proposals (e.g. LTV, YASHE) need so-called 'overstretched' parameters of NTRU to enable homomorphic operations. It was shown by Albrecht et al. (CRYPTO 2016) that these parameters are vulnerable to subfield lattice attacks. Based on a recent, more detailed analysis of the overstretched NTRU assumption by Ducas and van Woerden (ASIACRYPT 2021), we construct two FHE schemes whose NTRU parameters lie outside the overstretched range. The first scheme is based solely on NTRU and demonstrates competitive performance against state-of-the-art FHE schemes, including TFHE. Our second scheme, which is based on both the NTRU and LWE assumptions, outperforms TFHE with 28% faster bootstrapping and 45% smaller bootstrapping and key-switching keys.
2022
ASIACRYPT
Additive-Homomorphic Functional Commitments and Applications to Homomorphic Signatures 📺
Functional Commitments (FC) allow one to reveal functions of committed data in a succinct and verifiable way. In this paper we put forward the notion of additive-homomorphic FC and show two efficient pairing-based realizations of this primitive, supporting multivariate polynomials of constant degree and monotone span programs, respectively. We also show applications of the new primitive in the context of homomorphic signatures: we show that additive-homomorphic FCs can be used to realize homomorphic signatures (supporting the same class of functionalities as the underlying FC) in a simple and elegant way. Using our new FCs as underlying building blocks, this leads to the (seemingly) first expressive realizations of multi-input homomorphic signatures not relying on lattices or multilinear maps.
2022
ASIACRYPT
A Universally Composable Non-Interactive Aggregate Cash System 📺
Mimblewimble is a privacy-preserving cryptocurrency that provides the functionality of transaction aggregation. Once certain coins have been spent in Mimblewimble, they can be deleted from the UTXO set. This is desirable: it saves storage and reduces computation cost. Fuchsbauer et al. (EUROCRYPT 2019) abstracted Mimblewimble as an Aggregate Cash System (ACS) and provided a security analysis via game-based definitions. In this paper, we revisit the ACS and focus on Non-interactive ACS, denoted NiACS. We propose the first simulation-based security definition and formalize an ideal functionality for NiACS. Then, we construct a NiACS protocol in a hybrid model which securely realizes the ideal NiACS functionality in the Universal Composability (UC) framework. In addition, we propose a building block, a variant of the ElGamal encryption scheme, that may be of independent interest. Finally, we show how to instantiate our protocol, obtaining the first NiACS system with UC security.
2022
ASIACRYPT
EvalRound Algorithm in CKKS Bootstrapping 📺
Homomorphic encryption (HE) has opened up an entirely new world for the privacy-preserving use of sensitive data by enabling computations on encrypted data. Among the many HE schemes targeting computation in various contexts, the Cheon--Kim--Kim--Song (CKKS) scheme stands out because it allows computation on encrypted real-number data, which has great impact in real-world applications. The CKKS scheme is a levelled homomorphic encryption scheme that consumes one level for each homomorphic multiplication. When the levels run out, a special computational circuit called bootstrapping is required in order to conduct further multiplications. The algorithm proposed by Cheon et al. has been regarded as the standard way to do bootstrapping in the CKKS scheme; it consists of the following four steps: ModRaise, CoeffToSlot, EvalMod and SlotToCoeff. However, these steps consume a number of levels themselves, and thus optimizing this extra consumption has been a major focus of recent research. Of the total levels consumed in the bootstrapping steps, about half are spent in the CoeffToSlot and SlotToCoeff steps to scale up the real-number components of DFT matrices and round them to the nearest integers. Each scale-up factor is so large that rescaling it down takes up one level. The scale-up factors can be taken smaller to save levels, but the rounding error would then be transmitted to EvalMod and eventually corrupt the accuracy of bootstrapping. EvalMod aims to remove the superfluous $qI$ term from a plaintext $\mathsf{pt} + qI$ resulting from ModRaise, where $q$ is the bottom modulus and $I$ is a polynomial with small integer coefficients. We call its opposite, which obtains $qI$, EvalRound. We introduce a novel bootstrapping algorithm consisting of ModRaise, CoeffToSlot, EvalRound and SlotToCoeff, which allows taking smaller scale-up factors without the damage of rounding errors.
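A toy integer illustration (plain arithmetic, not a homomorphic circuit) of the EvalMod/EvalRound relationship described above: after ModRaise a coefficient has the form $\mathsf{pt} + qI$, EvalMod recovers $\mathsf{pt}$, and EvalRound recovers the complementary $qI$ term.

```python
# EvalRound is the arithmetic complement of EvalMod: x = pt + q*I splits as
# eval_mod(x) = pt (centered reduction mod q) and eval_round(x) = q*I.
q = 2**16                      # bottom modulus (toy value)
pt = 12345                     # plaintext coefficient, |pt| << q
I = 7                          # small integer produced by ModRaise
x = pt + q * I                 # a coefficient after ModRaise

def eval_mod(x):               # centered reduction mod q: recovers pt
    r = x % q
    return r - q if r > q // 2 else r

def eval_round(x):             # the opposite: recovers q*I
    return x - eval_mod(x)     # equivalently q * round(x / q)

assert eval_mod(x) == pt
assert eval_round(x) == q * I
```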
2022
ASIACRYPT
Towards Case-Optimized Hybrid Homomorphic Encryption - Featuring the Elisabeth Stream Cipher - 📺
Hybrid Homomorphic Encryption (HHE) reduces the amount of client-side computation and bandwidth usage in a Fully Homomorphic Encryption (FHE) framework. HHE requires specific symmetric schemes that can be evaluated homomorphically in an efficient manner. In this paper, we introduce the paradigm of Group Filter Permutator (GFP) as a generalization of the Improved Filter Permutator paradigm introduced by Méaux et al. From this paradigm, we specify Elisabeth, a family of stream ciphers, and give an instance: Elisabeth-4. After proving the security of this scheme, we provide a Rust implementation of it and show that its performance is comparable to state-of-the-art HHE. The true strength of Elisabeth lies in the operations available server-side: while the best HHE applications were limited to a few multiplications server-side, we used data sent through Elisabeth-4 to homomorphically evaluate a neural network inference. Finally, we discuss the improvements and losses between the HHE and FHE frameworks and give ideas for building more efficient schemes from the Elisabeth family.
2022
ASIACRYPT
Enhancing Differential-Neural Cryptanalysis 📺
In CRYPTO 2019, Gohr showed that well-trained neural networks can perform cryptanalytic distinguishing tasks better than traditional differential distinguishers. Moreover, applying an unorthodox key-guessing strategy, an 11-round key-recovery attack on the modern block cipher Speck32/64 improved upon the published state-of-the-art result. This raises the following questions: to what extent does machine learning (ML) have an advantage over traditional methods, and does this advantage exist generally in the cryptanalysis of modern ciphers? To answer the first question, we devised ML-based key-recovery attacks on further round-reduced versions of Speck32/64. We achieved an improved 12-round attack and the first practical 13-round attack. Essential to the new results is the enhancement of a classical component of ML-based attacks, namely the neutral bits. To answer the second question, we produced various neural distinguishers on round-reduced Simon32/64 and compared them with their purely differential-based counterparts.
2022
ASIACRYPT
Towards Practical Topology-Hiding Computation 📺
Topology-hiding computation (THC) enables $n$ parties to perform a secure multiparty computation (MPC) protocol over an incomplete communication graph while keeping the graph hidden. The work of Akavia et al. (CRYPTO 2017 and JoC 2020) showed that THC is feasible for any graph. In this work, we focus on the efficiency of THC and give improvements for various tasks, including broadcast, sum, and general computation. We mainly consider THC on undirected cycles, but we also give two results for THC on general graphs. All of our results are derived in the presence of a passive adversary statically corrupting any number of parties. On undirected cycles, the state-of-the-art topology-hiding broadcast (THB) protocol is the Akavia-Moran (AM) protocol of Akavia et al. (EUROCRYPT 2017). We give an optimization of the AM protocol that reduces the communication cost of broadcasting $O(\kappa)$ bits from $O(n^2\kappa^2)$ bits to $O(n^2\kappa)$ bits. We also consider the sum and general-computation functionalities. Prior to our work, the only THC protocols realizing these functionalities were constructed by using THB to simulate point-to-point channels in an MPC protocol realizing the sum or general-computation functionality, respectively. By allowing the parties to know the exact number of parties (the AM protocol and our optimization only assume the parties know an upper bound on the number of parties), we derive more efficient THC protocols realizing these two functionalities. As a result, compared with previous works, we reduce the communication cost by a factor of $O(n\kappa)$ for both the sum and general-computation functionalities. As mentioned, we also obtain two results for THC on general graphs. The state-of-the-art THB protocol for general graphs is the Akavia-LaVigne-Moran (ALM) protocol of Akavia et al. (CRYPTO 2017 and JoC 2020). We show that our optimization of the AM protocol also applies to the ALM protocol and reduces its communication cost by a factor of $O(\kappa)$. Moreover, we optimize the fully-homomorphic-encryption (FHE) based GTHC protocol of LaVigne et al. (TCC 2018) and reduce its communication cost from $O(n^8\kappa^2)$ FHE ciphertexts and $O(n^5\kappa)$ FHE public keys to $O(n^6\kappa)$ FHE ciphertexts and $O(n^5\kappa)$ FHE public keys.
2022
ASIACRYPT
An Analysis of the Algebraic Group Model 📺
The algebraic group model (AGM), formalized by Fuchsbauer, Kiltz, and Loss, has recently received significant attention. One of the appealing properties of the AGM is that it is viewed as being (strictly) weaker than the generic group model (GGM), in the sense that hardness results for algebraic algorithms imply hardness results for generic algorithms, and generic reductions in the AGM (namely, between the algebraic formulations of two problems) imply generic reductions in the GGM. We highlight that as the GGM and AGM are currently formalized, this is not true: hardness in the AGM may not imply hardness in the GGM, and a generic reduction in the AGM may not imply a similar reduction in the GGM.
2022
ASIACRYPT
Multi-Client Functional Encryption with Fine-Grained Access Control 📺
Multi-Client Functional Encryption (MCFE) and Multi-Input Functional Encryption (MIFE) are very interesting extensions of Functional Encryption for practical purposes. They allow computing joint functions over data from multiple parties. Both primitives are aimed at applications in multi-user settings where decryption can be correctly output only for users with appropriate functional decryption keys. While the definitions for a single user or multiple users are quite general and can be realized for classes of functions as expressive as Turing machines or all circuits, efficient schemes have so far been proposed only for concrete classes of functions: either only for access control, i.e. the identity function under some conditions, or for linear/quadratic functions under no condition. In this paper, we target classes of functions that explicitly combine evaluation functions, independent of the decrypting user, with an access-control condition. More precisely, we introduce a framework for MCFE with fine-grained access control and propose constructions for both the single-client and the multi-client setting, for inner-product evaluation and access control via Linear Secret Sharing Schemes (LSSS), with selective and adaptive security. The only known work combining functional encryption in the multi-user setting with access control was proposed by Abdalla et al. (Asiacrypt '20); it relies on a generic transformation from single-client schemes and yields MIFE schemes that suffer from a quadratic factor of $n$ (where $n$ denotes the number of clients) in the ciphertext size. We follow a different path, via MCFE: we present a duplicate-and-compress technique to transform the single-client scheme into an MCFE scheme with fine-grained access control and only a linear factor of $n$ in the ciphertext size. Our final scheme thus outperforms the Abdalla et al. scheme by a factor of $n$, as one can obtain MIFE from MCFE by making all the labels in MCFE a fixed public constant. The concrete constructions are secure under the SXDH assumption, in the random oracle model for the MCFE scheme, but in the standard model for the MIFE improvement.
2022
ASIACRYPT
Non-Interactive Secure Computation of Inner-Product from LPN and LWE 📺
We put forth a new cryptographic primitive for securely computing inner products in a scalable, non-interactive fashion: any party can broadcast a public (computationally hiding) encoding of its input and store a secret state. Given their secret state and the other party's public encoding, any pair of parties can non-interactively compute additive shares of the inner product between the encoded vectors. We give constructions of this primitive from a common template, which can be instantiated under either the LPN (with non-negligible correctness error) or the LWE (with negligible correctness error) assumption. Our construction uses a novel twist on the standard non-interactive key exchange based on the Alekhnovich cryptosystem, which upgrades it to a non-interactive inner-product protocol almost for free. In addition to being non-interactive, our constructions have linear communication (with constants smaller than all known alternatives) and small computation: using LPN or LWE with quasi-cyclic codes, we estimate that encoding a length-2^20 vector over a 32-bit field takes less than 2 seconds on a standard laptop; decoding amounts to a single cheap inner product. We show how to remove the non-negligible error in our LPN instantiation using a one-time, logarithmic-communication preprocessing step. Finally, we show how to upgrade its security to the malicious model using new sublinear-communication zero-knowledge proofs for low-noise LPN samples, which might be of independent interest.
2022
ASIACRYPT
Triply Adaptive UC NIZK 📺
Non-interactive zero knowledge (NIZK) enables a prover to prove, given an accepting witness, that a statement in an NP language is valid without leaking any information about the witness. We study universally composable (UC) NIZKs which are secure against adaptive corruption of parties and provide adaptive soundness, i.e., the statement may be adaptively chosen by a malicious prover based on the setup string distribution. The only known adaptively secure NIZK protocols either fail to achieve full adaptive soundness or rely on non-falsifiable knowledge assumptions. We construct the first NIZK protocols which are triply adaptive - secure against adaptive corruptions, guaranteeing adaptive soundness, and satisfying adaptive zero knowledge - from falsifiable assumptions. We do so using the following methodology: - We define a new ideal functionality, denoted F_NICOM, for non-interactive commitment schemes in the UC framework. - We define and construct Sigma protocols which satisfy triply adaptive security in the F_NICOM model. - By relying on correlation intractable (CI) hash functions, we compile a triply adaptively secure Sigma protocol (in the F_NICOM model) into a triply adaptive UC-NIZK argument in the F_NICOM + common reference string (crs) model. In addition to CI hash functions, our compiler requires standard cryptographic primitives - non-interactive equivocal commitments and public-key encryption with obliviously samplable ciphertexts - for implementing F_NICOM in the crs model. We instantiate our framework by demonstrating that most statically secure Sigma protocols can be proven triply adaptively secure in the F_NICOM model, hence bridging the gap between static and adaptive security for NIZKs. Our NIZK arguments can be concretely based on assumptions like LWE, or LPN and DDH.
2022
ASIACRYPT
Encryption to the Future: A Paradigm for Sending Secret Messages to Future (Anonymous) Committees 📺
A number of recent works have constructed cryptographic protocols with flavors of adaptive security by having a randomly-chosen anonymous committee run at each round. Since most of these protocols are stateful, transferring secret states from past committees to future, but still unknown, committees is a crucial challenge. Previous works have tackled this problem with approaches tailor-made for their specific setting, which mostly rely on using a blockchain to orchestrate auxiliary committees that aid in the state hand-over process. In this work, we look at this challenge as an important problem on its own and initiate the study of Encryption to the Future (EtF) as a cryptographic primitive. First, we define a notion of an EtF scheme where time is determined with respect to an underlying blockchain and a lottery selects parties to receive a secret message at some point in the future. While this notion seems overly restrictive, we establish two important facts: 1. if used to encrypt towards parties selected in the “far future”, EtF implies witness encryption for NP over a blockchain; 2. if used to encrypt only towards parties selected in the “near future”, EtF is not only sufficient for transferring state among committees as required by previous works, but also captures previous tailor-made solutions. To corroborate these results, we provide a novel construction of EtF based on witness encryption over commitments (cWE), which we instantiate from a number of standard assumptions via a construction based on generic cryptographic primitives. Finally, we show how to use “near future” EtF to obtain “far future” EtF with a protocol based on an auxiliary committee whose communication complexity is independent of the length of plaintext messages being sent to the future.
2022
ASIACRYPT
Non-interactive Mimblewimble transactions, revisited 📺
Mimblewimble is a cryptocurrency protocol that promises to overcome notorious blockchain scalability issues and provides user privacy. For a long time its wider adoption has been hindered by the lack of non-interactive transactions, that is, payments for which only the sender needs to be online. Yu proposed a way of adding non-interactive transactions with stealth addresses to Mimblewimble, but this turned out to be flawed. Building on Yu and integrating ideas from Burkett, we give a fixed scheme and provide a rigorous security analysis strengthening the previous security model from Eurocrypt '19. Our protocol is considered for implementation by MimbleWimbleCoin, and a variant is now deployed as MimbleWimble Extension Blocks (MWEB) in Litecoin.
2022
ASIACRYPT
Authenticated Encryption with Key Identification 📺
Authenticated encryption with associated data (AEAD) forms the core of much of symmetric cryptography, yet the standard techniques for modeling AEAD assume recipients have no ambiguity about what secret key to use for decryption. This is divorced from what occurs in practice, such as in key management services, where a message recipient can store numerous keys and must identify the correct key before decrypting. Ad hoc solutions for identifying the intended key are deployed in practice, but these techniques can be inefficient and, in some cases, have even led to practical attacks. Notably, to date there has been no formal investigation of their security properties or efficacy. We fill this gap by providing the first formalization of nonce-based AEAD that supports key identification (AEAD-KI). Decryption now takes in a vector of secret keys and a ciphertext and must both identify the correct secret key and decrypt the ciphertext. We provide new formal security definitions, including new key robustness definitions and indistinguishability security notions. Finally, we show several different approaches for AEAD-KI and prove their security.
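To make the problem concrete, below is a naive Python sketch of the ad hoc "key hint" approach alluded to above: a short public digest of the key is attached to the ciphertext so the recipient can select the right key from its vector before decrypting. The helper names are hypothetical, and the sketch has none of the paper's robustness guarantees (for instance, equal hints reveal which ciphertexts were produced under the same key).

```python
# Naive AEAD with key identification: prepend a public "key hint" so the
# recipient can pick the right key from a vector before decrypting.
# Illustrative only; not one of the paper's constructions.
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def key_hint(key):                       # short public digest of the key
    return hashlib.sha256(b"key-id" + key).digest()[:8]

def encrypt_ki(key, nonce, msg, aad=b""):
    return key_hint(key) + AESGCM(key).encrypt(nonce, msg, aad)

def decrypt_ki(keys, nonce, blob, aad=b""):
    hint, ct = blob[:8], blob[8:]
    for k in keys:                       # identify the key, then decrypt
        if key_hint(k) == hint:
            return AESGCM(k).decrypt(nonce, ct, aad)
    raise ValueError("no matching key")

keys = [AESGCM.generate_key(bit_length=128) for _ in range(3)]
nonce = os.urandom(12)
blob = encrypt_ki(keys[1], nonce, b"hello")
assert decrypt_ki(keys, nonce, blob) == b"hello"
```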
2022
ASIACRYPT
Flashproofs: Efficient Zero-Knowledge Arguments of Range and Polynomial Evaluation with Transparent Setup 📺
We propose Flashproofs, a new type of efficient special honest-verifier zero-knowledge argument with a transparent setup in the discrete logarithm (DL) setting. First, we put forth gas-efficient range arguments that achieve $O(N^{\frac{2}{3}})$ communication cost, and involve $O(N^{\frac{2}{3}})$ group exponentiations for verification and a slightly sub-linear number of group exponentiations for proving with respect to the range $[0, 2^N-1]$, where $N$ is the bit length of the range. For typical confidential transactions on blockchain platforms supporting smart contracts, verifying our range arguments consumes only 237K and 318K gas for 32-bit and 64-bit ranges, which is comparable to the 220K gas incurred by verifying the most efficient zkSNARK with a trusted setup (EUROCRYPT '16) at present. Besides, the aggregation of multiple arguments can yield further efficiency improvements. Second, we present polynomial evaluation arguments based on the techniques of Bayer & Groth (EUROCRYPT '13). We provide two zero-knowledge arguments, which are optimised for lower-degree ($D \in [3, 2^9]$) and higher-degree ($D > 2^9$) polynomials, where $D$ is the polynomial degree. Our arguments yield a non-trivial improvement in overall efficiency. Notably, the number of group exponentiations for proving drops from $8\log D$ to $3(\log D+\sqrt{\log D})$. The communication cost and the number of group exponentiations for verification decrease from $7\log D$ to $(\log D + 3\sqrt{\log D})$. To the best of our knowledge, our arguments instantiate the most communication-efficient arguments of membership and non-membership in the DL setting among those not requiring trusted setups. More importantly, our techniques enable a significant asymptotic improvement in the efficiency of communication and verification (group exponentiations) from $O(\log D)$ to $O(\sqrt{\log D})$ when multiple arguments satisfying different polynomials with the same degree and inputs are aggregated.
2022
ASIACRYPT
The Abe-Okamoto Partially Blind Signature Scheme Revisited 📺
Partially blind signatures, an extension of ordinary blind signatures, are a primitive with wide applications in e-cash and electronic voting. One of the most efficient schemes to date is the one by Abe and Okamoto (CRYPTO 2000), whose underlying idea - the OR-proof technique - has served as the basis for several works. We point out several subtle flaws in the original proof of security, and provide a new detailed and rigorous proof, achieving bounds similar to those of the original work. We believe our insights on the proof strategy will prove useful in the security analyses of other OR-proof-based schemes.
2022
ASIACRYPT
Latin Dances Reloaded: Improved Cryptanalysis against Salsa and ChaCha, and the proposal of Forró 📺
In this paper, we present 4 major contributions to ARX ciphers and in particular to the Salsa/ChaCha family of stream ciphers: a) We propose an improved differential-linear distinguisher against ChaCha. To do so, we propose a new way to approach the derivation of linear approximations by viewing the algorithm in terms of simpler subrounds. Using this idea we show that it is possible to derive almost all linear approximations from previous works from just 3 simple rules. Furthermore, we show that with one extra rule it is possible to improve the linear approximations proposed by Coutinho and Souza at Eurocrypt 2021. b) We propose a technique called Bidirectional Linear Expansions (BLE) to improve attacks against Salsa. While previous works only considered linear expansions moving forward into the rounds, BLE explores the expansion of a single bit in both forward and backward directions. Applying BLE, we propose the first differential-linear distinguishers covering 7 and 8 rounds of Salsa, and we improve PNB key-recovery attacks against 8 rounds of Salsa. c) Using all the knowledge acquired studying the cryptanalysis of these ciphers, we propose some modifications in order to provide better diffusion per round and higher resistance to cryptanalysis, leading to a new stream cipher named Forró. We show that Forró has a higher security margin, which allows us to reduce the total number of rounds while maintaining the security level, thus creating a cipher that is faster on many platforms, especially constrained devices. d) Finally, we developed CryptDances, a new tool for the cryptanalysis of Salsa, ChaCha, and Forró designed to be used in high-performance environments with several GPUs. With CryptDances it is possible to compute differential correlations, derive new linear approximations for ChaCha automatically, and automate the computation of the complexity of PNB attacks, among other features. We make CryptDances available to the community at https://github.com/MurCoutinho/cryptDances.
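For reference, the ChaCha quarter-round can be written as four "subrounds" of the kind the derivation rules above operate on (each consisting of one modular addition, one xor and one rotation). This is just the standard ChaCha definition, checked against the RFC 8439 test vector; the attack itself is of course not reproduced here.

```python
# ChaCha quarter-round decomposed into four identical "subrounds".
MASK = 0xFFFFFFFF

def rotl(v, n):
    return ((v << n) | (v >> (32 - n))) & MASK

def subround(a, b, d, r):          # a += b; d ^= a; d <<<= r
    a = (a + b) & MASK
    d = rotl(d ^ a, r)
    return a, d

def quarter_round(a, b, c, d):
    a, d = subround(a, b, d, 16)
    c, b = subround(c, d, b, 12)
    a, d = subround(a, b, d, 8)
    c, b = subround(c, d, b, 7)
    return a, b, c, d

# Test vector from RFC 8439, Section 2.1.1.
assert quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567) == \
       (0xea2a92f4, 0xcb1cf8ce, 0x4581472e, 0x5881c4bb)
```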
2022
ASIACRYPT
SIDH Proof of Knowledge 📺
We show that the soundness proof for the De Feo--Jao--Plût identification scheme (the basis for supersingular isogeny Diffie--Hellman (SIDH) signatures) contains an invalid assumption, and we provide a counterexample to this assumption, thus showing the proof of soundness is invalid. As this proof was repeated in a number of works by various authors, multiple pieces of literature are affected by this result. Due to the importance of being able to prove knowledge of an SIDH key (for example, to prevent adaptive attacks), soundness is a vital property. Surprisingly, the problem of proving knowledge of a specific isogeny turns out to be considerably more difficult than was perhaps anticipated. The main results of this paper are a sigma protocol to prove knowledge of a walk of specified length in a supersingular isogeny graph, and a second one to additionally prove that the isogeny maps some torsion points to some other torsion points (as seen in SIDH public keys). Our scheme also avoids the SIDH identification scheme soundness issue raised by Ghantous, Pintore and Veroni. In particular, our protocol provides a non-interactive way of verifying the correctness of SIDH public keys, and related statements, as protection against adaptive attacks. Postscript: some months after this work was completed and made public, the SIDH assumption was broken in a series of papers by several authors. Hence, in the standard SIDH setting, some of the statements studied here now have trivial polynomial-time non-interactive proofs. Nevertheless, our first sigma protocol is unaffected by the attacks, and our second protocol may still be useful in present and future variants of SIDH that escape the attacks.
2022
ASIACRYPT
A Third is All You Need: Extended Partial Key Exposure Attack on CRT-RSA with Additive Exponent Blinding 📺
At Eurocrypt 2022, May et al. proposed a partial key exposure (PKE) attack on CRT-RSA that efficiently factors $N$ knowing only a $\frac{1}{3}$-fraction of either most significant bits (MSBs) or least significant bits (LSBs) of private exponents $d_p$ and $d_q$ for public exponent $e \approx N^{\frac{1}{12}}$. In practice, PKE attacks typically rely on the side-channel leakage of these exponents, while a side-channel resistant implementation of CRT-RSA often uses additively blinded exponents $d^{\prime}_p = d_p + r_p(p-1)$ and $d^{\prime}_q = d_q + r_q(q-1)$ with unknown random blinding factors $r_p$ and $r_q$, which makes PKE attacks more challenging. Motivated by the above, we extend the PKE attack of May et al. to CRT-RSA with additive exponent blinding. While admitting $r_p e \in (0, N^{\frac{1}{4}})$, our extended PKE works ideally when $r_p e \approx N^{\frac{1}{12}}$, in which case the entire private key can be recovered using only $\frac{1}{3}$ of the known MSBs or LSBs of the blinded CRT exponents $d^{\prime}_p$ and $d^{\prime}_q$. Our extended PKE follows their novel two-step approach to first compute the key-dependent constant $k^{\prime}$ ($ed^{\prime}_p = 1 + k^{\prime}(p-1)$, $ed^{\prime}_q = 1 + l^{\prime}(q-1)$), and then to factor $N$ by computing the root of a univariate polynomial modulo $k^{\prime}p$. We extend their approach as follows. For the MSB case, we propose two options for the first step of the attack, either by obtaining a single estimate $k^{\prime}l^{\prime}$ and calculating $k^{\prime}$ via factoring, or by obtaining multiple estimates $k^{\prime}l^{\prime}_1,\ldots,k^{\prime}l^{\prime}_z$ and calculating $k^{\prime}$ probabilistically via GCD. For the LSB case, we extend their approach by constructing a different univariate polynomial in the second step of the LSB attack. A formal analysis shows that our LSB attack runs in polynomial time under the standard Coppersmith-type assumption, while our MSB attack either runs in sub-exponential time with a reduced input size (the problem is reduced to factoring a number of size $e^2r_pr_q\approx N^{\frac{1}{6}}$) or in probabilistic polynomial time under a novel heuristic assumption. Under the settings of the most common key sizes (1024-bit, 2048-bit, and 3072-bit) and blinding factor lengths (32-bit, 64-bit, and 128-bit), our experiments verify the validity of the Coppersmith-type assumption and our own assumption, as well as the feasibility of the factoring step. To the best of our knowledge, this is the first PKE attack on CRT-RSA with experimentally verified effectiveness against 128-bit unknown exponent blinding factors. We also demonstrate an application of the proposed PKE attack using real partial side-channel key leakage targeting a Montgomery Ladder exponentiation CRT implementation.
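The blinded-exponent identity at the heart of the attack is easy to check numerically: from $ed_p = 1 + k(p-1)$ and $d^{\prime}_p = d_p + r_p(p-1)$ one gets $ed^{\prime}_p = 1 + k^{\prime}(p-1)$ with $k^{\prime} = k + er_p$. A toy Python check follows; the parameters are illustrative, not realistic RSA sizes.

```python
# Sanity check of the key-dependent constant k' for a blinded CRT exponent.
p = 1009                        # toy prime standing in for an RSA prime
e = 65537
d_p = pow(e, -1, p - 1)         # CRT exponent: e*d_p = 1 (mod p-1)
k = (e * d_p - 1) // (p - 1)    # e*d_p = 1 + k*(p-1)

r_p = 0xDEADBEEF                # blinding factor (random in practice)
d_p_blinded = d_p + r_p * (p - 1)
k_prime = (e * d_p_blinded - 1) // (p - 1)

assert e * d_p_blinded == 1 + k_prime * (p - 1)
assert k_prime == k + e * r_p   # blinding shifts k by e*r_p
```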
2022
ASIACRYPT
Universal Ring Signatures in the Standard Model 📺
Ring signatures allow a user to sign messages on behalf of an ad hoc set of users - a ring - while hiding her identity. The original motivation for ring signatures was whistleblowing [Rivest et al. ASIACRYPT'01]: a high government employee can anonymously leak sensitive information while certifying that it comes from a reliable source, namely by signing the leak. However, essentially all known ring signature schemes require the members of the ring to publish a structured verification key that is compatible with the scheme. This creates somewhat of a paradox since, if a user does not want to be framed for whistleblowing, they will stay clear of signature schemes that support ring signatures. In this work, we formalize the concept of universal ring signatures (URS). A URS enables a user to issue a ring signature with respect to a ring of users, independently of the signature schemes they are using. In particular, none of the verification keys in the ring need to come from the same scheme. Thus, in principle, URS presents an effective solution for whistleblowing. The main goal of this work is to study the feasibility of URS, especially in the standard model (i.e. no random oracles or common reference strings). We present several constructions of URS, offering different trade-offs between the assumptions required, the level of security achieved, and the size of signatures: - Our first construction is based on superpolynomial hardness assumptions of standard primitives. It achieves compact signatures, meaning the size of a signature depends only logarithmically on the size of the ring and on the number of signature schemes involved. - We then proceed to study the feasibility of constructing URS from standard polynomially-hard assumptions only. We construct a non-compact URS from witness encryption and additional standard assumptions. - Finally, we show how to modify the non-compact construction into a compact one by relying on indistinguishability obfuscation.
2022
ASIACRYPT
Mind the TWEAKEY Schedule: Cryptanalysis on SKINNYe-64-256 📺
Designing symmetric ciphers for particular applications has become a hot topic. At EUROCRYPT 2020, Naito, Sasaki and Sugawara invented the threshold-implementation-friendly cipher SKINNYe-64-256 to meet the requirements of the authenticated encryption scheme PFB_Plus. Soon after, Thomas Peyrin pointed out that SKINNYe-64-256 may lose its expected security due to the new tweakey schedule. Although the security issue of SKINNYe-64-256 is still unclear, Naito et al. decided to introduce SKINNYe-64-256 v2 as a response. In this paper, we give a formal cryptanalysis of the new tweakey schedule of SKINNYe-64-256 and discover unexpected differential cancellations in the tweakey schedule. For example, we find that the number of cancellations can be up to 8 within 30 consecutive rounds, which is significantly larger than the expected 3 cancellations. This property is derived by analyzing the update functions (LFSRs) of the tweakey via linear algebra. Moreover, we take our new discoveries into rectangle, MITM and impossible differential attacks, and adapt the corresponding automatic tools with new constraints from our discoveries. Finally, we find a 41-round related-tweakey rectangle attack on SKINNYe-64-256, leaving a security margin of only 3 rounds. While the STK construction accepts an arbitrary tweakey size, SKINNY and SKINNYe-64-256 v2 only support tweakey sizes up to 4n. We therefore introduce a new tweakey schedule design for SKINNY-64 to further extend the supported tweakey size, and we give a formal proof that our new tweakey schedule inherits the security requirements of STK and SKINNY.
2022
ASIACRYPT
Key-Reduced Variants of 3kf9 with Beyond-Birthday-Bound Security 📺
3kf9 is a three-key CBC-type MAC that enhances the standardized integrity algorithm f9 (3GPP-MAC). It has beyond-birthday-bound security and is expected to be a possible candidate for constrained environments when instantiated with lightweight blockciphers. Two variants, 2kf9 and 1kf9, were proposed to reduce the key size for efficiency, but recently Leurent et al. (CRYPTO'18) and Shen et al. (CRYPTO'21) pointed out critical flaws in these two variants and invalidated their security proofs with birthday-bound attacks. In this work, we revisit previous constructions of key-reduced variants of 3kf9 and analyze what went wrong in their security analyses. Interestingly, we find that a single doubling at the end not only fixes 2kf9 to go beyond the birthday bound, but can also help 1kf9 to go beyond the birthday bound. We then propose two new key-reduced variants of 3kf9, called n2kf9 and n1kf9. By leveraging previous attempts, we prove that n2kf9 is secure up to 2^{2n/3} queries, and that n1kf9 is secure up to 2^{2n/3} queries when the message space is prefix-free. We also provide a beyond-birthday-bound analysis of n2kf9 in the multi-user setting. Note that compared to EMAC and CBC-MAC, the additional cost of the higher security guarantee is expected to be minimal for n2kf9 and n1kf9: it only requires one additional blockcipher call and one doubling.
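The "doubling" referred to here is multiplication by x in GF(2^n). For concreteness, a standard sketch for n = 128 with the usual reduction polynomial (the same doubling used, for example, in CMAC subkey derivation); the constructions above apply it at the blockcipher's own block size.

```python
# Doubling in GF(2^128): shift left by one, reduce by x^128 + x^7 + x^2 + x + 1.
def dbl(block: bytes) -> bytes:
    assert len(block) == 16
    v = int.from_bytes(block, "big") << 1
    if v >> 128:                        # top bit fell off the field: reduce
        v ^= (1 << 128) | 0x87          # clears the overflow bit, xors in 0x87
    return v.to_bytes(16, "big")

assert dbl(b"\x80" + b"\x00" * 15) == b"\x00" * 15 + b"\x87"
```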
2022
ASIACRYPT
Recovering the tight security proof of SPHINCS+ 📺
In 2020, Kudinov, Kiktenko, and Fedorov pointed out a flaw in the tight security proof of the SPHINCS+ construction. This work gives a new tight security proof for SPHINCS+. The flaw can be traced back to the security proof for the Winternitz one-time signature scheme (WOTS) used within SPHINCS+. In this work, we give a stand-alone description of the WOTS variant used in SPHINCS+, which we call WOTS-TW. We provide a security proof for WOTS-TW and multi-instance WOTS-TW against non-adaptive chosen-message attacks, where the adversary only learns the public key after it has made its signature query. Afterwards, we show that this is sufficient to give a tight security proof for SPHINCS+. We recover almost the same bound for the security of SPHINCS+, with only a factor w loss compared to the previously claimed bound, where w is the Winternitz parameter, commonly set to 16. On a more technical level, we introduce new lower bounds on the quantum query complexity of generic attacks against properties of cryptographic hash functions and analyse the constructions of tweakable hash functions used in SPHINCS+ with regard to further security properties.
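For readers unfamiliar with WOTS, here is a minimal sketch of the hash-chain idea underlying the WOTS-TW variant studied in this work. The checksum chains, tweakable-hash addressing, and real parameters are omitted; without the checksum this toy version is trivially forgeable, so it illustrates the mechanics only.

```python
# Toy Winternitz-style chains: pk is the end of each chain, a signature is an
# intermediate chain value, and verification completes the chain.
import hashlib, os

w, n_chains = 16, 8                    # Winternitz parameter, message chunks

def h(x):
    return hashlib.sha256(x).digest()

def chain(x, steps):                   # iterate the hash `steps` times
    for _ in range(steps):
        x = h(x)
    return x

sk = [os.urandom(32) for _ in range(n_chains)]
pk = [chain(s, w - 1) for s in sk]

def sign(digits):                      # digits: n_chains values in [0, w-1]
    return [chain(s, d) for s, d in zip(sk, digits)]

def verify(digits, sig):               # complete each chain, compare to pk
    return all(chain(s, w - 1 - d) == p for s, d, p in zip(sig, digits, pk))

msg_digits = [3, 0, 15, 7, 9, 1, 12, 5]
assert verify(msg_digits, sign(msg_digits))
```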
2022
ASIACRYPT
Unconditionally Secure NIZK in the Fine-Grained Setting 📺
Non-interactive zero-knowledge (NIZK) proof systems are often constructed based on cryptographic assumptions. In this paper, we propose the first unconditionally secure NIZK system in the AC0-fine-grained setting. More precisely, our NIZK system has perfect soundness for all adversaries and unconditional zero-knowledge for AC0 adversaries, namely, an AC0 adversary can only break the zero-knowledge property with negligible probability unconditionally. At the core of our construction is an OR-proof system for satisfiability of 1 out of polynomial many statements.
2022
ASIACRYPT
Attaining GOD Beyond Honest Majority With Friends and Foes 📺
In the classical notion of multiparty computation (MPC), an honest party learning private inputs of others, either as a part of protocol specification or due to a malicious party's unspecified messages, is not considered a potential breach. Several works in the literature exploit this seemingly minor loophole to achieve the strongest security of guaranteed output delivery via a trusted third party, which nullifies the purpose of MPC. Alon et al. (CRYPTO 2020) presented the notion of Friends and Foes ($\mathtt{FaF}$) security, which accounts for such undesired leakage towards honest parties by modelling them as semi-honest (friends) who do not collude with malicious parties (foes). With real-world applications in mind, it is more realistic to assume parties are semi-honest rather than completely honest, hence it is imperative to design efficient protocols conforming to the $\mathtt{FaF}$ security model. Our contributions are not only motivated by the practical viewpoint, but also consider the theoretical aspects of $\mathtt{FaF}$ security. We prove the necessity of semi-honest oblivious transfer for $\mathtt{FaF}$-secure protocols with optimal resiliency. On the practical side, we present QuadSquad, a ring-based 4PC protocol which achieves fairness and GOD in the $\mathtt{FaF}$ model with an optimal corruption of $1$ malicious and $1$ semi-honest party. QuadSquad is, to the best of our knowledge, the first practically efficient $\mathtt{FaF}$-secure protocol with optimal resiliency. Its performance is comparable to that of state-of-the-art dishonest-majority protocols while improving the security guarantee from abort to fairness and GOD. Further, QuadSquad elevates the security by tackling a stronger adversarial model than the state-of-the-art honest-majority protocols, while offering comparable performance for the input-dependent computation. We corroborate these claims by benchmarking the performance of QuadSquad. We also consider the application of liquidity matching, which deals with highly sensitive financial transaction data, where $\mathtt{FaF}$ security is apt. We design a range of $\mathtt{FaF}$-secure building blocks to securely realize liquidity matching as well as other popular applications such as privacy-preserving machine learning (PPML). The inclusion of these blocks makes QuadSquad a comprehensive framework.
2022
ASIACRYPT
Puncturable Key Wrapping and Its Applications 📺
We introduce puncturable key wrapping (PKW), a new cryptographic primitive that supports fine-grained forward security properties in symmetric key hierarchies. We develop syntax and security definitions, along with provably secure constructions for PKW from simpler components (AEAD schemes and puncturable PRFs). We show how PKW can be applied in two distinct scenarios. First, we show how to use PKW to achieve forward security for TLS 1.3 0-RTT session resumption, even when the server's long-term key for generating session tickets gets compromised. This extends and corrects a recent work of Aviram, Gellert, and Jager (Journal of Cryptology, 2021). Second, we show how to use PKW to build a protected file storage system with file shredding, wherein a client can outsource encrypted files to a potentially malicious or corrupted cloud server whilst achieving strong forward-security guarantees, relying only on local key updates.
2022
ASIACRYPT
Hawk: Module LIP makes Lattice Signatures Fast, Compact and Simple 📺
We propose the signature scheme Hawk, a concrete instantiation of proposals to use the Lattice Isomorphism Problem (LIP) as a foundation for cryptography that focuses on simplicity. This simplicity stems from LIP, which allows the use of lattices such as $\mathbb{Z}^n$, leading to signature algorithms with no floats, no rejection sampling, and compact precomputed distributions. Such design features are desirable for constrained devices, and when computing signatures inside FHE or MPC. The most significant change from recent LIP proposals is the use of module lattices, reusing algorithms and ideas from NTRUSign and Falcon. Its simplicity makes Hawk competitive. We provide cryptanalysis with experimental evidence for the design of Hawk and implement two parameter sets, Hawk-512 and Hawk-1024. Signing using Hawk-512 and Hawk-1024 is four times faster than Falcon on x86 architectures, produces signatures that are about 15% more compact, and is slightly more secure against forgeries by lattice reduction attacks. When floating-point arithmetic is unavailable, Hawk signs 15 times faster than Falcon. We provide a worst-case to average-case reduction for module LIP. For certain parametrisations of Hawk this applies to secret key recovery, and we reduce signature forgery in the random oracle model to a new problem called the one-more short vector problem.
2022
ASIACRYPT
Nostradamus goes Quantum 📺
In the Nostradamus attack, introduced by Kelsey and Kohno (Eurocrypt 2006), the adversary has to commit to a hash value y of an iterated hash function H such that, when later given a message prefix P, the adversary is able to find a suitable "suffix explanation" S with H(P||S)=y. Kelsey and Kohno show a herding attack with $2^{2n/3}$ evaluations of the compression function of H (with n bits output and state), locating the attack between preimage attacks and collision search in terms of complexity. Here we investigate the security of Nostradamus attacks for quantum adversaries. We present a quantum herding algorithm for the Nostradamus problem making approximately $\sqrt[3]{n}\cdot 2^{3n/7}$ compression function evaluations, significantly improving over the classical bound. We also prove that quantum herding attacks cannot do better than $2^{3n/7}$ evaluations for random compression functions, showing that our algorithm is (essentially) optimal. We also discuss a slightly less tight bound of roughly $2^{3n/7-s}$ for general Nostradamus attacks against random compression functions, where s is the maximal block length of the adversarially chosen suffix S.
2022
ASIACRYPT
Optimizing Rectangle Attacks: A Unified and Generic Framework for Key Recovery 📺
The rectangle attack has been shown to be a very powerful form of cryptanalysis against block ciphers. Given a rectangle distinguisher, one expects to mount key-recovery attacks as efficiently as possible. In the literature, there have been four algorithms for rectangle key-recovery attacks; however, their performance varies from case to case. Moreover, there are numerous applications in which the attacks are not optimal. In this paper, we investigate rectangle key recovery in depth and propose a unified and generic key-recovery algorithm which supports any possible attacking parameters. Notably, it not only covers the four previous rectangle key-recovery algorithms, but also unveils five types of new attacks that were previously missed. Along with the new key-recovery algorithm, we propose a framework for automatically finding the best attacking parameters, with which the time complexity of the rectangle attack is minimized using the new algorithm. To demonstrate the efficiency of the new key-recovery algorithm, we apply it to Serpent, CRAFT, SKINNY and Deoxys-BC-256 based on existing distinguishers and obtain a series of improved rectangle attacks.
2022
ASIACRYPT
Synthesizing Quantum Circuits of AES with Lower T-depth and Less Qubits 📺
The significant progress in the development of quantum computers has made the study of cryptanalysis based on quantum computing an active topic. To accurately estimate the resources required to carry out quantum attacks, the involved quantum algorithms have to be synthesized into quantum circuits with basic quantum gates. In this work, we present several generic synthesis and optimization techniques for circuits implementing the quantum oracles of iterative symmetric-key ciphers that are commonly employed in quantum attacks based on Grover's and Simon's algorithms. Firstly, a general structure for implementing the round functions of block ciphers in place is proposed. Then, we present some novel techniques for synthesizing efficient quantum circuits of linear and non-linear cryptographic building blocks. We apply these techniques to AES and systematically investigate strategies for depth-width trade-offs. Along the way, we derive a quantum circuit for the AES S-box with provably minimal T-depth, based on some new observations on its classical circuit. As a result, the T-depth and width (number of qubits) required for implementing the quantum circuits of AES are significantly reduced. Compared with the circuit proposed at EUROCRYPT 2020, the T-depth is reduced from 60 to 40 without increasing the width, or to 30 with a slight increase in width. These circuits are fully implemented in Microsoft Q# and the source code is publicly available. Compared with the circuit proposed at ASIACRYPT 2020, the width of one of our circuits is reduced from 512 to 371, and the Toffoli-depth is reduced from 2016 to 1558 at the same time. In fact, we can reduce the width to 270 at the cost of increased depth. Moreover, a full spectrum of depth-width trade-offs is provided, setting new records for the synthesis and optimization of quantum circuits of AES.
2022
ASIACRYPT
Multi-User Security of the Sum of Truncated Random Permutations 📺
For several decades, constructing pseudorandom functions from pseudorandom permutations, the so-called Luby-Rackoff backward construction, has been a popular cryptographic problem. Two methods are well known and have been comprehensively studied: summing two random permutations, and truncating partial bits of the output of a random permutation. In this paper, by combining summation and truncation, we propose new Luby-Rackoff backward constructions, dubbed SaT1 and SaT2. SaT2 is obtained by partially truncating the output bits of the sum of two independent random permutations, and SaT1 is its single-permutation-based variant using domain separation. The distinguishing advantages against SaT1 and SaT2 are upper bounded by $O(\sqrt{\mu q_{\max}}/2^{n-0.5m})$ and $O(\sqrt{\mu} q_{\max}^{1.5}/2^{2n-0.5m})$, respectively, in the multi-user setting, where $n$ is the size of the underlying permutation, $m$ is the output size of the construction, $\mu$ is the number of users, and $q_{\max}$ is the maximum number of queries per user. We also prove that the distinguishing advantage against a variant of XORP[3] (studied by Bhattacharya and Nandi at Asiacrypt 2021) using independent permutations, dubbed SoP3-2, is upper bounded by $O(\sqrt{\mu} q_{\max}^2/2^{2.5n})$. In the multi-user setting with $\mu = O(2^{n-m})$, a truncated random permutation provides only birthday-bound security, while SaT1 and SaT2 are fully secure, i.e., they allow $O(2^n)$ queries for each user. This is the same security level as XORP[3], which uses three permutation calls, while SaT1 and SaT2 need only two permutation calls.
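A toy sketch of the two constructions just described, using AES under independent keys as stand-in random permutations (n = 128): SaT2 truncates the XOR of two independent permutation outputs, while SaT1 derives its two permutation calls from a single key via a domain-separation bit. Illustrative only; the keys and the output size m are arbitrary choices.

```python
# SaT2: truncate(P1(x) xor P2(x)); SaT1: one permutation, domain-separated.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def prp(key, block):                    # one AES call = one permutation call
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

k1, k2 = os.urandom(16), os.urandom(16)
m = 10                                  # output size in bytes (m < n/8 = 16)

def sat2(x16):                          # two independent permutations
    return xor(prp(k1, x16), prp(k2, x16))[:m]

def sat1(x15):                          # single permutation, one separation bit
    return xor(prp(k1, b"\x00" + x15), prp(k1, b"\x01" + x15))[:m]

print(sat2(os.urandom(16)).hex(), sat1(os.urandom(15)).hex())
```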
2022
ASIACRYPT
On Module Unique-SVP and NTRU 📺
The NTRU problem can be viewed as an instance of finding a short non-zero vector in a lattice, under the promise that it contains an exceptionally short vector. Further, the lattice under scope has the structure of a rank-2 module over the ring of integers of a number field. Let us refer to this problem as the module unique Shortest Vector Problem, or mod-uSVP for short. We exhibit two reductions that together provide evidence that the NTRU problem is not just a particular case of mod-uSVP, but representative of it from a computational perspective. First, we reduce worst-case mod-uSVP to worst-case NTRU. For this, we rely on an oracle for id-SVP, the problem of finding short non-zero vectors in ideal lattices. Using the worst-case id-SVP to worst-case NTRU reduction from Pellet-Mary and Stehlé [ASIACRYPT'21], this shows that worst-case NTRU is equivalent to worst-case mod-uSVP. Second, we give a random self-reduction for mod-uSVP. We put forward a distribution D over mod-uSVP instances such that solving mod-uSVP with non-negligible probability for samples from D allows one to solve mod-uSVP in the worst case. Together with the first result, this gives a reduction from worst-case mod-uSVP to an average-case version of NTRU where the NTRU instance distribution is inherited from D. This worst-case to average-case reduction requires an oracle for id-SVP.
2022
ASIACRYPT
Improving Bounds on Elliptic Curve Hidden Number Problem for ECDH Key Exchange 📺
Elliptic Curve Hidden Number Problem (EC-HNP) was first introduced by Boneh, Halevi and Howgrave-Graham at Asiacrypt 2001. To rigorously assess the bit security of the Diffie--Hellman key exchange with elliptic curves (ECDH), the Diffie--Hellman variant of EC-HNP, regarded as an elliptic curve analogy of the Hidden Number Problem (HNP), was presented at PKC 2017. This variant can also be used for practical cryptanalysis of ECDH key exchange in the situation of side-channel attacks. In this paper, we revisit the Coppersmith method for solving the involved modular multivariate polynomials in the Diffie--Hellman variant of EC-HNP and demonstrate that, for any given positive integer $d$, a given sufficiently large prime $p$, and a fixed elliptic curve over the prime field $\mathbb{F}_p$, if there is an oracle that outputs about $\frac{1}{d+1}$ of the most (least) significant bits of the $x$-coordinate of the ECDH key, then one can give a heuristic algorithm to compute all the bits within polynomial time in $\log_2 p$. When $d>1$, the heuristic result $\frac{1}{d+1}$ significantly outperforms both the rigorous bound $\frac{5}{6}$ and heuristic bound $\frac{1}{2}$. Due to the heuristics involved in the Coppersmith method, we do not get the ECDH bit security on a fixed curve. However, we experimentally verify the effectiveness of the heuristics on NIST curves for small dimension lattices.
2022
ASIACRYPT
Security of Truncated Permutation Without Initial Value 📺
Indifferentiability is a powerful notion in cryptography. If a construction is proven to be indifferentiable from an ideal object, it can under certain assumptions instantiate that ideal object in higher-level constructions. Indifferentiability is a particularly useful model for cryptographic hash functions, and myriad results are known proving that a hash function behaves like a random oracle under the assumption that the underlying primitive (typically a compression function, a block cipher, or a permutation) is random. Recently, advances have been made in proving the indifferentiability of one-way functions with fixed input length. One such example is the truncation of a permutation. If one evaluates a random permutation on an input value concatenated with a fixed initial value, and truncates the output, one obtains a construction that is indifferentiable from a random function up to a certain bound (Dodis et al., FSE 2009; Choi et al., ASIACRYPT 2019). Security of this construction, however, is in part determined by the length of the initial value; omission of this fixed value yields an insecure construction. In this paper, we reconsider truncation of a permutation, and prove that the construction is indifferentiable from a random oracle even if this fixed initial value is replaced by a randomized value. This randomized value may be the same for different evaluations of the construction, or freshly generated, at the discretion of the adversary. The security level is the same as that of truncation with a fixed initial value, up to collisions in the randomized value. We show that our construction has immediate implications in the context of parallel variable-length digest generation. In detail, we describe Cascade-MGF, which operates on top of any cryptographic hash function and uses the hash function output as the randomized initial value in truncation. We demonstrate that Cascade-MGF compares favorably with earlier parallel variable-length digest generation constructions, namely Counter-MGF and Chained-MGF, in almost all settings.
2022
ASIACRYPT
Improved Straight-Line Extraction in the Random Oracle Model With Applications to Signature Aggregation 📺
The goal of this paper is to improve the efficiency and applicability of straight-line extraction techniques in the random oracle model. Straight-line extraction in the random oracle model refers to the existence of an extractor which, given the random oracle queries made by a prover P*(x) on some theorem x, is able to produce a witness w for x with roughly the same probability that P* produces a verifying proof. This notion applies both to zero-knowledge protocols and to verifiable computation, where the goal is compressing a proof. Pass (CRYPTO '03) first showed how to achieve this property for NP using a cut-and-choose technique, which incurred a $\lambda^2$-bit overhead in communication, where $\lambda$ is a security parameter. Fischlin (CRYPTO '05) presented a more efficient technique based on "proofs of work" that sheds this $\lambda^2$ cost, but only applies to a limited class of Sigma protocols with a "quasi-unique response" property, which, for example, does not necessarily include the standard OR composition of Sigma protocols. With Schnorr/EdDSA signature aggregation as a motivating application, we develop new techniques to improve the computation cost of straight-line extractable proofs. Our improvements over the state of the art range from 70x to 200x for the best compression parameters. This is due to a uniquely suited polynomial evaluation algorithm, and the insight that a proof of work relying on multicollisions and the birthday paradox is faster to solve than inverting a fixed target. Our collision-based proof of work more generally improves the prover's random oracle query complexity when applied in the NIZK setting as well. In addition to reducing the query complexity of Fischlin's prover, for a special class of Sigma protocols we can for the first time closely match a new lower bound we present. Finally, we extend Fischlin's technique so that it applies to a more general class of strongly-sound Sigma protocols, which includes the OR composition. We achieve this by carefully randomizing Fischlin's technique: we show that its current deterministic nature prevents its application to certain multi-witness languages.
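The cost gap exploited here is the classic one between fixed-target search and birthday-style collision search: hitting a t-bit target takes about 2^t hash calls, while a collision on t bits takes about 2^(t/2). A toy Python contrast follows (tiny, illustrative parameters; the paper's proof of work uses multicollisions rather than the plain collision shown).

```python
# Fixed-target proof of work vs. birthday-style collision search.
import hashlib

def h(x: int, out_bits: int) -> int:    # truncated SHA-256 as a toy oracle
    d = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(d, "big") >> (256 - out_bits)

def fixed_target(t: int) -> int:        # expected ~2^t hash calls
    r = 0
    while h(r, t) != 0:
        r += 1
    return r

def birthday_collision(t: int):         # expected ~2^(t/2) hash calls
    seen, r = {}, 0
    while True:
        v = h(r, t)
        if v in seen:
            return seen[v], r
        seen[v] = r
        r += 1

print(fixed_target(16))                 # on the order of 65536 calls
print(birthday_collision(16))           # on the order of a few hundred calls
```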
2022
ASIACRYPT
State Machine Replication under Changing Network Conditions 📺
Protocols for state machine replication (SMR) are typically designed for synchronous or asynchronous networks, with a lower corruption threshold in the latter case. Recent network-agnostic protocols are secure when run in either a synchronous or an asynchronous network. We propose two new constructions of network-agnostic SMR protocols that improve on existing protocols in terms of either the adversarial model or communication complexity: 1. an adaptively secure protocol with optimal corruption thresholds and quadratic amortized communication complexity per transaction; 2. a statically secure protocol with near-optimal corruption thresholds and linear amortized communication complexity per transaction. We further explore SMR protocols run in a network that may change between synchronous and asynchronous arbitrarily often; parties can become uncorrupted again (as in the proactive model), and the protocol should remain secure as long as the appropriate corruption thresholds are maintained. We show that purely asynchronous proactive secret sharing is impossible without some form of synchronization between the parties, ruling out a natural approach to proactively secure network-agnostic SMR protocols. Motivated by this negative result, we consider a model where the adversary is limited in the total number of parties it can corrupt over the duration of the protocol and show, in this setting, that our SMR protocols remain secure even under arbitrarily changing network conditions.
2022
ASIACRYPT
Cryptographic Primitives with Hinting Property 📺
Best Paper by Early Career Researcher Award
A hinting PRG is a (potentially) stronger variant of a PRG with a "deterministic" form of circular security with respect to the seed of the PRG (Koppula and Waters, CRYPTO 2019). Hinting PRGs enable many cryptographic applications, most notably CCA-secure public-key encryption and trapdoor functions. In this paper, we study cryptographic primitives with the hinting property, yielding the following results: - We present a novel and conceptually simpler approach for designing hinting PRGs from certain decisional assumptions over cyclic groups or isogeny-based group actions, which enables simpler security proofs than the existing approaches for designing such primitives. - We introduce hinting weak PRFs, a natural extension of the hinting property to weak PRFs, and show how to realize circular/KDM-secure symmetric-key encryption from any hinting weak PRF. We demonstrate that our simple approach for building hinting PRGs can be extended to realize hinting weak PRFs from the same set of decisional assumptions. - We propose a stronger version of the hinting property, which we call the functional hinting property, that guarantees security even in the presence of hints about functions of the secret seed/key. We show how to instantiate functional hinting PRGs and functional hinting weak PRFs for certain (families of) functions by building upon our simple techniques for realizing plain hinting PRGs/weak PRFs. We also demonstrate the applicability of a functional hinting weak PRF with certain algebraic properties to realizing KDM-secure public-key encryption in a black-box manner. - Finally, we show the first black-box separation between hinting weak PRFs (and hinting PRGs) and public-key encryption, using simple realizations of these primitives given only a random oracle.
2022
ASIACRYPT
On the Field-Based Division Property: Applications to MiMC, Feistel MiMC and GMiMC 📺
Recent practical applications using advanced cryptographic protocols such as multi-party computation (MPC) and zero-knowledge proofs (ZKP) have prompted a range of novel symmetric primitives described over large finite fields, characterized as arithmetization-oriented (AO) ciphers. Such designs, aiming to minimize the number of multiplications over fields, have a high risk of being vulnerable to algebraic attacks, especially to the higher-order differential attack. Thus, it is important to carefully evaluate the growth of their algebraic degree. However, degree estimation for AO ciphers has been a challenge for cryptanalysts due to the lack of general and accurate methods. In this paper, we extend the division property, a state-of-the-art framework for finding the upper bound of the algebraic degree over binary fields, to the scope of $\mathbb{F}_{2^n}$, called general monomial prediction. It is a generic method to detect the algebraic degree of AO ciphers, even applicable to Feistel ciphers which have no better bounds than the trivial exponential one. In general monomial prediction, our idea is to evaluate whether the polynomial representation of a block cipher contains some specific monomials. With a deep investigation of the arithmetic features, we introduce the propagation rules of monomials for field-based operations, which can be efficiently modeled using the bit-vector theory of SMT. The new search tool for degree estimation can then be constructed from the relationship between the algebraic degree and the exponents of monomials. We apply our new framework to some important AO ciphers, including Feistel MiMC, GMiMC, and MiMC. For Feistel MiMC, we show that the algebraic degree grows significantly more slowly than the naive exponential bound. For the first time, we present a secret-key higher-order differential distinguisher for up to 124 rounds, much better than the 83-round distinguisher for the Feistel MiMC permutation proposed at CRYPTO 2020. We also exhibit a full-round zero-sum distinguisher with a data complexity of 2^{251}. Our method can be further extended to the general Feistel structure with more branches, and we exhibit higher-order differential distinguishers against the practical instance of GMiMC for up to 50 rounds. For MiMC in SP-networks, our results match the exact algebraic degree proved by Bouvier. We also point out that the number of rounds in the MiMC specification is not necessarily sufficient to guarantee security against the higher-order differential attack for MiMC-like schemes with different exponents. The investigation of different exponents provides some guidance on cipher design.
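The object being bounded can be conveyed by a crude toy (our own sketch, not the paper's tool): propagate the set of univariate exponents reachable through rounds of x -> (x+c)^3 over $\mathbb{F}_{2^n}$, recalling that in characteristic 2 we have (x+c)^3 = x^3 + c x^2 + c^2 x + c^3, and bound the algebraic degree by the maximum Hamming weight of a reachable exponent. Cancellations are ignored, so this gives only an upper bound; general monomial prediction is far more precise:

```python
# Crude exponent-set tracking for MiMC-style rounds over F_{2^n} (toy sizes).
n = 11                       # field size exponent (toy)
mask = (1 << n) - 1          # nonzero exponents live modulo 2^n - 1

def reduce_exp(e: int) -> int:
    """Reduce an exponent modulo x^(2^n) - x: nonzero e maps into [1, 2^n-1]."""
    if e == 0:
        return 0
    r = e % mask
    return r if r else mask

def step(exps: set[int]) -> set[int]:
    """Exponents possibly present after one round x -> (x + c)^3."""
    out = {0}                                           # constant term c^3
    out |= {reduce_exp(e) for e in exps}                # c^2 * P
    out |= {reduce_exp(2 * e) for e in exps}            # c * P^2
    out |= {reduce_exp(a + b + d)                       # P^3
            for a in exps for b in exps for d in exps}
    return out

exps = {0, 1}                                           # the variable x itself
for r in range(1, 6):
    exps = step(exps)
    print(f"round {r}: algebraic degree <=",
          max(bin(e).count('1') for e in exps))
```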
2022
ASIACRYPT
Nonmalleable Digital Lockers and Robust Fuzzy Extractors in the Plain Model 📺
We give the first constructions in the plain model of 1) nonmalleable digital lockers (Canetti and Varia, TCC 2009) and 2) robust fuzzy extractors (Boyen et al., Eurocrypt 2005) that secure sources with entropy below half their length. Constructions of both primitives were previously known only assuming random oracles or a common reference string (CRS). We define a new primitive called nonmalleable point function obfuscation with associated data. The associated data is public but protected from all tampering. We construct a digital locker using a similar paradigm. Our construction achieves nonmalleability over the output point by placing a CRS into the associated data and using an appropriate non-interactive zero-knowledge proof. The construction protects against tampering with the input point by low-degree polynomials, and against arbitrary tampering with the output point and the associated data. Our constructions achieve virtual black-box security. These constructions are then used to create robust fuzzy extractors that can support low-entropy sources in the plain model. By using the geometric structure of a syndrome secure sketch (Dodis et al., SIAM Journal on Computing 2008), the adversary's tampering function can always be expressed as a low-degree polynomial; thus, the protection provided by the constructed nonmalleable objects suffices.
2022
ASIACRYPT
Large-Precision Homomorphic Sign Evaluation using FHEW/TFHE Bootstrapping 📺
A comparison of two encrypted numbers is an important operation needed in many machine learning applications, for example, decision tree or neural network inference/training. An efficient instantiation of this operation in the context of fully homomorphic encryption (FHE) can be challenging, especially when a relatively high precision is sought. The conventional FHE way of evaluating the comparison operation, which is based on the sign function evaluation using FHEW/TFHE bootstrapping (often referred to in the literature as programmable bootstrapping), can only support very small precision (practically limited to about 4-5 bits). For higher precision, the runtime complexity scales linearly with the ciphertext (plaintext) modulus (i.e., exponentially with the modulus bit size). We propose sign function evaluation algorithms that scale logarithmically with the ciphertext (plaintext) modulus, enabling the support of large-precision comparison in practice. Our sign evaluation algorithms are based on an iterative use of homomorphic floor function algorithms, which are also derived in our work. Further, we generalize our procedures for floor function evaluation to arbitrary function evaluation, which can be used to support both small plaintext moduli (directly) and larger plaintext moduli (by using a homomorphic digit decomposition algorithm, also suggested in our work). We implement all these algorithms using the PALISADE lattice cryptography library, introducing several implementation-specific optimizations along the way, and discuss our experimental results.
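The high-level recipe can be mocked in the clear (no encryption; the modulus, base, and helper names below are illustrative assumptions): iteratively clear the least-significant base-B digit with a floor step and scale down, so only about log_B(Q) small-domain lookups, each standing in for one programmable bootstrap, are needed instead of one lookup over all of Z_Q:

```python
# Plaintext mock of logarithmic-depth sign evaluation via digit stripping.
Q = 1 << 32          # large plaintext/ciphertext modulus (toy)
B = 1 << 4           # digit base; a real FHEW/TFHE bootstrap handles ~B values

def small_lut(d: int) -> int:
    """Stand-in for one programmable bootstrap on a small (base-B) domain."""
    return d  # here: identity on a digit

def sign_mock(x: int) -> int:
    """Return 1 if x in [0, Q/2), else -1, using log_B(Q) digit strips."""
    x %= Q
    q = Q
    while q > B:
        digit = small_lut(x % B)    # one small-domain lookup per digit
        x = (x - digit) // B        # floor step: clear the digit, scale down
        q //= B
    return 1 if x < q // 2 else -1  # final sign read on a tiny domain

assert sign_mock(5) == 1
assert sign_mock(Q - 5) == -1
```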
2022
ASIACRYPT
Compact FE for Unbounded Attribute-Weighted Sums for Logspace from SXDH 📺
This paper presents the first functional encryption (FE) scheme for the attribute-weighted sum (AWS) functionality that supports the uniform model of computation. In such an FE scheme, encryption takes as input a pair of attributes (x, z) where the attribute x is public while the attribute z is private. A secret key corresponds to some weight function f, and decryption recovers the weighted sum f(x)z. This is an important functionality with a wide range of potential real-life applications, many of which require the attribute lengths to be flexible rather than being fixed at system setup. In the proposed scheme, the public attributes are considered as binary strings while the private attributes are considered as vectors over some finite field, both having arbitrary polynomial lengths that are not fixed at system setup. The weight functions are modelled as Logspace Turing machines. Prior schemes [Abdalla, Gong, and Wee, CRYPTO 2020 and Datta and Pal, ASIACRYPT 2021] could only support non-uniform Logspace. The proposed scheme is built in asymmetric prime-order bilinear groups and is proven adaptively simulation secure under the well-studied symmetric external Diffie-Hellman (SXDH) assumption against an arbitrary polynomial number of secret key queries both before and after the challenge ciphertext. This is the best possible level of security for FE as noted in the literature. As a special case of the proposed FE scheme, we also obtain the first adaptively simulation secure inner-product FE (IPFE) for vectors of arbitrary length that is not fixed at system setup. On the technical side, our contributions lie in extending the techniques of Lin and Luo [EUROCRYPT 2020] devised for payload-hiding attribute-based encryption (ABE) for uniform Logspace access policies avoiding the so-called “one-use” restriction in the indistinguishability-based security model, as well as the “three-slot reduction” technique for simulation-secure attribute-hiding FE for non-uniform Logspace devised by Datta and Pal [ASIACRYPT 2021], to the context of simulation-secure attribute-hiding FE for uniform Logspace.
2022
ASIACRYPT
Privacy-Preserving Authenticated Key Exchange in the Standard Model 📺
Privacy-Preserving Authenticated Key Exchange (PPAKE) provides protection both for the session keys and the identity information of the involved parties. In this paper, we introduce the concept of robustness into PPAKE. Robustness enables each user to determine whether it is the intended recipient of the first-round message in the protocol. With the help of robustness, a PPAKE protocol can avoid the heavy redundant communication and computation caused by ambiguity about the intended communicant in existing PPAKE, especially over broadcast channels. We propose a generic construction of robust PPAKE from a key encapsulation mechanism (KEM), digital signature (SIG), message authentication code (MAC), pseudorandom generator (PRG) and symmetric encryption (SE). By instantiating the KEM, MAC and PRG from the DDH assumption and the SIG from the CDH assumption, we obtain a specific robust PPAKE scheme in the standard model, which enjoys forward security for session keys, explicit authentication and forward privacy for user identities. Thanks to the robustness of our PPAKE, the number of broadcast messages per run and the computational complexity per user are constant, and in particular independent of the number of users in the system.
2022
ASIACRYPT
A New Isogeny Representation and Applications to Cryptography 📺
This paper focuses on isogeny representations, defined as witnesses of membership in the language of isogenous supersingular curves (the set of triples $D,E_1,E_2$ with a cyclic isogeny of degree $D$ between $E_1$ and $E_2$). This language and its proofs of membership are known to have several fundamental cryptographic applications, such as the construction of digital signatures and validation of encryption keys. In the first part of this article, we reinterpret known results on isogenies in the framework of languages and proofs to show that the language of isogenous supersingular curves is in \textsf{NP} with the isogeny representation derived naturally from the Deuring correspondence. Our main contribution is the design of the suborder representation, a new isogeny representation targeted at the case of (big) prime degree. The core of our new method is the revelation of endomorphisms of smooth norm inside a well-chosen suborder of the codomain's endomorphism ring. These new membership witnesses open up interesting prospects for isogeny-based cryptography under the hardness of a new computational problem: the SubOrder to Ideal Problem (SOIP). As an application, we introduce pSIDH, a new non-interactive key exchange (NIKE) based on the suborder representation. Along the way, we also develop several heuristic algorithmic tools to solve norm equations inside a new family of quaternion orders. These new algorithms may be of independent interest.
2022
ASIACRYPT
General Properties of Quantum Bit Commitments (Extended Abstract) 📺
While unconditionally-secure quantum bit commitment (allowing both quantum computation and communication) is impossible, researchers have turned to studying complexity-based ones. A canonical (non-interactive) quantum bit commitment scheme is one in which the commitment consists of just a single (quantum) message from the sender to the receiver that can be opened later by uncomputing the commit stage. In this work, we study general properties of complexity-based quantum bit commitments through the lens of canonical quantum bit commitments. Among other results, we obtain in particular the following two: 1. Any complexity-based quantum bit commitment scheme can be converted into the canonical (non-interactive) form (with its sum-binding property preserved). 2. The two flavors of canonical quantum bit commitments are equivalent; that is, canonical computationally-hiding statistically-binding quantum bit commitment exists if and only if the canonical statistically-hiding computationally-binding one exists. Combined with the first result, this immediately implies (unconditionally) that complexity-based quantum bit commitment is symmetric. Canonical quantum bit commitments can be based on quantum-secure one-way functions or pseudorandom quantum states. But in our opinion, the formulation of canonical quantum bit commitment is so clean and simple that it can itself be viewed as a plausible complexity assumption. We propose to explore canonical quantum bit commitment from the perspectives of both quantum cryptography and quantum complexity theory in future work.
2022
ASIACRYPT
Compact and Tightly Selective-Opening Secure Public-key Encryption Schemes 📺
We propose four public-key encryption schemes with tight simulation-based selective-opening security against chosen-ciphertext attacks (SIM-SO-CCA) in the random oracle model. Our ciphertexts consist of only a small constant number of group elements, ignoring smaller contributions from symmetric-key encryption; that is, our schemes have compact ciphertexts. Furthermore, three of our schemes have compact public keys as well. Known (almost) tightly SIM-SO-CCA secure PKE schemes are due to the work of Lyu et al. (PKC 2018) and Libert et al. (Crypto 2017). They have either linear-size ciphertexts or linear-size public keys. Moreover, they achieve only almost-tightness, namely, with a security loss depending on the message bit-length. In contrast, our schemes are the first to achieve both tight SIM-SO-CCA security and compactness. Our schemes can be divided into two families: - Direct Constructions. Our first three schemes are constructed directly based on the Strong Diffie-Hellman (StDH), Computational DH (CDH), and Decisional DH assumptions. Both their ciphertexts and public keys are compact. Their security loss is a small constant. Interestingly, our CDH-based construction is the first scheme achieving all these advantages based on a weak, search assumption. - A Generic Construction. Our last scheme is the well-known Fujisaki-Okamoto transformation. We show that it can turn a lossy encryption scheme into a tightly SIM-SO-CCA secure PKE. This transformation preserves both tightness and compactness of the underlying lossy encryption, in contrast to the non-tight proof of Heuer et al. (PKC 2015).
2022
ASIACRYPT
Non-Interactive Zero-Knowledge Proofs to Multiple Verifiers 📺
In this paper, we study zero-knowledge (ZK) proofs for circuit satisfiability that can efficiently convince $n$ verifiers at a time. The proofs are secure against the collusion of a prover and a subset of $t$ verifiers. We refer to such ZK proofs as multi-verifier zero-knowledge (MVZK) proofs and focus on the case where a majority of verifiers are honest (i.e., $t<n/2$). We construct efficient MVZK protocols in the random oracle model in which the prover sends one message to each verifier, while the verifiers only exchange one round of messages. When the threshold of corrupted verifiers is $t<n/2$, the prover sends $1/2+o(1)$ field elements per multiplication gate to every verifier; when $t<n(1/2-\epsilon)$ for some constant $0<\epsilon<1/2$, we can further reduce the communication to $O(1/n)$ field elements per multiplication gate per verifier. Our MVZK protocols demonstrate particularly high scalability: the proofs are streamable and require memory proportional only to what is needed to evaluate the circuit in the clear.
2022
ASIACRYPT
Knowledge Encryption and Its Applications to Simulatable Protocols With Low Round-Complexity 📺
We introduce a new notion of public-key encryption, knowledge encryption, whose ciphertext security can be reduced to the public key: any algorithm that can break ciphertext indistinguishability can be used to extract the (partial) secret key. We show that knowledge encryption can be built solely from any two-round oblivious transfer with game-based security, which is known from various standard (polynomial-hardness) assumptions, such as the DDH, Quadratic ($N^{th}$) Residuosity, or LWE assumptions. We use knowledge encryption to construct the first three-round (weakly) simulatable oblivious transfer. This protocol satisfies (fully) simulatable security for the receiver, and weakly simulatable security ($(T,\epsilon)$-simulatability) for the sender in the following sense: for any polynomial $T$ and any inverse polynomial $\epsilon$, there exists an efficient simulator such that the distinguishing gap of any distinguisher of size less than $T$ is at most $\epsilon$. Equipped with these tools, we construct a variety of fundamental cryptographic protocols with low round-complexity, assuming only the existence of two-round oblivious transfer with game-based security. These protocols include a three-round delayed-input weak zero-knowledge argument, three-round weakly secure two-party computation, three-round concurrent weak zero-knowledge in the BPK model, and a two-round commitment with weak security under selective opening attack. These results improve upon the assumptions required by previous constructions. Furthermore, all our protocols enjoy the above $(T,\epsilon)$-simulatability (stronger than distinguisher-dependent simulatability), and are quasi-polynomial time simulatable under the same (polynomial-hardness) assumption.
2022
ASIACRYPT
DAG-$\Sigma$: A DAG-based Sigma Protocol for Relations in CNF 📺
At CRYPTO 1994, Cramer, Damg{\aa}rd and Schoenmakers proposed a general method to construct proofs of knowledge (PoKs), especially for $k$-out-of-$n$ partial knowledge, whose relations can be expressed in disjunctive normal form (DNF). Since then, proofs of $k$-out-of-$n$ partial knowledge have attracted much attention and some efficient constructions have been proposed. However, many practical scenarios require efficient PoK protocols for partial knowledge in other forms. In this paper, we mainly focus on PoK protocols for $k$-conjunctive normal form ($k$-CNF) relations, which have $n$ statements and can be expressed as follows: (i) $k$ statements constitute a clause via ``OR'' operations, and (ii) the relation consists of multiple clauses via ``AND'' operations. We propose an alternative Sigma protocol (called the DAG-$\Sigma$ protocol) for $k$-CNF relations, by turning these relations into directed acyclic graphs (DAGs). Our DAG-$\Sigma$ protocol achieves lower communication cost and smaller computational overhead compared with Cramer et al.'s general method.
2022
ASIACRYPT
Failing gracefully: Decryption failures and the Fujisaki-Okamoto transform 📺
In known security reductions for the Fujisaki-Okamoto transformation, decryption failures are handled via a reduction solving the rather unnatural task of finding failing plaintexts given the private key, resulting in a Grover search bound. Moreover, they require an implicit rejection mechanism for invalid ciphertexts to achieve a reasonable security bound in the QROM. We present a reduction that has neither of these deficiencies: we introduce two security games related to finding decryption failures, one capturing the computationally hard task of using the public key to find a decryption failure, and one capturing the statistically hard task of searching the random oracle for key-independent failures (e.g., large randomness). As a result, our security bounds in the QROM are tighter than previous ones with respect to the generic random oracle search attacks: the attacker can only partially compute the search predicate, namely for said key-independent failures. In addition, our entire reduction works for the explicit-reject variant of the transformation and improves significantly over all of its known reductions. Besides being the more natural variant of the transformation, security of the explicit-reject mechanism is also relevant for the side-channel attack resilience of the implicit-rejection variant. Along the way, we prove several technical results characterizing preimage extraction and certain search tasks in the QROM that might be of independent interest.
2022
ASIACRYPT
Key-schedule Security for the TLS 1.3 Standard 📺
Transport Layer Security (TLS) is the cryptographic backbone of secure communication on the Internet. In its latest version 1.3, the standardization process has taken formal analysis into account both due to the importance of the protocol and the experience with conceptual attacks against previous versions. To manage the complexity of TLS (the specification exceeds 100 pages), prior reduction-based analyses have focused on some protocol features and omitted others, e.g., included session resumption and omitted agile algorithms or vice versa. This article is a major step towards analysing the TLS 1.3 key establishment protocol as specified at the end of its rigorous standardization process. Namely, we provide a full proof of the TLS key schedule, a core protocol component which produces output keys and internal keys of the key exchange protocol. In particular, our model supports all key derivations featured in the standard, including its negotiated modes and algorithms that combine an optional Diffie-Hellman exchange for forward secrecy with optional pre-shared keys supplied by the application or recursively established in prior sessions. Technically, we rely on state-separating proofs (Asiacrypt '18) and introduce techniques to model large and complex derivation graphs. Our key-schedule analysis techniques have subsequently been used by Brzuska, Cornelissen and Kohbrok to analyse the key schedule of Draft 11 of the MLS protocol (S&P '22) and to propose improvements.
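For orientation, here is a simplified skeleton of the key-schedule "spine" being analysed: alternating HKDF-Extract and Derive-Secret steps that mix an optional PSK and an optional (EC)DHE share. The label encoding below is deliberately simplified relative to RFC 8446 (no HkdfLabel structure, no transcript hashes), so treat it as a sketch, not the standard:

```python
# Simplified TLS 1.3 key-schedule spine: Extract / Derive-Secret alternation.
import hmac, hashlib

HASH, HLEN = hashlib.sha256, 32

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract(salt, ikm) = HMAC-Hash(salt, ikm)."""
    return hmac.new(salt, ikm, HASH).digest()

def derive_secret(secret: bytes, label: bytes) -> bytes:
    """Simplified stand-in for HKDF-Expand-Label(secret, label, transcript)."""
    return hmac.new(secret, b"tls13 " + label + b"\x01", HASH).digest()

def key_schedule(psk: bytes | None, dhe: bytes | None) -> bytes:
    zeros = b"\x00" * HLEN
    early = hkdf_extract(zeros, psk or zeros)
    handshake = hkdf_extract(derive_secret(early, b"derived"), dhe or zeros)
    master = hkdf_extract(derive_secret(handshake, b"derived"), zeros)
    return master

# The same spine covers all four modes: PSK-only, DHE-only, PSK+DHE, neither.
print(key_schedule(psk=None, dhe=b"shared-secret-from-ecdhe").hex())
```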
2022
ASIACRYPT
Optimising Linear Key Recovery Attacks with Affine Walsh Transform Pruning 📺
Linear cryptanalysis is one of the main families of key-recovery attacks on block ciphers. Several publications have drawn attention to the possibility of reducing their time complexity using the fast Walsh transform. These previous contributions ignore the structure of the key-recovery rounds, which are treated as arbitrary Boolean functions. In this paper, we optimise the time and memory complexities of these algorithms by exploiting zeroes in the Walsh spectra of these functions, using a novel affine pruning technique for the Walsh transform. These new optimisation strategies are then showcased with two application examples: an improved attack on the DES and the first known attack on 29-round PRESENT-128.
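The workhorse behind such attacks is the fast Walsh transform. The sketch below shows a standard recursive Walsh-Hadamard transform with a naive pruning rule, skipping all-zero halves, to convey the flavour; the paper's affine pruning exploits structured zero patterns far more aggressively, so this is only a simplified stand-in:

```python
# Recursive fast Walsh-Hadamard transform with naive zero-block pruning.
def fwht_pruned(a: list[int], lo: int = 0, n: int | None = None) -> None:
    """In-place Walsh-Hadamard transform of a[lo:lo+n]; n must be a power of two."""
    if n is None:
        n = len(a)
    if n == 1:
        return
    h = n // 2
    for i in range(lo, lo + h):            # butterfly layer
        a[i], a[i + h] = a[i] + a[i + h], a[i] - a[i + h]
    for half in (lo, lo + h):              # recurse, pruning all-zero blocks
        if any(a[half:half + h]):
            fwht_pruned(a, half, h)

table = [0] * 8
table[3] = 1                               # sparse input: most work is pruned
fwht_pruned(table)
print(table)                               # Walsh spectrum of the point mass at 3
```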
2022
ASIACRYPT
BLOOM: Bimodal Lattice One-Out-of-Many Proofs and Applications 📺
We give a construction of an efficient one-out-of-many proof system, in which a prover shows knowledge of the pre-image of one element in a set, based on the hardness of lattice problems. The construction employs the recent zero-knowledge framework of Lyubashevsky et al. (Crypto 2022), together with a recursive procedure that improves over prior lattice-based one-out-of-many proofs, and a novel rejection sampling proof that allows using the efficient bimodal rejection sampling throughout the protocol. Using these new primitives and techniques, we give instantiations of the most compact lattice-based ring and group signature schemes. The improvement in signature sizes over prior works ranges between $25\%$ and $2\times$. Perhaps of even more significance, the size of the user public keys, which need to be stored somewhere publicly accessible in order for ring signatures to be meaningful, is reduced by factors ranging from $7\times$ to $15\times$. In what could be of independent interest, we also provide noticeably improved proofs for integer relations, which, together with one-out-of-many proofs, are key components of confidential payment systems.
2022
CRYPTO
Oblivious Message Retrieval 📺
Anonymous message delivery systems, such as private messaging services and privacy-preserving payment systems, need a mechanism for recipients to retrieve the messages addressed to them, without leaking metadata or letting their messages be linked. Recipients could download all posted messages and scan for those addressed to them, but communication and computation costs are excessive at scale. We show how untrusted servers can detect messages on behalf of recipients, and summarize these into a compact encrypted digest that recipients can easily decrypt. These servers operate obliviously and do not learn anything about which messages are addressed to which recipients. Privacy, soundness, and completeness hold even if everyone but the recipient is adversarial and colluding (unlike in prior schemes). Our starting point is an asymptotically-efficient approach, using Fully Homomorphic Encryption and homomorphically-encoded Sparse Random Linear Codes. We then address the concrete performance using bespoke tailoring of lattice-based cryptographic components, alongside various algebraic and algorithmic optimizations. This reduces the digest size to a few bits per message scanned. Concretely, the servers' cost is ~$1 per million messages scanned, and the resulting digests can be decoded by recipients in ~20ms. Our schemes can thus practically attain the strongest form of receiver privacy for current applications such as privacy-preserving cryptocurrencies.
2022
CRYPTO
Time-Space Lower Bounds for Finding Collisions in Merkle-Damgard Hash Functions 📺
We revisit the problem of finding B-block-long collisions in Merkle-Damgard hash functions in the auxiliary-input random oracle model, in which an attacker gets a piece of S-bit advice about the random oracle and makes T oracle queries. Akshima, Cash, Drucker and Wee (CRYPTO 2020), based on the work of Coretti, Dodis, Guo and Steinberger (EUROCRYPT 2018), showed a simple attack for 2\leq B\leq T (with respect to a random salt). The attack achieves advantage \Tilde{\Omega}(STB/2^n+T^2/2^n), where n is the output length of the random oracle. They conjectured that this attack is optimal. However, this so-called STB conjecture was only proved for B\approx T and B=2. Very recently, Ghoshal and Komargodski (CRYPTO 2022) confirmed the STB conjecture for all constant values of B, and provided an \Tilde{O}(S^4TB^2/2^n+T^2/2^n) bound for all choices of B. In this work, we prove an \Tilde{O}((STB/2^n)\cdot\max\{1,ST^2/2^n\}+ T^2/2^n) bound for every 2< B < T. Our bound confirms the STB conjecture for ST^2\leq 2^n, and is optimal up to a factor of S for ST^2>2^n (note that T^2\leq 2^n may be assumed, as otherwise finding a collision is trivial by the birthday attack). Our result subsumes all previous upper bounds for all ranges of parameters except for B=\Tilde{O}(1) and ST^2>2^n. We obtain our results by adopting and refining the technique of Chung, Guo, Liu, and Qian (FOCS 2020). Our approach yields more modular proofs and sheds light on how to bypass the limitations of prior techniques. Along the way, we obtain a considerably simpler and more illuminating proof for B=2, recovering the main result of Akshima, Cash, Drucker and Wee.
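To fix terminology, the following sketch spells out salted Merkle-Damgard iteration with a random-oracle stand-in and the predicate a B-block collision must satisfy (the SHA-256 truncation and all sizes are toy choices; the example messages almost certainly do not collide, and finding ones that do is the attacker's game):

```python
# Salted Merkle-Damgard iteration and the B-block collision predicate.
import hashlib

def compress(salt: bytes, state: bytes, block: bytes) -> bytes:
    """Random-oracle stand-in with a deliberately tiny 32-bit range."""
    return hashlib.sha256(salt + state + block).digest()[:4]

def md_hash(salt: bytes, blocks: list[bytes]) -> bytes:
    state = b"\x00" * 4                      # fixed IV
    for blk in blocks:
        state = compress(salt, state, blk)
    return state

# A B-block collision (B = 2 here) is a pair of distinct block sequences of
# length <= B with equal digests under the same random salt.
salt = b"public-salt"
m1 = [b"blockA", b"blockB"]
m2 = [b"blockC", b"blockD"]
is_collision = m1 != m2 and md_hash(salt, m1) == md_hash(salt, m2)
print(is_collision)
```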
2022
CRYPTO
Universally Composable End-to-End Secure Messaging 📺
We model and analyze the Signal end-to-end messaging protocol within the UC framework. In particular: - We formulate an ideal functionality that captures end-to-end secure messaging, in a setting with PKI and an untrusted server, against an adversary that has full control over the network and can adaptively and momentarily compromise parties at any time and obtain their entire internal states. In particular our analysis captures the forward secrecy and recovery-of-security properties of Signal and the conditions under which they break. - We model the main components of the Signal architecture (PKI and long-term keys, the backbone continuous-key-exchange or "asymmetric ratchet," epoch-level symmetric ratchets, authenticated encryption) as individual ideal functionalities that are realized and analyzed separately and then composed using the UC and Global-State UC theorems. - We show how the ideal functionalities representing these components can be realized using standard cryptographic primitives under minimal hardness assumptions. Our modeling introduces additional innovations that enable arguing about the security of Signal irrespective of the underlying communication medium, as well as secure composition of dynamically generated modules that share state. These features, together with the basic modularity of the UC framework, will hopefully facilitate the use of both Signal-as-a-whole and its individual components within cryptographic applications. Two other features of our modeling are the treatment of fully adaptive corruptions, and making minimal use of random oracle abstractions. In particular, we show how to realize continuous key exchange in the plain model, while preserving security against adaptive corruptions.
2022
CRYPTO
Correlated Pseudorandomness from Expand-Accumulate Codes 📺
A pseudorandom correlation generator (PCG) is a recent tool for securely generating useful sources of correlated randomness, such as random oblivious transfers (OT) and vector oblivious linear evaluations (VOLE), with low communication cost. We introduce a simple new design for PCGs based on so-called expand-accumulate codes, which first apply a sparse random expander graph to replicate each message entry, and then accumulate the entries by computing the sum of each prefix. Our design offers the following advantages compared to state-of-the-art PCG constructions: - Competitive concrete efficiency backed by provable security against relevant classes of attacks; - An offline-online mode that combines near-optimal cache-friendliness with simple parallelization; - Concretely efficient extensions to pseudorandom correlation functions, which enable incremental generation of new correlation instances on demand, and to new kinds of correlated randomness that include circuit-dependent correlations. To further improve the concrete computational cost, we propose a method for speeding up a full-domain evaluation of a puncturable pseudorandom function (PPRF). This is independently motivated by other cryptographic applications of PPRFs.
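A minimal sketch of the expand-accumulate encoder over F_2 follows (the row weight and sizes are illustrative picks, not the paper's concrete parameters): each coordinate of the expansion XORs a few random message positions, then accumulation takes prefix sums:

```python
# Expand-accumulate encoding over F_2: sparse expander, then prefix sums.
import random

def expand_accumulate(msg: list[int], out_len: int, row_weight: int = 5,
                      seed: int = 0) -> list[int]:
    rng = random.Random(seed)
    # Expand: each output bit is the XOR of `row_weight` random message bits.
    expanded = [
        sum(msg[j] for j in rng.sample(range(len(msg)), row_weight)) % 2
        for _ in range(out_len)
    ]
    # Accumulate: running XOR of all previous entries (prefix sums over F_2).
    for i in range(1, out_len):
        expanded[i] ^= expanded[i - 1]
    return expanded

msg = [random.Random(1).randrange(2) for _ in range(64)]
print(expand_accumulate(msg, out_len=128)[:16])
```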
2022
CRYPTO
Better than Advertised Security for Non-Interactive Threshold Signatures 📺
We give a unified syntax, and a hierarchy of definitions of security of increasing strength, for non-interactive threshold signature schemes. These are schemes having a single-round signing protocol, possibly with one prior round of message-independent pre-processing. We fit FROST1 and BLS, which are leading practical schemes, into our hierarchy, in particular showing they meet stronger security definitions than they have been shown to meet so far. We also fit in our hierarchy a more efficient version FROST2 of FROST1 that we give. These definitions and results, for simplicity, all assume trusted key generation. Finally, we prove the security of FROST2 with key generation performed by an efficient distributed key generation protocol.
2022
CRYPTO
Quantum Commitments and Signatures without One-Way Functions 📺
In the classical world, the existence of commitments is equivalent to the existence of one-way functions. In the quantum setting, on the other hand, commitments are not known to imply one-way functions, but all known constructions of quantum commitments use at least one-way functions. Are one-way functions really necessary for commitments in the quantum world? In this work, we show that non-interactive quantum commitments (for classical messages) with computational hiding and statistical binding exist if pseudorandom quantum states exist. Pseudorandom quantum states are sets of quantum states that are efficiently generated but their polynomially many copies are computationally indistinguishable from the same number of copies of Haar random states [Ji, Liu, and Song, CRYPTO 2018]. It is known that pseudorandom quantum states exist even if BQP = QMA (relative to a quantum oracle) [Kretschmer, TQC 2021], which means that pseudorandom quantum states can exist even if no quantum-secure classical cryptographic primitive exists. Our result therefore shows that quantum commitments can exist even if no quantum-secure classical cryptographic primitive exists. In particular, quantum commitments can exist even if no quantum-secure one-way function exists. In this work, we also consider digital signatures, which are other fundamental primitives in cryptography. We show that one-time secure digital signatures with quantum public keys exist if pseudorandom quantum states exist. In the classical setting, the existence of digital signatures is equivalent to the existence of one-way functions. Our result, on the other hand, shows that quantum signatures can exist even if no quantum-secure classical cryptographic primitive (including quantum-secure one-way functions) exists.
2022
CRYPTO
Implicit White-Box Implementations: White-Boxing ARX Ciphers 📺
Since the first white-box implementation of AES was published twenty years ago, no significant progress has been made in the design of secure implementations against an attacker with full control of the device. Designing white-box implementations of existing block ciphers is a challenging problem, as all proposals have been broken. Only two white-box design strategies have been published thus far: the CEJO framework, which can only be applied to ciphers with small S-boxes, and self-equivalence encodings, which were only applied to AES. In this work we propose implicit implementations, a new design of white-box implementations based on implicit functions, and we show that current generic attacks that break CEJO or self-equivalence implementations are not successful against implicit implementations. The generation and the security of implicit implementations are related to the self-equivalences of the non-linear layer of the cipher, and we propose a new method to obtain self-equivalences based on CCZ-equivalence. We implemented this method and many other functionalities in a new open-source tool BoolCrypt, which we used to obtain for the first time affine, linear, and even quadratic self-equivalences of the permuted modular addition. Using the implicit framework and these self-equivalences, we describe for the first time a practical white-box implementation of a generic Addition-Rotation-XOR (ARX) cipher, and we provide an open-source tool to easily generate implicit implementations of ARX ciphers.
2022
CRYPTO
Simplified MITM Modeling for Permutations: New (Quantum) Attacks 📺
Meet-in-the-middle (MITM) is a general paradigm where internal states are computed along two independent paths ('forwards' and 'backwards') that are then matched. Over time, MITM attacks have improved using more refined techniques and exploiting additional freedoms and structure, which makes it more involved to find and optimize such attacks. This has led to the use of detailed attack models for generic solvers to automatically search for improved attacks, notably a MILP model developed by Bao et al. at EUROCRYPT 2021. In this paper, we study a simpler MILP modeling combining a greatly reduced attack representation as input to the generic solver, together with a theoretical analysis that, for any solution, proves the existence and complexity of a detailed attack. This modeling allows us to find both classical and quantum attacks on a broad class of cryptographic permutations. First, Present-like constructions, with the permutations of the Spongent hash functions: we improve the MITM step in distinguishers by up to 3 rounds. Second, AES-like designs: despite being much simpler than Bao et al.'s, our model allows us to recover the best previous results. The only limitation is that we do not use degrees of freedom from the key schedule. Third, we show that the model can be extended to target more permutations, like Feistel networks. In this context we give new guess-and-determine attacks on reduced Simpira v2 and Sparkle. Finally, using our model, we find several new quantum preimage and pseudo-preimage attacks (e.g. Haraka v2, Simpira v2 ...) targeting the same number of rounds as the classical attacks.
2022
CRYPTO
Batch Arguments for NP and More from Standard Bilinear Group Assumptions 📺
Best Paper Award
Non-interactive batch arguments for NP provide a way to amortize the cost of NP verification across multiple instances. They enable a prover to convince a verifier of multiple NP statements with communication much smaller than the total witness length and verification time much smaller than individually checking each instance. In this work, we give the first construction of a non-interactive batch argument for NP from standard assumptions on groups with bilinear maps (specifically, from either the subgroup decision assumption in composite-order groups or from the k-Lin assumption in prime-order groups for any k >= 1). Previously, batch arguments for NP were only known from LWE, or a combination of multiple assumptions, or from non-standard/non-falsifiable assumptions. Moreover, our work introduces a new direct approach for batch verification and avoids heavy tools like correlation-intractable hash functions or probabilistically-checkable proofs common to previous approaches. As corollaries to our main construction, we obtain the first publicly-verifiable non-interactive delegation scheme for RAM programs (i.e., a succinct non-interactive argument (SNARG) for P) with a CRS of sublinear size (in the running time of the RAM program), as well as the first aggregate signature scheme (supporting bounded aggregation) from standard assumptions on bilinear maps.
2022
CRYPTO
Constructive Post-Quantum Reductions 📺
Is it possible to convert classical reductions into post-quantum ones? It is customary to argue that while this is problematic in the interactive setting, non-interactive reductions do carry over. However, when considering quantum auxiliary input, this conversion results in a *non-constructive* post-quantum reduction that requires duplicating the quantum auxiliary input, which is in general inefficient or even impossible. This violates the win-win premise of provable cryptography: an attack against a cryptographic primitive should lead to an algorithmic advantage. We initiate the study of constructive quantum reductions and present positive and negative results for converting large classes of classical reductions to the post-quantum setting in a constructive manner. We show that any non-interactive non-adaptive reduction from assumptions with a polynomial solution space (such as decision assumptions) can be made post-quantum constructive. In contrast, assumptions with super-polynomial solution space (such as general search assumptions) cannot be generally converted. Along the way, we make several additional contributions: 1. We put forth a framework for reductions (or general interaction) with *stateful* solvers for a computational problem, that may change their internal state between consecutive calls. We show that such solvers can still be utilized. This framework and our results are meaningful even in the classical setting. 2. A consequence of our negative result is that quantum auxiliary input that is useful against a problem with a super-polynomial solution space cannot be generically ``restored'' post-measurement. This shows that the novel rewinding technique of Chiesa et al.\ (FOCS 2021) is tight in the sense that it cannot be extended beyond a polynomial measurement space.
2022
CRYPTO
An Algebraic Framework for Silent Preprocessing with Trustless Setup and Active Security 📺
Recently, number-theoretic assumptions including DDH, DCR and QR have been used to build powerful tools for secure computation, in the form of homomorphic secret-sharing (HSS), which leads to secure two-party computation protocols with succinct communication, and pseudorandom correlation functions (PCFs), which allow non-interactive generation of a large quantity of correlated randomness. In this work, we present a group-theoretic framework for these classes of constructions, which unifies their approach to computing distributed discrete logarithms in various groups. We cast existing constructions in our framework, and also present new constructions, including one based on class groups of imaginary quadratic fields. This leads to the first construction of two-party homomorphic secret sharing for branching programs from class group assumptions. Using our framework, we also obtain pseudorandom correlation functions for generating oblivious transfer and vector-OLE correlations from number-theoretic assumptions. These have a trustless, public-key setup when instantiating our framework using class groups. Previously, such constructions either needed a trusted setup in the form of an RSA modulus with unknown factorisation, or relied on multi-key fully homomorphic encryption from the learning with errors assumption. We also show how to upgrade our constructions to achieve active security using appropriate zero-knowledge proofs. In the random oracle model, this leads to a one-round, actively secure protocol for setting up the PCF, as well as a 3-round, actively secure HSS-based protocol for secure two-party computation of branching programs with succinct communication.
2022
CRYPTO
Some Easy Instances of Ideal-SVP and Implications to the Partial Vandermonde Knapsack Problem 📺
Best Paper by Early Career Researcher Award
In this article, we generalize the works of Pan et al. (Eurocrypt'21) and Porter et al. (ArXiv'21) and provide a simple condition under which an ideal lattice defines an easy instance of the shortest vector problem. Namely, we show that the more automorphisms stabilize the ideal, the easier it is to find a short vector in it. This observation was already made for prime ideals in Galois fields, and we generalize it to any ideal (whose prime factors are not ramified) of any number field. We then provide a cryptographic application of this result by showing that particular instances of the partial Vandermonde knapsack problem, also known as partial Fourier recovery problem, can be solved classically in polynomial time. As a proof of concept, we implemented our attack and managed to solve those particular instances for concrete parameter settings proposed in the literature. For random instances, we can halve the lattice dimension with non-negligible probability.
2022
CRYPTO
Public-Key Watermarking Schemes for Pseudorandom Functions 📺
A software watermarking scheme can embed a message into a program while preserving its functionality. The embedded message can be extracted later by an extraction algorithm, and no one could remove it without significantly changing the functionality of the program. A watermarking scheme is public key if neither the marking procedure nor the extraction procedure needs a watermarking secret key. Prior constructions of watermarking schemes mainly focus on watermarking pseudorandom functions (PRFs), and the major open problem in this direction is to construct a public-key watermarkable PRF. In this work, we solve the open problem via constructing public-key watermarkable PRFs with different trade-offs from various assumptions, ranging from standard lattice assumptions to the existence of indistinguishability obfuscation. To achieve the results, we first construct watermarking schemes in a weaker model, where the extraction algorithm is provided with a “hint” about the watermarked PRF key. Then we upgrade the constructions to standard watermarking schemes using a robust unobfuscatable PRF. We also provide the first construction of robust unobfuscatable PRF in this work, which is of independent interest.
2022
CRYPTO
Rotational Differential-Linear Distinguishers of ARX Ciphers with Arbitrary Output Linear Masks 📺
The rotational differential-linear attack, proposed at EUROCRYPT 2021, is a generalization of differential-linear attacks in which the differential part of the attack is replaced with rotational differentials. At EUROCRYPT 2021, Liu et al. presented a method based on Morawiecki et al.'s technique (FSE 2013) for evaluating the rotational differential-linear correlations for the special cases where the output linear masks are unit vectors. With this method, some powerful (rotational) differential-linear distinguishers with output linear masks being unit vectors against Friet, Xoodoo, and Alzette were discovered. However, how to compute the rotational differential-linear correlations for arbitrary output masks was left open. In this work, we partially solve this open problem by presenting an efficient algorithm for computing the (rotational) differential-linear correlation of modular addition for arbitrary output linear masks, based on which a technique for evaluating the (rotational) differential-linear correlation of ARX ciphers is derived. We apply the technique to Alzette, SipHash, ChaCha, and Speck. As a result, significantly improved (rotational) differential-linear distinguishers, including deterministic ones, are identified. All results of this work are practical and experimentally verified to confirm the validity of our methods. In addition, we explain the experimental distinguishers employed at FSE 2008, FSE 2016, and CRYPTO 2020 against ChaCha. The predicted correlations are close to the experimental ones.
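The quantity the paper computes exactly can be estimated empirically as a sanity check (Monte Carlo only; the word size, differences, and masks below are arbitrary picks): the correlation of the mask parity of (x+y) ^ ((x^dx)+(y^dy)) over random x, y:

```python
# Monte-Carlo estimate of the differential-linear correlation of modular addition.
import random

def dl_correlation(n: int, dx: int, dy: int, mask: int,
                   trials: int = 1 << 16) -> float:
    mod, acc = 1 << n, 0
    for _ in range(trials):
        x, y = random.randrange(mod), random.randrange(mod)
        out_diff = ((x + y) % mod) ^ (((x ^ dx) + (y ^ dy)) % mod)
        parity = bin(out_diff & mask).count("1") & 1
        acc += 1 - 2 * parity            # +1 for even parity, -1 for odd
    return acc / trials

# Example: difference in the LSB of one addend, mask on the two low bits.
print(dl_correlation(n=32, dx=1, dy=0, mask=0b11))
```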
2022
CRYPTO
Lattice-Based Zero-Knowledge Proofs and Applications: Shorter, Simpler, and More General 📺
We present a much-improved practical protocol, based on the hardness of the Module-SIS and Module-LWE problems, for proving knowledge of a short vector $s$ satisfying $As=t\bmod q$. The currently most efficient technique for constructing such a proof works by showing that the $\ell_\infty$ norm of $s$ is small. It creates a commitment to a polynomial vector $m$ whose CRT coefficients are the coefficients of $s$ and then shows that (1) $A\cdot \mathsf{CRT}(m)=t\bmod\,q$ and (2), in the case that we want to prove that the $\ell_\infty$ norm is at most $1$, the polynomial product $(m - 1)\cdot m\cdot(m+1)$ equals $0$. While these schemes are already quite efficient for practical applications, the requirement of using the CRT embedding and of being naturally adapted only to proving the $\ell_\infty$ norm hinders the efficiency of this approach. In this work, we show that there is a direct and efficient way to prove that the coefficients of $s$ have a small $\ell_2$ norm which does not require an equivocation with the $\ell_\infty$ norm, nor any conversion to the CRT representation. We observe that the inner product between two vectors $r$ and $s$ can be made to appear as a coefficient of a product (or sum of products) between polynomials which are functions of $r$ and $s$. Thus, by using a polynomial product proof system and hiding all but one coefficient, we are able to prove knowledge of the inner product of two vectors modulo $q$. Using a cheap, approximate range proof, one can then lift the proof to be over $\mathbb{Z}$ instead of $\mathbb{Z}_q$. Our protocols for proving short norms work over all (interesting) polynomial rings, but are particularly efficient for rings like $\mathbb{Z}[X]/(X^n+1)$ in which the function relating the inner product of vectors and polynomial products happens to be a ``nice'' automorphism. The new proof system can be plugged into constructions of various lattice-based privacy primitives in a black-box manner. As examples, we instantiate a verifiable encryption scheme and a group signature scheme which are more than twice as compact as the previously best solutions.
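The observation about inner products and polynomial coefficients is easy to check concretely: in $\mathbb{Z}[X]/(X^n+1)$, applying the automorphism sigma: X -> X^{-1} to r and multiplying by s places <r, s> in the constant coefficient. A small self-contained check (toy n, plain integer coefficients):

```python
# Inner product as the constant coefficient of sigma(r) * s in Z[X]/(X^n + 1).
import random

def negacyclic_mul(a: list[int], b: list[int]) -> list[int]:
    """Multiply two polynomials in Z[X]/(X^n + 1), coefficients as lists."""
    n = len(a)
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                out[k] += ai * bj
            else:
                out[k - n] -= ai * bj    # reduction via X^n = -1
    return out

def sigma(a: list[int]) -> list[int]:
    """Apply X -> X^{-1}: since X^{-i} = -X^{n-i}, coefficient i moves (negated)."""
    n = len(a)
    out = [a[0]] + [0] * (n - 1)
    for i in range(1, n):
        out[n - i] = -a[i]
    return out

n = 8
r = [random.randrange(-5, 6) for _ in range(n)]
s = [random.randrange(-5, 6) for _ in range(n)]
assert negacyclic_mul(sigma(r), s)[0] == sum(x * y for x, y in zip(r, s))
```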
2022
CRYPTO
Dynamic Local Searchable Symmetric Encryption 📺
In this article, we tackle for the first time the problem of \emph{dynamic} memory-efficient Searchable Symmetric Encryption (SSE). In the term ``memory-efficient'' SSE, we encompass both the goals of \emph{local} SSE and \emph{page-efficient} SSE. The centerpiece of our approach is a novel connection between those two goals. We introduce a map, called the Generic Local Transform, which takes as input a \emph{page-efficient} SSE scheme with certain special features, and outputs an SSE scheme with strong \emph{locality} properties. We obtain several results. (1) First, for page-efficient SSE with page size $p$, we build a \emph{dynamic} scheme with storage efficiency $O(1)$ and page efficiency $O(\log \log (N/p))$, called LayeredSSE. The main technical innovation behind LayeredSSE is a novel weighted extension of the two-choice allocation process, of independent interest. (2) Second, we introduce the Generic Local Transform, and combine it with LayeredSSE to build a \emph{dynamic} SSE scheme with storage efficiency $O(1)$, locality $O(1)$, and read efficiency $O(\log\log N)$, under the condition that the longest list is of size $O(N^{1-1/\log \log \lambda})$. This matches, in every respect, the purely \emph{static} construction of Asharov et al. presented at STOC 2016: dynamism comes at no extra cost. (3) Finally, by applying the Generic Local Transform to a variant of the Tethys scheme by Bossuat et al. from Crypto 2021, we build an unconditional static SSE with storage efficiency $O(1)$, locality $O(1)$, and read efficiency $O(\log^\varepsilon N)$, for an arbitrarily small constant $\varepsilon > 0$. To our knowledge, this is the construction that comes closest to the lower bound presented by Cash and Tessaro at Eurocrypt 2014.
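For background on the allocation process behind LayeredSSE: the classical (unweighted) two-choice process places each item in the less-loaded of two random bins, keeping the maximum load at O(log log n) with high probability; the paper's contribution is a weighted extension of this process. A minimal sketch of the classical version:

```python
# Classical "power of two choices" allocation.
import random

def two_choice_allocate(n_items: int, n_bins: int, seed: int = 0) -> list[int]:
    rng = random.Random(seed)
    loads = [0] * n_bins
    for _ in range(n_items):
        i, j = rng.randrange(n_bins), rng.randrange(n_bins)
        loads[i if loads[i] <= loads[j] else j] += 1   # pick the lighter bin
    return loads

loads = two_choice_allocate(n_items=1 << 16, n_bins=1 << 16)
print("max load:", max(loads))   # typically ~ log log n + O(1)
```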
2022
CRYPTO
A New Approach to Efficient Non-Malleable Zero-Knowledge 📺
Non-malleable zero-knowledge, originally introduced in the context of man-in-the-middle attacks, serves as an important building block to protect against concurrent attacks where different protocols may coexist and interleave. While this primitive admits almost optimal constructions in the plain model, they are several orders of magnitude slower in practice than standalone zero-knowledge. This is in sharp contrast to non-malleable commitments where practical constructions (under the DDH assumption) have been known for a while. We present a new approach for constructing efficient non-malleable zero-knowledge for all languages in NP, based on a new primitive called instance-based non-malleable commitment (IBNMC). We show how to construct practical IBNMC by leveraging the fact that simulators of sub-linear zero-knowledge protocols can be much faster than the honest prover algorithm. With an efficient implementation of IBNMC, our approach yields the first general-purpose non-malleable zero-knowledge protocol that achieves practical efficiency in the plain model. All of our protocols can be instantiated from symmetric primitives such as block-ciphers and hash functions, have reasonable efficiency in practice, and are general-purpose. Our techniques also yield the first efficient non-malleable commitment scheme without public-key assumptions.
2022
CRYPTO
Statistically Sender-Private OT From LPN and Derandomization 📺
We construct a two-message oblivious transfer protocol with statistical sender privacy (SSP OT) based on the Learning Parity with Noise (LPN) Assumption and a standard Nisan-Wigderson style derandomization assumption. Beyond being of interest on their own, SSP OT protocols have proven to be a powerful tool toward minimizing the round complexity in a wide array of cryptographic applications from proofs systems, through secure computation protocols, to hard problems in statistical zero knowledge (SZK). The protocol is plausibly post-quantum secure. The only other constructions with plausible post quantum security are based on the Learning with Errors (LWE) Assumption. Lacking the geometric structure of LWE, our construction and analysis rely on a different set of techniques. Technically, we first construct an SSP OT protocol in the common random string model from LPN alone, and then derandomize the common random string. Most of the technical difficulty lies in the first step. Here we prove a robustness property of the inner product randomness extractor to a certain type of linear splitting attacks. A caveat of our construction is that it relies on the so called low noise regime of LPN. This aligns with our current complexity-theoretic understanding of LPN, which only in the low noise regime is known to imply hardness in SZK.
2022
CRYPTO
Efficient NIZKs and Signatures from Commit-and-Open Protocols in the QROM 📺
Commit-and-open sigma-protocols are a popular class of protocols for constructing non-interactive zero-knowledge arguments and digital-signature schemes via the Fiat-Shamir transformation. Instantiated with hash-based commitments, the resulting non-interactive schemes enjoy tight online-extractability in the random oracle model. Online extractability improves the tightness of security proofs for the resulting digital-signature schemes by avoiding lossy rewinding or forking-lemma based extraction. In this work, we prove tight online extractability in the quantum random oracle model (QROM), showing that the construction supports post-quantum security. First, we consider the default case where committing is done by element-wise hashing. In a second part, we extend our result to Merkle-tree based commitments. Our results yield a significant improvement of the provable post-quantum security of the digital-signature scheme Picnic. Our analysis makes use of a recent framework by Chung et al. [CFHL21] for analysing quantum algorithms in the QROM using purely classical reasoning. Therefore, our results can to a large extent be understood and verified without prior knowledge of quantum information science.
2022
CRYPTO
(Nondeterministic) Hardness vs. Non-Malleability 📺
We present the first truly explicit constructions of \emph{non-malleable codes} against tampering by bounded polynomial size circuits. These objects imply unproven circuit lower bounds, and our construction is secure provided $\mathsf{E}$ requires exponential size nondeterministic circuits, an assumption from the derandomization literature. Prior works on NMC for polysize circuits either required an untamperable CRS [Cheraghchi, Guruswami ITCS'14; Faust, Mukherjee, Venturi, Wichs EUROCRYPT'14] or very strong cryptographic assumptions [Ball, Dachman-Soled, Kulkarni, Lin, Malkin EUROCRYPT'18; Dachman-Soled, Komargodski, Pass CRYPTO'21]. Both works in the latter category only achieve non-malleability with respect to efficient distinguishers and, more importantly, utilize cryptographic objects for which no provably secure instantiations are known outside the random oracle model. In this sense, none of the prior works yields fully explicit codes from non-heuristic assumptions. Our assumption is not known to imply the existence of one-way functions, which suggests that cryptography is unnecessary for non-malleability against this class. Technically, security is shown by \emph{non-deterministically} reducing polynomial size tampering to split-state tampering. The technique is general enough that it allows us to construct the first \emph{seedless non-malleable extractors} [Cheraghchi, Guruswami TCC'14] for sources sampled by polynomial size circuits [Trevisan, Vadhan FOCS'00] (resp.~recognized by polynomial size circuits [Shaltiel CC'11]) and tampered by polynomial size circuits. Our construction is secure assuming $\mathsf{E}$ requires exponential size $\Sigma_4$-circuits (resp. $\Sigma_3$-circuits); this assumption is the state of the art for extracting randomness from such sources (without non-malleability). Several additional results are included in the full version of this paper [Eprint 2022/070]. First, we observe that non-malleable codes and non-malleable secret sharing [Goyal, Kumar STOC'18] are essentially equivalent with respect to polynomial size tampering. In more detail, assuming $\mathsf{E}$ is hard for exponential size nondeterministic circuits, any efficient secret sharing scheme can be made non-malleable against polynomial size circuit tampering. Second, we observe that the fact that our constructions only achieve inverse polynomial (statistical) security is inherent. Extending a result from [Applebaum, Artemenko, Shaltiel, Yang CC'16], we show it is impossible to do better using black-box reductions. However, we extend the notion of relative error from [Applebaum, Artemenko, Shaltiel, Yang CC'16] to non-malleable extractors and show that they can be constructed from similar assumptions. Third, we observe that relative-error non-malleable extractors can be utilized to render a broad class of cryptographic primitives tamper and leakage resilient, while preserving negligible security guarantees.
2022
CRYPTO
On Time-Space Tradeoffs for Bounded-Length Collisions in Merkle-Damgård Hashing 📺
We study the power of preprocessing adversaries in finding bounded-length collisions in the widely used Merkle-Damgard (MD) hashing in the random oracle model. Specifically, we consider adversaries which have arbitrary $S$-bit advice about the random oracle and can make at most $T$ queries to it. Our goal is to characterize the advantage of such adversaries in finding a $B$-block collision in an MD hash function constructed using the random oracle with range size $N$ as the compression function (given a random salt). The answer to this question is completely understood for very large values of $B$ (essentially $\Omega(T)$) as well as for $B=1,2$. For $B\approx T$, Coretti et al.~(EUROCRYPT '18) gave matching upper and lower bounds of $\tilde\Theta(ST^2/N)$. Akshima et al.~(CRYPTO '20) observed that the attack of Coretti et al.\ could be adapted to work for any value of $B>1$, giving an attack with advantage $\tilde\Omega(STB/N + T^2/N)$. Unfortunately, they could only prove that this attack is optimal for $B=2$. Their proof involves a compression argument with exhaustive case analysis and, as they claim, a naive attempt to generalize their bound to larger values of $B$ (even for $B=3$) would lead to an explosion in the number of cases needed to be analyzed, making it unmanageable. With the lack of a more general upper bound, they formulated the \emph{STB conjecture}, stating that the best-possible advantage is $\tilde{O}(STB/N + T^2/N)$ for any $B>1$. In this work, we confirm the STB conjecture in many new parameter settings. For instance, in one result, we show that the conjecture holds for all constant values of $B$, significantly extending the result of Akshima et al. Further, using combinatorial properties of graphs, we are able to confirm the conjecture even for super constant values of $B$, as long as some restriction is made on $S$. For instance, we confirm the conjecture for all $B \le T^{1/4}$ as long as $S \le T^{1/8}$. Technically, we develop structural characterizations for bounded-length collisions in MD hashing that allow us to give a compression argument in which the number of cases needed to be handled does not explode.
2022
CRYPTO
Lower Bound on SNARGs in the Random Oracle Model 📺
Succinct non-interactive arguments (SNARGs) have become a fundamental primitive in the cryptographic community. The focus of this work is constructions of SNARGs in the Random Oracle Model (ROM). Such SNARGs enjoy post-quantum security and can be deployed using lightweight cryptography to heuristically instantiate the random oracle. A ROM-SNARG is \emph{$(t,\varepsilon)$-sound} if no $t$-query malicious prover can convince the verifier to accept a false statement with probability larger than $\varepsilon$. Recently, Chiesa and Yogev (CRYPTO '21) presented a ROM-SNARG of length ${\Theta}(\log (t/\varepsilon) \cdot \log t)$ (ignoring $\log n$ factors, for $n$ being the instance size). This improvement, however, is still far from the (folklore) lower bound of $\Omega(\log (t/\varepsilon))$. Assuming the \textit{randomized exponential-time hypothesis}, we prove a tight lower bound of ${\Omega}(\log (t/\varepsilon) \cdot \log t)$ on the length of $(t,\varepsilon)$-sound ROM-SNARGs. Our lower bound holds for constructions with non-adaptive verifiers and a strong soundness notion called \textit{salted soundness}, restrictions that hold for \emph{all} known constructions (ignoring contrived counterexamples). We prove our lower bound by transforming any short ROM-SNARG (of the considered family) into a ROM-SNARG of the same length in which the verifier asks only a \emph{few} oracle queries, and then applying the recent lower bound of Chiesa and Yogev (TCC '20) for such SNARGs.
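To make the gap concrete, here is an illustrative calculation with parameters of our own choosing (not from the paper). For a prover making $t = 2^{64}$ queries and soundness error $\varepsilon = 2^{-128}$, the Chiesa-Yogev length, now shown tight, is $\log(t/\varepsilon) \cdot \log t = (64 + 128) \cdot 64 = 12288$ bits, i.e., about 1.5 KB, whereas the folklore bound $\Omega(\log(t/\varepsilon))$ alone would permit arguments of only $192$ bits; the theorem says the extra $\log t$ factor cannot be removed for salted-sound constructions with non-adaptive verifiers.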
2022
CRYPTO
Triangulating Rebound Attack on AES-like Hashing 📺
The rebound attack was introduced by Mendel et al. at FSE~2009 to fulfill a heavy middle round of a differential path for free, utilizing the degrees of freedom from states. The inbound phase was extended to 2 rounds by the Super-Sbox technique, invented by Lamberger et al. at ASIACRYPT~2009 and Gilbert and Peyrin at FSE~2010. At ASIACRYPT~2010, Sasaki et al. further reduced the memory requirement by introducing the non-full-active Super-Sbox. In this paper, we further develop this line of research by introducing Super-Inbound, which is able to connect multiple 1-round or 2-round (non-full-active) Super-Sbox inbound phases by fully utilizing the degrees of freedom from both states and key, yet without the use of large memory. This essentially extends the inbound phase by up to 3 rounds. We apply this technique to find classic or quantum collisions on several AES-like hash functions, and improve the number of attacked rounds by 1 to 5 in targets including the AES-128 and SKINNY hashing modes, Saturnin-hash, and Gr{\o}stl-512. To demonstrate the correctness of our attacks, the semi-free-start collision on 6-round AES-128-MMO/MP with estimated time complexity $2^{24}$ in the classical setting was implemented, and an example pair was found instantly on a standard PC.
2022
CRYPTO
Superposition Meet-in-the-Middle Attacks: Updates on Fundamental Security of AES-like Ciphers 📺
The Meet-in-the-Middle (MITM) approach is one of the most powerful cryptanalysis techniques, as demonstrated by its applications in preimage attacks on the full MD4, MD5, Tiger, HAVAL, and Haraka-512 v2 hash functions, and in key recovery of the full block cipher KTANTAN. Its success relies on separating a primitive into two independent chunks, where each active cell of the state is used to represent only one chunk, or is otherwise considered unusable once mixed. We observe that some of these mixed cells are linearly mixed and can be as useful as the independent ones. This leads to the introduction of superposition states and a whole suite of accompanying techniques, which we incorporate into the MILP-based search framework proposed by Bao et al. at EUROCRYPT 2021 and Dong et al. at CRYPTO 2021, and find applications to a wide range of AES-like hash functions and block ciphers.
2022
CRYPTO
Breaking Rainbow Takes a Weekend on a Laptop 📺
Best Paper by Early Career Researcher Award
This work introduces new key recovery attacks against the Rainbow signature scheme, one of the three finalist signature schemes remaining in the NIST Post-Quantum Cryptography standardization project. The new attacks dramatically outperform previously known attacks for all the parameter sets submitted to NIST and make key recovery practical for the SL 1 parameters. Concretely, given a Rainbow public key for the SL 1 parameters of the second-round submission, our attack returns the corresponding secret key after on average 53 hours (one weekend) of computation time on a standard laptop.
2022
CRYPTO
Formalizing Delayed Adaptive Corruptions and the Security of Flooding Networks 📺
Many decentralized systems rely on flooding protocols for message dissemination. In such a protocol, the sender of a message sends it to a randomly selected set of peers. These peers again send the message to their own randomly selected peers, until every network participant has received the message. Protocols of this type clearly fail in the face of an adaptive adversary, who can simply corrupt all peers of the sender and thereby prevent the message from being delivered. Nevertheless, flooding protocols are commonly used within protocols that aim to be cryptographically secure, most notably in blockchain protocols. While it is possible to revert to static corruptions, this gives unsatisfactory security guarantees, especially in the setting of a blockchain that is supposed to run for an extended period of time. To be able to provide meaningful security guarantees in such settings, we give precise semantics to what we call $\delta$-delayed adversaries in the Universal Composability (UC) framework. Such adversaries can adaptively corrupt parties, but there is a delay of time $\delta$ from when the adversary decides to corrupt a party until it succeeds in taking control of the party. Within this model, we formally prove the intuitive result that flooding protocols are secure against $\delta$-delayed adversaries when $\delta$ is at least the time it takes to send a message from one peer to another plus the time it takes the recipient to resend the message. To this end, we show how to reduce the adaptive setting with a $\delta$-delayed adversary to a static experiment with an Erdős–Rényi graph. Using the established theory of Erdős–Rényi graphs, we provide upper bounds on the propagation time of the flooding functionality for different neighborhood sizes of the gossip network. More concretely, we show the following for security parameter $\kappa$, point-to-point channels with delay at most $\Delta$, and $n$ parties in total, against a sufficiently delayed adversary that can corrupt any constant fraction of the parties: if all parties send to $\Omega(\kappa)$ parties on average, then we can realize a flooding functionality with maximal delay $\mathcal{O}\bigl(\Delta \cdot \log (n) \bigr)$; and if all parties send to $\Omega\bigl( \sqrt{\kappa n \log (n)} \bigr)$ parties on average, we can realize a flooding functionality with maximal delay $\mathcal{O}(\Delta)$.
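To illustrate the flavor of the propagation bound, the following minimal simulation is ours and uses a simplified forwarding rule (every party forwards to $d$ uniformly random peers; hop count stands in for delay), not the paper's exact model:

```python
# Toy flooding simulation: party 0 broadcasts, everyone forwards to d
# random peers, and we count BFS hops until all n parties are reached.
# With d = Omega(kappa) one expects O(log n) hops, matching the
# O(Delta * log n) maximal-delay bound stated above.
import random

def flooding_depth(n: int, d: int, seed: int = 0) -> int:
    rng = random.Random(seed)
    neighbors = [rng.sample(range(n), d) for _ in range(n)]
    reached, frontier, depth = {0}, [0], 0
    while frontier and len(reached) < n:
        nxt = []
        for u in frontier:
            for v in neighbors[u]:
                if v not in reached:
                    reached.add(v)
                    nxt.append(v)
        frontier, depth = nxt, depth + 1
    return depth if len(reached) == n else -1  # -1: message did not reach all

print(flooding_depth(n=10_000, d=32))  # typically 3-4 hops
```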
2022
CRYPTO
Certified Everlasting Zero-Knowledge Proof for QMA 📺
In known constructions of classical zero-knowledge protocols for NP, either zero-knowledge or soundness holds only against computationally bounded adversaries. Indeed, achieving both statistical zero-knowledge and statistical soundness at the same time with a classical verifier is impossible for NP unless the polynomial-time hierarchy collapses, and it is believed to be impossible even with a quantum verifier. In this work, we introduce a novel compromise, which we call the certified everlasting zero-knowledge proof for QMA. It is a computational zero-knowledge proof for QMA, but the verifier issues a classical certificate that shows that the verifier has deleted its quantum information. If the certificate is valid, even an unbounded malicious verifier can no longer learn anything beyond the validity of the statement. We construct a certified everlasting zero-knowledge proof for QMA. For the construction, we introduce a new quantum cryptographic primitive, which we call commitment with statistical binding and certified everlasting hiding, where the hiding property becomes statistical once the receiver has issued a valid certificate that shows that the receiver has deleted the committed information. We construct commitment with statistical binding and certified everlasting hiding from quantum encryption with certified deletion by Broadbent and Islam [TCC 2020] (in a black-box way), and then combine it with the quantum sigma-protocol for QMA by Broadbent and Grilo [FOCS 2020] to construct the certified everlasting zero-knowledge proof for QMA. Our constructions are secure in the quantum random oracle model. Commitment with statistical binding and certified everlasting hiding is of independent interest, and we expect it to find many other useful applications beyond zero-knowledge.
2022
CRYPTO
Accelerating the Delfs-Galbraith algorithm with fast subfield root detection 📺
We give a new algorithm for finding an isogeny from a given supersingular elliptic curve $E/\F_{p^2}$ to a subfield elliptic curve $E'/\F_p$, which is the bottleneck step of the Delfs-Galbraith algorithm for the general supersingular isogeny problem. Our core ingredient is a novel method of rapidly determining whether a polynomial $f \in L[X]$ has any roots in a subfield $K \subset L$, while avoiding expensive root-finding algorithms. In the special case when $f=\Phi_{\ell,p}(X,j) \in \F_{p^2}[X]$, i.e., when $f$ is the $\ell$-th modular polynomial evaluated at a supersingular $j$-invariant, this provides a means of efficiently determining whether there is an $\ell$-isogeny connecting the corresponding elliptic curve to a subfield curve. Together with the traditional Delfs-Galbraith walk, inspecting many $\ell$-isogenous neighbours in this way allows us to search through a larger proportion of the supersingular set per unit of time. Though the asymptotic $\tilde{O}(p^{1/2})$ complexity of our improved algorithm remains unchanged from that of the original Delfs-Galbraith algorithm, our theoretical analysis and practical implementation both show a significant reduction in the runtime of the subfield search. This sheds new light on the concrete hardness of the general supersingular isogeny problem (i.e., the foundational problem underlying isogeny-based cryptography), and has immediate implications for the bit-security of schemes like B-SIDH and SQISign, for which Delfs-Galbraith is the best known classical attack.
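For background, the classical subfield-root criterion that such tests build on can be checked in a few lines; the sketch below is our own illustration with sympy over a small prime (the paper's contribution is a much faster test tailored to $f \in \F_{p^2}[X]$ arising from modular polynomials): $f \in \F_p[X]$ has a root in $\F_p$ if and only if $\gcd(f, X^p - X) \neq 1$, and this gcd can be computed from $X^p \bmod f$ without running any root-finding algorithm.

```python
# Classical subfield-root *detection* over F_p: f has a root in F_p
# iff gcd(f, X^p - X) != 1. Toy prime for illustration only.
from sympy import GF, Poly, gcd
from sympy.abc import X

p = 101
f = Poly(X**5 + 3*X + 7, X, domain=GF(p))
x = Poly(X, X, domain=GF(p))

xp_mod_f = x**p % f                 # X^p reduced mod f
g = gcd(f, xp_mod_f - x)            # = gcd(f, X^p - X)
print("f has a root in F_p:", g.degree() >= 1)
```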
2022
CRYPTO
Verifiable Relation Sharing and Multi-Verifier Zero-Knowledge in Two Rounds: Trading NIZKs with Honest Majority 📺
We introduce the problem of Verifiable Relation Sharing (VRS), where a client (prover) wishes to share a vector of secret data items among $k$ servers (the verifiers) while proving in zero-knowledge that the shared data satisfies some properties. This combined task of sharing and proving generalizes notions like verifiable secret sharing and zero-knowledge proofs over secret-shared data. We study VRS from a theoretical perspective and focus on its round complexity. As our main contribution, we show that every efficiently-computable relation can be realized by a VRS with an optimal round complexity of two rounds, where the first round is input-independent (offline round). The protocol achieves full UC-security against an active adversary that is allowed to corrupt any $t$-subset of the parties that may include the client together with some of the verifiers. For a small (logarithmic) number of parties, we achieve an optimal resiliency threshold of $t<0.5(k+1)$, and for a large (polynomial) number of parties, we achieve an almost-optimal resiliency threshold of $t<0.5(k+1)(1-\epsilon)$ for an arbitrarily small constant $\epsilon>0$. Both protocols can be based on sub-exponentially hard injective one-way functions. If the parties have access to a collision-resistant hash function, we can derive statistical everlasting security, i.e., the protocols are secure against adversaries that are computationally bounded during the protocol execution and become computationally unbounded after the protocol execution. Previous 2-round solutions achieve smaller resiliency thresholds and weaker security notions regardless of the underlying assumptions. As a special case, our protocols give rise to 2-round offline/online constructions of multi-verifier zero-knowledge proofs (MVZK). Such constructions were previously obtained under the same type of assumptions that are needed for NIZK, i.e., public-key assumptions or random-oracle-type assumptions (Abe et al., Asiacrypt 2002; Groth and Ostrovsky, Crypto 2007; Boneh et al., Crypto 2019; Yang and Wang, Eprint 2022). Our work shows, for the first time, that in the presence of an honest majority these assumptions can be replaced with more conservative ``Minicrypt''-type assumptions like injective one-way functions and collision-resistant hash functions. Indeed, our MVZK protocols provide a round-efficient substitute for NIZK in settings where an honest majority is present. Additional applications are also presented.
2022
CRYPTO
Post-Quantum Simulatable Extraction with Minimal Assumptions: Black-Box and Constant-Round 📺
From the minimal assumption of post-quantum semi-honest oblivious transfers, we build the first $\epsilon$-simulatable two-party computation (2PC) against quantum polynomial-time (QPT) adversaries that is both constant-round and black-box (for both the construction and the security reduction). A recent work by Chia, Chung, Liu, and Yamakawa (FOCS'21) shows that post-quantum 2PC with standard simulation-based security is impossible in constant rounds, unless $NP \subseteq BQP$ or non-black-box simulation is used. The $\epsilon$-simulatability we target is a relaxation of standard simulation-based security that allows for an arbitrarily small noticeable simulation error $\epsilon$. Moreover, when quantum communication is allowed, we can further weaken the assumption to post-quantum secure one-way functions (PQ-OWFs), while maintaining the constant-round and black-box properties. Our techniques also yield the following set of constant-round and black-box two-party protocols secure against QPT adversaries, assuming only black-box access to PQ-OWFs: - extractable commitments for which the extractor is also an $\epsilon$-simulator; - $\epsilon$-zero-knowledge commit-and-prove whose commit stage is extractable with $\epsilon$-simulation; - $\epsilon$-simulatable coin-flipping; - $\epsilon$-zero-knowledge arguments of knowledge for $NP$ for which the knowledge extractor is also an $\epsilon$-simulator; - $\epsilon$-zero-knowledge arguments for $QMA$. At the heart of the above results is a black-box extraction lemma showing how to efficiently extract secrets from QPT adversaries while disturbing their quantum state in a controllable manner, i.e., achieving $\epsilon$-simulatability of the adversary's post-extraction state.
2022
CRYPTO
Shorter Hash-and-Sign Lattice-Based Signatures 📺
Lattice-based digital signature schemes following the hash-and-sign design paradigm of Gentry, Peikert and Vaikuntanathan (GPV) tend to offer an attractive level of efficiency, particularly when instantiated with structured compact trapdoors. In particular, NIST post-quantum finalist Falcon is both quite fast for signing and verification and quite compact: NIST notes that it has the smallest bandwidth (as measured in combined size of public key and signature) of all round-2 digital signature candidates. Nevertheless, while Falcon-512, for instance, compares favorably to ECDSA-384 in terms of speed, its signatures are well over 10 times larger. For applications that store a large number of signatures, or that require signatures to fit in prescribed packet sizes, this can be a critical limitation. In this paper, we explore several approaches to further improve the size of hash-and-sign lattice-based signatures, particularly those instantiated over NTRU lattices like Falcon and its recent variant Mitaka. In particular, while GPV signatures are usually obtained by sampling lattice points according to some *spherical* discrete Gaussian distribution, we show that it can be beneficial to sample instead according to a suitably chosen *ellipsoidal* discrete Gaussian: this is because only half of the sampled Gaussian vector is actually output as the signature, while the other half is recovered during verification. Making the half that actually occurs in signatures shorter reduces signature size at essentially no security loss (in a suitable range of parameters). Similarly, we show that reducing the modulus $q$ with respect to which signatures are computed can improve signature size as well as verification key size almost ``for free''; this is particularly true for constructions like Falcon and Mitaka that do not make substantial use of NTT-based multiplication (and rely instead on transcendental FFT). Finally, we show that the Gaussian vectors in signatures can be represented in a more compact way with appropriate coding-theoretic techniques, improving signature size by an additional 7 to 14%. All in all, we manage to reduce the size of, e.g., Falcon signatures by 30--40% at the cost of only 4--6 bits of Core-SVP security.
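To see why only half of the sampled Gaussian vector needs to be transmitted, here is a toy sketch of ours of the GPV/Falcon-style verification equation (toy ring dimension; real schemes use $n = 512$ or $1024$): a signature is a short pair $(s_1, s_2)$ satisfying $s_1 + s_2 h = c$ in $\mathbb{Z}_q[X]/(X^n+1)$, where $h$ is the public key and $c$ the message hash, so only $s_2$ is sent and the verifier recomputes $s_1$:

```python
# Toy negacyclic ring Z_q[X]/(X^n + 1); parameters are illustrative only.
import random

n, q = 8, 12289

def mul(a, b):
    """Multiply in Z_q[X]/(X^n + 1) (negacyclic convolution)."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                res[i + j] = (res[i + j] + ai * bj) % q
            else:                      # X^n = -1
                res[i + j - n] = (res[i + j - n] - ai * bj) % q
    return res

def recover_s1(c, s2, h):
    """Verifier recomputes the untransmitted half: s1 = c - s2*h mod q."""
    s2h = mul(s2, h)
    return [(ci - ti) % q for ci, ti in zip(c, s2h)]

rng = random.Random(1)
s1 = [rng.randrange(-5, 6) % q for _ in range(n)]   # short (toy) halves
s2 = [rng.randrange(-5, 6) % q for _ in range(n)]
h = [rng.randrange(q) for _ in range(n)]            # public key
c = [(a + b) % q for a, b in zip(s1, mul(s2, h))]   # hash value s1 + s2*h
assert recover_s1(c, s2, h) == s1                   # s1 never needs sending
```

Making the transmitted half $s_2$ shorter via an ellipsoidal sampler, as the paper proposes, therefore shrinks signatures without touching this verification equation.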
2022
CRYPTO
A New Framework For More Efficient Round-Optimal Lattice-Based (Partially) Blind Signature via Trapdoor Sampling 📺
Blind signatures, originally proposed by Chaum (CRYPTO'82), are interactive protocols between a signer and a user, where the user can obtain a signature without revealing the message to be signed. Recently, Hauck et al. (EUROCRYPT'20) observed that all efficient lattice-based blind signatures following the blueprint of the original blind signature by Rückert (ASIACRYPT'10) have a flawed security proof. This puts us in a situation where all known lattice-based blind signatures have at least two of the following drawbacks: heuristic security; signature sizes of 1 MB or more; support for only a bounded polynomial number of signatures; or reliance on non-standard assumptions. In this work, we construct the first __round-optimal__ (i.e., two-round) lattice-based blind signature with a signature size of roughly 100 KB that supports unbounded polynomially many signatures and is provably secure under standard assumptions. Even if we allow non-standard assumptions and more rounds, ours provides the shortest signatures while also supporting unbounded polynomially many signatures. The main idea of our work is revisiting the generic blind signature construction by Fischlin (CRYPTO'06) and optimizing the __commit-then-open__ proof using techniques tailored to lattices. Our blind signature is also the first construction to have a formal security proof in the __quantum__ random oracle model. Finally, our blind signature extends naturally to __partially__ blind signatures, where the user and signer can include an agreed-upon public string in the message.
2022
CRYPTO
Collision-Resistance from Multi-Collision-Resistance 📺
Collision-resistant hash functions (CRH) are a fundamental and ubiquitous cryptographic primitive. Several recent works have studied a relaxation of CRH called t-way multi-collision-resistant hash functions (t-MCRH). These are families of functions for which it is computationally hard to find a t-way collision, even though such collisions are abundant (and even (t-1)-way collisions may be easy to find). The case of t=2 corresponds to standard CRH, but it is natural to study t-MCRH for larger values of t. Multi-collision-resistance seems to be a qualitatively weaker property than standard collision-resistance. In particular, Komargodski et al. (Eurocrypt, 2018) showed that there does not exist a black-box transformation of MCRH into CRH. Nevertheless, in this work we show a non-black-box transformation of any moderately shrinking t-MCRH, for t in {3,4}, into an (infinitely often secure) CRH. This transformation is non-constructive: we can prove the existence of a CRH but cannot explicitly point out a construction. Our result partially extends to larger values of t. In particular, we show that for suitable values of t>t', we can transform a t-MCRH into a t'-MCRH, at the cost of reducing the shrinkage of the resulting hash function family and settling for infinitely often security. This result utilizes the list-decodability properties of Reed-Solomon codes.
2022
CRYPTO
Quadratic Multiparty Randomized Encodings Beyond Honest Majority and Their Applications 📺
Multiparty randomized encodings (Applebaum, Brakerski, and Tsabary, SICOMP 2021) reduce the task of securely computing a complicated multiparty functionality $f$ to the task of securely computing a simpler functionality $g$. The reduction is non-interactive and preserves information-theoretic security against a passive (semi-honest) adversary, also referred to as {\em privacy}. The special case of a degree-2 encoding $g$ (2MPRE) has recently found several applications to secure multiparty computation (MPC) with either information-theoretic security or black-box access to cryptographic primitives. Unfortunately, as all known constructions are based on information-theoretic MPC protocols in the plain model, they can only be private with an honest majority. In this paper, we break the honest-majority barrier and present the first construction of a general 2MPRE that remains secure in the presence of a dishonest majority. Our construction encodes every $n$-party functionality $f$ by a 2MPRE that tolerates at most $t=\lfloor 2n/3\rfloor$ passive corruptions. We derive several applications including: (1) the first non-interactive client-server MPC protocol with perfect privacy against any coalition of a minority of the servers and up to $t$ of the $n$ clients; (2) completeness of 3-party functionalities under non-interactive $t$-private reductions; and (3) a single-round $t$-private reduction from general MPC to an ideal oblivious transfer (OT). These positive results partially resolve open questions that were posed in several previous works. We also show that $t$-private 2MPREs are necessary for solving (2) and (3), thus establishing new equivalence theorems between these three notions. Finally, we present a new approach for constructing fully-private 2MPREs based on multi-round protocols in the OT-hybrid model that achieve \emph{perfect privacy} against active attacks. Moreover, by slightly restricting the power of the active adversary, we derive an equivalence between these notions. This forms a surprising, and quite unique, connection between a non-interactive passively-private primitive and an interactive actively-private primitive.
2022
CRYPTO
Simon's Algorithm and Symmetric Crypto: Generalizations and Automatized Applications 📺
In this paper we deepen our understanding of how to apply Simon's algorithm to break symmetric cryptographic primitives. On the one hand, we automate the search for new attacks; using this approach, we automatically find the first efficient key-recovery attacks against constructions like 5-round MISTY L-FK or 5-round Feistel-FK (with internal permutation) using Simon's algorithm. On the other hand, we study generalizations of Simon's algorithm using non-standard Hadamard matrices, with the aim of expanding the quantum symmetric-cryptanalysis toolkit with properties other than periods. Our main conclusion is that none of these generalizations accomplishes this, and that exploiting non-standard Hadamard matrices with quantum computers to break symmetric primitives will require fundamentally new attacks.
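For readers less familiar with Simon's algorithm, the toy construction below (ours, not from the paper) builds a function with a hidden XOR period $s$, i.e.\ $f(x) = f(x \oplus s)$ for all $x$; this is exactly the structure that Simon's algorithm recovers with $O(n)$ quantum queries and that the attacks above exploit:

```python
# Build a function on n-bit strings with hidden period s: f(x) = f(x ^ s).
# Simon's algorithm finds s with O(n) superposition queries; classically
# ~2^(n/2) queries are needed.
import random

def make_periodic_function(n: int, s: int, seed: int = 0):
    rng = random.Random(seed)
    table = {}
    for x in range(2 ** n):
        rep = min(x, x ^ s)                  # canonical element of {x, x^s}
        if rep not in table:
            table[rep] = rng.randrange(2 ** n)
        table[x] = table[rep]
    return lambda x: table[x]

n, s = 8, 0b10110001
f = make_periodic_function(n, s)
assert all(f(x) == f(x ^ s) for x in range(2 ** n))  # hidden-period property
```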
2022
CRYPTO
Password-Authenticated Key Exchange from Group Actions 📺
We present two provably secure password-authenticated key exchange (PAKE) protocols based on a commutative group action. To date, the most important instantiation of isogeny-based group actions is given by CSIDH. To model its properties more accurately, we extend the framework of cryptographic group actions (Alamati et al., ASIACRYPT 2020) with the ability to compute the quadratic twist of an elliptic curve. This property is always present in the CSIDH setting and turns out to be crucial in the security analysis of our PAKE protocols. Despite the resemblance, the translation of Diffie-Hellman-based PAKE protocols to group actions either does not work with known techniques or is insecure (``How not to create an isogeny-based PAKE'', Azarderakhsh et al., ACNS '20). We overcome the difficulties mentioned in previous work by using a ``bit-by-bit'' approach, where each password bit is considered separately. Our first protocol, X-GA-PAKE, can be executed in a single round. Both parties need to send two set elements for each password bit in order to prevent offline dictionary attacks. The second protocol, Com-GA-PAKE, requires only one set element per password bit, but one party has to send a commitment on its message first. We also discuss different optimizations that can be used to reduce the computational cost. We provide comprehensive security proofs for our base protocols and deduce security for the optimized versions.
2022
CRYPTO
Parallel Repetition of $(k_1,\dots,k_{\mu})$-Special-Sound Multi-Round Interactive Proofs 📺
On many occasions, the knowledge error $\kappa$ of an interactive proof is not small enough and thus needs to be reduced. This can be done generically by repeating the interactive proof in parallel. While there have been many works studying the effect of parallel repetition on the {\em soundness error} of interactive proofs and arguments, the effect of parallel repetition on the {\em knowledge error} has largely remained unstudied. Only recently was it shown that the $t$-fold parallel repetition of {\em any} interactive protocol reduces the knowledge error from $\kappa$ down to $\kappa^t +\nu$ for any non-negligible term $\nu$. This generic result is suboptimal in that it does not give the knowledge error $\kappa^t$ that one would expect for typical protocols, and, worse, the knowledge error remains non-negligible. In this work we show that the $t$-fold parallel repetition of any $(k_1,\dots,k_{\mu})$-special-sound multi-round public-coin interactive proof indeed optimally reduces the knowledge error from $\kappa$ down to $\kappa^t$. At the core of our results is an alternative, in some sense more fine-grained, measure of the quality of a dishonest prover than its success probability, for which we show that it characterizes when knowledge extraction is possible. This new measure turns out to be very convenient when it comes to analyzing the parallel repetition of such interactive proofs. While parallel repetition reduces the knowledge error, it is easily seen to {\em increase} the {\em completeness error}. For this reason, we generalize our result to the case of $s$-out-of-$t$ threshold parallel repetition, where the verifier accepts if $s$ out of the $t$ parallel instances are accepting. An appropriately chosen threshold $s$ allows both the knowledge error and the completeness error to be reduced simultaneously.
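As a rough numerical illustration of the two effects (ours, using a simplistic heuristic that treats the $t$ parallel runs as independent, which is not how the paper's extractor analysis works), the sketch below compares plain repetition with an $s$-out-of-$t$ threshold:

```python
# Knowledge error vs completeness error under (threshold) repetition,
# under a naive independence heuristic; kappa, rho, t, s are illustrative.
from math import comb

def binom_tail(n: int, p: float, k: int) -> float:
    """Pr[Binomial(n, p) >= k]."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

kappa, rho, t, s = 0.5, 0.05, 40, 30   # per-run knowledge/completeness error

print("plain repetition, kappa^t            :", kappa ** t)
print("threshold knowledge error (heuristic):", binom_tail(t, kappa, s))
print("threshold completeness error         :", 1 - binom_tail(t, 1 - rho, s))
```

With these numbers, both error terms are driven down simultaneously, which is exactly the behavior the threshold variant is designed to achieve.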
2022
CRYPTO
Improving Support-Minors rank attacks: applications to GeMSS and Rainbow 📺
The Support-Minors (SM) method has opened new routes to attack multivariate schemes with rank properties that were previously impossible to exploit, as shown by the recent attacks of [1] and [2] on the Round 3 NIST candidates GeMSS and Rainbow, respectively. In this paper, we study this SM approach in more depth and propose a greatly improved attack on GeMSS based on the Support-Minors method. Even though GeMSS was already affected by [1], our attack affects it even more and makes it completely infeasible to repair the scheme by simply increasing the size of its parameters, or even by applying the recent projection technique from [3], whose purpose was to make GeMSS immune to [1]. For instance, our attack on the GeMSS128 parameter set has estimated time complexity $2^{72}$, and repairing the scheme by applying [3] would result in a signature with slower signing time by an impractical factor of $2^{14}$. Another contribution is to suggest optimizations that can reduce memory access costs for an XL strategy on a large SM system using the Block-Wiedemann algorithm as a subroutine, when these costs are a concern. In a memory cost model based on [4], we show that the rectangular MinRank attack from [2] may indeed reduce the security of all Round 3 Rainbow parameter sets below their targeted security strengths, contradicting the lower bound claimed in [5] using the same memory cost model. ***** [1] Improved Key Recovery of the HFEv- Signature Scheme, Chengdong Tao, Albrecht Petzoldt, and Jintai Ding, CRYPTO 2021. [2] Improved Cryptanalysis of UOV and Rainbow, Ward Beullens, EUROCRYPT 2021. [3] On the Effect of Projection on Rank Attacks in Multivariate Cryptography, Morten Øygarden, Daniel Smith-Tone, and Javier Verbel, PQCrypto 2021. [4] NTRU Prime: Round 3 submission. [5] Rainbow Team: Response to recent paper by Ward Beullens. https://troll.iis.sinica.edu.tw/by-publ/recent/response-ward.pdf
2022
CRYPTO
Constructing and Deconstructing Intentional Weaknesses in Symmetric Ciphers 📺
Deliberately weakened ciphers are of great interest in the political discussion on law enforcement, as in the constantly recurring crypto wars, and have been put in the spotlight of academics by recent progress. A paper at Eurocrypt 2021 gave a strong indication that the security of the widely-deployed stream cipher GEA-1 was deliberately and secretly weakened to 40 bits in order to fulfill the European export restrictions that were in place in the late 1990s. However, no explanation of how this could have been constructed was given. On the other hand, the MALICIOUS design framework, published at CRYPTO 2020, allows the construction of tweakable block ciphers with a backdoor, where the difficulty of recovering the backdoor relies on well-understood cryptographic assumptions. The constructed tweakable block cipher, however, is rather unusual and very different from, say, general-purpose ciphers like the AES. In this paper, we pick up both topics. For GEA-1, we thoroughly explain how the weakness was constructed, solving the main open question of the work mentioned above. By generalizing MALICIOUS, we construct, for the first time, backdoored tweakable block ciphers that follow modern design principles for general-purpose block ciphers, i.e., more natural-looking deliberately weakened tweakable block ciphers.
2022
CRYPTO
Time-Space Tradeoffs for Sponge Hashing: Attacks and Limitations for Short Collisions 📺
Sponge hashing is a novel alternative to the popular Merkle-Damgård hashing design. The sponge construction has become increasingly popular in various applications; perhaps most notably, it underlies the SHA-3 hashing standard. Sponge hashing is parametrized by two numbers, $r$ and $c$ (bitrate and capacity, respectively), and by a fixed-size permutation on $r+c$ bits. In this work, we study the collision resistance of sponge hashing instantiated with a random permutation against adversaries with an arbitrary $S$-bit auxiliary advice input about the random permutation and $T$ queries. Recent work by Coretti et al.\ (CRYPTO '18) showed that such adversaries can find collisions (with respect to a random IV) with advantage $\Theta(ST^2/2^c + T^2/2^{r})$. Although the above attack formally breaks collision resistance in some range of parameters, its practical relevance is limited since the resulting collision is very long (on the order of $T$ blocks). Focusing on the task of finding \emph{short} collisions, we study the complexity of finding a $B$-block collision for a given parameter $B\ge 1$. We give several new attacks and limitations. Most notably, we give a new attack that results in a single-block collision and has advantage \begin{align*} \Omega \left(\left(\frac{S^{2}T}{2^{2c}}\right)^{2/3} + \frac{T^2}{2^r}\right). \end{align*} In some range of parameters, our attack has constant advantage of winning while the previously best-known attack has exponentially small advantage. To the best of our knowledge, this is the first natural application for which sponge hashing is \emph{provably less secure} than the corresponding instance of Merkle-Damgård hashing. Our attack relies on a novel connection between single-block collision finding in sponge hashing and the well-studied function inversion problem. We also give a general attack that works for any $B\ge 2$ and has advantage $\Omega({STB}/{2^{c}} + {T^2}/{2^{\min\{r,c\}}})$, adapting an idea of Akshima et al. (CRYPTO '20). We complement the above attacks with bounds on the best possible attacks. Specifically, we prove that there is a qualitative jump in the advantage of the best possible attacks for finding unbounded-length collisions and those for finding very short collisions. Most notably, we prove (via a highly non-trivial compression argument) that the above attack is optimal for $B=2$ in some range of parameters.
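For readers unfamiliar with the $r$/$c$ split, here is a toy sponge of ours showing where the bitrate and capacity enter (the `pi` stand-in is not a real permutation, and the parameters are far too small to be secure):

```python
# Toy sponge: absorb r-byte blocks into an (r+c)-byte state via XOR and a
# fixed transformation pi, then squeeze one r-byte block as the digest.
import hashlib

R_BYTES, C_BYTES = 4, 8   # toy bitrate r = 32, capacity c = 64

def pi(state: bytes) -> bytes:
    # Stand-in for the fixed permutation on r + c bits (illustration only;
    # SHA-3 uses the Keccak-f permutation here).
    return hashlib.sha256(state).digest()[: R_BYTES + C_BYTES]

def sponge_hash(msg: bytes) -> bytes:
    msg = msg + b"\x80" + b"\x00" * (-(len(msg) + 1) % R_BYTES)  # toy padding
    state = bytes(R_BYTES + C_BYTES)
    for i in range(0, len(msg), R_BYTES):
        block = msg[i : i + R_BYTES] + bytes(C_BYTES)  # only r bytes absorbed
        state = pi(bytes(a ^ b for a, b in zip(block, state)))
    return state[:R_BYTES]                             # squeeze one block

print(sponge_hash(b"hello").hex())
```

The capacity bits never receive message input directly, which is why the attack advantages above are governed by $2^c$ (with the rate term governed by $2^r$).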
2022
CRYPTO
On the Insider Security of MLS 📺
The Messaging Layer Security (MLS) protocol is an open standard for end-to-end (E2E) secure group messaging being developed by the IETF, poised for deployment to consumers, industry, and government. It is designed to provide E2E privacy and authenticity for messages in long-lived sessions whenever possible, despite the participation (at times) of malicious insiders that can adaptively interact with the PKI at will, actively deviate from the protocol, leak honest parties' states, and fully control the network. The core of the MLS protocol (from which it inherits essentially all of its efficiency and security properties) is a Continuous Group Key Agreement (CGKA) protocol. It provides asynchronous E2E group management by allowing group members to agree on a fresh independent symmetric key after every change to the group's state (e.g., when someone joins or leaves the group). In this work, we make progress towards a precise understanding of the insider security of MLS (Draft 12). On the theory side, we overcome several subtleties to formulate the first notion of insider security for CGKA (or group messaging). Next, we isolate the core components of MLS to obtain a CGKA protocol we dub Insider Secure TreeKEM (ITK). Finally, we give a rigorous security proof for ITK. In particular, this work also initiates the study of insider-secure CGKA and group messaging protocols. Along the way, we give three new (very practical) attacks on MLS and corresponding fixes. (Those fixes have now been included in the standard.) We also describe a second attack against MLS-like CGKA protocols proven secure under all previously considered security notions (including those designed specifically to analyze MLS). These attacks highlight the pitfalls of simplifying security notions, even in the name of tractability.
2022
CRYPTO
Cryptography from Pseudorandom Quantum States 📺
Pseudorandom states, introduced by Ji, Liu and Song (Crypto'18), are efficiently-computable quantum states that are computationally indistinguishable from Haar-random states. One-way functions imply the existence of pseudorandom states, but Kretschmer (TQC'20) recently constructed an oracle relative to which there are no one-way functions but pseudorandom states still exist. Motivated by this, we study the intriguing possibility of basing interesting cryptographic tasks on pseudorandom states. We construct, assuming the existence of pseudorandom state generators that map a $\lambda$-bit seed to an $\omega(\log\lambda)$-qubit state, (a) statistically binding and computationally hiding commitments and (b) pseudo one-time encryption schemes. A consequence of (a) is that pseudorandom states are sufficient to construct maliciously secure multiparty computation protocols in the dishonest-majority setting. Our constructions are derived via a new notion called pseudorandom function-like states (PRFS), a generalization of pseudorandom states that parallels the classical notion of pseudorandom functions. Beyond the above two applications, we believe our notion can effectively replace pseudorandom functions in many other cryptographic applications.
2022
CRYPTO
Securing Approximate Homomorphic Encryption using Differential Privacy 📺
Recent work of Li and Micciancio (Eurocrypt 2021) has shown that the traditional formulation of indistinguishability under chosen plaintext attack (IND-CPA) is not adequate to capture the security of approximate homomorphic encryption against passive adversaries, and identified a stronger IND-CPA^D security definition (IND-CPA with decryption oracles) as the appropriate security target for approximate encryption schemes. We show how to transform any approximate homomorphic encryption scheme achieving the weak IND-CPA security definition into one which is provably IND-CPA^D secure, offering strong guarantees against realistic passive attacks. The method works by post-processing the output of the decryption function with a mechanism satisfying an appropriate notion of differential privacy (DP), adding an amount of noise tailored to the worst-case error growth of the homomorphic computation. We apply these results to the approximate homomorphic encryption scheme of Cheon, Kim, Kim, and Song (CKKS, Asiacrypt 2017), proving that adding Gaussian noise to the output of CKKS decryption suffices to achieve IND-CPA^D security. We precisely quantify how much Gaussian noise must be added by proving nearly matching upper and lower bounds, showing that one cannot hope to significantly reduce the amount of noise added in this post-processing step. Based on our upper and lower bounds, we propose parameters for the countermeasures recently adopted by open-source libraries implementing CKKS. Lastly, we investigate the plausible claim that smaller DP noise parameters might suffice to achieve IND-CPA^D security for schemes supporting more accurate (dynamic, key-dependent) estimates of ciphertext noise during decryption. Perhaps surprisingly, we show that this claim is false, and that DP mechanisms with noise parameters tailored to the error present in a given ciphertext, rather than the worst-case error, are vulnerable to IND-CPA^D attacks.
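The shape of the countermeasure is simple to sketch. The snippet below is our own illustration with placeholder stubs (`decrypt_raw` and `worst_case_error_bound` are hypothetical names, not a real library API); the one security-critical point, per the paper's lower bounds, is that the noise must be calibrated to the worst-case error bound, not to the error actually present in the ciphertext:

```python
# DP-style post-processing of approximate-HE decryption (illustrative).
import random

def decrypt_raw(sk, ct):
    # Placeholder for an approximate decryption (e.g. CKKS): returns the
    # plaintext vector plus some homomorphic-computation error.
    return [v + e for v, e in zip(ct["values"], ct["errors"])]

def worst_case_error_bound(ct):
    # Placeholder: a bound derived from the circuit evaluated on ct,
    # NOT from the ciphertext's actual error (that variant is insecure).
    return ct["error_bound"]

def dp_decrypt(sk, ct, sigma_factor: float = 8.0):
    plaintext = decrypt_raw(sk, ct)
    sigma = sigma_factor * worst_case_error_bound(ct)
    return [x + random.gauss(0.0, sigma) for x in plaintext]

ct = {"values": [1.0, 2.0], "errors": [0.01, -0.02], "error_bound": 0.05}
print(dp_decrypt(None, ct))
```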
2022
CRYPTO
Tight Bounds on the Randomness Complexity of Secure Multiparty Computation 📺
We revisit the question of minimizing the randomness complexity of protocols for secure multiparty computation (MPC) in the setting of perfect information-theoretic security. Kushilevitz and Mansour (SIAM J. Discret. Math., 1997) studied the case of $n$-party semi-honest MPC for the XOR function with security threshold $t<n$, showing that $O(t^2 \log(n/t))$ random bits are sufficient and $\Omega(t)$ random bits are necessary. Their positive result was obtained via a non-explicit protocol, whose existence was proved using the probabilistic method. We essentially close the question by proving an $\Omega(t^2)$ lower bound on the randomness complexity of XOR, matching the previous upper bound up to a logarithmic factor (or constant factor when $t=\Omega(n)$). We also obtain an explicit protocol that uses $O(t^2 \log^2 n)$ random bits, matching our lower bound up to a polylogarithmic factor. We extend these results from XOR to general symmetric Boolean functions and to addition over a finite Abelian group, showing how to amortize the randomness complexity over multiple additions. Finally, combining our techniques with recent randomness-efficient constructions of private circuits, we obtain an explicit protocol for evaluating a general circuit $C$ using only $O(t^2 \log |C|)$ random bits, by employing additional ``helper parties'' who do not contribute any inputs. This upper bound too matches our lower bound up to a logarithmic factor.
2022
CRYPTO
Short Leakage Resilient and Non-malleable Secret Sharing Schemes 📺
Leakage resilient secret sharing (LRSS) allows a dealer to share a secret amongst $n$ parties such that any authorized subset of the parties can recover the secret from their shares, while an adversary that obtains shares of any unauthorized subset of parties along with bounded leakage from the other shares learns no information about the secret. Non-malleable secret sharing (NMSS) provides a guarantee that even shares that are tampered by an adversary will reconstruct to either the original message or something independent of it. The most important parameter of LRSS and NMSS schemes is the size of each share. For LRSS, in the ``local leakage model'' (i.e., when the leakage functions on each share are independent of each other and bounded), Srinivasan and Vasudevan (CRYPTO 2019) gave a scheme for threshold access structures with a share size of approximately 3·(message length + $\mu$), where $\mu$ is the number of bits of leakage tolerated from every share. For NMSS, the best-known result (again due to the above work) has a share size of 11·(message length). In this work, we build LRSS and NMSS schemes with much improved share sizes. Additionally, our LRSS scheme obtains optimal share and leakage size. In particular, we get the following results: - We build an information-theoretic LRSS scheme for threshold access structures with a share size of (message length + $\mu$). - As an application of the above result, we obtain an NMSS scheme with a share size of 4·(message length). Further, for the special case of sharing random messages, we obtain a share size of 2·(message length).
2022
CRYPTO
Faster Sounder Succinct Arguments and IOPs 📺
Succinct arguments allow a prover to convince a verifier that a given statement is true, using an extremely short proof. A major bottleneck that has been the focus of a large body of work is reducing the overhead incurred by the prover in order to prove correctness of the computation. By overhead we refer to the cost of proving correctness divided by the cost of the original computation. In this work, for a large class of Boolean circuits $C$, we construct succinct arguments for the satisfiability of $C$ with soundness error $2^{-k}$ and prover overhead $\mathrm{polylog}(k)$. This result relies on the existence of (sub-exponentially secure) linear-size computable collision-resistant hash functions. The class of Boolean circuits that we can handle includes circuits with a repeated sub-structure, which arise in natural applications such as batch computation/verification, hashing, and related blockchain applications. The succinct argument is obtained by constructing interactive oracle proofs (IOPs) for the same class of languages, with $\mathrm{polylog}(k)$ prover overhead and soundness error $2^{-k}$. Prior to our work, the best IOPs for Boolean circuits either had prover overhead of $\mathrm{polylog}(|C|)$ based on efficient PCPs due to Ben-Sasson et al. (STOC, 2013) or $\mathrm{poly}(k)$ due to Rothblum and Ron-Zewi (STOC, 2022).
2022
CRYPTO
Maliciously Secure Massively Parallel Computation for All-but-One Corruptions 📺
The Massively Parallel Computation (MPC) model gained wide adoption over the last decade. By now, it is widely accepted as the right model for capturing the commonly used programming paradigms (such as MapReduce, Hadoop, and Spark) that utilize parallel computation power to manipulate and analyze huge amounts of data. Motivated by the need to perform large-scale data analytics in a privacy-preserving manner, several recent works have presented generic compilers that transform algorithms in the MPC model into secure counterparts, while preserving various efficiency parameters of the original algorithms. The first paper, due to Chan et al. (ITCS '20), focused on the honest-majority setting. Later, Fernando et al. (TCC '20) considered the dishonest-majority setting. The latter work presented a compiler that transforms generic MPC algorithms into ones which are secure against \emph{semi-honest} attackers that may control all but one of the parties involved. The security of their resulting algorithm relied on the existence of a PKI and also on rather strong cryptographic assumptions: indistinguishability obfuscation and the circular security of certain LWE-based encryption systems. In this work, we focus on the dishonest-majority setting, following Fernando et al. In this setting, the known compilers do not achieve the standard security notion called \emph{malicious} security, where attackers can arbitrarily deviate from the prescribed protocol. In fact, we show that unless very strong setup assumptions are made (such as a \emph{programmable} random oracle), it is provably \emph{impossible} to withstand malicious attackers due to the stringent requirements on space and round complexity. As our main contribution, we complement the above negative result by designing the first general compiler for malicious attackers in the dishonest-majority setting. The resulting protocols withstand all-but-one corruptions. Our compiler relies on a simple PKI and a (programmable) random oracle, and is proven secure assuming LWE and SNARKs. Interestingly, even with such strong assumptions, it is rather non-trivial to obtain a secure protocol.
2022
CRYPTO
Secret Can Be Public: Low-Memory AEAD Mode for High-Order Masking 📺
We propose a new AEAD mode of operation for an efficient countermeasure against side-channel attacks. Our mode achieves the smallest memory with high-order masking, by minimizing the states that are duplicated in masking. An $s$-bit key-dependent state is necessary for achieving $s$-bit security, and conventional schemes always protect the entire $s$ bits with masking. We reduce the protected state size by introducing an unprotected part within the key-dependent state: we protect only one half and give the other half to a side-channel adversary. Ensuring independence between the unprotected and protected states is the key technical challenge, since mixing these states reveals the protected state to the adversary. We propose a new mode $\mathsf{HOMA}$ that achieves $s$-bit security using a tweakable block cipher with an $s/2$-bit block size. We also propose a new primitive for instantiating $\mathsf{HOMA}$ with $s=128$ by extending the SKINNY tweakable block cipher to a 64-bit plaintext block, a 128-bit key, and a $(256+3)$-bit tweak. We evaluate hardware performance by implementing $\mathsf{HOMA}$ with high-order masking for $d \le 5$. For any $d > 0$, $\mathsf{HOMA}$ outperforms the current state-of-the-art $\mathsf{PFB\_Plus}$, reducing the circuit area by more than the area of an entire S-box.
2022
CRYPTO
PI-Cut-Choo and Friends: Compact Blind Signatures via Parallel Instance Cut-and-Choose and More 📺
Blind signature schemes are one of the best-studied tools for privacy-preserving authentication. Unfortunately, known constructions of provably secure blind signatures either rely on non-standard hardness assumptions, or require parameters that grow linearly with the number of concurrently issued signatures, or involve prohibitively inefficient general techniques such as general secure two-party computation. Recently, Katz, Loss and Rosenberg (ASIACRYPT'21) gave a technique that, for security parameter n, transforms blind signature schemes secure for O(log n) concurrent executions of the blind signing protocol into ones that are secure for any poly(n) concurrent executions. This transform has two drawbacks that we eliminate in this paper: 1) the communication complexity of the resulting blind signing protocol grows linearly with the number of signing interactions; 2) the resulting schemes inherit a very loose security bound from the underlying scheme and, as a result, require impractical parameter sizes. In this work, we give an improved transform for obtaining a secure blind signing protocol tolerating any poly(n) concurrent executions from one that is secure for O(log n) concurrent executions. While preserving the advantages of the original transform, the communication complexity of our new transform grows only logarithmically with the number of interactions. Under the CDH and RSA assumptions, we improve on this generic transform in terms of concrete efficiency and give (1) a BLS-based blind signature scheme over a standard-sized group where signatures are of size roughly 3 KB and communication per signature is roughly 120 KB; and (2) an Okamoto-Guillou-Quisquater-based blind signature scheme with signatures and communication of roughly 9 KB and 8 KB, respectively.
2022
CRYPTO
Beyond the Csiszár-Korner Bound: Best-Possible Wiretap Coding via Obfuscation 📺
A wiretap coding scheme (Wyner, Bell Syst.\ Tech.\ J.\ 1975) enables Alice to reliably communicate a message m to an honest Bob by sending an encoding c over a noisy channel ChB, while at the same time hiding m from Eve, who receives c over another noisy channel ChE. Wiretap coding is clearly impossible when ChB is a degraded version of ChE, in the sense that the output of ChB can be simulated using only the output of ChE. A classic work of Csiszár and Korner (IEEE Trans.\ Inf.\ Theory, 1978) shows that the converse does not hold. This follows from their full characterization of the channel pairs (ChB, ChE) that enable information-theoretic wiretap coding. In this work, we show that the converse does in fact hold when considering computational security; that is, wiretap coding against a computationally bounded Eve is possible if and only if ChB is not a degraded version of ChE. Our construction assumes the existence of virtual black-box (VBB) obfuscation of specific classes of ``evasive'' functions that generalize fuzzy point functions, and can be heuristically instantiated using indistinguishability obfuscation. Finally, our solution has the appealing feature of being universal, in the sense that Alice's algorithm depends only on ChB and not on ChE.
2022
CRYPTO
Ofelimos: Combinatorial Optimization via Proof-of-Useful-Work 📺
Minimizing the energy cost and carbon footprint of the Bitcoin blockchain and related protocols is one of the most widely identified open questions in the cryptocurrency space. Substituting the proof-of-work (PoW) primitive in Nakamoto's longest-chain protocol with a {\em proof of useful work} (PoUW) has long been theorized as an ideal solution in many respects but, to this day, the concept still lacks a convincingly secure realization. In this work we put forth {\em Ofelimos}, a novel PoUW-based blockchain protocol whose consensus mechanism simultaneously realizes a decentralized optimization-problem solver. Our protocol is built around a novel local search algorithm, which we call Doubly Parallel Local Search (DPLS), that is especially crafted to suit implementation as the PoUW component of our blockchain protocol. We provide a thorough security analysis of our protocol and additionally present metrics that reflect the usefulness of the system. As an illustrative example, we show how DPLS can implement a variant of WalkSAT and experimentally demonstrate its competitiveness with respect to a vanilla WalkSAT implementation. In this way, our work paves the way for safely using blockchain systems as generic optimization engines for a variety of hard optimization problems for which a publicly verifiable solution is desired.
2022
CRYPTO
Multi-Input Attribute Based Encryption and Predicate Encryption 📺
Motivated by several new and natural applications, we initiate the study of multi-input predicate encryption (${\sf miPE}$) and further develop multi-input attribute based encryption (${\sf miABE}$). Our contributions are: 1. {\it Formalizing Security:} We provide definitions for ${\sf miABE}$ and ${\sf miPE}$ in the {symmetric} key setting and formalize security in the standard {\it indistinguishability} (IND) paradigm, against {\it unbounded} collusions. 2. {\it Two-input ${\sf ABE}$ for ${\sf NC}_1$ from ${\sf LWE}$ and Pairings:} We provide the first constructions of two-input {\it key-policy} ${\sf ABE}$ for ${\sf NC}_1$ from ${\sf LWE}$ and pairings. Our construction leverages a surprising connection between techniques recently developed by Agrawal and Yamada (Eurocrypt, 2020) in the context of succinct {\it single}-input {\it ciphertext-policy} ${\sf ABE}$, and the seemingly unrelated problem of {\it two}-input {\it key-policy} ${\sf ABE}$. Similarly to Agrawal-Yamada, our construction is proven secure in the bilinear generic group model. By leveraging inner product functional encryption and using (a variant of) the KOALA knowledge assumption, we obtain a construction in the standard model, analogously to Agrawal, Wichs and Yamada (TCC, 2020). 3. {\it Heuristic two-input ${\sf ABE}$ for ${\sf P}$ from Lattices:} We show that techniques developed for succinct single-input ciphertext-policy ${\sf ABE}$ by Brakerski and Vaikuntanathan (ITCS 2022) can also be seen through the lens of ${\sf miABE}$, and obtain the first two-input key-policy ${\sf ABE}$ from lattices for ${\sf P}$. 4. {\it Heuristic three-input ${\sf ABE}$ and ${\sf PE}$ for ${\sf NC}_1$ from Pairings and Lattices:} We obtain the first {\it three}-input ${\sf ABE}$ for ${\sf NC}_1$ by harnessing the powers of both the Agrawal-Yamada and the Brakerski-Vaikuntanathan constructions. 5. {\it Multi-input ${\sf ABE}$ to multi-input ${\sf PE}$ via Lockable Obfuscation:} We provide a generic compiler that lifts multi-input ${\sf ABE}$ to multi-input ${\sf PE}$ by relying on the hiding properties of Lockable Obfuscation (${\sf LO}$) by Wichs-Zirdelis and Goyal-Koppula-Waters (FOCS 2017), which can be based on ${\sf LWE}$. Our compiler generalizes such a compiler for the single-input setting to the much more challenging setting of multiple inputs. By instantiating our compiler with our new two- and three-input ${\sf ABE}$ schemes, we obtain the first constructions of two- and three-input ${\sf PE}$ schemes. Our constructions of multi-input ${\sf ABE}$ provide the first improvement to the compression factor of {\it non-trivially exponentially efficient witness encryption} defined by Brakerski et al. (SCN 2018) without relying on compact functional encryption or indistinguishability obfuscation. We believe that the unexpected connection between succinct single-input ciphertext-policy ${\sf ABE}$ and multi-input key-policy ${\sf ABE}$ may lead to a new pathway for witness encryption. We remark that our constructions provide the first candidates for a nontrivial class of ${\sf miFE}$ without needing ${\sf LPN}$ or low-depth ${\sf PRG}$.
2022
CRYPTO
Moz$\mathbb{Z}_{2^k}$zarella: Efficient Vector-OLE and Zero-Knowledge Proofs Over $\mathbb{Z}_{2^k}$ 📺
Zero-knowledge proof systems are usually designed to support computations for circuits over $\mathbb{F}_2$ or $\mathbb{F}_p$ for large $p$, but not for computations over $\mathbb{Z}_{2^k}$, which all modern CPUs operate on. Although $\mathbb{Z}_{2^k}$-arithmetic can be emulated using prime moduli, this comes with an unavoidable overhead. Recently, Baum et al. (CCS 2021) suggested a candidate construction for a designated-verifier zero-knowledge proof system that natively runs over $\mathbb{Z}_{2^k}$. Unfortunately, their construction requires preprocessed random vector oblivious linear evaluation (VOLE) to be instantiated over $\mathbb{Z}_{2^k}$. Currently, it is not known how to efficiently generate such random VOLE in large quantities. In this work, we present a maliciously secure VOLE extension protocol that can turn a short seed-VOLE over $\mathbb{Z}_{2^k}$ into a much longer, pseudorandom VOLE over the same ring. Our construction borrows ideas from recent protocols over finite fields, which we non-trivially adapt to work over $\mathbb{Z}_{2^k}$. Moreover, we show that the approach taken by the QuickSilver zero-knowledge proof system (Yang et al., CCS 2021) can be generalized to support computations over $\mathbb{Z}_{2^k}$. This new VOLE-based proof system, which we call QuarkSilver, yields better efficiency than the previous zero-knowledge protocols suggested by Baum et al. Furthermore, we implement both our VOLE extension and our zero-knowledge proof system, and show that they can generate 13--50 million VOLEs per second for 64- to 256-bit rings, and evaluate 1.3 million 64-bit multiplications per second in zero-knowledge.
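The VOLE correlation itself is easy to write down over $\mathbb{Z}_{2^{64}}$; the sketch below is ours, with locally sampled stand-in values (in the protocol they are generated obliviously, so neither party sees the other's share), and shows why CPU-native wraparound arithmetic is the natural fit:

```python
# The VOLE correlation w = v + x * delta over Z_{2^64}.
import random

MASK = (1 << 64) - 1        # reduction mod 2^64, exactly as a 64-bit CPU does
rng = random.Random(0)

delta = rng.getrandbits(64)                    # verifier's global key
x = [rng.getrandbits(64) for _ in range(8)]    # prover's values
v = [rng.getrandbits(64) for _ in range(8)]    # verifier's local shares
w = [(vi + xi * delta) & MASK for vi, xi in zip(v, x)]  # prover's tags

# The defining relation holds coordinate-wise mod 2^64:
assert all((wi - xi * delta) & MASK == vi for wi, xi, vi in zip(w, x, v))
```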
2022
CRYPTO
Le Mans: Dynamic and Fluid MPC for Dishonest Majority 📺
Most MPC protocols require the set of parties to be active for the entire duration of the computation. Deploying MPC for use cases such as complex and resource-intensive scientific computations increases the barrier of entry for potential participants. The model of Fluid MPC (Crypto 2021) tackles this issue by giving parties the flexibility to participate in the protocol only when their resources are free. As such, the set of parties is dynamically changing over time. In this work, we extend Fluid MPC, which only considered an honest majority, to the setting where the majority of participants at any point in the computation may be corrupt. We do this by presenting variants of the SPDZ protocol which support dynamic participants. Firstly, we describe a \emph{universal preprocessing} for SPDZ, which allows a set of $n$ parties to compute some correlated randomness such that, later on, any subset of the parties can use it to take part in an online secure computation. We complement this with a \emph{Dynamic SPDZ} online phase, designed to work with our universal preprocessing, as well as a protocol for securely realising the preprocessing. Our preprocessing protocol is designed to efficiently use pseudorandom correlation generators; thus, the parties' storage and communication costs can be almost independent of the function being evaluated. We then extend this to support a \emph{fluid online phase}, where the set of parties can dynamically evolve during the online phase. Our protocol achieves \emph{maximal fluidity} and security with abort, similarly to the previous honest-majority construction. Achieving this requires a careful design and techniques to guarantee a small state complexity, allowing us to switch between committees efficiently.
2022
CRYPTO
Overloading the Nonce: Rugged PRPs, Nonce-Set AEAD, and Order-Resilient Channels 📺
We introduce a new security notion that lies right in between pseudorandom permutations (PRPs) and strong pseudorandom permutations (SPRPs). We call this new security notion and any (tweakable) cipher that satisfies it a rugged pseudorandom permutation (RPRP). Rugged pseudorandom permutations lend themselves to some interesting applications, have practical benefits, and lead to novel cryptographic constructions. Analogous to the encode-then-encipher paradigm first proposed by Bellare and Rogaway and later extended by Shrimpton and Terashima, we can transform a variable-length tweakable RPRP into an AEAD scheme. However, we can construct RPRPs more efficiently as they are weaker primitives than SPRPs (the notion traditionally required by the encode-then-encipher paradigm). We can construct RPRPs using two-pass schemes, whereas SPRPs typically require three passes over the input data. We also identify new transformations that yield RUP-secure AEAD schemes with more compact ciphertexts than previously known. Further extending this approach, we arrive at a new generalized notion of authenticated encryption and a matching construction, which we refer to as nonce-set AEAD. Nonce-set AEAD is particularly well-suited in the context of secure channels, like QUIC and DTLS, that operate over unreliable transports and employ a window mechanism at the receiver's end of the channel. We conclude by presenting a generic construction for transforming a nonce-set AEAD scheme into an order-resilient secure channel. Our channel construction sheds new light on order-resilient channels and additionally leads to more compact ciphertexts when instantiated from RPRPs.
2022
CRYPTO
Public Randomness Extraction with Ephemeral Roles and Worst-Case Corruptions 📺
We distill a simple information-theoretic model for randomness extraction, motivated by the task of generating publicly verifiable randomness in blockchain settings, which is closely related to You-Only-Speak-Once (YOSO) protocols (CRYPTO 2021). With the goal of avoiding denial-of-service attacks, parties speak only once and in sequence by broadcasting a public value and forwarding secret values to future parties. Additionally, an unbounded adversary can corrupt any chosen subset of at most t parties. In contrast, existing YOSO protocols only handle random corruptions. As a notable example, considering worst-case corruptions allows us to reduce trust in the role assignment mechanism, which is assumed to be perfectly random in YOSO. We study the maximum corruption threshold t that allows for unconditional randomness extraction in our model: - With respect to feasibility, we give protocols for t corruptions and n=6t+1 or n=5t parties depending on whether the adversary learns secret values forwarded to corrupted parties immediately once they are sent or only once the corrupted party is executed, respectively. Both settings are motivated by practical implementations of secret value forwarding. To design such protocols, we go beyond the committee-based approach that is sufficient for random corruptions in YOSO but turns out to be sub-optimal for chosen corruptions. - To complement our protocols, we show that low-error randomness extraction is impossible with corruption threshold t and n ≤ 4t parties in both settings above. This also provides a separation between chosen and random corruptions, since the latter allows for randomness extraction with close to n/2 random corruptions.
2022
CRYPTO
Lattice-Based SNARKs: Publicly Verifiable, Preprocessing, and Recursively Composable 📺
A succinct non-interactive argument of knowledge (SNARK) allows a prover to produce a short proof that certifies the veracity of a certain NP-statement. In the last decade, a large body of work has studied candidate constructions that are secure against quantum attackers. Unfortunately, no known candidate matches the efficiency and desirable features of (pre-quantum) constructions based on bilinear pairings. In this work, we make progress on this question. We propose the first lattice-based SNARK that simultaneously satisfies many desirable properties: It (i) is tentatively post-quantum secure, (ii) is publicly verifiable, (iii) has a logarithmic-time verifier and (iv) has a purely algebraic structure making it amenable to efficient recursive composition. Our construction stems from a general technical toolkit that we develop to translate pairing-based schemes to lattice-based ones. At the heart of our SNARK is a new lattice-based vector commitment (VC) scheme supporting openings to constant-degree multivariate polynomial maps, which is a candidate solution for the open problem of constructing VC schemes with openings beyond linear functions. However, the security of our constructions is based on a new family of lattice-based computational assumptions which naturally generalises the standard Short Integer Solution (SIS) assumption.
2022
CRYPTO
Succinct Interactive Oracle Proofs: Applications and Limitations 📺
Interactive Oracle Proofs (IOPs) are a new type of proof-system that combines key properties of interactive proofs and PCPs: IOPs enable a verifier to be convinced of the correctness of a statement by interacting with an untrusted prover while reading just a few bits of the messages sent by the prover. IOPs have become very prominent in the design of efficient proof-systems in recent years. In this work we study succinct IOPs, which are IOPs in which the communication complexity is polynomial (or even linear) in the original witness. While there are strong impossibility results for the existence of succinct PCPs (i.e., PCPs whose length is polynomial in the witness), it is known that the rich class of NP relations that are decidable in small space have succinct IOPs. In this work we show both new applications, and limitations, for succinct IOPs: 1. First, using one-way functions, we show how to compile IOPs into zero-knowledge proofs, while nearly preserving the proof length. This complements a recent line of work, initiated by Ben-Sasson et al. (TCC 2016-B), who compile IOPs into super-succinct zero-knowledge arguments. Applying the compiler to the state-of-the-art succinct IOPs yields zero-knowledge proofs for bounded-space NP relations, with communication that is nearly equal to the original witness length. This yields the shortest known zero-knowledge proofs from the minimal assumption of one-way functions. 2. Second, we give a barrier for obtaining succinct IOPs for more general NP relations. In particular, we show that if a language has a succinct IOP, then it can be decided in space that is proportional only to the witness length, after a bounded-time probabilistic preprocessing. We use this result to show that under a simple and plausible (but to the best of our knowledge, new) complexity-theoretic conjecture, there is no succinct IOP for CSAT.
2022
CRYPTO
Formal Verification of Saber’s Public-Key Encryption Scheme in EasyCrypt 📺
In this work, we consider the formal verification of the public-key encryption scheme of Saber, one of the selected few post-quantum cipher suites currently considered for potential standardization. We formally verify this public-key encryption scheme's IND-CPA security and delta-correctness properties, i.e., the properties required to transform the public-key encryption scheme into an IND-CCA2 secure and delta-correct key encapsulation mechanism, in EasyCrypt. To this end, we initially devise hand-written proofs for these properties that are significantly more detailed and meticulous than the presently existing proofs. Subsequently, these hand-written proofs serve as a guideline for the formal verification. The results of this endeavor comprise hand-written and computer-verified proofs which demonstrate that Saber's public-key encryption scheme indeed satisfies the desired security and correctness properties.
2022
CRYPTO
On the Feasibility of Unclonable Encryption, and More 📺
Unclonable encryption, first introduced by Broadbent and Lord (TQC'20), is a one-time encryption scheme with the following security guarantee: any non-local adversary (A, B, C) cannot simultaneously distinguish encryptions of two equal length messages. This notion is termed unclonable indistinguishability. Prior works focused on achieving a weaker notion of unclonable encryption, which only requires that any non-local adversary (A, B, C) cannot simultaneously recover the entire message m. Although seemingly innocuous, understanding the feasibility of encryption schemes satisfying unclonable indistinguishability (even for 1-bit messages) has remained elusive. We make progress towards establishing the feasibility of unclonable encryption. (*) We show that encryption schemes satisfying unclonable indistinguishability exist unconditionally in the quantum random oracle model. (*) Towards understanding the necessity of oracles, we present a negative result stipulating that a large class of encryption schemes cannot satisfy unclonable indistinguishability. (*) Finally, we also establish the feasibility of another closely related primitive: copy-protection for single-bit output point functions. Prior works only established the feasibility of copy-protection for multi-bit output point functions, or they achieved constant security error for single-bit output point functions.
2022
CRYPTO
Syndrome Decoding in the Head: Shorter Signatures from Zero-Knowledge Proofs 📺
Zero-knowledge proofs of knowledge are useful tools to design signature schemes. The ongoing effort to build a quantum computer motivates the cryptography community to develop new secure cryptographic protocols based on quantum-hard cryptographic problems. One of the few directions is code-based cryptography, for which the strongest problem is the syndrome decoding (SD) of random linear codes. This problem is known to be NP-hard and the cryptanalysis state of the art has been stable for many years. A zero-knowledge protocol for this problem was pioneered by Stern in 1993. Since its publication, many articles proposed optimizations, implementations, or variants. In this paper, we introduce a new zero-knowledge proof for the syndrome decoding problem on random linear codes. Instead of using permutations like most of the existing protocols, we rely on the MPC-in-the-head paradigm in which we reduce the task of proving the low Hamming weight of the SD solution to proving some relations between specific polynomials. We propose a 5-round zero-knowledge protocol that proves the knowledge of a vector x such that y=Hx and wt(x) <= w, and which achieves a soundness error close to 1/N for an arbitrary N. When turning this protocol into a signature scheme, we achieve a signature size of 11-12 KB for 128-bit security when relying on the hardness of the SD problem on binary fields. Using bigger fields (like $\mathbb{F}_{2^8}$), we can produce fast signatures of around 8 KB. This allows us to outperform Picnic3 and to be competitive with SPHINCS+, both post-quantum signature candidates in the ongoing NIST standardization effort. Moreover, our scheme outperforms all the existing code-based signature schemes for the common ``signature size + public key size'' metric.
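For concreteness, a toy instance of the underlying syndrome decoding relation (with illustrative, far-below-cryptographic parameters) can be sampled and checked as follows.

```python
# A toy instance of the syndrome decoding (SD) relation the protocol proves:
# knowledge of x with y = H x over F_2 and Hamming weight wt(x) <= w.
import random

n, k, w = 24, 12, 3                  # code length, dimension, weight bound
rows = n - k                         # H is an (n-k) x n parity-check matrix
H = [[random.randint(0, 1) for _ in range(n)] for _ in range(rows)]

# Sample a secret x of weight exactly w and compute its syndrome y = Hx.
support = random.sample(range(n), w)
x = [1 if i in support else 0 for i in range(n)]
y = [sum(H[r][i] * x[i] for i in range(n)) % 2 for r in range(rows)]

def sd_relation_holds(H, y, x, w):
    """The NP relation: correct syndrome and low Hamming weight."""
    syndrome_ok = all(sum(Hr[i] * x[i] for i in range(len(x))) % 2 == yr
                      for Hr, yr in zip(H, y))
    return syndrome_ok and sum(x) <= w

assert sd_relation_holds(H, y, x, w)
```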
2022
CRYPTO
Tight Preimage Resistance of the Sponge Construction 📺
The cryptographic sponge is a popular method for hash function design. In the ideal permutation model, the construction is proven to be indifferentiable from a random oracle up to the birthday bound in the capacity of the sponge. This result in particular implies that, as long as the attack complexity does not exceed this bound, the sponge construction achieves a comparable level of collision, preimage, and second preimage resistance as a random oracle. We investigate these state-of-the-art bounds in detail, and observe that while the collision and second preimage security bounds are tight, the preimage bound is not tight. We derive an improved and tight preimage security bound for the cryptographic sponge construction. The result has direct implications for various lightweight cryptographic hash functions. For example, the NIST Lightweight Cryptography finalist Ascon-Hash generically achieves not merely the claimed $2^{128}$ preimage security, but even $2^{192}$ preimage security. Comparable improvements are obtained for the modes of Spongent, PHOTON, ACE, Subterranean 2.0, and QUARK, among others.
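As a reference point for the construction under discussion, here is a minimal toy sponge in Python; the permutation is a stand-in (truncated SHA-256) and the rate/capacity parameters are illustrative, not those of any standardized instance.

```python
# A toy sponge hash: absorb message blocks into the rate part of the state,
# apply the permutation, then squeeze output blocks. Real instances like
# Ascon-Hash use a dedicated cryptographic permutation.
import hashlib

R, C = 8, 16                 # rate and capacity in bytes; state is R + C bytes

def perm(state: bytes) -> bytes:
    # Stand-in for the public permutation (NOT secure: SHA-256, truncated).
    return hashlib.sha256(state).digest()[:R + C]

def sponge_hash(msg: bytes, outlen: int) -> bytes:
    msg += b"\x80" + b"\x00" * (-(len(msg) + 1) % R)   # simple 10*-style padding
    state = bytes(R + C)
    for i in range(0, len(msg), R):                    # absorbing phase
        block = msg[i:i + R]
        state = perm(bytes(a ^ b for a, b in zip(state[:R], block)) + state[R:])
    out = b""
    while len(out) < outlen:                           # squeezing phase
        out += state[:R]
        state = perm(state)
    return out[:outlen]

print(sponge_hash(b"hello", 32).hex())
```

The preimage bound discussed above depends on both the capacity (here C) and the output length, which is why different sponge-based modes see different improvements.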
2022
CRYPTO
log∗-Round Game-Theoretically-Fair Leader Election 📺
It is well-known that in the presence of majority coalitions, strongly fair coin toss is impossible. A line of recent works has shown that by relaxing the fairness notion to be game-theoretic, we can overcome this classical lower bound. In particular, Chung et al. (CRYPTO'21) showed how to achieve approximately (game-theoretically) fair leader election in the presence of majority coalitions, with round complexity as small as O(log log n). In this paper, we revisit the round complexity of game-theoretically fair leader election. We construct O(log* n)-round leader election protocols that achieve (1-o(1))-approximate fairness in the presence of (1-o(1))n-sized coalitions. Our protocols achieve the same round-fairness trade-offs as Chung et al.'s and have the advantage of being conceptually simpler. Finally, we also obtain game-theoretically fair protocols for committee election, which might be of independent interest.
2022
CRYPTO
Partial Key Exposure Attacks on BIKE, Rainbow and NTRU 📺
In a so-called partial key exposure attack one obtains some information about the secret key, e.g., via some side-channel leakage. This information might be a certain fraction of the secret key bits (erasure model) or some erroneous version of the secret key (error model). The goal is to recover the secret key from the leaked information. There is a common belief that, as opposed to e.g. the RSA cryptosystem, most post-quantum cryptosystems are usually resistant against partial key exposure attacks. We strongly question this belief by constructing partial key exposure attacks on code-based, multivariate, and lattice-based schemes (BIKE, Rainbow and NTRU). Our attacks exploit the redundancy that modern PQ cryptosystems inherently use for efficiency reasons. The application and development of techniques from information set decoding plays a crucial role for achieving our results. On the theoretical side, we show non-trivial information leakage bounds that allow for a polynomial time key recovery attack. As an example, for all schemes the knowledge of a constant fraction of the secret key bits suffices to reconstruct the full key in polynomial time. Even if we no longer insist on polynomial time attacks, most of our attacks extend well and remain feasible up to large erasure and error rates. In the case of BIKE, for example, we obtain attack complexities around 60 bits when half of the secret key bits are erased, or a quarter of the secret key bits are faulty. Our results show that even highly error-prone key leakage of modern PQ cryptosystems may lead to full secret key recoveries.
2022
CRYPTO
Practical Statistically-Sound Proofs of Exponentiation in any Group 📺
A proof of exponentiation (PoE) in a group $\mathbb{G}$ of unknown order allows a prover to convince a verifier that a tuple $(x, q, T, y) \in \mathbb{G} \times \mathbb{N} \times \mathbb{N} \times \mathbb{G}$ satisfies $x^{q^T} = y$. This primitive has recently found exciting applications in the constructions of verifiable delay functions and succinct arguments of knowledge. The most practical PoEs only achieve soundness either under computational assumptions, i.e., they are arguments (Wesolowski, Journal of Cryptology 2020), or in groups that come with the promise of not having any small subgroups (Pietrzak, ITCS 2019). The only statistically-sound PoE in general groups of unknown order is due to Block et al. (CRYPTO 2021), and can be seen as an elaborate parallel repetition of Pietrzak’s PoE: to achieve λ bits of security, say λ = 80, the number of repetitions required (and thus the blow-up in communication) is as large as λ. In this work we propose a statistically-sound PoE for the case where the exponent q is the product of all primes up to some bound B. We show that, in this case, it suffices to run only λ/log(B) parallel instances of Pietrzak’s PoE, which reduces the concrete proof-size compared to Block et al. by an order of magnitude. Furthermore, we show that in the known applications where PoEs are used as a building block such structured exponents are viable. Finally, we also discuss batching of our PoE, showing that many proofs (for the same G and q but different x and T) can be batched by adding only a single element to the proof per additional statement.
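For concreteness, the statement a PoE certifies can be set up as follows in a toy RSA-style group; the honest computation of y is the T-step sequential part, and the final check stands in for what a verifier could do only if the group order were known (which is exactly what a PoE avoids). All parameters are demo values.

```python
# The statement a PoE certifies: y = x^(q^T) in a group of unknown order.
# Here the group is Z_N^* for a toy RSA modulus; a real N would be ~2048 bits.
p_, q_ = 10007, 10009        # two primes, known only to this toy "setup"
N = p_ * q_                  # group Z_N^* of (publicly) unknown order
x, q, T = 5, 3, 1000         # instance (x, q, T); claim: y = x^(q^T) mod N

y = x
for _ in range(T):           # the inherently sequential computation
    y = pow(y, q, N)

# A party who (hypothetically) knew the group order could verify quickly,
# since exponents can then be reduced mod phi(N); gcd(x, N) = 1 here.
phi = (p_ - 1) * (q_ - 1)
assert y == pow(x, pow(q, T, phi), N)
```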
2022
CRYPTO
New Constructions of Collapsing Hashes 📺
Collapsing is the preferred post-quantum security notion for hash functions, needed to lift many classical results to the quantum setting. Unfortunately, the only existing standard-model proofs of collapsing hashes require LWE. We construct the first collapsing hashes from the quantum hardness of any one of the following problems: - LPN in a variety of low noise or high-hardness regimes, essentially matching what is known for collision resistance from LPN. - Finding cycles on certain exponentially-large expander graphs, such as those arising from isogenies on elliptic curves. - The "optimal" hardness of finding collisions in *any* hash function. - The *polynomial* hardness of finding collisions, assuming a certain plausible regularity condition on the hash. As an immediate corollary, we obtain the first statistically hiding post-quantum commitments and post-quantum succinct arguments (of knowledge) under the same assumptions. Our results are obtained by a general theorem which shows how to construct a collapsing hash H' from a post-quantum collision-resistant hash function H, regardless of whether or not H itself is collapsing, assuming H satisfies a certain regularity condition we call "semi-regularity".
2022
CRYPTO
To Label, or Not To Label (in Generic Groups) 📺
Generic groups are an important tool for analyzing the feasibility and infeasibility of group-based cryptosystems. There are two distinct widespread versions of generic groups, Shoup's and Maurer's, the main difference being whether or not group elements are given explicit labels. The two models are often treated as equivalent. In this work, however, we demonstrate that the models are in fact quite different, and care is needed when stating generic group results: - We show that numerous textbook constructions are \emph{not} captured by Maurer, but are captured by Shoup. In the other direction, any construction captured by Maurer \emph{is} captured by Shoup. - For constructions that exist in both models, we show that security is equivalent for ``single stage'' games, but Shoup security is strictly stronger than Maurer security for some ``multi-stage'' games. - The existing generic group un-instantiability results do not apply to Maurer. We fill this gap with a new un-instantiability result. - We explain how the known black box separations between generic groups and identity-based encryption do not fully apply to Shoup, and resolve this by providing such a separation. - We give a new un-instantiability result for the \emph{algebraic} group model.
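A minimal sketch of Shoup's labeled model may help fix ideas: group elements are handed out only as random labels, and a group-operation oracle maps labels back to exponents internally. The implementation choices below (label length, caching) are illustrative; in Maurer's model there would be no labels at all, only handles to internal registers.

```python
# A toy Shoup-style generic group of order p: the adversary sees only opaque
# random labels and may combine them via the operation oracle.
import secrets

p = 101                              # toy group order
_label = {}                          # exponent -> label
_expo  = {}                          # label -> exponent

def label_of(a: int) -> bytes:
    a %= p
    if a not in _label:
        lab = secrets.token_bytes(8)             # fresh random label
        _label[a], _expo[lab] = lab, a
    return _label[a]

def op(lab1: bytes, lab2: bytes) -> bytes:
    """The generic group-operation oracle."""
    return label_of(_expo[lab1] + _expo[lab2])

g = label_of(1)                      # a generator's label
h = op(g, g)                         # label of g^2; the label itself is opaque
assert h == label_of(2)
```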
2022
CRYPTO
Augmented Random Oracles 📺
We propose a new paradigm for justifying the security of random oracle-based protocols, which we call the Augmented Random Oracle Model (AROM). We show that the AROM captures a wide range of important random oracle impossibility results. Thus a proof in the AROM implies some resiliency to such impossibilities. We then consider three ROM transforms which are subject to impossibilities: Fiat-Shamir (FS), Fujisaki-Okamoto (FO), and Encrypt-with-Hash (EwH). We show in each case how to obtain security in the AROM by strengthening the building blocks or modifying the transform. Along the way, we give a couple of other results. We improve the assumptions needed for the FO and EwH impossibilities from indistinguishability obfuscation to circularly secure LWE; we argue that our AROM still captures this improved impossibility. We also demonstrate that there is no ``best possible'' hash function, by giving a pair of security properties, both of which can be instantiated in the standard model separately, which cannot be simultaneously satisfied by a single hash function.
2022
CRYPTO
Low Communication Complexity Protocols, Collision Resistant Hash Functions and Secret Key-Agreement Protocols 📺
We study communication complexity in computational settings where bad inputs may exist, but they should be hard to find for any computationally bounded adversary. We define a model where there is a source of public randomness but the inputs are chosen by a computationally bounded adversarial participant \emph{after seeing the public randomness}. We show that breaking some known communication lower bounds in this model is closely connected to known cryptographic assumptions. We consider the simultaneous messages model and the interactive communication model and show that for any non-trivial predicate (such as equality): 1. Breaking the \Omega(\sqrt n) bound in the simultaneous message case or the \Omega(\log n) bound in the interactive communication case implies the existence of distributional collision-resistant hash functions (dCRH). Note that with a CRH the lower bounds can be broken. This is shown using techniques from Babai and Kimmel [CCC 1997]. 2. There are no protocols of constant communication in this preset randomness setting (unlike the plain public randomness model). The other model we study is that of a stateful ``free talk'', where participants can communicate freely \emph{before} the inputs are chosen and may maintain a state, and the communication complexity is measured only afterwards. We show that efficient protocols for equality in this model imply secret key-agreement protocols (in a constructive sense). On the other hand, secret key-agreement protocols imply protocols for equality that are optimal in terms of error.
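The following sketch illustrates the remark in item 1: once a collision-resistant hash is available, equality can be decided in the simultaneous messages model with communication independent of the input length. SHA-256 here is a stand-in for a generic CRH, and the names are illustrative.

```python
# Simultaneous-messages equality with a CRH: Alice and Bob each send only a
# short digest of their n-bit input to the referee. A computationally bounded
# adversary choosing the inputs (even after seeing public randomness) cannot
# find x != y with matching digests.
import hashlib

def message(inp: bytes) -> bytes:
    return hashlib.sha256(inp).digest()      # 32 bytes, independent of n

def referee(msg_a: bytes, msg_b: bytes) -> bool:
    return msg_a == msg_b                    # decides EQUALITY

x = b"a" * 1_000_000                         # n is large ...
y = b"a" * 1_000_000
assert referee(message(x), message(y))       # ... communication is 64 bytes total
assert not referee(message(x), message(y[:-1] + b"b"))
```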
2022
CRYPTO
On Codes and Learning with Errors over Function Fields 📺
It is a long-standing open problem to find search-to-decision reductions for structured versions of the decoding problem of linear codes. Such results in the lattice-based setting have been carried out using number fields: Polynomial-LWE, Ring-LWE, Module-LWE and so on. We propose a function field version of the LWE problem. This new framework leads to another point of view on structured codes, e.g. quasi-cyclic codes, strengthening the connection between lattice-based and code-based cryptography. In particular, we obtain the first search-to-decision reduction for structured codes. Following the historical constructions in lattice-based cryptography, we instantiate our construction with function field analogues of cyclotomic fields, namely Carlitz extensions, leading to search-to-decision reductions on various versions of Ring-LPN, which have applications to secure multiparty computation and to an authentication protocol.
2022
CRYPTO
Threshold Signatures with Private Accountability 📺
Existing threshold signature schemes come in two flavors: (i) fully private, where the signature reveals nothing about the set of signers that generated the signature, and (ii) accountable, where the signature completely identifies the set of signers. In this paper we propose a new type of threshold signature, called TAPS, that is a hybrid of privacy and accountability. A TAPS signature is fully private from the public's point of view. However, an entity that has a secret tracing key can trace a signature to the threshold of signers that generated it. A TAPS makes it possible for an organization to keep its inner workings private, while ensuring that signers are accountable for their actions. We construct a number of TAPS schemes. First, we present a generic construction that builds a TAPS from any accountable threshold signature. This generic construction is not efficient, and we next focus on efficient schemes based on standard assumptions. We build two efficient TAPS schemes (in the random oracle model) based on the Schnorr signature scheme. We conclude with a number of open problems relating to efficient TAPS.
2022
CRYPTO
Sustained Space and Cumulative Complexity Trade-offs for Data-Dependent Memory-Hard Functions 📺
Memory-hard functions (MHFs) are a useful cryptographic primitive which can be used to design egalitarian proof of work puzzles and to protect low entropy secrets like passwords against brute-force attackers. Intuitively, a memory-hard function is a function whose evaluation costs are dominated by memory costs even if the attacker uses specialized hardware (FPGAs/ASICs), and several cost metrics have been proposed to quantify this intuition. For example, space-time cost looks at the product of running time and the maximum space usage over the entire execution of an algorithm. Alwen and Serbinenko (STOC 2015) observed that the space-time cost of evaluating a function multiple times may not scale linearly in the number of instances being evaluated and introduced the stricter requirement that a memory-hard function has high cumulative memory complexity (CMC) to ensure that an attacker's amortized space-time costs remain large even if the attacker evaluates the function on multiple different inputs in parallel. Alwen et al. (EUROCRYPT 2018) observed that the notion of CMC still gives the attacker undesirable flexibility in selecting space-time tradeoffs, e.g., while the MHF scrypt has maximal CMC $\Omega(N^2)$, an attacker could evaluate the function with constant $O(1)$ memory in time $O(N^2)$. Alwen et al. introduced an even stricter notion of Sustained Space complexity and designed an MHF with sustained space complexity $s=\Omega(N/\log N)$ for $t=\Omega(N)$ steps, i.e., any algorithm evaluating the function in the parallel random oracle model must have at least $t=\Omega(N)$ steps where the memory usage is at least $\Omega(N/\log N)$. In this work we use dynamic pebbling games and dynamic graphs to explore tradeoffs between sustained space complexity and cumulative memory complexity for data-dependent memory-hard functions such as Argon2id and scrypt. We design our own dynamic graph (dMHF) with the property that {\em any} dynamic pebbling strategy either (1) has $\Omega(N)$ rounds with $\Omega(N)$ space, or (2) has CMC $\Omega(N^{3-\epsilon})$ --- substantially larger than $N^2$. For Argon2id we show that {\em any} dynamic pebbling strategy either (1) has $\Omega(N)$ rounds with $\Omega(N^{1-\epsilon})$ space, or (2) has CMC $\omega(N^2)$. We also present a dynamic version of DRSample (Alwen et al. 2017) for which {\em any} dynamic pebbling strategy either (1) has $\Omega(N)$ rounds with $\Omega(N/\log N)$ space, or (2) has CMC $\Omega(N^3/\log N)$.
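To illustrate the space-time flexibility discussed above, here is a toy version of scrypt's data-dependent core (in the spirit of ROMix, with illustrative parameters and a stand-in hash): the second function evaluates the same output with O(1) memory at the cost of roughly N times more hash calls, which is exactly the kind of tradeoff CMC permits and sustained space complexity rules out.

```python
# Toy data-dependent MHF core: fill a table of hash labels, then make N
# data-DEPENDENT lookups into it. An attacker may drop the table and
# recompute each lookup from scratch, trading O(N) memory for O(N^2) hashing.
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def romix(seed: bytes, N: int) -> bytes:
    V, x = [], H(seed)
    for _ in range(N):          # fill the table: V[i] = H^(i+1)(seed)
        V.append(x)
        x = H(x)
    for _ in range(N):          # index of each lookup depends on current x
        j = int.from_bytes(x[:4], "big") % N
        x = H(bytes(a ^ b for a, b in zip(x, V[j])))
    return x

def romix_low_memory(seed: bytes, N: int) -> bytes:
    def recompute(j):           # recompute V[j] with O(1) memory, O(N) time
        x = H(seed)
        for _ in range(j):
            x = H(x)
        return x
    x = recompute(N - 1)
    x = H(x)                    # x = H^(N+1)(seed), as after the fill loop
    for _ in range(N):
        j = int.from_bytes(x[:4], "big") % N
        x = H(bytes(a ^ b for a, b in zip(x, recompute(j))))
    return x

assert romix(b"pw", 64) == romix_low_memory(b"pw", 64)
```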
2022
CRYPTO
Orion: Zero Knowledge Proof with Linear Prover Time 📺
Zero-knowledge proofs are powerful cryptographic primitives that have found various applications in the real world. However, existing schemes suffer from a high overhead on the proof generation time that is super-linear in the size of the statement represented as an arithmetic circuit, limiting their efficiency and scalability in practice. In this paper, we present Orion, a new zero-knowledge argument system that achieves $O(N)$ prover time of field operations and hash functions and $O(\log^2 N)$ proof size. Orion is concretely efficient and our implementation shows that the prover time is 3.09s and the proof size is 1.5MB for a circuit with $2^{20}$ multiplication gates. The prover time is the fastest among all existing systems, and the proof size is an order of magnitude smaller than a recent scheme of Golovnev et al. (2021). In particular, our scheme proposes two new techniques leading to the efficiency improvement. (1) We propose a new algorithm to test whether or not a random bipartite graph is a lossless expander graph, based on the densest subgraph algorithm. It allows us to sample lossless expanders with overwhelming probability. The technique improves the efficiency and/or security of all existing zero-knowledge argument schemes with a linear prover time. The testing algorithm based on the densest subgraph may be of independent interest for other applications of expander graphs. (2) We develop an efficient proof composition scheme, code switching, to reduce the proof size from square root to polylogarithmic. The scheme is built on the encoding circuit of a linear code and shows that the witness of a second zero-knowledge argument is the same as the message in the linear code. The proof composition only introduces a small overhead on the prover time.
2022
CRYPTO
Block-Cipher-Based Tree Hashing 📺
First of all, we take a thorough look at an error in a paper by Daemen et al. (ToSC 2018), which looks at minimal requirements for tree-based hashing based on multiple primitives, including block ciphers. This reveals that the error is more fundamental than previously shown by Gunsing et al. (ToSC 2020), which was mainly concerned with its effect on the security bounds. It turns out that the cause of the error is an essential oversight in the interaction between the different oracles used in the indifferentiability proofs. In essence, it reduces the claim from the normal indifferentiability setting to the weaker sequential indifferentiability one. As a matter of fact, this error appeared in multiple earlier indifferentiability papers, including the optimal indifferentiability of the sum of permutations (EUROCRYPT 2018) and the recent ABR+ construction (EUROCRYPT 2021). We discuss in detail how this oversight is caused and how it can be avoided. We next demonstrate how the negative effects on the security bound of the construction by Daemen et al. can be resolved. Instead of only allowing a truncated output, we generalize the construction to allow for any finalization function and investigate the security of this for five different types of finalization. Our findings, among others, show that the security of the SHA-2 mode does not degrade if the feed-forward is dropped and that the modern BLAKE3 construction is secure in principle but that its use of the extendable output requires its counter used for random access to be public. Finally, we introduce the tree sponge, a generalization of the sequential sponge construction with parallel absorbing and squeezing.
2022
CRYPTO
A More Complete Analysis of the Signal Double Ratchet Algorithm 📺
Seminal works by Cohn-Gordon, Cremers, Dowling, Garratt, and Stebila [Journal of Cryptology 2020] and Alwen, Coretti, and Dodis [EUROCRYPT 2019] provided the first formal frameworks for studying the widely-used Signal Double Ratchet (DR for short) algorithm. In this work, we develop a new Universally Composable (UC) definition F_DR that we show is provably achieved by the DR protocol. Our definition captures not only the security and correctness guarantees of the DR already identified in the prior state-of-the-art analyses of Cohn-Gordon et al. and Alwen et al., but also more guarantees that are absent from one or both of these works. In particular, we construct six different modified versions of the DR protocol, all of which are insecure according to our definition F_DR, but remain secure according to one (or both) of their definitions. For example, our definition is the first to capture CCA-style attacks possible immediately after a compromise — attacks that, as we show, the DR protocol provably resists, but were not captured by prior definitions. We additionally show that multiple compromises of a party in a short time interval, which the DR should be able to withstand, as we understand from its whitepaper, nonetheless introduce a new non-trivial (albeit minor) weakness of the DR. Since the definitions in the literature (including our F_DR above) do not capture security against this more nuanced scenario, we define a new stronger definition F_TR that does. Finally, we provide a minimalistic modification to the DR (that we call the Triple Ratchet, or TR for short) and show that the resulting protocol securely realizes the stronger functionality F_TR. Remarkably, the modification incurs no additional communication cost and virtually no additional computational cost. We also show that these techniques can be used to improve communication costs in other scenarios, e.g. practical Updatable Public Key Encryption schemes and the re-randomized TreeKEM protocol of Alwen et al. [CRYPTO 2020] for Secure Group Messaging.
2022
CRYPTO
Structure-Aware Private Set Intersection, With Applications to Fuzzy Matching 📺
In two-party private set intersection (PSI), Alice holds a set $X$, Bob holds a set $Y$, and they learn (only) the contents of $X \cap Y$. We introduce \textbf{structure-aware PSI} protocols, which take advantage of situations where Alice's set $X$ is publicly known to have a certain structure. The goal of structure-aware PSI is to have communication that scales with the \emph{description size} of Alice's set, rather than its \emph{cardinality}. We introduce a new generic paradigm for structure-aware PSI based on function secret-sharing (FSS). In short, if there exists compact FSS for a class of structured sets, then there exists a semi-honest PSI protocol that supports this class of input sets, with communication cost proportional only to the FSS share size. Several prior protocols for efficient (plain) PSI can be viewed as special cases of our new paradigm, with an implicit FSS for unstructured sets. Our PSI protocol can be instantiated from a significantly weaker flavor of FSS, which has not been previously studied. We develop several improved FSS techniques that take advantage of these relaxed requirements, and which are in some cases exponentially better than existing FSS. Finally, we explore in depth a natural application of structure-aware PSI. If Alice's set $X$ is the union of many radius-$\delta$ balls in some metric space, then an intersection between $X$ and $Y$ corresponds to \textbf{fuzzy PSI}, in which the parties learn which of their points are within distance $\delta$. In structure-aware PSI, the communication cost scales with the number of balls in Alice's set, rather than their total volume. Our techniques lead to efficient fuzzy PSI for $\ell_\infty$ and $\ell_1$ metrics (and approximations of $\ell_2$ metric) in high dimensions. We implemented this fuzzy PSI protocol for 2-dimensional $\ell_\infty$ metrics. For reasonable input sizes, our protocol requires 45--60\% less time and 85\% less communication than competing approaches that simply reduce the problem to plain PSI.
2022
CRYPTO
Nearly Optimal Property Preserving Hashing 📺
Property-preserving hashing (PPH) consists of a family of compressing hash functions $h$ such that, for any two inputs $x,y$, we can correctly identify whether some property $P(x,y)$ holds given only the digests $h(x),h(y)$. In a basic PPH, correctness should hold with overwhelming probability over the choice of $h$ when $x,y$ are worst-case values chosen a priori and independently of $h$. In an adversarially robust PPH (RPPH), correctness must hold even when $x,y$ are chosen adversarially and adaptively depending on $h$. Here, we study (R)PPH for the property that the Hamming distance between $x$ and $y$ is at most $t$. The notion of (R)PPH was introduced by Boyle, LaVigne and Vaikuntanathan (ITCS '19), and further studied by Fleischhacker and Simkin (Eurocrypt '21) and Fleischhacker, Larsen, and Simkin (Eurocrypt '22). In this work, we obtain improved constructions that are conceptually simpler, have nearly optimal parameters, and rely on more general assumptions than prior works. Our results are: * We construct information-theoretic non-robust PPH for Hamming distance via syndrome list-decoding of linear error-correcting codes. We provide a lower bound showing that this construction is essentially optimal. * We make the above construction robust with little additional overhead, by relying on homomorphic collision-resistant hash functions, which can be constructed from either the discrete-logarithm or the short-integer-solution assumptions. The resulting RPPH achieves improved compression compared to prior constructions, and is nearly optimal. * We also show an alternate construction of RPPH for Hamming distance under the minimal assumption that standard collision-resistant hash functions exist. The compression is slightly worse than our optimized construction using homomorphic collision-resistance, but essentially matches the prior state of the art constructions from specific algebraic assumptions. * Lastly, we study a new notion of randomized robust PPH (R2P2H) for Hamming distance, which relaxes RPPH by allowing the hashing algorithm itself to be randomized. We give an information-theoretic construction with optimal parameters.
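The linear-algebra idea behind the syndrome-based construction can be sketched as follows: digests are syndromes, so the XOR of two digests equals the syndrome of the inputs' difference, and closeness is decided by decoding that syndrome. The sketch uses brute-force decoding over toy parameters; the actual construction relies on list-decoding, and a random toy matrix can produce occasional false positives.

```python
# Non-robust PPH for Hamming distance, toy version: h(x) = H x over F_2, so
# h(x) ^ h(y) = H (x ^ y), and dist(x, y) <= t reduces to decoding a syndrome.
import itertools, random

n, t, rows = 16, 2, 10
H = [[random.randint(0, 1) for _ in range(n)] for _ in range(rows)]

def h(x):  # the digest: the syndrome H x over F_2
    return tuple(sum(Hr[i] * x[i] for i in range(n)) % 2 for Hr in H)

def close(dig_x, dig_y):
    """Decide dist(x, y) <= t from digests alone, by brute-force decoding
    the syndrome of the difference (a real construction list-decodes)."""
    target = tuple(a ^ b for a, b in zip(dig_x, dig_y))
    for wt in range(t + 1):
        for supp in itertools.combinations(range(n), wt):
            e = [1 if i in supp else 0 for i in range(n)]
            if h(e) == target:
                return True
    return False

x = [random.randint(0, 1) for _ in range(n)]
y = x[:]; y[0] ^= 1; y[5] ^= 1                  # dist(x, y) = 2 <= t
assert close(h(x), h(y))
```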
2022
CRYPTO
Sharing Transformation and Dishonest Majority MPC with Packed Secret Sharing 📺
In the last few years, the efficiency of secure multi-party computation (MPC) in the dishonest majority setting has increased by several orders of magnitude, starting with the SPDZ protocol family which offers a speedy information-theoretic online phase in the preprocessing model. However, state-of-the-art n-party MPC protocols in the dishonest majority setting incur online communication complexity per multiplication gate that is linear in the number of parties, i.e., O(n) across all parties. In this work, we construct the first MPC protocols in the preprocessing model for dishonest majority with sublinear communication complexity per gate in the number of parties n. To achieve our results, we extend the use of packed secret sharing to the dishonest majority setting. For a constant fraction of corrupted parties (e.g., if 99 percent of the parties are corrupt), we can achieve a communication complexity of O(1) field elements per multiplication gate across all parties. At the crux of our techniques lies a new technique called sharing transformation. The sharing transformation technique allows us to transform shares under one type of linear secret sharing scheme into another, and even perform arbitrary linear maps on the secrets of (packed) secret sharing schemes with optimal communication complexity. This technique can be of independent interest since transferring shares from one type of scheme into another (e.g., for degree reduction) is ubiquitous in MPC. Furthermore, we introduce what we call sparsely packed Shamir sharing, which allows us to address the issue of network routing efficiently, and packed Beaver triples, which are an extension of the widely used technique of Beaver triples to packed secret sharing (for dishonest majority).
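For readers unfamiliar with the packing idea, here is a minimal Python sketch of packed Shamir sharing with toy parameters; the field, evaluation points, and helper names are all illustrative choices, not the paper's.

```python
# Packed Shamir sharing: k secrets are embedded as the evaluations of one
# degree-(t+k-1) polynomial at k fixed points, so a single share vector
# hides a whole SIMD block of values (t-privacy, k+t shares to reconstruct).
import random

P = 2**61 - 1                       # prime field
k, t, n = 3, 2, 10                  # secrets per sharing, privacy, parties
SECRET_XS = [-1, -2, -3]            # positions (mod P) that hold the secrets
PARTY_XS  = list(range(1, n + 1))   # parties' evaluation points

def interpolate(points, x):
    """Lagrange interpolation at x over F_P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def share(secrets_list):
    # Fix the polynomial by its values at the k secret points plus t random
    # anchor points; its degree is at most k + t - 1.
    anchors = list(zip(SECRET_XS, secrets_list))
    anchors += [(1000 + i, random.randrange(P)) for i in range(t)]
    return [interpolate(anchors, x) for x in PARTY_XS]

shares = share([11, 22, 33])
# Any k + t = 5 shares suffice to reconstruct all k packed secrets at once:
subset = list(zip(PARTY_XS, shares))[: k + t]
assert [interpolate(subset, sx % P) for sx in SECRET_XS] == [11, 22, 33]
```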
2022
CRYPTO
Public-Coin 3-Round Zero-Knowledge from Learning with Errors and Keyless Multi-Collision-Resistant Hash 📺
We construct a public-coin 3-round zero-knowledge argument for NP assuming (i) the sub-exponential hardness of the learning with errors (LWE) problem and (ii) the existence of keyless multi-collision-resistant hash functions against slightly super-polynomial-time adversaries. These assumptions are almost identical to those that were used recently to obtain a private-coin 3-round zero-knowledge argument [Bitansky et al., STOC 2018]. (The difference is that we assume sub-exponential hardness instead of quasi-polynomial hardness for the LWE problem.)
2022
CRYPTO
Succinct Classical Verification of Quantum Computation 📺
We construct a classically verifiable succinct interactive argument for quantum computation (BQP) with communication complexity and verifier runtime that are poly-logarithmic in the runtime of the BQP computation (and polynomial in the security parameter). Our protocol is secure assuming the post-quantum security of indistinguishability obfuscation (iO) and Learning with Errors (LWE). This is the first succinct argument for quantum computation in the plain model; prior work (Chia-Chung-Yamakawa, TCC ’20) requires both a long common reference string and non-black-box use of a hash function modeled as a random oracle. At a technical level, we revisit the framework for constructing classically verifiable quantum computation (Mahadev, FOCS ’18). We give a self-contained, modular proof of security for Mahadev’s protocol, which we believe is of independent interest. Our proof readily generalizes to a setting in which the verifier’s first message (which consists of many public keys) is compressed. Next, we formalize this notion of compressed public keys; we view the object as a generalization of constrained/programmable PRFs and instantiate it based on indistinguishability obfuscation. Finally, we compile the above protocol into a fully succinct argument using a (sufficiently composable) succinct argument of knowledge for NP. Using our framework, we achieve several additional results, including – Succinct arguments for QMA (given multiple copies of the witness), – Succinct non-interactive arguments for BQP (or QMA) in the quantum random oracle model, and – Succinct batch arguments for BQP (or QMA) assuming post-quantum LWE (without iO).
2022
CRYPTO
Nova: Recursive Zero-Knowledge Arguments from Folding Schemes 📺
We introduce a new approach to realize incrementally verifiable computation (IVC), in which the prover recursively proves the correct execution of incremental computations of the form y=F^{(\ell)}(x), where F is a (potentially non-deterministic) computation, x is the input, y is the output, and \ell > 0. Unlike prior approaches to realize IVC, our approach avoids succinct non-interactive arguments of knowledge (SNARKs) entirely (and arguments of knowledge in general). Instead, we introduce and employ folding schemes, a weaker, simpler, and more efficiently-realizable primitive, which reduces the task of checking two instances in some relation to the task of checking a single instance. We construct a folding scheme for NP and show that it implies an IVC scheme with improved efficiency characteristics: (1) the "recursion overhead" (i.e., the number of steps that the prover proves in addition to proving the execution of F) is a constant and it is dominated by two group scalar multiplications expressed as a circuit (this is the smallest recursion overhead in the literature) and (2) the prover's work at each step is dominated by two multiexponentiations of size O(|F|), providing the fastest prover in the literature. The size of a proof is O(|F|) group elements, but we show that using a variant of an existing zkSNARK, the prover can prove the knowledge of a valid proof succinctly and in zero-knowledge with O(\log{|F|}) group elements. Finally, our approach neither requires a trusted setup nor FFTs, so it can be instantiated efficiently with any cycles of elliptic curves where DLOG is hard.
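The core folding step can be checked numerically. The sketch below works with the relaxed form of R1CS, $(Az) \circ (Bz) = u \cdot (Cz) + E$, over a toy field, and omits the commitments that make the real scheme succinct and non-interactive; the matrix sizes and names are illustrative.

```python
# Folding two satisfying relaxed-R1CS pairs into one with a random r; the
# cross term T absorbs the mixed products so the folded pair satisfies the
# same relation. (Toy field, random matrices, no commitments.)
import random

P = 2**61 - 1
def matvec(M, v):  return [sum(a * b for a, b in zip(row, v)) % P for row in M]
def hada(a, b):    return [(x * y) % P for x, y in zip(a, b)]

m = 4
A, B, C = ([[random.randrange(P) for _ in range(m)] for _ in range(m)]
           for _ in range(3))

def random_relaxed_instance():
    z = [random.randrange(P) for _ in range(m)]
    u = random.randrange(P)
    az, bz, cz = matvec(A, z), matvec(B, z), matvec(C, z)
    E = [(p - u * c) % P for p, c in zip(hada(az, bz), cz)]  # slack term
    return z, u, E

def satisfied(z, u, E):
    az, bz, cz = matvec(A, z), matvec(B, z), matvec(C, z)
    return all((x * y) % P == (u * c + e) % P
               for x, y, c, e in zip(az, bz, cz, E))

(z1, u1, E1), (z2, u2, E2) = random_relaxed_instance(), random_relaxed_instance()
assert satisfied(z1, u1, E1) and satisfied(z2, u2, E2)

az1, bz1, cz1 = matvec(A, z1), matvec(B, z1), matvec(C, z1)
az2, bz2, cz2 = matvec(A, z2), matvec(B, z2), matvec(C, z2)
T = [(a1 * b2 + a2 * b1 - u1 * c2 - u2 * c1) % P
     for a1, b1, c1, a2, b2, c2 in zip(az1, bz1, cz1, az2, bz2, cz2)]
r = random.randrange(P)
z = [(a + r * b) % P for a, b in zip(z1, z2)]
u = (u1 + r * u2) % P
E = [(e1 + r * t_ + r * r * e2) % P for e1, t_, e2 in zip(E1, T, E2)]
assert satisfied(z, u, E)    # the folded pair satisfies relaxed R1CS again
```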
2022
CRYPTO
Authenticated garbling from simple correlations 📺
We revisit the problem of constant-round malicious secure two-party computation by considering the use of simple correlations, namely sources of correlated randomness that can be securely generated with sublinear communication complexity and good concrete efficiency. The current state-of-the-art protocol of Katz et al. (Crypto 2018) achieves malicious security by realizing a variant of the authenticated garbling functionality of Wang et al. (CCS 2017). Given oblivious transfer correlations, the communication cost of this protocol (with 40 bits of statistical security) is comparable to roughly 10 garbled circuits (GCs). This protocol inherently requires more than 2 rounds of interaction. In this work, we use other kinds of simple correlations to realize the authenticated garbling functionality with better efficiency. Concretely, we get the following reduced costs in the random oracle model: - Using variants of both vector oblivious linear evaluation (VOLE) and multiplication triples (MT), we reduce the cost to 1.31 GCs. - Using only variants of VOLE, we reduce the cost to 2.25 GCs. - Using only variants of MT, we obtain a non-interactive (i.e., 2-message) protocol with cost comparable to 7.47 GCs. Finally, we show that by using recent constructions of pseudorandom correlation generators (Boyle et al., CCS 2018, Crypto 2019, 2020), the simple correlations consumed by our protocols can be securely realized without forming an efficiency bottleneck.
2022
CRYPTO
MuSig-L: Lattice-Based Multi-Signature With Single-Round Online Phase 📺
Multi-signatures are protocols that allow a group of signers to jointly produce a single signature on the same message. In recent years, a number of practical multi-signature schemes have been proposed in the discrete-log setting, such as MuSig2 (CRYPTO'21) and DWMS (CRYPTO'21). The main technical challenge in constructing a multi-signature scheme is to achieve a set of several desirable properties, such as (1) security in the plain public-key (PPK) model, (2) concurrent security, (3) low online round complexity, and (4) key aggregation. However, previous lattice-based, post-quantum counterparts to Schnorr multi-signatures fail to satisfy these properties. In this paper, we introduce MuSig-L, a lattice-based multi-signature scheme simultaneously achieving these design goals for the first time. Unlike the recent, round-efficient proposal of Damgård et al. (PKC'21), which had to rely on lattice-based trapdoor commitments, we do not require any additional primitive in the protocol, while being able to prove security from the standard module-SIS and LWE assumptions. The resulting output signature of our scheme therefore looks closer to the usual Fiat--Shamir-with-abort signatures.
2022
CRYPTO
Differential Cryptanalysis in the Fixed-Key Model 📺
A systematic approach to the fixed-key analysis of differential probabilities is proposed. It is based on the propagation of 'quasidifferential trails', which keep track of probabilistic linear relations on the values satisfying a differential characteristic in a theoretically sound way. It is shown that the fixed-key probability of a differential can be expressed as the sum of the correlations of its quasidifferential trails. The theoretical foundations of the method are based on an extension of the difference-distribution table, which we call the quasidifferential transition matrix. The role of these matrices is analogous to that of correlation matrices in linear cryptanalysis. This puts the theory of differential and linear cryptanalysis on an equal footing. The practical applicability of the proposed methodology is demonstrated by analyzing several differentials for RECTANGLE, KNOT, Speck and Simon. The analysis is automated and applicable to other SPN and ARX designs. Several attacks are shown to be invalid; most others turn out to work only for some keys, but can be improved for weak keys.
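As a reminder of the object being refined here, the classical difference-distribution table of a small S-box can be computed as follows; the 4-bit PRESENT S-box is used purely as a familiar example.

```python
# The classical difference-distribution table (DDT) that the quasidifferential
# transition matrix refines: entry DDT[a][b] counts inputs x with
# S(x) ^ S(x ^ a) == b.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT's 4-bit S-box

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for a in range(n):
        for x in range(n):
            table[a][sbox[x] ^ sbox[x ^ a]] += 1
    return table

T = ddt(SBOX)
# DDT[a][b] / 16 is the AVERAGE probability of the differential a -> b;
# quasidifferential trails explain how the fixed-key probability deviates.
assert T[0][0] == 16 and all(sum(row) == 16 for row in T)
```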
2022
CRYPTO
Practical Sublinear Proofs for R1CS from Lattices 📺
We propose a practical sublinear-size zero-knowledge proof system for Rank-1 Constraint Satisfaction (R1CS) based on lattices. The proof size scales asymptotically with the square root of the witness size. Concretely, the size becomes 2-3 times smaller than Ligero (ACM CCS 2017), which also exhibits square root scaling, for large instances of R1CS. At the core lies an interactive variant of the Schwartz-Zippel Lemma that might be of independent interest.
2022
CRYPTO
Multimodal Private Signatures 📺
We introduce Multimodal Private Signature (MPS), an anonymous signature system that offers a novel accountability feature: it allows a designated opening authority to learn \emph{some partial information}~$\mathsf{op}$ about the signer's identity $\mathsf{id}$, and nothing beyond. Such partial information can flexibly be defined as $\mathsf{op} = \mathsf{id}$ (as in group signatures), or as $\mathsf{op} = \mathbf{0}$ (like in ring signatures), or more generally, as $\mathsf{op} = G_j(\mathsf{id})$, where $G_j(\cdot)$ is a certain disclosing function. Importantly, the value of $\mathsf{op}$ is known in advance by the signer, and hence, the latter can decide whether she/he wants to disclose that piece of information. The concept of MPS significantly generalizes the notion of tracing in traditional anonymity-oriented signature primitives, and can enable various new and appealing privacy-preserving applications. We formalize the definitions and security requirements for MPS. We next present a generic construction to demonstrate the feasibility of designing MPS in a modular manner and from commonly used cryptographic building blocks (ordinary signatures, public-key encryption and NIZKs). We also provide an efficient construction in the standard model based on pairings, and a lattice-based construction in the random oracle model.
2022
CRYPTO
Provably Secure Reflection Ciphers 📺
This paper provides the first analysis of reflection ciphers such as PRINCE from a provable security viewpoint. As a first contribution, we initiate the study of key-alternating reflection ciphers in the ideal permutation model. Specifically, we prove the security of the two-round case and give matching attacks. The resulting security bound takes the form $O(qp^2/2^{2n}+q^2/2^n)$, where q is the number of construction evaluations and p is the number of direct adversarial queries to the underlying permutation. Since the two-round construction already achieves an interesting security lower bound, this result can also be of interest for the construction of reflection ciphers based on a single public permutation. Our second contribution is a generic key-length extension method for reflection ciphers. It provides an attractive alternative to the FX construction, which is used by PRINCE and other concrete key-alternating reflection ciphers. We show that our construction leads to better security with minimal changes to existing designs. The security proof is in the ideal cipher model and relies on a reduction to the two-round Even-Mansour cipher with a single round key. In order to obtain the desired result, we sharpen the bad-transcript analysis and consequently improve the best-known bounds for the single-key Even-Mansour cipher with two rounds. This improvement is enabled by a new sum-capture theorem that is of independent interest.
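A toy reflection cipher may clarify the structural property at play: with the PRINCE-like construction below, decryption under key k equals encryption under the related key k XOR alpha, so a single circuit serves both directions. The 8-bit permutation and the constant are illustrative stand-ins for the ideal permutation of the security model, not PRINCE itself.

```python
# A toy key-alternating reflection cipher: E_k(x) = (k ^ a) ^ pi^{-1}(pi(x ^ k) ^ a)
# satisfies E_{k ^ a} o E_k = identity, the alpha-reflection property.
import random

rng = random.Random(0)
PI = list(range(256)); rng.shuffle(PI)          # public permutation pi
PI_INV = [0] * 256
for x, y in enumerate(PI):
    PI_INV[y] = x
ALPHA = 0xA5                                    # the reflection constant

def enc(k: int, x: int) -> int:
    return (k ^ ALPHA) ^ PI_INV[PI[x ^ k] ^ ALPHA]

for k in (0x00, 0x3C, 0xFF):
    for x in range(256):
        assert enc(k ^ ALPHA, enc(k, x)) == x   # E_{k^ALPHA} inverts E_k
```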
2022
CRYPTO
On the Impossibility of Key Agreements from Quantum Random Oracles 📺
We study the following question, first publicly posed by Hosoyamada and Yamakawa in 2018. Can parties A, B with local quantum computing power rely (only) on a random oracle and classical communication to agree on a key? (Note that A, B can now query the random oracle at quantum superpositions.) We make the first progress on the question above and prove the following. – When only one of the parties A is classical and the other party B is quantum powered, as long as they ask a total of d oracle queries and agree on a key with probability 1, then there is always a way to break the key agreement by asking O(d^2) number of classical oracle queries. – When both parties can make quantum queries to the random oracle, we introduce a natural conjecture, which, if true, would imply attacks with poly(d) classical queries to the random oracle. Our conjecture, roughly speaking, states that the multiplication of any two degree-d real-valued polynomials over the Boolean hypercube of influence at most δ = 1/poly(d) is nonzero. We then prove our conjecture for exponentially small influences, which leads to an (unconditional) classical 2^O(md)-query attack on any such key agreement protocol, where m is the random oracle’s output length. – Since our attacks are classical, we then ask whether it is possible to find such classical attacks in general. We prove a barrier for this approach, by showing that if the folklore “simulation conjecture” about the possibility of simulating efficient-query quantum algorithms classically is false, then that implies a possible quantum protocol that cannot be broken by classical adversaries.
2022
CRYPTO
Programmable Distributed Point Functions 📺
A distributed point function (DPF) is a cryptographic primitive that enables compressed additive sharing of a secret unit vector across two or more parties. Despite growing ubiquity within applications and notable research efforts, the best 2-party DPF construction to date remains the tree-based construction from (Boyle et al., CCS’16), with no significantly new approaches since. We present a new framework for 2-party DPF construction, which applies in the setting of feasible (polynomial-size) domains. This captures in particular all DPF applications in which the keys are expanded to the full domain. Our approach is motivated by a strengthened notion we put forth, of programmable DPF (PDPF): in which a short, input-independent “offline” key can be reused for sharing many point functions. – PDPF from OWF. We construct a PDPF for feasible domains from the minimal assumption that one-way functions exist, where the second “online” key size is polylogarithmic in the domain size N. Our approach offers multiple new efficiency features and applications: – Privately puncturable PRFs. Our PDPF gives the first OWF-based privately puncturable PRFs (for feasible domains) with sublinear keys. – O(1)-round distributed DPF Gen. We obtain a (standard) DPF with polylog-size keys that admits an analog of Doerner-shelat (CCS’17) distributed key generation, requiring only O(1) rounds (versus log N). – PCG with 1 short key. Compressing useful correlations for secure computation, where one of the keys is of minimal size. This provides up to exponential communication savings in some application scenarios.
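The object a DPF compresses is easy to exhibit without compression; the sketch below shares a unit vector additively over a toy domain, which is exactly the "expanded to the full domain" form referred to above. Names and parameters are illustrative.

```python
# Additive shares of the secret unit vector beta * e_alpha, WITHOUT the
# compression a DPF provides: here the "keys" are simply the full vectors.
import secrets

P = 2**61 - 1
N, alpha, beta = 16, 5, 42                     # domain size, point, payload

share0 = [secrets.randbelow(P) for _ in range(N)]
share1 = [(-s) % P for s in share0]
share1[alpha] = (share1[alpha] + beta) % P     # shares now sum to beta * e_alpha

recon = [(a + b) % P for a, b in zip(share0, share1)]
assert recon == [beta if i == alpha else 0 for i in range(N)]
# Neither share alone reveals alpha or beta; a (P)DPF replaces these full-size
# vectors with short keys, one of which can be offline and reusable.
```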
2022
CRYPTO
Gossiping for Communication-Efficient Broadcast 📺
Byzantine Broadcast is crucial for many cryptographic protocols such as secret sharing, multiparty computation and blockchain consensus. In this paper we apply \emph{gossiping} (propagating a message by sending to a few random parties who in turn do the same, until the message is delivered) and propose new communication-efficient protocols, under dishonest majority, for Single-Sender Broadcast (BC) and Parallel Broadcast (PBC), improving the state-of-the-art in several ways. As our first warm-up result, we give a randomized protocol for BC which achieves $O(n^2\kappa^2)$ communication complexity from plain public key setup assumptions. This is the first protocol with subcubic communication in this setting, but does so only against static adversaries. Using some ideas from our BC protocol, we then move to our central contribution and present two protocols for PBC that are secure against adaptive adversaries. To the best of our knowledge we are the first to study PBC \emph{specifically}: All previous approaches for parallel BC (PBC) naively run $n$ instances of single-sender Broadcast, increasing the communication complexity by an undesirable factor of $n$. Our insight of avoiding black-box invocations of BC is particularly crucial for achieving our asymptotic improvements. In particular: \begin{enumerate} \item Our first PBC protocol achieves $\tilde{O}(n^3\kappa^2)$ communication complexity and relies only on plain public key setup assumptions. \item Our second PBC protocol uses trusted setup and achieves nearly optimal communication complexity $\tilde{O}(n^2\kappa^4)$. \end{enumerate} Both PBC protocols yield an almost linear improvement over the best known solutions involving $n$ parallel invocations of the respective BC protocols such as those of Dolev and Strong (SIAM Journal on Computing, 1983) and Chan et al. (Public Key Cryptography, 2020). Central to our PBC protocols is a new problem that we define and solve, that we call ``Converge''. In Converge, parties must run an adaptively-secure and \emph{efficient} protocol such that by the end of the protocol, the honest parties that remain possess a superset of the union of the inputs of the initial honest parties.
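A toy simulation of the gossiping primitive (honest parties only, no adversary, illustrative parameters) shows the qualitative behavior such protocols exploit: with logarithmic fan-out, a message reaches essentially all parties within a handful of rounds.

```python
# Push-gossip: each informed party forwards the message to `fanout` random
# parties per round; count the rounds until everyone is informed.
import math, random

def gossip_rounds(n: int, fanout: int) -> int:
    informed = {0}                      # the sender knows the message
    rounds = 0
    while len(informed) < n:
        new = set()
        for _ in informed:
            new.update(random.randrange(n) for _ in range(fanout))
        informed |= new
        rounds += 1
    return rounds

n = 1000
print(gossip_rounds(n, fanout=2 * math.ceil(math.log2(n))))  # typically a handful
```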
2022
EUROCRYPT
Batch-OT with Optimal Rate 📺
We show that it is possible to perform $n$ independent copies of $1$-out-of-$2$ oblivious transfer in two messages, where the communication complexity of the receiver and sender (each) is $n(1+o(1))$ for sufficiently large $n$. Note that this matches the information-theoretic lower bound. Prior to this work, this was only achievable by using the heavy machinery of rate-$1$ fully homomorphic encryption (Rate-$1$ FHE, Brakerski et al., TCC 2019). To achieve rate-$1$ both on the receiver's and sender's end, we use the LPN assumption, with slightly sub-constant noise rate $1/m^{\epsilon}$ for any $\epsilon>0$ together with either the DDH, QR or LWE assumptions. In terms of efficiency, our protocols only rely on linear homomorphism, as opposed to the FHE-based solution which inherently requires an expensive ``bootstrapping'' operation. We believe that in terms of efficiency we compare favorably to existing batch-OT protocols, while achieving superior communication complexity. We show similar results for Oblivious Linear Evaluation (OLE). For our DDH-based solution we develop a new technique that may be of independent interest. We show that it is possible to ``emulate'' the binary group $\mathbb{Z}_2$ (or any other small-order group) inside a prime-order group $\mathbb{Z}_p$ \emph{in a function-private manner}. That is, $\mathbb{Z}_2$ operations are mapped to $\mathbb{Z}_p$ operations such that the outcome of the latter does not reveal additional information beyond the $\mathbb{Z}_2$ outcome. Our encoding technique uses the discrete Gaussian distribution, which to our knowledge was not done before in the context of DDH.
2022
EUROCRYPT
A Correlation Attack on Full SNOW-V and SNOW-Vi 📺
In this paper, a method for searching for correlations between the binary stream of the Linear Feedback Shift Register (LFSR) and the keystream of SNOW-V and SNOW-Vi is presented, based on a technique for approximating composite functions. With the aid of the linear relationship between the four LFSR taps input into the Finite State Machine (FSM) at three consecutive clocks, we present an automatic search model based on the SAT/SMT technique and find a series of linear approximation trails with high correlation. By exhausting the intermediate masks, we find a binary linear approximation with correlation $-2^{-47.76}$. Using this approximation, we propose a correlation attack on SNOW-V with an expected time complexity of $2^{246.53}$, a memory complexity of $2^{238.77}$, and $2^{237.5}$ keystream words generated by the same key and Initial Vector (IV). For SNOW-Vi, we provide a binary linear approximation with the same correlation and mount a correlation attack with the same complexity as that on SNOW-V. To the best of our knowledge, this is the first efficient attack on full SNOW-V and SNOW-Vi that is faster than exhaustive key search. The results indicate that neither SNOW-V nor SNOW-Vi can guarantee the 256-bit security level if we ignore the design constraint that the maximum length of keystream for a single pair of key and IV is less than $2^{64}$.
2022
EUROCRYPT
On the security of ECDSA with additive key derivation and presignatures 📺
Two common variations of ECDSA signatures are {\em additive key derivation} and presignatures. Additive key derivation is a simple mechanism for deriving many subkeys from a single master key, and is already widely used in cryptocurrency applications with the Hierarchical Deterministic Wallet mechanism standardized in Bitcoin Improvement Proposal 32 (BIP32). Because of its linear nature, additive key derivation is also amenable to efficient implementation in the threshold setting. With presignatures, the secret and public nonces used in the ECDSA signing algorithm are precomputed. In the threshold setting, using presignatures along with other precomputed data allows for an extremely efficient "online phase" of the protocol. Recent works have advocated for both of these variations, sometimes combined together. However, somewhat surprisingly, we are aware of no prior security proof for additive key derivation, let alone for additive key derivation in combination with presignatures. In this paper, we provide a thorough analysis of these variations, both in isolation and in combination. Our analysis is in the generic group model (GGM). Importantly, we do not modify ECDSA or weaken the standard notion of security in any way. Of independent interest, we also present a version of the GGM that is specific to elliptic curves. This EC-GGM better models some of the idiosyncrasies (such as the conversion function and malleability) of ECDSA. In addition to this analysis, we report security weaknesses in these variations that apparently have not been previously reported. For example, we show that when both variations are combined, there is a cube-root attack on ECDSA, which is much faster than the best known, square-root attack on plain ECDSA. We also present two mitigations against these weaknesses: re-randomized presignatures and homogeneous key derivation. Each of these mitigations is very lightweight, and when used in combination, the security is essentially the same as that of plain ECDSA (in the EC-GGM).
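For concreteness, additive key derivation (as in BIP32's non-hardened mode) sets $d_i = d + t_i$ for a public tweak $t_i$, so the derived public key is computable from the master public key alone. A minimal sketch over a multiplicative group standing in for the elliptic-curve group (the group, hash, and tweak format are simplified assumptions of mine, not BIP32's exact derivation):

```python
import hashlib

# Toy group: Z_p^* with g a square, so g^q = 1 mod p for q = (p-1)/2
# and exponents can be reduced mod q. Demo parameters, NOT secure.
p = 2**64 - 59
q = (p - 1) // 2
g = 4

def tweak(master_pub, i):
    """Public derivation tweak t_i = H(master_pub, i) mod q (simplified)."""
    h = hashlib.sha256(f"{master_pub}|{i}".encode()).digest()
    return int.from_bytes(h, "big") % q

sk = 123456789 % q              # master secret key
pk = pow(g, sk, p)              # master public key

i = 7
t = tweak(pk, i)
sk_i = (sk + t) % q             # additive derivation of the subkey
pk_i = pk * pow(g, t, p) % p    # anyone can derive the public subkey

assert pow(g, sk_i, p) == pk_i  # linearity: g^(sk + t) = pk * g^t
```

This linearity is what makes the mechanism amenable to threshold implementations, and it is also what the paper's analysis (including the cube-root attack when combined with presignatures) must contend with.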
2022
EUROCRYPT
Key Guessing Strategies for Linear Key-Schedule Algorithms in Rectangle Attacks 📺
When generating quartets for rectangle attacks on ciphers with linear key schedules, we find that the right quartets that may suggest key candidates have to satisfy certain nonlinear relationships. However, some of the generated quartets always violate these relationships, so they will never suggest any key candidates. Inspired by previous rectangle frameworks, we find that guessing certain key cells before generating quartets can reduce the number of such invalid quartets. However, guessing many key cells at once may lose the benefit of the early abort technique, which may lead to a higher overall complexity. To get a better tradeoff, we build a new rectangle attack framework for ciphers with linear key schedules, with the purpose of reducing the overall complexity or attacking more rounds. In the tradeoff model, many parameters affect the overall complexity, especially the choice of the number and positions of the key cells guessed before generating quartets. To identify optimal parameters, we build a uniform automatic tool, using SKINNY as an example, which determines the optimal rectangle distinguishers for the key-recovery phase, the number and positions of the key cells guessed before generating quartets, the size of the key counters to build (which affects the exhaustive search step), etc. Based on the automatic tool, we identify a 32-round key-recovery attack on SKINNY-128-384 in the related-key setting, which extends the best previous attack by 2 rounds. For the other n-2n and n-3n versions, we also achieve one more round than before. In addition, using the previous rectangle distinguishers, we achieve better attacks on round-reduced ForkSkinny, Deoxys-BC-384 and GIFT-64. Finally, we discuss the conversion of our rectangle framework from the related-key setting to the single-key setting and give a new single-key rectangle attack on 10-round Serpent.
2022
EUROCRYPT
Group Signature and More from Isogenies and Lattices: Generic, Simple, and Efficient 📺
We construct an efficient dynamic group signature (or more generally an accountable ring signature) from isogeny and lattice assumptions. Our group signature is based on a simple generic construction that can be instantiated by cryptographically hard group actions such as the CSIDH group action or an MLWE-based group action. The signature is of size $O(\log N)$, where $N$ is the number of users in the group. Our idea builds on the recent efficient OR-proof by Beullens, Katsumata, and Pintore (Asiacrypt'20), where we efficiently add a proof of valid ciphertext to their OR-proof and further show that the resulting non-interactive zero-knowledge proof system is \emph{online extractable}. Our group signatures satisfy more ideal security properties compared to previously known constructions, while simultaneously having an attractive signature size. The signature size of our isogeny-based construction is an order of magnitude smaller than all previously known post-quantum group signatures (e.g., 6.6 KB for 64 members). In comparison, our lattice-based construction has a larger signature size (e.g., either 126 KB or 89 KB for 64 members depending on the satisfied security property). However, since the $O(\cdot)$-notation hides a very small constant factor, it remains small even for very large group sizes, say $2^{20}$.
2022
EUROCRYPT
A Greater GIFT: Strengthening GIFT against Statistical Cryptanalysis 📺
GIFT-64 is a 64-bit block cipher with a 128-bit key that is more lightweight than PRESENT. This paper provides a detailed analysis of GIFT-64 against differential and linear attacks. Our work complements automatic search methods for the best differential and linear characteristics with a careful manual analysis. This hybrid approach leads to new insights. In the differential setting, we theoretically explain the existence of differential characteristics with two active S-boxes per round and derive some novel properties of these characteristics. Furthermore, we prove that all optimal differential characteristics of GIFT-64 covering more than seven rounds must activate two S-boxes per round. We can construct all optimal characteristics by hand. In parallel to the work in the differential setting, we conduct a similar analysis in the linear setting. However, unlike the clear view in the differential setting, the optimal linear characteristics of GIFT-64 must have at least one round activating only one S-box. Moreover, with the assistance of automatic searching methods, we identify 24 GIFT-64 variants achieving better resistance against differential attacks while maintaining a similar security level against linear attacks. Since the new variants strengthen GIFT-64 against statistical cryptanalysis, we claim that the number of rounds could be reduced from 28 to 26 for the variants. This observation enables us to create a cipher with lower energy consumption than GIFT-64. Similarly to the case in GIFT-64, we do not claim any related-key security for the round-reduced variant as this is not relevant for most applications.
2022
EUROCRYPT
Private Circuits with Quasilinear Randomness 📺
A {\em $t$-private} circuit for a function $f$ is a randomized Boolean circuit $C$ that maps a randomized encoding of an input $x$ to an encoding of the output $f(x)$, such that probing $t$ wires anywhere in $C$ reveals nothing about $x$. Private circuits can be used to protect embedded devices against side-channel attacks. Motivated by the high cost of generating fresh randomness in such devices, several works have studied the question of minimizing the randomness complexity of private circuits. The best known upper bound, due to Coron et al. (Eurocrypt 2020), is $O(t^2\cdot\log s)$ random bits, where $s$ is the circuit size of $f$. We improve this to $O(t\cdot \log s)$, including the randomness used by the input encoder, and extend this bound to the stateful variant of private circuits. Our constructions are semi-explicit in the sense that there is an efficient randomized algorithm that generates the private circuit $C$ from a circuit for $f$ with negligible failure probability.
2022
EUROCRYPT
Orientations and the supersingular endomorphism ring problem 📺
We study two important families of problems in isogeny-based cryptography and how they relate to each other: computing the endomorphism ring of supersingular elliptic curves, and inverting the action of class groups on oriented supersingular curves. We prove that these two families of problems are closely related through polynomial-time reductions, assuming the generalized Riemann hypothesis. We identify two classes of essentially equivalent problems. The first class corresponds to the problem of computing the endomorphism ring of oriented curves. The security of a large family of cryptosystems (such as CSIDH) reduces to (and sometimes from) this class, for which there are heuristic quantum algorithms running in subexponential time. The second class corresponds to computing the endomorphism ring of orientable curves. The security of essentially all isogeny-based cryptosystems reduces to (and sometimes from) this second class, for which the best known algorithms are still exponential. Some of our reductions not only generalise, but also strengthen previously known results. For instance, it was known that in the particular case of curves defined over $\mathbb F_p$, the security of CSIDH reduces to the endomorphism ring problem in subexponential time. Our reductions imply that the security of CSIDH is actually equivalent to the endomorphism ring problem, under polynomial time reductions (circumventing arguments that proved such reductions unlikely).
2022
EUROCRYPT
Property-Preserving Hash Functions for Hamming Distance from Standard Assumptions 📺
Property-preserving hash functions allow for compressing long inputs $x_0$ and $x_1$ into short hashes $h(x_0)$ and $h(x_1)$ in a manner that allows for computing a predicate $P(x_0, x_1)$ given only the two hash values without having access to the original data. Such hash functions are said to be adversarially robust if an adversary that gets to pick $x_0$ and $x_1$ after the hash function has been sampled, cannot find inputs for which the predicate evaluated on the hash values outputs the incorrect result. In this work we construct robust property-preserving hash functions for the Hamming distance predicate, which distinguishes inputs with Hamming distance at least some threshold $t$ from those with distance less than $t$. The security of the construction is based on standard lattice hardness assumptions. Our construction has several advantages over the best known previous construction by Fleischhacker and Simkin (Eurocrypt 2021). Our construction relies on a single well-studied hardness assumption from lattice cryptography whereas the previous work relied on a newly introduced family of computational hardness assumptions. In terms of computational effort, our construction only requires a small number of modular additions per input bit, whereas the work of Fleischhacker and Simkin required several exponentiations per bit as well as the interpolation and evaluation of high-degree polynomials over large fields. An additional benefit of our construction is that the description of the hash function can be compressed to $\lambda$ bits assuming a random oracle. Previous work has descriptions of length $O(\ell \lambda)$ bits for input bit-length $\ell$. We prove a lower bound on the output size of any property-preserving hash function for the Hamming distance predicate. The bound shows that the size of our hash value is not far from optimal.
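The "small number of modular additions per input bit" stems from the hash being linear. The numpy sketch below is my own simplification, showing only that for a linear hash $h(x) = Ax \bmod q$ the difference of two hashes depends solely on the sparse difference vector, which is the natural starting point for deciding the threshold predicate from hashes alone; the paper's actual decoding step is omitted, and the parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 256, 64, 3329            # toy parameters (assumptions)
A = rng.integers(0, q, size=(m, n))

def h(x):
    # linear hash: one modular addition of a column of A per set bit
    return A @ x % q

x0 = rng.integers(0, 2, size=n)
x1 = x0.copy()
flips = rng.choice(n, size=5, replace=False)
x1[flips] ^= 1                     # Hamming distance exactly 5

# h(x0) - h(x1) = A(x0 - x1): a function of the sparse difference only.
assert np.array_equal((h(x0) - h(x1)) % q, A @ (x0 - x1) % q)
```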
2022
EUROCRYPT
Embedding the UC Model into the IITM Model 📺
Universal Composability is a widely used concept for the design and analysis of protocols. Since Canetti's original UC model and the model by Pfitzmann and Waidner, several different models for universal composability have been proposed, including, for example, the IITM model, GNUC, CC, but also extensions and restrictions of the UC model, such as JUC, GUC, and SUC. These were motivated by the lack of expressivity of existing models, ease of use, or flaws in previous models. Cryptographers choose between these models based on their needs at hand (e.g., support for joint state and global state) or simply their familiarity with a specific model. While all models follow the same basic idea, there are huge conceptual differences, which raises fundamental and practical questions: (How) do the concepts and results proven in one model relate to those in another model? Do the different models and the security notions formulated therein capture the same classes of attacks? Most importantly, can cryptographers re-use results proven in one model in another model, and if so, how? In this paper, we initiate a line of research with the aim to address this lack of understanding, consolidate the space of models, and enable cryptographers to re-use results proven in other models. As a start, here we focus on Canetti's prominent UC model and the IITM model proposed by K{\"u}sters et al. The latter is an interesting candidate for comparison with the UC model since it has been used to analyze a wide variety of protocols, supports a very general protocol class and provides, among others, seamless treatment of protocols with shared state, including joint and global state. Our main technical contribution is an embedding of the UC model into the IITM model showing that all UC protocols, security and composition results carry over to the IITM model. Hence, protocol designers can profit from the features of the IITM model while being able to use all their results proven in the UC model. We also show that, in general, one cannot embed the full IITM model into the UC model.
2022
EUROCRYPT
Beyond quadratic speedups in quantum attacks on symmetric schemes 📺
In this paper, we report the first quantum key-recovery attack on a symmetric block cipher design, using classical queries only, with a more than quadratic time speedup compared to the best classical attack. We study the 2XOR-Cascade construction of Ga{\v{z}}i and Tessaro (EUROCRYPT~2012). It is a key length extension technique which provides an n-bit block cipher with 5n/2 bits of security out of an n-bit block cipher with 2n bits of key, with a security proof in the ideal model. We show that the offline-Simon algorithm of Bonnetain et al. (ASIACRYPT~2019) can be extended to, in particular, attack this construction in quantum time $\widetilde{\mathcal{O}}(2^n)$, providing a 2.5 quantum speedup over the best classical attack. Regarding post-quantum security of symmetric ciphers, it is commonly assumed that doubling the key sizes is a sufficient precaution. This is because Grover's quantum search algorithm, and its derivatives, can only reach a quadratic speedup at most. Our attack shows that the structure of some symmetric constructions can be exploited to overcome this limit. In particular, the 2XOR-Cascade cannot be used to generically strengthen block ciphers against quantum adversaries, as it would offer only the same security as the block cipher itself.
2022
EUROCRYPT
Universally Composable Subversion-Resilient Cryptography 📺
Subversion attacks undermine security of cryptographic protocols by replacing a legitimate honest party's implementation with one that leaks information in an undetectable manner. An important limitation of all currently known techniques for designing cryptographic protocols with security against subversion attacks is that they do not automatically guarantee security in the realistic setting where a protocol session may run concurrently with other protocols. We remedy this situation by providing a foundation of reverse firewalls (Mironov and Stephens-Davidowitz, EUROCRYPT'15) in the universal composability (UC) framework (Canetti, FOCS'01 and J. ACM'20). In more detail, our contributions are threefold: - We generalize the UC framework to the setting where each party consists of a core (which has secret inputs and is in charge of generating protocol messages) and a firewall (which has no secrets and sanitizes the outgoing/incoming communication from/to the core). Both the core and the firewall can be subject to different flavors of corruption, modeling different kinds of subversion attacks. For instance, we capture the setting where a subverted core looks like the honest core to any efficient test, yet it may leak secret information via covert channels (which we call specious subversion). - We show how to sanitize UC commitments and UC coin tossing against specious subversion, under the DDH assumption. - We show how to sanitize the classical GMW compiler (Goldreich, Micali and Wigderson, STOC 1987) for turning MPC with security in the presence of semi-honest adversaries into MPC with security in the presence of malicious adversaries. This yields a completeness theorem for maliciously secure MPC in the presence of specious subversion. Additionally, all our sanitized protocols are transparent, in the sense that communicating with a sanitized core looks indistinguishable from communicating with an honest core. Thanks to the composition theorem, our methodology allows, for the first time, to design subversion-resilient protocols by sanitizing different sub-components in a modular way.
2022
EUROCRYPT
Sine Series Approximation of the Mod Function for Bootstrapping of Approximate HE 📺
While it is well known that the sawtooth function has a point-wise convergent Fourier series, the rate of convergence is not the best possible for the application of approximating the mod function in small intervals around multiples of the modulus. We show a different sine series, such that the sine series of order $n$ has error $O(\epsilon^{2n+1})$ for approximating the mod function in $\epsilon$-sized intervals around multiples of the modulus. Moreover, the resulting polynomial, after Taylor series approximation of the sine function, has small coefficients, and the whole polynomial can be computed at a precision that is only slightly larger than $-(2n+1)\log \epsilon$, the precision of approximation being sought. This polynomial can then be used to approximate the mod function to almost arbitrary precision, and hence allows practical CKKS-HE bootstrapping with arbitrary precision. We validate our approach by an implementation and obtain $100$ bit precision bootstrapping as well as improvements over prior work even at lower precision.
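For context, the starting point of such approximations (standard in CKKS bootstrapping, not this paper's improved series) is that near multiples of $q$ the centered mod function is well approximated by the scaled sine, $[x]_q \approx \frac{q}{2\pi}\sin(2\pi x/q)$, with error $O(\epsilon^3)$ on $\epsilon$-sized intervals; a quick numeric check:

```python
import numpy as np

q = 1.0
for eps in [0.1, 0.05, 0.025]:
    # points in an eps-interval around a multiple of q (here 3q)
    x = 3 * q + q * np.linspace(-eps, eps, 1001)
    true_mod = x - np.round(x / q) * q                  # centered x mod q
    sine = (q / (2 * np.pi)) * np.sin(2 * np.pi * x / q)
    print(f"eps={eps:<6} max error={np.max(np.abs(true_mod - sine)):.2e}")
# The error shrinks roughly like eps^3, i.e. the order-1 case of the
# O(eps^(2n+1)) behaviour achieved by the order-n series above.
```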
2022
EUROCRYPT
A PCP Theorem for Interactive Proofs and Applications 📺
The celebrated PCP Theorem states that any language in NP can be decided via a verifier that reads O(1) bits from a polynomially long proof. Interactive oracle proofs (IOP), a generalization of PCPs, allow the verifier to interact with the prover for multiple rounds while reading a small number of bits from each prover message. While PCPs are relatively well understood, the power captured by IOPs (beyond NP) has yet to be fully explored. We present a generalization of the PCP theorem for interactive languages. We show that any language decidable by a k(n)-round IP has a k(n)-round public-coin IOP, where the verifier makes its decision by reading only O(1) bits from each (polynomially long) prover message and $O(1)$ bits from each of its own (random) messages to the prover. Our result and the underlying techniques have several applications. We get a new hardness of approximation result for a stochastic satisfiability problem, we show IOP-to-IOP transformations that previously were known to hold only for IPs, and we formulate a new notion of PCPs (index-decodable PCPs) that enables us to obtain a commit-and-prove SNARK in the random oracle model for nondeterministic computations.
2022
EUROCRYPT
Quantum Algorithms for Variants of Average-Case Lattice Problems via Filtering 📺
We show polynomial-time quantum algorithms for the following problems: (*) Short integer solution (SIS) problem under the infinity norm, where the public matrix is very wide, the modulus is a polynomially large prime, and the bound of infinity norm is set to be half of the modulus minus a constant. (*) Learning with errors (LWE) problem given LWE-like quantum states with polynomially large moduli and certain error distributions, including bounded uniform distributions and Laplace distributions. (*) Extrapolated dihedral coset problem (EDCP) with certain parameters. The SIS, LWE, and EDCP problems in their standard forms are as hard as solving lattice problems in the worst case. However, the variants that we can solve are not in the parameter regimes known to be as hard as solving worst-case lattice problems. Still, no classical or quantum polynomial-time algorithms were known for the variants of SIS and LWE we consider. For EDCP, our quantum algorithm slightly extends the result of Ivanyos et al. (2018). Our algorithms for variants of SIS and EDCP use the existing quantum reductions from those problems to LWE, or more precisely, to the problem of solving LWE given LWE-like quantum states. Our main contribution is solving LWE given LWE-like quantum states with interesting parameters using a filtering technique.
2022
EUROCRYPT
Highly Efficient OT-Based Multiplication Protocols 📺
We present a new OT-based two-party multiplication protocol that is almost as efficient as Gilboa's semi-honest protocol (Crypto '99), but has a high-level of security without further compilation. The achieved security suffices for many applications, and, assuming DDH, can be cheaply compiled into full security.
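For background, Gilboa's protocol (the efficiency baseline named above) reduces one two-party multiplication to one correlated OT per bit of an operand; a semi-honest toy sketch with an insecure placeholder OT and an arbitrary 32-bit word size:

```python
import random

L = 32
MOD = 2**L
rng = random.Random(42)

def ot(choice, msg0, msg1):
    """Insecure stand-in for 1-out-of-2 oblivious transfer (demo only)."""
    return msg1 if choice else msg0

def gilboa_mult(a, b):
    """Additive shares s_A + s_B = a*b (mod 2^L) via L correlated OTs."""
    s_A, s_B = 0, 0
    for i in range(L):
        r = rng.randrange(MOD)               # sender's fresh mask
        m0, m1 = r, (r + (b << i)) % MOD     # correlated OT messages
        a_i = (a >> i) & 1                   # receiver's choice bit
        s_A = (s_A + ot(a_i, m0, m1)) % MOD  # receiver accumulates
        s_B = (s_B - r) % MOD                # sender accumulates -r
    return s_A, s_B

a, b = rng.randrange(MOD), rng.randrange(MOD)
s_A, s_B = gilboa_mult(a, b)
assert (s_A + s_B) % MOD == a * b % MOD
```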
2022
EUROCRYPT
Practical Non-interactive Publicly Verifiable Secret Sharing with Thousands of Parties 📺
Non-interactive publicly verifiable secret sharing (PVSS) schemes enable (re-)sharing of secrets in a decentralized setting in the presence of malicious parties. A recently proposed application of PVSS schemes is to enable permissionless proof-of-stake blockchains to ``keep a secret'' via a sequence of committees that share that secret. These committees can use the secret to produce signatures on the blockchain's behalf, or to disclose hidden data conditioned on consensus that some event has occurred. That application needs very large committees with thousands of parties, so the PVSS scheme in use must be efficient enough to support such large committees, in terms of both computation and communication. Yet, previous PVSS schemes have large proofs and/or require many exponentiations over large groups. We present a non-interactive PVSS scheme in which the underlying encryption scheme is based on the learning with errors (LWE) problem. While lattice-based encryption schemes are very fast, they often have long ciphertexts and public keys. We use the following two techniques to conserve bandwidth: First, we adapt the Peikert-Vaikuntanathan-Waters (PVW) encryption scheme to the multi-receiver setting, so that the bulk of the parties' keys is a common random string. The resulting scheme yields $\Omega(1)$ amortized plaintext/ciphertext rate, where concretely the rate is $\approx 1/60$ for 100 parties, $\approx 1/8$ for 1000 parties, and approaching 1/2 as the number of parties grows. Second, we use bulletproofs over a DL-group of order about 256 bits to get compact proofs of correct encryption/decryption of shares. Alternating between the lattice and DL settings is relatively painless, as we equate the LWE modulus with the order of the group. We also show how to reduce the number of exponentiations in the bulletproofs by applying Johnson-Lindenstrauss-like compression to reduce the dimension of the vectors whose properties must be verified. An implementation of our PVSS with 1000 parties showed that it is feasible even at that size, and should remain so even with a one or two order-of-magnitude increase in the committee size.
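The Johnson-Lindenstrauss step replaces proving a property of a long vector by proving it of a short random projection; a quick numpy check that random projections approximately preserve Euclidean norms (the dimensions are illustrative assumptions, and this shows the generic JL phenomenon rather than the paper's concrete compression):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 10_000, 256                  # long vector -> short projection
v = rng.normal(size=n)

# Gaussian JL matrix, scaled so norms are preserved in expectation.
P = rng.normal(size=(m, n)) / np.sqrt(m)

ratio = np.linalg.norm(P @ v) / np.linalg.norm(v)
print(f"norm ratio after projection: {ratio:.3f}")   # close to 1
# A norm bound can then be proven about the m-dimensional P @ v,
# requiring far fewer exponentiations than the n-dimensional v.
```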
2022
EUROCRYPT
Garbled Circuits With Sublinear Evaluator 📺
A recent line of work, Stacked Garbled Circuit (SGC), showed that Garbled Circuit (GC) can be improved for functions that include conditional behavior. SGC relieves the communication bottleneck of 2PC by only sending enough garbled material for a single branch out of the $b$ total branches. Hence, communication is sublinear in the circuit size. However, both the evaluator and the generator pay in computation and perform at least factor $\log b$ extra work as compared to standard GC evaluation. We extend the sublinearity of SGC to also include the work performed by the GC Evaluator E; thus we achieve a fully sublinear E, which is essential when optimizing for the online phase. We formalize our approach as a garbling scheme called GCWise: GC WIth Sublinear Evaluator. We show one attractive and immediate application, Garbled PIR, a primitive that marries GC with Private Information Retrieval. Garbled PIR allows the GC to non-interactively and sublinearly access a privately indexed element from a publicly known database, and then use this element in continued GC evaluation.
2022
EUROCRYPT
Practical Post-Quantum Signature Schemes from Isomorphism Problems of Trilinear Forms 📺
In this paper, we propose a practical signature scheme based on the alternating trilinear form equivalence problem. Our scheme is inspired by the Goldreich-Micali-Wigderson zero-knowledge protocol for graph isomorphism, and can serve as an alternative candidate for NIST's post-quantum digital signatures. First, we present theoretical evidence to support its security, especially in the post-quantum cryptography context. The evidence is drawn from several research lines, including hidden subgroup problems, multivariate cryptography, cryptography based on group actions, the quantum random oracle model, and recent advances on isomorphism problems for algebraic structures in algorithms and complexity. Second, we demonstrate its potential for practical uses. Based on algorithm studies, we propose concrete parameter choices, and then implement a prototype. One concrete scheme achieves 128 bit security with public key size ~4100 bytes, signature size ~6800 bytes, and running times (key generation, sign, verify) ~0.8ms on a common laptop computer.
2022
EUROCRYPT
Round-Optimal Black-Box Protocol Compilers 📺
We give black-box, round-optimal protocol compilers from semi-honest security to malicious security in the Random Oracle Model (ROM) and in the 1-out-of-2 OT correlations model. We use our compilers to obtain the following results: \begin{itemize} \item A two-round, two-party protocol secure against malicious adversaries in the random oracle model making black-box use of a two-round semi-honest secure protocol. Prior to our work, such a result was not known even considering special functionalities such as a two-round oblivious transfer. This result also implies the first constructions of two-round malicious (batch) OT/OLE in the random oracle model based on the black-box use of two-round semi-honest (batch) OT/OLE. \item A three-round multiparty secure computation protocol in the random oracle model secure against malicious adversaries that is based on the black-box use of two-round semi-honest OT. This protocol matches a known round complexity lower bound due to Applebaum et al. (ITCS'20) and is based on a minimal cryptographic hardness assumption. \item A two-round, multiparty secure computation protocol in the $1$-out-of-$2$ OT correlations model that is secure against malicious adversaries and makes black-box use of cryptography. This gives new round-optimal protocols for computing arithmetic branching programs that are statistically secure and makes black-box use of the underlying field. \end{itemize} As a contribution of independent interest, we provide a new variant of the IPS compiler (Ishai, Prabhakaran and Sahai, Crypto 2008) in the two-round setting, where we relax requirements on the IPS ``inner protocol'' by strengthening the ``outer protocol''.
2022
EUROCRYPT
Secure Non-Interactive Reduction and Spectral Analysis of Correlations 📺
Correlated pairs of random variables are a central concept in information-theoretically secure cryptography. Secure reductions between different correlations have been studied, and completeness results are known. Further, the complexity of such reductions is intimately connected with circuit complexity and efficiency of locally decodable codes. As such, making progress on these complexity questions faces strong barriers. Motivated by this, in this work, we study a restricted form of secure reductions --- namely, Secure Non-Interactive Reductions (SNIR) --- which is still closely related to the original problem, and establish several fundamental results and relevant techniques for it. We uncover striking connections between SNIR and linear algebraic properties of correlations. Specifically, we define the spectrum of a correlation, and show that a target correlation has a SNIR to a source correlation only if the spectrum of the latter contains the entire spectrum of the former. We also establish a `mirroring lemma' that shows an unexpected symmetry between the two parties in a SNIR, when viewed through the lens of spectral analysis. We also use cryptographic insights and elementary linear algebraic analysis to fully characterize the role of common randomness as well as local randomness in SNIRs. We employ these results to resolve several fundamental questions about SNIRs, and to define future directions.
2022
EUROCRYPT
Approximate Divisor Multiples - Factoring with Only a Third of the Secret CRT-Exponents 📺
We address Partial Key Exposure attacks on CRT-RSA on secret exponents $d_p, d_q$ with small public exponent $e$. For constant $e$ it is known that the knowledge of half of the bits of one of $d_p, d_q$ suffices to factor the RSA modulus $N$ by Coppersmith's famous {\em factoring with a hint} result. We extend this setting to non-constant $e$. Somewhat surprisingly, our attack shows that RSA with $e$ of size $N^{\frac 1 {12}}$ is most vulnerable to Partial Key Exposure, since in this case only a third of the bits of both $d_p, d_q$ suffices to factor $N$ in polynomial time, knowing either most significant bits (MSB) or least significant bits (LSB). Let $ed_p = 1 + k(p-1)$ and $ed_q = 1 + \ell(q-1)$. On the technical side, we find the factorization of $N$ in a novel two-step approach. In a first step we recover $k$ and $\ell$ in polynomial time, in the MSB case completely elementary and in the LSB case using Coppersmith's lattice-based method. We then obtain the prime factorization of $N$ by computing the root of a univariate polynomial modulo $kp$ for our known $k$. This can be seen as an extension of Howgrave-Graham's {\em approximate divisor} algorithm to the case of {\em approximate divisor multiples} for some known multiple $k$ of an unknown divisor $p$ of $N$. The point of {\em approximate divisor multiples} is that the unknown that is recoverable in polynomial time grows linearly with the size of the multiple $k$. Our resulting Partial Key Exposure attack with known MSBs is completely rigorous, whereas in the LSB case we rely on a standard Coppersmith-type heuristic. We experimentally verify our heuristic, thereby showing that in practice we reach our asymptotic bounds already using small lattice dimensions. Thus, our attack is highly practical.
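The identity $ed_p = 1 + k(p-1)$ with $0 < k < e$ is what the two-step approach leverages; a toy arithmetic check with tiny primes (not an attack, just the relations used above):

```python
from math import gcd

p, q, e = 1009, 1013, 17          # toy CRT-RSA instance
N = p * q
d_p = pow(e, -1, p - 1)           # CRT exponent: e*d_p = 1 mod (p-1)

# e*d_p = 1 + k(p-1) for some 0 < k < e:
k = (e * d_p - 1) // (p - 1)
assert e * d_p == 1 + k * (p - 1) and 0 < k < e

# Rearranging, e*d_p - 1 + k = k*p is a known multiple of p, so once k
# (and, in the attack, enough bits of d_p) is known, a gcd reveals p:
assert gcd(e * d_p - 1 + k, N) == p
```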
2022
EUROCRYPT
One-Shot Fiat-Shamir-based NIZK Arguments of Composite Residuosity and Logarithmic-Size Ring Signatures in the Standard Model 📺
The standard model security of the Fiat-Shamir transform has been an active research area for many years. In breakthrough results, Canetti {\it et al.} (STOC'19) and Peikert-Shiehian (Crypto'19) showed that, under the Learning-With-Errors (LWE) assumption, it provides soundness by applying correlation-intractable (CI) hash functions to so-called {\it trapdoor} $\Sigma$-protocols. In order to be compatible with CI hash functions based on standard LWE assumptions with polynomial approximation factors, all known such protocols have been obtained via parallel repetitions of a basic protocol with binary challenges. In this paper, we consider languages related to Paillier's composite residuosity assumption (DCR) for which we give the first trapdoor $\Sigma$-protocols providing soundness in one shot, via exponentially large challenge spaces. This improvement is analogous to the one enabled by Schnorr over the original Fiat-Shamir protocol in the random oracle model. Using the correlation-intractable hash function paradigm, we then obtain simulation-sound NIZK arguments showing that an element of $\mathbb{Z}_{N^2}^\ast$ is a composite residue, which opens the door to space-efficient applications in the standard model. As a concrete example, we build logarithmic-size ring signatures (assuming a common reference string) with the shortest signature length among schemes based on standard assumptions in the standard model. We prove security under the DCR and LWE assumptions, while keeping the signature size comparable with that of random-oracle-based schemes.
2022
EUROCRYPT
On Building Fine-Grained One-Way Functions from Strong Average-Case Hardness 📺
Constructing one-way functions from average-case hardness is a long-standing open problem. A positive result would exclude Pessiland (Impagliazzo '95) and establish a highly desirable win-win situation: either (symmetric) cryptography exists unconditionally, or all NP problems can be solved efficiently on the average. Motivated by the lack of progress on this seemingly very hard question, we initiate the investigation of weaker yet meaningful candidate win-win results of the following type: either there are *fine-grained* one-way functions (FGOWF), or nontrivial speedups can be obtained for all NP problems on the average. FGOWFs only require a fixed polynomial gap (as opposed to superpolynomial) between the running time of the function and the running time of an inverter. We obtain three main results: Construction. We show that if there is an NP language having a very strong form of average-case hardness, which we call *block finding hardness*, then FGOWF exist. We provide heuristic support for this very strong average-case hardness notion by showing that it holds for a random language. Then, we study whether weaker (and more natural) forms of average-case hardness could already suffice to obtain FGOWF, and obtain two negative results: Separation I. We provide a strong oracle separation for the implication (exponentially average-case hard languages exist => FGOWF exist). Separation II. We provide a second strong negative result for an even weaker candidate win-win result. Namely, we rule out a black-box proof for the implication (exponentially average-case hard language *whose hardness amplifies optimally through parallel repetitions* exist => FGOWF exist). This separation forms the core technical contribution of our work.
2022
EUROCRYPT
Cryptanalysis of Candidate Obfuscators for Affine Determinant Programs 📺
At ITCS 2020, Bartusek et al. proposed a candidate indistinguishability obfuscator (iO) for affine determinant programs (ADPs). The candidate is special since it is the only unbroken candidate iO to date that does not rely on the hardness of traditional cryptographic assumptions like discrete-log or learning with errors. Instead, it directly applies specific randomization techniques to the underlying ADP. It is relatively efficient compared to the rest of the iO candidates. However, the obfuscation scheme requires further cryptanalysis since it was not known to be based on any well-formed mathematical assumptions. In this paper, we show cryptanalytic attacks on the iO candidate provided by Bartusek et al. Our attack exploits the weakness of one of the randomization steps in the candidate. The attack applies to a fairly general class of programs. At the end of the paper we discuss plausible countermeasures to defend against our attacks.
2022
EUROCRYPT
A Fast and Simple Partially Oblivious PRF, with Applications 📺
We build the first construction of a partially oblivious pseudorandom function (POPRF) that does not rely on bilinear pairings. Our construction can be viewed as combining elements of the 2HashDH OPRF of Jarecki, Kiayias, and Krawczyk with the Dodis-Yampolskiy PRF. We analyze our POPRF’s security in the random oracle model via reduction to a new one-more gap strong Diffie-Hellman inversion assumption. The most significant technical challenge is establishing confidence in the new assumption, which requires new proof techniques that enable us to show that its hardness is implied by the q-DL assumption in the algebraic group model. Our new construction is as fast as the current, standards-track OPRF 2HashDH protocol, yet provides a new degree of flexibility useful in a variety of applications. We show how POPRFs can be used to prevent token hoarding attacks against Privacy Pass, reduce key management complexity in the OPAQUE password authenticated key exchange protocol, and ensure stronger security for password breach alerting services.
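For background, the 2HashDH OPRF that the construction combines with the Dodis-Yampolskiy PRF evaluates $F(k, x) = H_2(x, H_1(x)^k)$, with the client blinding $H_1(x)$ by a random exponent so the server learns nothing about $x$; a toy sketch over a small Schnorr-style group (the group parameters and the hash-to-group are insecure stand-ins of mine):

```python
import hashlib, random

# Toy group: p = 2q + 1 with q prime; g = 4 generates the order-q
# subgroup of squares. Demo parameters only, NOT secure.
p, q, g = 10007, 5003, 4
rng = random.Random(3)

def H1(x: bytes) -> int:
    """Toy hash-to-group (leaks a discrete log; fine for illustration)."""
    e = int.from_bytes(hashlib.sha256(x).digest(), "big") % q
    return pow(g, e, p)

def H2(x: bytes, y: int) -> bytes:
    return hashlib.sha256(x + y.to_bytes(2, "big")).digest()

k = rng.randrange(1, q)                        # server's OPRF key

x = b"example input"
r = rng.randrange(1, q)
blinded = pow(H1(x), r, p)                     # client -> server: hides x
evaluated = pow(blinded, k, p)                 # server -> client: uses k
unblinded = pow(evaluated, pow(r, -1, q), p)   # client removes the blind
assert H2(x, unblinded) == H2(x, pow(H1(x), k, p))   # = F(k, x)
```

The partial obliviousness of the POPRF additionally binds a public input into the evaluation, which the plain flow above does not capture.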
2022
EUROCRYPT
Rubato: Noisy Ciphers for Approximate Homomorphic Encryption 📺
A transciphering framework converts a symmetric ciphertext into a homomorphic ciphertext on the server side, reducing computational and communication overhead on the client side. In Asiacrypt 2021, Cho et al. proposed the RtF framework that supports approximate computation. In this paper, we propose a family of noisy ciphers, dubbed Rubato, with a novel design strategy of introducing noise to a symmetric cipher of a low algebraic degree. With this strategy, the multiplicative complexity of the cipher is significantly reduced, compared to existing HE-friendly ciphers, without degrading the overall security. More precisely, given a moderate block size (16 to 64), Rubato enjoys a low multiplicative depth (2 to 5) and a small number of multiplications per encrypted word (2.1 to 6.25) at the cost of slightly larger ciphertext expansion (1.26 to 1.31). The security of Rubato is supported by comprehensive analysis including symmetric and LWE cryptanalysis. Compared to HERA within the RtF framework, client-side and server-side throughput is improved by 22.9% and 32.2%, respectively, at the cost of only 1.6% larger ciphertext expansion.
2022
EUROCRYPT
A Complete Characterization of Game-Theoretically Fair, Multi-Party Coin Toss 📺
Cleve's celebrated lower bound (STOC'86) showed that a de facto strong fairness notion is impossible in 2-party coin toss, i.e., the corrupt party always has a strategy of biasing the honest party's outcome by a noticeable amount. Nonetheless, Blum's famous coin-tossing protocol (CRYPTO'81) achieves a strictly weaker "game-theoretic'' notion of fairness — specifically, it is a 2-party coin toss protocol in which neither party can bias the outcome towards its own preference; and thus the honest protocol forms a Nash equilibrium in which neither party would want to deviate. Surprisingly, an n-party analog of Blum's famous coin toss protocol was not studied till recently. The work by Chung et al.~(TCC'18) was the first to explore the feasibility of game-theoretically fair n-party coin toss in the presence of corrupt majority. We may assume that each party has a publicly stated preference for either the bit 0 or 1, and if the outcome agrees with the party's preference, it obtains utility 1; else it obtains nothing. A natural game-theoretic formulation is to require that the honest protocol form a coalition-resistant Nash equilibrium, i.e., no coalition should have incentive to deviate from the honest behavior. Chung et al. phrased this game-theoretic notion as ``cooperative-strategy-proofness'' or ``CSP-fairness'' for short. Unfortunately, Chung et al.~showed that under (n-1)-sized coalitions, it is impossible to design such a CSP-fair coin toss protocol, unless all parties except one prefer the same bit. In this paper, we show that the impossibility of Chung et al.~is in fact not as broad as it may seem. When coalitions are majority but not $n-1$ in size, we can indeed get feasibility results in some meaningful parameter regimes. We give a complete characterization of the regime in which CSP-fair coin toss is possible, by providing a matching upper- and lower-bound. Our complete characterization theorem also shows that the mathematical structure of game-theoretic fairness is starkly different from the de facto strong fairness notion in the multi-party computation literature.
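Blum's two-party protocol, which this work generalizes to $n$ parties, is commit-then-reveal: one party commits to a bit, the other announces a bit in the clear, and the coin is their XOR. A toy sketch with a hash-based commitment (the randomness length and hash are my own choices):

```python
import hashlib, secrets

def commit(bit):
    r = secrets.token_bytes(32)
    return hashlib.sha256(bytes([bit]) + r).hexdigest(), (bit, r)

def check(c, bit, r):
    return hashlib.sha256(bytes([bit]) + r).hexdigest() == c

a = secrets.randbelow(2)
c, opening = commit(a)      # round 1: Alice commits to her bit
b = secrets.randbelow(2)    # round 2: Bob answers, seeing only c
bit, r = opening            # round 3: Alice opens the commitment
assert check(c, bit, r)
coin = bit ^ b
# Neither side can bias the coin toward a preferred value: Bob cannot
# see a under the commitment, and Alice is bound to her committed bit
# (she can abort, but aborting cannot steer the coin her way).
```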
2022
EUROCRYPT
Single-Server Private Information Retrieval with Sublinear Amortized Time 📺
We construct new private-information-retrieval protocols in the single-server setting. Our schemes allow a client to privately fetch a sequence of database records from a server, while the server answers each query in average time sublinear in the database size. Specifically, we introduce the first single-server private-information-retrieval schemes that have sublinear amortized server time, require sublinear additional storage, and allow the client to make her queries adaptively. Our protocols rely only on standard cryptographic assumptions (decision Diffie-Hellman, quadratic residuosity, learning with errors, etc.). They work by having the client first fetch a small "hint" about the database contents from the server. Generating this hint requires server time linear in the database size. Thereafter, the client can use the hint to make a bounded number of adaptive queries to the server, which the server answers in sublinear time--yielding sublinear amortized cost. Finally, we give lower bounds proving that our most efficient scheme is optimal with respect to the trade-off it achieves between server online time and client storage.
2022
EUROCRYPT
McEliece needs a Break -- Solving McEliece-1284 and Quasi-Cyclic-2918 with Modern ISD 📺
With the recent shift to post-quantum algorithms it becomes increasingly important to provide precise bit-security estimates for code-based cryptography such as McEliece and quasi-cyclic schemes like BIKE and HQC. While there has been significant progress on information set decoding (ISD) algorithms within the last decade, it is still unclear to which extent this affects current cryptographic security estimates. We provide the first concrete implementations for representation-based ISD, such as May-Meurer-Thomae (MMT) or Becker-Joux-May-Meurer (BJMM), that are parameter-optimized for the McEliece and quasi-cyclic setting. Although MMT and BJMM consume more memory than naive ISD algorithms like Prange, we demonstrate that these algorithms lead to significant speedups for practical cryptanalysis already for cryptographic instances of medium security level (around 60 bit). More concretely, we provide data for the record computations of McEliece-1223 and McEliece-1284 (old record: 1161), and for the quasi-cyclic setting up to dimension 2918 (before: 1938). Based on our record computations we extrapolate to the bit-security level of the proposed BIKE, HQC and McEliece parameters in NIST's standardization process. For BIKE/HQC, we also show how to transfer the Decoding-One-Out-of-Many (DOOM) technique to MMT/BJMM. Although we achieve significant DOOM speedups, our estimates confirm the bit-security levels of BIKE and HQC. For the proposed McEliece round-3 parameter sets of 192 and 256 bit, however, our extrapolation indicates a security level overestimate by roughly 20 and 10 bits, respectively, i.e., the high-security McEliece instantiations may be a bit less secure than desired.
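For orientation, the simplest ISD algorithm, due to Prange, which the implemented MMT/BJMM algorithms refine, repeatedly guesses $n-k$ positions carrying the entire error and checks the guess by linear algebra; a small numpy sketch over GF(2) with toy parameters of my choosing:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, w = 40, 20, 3                        # toy code parameters

H = rng.integers(0, 2, size=(n - k, n))    # random parity-check matrix
e_true = np.zeros(n, dtype=np.int64)
e_true[rng.choice(n, size=w, replace=False)] = 1
s = H @ e_true % 2                         # syndrome of a weight-w error

def inv_gf2(M):
    """Invert a 0/1 matrix over GF(2); return None if singular."""
    m = M.shape[0]
    A = np.hstack([M % 2, np.eye(m, dtype=np.int64)])
    for col in range(m):
        piv = np.nonzero(A[col:, col])[0]
        if piv.size == 0:
            return None
        A[[col, col + piv[0]]] = A[[col + piv[0], col]]
        for row in range(m):
            if row != col and A[row, col]:
                A[row] = (A[row] + A[col]) % 2
    return A[:, m:]

trials = 0
while True:
    trials += 1
    guess = rng.choice(n, size=n - k, replace=False)  # guessed support
    U = inv_gf2(H[:, guess])
    if U is None:                  # chosen columns singular: retry
        continue
    cand = U @ s % 2               # unique error confined to the guess
    if cand.sum() == w:            # weight matches: the guess was right
        e = np.zeros(n, dtype=np.int64)
        e[guess] = cand
        break

assert np.array_equal(H @ e % 2, s) and e.sum() == w
print(f"weight-{w} solution after {trials} information-set guesses")
```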
2022
EUROCRYPT
Field Instruction Multiple Data 📺
Fully homomorphic encryption~(FHE) has flourished since it was first constructed by Gentry~(STOC 2009). Single instruction multiple data~(SIMD) gave rise to efficient homomorphic operations on vectors in \((\mathbb{F}_{t^d})^\ell\), for prime \(t\). RLWE instantiated with cyclotomic polynomials of the form \(X^{2^N}+1\) dominate implementations of FHE due to highly efficient fast Fourier transforms. However, this choice yields very short SIMD plaintext vectors and high degree extension fields, e.g. \(\ell < 100, d > 100\) for small primes~(\(t = 3, 5, \dots\)). In this work, we describe a method to encode more data on top of SIMD, \emph{Field Instruction Multiple Data}, applying reverse multiplication friendly embedding~(RMFE) to FHE. With RMFE, length-\(k\) \(\mathbb{F}_{t}\) vectors can be encoded into \(\mathbb{F}_{t^d}\) and multiplied once. The results have to be recoded~(decoded and then re-encoded) before further multiplications can be done. We introduce an FHE-specific technique to additionally evaluate arbitrary linear transformations on encoded vectors for free during the FHE recode operation. On top of that, we present two optimizations to unlock high degree extension fields with small \(t\) for homomorphic computation: \(r\)-fold RMFE, which allows products of up to \(2^r\) encoded vectors before recoding, and a three-stage recode process for RMFEs obtained by composing two smaller RMFEs. Experiments were performed to evaluate the effectiveness of FIMD from various RMFEs compared to standard SIMD operations. Overall, we found that FIMD generally had \(>2\times\) better (amortized) multiplication times compared to FHE for the same amount of data, while requiring almost \(k/2\times\) fewer ciphertexts.
2022
EUROCRYPT
Mitaka: A Simpler, Parallelizable, Maskable Variant of Falcon 📺
This work describes the Mitaka signature scheme: a new hash-and-sign signature scheme over NTRU lattices which can be seen as a variant of NIST finalist Falcon. It achieves comparable efficiency but is considerably simpler, online/offline, and easier to parallelize and protect against side-channels, thus offering significant advantages from an implementation standpoint. It is also much more versatile in terms of parameter selection. We obtain this signature scheme by replacing the FFO lattice Gaussian sampler in Falcon by the “hybrid” sampler of Ducas and Prest, for which we carry out a detailed and corrected security analysis. In principle, such a change can result in a substantial security loss, but we show that this loss can be largely mitigated using new techniques in key generation that allow us to construct much higher quality lattice trapdoors for the hybrid sampler relatively cheaply. This new approach can also be instantiated on a wide variety of base fields, in contrast with Falcon's restriction to power-of-two cyclotomics. We also introduce a new lattice Gaussian sampler with the same quality and efficiency, but which is moreover compatible with the integral matrix Gram root technique of Ducas et al., allowing us to avoid floating point arithmetic. This makes it possible to realize the same signature scheme as Mitaka efficiently on platforms with poor support for floating point numbers. Finally, we describe a provably secure masking of Mitaka. More precisely, we introduce novel gadgets that allow provable masking at any order at much lower cost than previous masking techniques for Gaussian sampling-based signature schemes, for cheap and dependable side-channel protection.
2022
EUROCRYPT
On IND-qCCA security in the ROM and its applications: CPA security is sufficient for TLS 1.3 📺
Bounded IND-CCA security (IND-qCCA) is a notion similar to the traditional IND-CCA security, except the adversary is restricted to a constant number q of decryption/decapsulation queries. We show in this work that IND-qCCA is easily obtained from any passively secure PKE in the (Q)ROM. That is, simply adding a confirmation hash or computing the key as the hash of the plaintext and ciphertext yields an IND-qCCA KEM. In particular, there is no need for derandomization or re-encryption as in the Fujisaki-Okamoto (FO) transform (JoC 2013). This makes the decapsulation process of such an IND-qCCA KEM much more efficient than its FO-derived counterpart. In addition, IND-qCCA KEMs could be used in the recently proposed KEMTLS protocol (ACM CCS 2020) that requires IND-1CCA ephemeral key-exchange mechanisms or in TLS 1.3. Then, using similar proof techniques, we show that CPA-secure KEMs are sufficient for the TLS 1.3 handshake to be secure, solving an open problem in the ROM. In turn, this implies that the PRF-ODH assumption used to prove the security of TLS 1.3 is not necessary in the ROM. We also highlight and briefly discuss several use cases of IND-1CCA KEMs in protocols and ratcheting primitives.
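The transform described above is deliberately lightweight: encapsulate by encrypting a random message $m$ and setting $K = H(m, c)$; decapsulation is one decryption and one hash, with no derandomized re-encryption as in FO. A sketch against an abstract PKE interface (the placeholder "PKE" below is an insecure stand-in of my own):

```python
import hashlib, secrets

def H(m: bytes, c: bytes) -> bytes:
    return hashlib.sha256(m + c).digest()

def encaps(pke_encrypt, pk):
    """Encrypt a random m; the shared key is K = H(m, c)."""
    m = secrets.token_bytes(32)
    c = pke_encrypt(pk, m)
    return c, H(m, c)

def decaps(pke_decrypt, sk, c):
    """One decryption plus one hash -- no re-encryption check."""
    return H(pke_decrypt(sk, c), c)

# Demo with a toy XOR "PKE" (placeholder only, obviously insecure):
key = secrets.token_bytes(32)
enc = lambda pk, m: bytes(a ^ b for a, b in zip(m, pk))
dec = lambda sk, c: bytes(a ^ b for a, b in zip(c, sk))
c, K = encaps(enc, key)
assert decaps(dec, key, c) == K
```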
2022
EUROCRYPT
Online-Extractability in the Quantum Random-Oracle Model 📺
We show the following generic result: Whenever a quantum query algorithm in the quantum random-oracle model outputs a classical value t that is promised to be in some tight relation with H(x) for some x, then x can be efficiently extracted with almost certainty. The extraction is by means of a suitable simulation of the random oracle and works online, meaning that it is straightline, i.e., without rewinding, and on-the-fly, i.e., during the protocol execution and without disturbing it. The technical core of our result is a new commutator bound that bounds the operator norm of the commutator of the unitary operator that describes the evolution of the compressed oracle (which is used to simulate the random oracle above) and of the measurement that extracts x. We show two applications of our generic online extractability result. We show tight online extractability of commit-and-open Σ-protocols in the quantum setting, and we offer the first complete post-quantum security proof of the textbook Fujisaki-Okamoto transformation, i.e., without adjustments to facilitate the proof, including concrete security bounds.
2022
EUROCRYPT
A Novel Completeness Test for Leakage Models and its Application to Side Channel Attacks and Responsibly Engineered Simulators 📺
Today’s side channel attack targets are often complex devices in which instructions are processed in parallel and work on 32-bit data words. Consequently, the state that is involved in producing leakage in these modern devices is large, and basing evaluations (i.e. worst case attacks) and simulators on a potentially incomplete state can lead to wrong conclusions. We put forward a novel notion for the “completeness” of an assumed state, together with an efficient statistical test that is based on “collapsed models”. Our novel test can be used to recover a state that contains multiple 32-bit variables in a grey box setting. We illustrate how our novel test can help to guide side channel attacks and we reveal new attack vectors for existing implementations. We then demonstrate the application of this test in the context of leakage modelling for leakage simulators and confirm that even the most recent leakage simulators do not capture all available leakage of their respective target devices. Our new test enables finding nominal models that capture all available leakage but do not give a helping hand to adversaries. Thereby we make a first step towards leakage simulators that are responsibly engineered.
2022
EUROCRYPT
Towards Micro-Architectural Leakage Simulators: Reverse Engineering Micro-Architectural Leakage Features is Practical 📺
Leakage simulators offer the tantalising promise of easy and quick testing of software with respect to the presence of side channel leakage. The quality of their built-in leakage models is therefore crucial; this includes the faithful inclusion of micro-architectural leakage. Micro-architectural leakage is a reality even on low- to mid-range commercial processors, such as the ARM Cortex M series. Dealing with it seems initially infeasible in a grey box setting: how should we describe it if micro-architectural elements are not publicly known? We demonstrate, for the first time, that it is feasible, using a recent leakage modelling technique, to reverse engineer significant elements of the micro-architectural leakage of a commercial processor. Our approach first recovers the micro-architectural leakage of each stage in the pipeline, and the leakage of elements that are known to produce glitches. Using the reverse engineered leakage features we build an enhanced version of the popular leakage simulator ELMO.
2022
EUROCRYPT
Fiat-Shamir Bulletproofs are Non-Malleable (in the Algebraic Group Model) 📺
Bulletproofs (B{\"u}nz et al.~IEEE S\&P 2018) are a celebrated ZK proof system that allows for short and efficient proofs, and have been implemented and deployed in several real-world systems. In practice, they are most often implemented in their \emph{non-interactive} version obtained using the Fiat-Shamir transform, despite the lack of a formal proof of security for this setting. Prior to this work, there was no evidence that \emph{malleability attacks} were not possible against Fiat-Shamir Bulletproofs. Malleability attacks can lead to very severe vulnerabilities, as they allow an adversary to forge proofs re-using or modifying parts of the proofs provided by the honest parties. In this paper, we show for the first time that Bulletproofs (or any other similar multi-round proof system satisfying some form of \emph{weak unique response} property) achieve \emph{simulation-extractability} in the \emph{algebraic group model}. This implies that Fiat-Shamir Bulletproofs are \emph{non-malleable}.
2022
EUROCRYPT
On the Lattice Isomorphism Problem, Quadratic Forms, Remarkable Lattices, and Cryptography 📺
A natural and recurring idea in the knapsack/lattice cryptography literature is to start from a lattice with remarkable decoding capability as your private key, and hide it somehow to make a public key. This is also how the code-based encryption scheme of McEliece (1978) proceeds. This idea has never worked out very well for lattices: ad-hoc approaches have been proposed, but they have been subject to ad-hoc attacks, using tricks beyond lattice reduction algorithms. On the other hand the framework offered by the Short Integer Solution (SIS) and Learning With Errors (LWE) problems, while convenient and well founded, remains frustrating from a coding perspective: the underlying decoding algorithms are rather trivial, with poor decoding performance. In this work, we provide generic realizations of this natural idea (independently of the chosen remarkable lattice) by basing cryptography on the lattice isomorphism problem (LIP). More specifically, we provide: - a worst-case to average-case reduction for search-LIP and distinguish-LIP within an isomorphism class, by extending techniques of Haviv and Regev (SODA 2014). - a zero-knowledge proof of knowledge (ZKPoK) of an isomorphism. This implies an identification scheme based on search-LIP. - a key encapsulation mechanism (KEM) scheme and a hash-then-sign signature scheme, both based on distinguish-LIP. The purpose of this approach is for remarkable lattices to improve the security and performance of lattice-based cryptography. For example, decoding within poly-logarithmic factor from Minkowski's bound in a remarkable lattice would lead to a KEM resisting lattice attacks down to poly-logarithmic approximation factor, provided that the dual lattice is also close to Minkowski's bound. Recent works have indeed reached such decoders for certain lattices (Chor-Rivest, Barnes-Sloan), but these do not perfectly fit our need as their duals have poor minimal distance.
2022
EUROCRYPT
Limits of Polynomial Packings for $\mathbb{Z}_{p^k}$ and $\mathbb{F}_{p^k}$ 📺
We formally define polynomial packing methods and initiate a unified study of related concepts in various contexts of cryptography. This includes homomorphic encryption (HE) packing and reverse multiplication-friendly embedding (RMFE) in information-theoretically secure multi-party computation (MPC). We prove several upper bounds and impossibility results on packing methods for $\mathbb{Z}_{p^k}$ or $\mathbb{F}_{p^k}$-messages into $\mathbb{Z}_{p^t}[x]/f(x)$ in terms of (i) packing density, (ii) level-consistency, and (iii) surjectivity. These results have implications on recent development of HE-based MPC over $\mathbb{Z}_{2^k}$ secure against actively corrupted majority and provide new proofs for upper bounds on RMFE.
2022
EUROCRYPT
High-Precision Bootstrapping for Approximate Homomorphic Encryption by Error Variance Minimization 📺
The Cheon-Kim-Kim-Song (CKKS) scheme (Asiacrypt'17) is one of the most promising homomorphic encryption (HE) schemes as it enables privacy-preserving computing over real (or complex) numbers. It is known that bootstrapping is the most challenging part of the CKKS scheme. Further, homomorphic evaluation of modular reduction is the core of the CKKS bootstrapping. As modular reduction is not represented by the addition and multiplication of complex numbers, approximate polynomials for modular reduction should be used. The best-known techniques (Eurocrypt'21) use a polynomial approximation for trigonometric functions and their composition. However, all the previous methods are based on an indirect approximation, and thus they require large multiplicative depth to achieve high accuracy. This paper proposes a direct polynomial approximation of modular reduction for CKKS bootstrapping, which is optimal in error variance and depth. Further, we propose an efficient algorithm, namely the lazy baby-step giant-step (BSGS) algorithm, to homomorphically evaluate the approximate polynomial, utilizing the lazy relinearization/rescaling technique. The lazy BSGS algorithm reduces the computational complexity by half compared to the ordinary BSGS algorithm. The performance improvement for the CKKS scheme by the proposed algorithm is verified by implementation using HE libraries. The implementation results show that the proposed method has a multiplicative depth of 10 for modular reduction to achieve the state-of-the-art accuracy, while the previous methods have depths of 11 to 12. Moreover, we achieve higher accuracy within a small multiplicative depth, for example, 93-bit accuracy within multiplicative depth 11.
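As background, the baby-step giant-step regrouping that the paper's lazy variant builds on can be seen in the clear: writing $i = jm + k$ turns a degree-$d$ evaluation into roughly $m + d/m$ multiplications instead of $d$. The plain-integer sketch below is illustrative only; in CKKS the same regrouping is applied to ciphertexts, where the saving is in expensive ciphertext-ciphertext multiplications, and the lazy technique additionally defers relinearization and rescaling.

```python
def bsgs_eval(coeffs, x, m):
    """Evaluate sum(coeffs[i] * x**i) with baby-step giant-step:
    write i = j*m + k and regroup as sum_j (x^m)^j * sum_k c_{jm+k} * x^k."""
    baby = [1]                      # baby steps: x^0 .. x^(m-1)
    for _ in range(m - 1):
        baby.append(baby[-1] * x)
    giant = baby[-1] * x            # x^m
    result, power = 0, 1            # power tracks (x^m)^j
    for j in range(0, len(coeffs), m):
        inner = sum(c * b for c, b in zip(coeffs[j:j + m], baby))
        result += power * inner
        power *= giant
    return result

coeffs = [3, 1, 4, 1, 5, 9, 2, 6]
assert bsgs_eval(coeffs, 7, 3) == sum(c * 7**i for i, c in enumerate(coeffs))
```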
2022
EUROCRYPT
Dynamic Collusion Bounded Functional Encryption from Identity-Based Encryption 📺
Functional Encryption is a powerful notion of encryption in which each decryption key is associated with a function f such that decryption recovers the function evaluation f(m). Informally, security states that a user with access to function keys sk_f1, sk_f2, ... (and so on) can only learn f1(m), f2(m), ... (and so on) but nothing more about the message. The system is said to be q-bounded collusion resistant if the security holds as long as an adversary gets access to at most q = q(λ) function keys. A major drawback of such statically bounded collusion systems is that the collusion bound q must be declared at setup time and is fixed for the entire lifetime of the system. We initiate the study of dynamically bounded collusion resistant functional encryption systems which provide more flexibility in terms of selecting the collusion bound, while reaping the benefits of statically bounded collusion FE systems (such as quantum resistance, simulation security, and general assumptions). Briefly, the virtues of a dynamically bounded scheme can be summarized as: -Fine-grained individualized selection: It lets each encryptor select the collusion bound by weighing the trade-off between performance overhead and the amount of collusion resilience. -Evolving encryption strategies: Since the system is no longer tied to a single collusion bound, it allows dynamically adjusting the desired collusion resilience based on any number of evolving factors, such as the age of the system or the number of active users. -Ease and simplicity of updatability: None of the system parameters have to be updated when adjusting the collusion bound. That is, the same key sk_f can be used to decrypt ciphertexts for collusion bound q = 2 as well as q = 2^λ. We construct such a dynamically bounded functional encryption scheme for the class of all polynomial-size circuits under the general assumption of Identity-Based Encryption.
2022
EUROCRYPT
Round-Optimal Byzantine Agreement 📺
Byzantine agreement is a fundamental primitive in cryptography and distributed computing, and minimizing its round complexity is of paramount importance. It has long been known that any randomized $r$-round protocol must fail with probability at least $(c\cdot r)^{-r}$, for some constant $c$, when the number of corruptions is linear in the number of parties, $t = \Theta(n)$. On the other hand, current protocols fail with probability at least $2^{-r}$. Whether the lower-bound agreement probability can be matched has remained unknown. In this work, we resolve this long-standing open question. We present a protocol that matches the lower bound up to constant factors. Our results hold under a (strongly rushing) adaptive adversary that can corrupt up to $t = (1-\epsilon)n/2$ parties, and our protocols use a public-key infrastructure and a trusted setup for unique threshold signatures. This is the first protocol that decreases the failure probability (overall) by a \emph{super-constant} factor per round.
2022
EUROCRYPT
Post-Quantum Security of the Even-Mansour Cipher 📺
The Even-Mansour cipher is a simple method for constructing a (keyed) pseudorandom permutation $E$ from a public random permutation $P:\{0,1\}^n \rightarrow \{0,1\}^n$. It is a core ingredient in a wide array of symmetric-key constructions, including several lightweight cryptosystems presently under consideration for standardization by NIST. It is secure against classical attacks, with optimal attacks requiring $q_E$ queries to $E$ and $q_P$ queries to $P$ such that $q_P \cdot q_E \approx 2^n$. If the attacker is given \emph{quantum} access to both $E$ and $P$, however, the cipher is completely insecure, with attacks using $q_P = q_E = O(n)$ queries known. In any plausible real-world setting, however, a quantum attacker would have only \emph{classical} access to the keyed permutation $E$ implemented by honest parties, while retaining quantum access to $P$. Attacks in this setting with $q_P^2 \cdot q_E \approx 2^n$ are known, showing that security degrades as compared to the purely classical case, but leaving open the question as to whether the Even-Mansour cipher can still be proven secure in this natural ``post-quantum'' setting. We resolve this open question, showing that any attack in this post-quantum setting requires $q^2_P \cdot q_E + q_P \cdot q_E^2 \approx 2^n$. Our results apply to both the two-key and single-key variants of Even-Mansour. Along the way, we establish several generalizations of results from prior work on quantum-query lower bounds that may be of independent interest.
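For reference, the two-key Even-Mansour construction itself is $E_{k_1,k_2}(x) = P(x \oplus k_1) \oplus k_2$ (the single-key variant sets $k_1 = k_2$). A toy sketch with an explicitly sampled public permutation follows; the block size is far too small for any security and is chosen only so the permutation table fits in memory.

```python
import random

n = 16                                  # toy block size in bits
rng = random.Random(0)
perm = list(range(2**n))
rng.shuffle(perm)                       # public random permutation P
inv_perm = [0] * 2**n
for i, y in enumerate(perm):
    inv_perm[y] = i                     # P^{-1}, also public

def em_encrypt(k1, k2, x):
    """Two-key Even-Mansour: E(x) = P(x XOR k1) XOR k2."""
    return perm[x ^ k1] ^ k2

def em_decrypt(k1, k2, y):
    return inv_perm[y ^ k2] ^ k1

k1, k2 = rng.randrange(2**n), rng.randrange(2**n)
x = 0x1234
assert em_decrypt(k1, k2, em_encrypt(k1, k2, x)) == x
```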
2022
EUROCRYPT
Round-Optimal and Communication-Efficient Multiparty Computation 📺
Typical approaches for minimizing the round complexity of multi-party computation (MPC) come at the cost of increased communication complexity (CC) or the reliance on setup assumptions. A notable exception is the recent work of Ananth et al. [TCC 2019], which used Functional Encryption (FE) combiners to obtain a round-optimal (two-round) semi-honest MPC in the plain model with CC proportional to the depth and input-output length of the circuit being computed---we refer to such protocols as circuit scalable. This leaves open the question of obtaining communication-efficient protocols that are secure against malicious adversaries in the plain model, which our work solves. Concretely, our two main contributions are: 1) We provide a round-preserving black-box compiler that compiles a wide class of MPC protocols into circuit-scalable maliciously secure MPC protocols in the plain model, assuming (succinct) FE combiners. 2) We provide a round-preserving black-box compiler that compiles a wide class of MPC protocols into circuit-independent---i.e., with CC that depends only on the input-output length of the circuit---maliciously secure MPC protocols in the plain model, assuming Multi-Key Fully-Homomorphic Encryption (MFHE). Our constructions are based on a new compiler that turns a wide class of MPC protocols into k-delayed-input function MPC protocols (a notion we introduce), where the function to be computed is specified only in the k-th round of the protocol. As immediate corollaries of our two compilers, we derive (1) the first round-optimal and circuit-scalable maliciously secure MPC, and (2) the first round-optimal and circuit-independent maliciously secure MPC in the plain model. The latter MPC achieves the best to-date CC for a round-optimal malicious MPC protocol. In fact, it is even communication-optimal when the output size of the function being evaluated is smaller than its input size (e.g., for boolean functions). All of our results are based on standard polynomial time assumptions.
2022
EUROCRYPT
Incompressible Cryptography 📺
Incompressible encryption allows us to make the ciphertext size flexibly large and ensures that an adversary learns nothing about the encrypted data, even if the decryption key later leaks, unless she stores essentially the entire ciphertext. Incompressible signatures can be made arbitrarily large and ensure that an adversary cannot produce a signature on any message, even one she has seen signed before, unless she stores one of the signatures essentially in its entirety. In this work, we give simple constructions of both incompressible public-key encryption and signatures under minimal assumptions. Furthermore, large incompressible ciphertexts (resp. signatures) can be decrypted (resp. verified) in a streaming manner with low storage. In particular, these notions strengthen the related concepts of disappearing encryption and signatures, recently introduced by Guan and Zhandry (TCC 2021), whose previous constructions relied on sophisticated techniques and strong, non-standard assumptions. We extend our constructions to achieve an optimal "rate", meaning the large ciphertexts (resp. signatures) can contain almost equally large messages, at the cost of stronger assumptions.
2022
EUROCRYPT
Multi-Designated Receiver Signed Public Key Encryption 📺
This paper introduces a new type of public-key encryption scheme, called Multi-Designated Receiver Signed Public Key Encryption (MDRS-PKE), which allows a sender to select a set of designated receivers and both encrypt and sign a message that only these receivers will be able to read and authenticate (confidentiality and authenticity). An MDRS-PKE scheme provides several additional security properties which allow for a fundamentally new type of communication not considered before. Namely, it satisfies consistency---a dishonest sender cannot make different receivers receive different messages---off-the-record---a dishonest receiver cannot convince a third party of what message was sent (e.g., by selling their secret key), because dishonest receivers have the ability to forge signatures---and anonymity---parties that are not in the set of designated receivers cannot identify who the sender and designated receivers are. We give a construction of an MDRS-PKE scheme from standard assumptions. At the core of our construction lies yet another new type of public-key encryption scheme, which is of independent interest: Public Key Encryption for Broadcast (PKEBC) which provides all the security guarantees of MDRS-PKE schemes, except authenticity. We note that MDRS-PKE schemes give strictly more guarantees than Multi-Designated Verifier Signature (MDVS) schemes with privacy of identities. This in particular means that our MDRS-PKE construction yields the first MDVS scheme with privacy of identities from standard assumptions. The only prior construction of such schemes was based on Verifiable Functional Encryption for general circuits (Damgård et al., TCC '20).
2022
EUROCRYPT
On Succinct Non-Interactive Arguments in Relativized Worlds 📺
Succinct non-interactive arguments of knowledge (SNARKs) are cryptographic proofs with strong efficiency properties. Applications of SNARKs often involve proving computations that include the SNARK verifier, a technique called recursive composition. Unfortunately, SNARKs with desirable features such as a transparent (public-coin) setup are known only in the random oracle model (ROM). In applications this oracle must be heuristically instantiated and used in a non-black-box way. In this paper we identify a natural oracle model, the low-degree random oracle model, in which there exist transparent SNARKs for all NP computations *relative to this oracle*. Informally, letting $O$ be a low-degree encoding of a random oracle, and assuming the existence of (standard-model) collision-resistant hash functions, there exist SNARKs relative to $O$ for all languages in $NP^{O}$. Such a SNARK can directly prove a computation about its own verifier. To analyze this model, we introduce a more general framework, the *linear code random oracle model* (LCROM). We show how to obtain SNARKs in the LCROM for computations that query the oracle, given an *accumulation scheme* for oracle queries. Then we construct such an accumulation scheme for the special case of a low degree random oracle.
2022
EUROCRYPT
Refined Cryptanalysis of the GPRS Ciphers GEA-1 and GEA-2 📺
At EUROCRYPT 2021, Beierle et al. presented the first public analysis of the GPRS ciphers GEA-1 and GEA-2. They showed that although GEA-1 uses a 64-bit session key, it can be recovered with the knowledge of only 65 bits of keystream in time $2^{40}$ using $44$ GiB of memory. The attack exploits a weakness in the initialization process of the cipher that was presumably hidden intentionally by the designers to reduce its security. While no such weakness was found for GEA-2, the authors presented an attack on this cipher with time complexity of about $2^{45}$. The main practical obstacle is the required knowledge of 12800 bits of keystream used to encrypt a full GPRS frame. Variants of the attack are applicable (but more expensive) when given less consecutive keystream bits, or when the available keystream is fragmented (it contains no long consecutive block). In this paper, we improve and complement the previous analysis of GEA-1 and GEA-2. For GEA-1, we devise an attack in which the memory complexity is reduced by a factor of about $2^{13} = 8192$ from $44$ GiB to about 4 MiB, while the time complexity remains $2^{40}$. Our implementation recovers the GEA-1 session key in an average time of 2.5 hours on a modern laptop. For GEA-2, we describe two attacks that complement the analysis of Beierle et al. The first attack obtains a linear tradeoff between the number of consecutive keystream bits available to the attacker (denoted by $\ell$) and the time complexity. It improves upon the previous attack in the range of (roughly) $\ell \leq 7000$. Specifically, for $\ell = 1100$ the complexity of our attack is about $2^{54}$, while the previous one is not faster than the $2^{64}$ brute force complexity. In case the available keystream is fragmented, our second attack reduces the memory complexity of the previous attack by a factor of $512$ from 32 GiB to 64 MiB with no time complexity penalty. Our attacks are based on new combinations of stream cipher cryptanalytic techniques and algorithmic techniques used in other contexts (such as solving the $k$-XOR problem).
2022
EUROCRYPT
Families of SNARK-friendly 2-chains of elliptic curves 📺
At CANS'20, El Housni and Guillevic introduced a new 2-chain of pairing-friendly elliptic curves for recursive zero-knowledge Succinct Non-interactive ARguments of Knowledge (zk-SNARKs) made of the former BLS12-377 curve (a Barreto--Lynn--Scott curve over a 377-bit prime field) and the new BW6-761 curve (a Brezing--Weng curve of embedding degree 6 over a 761-bit prime field). First we generalise the curve construction, the pairing formulas ($e \colon \mathbb{G}_1 \times \mathbb{G}_2 \to \mathbb{G}_T$) and the group operations to any BW6 curve defined on top of any BLS12 curve, forming a family of 2-chain pairing-friendly curves. Second, we investigate other possible 2-chain families made on top of the BLS12 and BLS24 curves. We compare BW6 to Cocks--Pinch curves of higher embedding degrees 8 and 12 (CP8, CP12) at the 128-bit security level. We derive formulas for efficient optimal ate and optimal Tate pairings on our new CP curves. We show that for both BLS12 and BLS24, the BW6 construction always gives the fastest pairing and curve arithmetic compared to Cocks--Pinch curves. Finally, we suggest a short list of curves suitable for Groth16 and KZG-based universal SNARKs and present an optimized implementation of these curves. Based on Groth16 and PlonK (a KZG-based SNARK) implementations in the \texttt{gnark} ecosystem, we obtain that the BLS12-377/BW6-761 pair is optimized for the former while the BLS24-315/BW6-672 pair is optimized for the latter.
2022
EUROCRYPT
Zero-Knowledge IOPs with Linear-Time Prover and Polylogarithmic-Time Verifier 📺
Interactive oracle proofs (IOPs) are a multi-round generalization of probabilistically checkable proofs that play a fundamental role in the construction of efficient cryptographic proofs. We present an IOP that simultaneously achieves the properties of zero knowledge, linear-time proving, and polylogarithmic-time verification. We construct a zero-knowledge IOP where, for the satisfiability of an $N$-gate arithmetic circuit over any field of size $\Omega(N)$, the prover uses $O(N)$ field operations and the verifier uses $\mathrm{polylog}(N)$ field operations (with proof length $O(N)$ and query complexity $\mathrm{polylog}(N)$). Polylogarithmic verification is achieved in the holographic setting for every circuit (the verifier has oracle access to a linear-time-computable encoding of the circuit whose satisfiability is being proved). Our result implies progress on a basic goal in the area of efficient zero knowledge. Via a known transformation, we obtain a zero knowledge argument system where the prover runs in linear time and the verifier runs in polylogarithmic time; the construction is plausibly post-quantum and only makes a black-box use of lightweight cryptography (collision-resistant hash functions).
2022
EUROCRYPT
Anamorphic Encryption: Private Communication against a Dictator 📺
Cryptosystems have been developed over the years under the typical prevalent setting which assumes that the receiver’s key is kept secure from the adversary, and that the choice of the message to be sent is freely performed by the sender and is kept secure from the adversary as well. Under these fundamental and basic operational assumptions, modern Cryptography has flourished over the last half a century or so, with amazing achievements: New systems (including public-key Cryptography), beautiful and useful models (including security definitions such as semantic security), and new primitives (such as zero-knowledge proofs) have been developed. Furthermore, these fundamental achievements have been translated into actual working systems, and span many of the daily human activities over the Internet. However, in recent years, there has been ever-growing pressure from many governments to allow the government itself access to keys and messages of encryption systems (under various names: escrow encryption, emergency access, communication decency acts, etc.). Numerous non-direct arguments against such policies have been raised, such as “the bad guys can utilize other encryption systems” so all other cryptosystems have to be declared illegal, or that “allowing the government access is an ill-advised policy since it creates a natural weak point in systems security, which may attract others (to masquerade as the government).” It remains a fundamental open issue to show directly that the above-mentioned efforts by a government (called here “a dictator” for brevity), which mandate breaking the basic operational assumption (and disallow other cryptosystems), are, in fact, a futile exercise. This is a direct technical point which needs to be made and has not been made to date. In this work, as a technical demonstration of the futility of the dictator’s demands, we invent the notion of “Anamorphic Encryption” which shows that even if the dictator gets the keys and the messages used in the system (before anything is sent) and no other system is allowed, there is a covert way within the context of well-established public-key cryptosystems for an entity to send secure messages which are, in spite of the stringent dictator conditions, hidden from the dictator itself! We feel that this may be an important direct technical argument against governments’ attempts to police the use of strong cryptographic systems, and we hope to stimulate further works in this direction.
2022
EUROCRYPT
Watermarking PRFs against Quantum Adversaries 📺
We initiate the study of software watermarking against quantum adversaries. A quantum adversary generates a quantum state as a pirate software that potentially removes an embedded message from a classical marked software. Extracting an embedded message from quantum pirate software is difficult since measurement could irreversibly alter the quantum state. In software watermarking against classical adversaries, a message extraction algorithm crucially uses the (input-output) behavior of a classical pirate software to extract an embedded message. Even if we instantiate existing watermarking PRFs with quantum-safe building blocks, it is not clear whether they are secure against quantum adversaries due to the quantum-specific property above. Thus, we need entirely new techniques to achieve software watermarking against quantum adversaries. In this work, we define secure watermarking PRFs for quantum adversaries (unremovability against quantum adversaries). We also present two watermarking PRFs as follows. - We construct a privately extractable watermarking PRF against quantum adversaries from the quantum hardness of the learning with errors (LWE) problem. The marking and extraction algorithms use a public parameter and a private extraction key, respectively. The watermarking PRF is unremovable even if adversaries have (the public parameter and) access to the extraction oracle, which returns a result of extraction for a queried quantum circuit. - We construct a publicly extractable watermarking PRF against quantum adversaries from indistinguishability obfuscation (IO) and the quantum hardness of the LWE problem. The marking and extraction algorithms use a public parameter and a public extraction key, respectively. The watermarking PRF is unremovable even if adversaries have the extraction key (and the public parameter). We develop a quantum extraction technique to extract information (a classical string) from a quantum state without destroying the state too much. We also introduce the notion of extraction-less watermarking PRFs as a crucial building block to achieve the results above by combining the tool with our quantum extraction technique.
2022
EUROCRYPT
Non-Interactive Zero-Knowledge Proofs with Fine-Grained Security 📺
We construct the first non-interactive zero-knowledge (NIZK) proof systems in the fine-grained setting where adversaries' resources are bounded and honest users have no more resources than an adversary. More concretely, our setting is the NC1-fine-grained setting, namely, all parties (including adversaries and honest participants) are in NC1. Our NIZK systems are for circuit satisfiability (SAT) under the worst-case assumption, $NC1 \subsetneq L/poly$. As technical contributions, we propose two approaches to construct NIZK in the NC1-fine-grained setting. In stark contrast to the classical Fiat-Shamir transformation, both our approaches start with a simple Sigma-protocol and transform it into NIZKs for circuit SAT without random oracles. Additionally, our second approach first constructs, as a building block, a fully homomorphic encryption (FHE) scheme in the fine-grained setting, which was not known before. Compared with the first approach, the resulting NIZK only supports circuits with constant multiplicative depth, while its proof size is independent of the statement circuit size. Extending our approaches, we obtain two NIZK systems in the uniform reference string model and two non-interactive zaps (namely, non-interactive witness-indistinguishability proof systems in the plain model). While the previous constructions from Ball, Dachman-Soled, and Kulkarni (CRYPTO 2020) require provers to run in polynomial time, our constructions are the first ones with provers in NC1.
2022
EUROCRYPT
Anonymity of NIST PQC Round 3 KEMs 📺
This paper investigates \emph{anonymity} of all NIST PQC Round 3 KEMs: Classic McEliece, Kyber, NTRU, Saber, BIKE, FrodoKEM, HQC, NTRU Prime (Streamlined NTRU Prime and NTRU LPRime), and SIKE. We show the following results: * NTRU is anonymous in the quantum random oracle model (QROM) if the underlying deterministic PKE is strongly disjoint-simulatable. NTRU is collision-free in the QROM. A hybrid PKE scheme constructed from NTRU as KEM and an appropriate DEM is anonymous and robust. (Similar results hold for BIKE, FrodoKEM, HQC, NTRU LPRime, and SIKE, except for one of the three parameter sets of HQC.) * Classic McEliece is anonymous in the QROM if the underlying PKE is strongly disjoint-simulatable, and a hybrid PKE scheme constructed from it as KEM and an appropriate DEM is anonymous. * Grubbs, Maram, and Paterson pointed out that Kyber and Saber have a gap in the current IND-CCA security proof in the QROM (EUROCRYPT 2022). We found that Streamlined NTRU Prime has another technical obstacle for the IND-CCA security proof in the QROM. These answer the open problem to investigate the anonymity and robustness of NIST PQC Round 3 KEMs posed by Grubbs, Maram, and Paterson (EUROCRYPT 2022). We use strong disjoint-simulatability of the underlying PKE of the KEM, and strong pseudorandomness and smoothness/sparseness of the KEM, as the main tools, which will be of independent interest.
2022
EUROCRYPT
EpiGRAM: Practical Garbled RAM 📺
Best Paper Award
Garbled RAM (GRAM) is a powerful technique introduced by Lu and Ostrovsky that equips Garbled Circuit (GC) with a sublinear cost RAM without adding rounds of interaction. While multiple GRAM constructions are known, none are suitable for practice, due to costs that have high constants and poor scaling. We present the first GRAM suitable for practice. For computational security parameter $\kappa$ and for a size-$n$ RAM that stores blocks of size $w = \Omega(\log^2 n)$ bits, our GRAM incurs only amortized $O(w \cdot \log^2 n \cdot \kappa)$ communication and computation per access. We evaluate the concrete cost of our GRAM; our approach outperforms trivial linear-scan-based RAM for as few as $512$ $128$-bit elements.
2022
EUROCRYPT
Efficient Schemes for Committing Authenticated Encryption 📺
This paper provides efficient authenticated-encryption (AE) schemes in which a ciphertext is a commitment to the key. These are extended, at minimal additional cost, to schemes where the ciphertext is a commitment to all encryption inputs, meaning key, nonce, associated data and message. Our primary schemes are modifications of GCM (for basic, unique-nonce AE security) and AES-GCM-SIV (for misuse-resistant AE security) and add both forms of commitment without any increase in ciphertext size. We also give more generic, but somewhat more costly, solutions.
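As generic background (not the paper's GCM-based schemes, which add commitment without any ciphertext growth), a folklore way to obtain key commitment is to transmit a collision-resistant commitment to the key and nonce alongside an encrypt-then-MAC ciphertext. The sketch below uses SHA-256 stand-ins for every component and is illustrative only.

```python
import hashlib, hmac, os

def _kdf(label, key, data):
    """Derive a subkey or tag; SHA-256 here is an illustrative stand-in."""
    return hashlib.sha256(label + key + data).digest()

def _stream(key, length):
    """Toy keystream: SHA-256 in counter mode (illustration only)."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def committing_encrypt(key, nonce, msg):
    com = _kdf(b"com", key, nonce)            # commitment binding (key, nonce)
    ks = _stream(_kdf(b"enc", key, nonce), len(msg))
    ct = bytes(m ^ s for m, s in zip(msg, ks))
    tag = _kdf(b"mac", key, nonce + ct)       # toy MAC over nonce and ciphertext
    return ct, tag, com

def committing_decrypt(key, nonce, ct, tag, com):
    if not hmac.compare_digest(com, _kdf(b"com", key, nonce)):
        raise ValueError("key-commitment check failed")
    if not hmac.compare_digest(tag, _kdf(b"mac", key, nonce + ct)):
        raise ValueError("authentication failed")
    ks = _stream(_kdf(b"enc", key, nonce), len(ct))
    return bytes(c ^ s for c, s in zip(ct, ks))

key, nonce = os.urandom(32), os.urandom(12)
ct, tag, com = committing_encrypt(key, nonce, b"attack at dawn")
assert committing_decrypt(key, nonce, ct, tag, com) == b"attack at dawn"
```

Opening `com` under a different key would require a hash collision; the paper's contribution is achieving this binding (and binding to all inputs) inside GCM and AES-GCM-SIV with no extra ciphertext bytes.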
2022
EUROCRYPT
Authentication in the Bounded Storage Model 📺
We consider the streaming variant of the Bounded Storage Model (BSM), where the honest parties can stream large amounts of data to each other, while only maintaining a small memory of size $n$. The adversary also operates as a streaming algorithm, but has a much larger memory size $m \gg n$. The goal is to construct unconditionally secure cryptographic schemes in the BSM, and prior works did so for symmetric-key encryption, key agreement, oblivious transfer and multiparty computation. In this work, we construct message authentication and signatures in the BSM. First, we consider the symmetric-key setting, where Alice and Bob share a small secret key. Alice can authenticate arbitrarily many messages to Bob by streaming long authentication tags of size $k \gg m$, while ensuring that the tags can be generated and verified using only $n$ bits of memory. We show a solution using local extractors (Vadhan; JoC '04), which allows for up to exponentially large adversarial memory $m = 2^{O(n)}$, and has tags of size $k = O(m)$. Second, we consider the same setting as above, but now additionally require each individual tag to be small, of size $k \leq n$. We show a solution is still possible when the adversary's memory is $m = O(n^2)$, which is optimal. Our solution relies on a space lower bound for learning parities (Raz; FOCS '16). Third, we consider the public-key signature setting. A signer Alice initially streams a long verification key over an authentic channel, while only keeping a short signing key in her memory. A verifier Bob receives the streamed verification key and generates some short verification digest that he keeps in his memory. Later, Alice can sign arbitrarily many messages using her signing key by streaming the signatures to Bob, who can verify them using his verification digest. We show a solution for $m = O(n^2)$, which we show to be optimal. Our solution relies on a novel entropy lemma, of independent interest. We show that, if a sequence of blocks has sufficiently high min-entropy, then a large fraction of individual blocks must have high min-entropy. Naive versions of this lemma are false, but we show how to patch it to make it hold.
2022
EUROCRYPT
Short Pairing-Free Blind Signatures with Exponential Security 📺
This paper proposes the first practical pairing-free three-move blind signature schemes that (1) are concurrently secure, (2) produce short signatures (i.e., {\em three} or {\em four} group elements/scalars), and (3) are provably secure either in the generic group model (GGM) or the algebraic group model (AGM) under the (plain or one-more) discrete logarithm assumption (beyond additionally assuming random oracles). We also propose a partially blind version of one of our schemes. Our schemes do not rely on the hardness of the ROS problem (which can be broken in polynomial time) or of the mROS problem (which admits sub-exponential attacks). The only prior work with these properties is Abe's signature scheme (EUROCRYPT '02), which was recently proved to be secure in the AGM by Kastner et al. (PKC '22), but which also produces signatures twice as long as those from our scheme. The core of our proofs of security is a new problem, called {\em weighted} {\em fractional} ROS (WFROS), for which we prove (unconditional) exponential lower bounds.
2022
EUROCRYPT
Secure Non-interactive Simulation: Feasibility & Rate 📺
A natural solution to increase the efficiency of secure computation will be to non-interactively and securely transform diverse inexpensive-to-generate correlated randomness, like joint samples from noise sources, into correlations useful for secure computation protocols. Motivated by this general application for secure computation, our work introduces the notion of {\em secure non-interactive simulation} (SNIS). Parties receive samples of correlated randomness, and they, without any interaction, securely convert them into samples from another correlated randomness. Our work presents a simulation-based security definition for SNIS and initiates the study of the feasibility and efficiency of SNIS. We also study SNIS among fundamental correlated randomnesses like random samples from the binary symmetric and binary erasure channels, represented by BSC and BEC, respectively. We show the impossibility of interconversion between BSC and BEC samples. Next, we prove that a SNIS of a $\mathrm{BEC}(\epsilon')$ sample (a BEC with noise characteristic $\epsilon'$) from $\mathrm{BEC}(\epsilon)$ is feasible if and only if $(1-\epsilon') = (1-\epsilon)^k$, for some $k\in\mathbb{N}$. In this context, we prove that all SNIS constructions must be linear. Furthermore, if $(1-\epsilon') = (1-\epsilon)^k$, then the rate of simulating multiple independent $\mathrm{BEC}(\epsilon')$ samples is at most $1/k$, which is also achievable using (block) linear constructions. Finally, we show that a SNIS of a $\mathrm{BSC}(\epsilon')$ sample from $\mathrm{BSC}(\epsilon)$ samples is feasible if and only if $(1-2\epsilon')=(1-2\epsilon)^k$, for some $k\in\mathbb{N}$. Interestingly, there are linear as well as non-linear SNIS constructions. When $(1-2\epsilon')=(1-2\epsilon)^k$, we prove that the rate of a {\em perfectly secure} SNIS is at most $1/k$, which is achievable using linear and non-linear constructions. Our technical approach algebraizes the definition of SNIS and proceeds via Fourier analysis. Our work develops general analysis methodologies for Boolean functions, explicitly incorporating cryptographic security constraints. Our work also proves strong forms of {\em statistical-to-perfect security} transformations: one can error-correct a statistically secure SNIS to make it perfectly secure. We show a connection of our research with {\em homogeneous Boolean functions} and {\em distance-invariant codes}, which may be of independent interest.
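The BEC feasibility condition has a concrete channel interpretation: a bit surviving $k$ independent $\mathrm{BEC}(\epsilon)$ channels in sequence survives with probability $(1-\epsilon)^k$, matching the condition $(1-\epsilon') = (1-\epsilon)^k$. A quick Monte Carlo check of this identity (illustrative only; SNIS itself demands simulation security, not just correct marginals):

```python
import random

def bec(bit, eps, rng):
    """Binary erasure channel: erase (None) with probability eps."""
    return None if rng.random() < eps else bit

def chain(bit, eps, k, rng):
    """Pass a bit through k independent BEC(eps) channels in sequence."""
    for _ in range(k):
        if bit is None:
            return None
        bit = bec(bit, eps, rng)
    return bit

rng = random.Random(1)
eps, k, trials = 0.2, 3, 200_000
survived = sum(chain(0, eps, k, rng) is not None for _ in range(trials))
print(survived / trials, (1 - eps) ** k)   # both approximately 0.512
```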
2022
EUROCRYPT
SNARGs for P from Sub-exponential DDH and QR 📺
We obtain publicly verifiable Succinct Non-Interactive Arguments (SNARGs) for arbitrary deterministic computations and bounded-space non-deterministic computation from well-studied group-based assumptions. In particular, assuming the sub-exponential hardness of the Decisional Diffie-Hellman (DDH) and Quadratic Residuosity (QR) assumptions, we obtain the following results, where n denotes the length of the instance: 1. A SNARG for any language that can be decided in non-deterministic time T and space S with communication complexity and verifier runtime (n+S)·T^{o(1)}. 2. A SNARG for any language that can be decided in deterministic time T with communication complexity n·T^{o(1)} and verifier runtime n·T^{o(1)}.
2022
EUROCRYPT
Hiding in Plain Sight: Memory-tight Proofs via Randomness Programming 📺
This paper continues the study of {\em memory-tight reductions} (Auerbach et al., CRYPTO '17). These are reductions that only incur minimal memory costs over those of the original adversary, allowing precise security statements for memory-bounded adversaries (under appropriate assumptions expressed in terms of adversary time and memory usage). Despite its importance, only a few techniques to achieve memory-tightness are known and impossibility results in prior works show that even basic, textbook reductions cannot be made memory-tight. This paper introduces a new class of memory-tight reductions which leverage random strings in the interaction with the adversary to hide state information, thus shifting the memory costs to the adversary. We exhibit this technique with several examples. We give memory-tight proofs for digital signatures allowing many forgery attempts when considering randomized message distributions or probabilistic RSA-FDH signatures specifically. We prove security of the authenticated encryption scheme Encrypt-then-PRF with a memory-tight reduction to the underlying encryption scheme. By considering specific schemes or restricted definitions we avoid generic impossibility results of Auerbach et al. (CRYPTO '17) and Ghoshal et al. (CRYPTO '20). As a further case study, we consider the textbook equivalence of CCA-security for public-key encryption for one or multiple encryption queries. We show two qualitatively different memory-tight versions of this result, depending on the considered notion of CCA security.
2022
EUROCRYPT
On the Multi-User Security of Short Schnorr Signatures with Preprocessing 📺
The Schnorr signature scheme is an efficient digital signature scheme with short signature lengths, i.e., $4k$-bit signatures for $k$ bits of security. A Schnorr signature $\sigma$ over a group of size $p\approx 2^{2k}$ consists of a tuple $(s,e)$, where $e \in \{0,1\}^{2k}$ is a hash output and $s\in \mathbb{Z}_p$ must be computed using the secret key. While the hash output $e$ requires $2k$ bits to encode, Schnorr proposed that it might be possible to truncate the hash value without adversely impacting security. In this paper, we prove that \emph{short} Schnorr signatures of length $3k$ bits provide $k$ bits of multi-user security in the (Shoup's) generic group model and the programmable random oracle model. We further analyze the multi-user security of key-prefixed short Schnorr signatures against preprocessing attacks, showing that it is possible to obtain secure signatures of length $3k + \log S + \log N$ bits. Here, $N$ denotes the number of users and $S$ denotes the size of the hint generated by our preprocessing attacker, e.g., if $S=2^{k/2}$, then we would obtain secure $3.75k$-bit signatures for groups of up to $N \leq 2^{k/4}$ users. Our techniques easily generalize to several other Fiat-Shamir-based signature schemes, allowing us to establish analogous results for Chaum-Pedersen signatures and Katz-Wang signatures. As a building block, we also analyze the $1$-out-of-$N$ discrete-log problem in the generic group model, with and without preprocessing.
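To make the truncation concrete, here is a toy key-prefixed Schnorr signer over a schoolbook group; the parameters are far too small for security, and `ebits` marks where the hash value $e$ is truncated, which is exactly the saving the paper analyzes. All names are illustrative.

```python
import hashlib, random

p, q, g = 2039, 1019, 4        # toy group: q | p - 1, g generates the order-q subgroup

def H(data: bytes, ebits: int) -> int:
    """Hash truncated to ebits bits; short Schnorr shortens e this way."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") >> (256 - ebits)

def sign(x, y, msg, ebits, rng):
    r = rng.randrange(1, q)
    R = pow(g, r, p)
    e = H(f"{y}|{R}|".encode() + msg, ebits)   # key-prefixed: y is hashed too
    s = (r + e * x) % q
    return s, e

def verify(y, msg, sig, ebits):
    s, e = sig
    R = pow(g, s, p) * pow(y, -e, p) % p       # recompute R = g^s * y^(-e)
    return e == H(f"{y}|{R}|".encode() + msg, ebits)

rng = random.Random(2)
x = rng.randrange(1, q)                        # secret key
y = pow(g, x, p)                               # public key y = g^x
sig = sign(x, y, b"hello", 16, rng)
assert verify(y, b"hello", sig, 16)
```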
2022
EUROCRYPT
Stacking Sigmas: A Framework to Compose Sigma-Protocols for Disjunctions 📺
Zero-Knowledge (ZK) Proofs for disjunctive statements have been a focus of a long line of research. Classical results such as Cramer {\em et al.} [CRYPTO'94] and Abe {\em et al.} [AC'02] design generic compilers that transform certain classes of ZK proofs into ZK proofs for disjunctive statements. However, the communication complexity of the resulting protocols in these results ends up being proportional to the total size of all the proofs in the disjunction. More recently, a series of works (e.g. Heath {\em et al.} [EC'20]) has exploited special properties of garbled circuits to construct efficient ZK proofs for disjunctions, where the proof size is only proportional to the length of the largest clause in the disjunction. However, these techniques do not appear to generalize beyond garbled circuits. In this work, we focus on achieving the best of both worlds. We design a \textit{general framework} that compiles a large class of {unmodified} $\Sigma$-protocols, each for an individual statement, into a new $\Sigma$-protocol that proves a disjunction of these statements. Our framework can be used both when each clause is proved with the same $\Sigma$-protocol and when different $\Sigma$-protocols are used for different clauses. The resulting $\Sigma$-protocol is concretely efficient and has communication complexity proportional to the communication required by the largest clause, with additive terms that are only logarithmic in the number of clauses. We show that our compiler can be applied to many well-known $\Sigma$-protocols, including classical protocols (\emph{e.g.} Schnorr and Guillou-Quisquater) and modern MPC-in-the-head protocols such as the recent work of Katz, Kolesnikov and Wang [CCS'18] and the Ligero protocol of Ames {\em et al.} [CCS'17]. Finally, since all of the protocols in our class can be made non-interactive in the random oracle model using the Fiat-Shamir transform, our result yields the first generic non-interactive zero-knowledge protocol for disjunctions where the communication only depends on the size of the largest clause.
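For contrast with the stacking approach, the classical OR-composition in the style of Cramer et al. [CRYPTO'94] can be sketched directly for two Schnorr clauses: the prover simulates the clause it cannot prove and splits the challenge, so the proof carries a full transcript for every clause. Toy parameters, illustrative only.

```python
import hashlib, random

p, q, g = 2039, 1019, 4            # toy group as before; far too small for security

def Hc(*parts):
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), "big") % q

def prove_or(x1, y1, y2, msg, rng):
    """Prove knowledge of the dlog of y1 OR y2, knowing only x1 = dlog(y1)."""
    # Simulate clause 2: pick (e2, s2) first, derive an accepting R2.
    e2, s2 = rng.randrange(q), rng.randrange(q)
    R2 = pow(g, s2, p) * pow(y2, -e2, p) % p
    # Run clause 1 honestly.
    r1 = rng.randrange(1, q)
    R1 = pow(g, r1, p)
    e = Hc(y1, y2, R1, R2, msg)
    e1 = (e - e2) % q              # challenge split: e1 + e2 = e
    s1 = (r1 + e1 * x1) % q
    return (R1, R2, e1, e2, s1, s2)

def verify_or(y1, y2, msg, proof):
    R1, R2, e1, e2, s1, s2 = proof
    ok1 = pow(g, s1, p) == R1 * pow(y1, e1, p) % p
    ok2 = pow(g, s2, p) == R2 * pow(y2, e2, p) % p
    return ok1 and ok2 and (e1 + e2) % q == Hc(y1, y2, R1, R2, msg)

rng = random.Random(3)
x1 = rng.randrange(1, q); y1 = pow(g, x1, p)
y2 = pow(g, rng.randrange(1, q), p)   # prover does not know this dlog
assert verify_or(y1, y2, b"msg", prove_or(x1, y1, y2, b"msg", rng))
```

Note that the transcript above contains $(R, e, s)$ for both clauses; this per-clause growth is exactly the cost the stacking framework avoids.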
2022
EUROCRYPT
Gemini: elastic SNARKs for diverse environments 📺
We introduce a new class of succinct arguments that we call elastic. Elastic SNARKs allow the prover to allocate different resources (such as memory and time) depending on the execution environment and the statement to prove. The resulting output is independent of the prover’s configuration. To study elastic SNARKs, we extend the streaming paradigm of [Block et al., TCC’20]. We provide a definitional framework for elastic polynomial interactive oracle proofs for R1CS instances and design a compiler which transforms an elastic PIOP into a preprocessing argument system that supports streaming or random access to its inputs. Depending on the configuration, the prover will choose different trade-offs for time (either linear, or quasilinear) and memory (either linear, or logarithmic). We prove the existence of elastic SNARKs by presenting Gemini, a novel FFT-free preprocessing argument. We prove its security and develop a proof-of-concept implementation in Rust based on the arkworks framework. We provide benchmarks for large R1CS instances of tens of billions of gates on a single machine.
2022
EUROCRYPT
Lightweight, Maliciously Secure Verifiable Function Secret Sharing 📺
In this work, we present a lightweight construction of verifiable two-party function secret sharing (FSS) for point functions and multi-point functions. Our verifiability method is lightweight in two ways. Firstly, it is concretely efficient, making use of only symmetric-key operations; no public-key or MPC techniques are involved. Our performance is comparable with the state-of-the-art non-verifiable DPF constructions, and we outperform all prior DPF verification techniques in both computation and communication complexity, which we demonstrate with an implementation of our scheme. Secondly, our verification procedure is essentially unconstrained. It will verify that distributed point function (DPF) shares correspond to some point function irrespective of the output group size, the structure of the DPF output, or the set of points on which the DPF must be evaluated. This is in stark contrast with prior works, which depended on at least one and often all three of these constraints. In addition, our construction is the first DPF verification protocol that can verify general DPFs while remaining secure even if one server is malicious. Prior work on maliciously secure DPF verification could only verify DPFs where the non-zero output is binary and the output space is a large field. As an additional feature, our verification procedure can be batched so that verifying a polynomial number of DPF shares requires the exact same amount of communication as verifying one pair of DPF shares. We combine this batched DPF verification with a novel method for packing DPFs into shares of a multi-point function where the evaluation time, verification time, and verification communication are independent of the number of non-zero points in the function. Immediate corollaries of our results are two-server protocols for PIR and PSI that remain secure when any one of the three parties is malicious (either the client or one of the servers).
2022
EUROCRYPT
Indistinguishability Obfuscation from LPN over F_p, DLIN, and PRGs in NC^0 📺
In this work, we study what minimal sets of assumptions suffice for constructing indistinguishability obfuscation ($i\mathcal{O}$). We prove: Theorem (Informal): Assume sub-exponential security of the following assumptions: - the Learning Parity with Noise ($\mathsf{LPN}$) assumption over general prime fields $\mathbb{F}_p$ with polynomially many $\mathsf{LPN}$ samples and error rate $1/k^\delta$, where $k$ is the dimension of the $\mathsf{LPN}$ secret, and $\delta>0$ is any constant; - the existence of a Boolean Pseudo-Random Generator ($\mathsf{PRG}$) in $\mathsf{NC}^0$ with stretch $n^{1+\tau}$, where $n$ is the length of the $\mathsf{PRG}$ seed, and $\tau>0$ is any constant; - the Decision Linear ($\mathsf{DLIN}$) assumption on symmetric bilinear groups of prime order. Then, (subexponentially secure) indistinguishability obfuscation for all polynomial-size circuits exists. Further, assuming only polynomial security of the aforementioned assumptions, there exists collusion resistant public-key functional encryption for all polynomial-size circuits. This removes the reliance on the Learning With Errors (LWE) assumption from the recent work of [Jain, Lin, Sahai STOC'21]. As a consequence, we obtain the first fully homomorphic encryption scheme that does not rely on any lattice-based hardness assumption. Our techniques feature a new notion of randomized encoding called Preprocessing Randomized Encoding (PRE), that essentially can be computed in the exponent of pairing groups. When combined with other new techniques, PRE gives a much more streamlined construction of $i\mathcal{O}$ while still maintaining reliance only on well-studied assumptions.
2022
EUROCRYPT
Asymmetric PAKE with low computation and communication 📺
In Crypto'21, Gu, Jarecki, and Krawczyk showed an asymmetric password authenticated key exchange protocol (aPAKE) whose computational cost matches (symmetric) password authenticated key exchange (PAKE) and plain (i.e. unauthenticated) key exchange (KE). However, this minimal-cost aPAKE did not match prior aPAKEs in round complexity, using 4 rounds assuming the client initiates, compared to 2 rounds in an aPAKE of Bradley et al. In this paper we show two aPAKE protocols that achieve optimal computational cost and optimal round complexity. Our protocols can be seen as applications of the Encrypted Key Exchange (EKE) compiler of Bellovin and Merritt, which creates password-authenticated key exchange by password-encrypting messages in a key exchange protocol. Whereas Bellovin and Merritt used this method to construct a PAKE by applying password-encryption to KE messages, we construct an aPAKE by applying password-encryption to messages of a unilaterally authenticated Key Exchange (ua-KE). We present two versions of this compiler. The first uses a salted password hash and takes 3 rounds if the client initiates. The second uses an unsalted password hash and takes a single simultaneous flow (it is the first aPAKE to do so), thus simultaneously matching the minimal computational cost and the minimal round complexity of PAKE and KE. We analyze our aPAKE protocols assuming an Ideal Cipher (IC) on a group, as modular constructions from ua-KE realized via a (universally composable) Authenticated Key Exchange where the server uses one-time keys (otk-AKE). We then show that one-pass variants of 3DH and HMQV securely realize otk-AKE in the ROM. Interestingly, the two resulting concrete aPAKEs use the exact same protocol messages as two natural variants of EKE, and the only difference between the symmetric PAKE (EKE) and asymmetric PAKE (our protocols) is in the key derivation equation used to derive the final session key output.
2022
EUROCRYPT
Anonymous, Robust Post-Quantum Public Key Encryption 📺
A core goal of the NIST PQC competition is to produce PKE schemes which, even if attacked with a large-scale quantum computer, maintain the security guarantees needed by applications. The main security focus in the NIST PQC context has been IND-CCA security, but other applications demand that PKE schemes provide 'anonymity' (Bellare et al., ASIACRYPT 2001), and 'robustness' (Abdalla et al., TCC 2010). Examples of such applications include anonymous cryptocurrencies, searchable encryption, and auction protocols. However, almost nothing is known about how to build post-quantum PKE schemes offering these security properties. In particular, the status of the NIST PQC candidates with respect to anonymity and robustness is unknown. This paper initiates a systematic study of anonymity and robustness for post-quantum PKE schemes. Firstly, we identify implicit rejection as a crucial design choice shared by most post-quantum KEMs, show that implicit rejection renders prior results on anonymity and robustness for KEM-DEM PKEs inapplicable, and transfer prior results to the implicit-rejection setting where possible. Secondly, since they are widely used to build post-quantum PKEs, we examine how the Fujisaki-Okamoto (FO) transforms (Fujisaki and Okamoto, Journal of Cryptology 2013) confer robustness and enhance weak anonymity of a base PKE. We then leverage our theoretical results to study the anonymity and robustness of three NIST KEM finalists---Saber, Kyber, and Classic McEliece---and one alternate, FrodoKEM. Overall, our findings for robustness are definitive: we provide positive robustness results for Saber, Kyber, and FrodoKEM, and a negative result for Classic McEliece. Our negative result stems from a striking property of KEM-DEM PKE schemes built with the Classic McEliece KEM: for any message 'm', we can construct a single hybrid ciphertext 'c' which decrypts to the chosen 'm' under any Classic McEliece private key. Our findings for anonymity are more mixed: we identify barriers to proving anonymity for Saber, Kyber, and Classic McEliece. We also found that in the case of Saber and Kyber, these barriers lead to issues with their IND-CCA security claims. We have worked with the Saber and Kyber teams to fix these issues, but they remain unresolved. On the positive side, we were able to prove anonymity for FrodoKEM and a variant of Saber introduced by D'Anvers et al. (AFRICACRYPT 2018). Our analyses of these two schemes also identified technical gaps in their IND-CCA security claims, but we were able to fix them.
2022
EUROCRYPT
Secure Multiparty Computation with Free Branching 📺
We study secure multi-party computation (MPC) protocols for branching circuits that contain multiple sub-circuits (i.e., branches) and whose output is that of a single ``active'' branch. Crucially, the identity of the active branch must remain hidden from the protocol participants. While such circuits can be securely computed by evaluating each branch and then multiplexing the output, such an approach incurs a communication cost linear in the size of the entire circuit. To alleviate this, a series of recent works have investigated the problem of reducing the communication cost of branching executions inside MPC (without relying on fully homomorphic encryption). Most notably, the stacked garbling paradigm [Heath and Kolesnikov, CRYPTO'20] yields garbled circuits for branching circuits whose size only depends on the size of the largest branch. Presently, however, it is not known how to obtain similar communication improvements for secure computation involving {\em more than two parties}. In this work, we provide a generic framework for branching multi-party computation that supports {\em any number of parties}. The communication complexity of our scheme is proportional to the size of the largest branch and the computation is linear in the size of the entire circuit. We provide an implementation and benchmarks to demonstrate the practicality of our approach.
2022
EUROCRYPT
Adaptively Secure Computation for RAM Programs 📺
We obtain the first two-round two-party computation protocol in the plain model that is secure against passive adversaries who can adaptively corrupt all parties, where the communication complexity is proportional to the square of the RAM complexity of the function, up to polylogarithmic factors, assuming the existence of non-committing encryption.
2022
EUROCRYPT
Round-Optimal Multi-Party Computation with Identifiable Abort 📺
Secure multi-party computation (MPC) protocols that are resilient to a dishonest majority allow the adversary to get the output of the computation while, at the same time, forcing the honest parties to abort. Aumann and Lindell introduced the enhanced notion of security with identifiable abort, which still allows the adversary to trigger an abort but, at the same time, enables the honest parties to agree on the identity of the party that led to the abort. More recently, in Eurocrypt 2016, Garg et al. showed that, assuming access to a simultaneous message exchange channel for all the parties, at least four rounds of communication are required to securely realize non-trivial functionalities in the plain model. Following Garg et al., a sequence of works has matched this lower bound, but none of them achieved security with identifiable abort. In this work, we close this gap and show that four rounds of communication are also sufficient to securely realize any functionality with identifiable abort using standard and generic polynomial-time assumptions. To achieve this result we introduce the new notion of bounded-rewind secure MPC, which guarantees security even against an adversary that performs a mild form of reset attacks. We show how to instantiate this primitive starting from any MPC protocol and by assuming trapdoor permutations. The notion of bounded-rewind secure MPC allows for easier parallel composition of MPC protocols with other (interactive) cryptographic primitives. Therefore, we believe that this primitive can be useful in other contexts in which it is crucial to combine multiple primitives with MPC protocols while keeping the round complexity of the final protocol low.
2022
EUROCRYPT
Non-malleable Commitments Against Quantum Attacks 📺
We construct, under standard hardness assumptions, the first non-malleable commitments secure against quantum attacks. Our commitments are statistically binding and satisfy the standard notion of {\em non-malleability with respect to commitment}. We obtain a $\log^\star(\lambda)$-round classical protocol, assuming the existence of post-quantum one-way functions. Previously, non-malleable commitments with quantum security were only known against a restricted class of adversaries known as {\em synchronizing adversaries}. At the heart of our results is a new general technique that allows us to modularly obtain non-malleable commitments from any extractable commitment protocol, independently of the underlying extraction strategy (black-box or non-black-box) or round complexity. The transformation may also be of interest in the classical setting.
2022
EUROCRYPT
Secure Multiparty Computation with Sublinear Preprocessing 📺
A common technique for enhancing the efficiency of secure multiparty computation (MPC) with dishonest majority is via {\em preprocessing}: In an offline phase, parties engage in an input-independent protocol to securely generate correlated randomness. Once inputs are known, the correlated randomness is consumed by a ``non-cryptographic'' and highly efficient online protocol. The correlated randomness in such protocols traditionally comes in two flavors: multiplication triples (Beaver, Crypto '91), which suffice for security against semi-honest parties, and {\em authenticated} multiplication triples (Bendlin et al., Eurocrypt '11, Damgård et al., Crypto '12) that yield efficient protocols against malicious parties. Recent constructions of pseudorandom correlation generators (Boyle et al., Crypto '19, '20) enable concretely efficient secure generation of multiplication triples with {\em sublinear communication complexity}. However, these techniques do not efficiently apply to authenticated triples, except in the case of secure two-party computation of arithmetic circuits over large fields. In this work, we propose the first {\em concretely efficient} approach for (malicious) MPC with preprocessing in which the offline communication is {\em sublinear} in the circuit size. More specifically, the offline communication scales with the {\em square root} of the circuit size. From a feasibility point of view, our protocols can make use of any secure protocol for generating (unauthenticated) multiplication triples together with any {\em additive} homomorphic encryption. We propose concretely efficient instantiations (based on strong but plausible ``linear-only'' assumptions) from existing homomorphic encryption schemes and pseudorandom correlation generators. Our technique is based on a variant of a recent protocol of Boyle et al. (Crypto '21) for MPC with preprocessing. As a result, our protocols inherit the succinct correlated randomness feature of the latter protocol.
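For orientation, the online role of an (unauthenticated) multiplication triple is a few lines of arithmetic: given additive shares of $x$, $y$ and of a random triple $(a, b, c = ab)$, the parties open $d = x - a$ and $e = y - b$ and locally assemble shares of $xy$, since $xy = c + db + ea + de$. A two-party toy sketch over a small prime field, illustrative only:

```python
import random

P = 2**61 - 1                       # toy prime modulus
rng = random.Random(4)

def share(v):
    """Split v into two additive shares modulo P."""
    s0 = rng.randrange(P)
    return s0, (v - s0) % P

# Offline phase (a trusted dealer here, for illustration): shares of x, y
# and of a random triple c = a*b.
x, y = 1234, 5678
a, b = rng.randrange(P), rng.randrange(P)
c = a * b % P
x0, x1 = share(x); y0, y1 = share(y)
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# Online phase: the parties open d = x - a and e = y - b.
d = ((x0 - a0) + (x1 - a1)) % P
e = ((y0 - b0) + (y1 - b1)) % P

# Local computation; the public d*e term is added by party 0 only.
z0 = (c0 + d * b0 + e * a0 + d * e) % P
z1 = (c1 + d * b1 + e * a1) % P
assert (z0 + z1) % P == x * y % P
```

Authenticated triples additionally carry MAC shares on every value; the paper's contribution is generating such correlated randomness with offline communication sublinear in the circuit size.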
2022
EUROCRYPT
Information-Combining Differential Fault Attacks on DEFAULT 📺
Differential fault analysis (DFA) is a very powerful attack vector on implementations of symmetric cryptography. Most countermeasures are applied at the implementation level. At ASIACRYPT 2021, Baksi et al. proposed a design strategy that aims to provide inherent cipher-level resistance against DFA by using S-boxes with linear structures. They argue that in their instantiation, the block cipher DEFAULT, a DFA adversary can learn at most 64 of the 128 key bits, so the remaining brute-force complexity of 2^64 is impractical. In this paper, we show that a DFA adversary can combine information across rounds to recover the full key, invalidating their security claim. In particular, we observe that such ciphers exhibit large classes of equivalent keys that can be represented efficiently in normalized form using linear equations. We exploit this in combination with the specifics of DEFAULT's strong key schedule to recover the key using fewer than 100 faulty computations and negligible time complexity. Moreover, we show that even an idealized version of DEFAULT with independent round keys is vulnerable to our information-combining attacks based on normalized keys.
2022
EUROCRYPT
Revamped Differential-Linear Cryptanalysis on Reduced Round ChaCha 📺
In this paper, we provide several improvements over the existing differential-linear attacks on ChaCha. ChaCha is a stream cipher with $20$ rounds. At CRYPTO $2020$, Beierle et al. observed a differential in the $3.5$-th round if the right pairs are chosen. They produced an improved attack using this, but showed that achieving a right pair requires $2^5$ iterations on average. In this direction, we provide a technique to find the right pairs with the help of listing. We also provide a strategic improvement in the construction of probabilistic neutral bits (PNBs), a modification of the complexity calculation, and an alternative attack method using two input-output pairs. Using these, we improve the time complexity, reducing it to $2^{221.95}$ from the $2^{230.86}$ reported by Beierle et al. for the $256$-bit version of ChaCha. Also, after a decade, we improve the existing complexity (Shi et al., ICISC 2012) for the $6$-round $128$-bit version of ChaCha by a factor of more than 11 million, and produce the first-ever attack on $6.5$-round ChaCha$128$, with time complexity $2^{123.04}$.
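For readers unfamiliar with the cipher, the following Python sketch shows the ChaCha quarter-round (the building block whose differential behavior such attacks exploit) together with a toy one-bit-difference diffusion check; it is background only, not part of the attack.

    def rotl32(v, n):
        return ((v << n) | (v >> (32 - n))) & 0xFFFFFFFF

    def quarter_round(a, b, c, d):
        a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 16)
        c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 12)
        a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 8)
        c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 7)
        return a, b, c, d

    # Flip one input bit and count how many output bits change.
    x = quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567)
    y = quarter_round(0x11111111 ^ 1, 0x01020304, 0x9B8D6F43, 0x01234567)
    print(sum(bin(u ^ v).count("1") for u, v in zip(x, y)), "output bits differ")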
2022
EUROCRYPT
CoCoA: Concurrent Continuous Group Key Agreement 📺
Messaging platforms like Signal are widely deployed and provide strong security in an asynchronous setting. It is a challenging problem to construct a protocol with similar security guarantees that can \emph{efficiently} scale to large groups. A major bottleneck is the frequent key rotations users need to perform to achieve post-compromise forward security. In current proposals -- most notably in TreeKEM (which is part of the IETF's Messaging Layer Security (MLS) protocol draft) -- for users in a group of size $n$ to rotate their keys, they must each craft a message of size $\log(n)$ to be broadcast to the group using an (untrusted) delivery server. In larger groups, having users sequentially rotate their keys requires too much bandwidth (or takes too long), so variants allowing any $T \leq n$ users to simultaneously rotate their keys in just $2$ communication rounds have been suggested (e.g.\ ``Propose and Commit" by MLS). Unfortunately, $2$-round concurrent updates are either damaging or expensive (or both); i.e.\ they either result in future operations being more costly (e.g.\ via ``blanking'' or ``tainting'') or are costly themselves, requiring $\Omega(T)$ communication for each user [Bienstock et al., TCC'20]. In this paper we propose CoCoA, a new scheme that allows for $T$ concurrent updates that are neither damaging nor costly. That is, they add no cost to future operations yet they only require $O(\log^2(n))$ communication per user. To circumvent the [Bienstock et al.] lower bound, CoCoA increases the number of rounds needed to complete all updates from $2$ up to (at most) $\log(n)$; though typically fewer rounds are needed. The key insight of our protocol is the following: in the (non-concurrent version of) TreeKEM, a delivery server which gets $T$ concurrent update requests will approve one and reject the remaining $T-1$. In contrast, our server attempts to apply all of them. If more than one user requests to rotate the same key during a round, the server arbitrarily picks a winner. Surprisingly, we prove that regardless of how the server chooses the winners, all previously compromised users will recover after at most $\log(n)$ such update rounds. To keep the communication complexity low, CoCoA is a server-aided CGKA. That is, the delivery server no longer blindly forwards packets, but instead actively computes individualized packets tailored to each user. As the server is untrusted, this change requires us to develop new mechanisms ensuring robustness of the protocol.
2022
EUROCRYPT
On the Concrete Security of TLS 1.3 PSK Mode 📺
The pre-shared key (PSK) handshake modes of TLS 1.3 allow for the performant, low-latency resumption of previous connections and are widely used on the Web and by resource-constrained devices, e.g., in the Internet of Things. Taking advantage of these performance benefits with optimal and theoretically-sound parameters requires tight security proofs. We give the first tight security proofs for the TLS 1.3 PSK handshake modes. Our main technical contribution is to address a gap in prior tight security proofs of TLS 1.3 which modeled either the entire key schedule or components thereof as independent random oracles to enable tight proof techniques. These approaches ignore existing interdependencies in TLS 1.3's key schedule, arising from the fact that the same cryptographic hash function is used in several components of the key schedule and the handshake more generally. We overcome this gap by proposing a new abstraction for the key schedule and carefully arguing its soundness via the indifferentiability framework. Interestingly, we observe that for one specific configuration, PSK-only mode with hash function SHA-384, it seems difficult to argue indifferentiability due to a lack of domain separation between the various hash function usages. We view this as an interesting insight for the design of protocols, such as future TLS versions. For all other configurations however, our proofs significantly tighten the security of the TLS 1.3 PSK modes, confirming standardized parameters (for which prior bounds provided subpar or even void guarantees) and enabling a theoretically-sound deployment.
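The interdependencies at issue stem from TLS 1.3 deriving all key-schedule secrets from a single hash function via HKDF. As background, here is a Python sketch of HKDF-Expand-Label as specified in RFC 8446 (the standard derivation step, not the paper's new key-schedule abstraction):

    import hmac, hashlib, struct

    def hkdf_expand(prk, info, length):
        # RFC 5869 HKDF-Expand, instantiated with SHA-256.
        out, t, i = b"", b"", 1
        while len(out) < length:
            t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
            out += t
            i += 1
        return out[:length]

    def hkdf_expand_label(secret, label, context, length):
        # HkdfLabel: uint16 length, "tls13 "-prefixed label, then context.
        full = b"tls13 " + label
        info = (struct.pack(">H", length) + bytes([len(full)]) + full
                + bytes([len(context)]) + context)
        return hkdf_expand(secret, info, length)

    secret = bytes(32)  # toy input secret, for illustration only
    print(hkdf_expand_label(secret, b"key", b"", 16).hex())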
2022
EUROCRYPT
Distributed (Correlation) Samplers: How to Remove a Trusted Dealer in One Round 📺
Structured random strings (SRSs) and correlated randomness are important for many cryptographic protocols. In settings where interaction is expensive, it is desirable to obtain such randomness in as few rounds of communication as possible; ideally, simply by exchanging one reusable round of messages which can be considered public keys. In this paper, we describe how to generate any SRS or correlated randomness in such a single round of communication, using, among other things, indistinguishability obfuscation. We introduce what we call a distributed sampler, which enables n parties to sample a single public value (SRS) from any distribution. We construct a semi-malicious distributed sampler in the plain model, and use it to build a semi-malicious public-key PCF (Boyle et al., FOCS 2020) in the plain model. A public-key PCF can be thought of as a distributed correlation sampler; instead of producing a public SRS, it gives each party a private random value (where the values satisfy some correlation). We introduce a general technique called an anti-rusher which compiles any one-round protocol with semi-malicious security without inputs to a similar one-round protocol with active security by making use of a programmable random oracle. This yields actively secure distributed samplers and public-key PCFs in the random oracle model. Finally, we explore some tradeoffs. Our first PCF construction is limited to reverse-sampleable correlations (where the random outputs of honest parties must be simulatable given the random outputs of corrupt parties); we additionally show a different construction without this limitation, but which does not allow parties to hold secret parameters of the correlation. We also describe how to avoid the use of a random oracle at the cost of relying on sub-exponentially secure indistinguishability obfuscation.
2022
EUROCRYPT
COA-Secure Obfuscation and Applications 📺
We put forth a new paradigm for program obfuscation, where obfuscated programs are endowed with proofs of ``well formedness.'' In addition to asserting existence of an underlying plaintext program with an attested structure, these proofs also prevent mauling attacks, whereby an adversary surreptitiously creates an obfuscated program based on secrets which are embedded in other obfuscated programs. We call this new guarantee Chosen Obfuscation Attacks (COA) security. We show how to enhance a large class of obfuscation mechanisms to be COA-secure, assuming subexponentially secure IO for circuits and subexponentially secure one-way functions. To demonstrate the power of the new notion, we also use it to realize: - A new form of software watermarking, which provides significantly broader protection than current schemes against counterfeits that pass a keyless, public verification process. - Completely CCA encryption, which is a strengthening of completely non-malleable encryption.
2022
EUROCRYPT
Unclonable Polymers and Their Cryptographic Applications 📺
We propose a mechanism for generating and manipulating protein polymers to obtain a new type of *consumable storage* that exhibits intriguing cryptographic "self-destruct" properties, assuming the hardness of certain polymer-sequencing problems. To demonstrate the cryptographic potential of this technology, we first develop a formalism that captures (in a minimalistic way) the functionality and security properties provided by the technology. Next, using this technology, we construct and prove security of two cryptographic applications that are currently obtainable only via trusted hardware that implements logical circuitry (either classical or quantum). The first application is a password-controlled *secure vault* where the stored data is irrecoverably erased once a threshold of unsuccessful access attempts is reached. The second is (a somewhat relaxed version of) *one time programs*, namely a device that allows evaluating a secret function only a limited number of times before self-destructing, where each evaluation is made on a fresh user-chosen input. Finally, while our constructions, modeling, and analysis are designed to capture the proposed polymer-based technology, they are sufficiently general to be of potential independent interest.
2022
EUROCRYPT
Asymptotically Quasi-Optimal Cryptography 📺
The question of minimizing the {\em computational overhead} of cryptography was put forward by the work of Ishai, Kushilevitz, Ostrovsky and Sahai (STOC 2008). The main conclusion was that, under plausible assumptions, most cryptographic primitives can be realized with {\em constant} computational overhead. However, this ignores an additive term that may depend polynomially on the (concrete) computational security parameter $\lambda$. In this work, we study the question of obtaining optimal efficiency, up to polylogarithmic factors, for {\em all} choices of $n$ and $\lambda$, where $n$ is the size of the given task. In particular, when $n=\lambda$, we would like the computational cost to be only $\tilde O(\lambda)$. We refer to this goal as {\em asymptotically quasi-optimal} (AQO) cryptography. We start by realizing the first AQO semi-honest batch oblivious linear evaluation (BOLE) protocol. Our protocol applies to OLE over small fields and relies on the near-exponential security of the ring learning with errors (RLWE) assumption. Building on the above and on known constructions of AQO PCPs, we design the first AQO zero-knowledge (ZK) argument system for Boolean circuit satisfiability. Our construction combines a new AQO ZK-PCP construction that respects the AQO property of the underlying PCP with a technique for converting statistical secrecy into soundness via OLE reversal. Finally, combining the above results, we get AQO secure computation protocols for Boolean circuits with security against malicious parties under RLWE.
2022
EUROCRYPT
Guaranteed Output in O(sqrt(n)) Rounds for Round-Robin Sampling Protocols 📺
We introduce a notion of round-robin secure sampling that captures several protocols in the literature, such as the "powers-of-tau" setup protocol for pairing-based polynomial commitments and zk-SNARKs, and certain verifiable mixnets. Due to their round-robin structure, protocols of this class inherently require n sequential broadcast rounds, where n is the number of participants. We describe how to compile them generically into protocols that require only O(sqrt(n)) broadcast rounds. Our compiled protocols guarantee output delivery against any dishonest majority. This stands in contrast to prior techniques, which require Omega(n) sequential broadcasts in most cases (and sometimes many more). Our compiled protocols permit a certain amount of adversarial bias in the output, as all sampling protocols with guaranteed output must, due to Cleve's impossibility result (STOC'86). We show that in the context of the aforementioned applications, this bias is harmless.
2022
EUROCRYPT
Constant-round Blind Classical Verification of Quantum Sampling 📺
In a recent breakthrough, Mahadev constructed a classical verification of quantum computation (CVQC) protocol for a classical client to delegate decision problems in BQP to an untrusted quantum prover under computational assumptions. In this work, we explore further the feasibility of CVQC with the more general sampling problems in BQP and with the desirable blindness property. We contribute affirmative solutions to both as follows. * Motivated by the sampling nature of many quantum applications (e.g., quantum algorithms for machine learning and quantum supremacy tasks), we initiate the study of CVQC for quantum sampling problems (denoted by SampBQP). More precisely, in a CVQC protocol for a SampBQP problem, the prover and the verifier are given an input $x \in \{0, 1\}^n$ and a quantum circuit C, and the goal of the classical client is to learn a sample from the output $z \leftarrow C(x)$ up to a small error, from its interaction with an untrusted prover. We demonstrate its feasibility by constructing a four-message CVQC protocol for SampBQP based on the quantum Learning With Errors assumption. * The blindness of CVQC protocols refers to a property of the protocol where the prover learns nothing, and hence is blind, about the client’s input. It is a highly desirable property that has been intensively studied for the delegation of quantum computation. We provide a simple yet powerful generic compiler that transforms any CVQC protocol to a blind one while preserving its completeness and soundness errors as well as the number of rounds. Applying our compiler to (a parallel repetition of) Mahadev’s CVQC protocol for BQP and our CVQC protocol for SampBQP yields the first constant-round blind CVQC protocol for BQP and SampBQP respectively, with negligible and inverse polynomial soundness errors respectively, and negligible completeness errors.
2022
EUROCRYPT
Optimal Broadcast Encryption and CP-ABE from Evasive Lattice Assumptions 📺
We present a new, simple candidate broadcast encryption scheme for N users with parameter size poly(log N). We prove security of our scheme under a non-standard variant of the LWE assumption where the distinguisher additionally receives short Gaussian pre-images while avoiding zeroizing attacks. This yields the first candidate optimal broadcast encryption that is plausibly post-quantum secure, and enjoys a security reduction to a simple assumption. As a secondary contribution, we present a candidate ciphertext-policy attribute-based encryption (CP-ABE) scheme for circuits of a-priori bounded polynomial depth where the parameter size is independent of the circuit size, and prove security under an additional non-standard assumption.
2022
PKC
Polynomial IOPs for Linear Algebra Relations 📺
This paper proposes new Polynomial IOPs for arithmetic circuits. They rely on the monomial coefficient basis to represent the matrices and vectors arising from the arithmetic constraint satisfaction system, and build on new protocols for establishing the correct computation of linear algebra relations such as matrix-vector products and Hadamard products. Our protocols give rise to concrete proof systems with succinct verification when compiled down with a cryptographic compiler whose role is abstracted away in this paper. Depending only on the compiler, the resulting SNARKs are either transparent or rely on a trusted setup.
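As a classical point of comparison (not the paper's Polynomial IOP machinery), Freivalds' algorithm shows the basic flavor of probabilistically verifying a linear-algebra relation with random challenges: checking a claimed matrix product costs O(n^2) per trial instead of O(n^3) for recomputation, with failure probability at most 1/p per trial over F_p.

    import random

    def freivalds(A, B, C, p, trials=20):
        n = len(A)
        for _ in range(trials):
            r = [random.randrange(p) for _ in range(n)]
            Br = [sum(B[i][j] * r[j] for j in range(n)) % p for i in range(n)]
            ABr = [sum(A[i][j] * Br[j] for j in range(n)) % p for i in range(n)]
            Cr = [sum(C[i][j] * r[j] for j in range(n)) % p for i in range(n)]
            if ABr != Cr:
                return False  # definitely a false claim
        return True  # correct except with negligible probability

    p = 2**31 - 1
    A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
    assert freivalds(A, B, [[19, 22], [43, 50]], p)
    assert not freivalds(A, B, [[19, 22], [43, 51]], p)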
2022
PKC
Efficient Lattice-Based Inner-Product Functional Encryption 📺
In recent years, many research lines on Functional Encryption (FE) have been suggested and studied, regarding functionality, security, or efficiency. Nevertheless, an open problem on a basic functionality, single-input inner-product FE (IPFE), remains: can IPFE be instantiated based on the Ring Learning With Errors (RLWE) assumption? The RLWE assumption provides quantum-resistant security, and in comparison with the LWE assumption it gives significant performance and compactness gains. In this paper we present the first RLWE-based IPFE scheme. We carefully choose strategies in the security proofs to optimize the size of parameters. More precisely, we develop two new results on ideal lattices. The first result is a variant of Ring-LWE, which we call multi-hint extended Ring-LWE, where some hints on the secret and the noise are given. We present a reduction from the RLWE problem to this variant. The second tool is a special form of the Leftover Hash Lemma (LHL) over rings, known as Ring-LHL. To demonstrate the efficiency of our scheme we provide an optimized implementation of the RLWE-based IPFE scheme and show its performance on a practical use case. We further present new compilers that, combined with some existing ones, can transform a single-input FE into its (identity-based, decentralized) multi-client variant with linear ciphertext size (w.r.t.\ the number of clients).
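To make the single-input IPFE functionality concrete, here is a toy Python implementation of the DDH-based scheme of Abdalla et al. (PKC 2015); it only pins down the Setup/Encrypt/KeyGen/Decrypt syntax that the paper instantiates from RLWE, and its parameters are deliberately tiny and insecure.

    import random

    p, q, g = 2879, 1439, 4  # safe prime p = 2q+1; g generates the order-q subgroup
    n = 3

    def setup():
        s = [random.randrange(q) for _ in range(n)]   # master secret key
        h = [pow(g, si, p) for si in s]               # master public key
        return s, h

    def encrypt(h, x):
        r = random.randrange(q)
        return pow(g, r, p), [pow(h[i], r, p) * pow(g, x[i], p) % p for i in range(n)]

    def keygen(s, y):
        return sum(si * yi for si, yi in zip(s, y)) % q   # sk_y = <s, y>

    def decrypt(ct0, cts, y, sk_y):
        D = 1
        for c, yi in zip(cts, y):
            D = D * pow(c, yi, p) % p
        D = D * pow(ct0, q - sk_y, p) % p    # cancel the g^{r<s,y>} mask
        for v in range(q):                   # brute-force dlog of the small result
            if pow(g, v, p) == D:
                return v

    s, h = setup()
    x, y = [3, 1, 4], [2, 0, 5]
    ct0, cts = encrypt(h, x)
    assert decrypt(ct0, cts, y, keygen(s, y)) == 26  # <x, y> = 26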
2022
PKC
A Note on the Post-Quantum Security of (Ring) Signatures 📺
This work revisits the security of classical signatures and ring signatures in a quantum world. For (ordinary) signatures, we focus on the arguably preferable security notion of {\em blind-unforgeability} recently proposed by Alagic et al.\ (Eurocrypt'20). We present two {\em short} signature schemes achieving this notion: one is in the quantum random oracle model, assuming quantum hardness of SIS; and the other is in the plain model, assuming quantum hardness of LWE with super-polynomial modulus. Prior to this work, the only known blind-unforgeable schemes were Lamport's one-time signature and the Winternitz one-time signature, and both of them are in the quantum random oracle model. For ring signatures, the recent work by Chatterjee et al.\ (Crypto'21) proposes a definition trying to capture adversaries with quantum access to the signer. However, it is unclear if their definition, when restricted to the classical world, is as strong as the standard security notion for ring signatures. They also present a construction that only {\em partially} achieves (even) this seemingly weak definition, in the sense that the adversary can only conduct superposition attacks over the messages, but not the rings. We propose a new definition that does not suffer from the above issue. Our definition is an analog of the blind-unforgeability notion in the ring signature setting. Moreover, assuming the quantum hardness of LWE, we construct a compiler converting any blind-unforgeable (ordinary) signature scheme to a ring signature scheme satisfying our definition.
2022
PKC
Multitarget decryption failure attacks and their application to Saber and Kyber 📺
Many lattice-based encryption schemes are subject to a very small probability of decryption failures. It has been shown that an adversary can efficiently recover the secret key using a number of ciphertexts that cause such decryption failures. At PKC 2019, D'Anvers et al. introduced `failure boosting', a technique to speed up the search for decryption failures. In this work we first improve the state-of-the-art multitarget failure boosting attacks. We then improve the cost calculation of failure boosting and extend the applicability of these calculations to permit cost calculations for real-world schemes. Using our newly developed methodologies we determine the multitarget decryption failure attack cost for all parameter sets of Saber and Kyber, showing, among other things, that the quantum security of Saber can theoretically be reduced from 172 bits to 145 bits in specific circumstances. We then discuss the applicability of decryption failure attacks in real-world scenarios, showing that such an attack might not be practical to execute.
2022
PKC
On the Isogeny Problem with Torsion Point Information 📺
It has recently been rigorously proven (and was previously known under certain heuristics) that the general supersingular isogeny problem reduces to the supersingular endomorphism ring computation problem. However, in order to attack SIDH-type schemes, one requires a particular isogeny which is usually not returned by the general reduction. At Asiacrypt 2016, Galbraith, Petit, Shani and Ti presented a polynomial-time reduction of the problem of finding the secret isogeny in SIDH to the problem of computing the endomorphism ring of a supersingular elliptic curve. Their method exploits the fact that secret isogenies in SIDH are of degree approximately $p^{1/2}$. The method does not extend to other SIDH-type schemes, where secret isogenies of larger degree are used and this condition is not fulfilled. We present a more general reduction algorithm that generalises to all SIDH-type schemes. The main idea of our algorithm is to exploit available torsion point images together with the KLPT algorithm to obtain a linear system of equations over a certain residue class ring. We show that this system will have a unique solution that can be lifted to the integers if some mild conditions on the parameters are satisfied. This lift then yields the secret isogeny. One consequence of this work is that the choice of the prime $p$ in \mbox{B-SIDH} is tight.
2022
PKC
Financially Backed Covert Security 📺
The security notion of covert security introduced by Aumann and Lindell (TCC'07) allows the adversary to successfully cheat and break security with a fixed probability 1-e, while with probability e, honest parties detect the cheating attempt. Asharov and Orlandi (ASIACRYPT'12) extend covert security to enable parties to create publicly verifiable evidence about misbehavior that can be transferred to any third party. This notion is called publicly verifiable covert security (PVC) and has been investigated by multiple works. While these two notions work well in settings with known identities in which parties care about their reputation, they fall short in Internet-like settings where there are only digital identities that can provide some form of anonymity. In this work, we propose the notion of financially backed covert security (FBC), which ensures that the adversary is financially punished if cheating is detected. Next, we present three transformations that turn PVC protocols into FBC protocols. Our protocols provide highly efficient judging, thereby enabling practical judge implementations via smart contracts deployed on a blockchain. In particular, the judge only needs to non-interactively validate a single protocol message while previous PVC protocols required the judge to emulate the whole protocol. Furthermore, by allowing an interactive punishment procedure, we can reduce the amount of validation to a single program instruction, e.g., a gate in a circuit. An interactive punishment, additionally, enables us to create financially backed covert secure protocols without any form of common public transcript, a property that has not been achieved by prior PVC protocols.
2022
PKC
Efficient Lattice-Based Blind Signatures via Gaussian One-Time Signatures 📺
Lattice-based blind signature schemes have been receiving renewed attention lately. Earlier efficient 3-round schemes (Asiacrypt 2010, Financial Cryptography 2020) were recently shown to have mistakes in their proofs, and fixing them turned out to be extremely inefficient and limited the number of signatures that a signer could send to less than a dozen (Crypto 2020). In this work we propose a round-optimal, 2-round lattice-based blind signature scheme which produces signatures of length 150KB. The running time of the signing protocol is linear in the maximum number of signatures that can be given out, and this limits the number of signatures that can be signed per public key. Nevertheless, the scheme is still quite efficient when the number of signatures is limited to a few tens of thousands, and appears to currently be the most efficient lattice-based candidate.
2022
PKC
Post-quantum Asynchronous Deniable Key Exchange and the Signal Handshake 📺
The key exchange protocol that establishes initial shared secrets in the handshake of the Signal end-to-end encrypted messaging protocol has several important characteristics: (1) it runs asynchronously (without both parties needing to be simultaneously online), (2) it provides implicit mutual authentication while retaining deniability (transcripts cannot be used to prove either party participated in the protocol), and (3) it retains security even if some keys are compromised (forward secrecy and beyond). All of these properties emerge from clever use of the highly flexible Diffie--Hellman protocol. While quantum-resistant key encapsulation mechanisms (KEMs) can replace Diffie--Hellman key exchange in some settings, there is no KEM-based replacement for the Signal handshake that achieves all three aforementioned properties, in part due to the inherent asymmetry of KEM operations. In this paper, we show how to construct asynchronous deniable key exchange by combining KEMs and designated verifier signature (DVS) schemes. There are several candidates for post-quantum DVS schemes, either direct constructions or via ring signatures. This yields a template for an efficient post-quantum realization of the Signal handshake with the same asynchronicity and security properties as the original Signal protocol.
2022
PKC
Towards a Simpler Lattice Gadget Toolkit 📺
As a building block, gadgets and the associated algorithms are widely used in advanced lattice cryptosystems. The gadget algorithms for power-of-base moduli are very efficient and simple; however, the current algorithms for arbitrary moduli are still complicated and noticeably more costly in practice, despite several efforts. Considering the necessity of arbitrary moduli, developing simpler and more practical gadget algorithms for arbitrary moduli is crucial to improving the practical performance of lattice-based applications. In this work, we propose two new gadget sampling algorithms for arbitrary moduli. Our first algorithm is for gadget Gaussian sampling. It is simple and efficient. One distinguishing feature of our Gaussian sampler is that it does not need floating-point arithmetic, which makes it better suited to constrained environments. Our second algorithm is for gadget subgaussian sampling. Compared with the existing algorithm, it is simpler, faster, and requires asymptotically less randomness. In addition, our subgaussian sampler achieves an almost equal quality for different practical parameters. Overall, these two algorithms provide simpler options for gadget algorithms and enhance the practicality of the gadget toolkit.
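For orientation, the deterministic core of the gadget toolkit is base-b digit decomposition with respect to the gadget vector g = (1, b, ..., b^{k-1}); the paper's contribution is randomized (Gaussian and subgaussian) versions of this operation for arbitrary moduli. A minimal deterministic sketch in Python:

    def gadget_decompose(x, b, k, q):
        """Write x (mod q) as k base-b digits, least significant first."""
        x %= q
        digits = []
        for _ in range(k):
            digits.append(x % b)
            x //= b
        return digits

    def gadget_recompose(digits, b, q):
        """Inner product with the gadget vector (1, b, b^2, ...)."""
        return sum(d * pow(b, i, q) for i, d in enumerate(digits)) % q

    q, b = 3329, 4    # toy modulus that is not a power of the base
    k = 6             # ceil(log_b(q)) digits suffice
    assert gadget_recompose(gadget_decompose(1234, b, k, q), b, q) == 1234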
2022
PKC
The Direction of Updatable Encryption Does Matter 📺
We introduce a new definition for key updates, called backward-leak uni-directional key updates, in updatable encryption (UE). This notion is a variant of uni-directional key updates for UE. We show that existing secure UE schemes in the bi-directional key updates setting are not secure in the backward-leak uni-directional key updates setting. Thus, security in the backward-leak uni-directional key updates setting is strictly stronger than security in the bi-directional key updates setting. This result is in sharp contrast to the equivalence theorem by Jiang (Asiacrypt 2020), which says security in the bi-directional key updates setting is equivalent to security in the existing uni-directional key updates setting. We call the existing uni-directional key updates ``forward-leak uni-directional'' key updates to distinguish two types of uni-directional key updates in this paper. We also present two UE schemes with the following features. - The first scheme is post-quantum secure in the backward-leak uni-directional key updates setting under the learning with errors assumption. - The second scheme is secure in the no-directional key updates setting and based on indistinguishability obfuscation and one-way functions. This result solves the open problem left by Jiang (Asiacrypt 2020).
2022
PKC
Leakage-Resilient IBE/ABE with Optimal Leakage Rates from Lattices 📺
We derive the first adaptively secure IBE and ABE for t-CNF, and selectively secure ABE for general circuits from lattices, with $1-o(1)$ leakage rates, in both the relative leakage model and the bounded retrieval model (BRM). To achieve this, we first identify a new fine-grained security notion for ABE -- partially adaptive/selective security -- and instantiate this notion from LWE. Then, by using this notion, we design a new key compressing mechanism for identity-based/attribute-based weak hash proof systems (IB/AB-wHPS) for various policy classes, achieving (1) succinct secret keys and (2) adaptive/selective security matching the existing non-leakage-resilient lattice-based designs. Using the existing connection between weak hash proof systems and leakage-resilient encryption, the succinct-key IB/AB-wHPS yields the desired leakage-resilient IBE/ABE schemes with the optimal leakage rates in the relative leakage model. Finally, by further improving the prior analysis of the compatible locally computable extractors, we achieve the optimal leakage rates in the BRM.
2022
PKC
Syndrome Decoding Estimator 📺
The selection of secure parameter sets requires an estimation of the attack cost to break the respective cryptographic scheme instantiated under these parameters. The current NIST standardization process for post-quantum schemes makes this an urgent task, especially considering the announcement to select final candidates by the end of 2021. For code-based schemes, recent estimates seemed to contradict the claimed security of most proposals, leading to a certain doubt about the correctness of those estimates. Furthermore, none of the available estimates includes the most recent algorithmic improvements on decoding linear codes, which are based on information set decoding (ISD) in combination with nearest neighbor search. In this work we observe that \emph{all} major ISD improvements are built on nearest neighbor search, explicitly or implicitly. This allows us to derive a framework from which we obtain \emph{practical} variants of all relevant ISD algorithms, including the most recent improvements. We derive formulas for the practical attack costs and make them available online in an easy-to-use estimator tool written in Python and C. Finally, we provide classical and quantum estimates for the bit security of all parameter sets of current code-based NIST proposals.
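To illustrate what such an estimator computes, the snippet below gives the textbook bit-cost of plain Prange information-set decoding (no nearest-neighbor refinements), evaluated on the smallest Classic McEliece parameter set; the paper's tool implements far more refined estimates for modern ISD variants, so treat this only as a back-of-the-envelope sketch.

    from math import comb, log2

    def prange_bit_cost(n, k, w):
        # Success iff all w errors avoid a random size-k information set.
        p_success = comb(n - k, w) / comb(n, w)
        gauss = (n - k) ** 2 * n   # rough Gaussian-elimination cost per iteration
        return log2(gauss / p_success)

    # mceliece348864: n = 3488, k = 2720, w = 64.
    print(round(prange_bit_cost(3488, 2720, 64), 1), "bits (plain Prange)")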
2022
PKC
On the Bottleneck Complexity of MPC with Correlated Randomness 📺
At ICALP 2018, Boyle et al. introduced the notion of the \emph{bottleneck complexity} of a secure multi-party computation (MPC) protocol. This measures the maximum communication complexity of any one party in the protocol, aiming to improve load-balancing among the parties. In this work, we study the bottleneck complexity of MPC in the preprocessing model, where parties are given correlated randomness ahead of time. We present two constructions of \emph{bottleneck-efficient} MPC protocols, whose bottleneck complexity is independent of the number of parties: 1. A protocol for computing abelian programs, based only on one-way functions. 2. A protocol for selection functions, based on any linearly homomorphic encryption scheme. Compared with previous bottleneck-efficient constructions, our protocols can be based on a wider range of assumptions, and avoid the use of fully homomorphic encryption.
2022
PKC
Traceable PRFs: Full Collusion Resistance and Active Security 📺
The main goal of traceable cryptography is to protect against unauthorized redistribution of cryptographic functionalities. Such schemes provide a way to embed identities (i.e., a "mark") within cryptographic objects (e.g., decryption keys in an encryption scheme, signing keys in a signature scheme). In turn, the tracing guarantee ensures that any "pirate device" that successfully replicates the underlying functionality can be successfully traced to the set of identities used to build the device. In this work, we study traceable pseudorandom functions (PRFs). As PRFs are the workhorses of symmetric cryptography, traceable PRFs are useful for augmenting symmetric cryptographic primitives with strong traceable security guarantees. However, existing constructions of traceable PRFs either rely on strong notions like indistinguishability obfuscation or satisfy weak security guarantees like single-key security (i.e., tracing only works against adversaries that possess a single marked key). In this work, we show how to use fingerprinting codes to upgrade a single-key traceable PRF into a fully collusion resistant traceable PRF, where security holds regardless of how many keys the adversary possesses. We additionally introduce a stronger notion of security where tracing security holds even against active adversaries that have oracle access to the tracing algorithm. In conjunction with known constructions of single-key traceable PRFs, we obtain the first fully collusion resistant traceable PRF from standard lattice assumptions. Our traceable PRFs directly imply new lattice-based secret-key traitor tracing schemes that are CCA-secure and where tracing security holds against active adversaries that have access to the tracing oracle.
2022
PKC
Post-Quantum Anonymous One-Sided Authenticated Key Exchange without Random Oracles 📺
Authenticated Key Exchange (AKE) is a cryptographic protocol to share a common session key among multiple parties. Usually, PKI-based AKE schemes are designed to guarantee secrecy of the session key and mutual authentication. However, in practice, there are many cases where mutual authentication is undesirable, such as in anonymous networks like Tor and Riffle, or difficult to achieve due to certificate management at the user level, as on the Internet. Goldberg et al. formulated a model of anonymous one-sided AKE which guarantees the anonymity of the client by allowing only the client to authenticate the server, and proposed a concrete scheme. However, existing anonymous one-sided AKE schemes are only known to be secure in the random oracle model. In this paper, we propose generic constructions of anonymous one-sided AKE in the random oracle model and in the standard model, respectively. Our constructions allow us to construct the first post-quantum anonymous one-sided AKE scheme from isogenies in the standard model.
2022
PKC
Efficient Verifiable Partially-Decryptable Commitments from Lattices and Applications 📺
We introduce verifiable partially-decryptable commitments (VPDC), as a building block for constructing efficient privacy-preserving protocols supporting auditability by a trusted party. A VPDC is an extension of a commitment along with an accompanying proof, convincing a verifier that (i) the given commitment is well-formed and (ii) a certain part of the committed message can be decrypted using a (secret) trapdoor known to a trusted party. We first formalize VPDCs and then introduce a general decryption feasibility result that overcomes the challenges in relaxed proofs arising in the lattice setting. Our general result can be applied to a wide class of Fiat-Shamir based protocols and may be of independent interest. Next, we show how to extend the commonly used lattice-based `Hashed-Message Commitment' (HMC) scheme into a succinct and efficient VPDC. In particular, we devise a novel `gadget'-based Regev-style (partial) decryption method, compatible with efficient relaxed lattice-based zero-knowledge proofs. We prove the soundness of our VPDC in the setting of adversarial proofs, where a prover tries to create a valid VPDC output that fails in decryption. To demonstrate the effectiveness of our results, we extend a private blockchain payment protocol, MatRiCT, by Esgin et al. (ACM CCS '19) into a formally auditable construction, which we call MatRiCT-Au, with very low communication and computation overheads over MatRiCT.
2022
PKC
On Pairing-Free Blind Signature Schemes in the Algebraic Group Model 📺
Studying the security and efficiency of blind signatures is an important goal for privacy sensitive applications. In particular, for large-scale settings (e.g., cryptocurrency tumblers), it is important for schemes to scale well with the number of users in the system. Unfortunately, all practical schemes either 1) rely on (very strong) number theoretic hardness assumptions and/or computationally expensive pairing operations over bilinear groups, or 2) support only a polylogarithmic number of concurrent (i.e., arbitrarily interleaved) signing sessions per public key. In this work, we revisit the security of two pairing-free blind signature schemes in the Algebraic Group Model (AGM) + Random Oracle Model (ROM). Concretely, 1. We consider the security of Abe's scheme (EUROCRYPT '01), which is known to have a flawed proof in the plain ROM. We adapt the scheme to allow a partially blind variant and give a proof of the new scheme under the discrete logarithm assumption in the AGM+ROM, even for (polynomially many) concurrent signing sessions. 2. We then prove that the popular blind Schnorr scheme is secure under the one-more discrete logarithm assumption if the signatures are issued sequentially. While the work of Fuchsbauer et al. (EUROCRYPT '20) proves the security of the blind Schnorr scheme for concurrent signing sessions in the AGM+ROM, its underlying assumption, ROS, is proven false by Benhamouda et al. (EUROCRYPT '21) when more than polylogarithmically many signatures are issued. Given the recent progress, we present the first security analysis of the blind Schnorr scheme in the slightly weaker sequential setting. We also show that our security proof reduces from the weakest possible assumption, with respect to known reduction techniques.
2022
PKC
A New Security Notion for PKC in the Standard Model: Weaker, Simpler, and Still Realizing Secure Channels 📺
Encryption satisfying CCA2 security is commonly known to be unnecessarily strong for realizing secure channels. Moreover, CCA2 constructions in the standard model are far from being competitive practical alternatives to constructions via random oracle. A promising research area to alleviate this problem is weaker security notions—like IND-RCCA secure encryption or IND-atag-wCCA secure tag-based encryption—which are still able to facilitate secure message transfer (SMT) via authenticated channels. In this paper we introduce the concept of sender-binding encryption (SBE), unifying prior approaches to SMT construction in the universal composability (UC) model. We furthermore develop the corresponding non-trivial security notion of IND-SB-CPA and formally prove that it suffices for realizing SMT in conjunction with authenticated channels. Our notion is the weakest so far in the sense that it can be generically constructed from the weakest prior notions—RCCA and atag-wCCA—without additional assumptions, while the reverse is not true. A direct consequence is that IND-stag-wCCA, which is strictly weaker than IND-atag-wCCA but stronger than our IND-SB-CPA, can be used to construct a secure channel. Finally, we give an efficient IND-SB-CPA secure construction in the standard model from IND-CPA secure double receiver encryption (DRE) based on McEliece. This shows that IND-SB-CPA security yields simpler and more efficient constructions in the standard model than the weakest prior notions, i.e., IND-atag-wCCA and IND-stag-wCCA.
2022
PKC
Post-quantum Security of Plain OAEP Transform 📺
In this paper, we show that the OAEP transform is indistinguishable under chosen-ciphertext attack in the quantum random oracle model if the underlying trapdoor permutation is quantum partial-domain one-way. The existing post-quantum security proof for OAEP (TCC 2016-B) requires a modification to the OAEP transform using an extra hash function. We prove the security of the OAEP transform without any modification, which affirmatively answers an open question posed in the NTRU submission, one of the finalists of the NIST competition.
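For reference, plain OAEP is a two-round Feistel network over the message block and the randomness, using two random oracles G and H; the Python sketch below instantiates both oracles from SHA-256 purely for illustration and omits the trapdoor permutation f that would be applied to s || t.

    import hashlib, os

    def expand(tag, seed, nbytes):
        # A simple hash-based expander standing in for a random oracle.
        out, ctr = b"", 0
        while len(out) < nbytes:
            out += hashlib.sha256(tag + seed + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:nbytes]

    def G(r, nbytes): return expand(b"G", r, nbytes)
    def H(s, nbytes): return expand(b"H", s, nbytes)

    def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))

    K0, K1, MLEN = 16, 16, 32   # randomness, zero padding, message (bytes)

    def oaep_pad(m, r):
        s = xor(m + b"\x00" * K1, G(r, MLEN + K1))
        t = xor(r, H(s, K0))
        return s, t              # a trapdoor permutation f would act on s || t

    def oaep_unpad(s, t):
        r = xor(t, H(s, K0))
        padded = xor(s, G(r, MLEN + K1))
        assert padded.endswith(b"\x00" * K1), "invalid padding"
        return padded[:MLEN]

    m, r = os.urandom(MLEN), os.urandom(K0)
    assert oaep_unpad(*oaep_pad(m, r)) == m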
2022
PKC
Rational Modular Encoding in the DCR Setting: Non-Interactive Range Proofs and Paillier-Based Naor-Yung in the Standard Model 📺
Range proofs allow a sender to convince a verifier that committed integers belong to an interval without revealing anything else. So far, all known non-interactive range proofs in the standard model rely on groups endowed with a bilinear map. Moreover, they either require the group order to be larger than the range of any proven statement or they suffer from a wasteful rate. Recently (Eurocrypt'21), Couteau et al. introduced a new approach to efficiently prove range membership by encoding integers as a modular ratio between small integers. We show that their technique can be transposed to the standard model under the Decisional Composite Residuosity (DCR) assumption. Interestingly, with this modification, the size of ranges is not a priori restricted by the common reference string. It also gives a constant ratio between the size of ranges and the size of proofs. Moreover, we show that their technique of encoding messages as bounded rationals provides a secure standard-model instantiation of the Naor-Yung CCA2 encryption paradigm under the DCR assumption. Keywords: range proofs, NIZK, standard model, Naor-Yung.
2022
PKC
Lockable Obfuscation from Circularly Insecure Fully Homomorphic Encryption 📺
In a lockable obfuscation scheme, a party called the obfuscator takes as input a circuit C, a lock value y, and a message m, and outputs an obfuscated circuit. Given the obfuscated circuit, an evaluator can run it on an input x and learn the message if C(x) = y. For security, we require that the obfuscation reveals no information on the circuit as long as the lock y has high entropy, even given the circuit C. The only known constructions of lockable obfuscation schemes require indistinguishability obfuscation (iO) or the learning with errors (LWE) assumption. Furthermore, in terms of technique, all known constructions, excluding the iO-based ones, are built from provably secure variations of graph-induced multilinear maps. We show a generic construction of a lockable obfuscation scheme built from a (leveled) fully homomorphic encryption scheme that is circularly insecure. Specifically, we need a fully homomorphic encryption scheme that is secure under chosen-plaintext attack (IND-CPA) but for which there is an efficient cycle tester that can detect encrypted key cycles. Our finding sheds new light on how to construct lockable obfuscation schemes and shows why cycle tester constructions were helpful in the design of lockable obfuscation schemes. One of the many use cases for lockable obfuscation schemes is the construction of IND-CPA secure but circularly insecure encryption schemes. Our work shows that there is a connection in both directions between circularly insecure encryption and lockable obfuscation.
2022
PKC
Improved Constructions of Anonymous Credentials From Structure-Preserving Signatures on Equivalence Classes 📺
Anonymous attribute-based credentials (ABCs) are a powerful tool allowing users to authenticate while maintaining privacy. When instantiated from structure-preserving signatures on equivalence classes (SPS-EQ) we obtain a controlled form of malleability, and hence increased functionality and privacy for the user. Existing constructions consider equivalence classes on the message space, allowing the joint randomization of credentials and the corresponding signatures on them. In this work, we additionally consider equivalence classes on the signing-key space. In this regard, we obtain a \emph{signer hiding} notion, where the issuing organization is not revealed when a user shows a credential. To achieve this, we instantiate the ABC framework of Fuchsbauer, Hanser, and Slamanig (FHS, Journal of Cryptology '19) with a recent SPS-EQ scheme (ASIACRYPT '19) modified to support a fully adaptive NIZK from the framework of Couteau and Hartmann (CRYPTO '20). We also show how to obtain Mercurial Signatures (CT-RSA, 2019), extending the application of our construction to anonymous delegatable credentials. To further increase functionality and efficiency, we augment the set-commitment scheme of FHS19 to support openings on attribute sets disjoint from those possessed by the user, while integrating a proof of exponentiation to allow for a more efficient verifier. Instantiating in the CRS model, we obtain an efficient credential system, anonymous under malicious organization keys, with increased expressiveness and privacy, proven secure in the standard model.
2022
PKC
Reusable Two-Round MPC from LPN 📺
We present a new construction of maliciously-secure, two-round multiparty computation (MPC) in the CRS model, where the first message is reusable an unbounded number of times. The security of the protocol relies on the Learning Parity with Noise (LPN) assumption with inverse polynomial noise rate $1/n^{1-\epsilon}$ for small enough constant $\epsilon$, where $n$ is the LPN dimension. Prior works on reusable two-round MPC required assumptions such as DDH or LWE that imply some flavor of homomorphic computation. We obtain our result in two steps: - In the first step, we construct a two-round MPC protocol in the {\it silent pre-processing model} (Boyle et al., Crypto 2019). Specifically, the parties engage in a computationally inexpensive setup procedure that generates some correlated random strings. Then, the parties commit to their inputs. Finally, each party sends a message depending on the function to be computed, and these messages can be decoded to obtain the output. Crucially, the complexity of the pre-processing phase and the input commitment phase do not grow with the size of the circuit to be computed. We call this {\it multiparty silent NISC} (msNISC), generalizing the notion of two-party silent NISC of Boyle et al. (CCS 2019). We provide a construction of msNISC from LPN in the random oracle model. - In the second step, we give a transformation that removes the pre-processing phase and use of random oracle from the previous protocol. This transformation additionally adds (unbounded) reusability of the first round message, giving the first construction of reusable two-round MPC from the LPN assumption. This step makes novel use of randomized encoding of circuits (Applebaum et al., FOCS 2004) and a variant of the ``tree of MPC messages" technique of Ananth et al. and Bartusek et al. (TCC 2020).
2022
PKC
Radical Isogenies on Montgomery Curves 📺
We work on some open problems in radical isogenies. Radical isogenies are formulas to compute chains of N-isogenies for small N, proposed by Castryck, Decru, and Vercauteren at Asiacrypt 2020. These formulas do not require generating a point of order N spanning the kernel, and they accelerate some isogeny-based cryptosystems like CSIDH. On the other hand, since these formulas use Tate normal forms, they require transforming Tate normal forms into curves with efficient arithmetic, e.g., Montgomery curves. In this paper, we propose radical-isogeny formulas of degrees 3 and 4 on Montgomery curves. Our formulas compute some values determining Montgomery curves, from which one can efficiently recover the Montgomery coefficients, and they are more efficient for some cryptosystems than the original radical isogenies. In addition, we prove a conjecture left open by Castryck et al. that relates to radical isogenies of degree 4.
2022
PKC
Lattice-based Signatures with Tight Adaptive Corruptions and More 📺
We construct the first tightly secure signature schemes in the multi-user setting with adaptive corruptions from lattices. In stark contrast to the previous tight constructions whose security is solely based on number-theoretic assumptions, our schemes are based on the Learning with Errors (LWE) assumption which is supposed to be post-quantum secure. The security of our scheme is independent of the numbers of users and signing queries, and it is in the non-programmable random oracle model. Our LWE-based scheme is compact, namely, its signatures contain only a constant number of lattice vectors. At the core of our construction are a new abstraction of the existing lossy identification (ID) schemes using dual-mode commitment schemes and a refinement of the framework by Diemert et al. (PKC 2021) which transforms a lossy ID scheme to a signature using sequential OR proofs. In combination, we obtain a tight generic construction of signatures from dual-mode commitments in the multi-user setting. Improving the work of Diemert et al., our new approach can be instantiated using not only the LWE assumption, but also an isogeny-based assumption. We stress that our LWE-based lossy ID scheme in the intermediate step uses a conceptually different idea than the previous lattice-based ones. Of independent interest, we formally rule out the possibility that the aforementioned ``ID-to-Signature'' methodology can work tightly using parallel OR proofs. In addition to the results of Fischlin et al. (EUROCRYPT 2020), our impossibility result shows a qualitative difference between both forms of OR proofs in terms of tightness.
2022
PKC
Logarithmic-Size (Linkable) Threshold Ring Signatures in the Plain Model 📺
A $1$-out-of-$N$ ring signature scheme, introduced by Rivest, Shamir, and Tauman-Kalai (ASIACRYPT '01), allows a signer to sign a message as part of a set of size $N$ (the so-called ``ring'') whose members are anonymous to any verifier, including other members of the ring. Threshold ring (or ``thring'') signatures generalize ring signatures to $t$-out-of-$N$ parties, with $t \geq 1$, who anonymously sign messages and show that they are distinct signers (Bresson et al., CRYPTO'02). Until recently, there was no construction of ring signatures that both $(i)$ had logarithmic signature size in $N$, and $(ii)$ was secure in the plain model. The work of Backes et al. (EUROCRYPT'19) resolved both these issues. However, threshold ring signatures have their own particular problem: with a threshold $t \geq 1$, signers must often reveal their identities to the other signers as part of the signing process. This is an issue in situations where a ring member has something controversial to sign; he may feel uncomfortable requesting that other members join the threshold, as this reveals his identity. Building on the Backes et al. template, in this work we present the first construction of a thring signature that is logarithmic-sized in $N$, in the plain model, and does not require signers to interact with each other to produce the thring signature. We also present a linkable counterpart to our construction, which supports a fine-grained control of linkability. Moreover, our thring signatures can easily be adapted to achieve the recent notions of claimability and repudiability (Park and Sealfon, CRYPTO'19).
2022
PKC
Low-Communication Multiparty Triple Generation for SPDZ from Ring-LPN 📺
The SPDZ protocol for multi-party computation relies on a correlated randomness setup consisting of authenticated multiplication triples. A recent line of work by Boyle et al. (Crypto 2019, Crypto 2020) has investigated the possibility of producing this correlated randomness in a \emph{silent preprocessing} phase, which involves a ``small'' setup protocol with less communication than the total size of the triples being produced. These works do this using a tool called a \emph{pseudorandom correlation generator} (PCG), which allows a large batch of correlated randomness to be compressed into a set of smaller, correlated seeds. However, existing methods for compressing SPDZ triples only apply to the 2-party setting. In this work, we construct a PCG for producing SPDZ triples over large prime fields in the multi-party setting. The security of our PCG is based on the ring-LPN assumption over fields, similar to the work of Boyle et al. (Crypto 2020) in the 2-party setting. We also present a corresponding, actively secure setup protocol, which can be used to generate the PCG seeds and instantiate SPDZ with a silent preprocessing phase. As a building block, which may be of independent interest, we construct a new type of 3-party distributed point function supporting outputs over arbitrary groups (including large prime order), as well as an efficient protocol for setting up our DPF keys with active security.
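To fix ideas, SPDZ-style authenticated sharing maintains, for every value x, additive shares of both x and alpha*x under a global MAC key alpha. The Python sketch below shows only this invariant, with a simplified check that reveals alpha; the real protocol verifies MACs without exposing alpha, and the point of the paper is producing such authenticated triples with little communication.

    import random

    p = 2**61 - 1
    alpha = random.randrange(p)   # global MAC key (itself secret-shared in SPDZ)

    def split(v, parties=3):
        shares = [random.randrange(p) for _ in range(parties - 1)]
        return shares + [(v - sum(shares)) % p]

    def auth_share(x):
        return split(x), split(alpha * x % p)

    def open_and_check(x_shares, mac_shares):
        x = sum(x_shares) % p
        assert sum(mac_shares) % p == alpha * x % p, "MAC check failed"
        return x

    xs, ms = auth_share(42)
    assert open_and_check(xs, ms) == 42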
2022
PKC
CNF-FSS and its Applications 📺
Function Secret Sharing (FSS), introduced by Boyle, Gilboa and Ishai~\cite{BGI15}, extends the classical notion of secret-sharing a \textit{value} to secret sharing a \textit{function}. Namely, for a secret function $f$ (from a class $\cal F$), FSS provides a sharing of $f$ whereby {\em succinct} shares (``keys'') are distributed to a set of parties, so that later the parties can non-interactively compute an additive sharing of $f(x)$, for any input $x$ in the domain of $f$. Previous work on FSS concentrated mostly on the two-party case, where highly efficient schemes are obtained for some simple, yet extremely useful, classes $\cal F$ (in particular, FSS for the class of point functions, a task referred to as DPF~--~Distributed Point Functions~\cite{GI14,BGI15}). In this paper, we concentrate on the multi-party case, with $p\ge 3$ parties and $t$-security ($1\le t<p$). First, we introduce the notion of \textsf{CNF-DPF} (or, more generally, \textsf{CNF-FSS}), where the scheme uses the CNF version of secret sharing (rather than additive sharing) to share each value $f(x)$. We then demonstrate the utility of CNF-DPF by providing several applications. Our main result shows how CNF-DPF can be used to achieve substantial asymptotic improvement in communication complexity when using it as a building block for constructing {\em standard} $(t,p)$-DPF protocols that tolerate $t>1$ (semi-honest) corruptions (of the $p$ parties). For example, we build a 2-out-of-5 secure (standard) DPF scheme of communication complexity $O(N^{1/4})$, where $N$ is the domain size of $f$ (compared with the current best-known of $O(N^{1/2})$ for $(2,5)$-DPF). More generally, with $p>dt$ parties, we give a $(t,p)$-DPF whose communication grows as $O(N^{1/2d})$ (rather than $O(\sqrt{N})$ that follows from the $(p-1,p)$-DPF scheme of \cite{BGI15}). We also present a 1-out-of-3 secure CNF-DPF scheme, in which each party holds two of the three keys, with poly-logarithmic communication complexity. These results have immediate implications to scenarios where (multi-server) DPF was shown to be applicable. For example, we show how to use such a scheme to obtain asymptotic improvement ($O(\log^2N)$ versus $O(\sqrt{N})$) in communication complexity over the 3-party protocol of~\cite{BKKO20}.
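To pin down what a DPF must achieve, here is the trivial construction with O(N)-size keys in Python: additively secret-share the truth table of the point function f_{alpha,beta}. Correctness is immediate; the entire challenge addressed above is obtaining the same functionality with succinct keys (and, for CNF-FSS, with CNF-style shares).

    import random

    P = 2**31 - 1   # a prime modulus for the additive output shares

    def trivial_dpf_gen(alpha, beta, N):
        table = [beta if i == alpha else 0 for i in range(N)]
        key0 = [random.randrange(P) for _ in range(N)]
        key1 = [(t - k) % P for t, k in zip(table, key0)]
        return key0, key1

    def dpf_eval(key, x):
        return key[x]

    N, alpha, beta = 64, 13, 7
    k0, k1 = trivial_dpf_gen(alpha, beta, N)
    assert all((dpf_eval(k0, x) + dpf_eval(k1, x)) % P == (beta if x == alpha else 0)
               for x in range(N))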
2022
PKC
Count Me In! Extendablity for Threshold Ring Signatures 📺
Ring signatures enable a signer to sign a message on behalf of a group anonymously, without revealing her identity. Similarly, threshold ring signatures allow several signers to sign the same message on behalf of a group; while the combined signature reveals that some threshold t of group members signed the message, it does not leak anything else about the signers’ identities. Anonymity is a central feature in threshold ring signature applications, such as whistleblowing, e-voting and privacy-preserving cryptocurrencies: it is often crucial for signers to remain anonymous even from their fellow signers. When the generation of a signature requires interaction, this is difficult to achieve. There exist threshold ring signatures with non-interactive signing — where signers locally produce partial signatures which can then be aggregated — but a limitation of existing threshold ring signature constructions is that all of the signers must agree on the group on whose behalf they are signing, which implicitly assumes some coordination amongst them. The need to agree on a group before generating a signature also prevents others — from outside that group — from endorsing a message by adding their signature to the statement post-factum. We overcome this limitation by introducing extendability for ring signatures, same-message linkable ring signatures, and threshold ring signatures. Extendability allows an untrusted third party to take a signature and extend it by enlarging the anonymity set. In the extendable threshold ring signature, two signatures on the same message which have been extended to the same anonymity set can then be combined into one signature with a higher threshold. This enhances signers’ anonymity, and enables new signers to anonymously support a statement already made by others. For each of those primitives, we formalize the syntax and provide a meaningful security model which includes different flavors of anonymous extendability. In addition, we present concrete realizations of each primitive and formally prove their security relying on signatures of knowledge and the hardness of the discrete logarithm problem. We also describe a generic transformation to obtain extendable threshold ring signatures from same-message-linkable extendable ring signatures. Finally, we implement and benchmark our constructions.
2022
PKC
Encapsulated Search Index : Public-Key, Sub-linear, Distributed, and Delegatable 📺
We build the first *sub-linear* (in fact, potentially constant-time) *public-key* searchable encryption system:
- server can publish a public key $PK$.
- anybody can build an encrypted index for document $D$ under $PK$.
- client holding the index can obtain a token $z_w$ from the server to check if a keyword $w$ belongs to $D$.
- search using $z_w$ is almost as fast (e.g., sub-linear) as the non-private search.
- server granting the token does not learn anything about the document $D$, beyond the keyword $w$.
- yet, the token $z_w$ is specific to the pair $(D,w)$: the client does not learn if other keywords $w'\neq w$ belong to $D$, or if $w$ belongs to other, freshly indexed documents $D'$.
- server cannot fool the client by giving a wrong token $z_w$.
We call such a primitive *encapsulated search index* (ESI). Our ESI scheme can be made $(t,n)$-distributed among $n$ servers in the best possible way: *non-interactive*, verifiable, and resilient to any coalition of up to $(t-1)$ malicious servers. We also introduce the notion of *delegatable* ESI and show how to extend our construction to this setting. Our solution --- including public indexing, sub-linear search, delegation, and distributed token generation --- is deployed as a commercial application by a real-world company.
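The bullet points above pin down a small API surface. The following Python skeleton is merely our reading of that interface; the method names and signatures are hypothetical, not taken from the paper, and the bodies are intentionally left abstract.

```python
from abc import ABC, abstractmethod

class EncapsulatedSearchIndex(ABC):
    """Interface sketch distilled from the properties listed above.
    Names and signatures are illustrative guesses, not the paper's API."""

    @abstractmethod
    def keygen(self):
        """Server: generate (PK, SK) and publish PK."""

    @abstractmethod
    def build_index(self, pk, document):
        """Anyone: build an encrypted index for document D under PK."""

    @abstractmethod
    def grant_token(self, sk, keyword):
        """Server: issue z_w for keyword w, learning nothing about D;
        tokens must be verifiable so a wrong z_w is caught."""

    @abstractmethod
    def search(self, index, token):
        """Client: decide whether w is in D in sub-linear time, learning
        nothing about other keywords or other freshly indexed documents."""
```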
2022
PKC
KDM Security for the Fujisaki-Okamoto Transformations in the QROM 📺
Key dependent message (KDM) security is a security notion that guarantees confidentiality of communication even if secret keys are encrypted. KDM security has found a number of applications in practical situations such as hard-disk encryption systems, anonymous credentials, and bootstrapping of fully homomorphic encryptions. Recently, it also found an application in quantum delegation protocols as shown by Zhang (TCC 2019). In this work, we investigate the KDM security of existing practical public-key encryption (PKE) schemes proposed in the quantum random oracle model (QROM). Concretely, we study a PKE scheme whose KEM is constructed by using Fujisaki-Okamoto (FO) transformations in the QROM. FO transformations are applied to an IND-CPA secure PKE scheme and yield IND-CCA secure key encapsulation mechanisms (KEMs). Then, we show the following results.
- We can reduce the KDM-CPA security in the QROM of a PKE scheme whose KEM is derived from any of the FO transformations proposed by Hofheinz et al. (TCC 2017) to the IND-CPA security of the underlying PKE scheme, without square root security loss. For this result we use a one-time pad (OTP) as the DEM to convert the KEM into a PKE.
- We can reduce the KDM-CCA security in the QROM of a PKE scheme whose KEM is derived from a single variant of the FO transformation proposed by Hofheinz et al. (TCC 2017) to the IND-CPA security of the underlying PKE scheme, without square root security loss. For this result, we use an OTP-then-MAC construction as the DEM to convert the KEM into a PKE. Also, we require a mild injectivity assumption for the underlying IND-CPA secure PKE scheme.
In order to avoid square root security loss, we use a double-sided one-way to hiding (O2H) lemma proposed by Kuchta et al. (EUROCRYPT 2020). In the context of KDM security, there is a technical hurdle for using the double-sided O2H lemma due to the circularity issue. Our main technical contribution is to overcome the hurdle.
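For readers who want the FO mechanics in front of them, here is a toy Python sketch of the derandomize-and-reencrypt pattern that the transformations share, instantiated over textbook ElGamal purely for concreteness. The group parameters, hash-based G and H, and all sizes are illustrative stand-ins chosen by us; this is not one of the QROM-analyzed transforms of Hofheinz et al., only the common skeleton.

```python
import hashlib, secrets

P = 2**127 - 1   # Mersenne prime; toy group Z_P^* (sizes are not secure)
G = 3            # fixed base element

def _h(tag, *vals):
    data = tag + b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def keygen():
    x = secrets.randbelow(P - 2) + 1
    return pow(G, x, P), x                     # (pk, sk)

def encaps(pk):
    m = secrets.randbelow(P - 1) + 1           # random message
    r = _h(b"G", m) % (P - 1)                  # FO: randomness derived from m
    c = (pow(G, r, P), m * pow(pk, r, P) % P)  # derandomized ElGamal
    return c, _h(b"H", m, *c)                  # session key K = H(m, c)

def decaps(sk, c):
    c1, c2 = c
    m = c2 * pow(pow(c1, sk, P), -1, P) % P    # decrypt
    r = _h(b"G", m) % (P - 1)
    pk = pow(G, sk, P)
    if (pow(G, r, P), m * pow(pk, r, P) % P) != c:
        return None                            # re-encryption check (CCA guard)
    return _h(b"H", m, *c)

pk, sk = keygen()
c, K = encaps(pk)
assert decaps(sk, c) == K
```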
2022
PKC
Making Private Function Evaluation Safer, Faster, and Simpler 📺
In the problem of two-party \emph{private function evaluation} (PFE), one party $P_A$ holds a \emph{private function} $f$ and (optionally) a private input $x_A$, while the other party $P_B$ possesses a private input $x_B$. Their goal is to evaluate $f$ on $x_A$ and $x_B$, and one or both parties may obtain the evaluation result $f(x_A, x_B)$ while no other information beyond $f(x_A, x_B)$ is revealed. In this paper, we revisit the two-party PFE problem and provide several enhancements. We propose the \emph{first} constant-round actively secure PFE protocol with linear complexity. Based on this result, we further provide the \emph{first} constant-round publicly verifiable covert (PVC) secure PFE protocol with linear complexity to gain better efficiency. For instance, when the deterrence factor is $\epsilon = 1/2$, compared to the passively secure protocol, its communication cost is very close and its computation cost is around $2.6\times$. In our constructions, as a by-product, we design a specific protocol for proving that a list of ElGamal ciphertexts is derived from an \emph{extended permutation} performed on a given list of elements. It should be noted that this protocol greatly improves the previous result and may be of independent interest. In addition, a reusability property is added to our two PFE protocols. Namely, if the same function $f$ is involved in multiple executions of the protocol between $P_A$ and $P_B$, then the protocol could be executed more efficiently from the second execution onwards. Moreover, we further extend this property to be \emph{global}, such that it supports multiple executions for the same $f$ in a reusable fashion between $P_A$ and \emph{arbitrary} parties playing the role of $P_B$.
2022
PKC
Lifting Standard Model Reductions to Common Setup Assumptions 📺
In this paper we show that standard model black-box reductions naturally lift to various setup assumptions, such as the random oracle (ROM) or ideal cipher model. Concretely, we prove that a black-box reduction from a security notion $P$ to security notion $Q$ in the standard model can be turned into a non-programmable black-box reduction from $P_{\mathcal{O}}$ to $Q_{\mathcal{O}}$ in a model with a setup assumption $\mathcal{O}$, where $P_{\mathcal{O}}$ and $Q_{\mathcal{O}}$ are the natural extensions of $P$ and $Q$ to a model with a setup assumption $\mathcal{O}$. Our results rely on a generalization of the recent framework by Hofheinz and Nguyen (PKC 2019) to support primitives which make use of a trusted setup. Our framework encompasses standard idealized settings like the random oracle and the ideal cipher model. At the core of our main result lie novel properties of negligible functions that can be of independent interest.
2022
PKC
ECLIPSE: Enhanced Compiling method for Pedersen-committed zkSNARK Engines 📺
We advance the state-of-the-art for zero-knowledge commit-and-prove SNARKs (CP-SNARKs). CP-SNARKs are an important class of SNARKs which, using commitments as ``glue'', allow one to efficiently combine proof systems---e.g., general-purpose SNARKs (an efficient way to prove statements about circuits) and $\Sigma$-protocols (an efficient way to prove statements about group operations). Thus, CP-SNARKs allow one to efficiently provide zero-knowledge proofs for composite statements such as $h=H(g^{x})$ for some hash-function $H$. Our main contribution is providing the first construction of CP-SNARKs where the proof size is succinct in the number of commitments. We achieve our result by providing a general technique to compile Algebraic Holographic Proofs (AHP) (an underlying abstraction used in many modern SNARKs) with special ``decomposition'' properties into an efficient CP-SNARK. We then show that some of the most efficient AHP constructions---Marlin, PLONK, and Sonic---satisfy our compilation requirements. Our resulting SNARKs achieve universal and updatable reference strings, which are highly desirable features as they greatly reduce the trust needed in the SNARK setup phase.
2022
PKC
Time-Memory tradeoffs for large-weight syndrome decoding in ternary codes 📺
We propose new algorithms for solving a class of large-weight syndrome decoding problems in random ternary codes. This is the main generic problem underlying the security of the recent Wave signature scheme (Debris-Alazard et al., 2019), and it has so far received limited attention. At SAC 2019 Bricout et al. proposed a reduction to a binary subset sum problem requiring many solutions, and used it to obtain the fastest known algorithm. However —as is often the case in the coding theory literature— its memory cost is proportional to its time cost, which makes it unattractive in most applications. In this work we propose a range of memory-efficient algorithms for this problem, which describe a near-continuous time-memory tradeoff curve. Those are obtained by using the same reduction as Bricout et al. and carefully instantiating the derived subset sum problem with exhaustive-search algorithms from the literature, in particular dissection (Dinur et al., 2012) and dissection in tree (Dinur, 2019). We also spend significant effort adapting those algorithms to decrease their granularity, thereby allowing them to be smoothly used in a syndrome decoding context when not all the solutions to the subset sum problem are required. For a proposed parameter set for Wave, one of our best instantiations is estimated to cost $2^{177}$ bit operations and to require $2^{88.5}$ bits of storage, while we estimate this to be $2^{152}$ and $2^{144}$ for the best algorithm from Bricout et al.
2022
PKC
Two-Round Oblivious Linear Evaluation from Learning with Errors 📺
Oblivious Linear Evaluation (OLE) is the arithmetic analogue of the well-known oblivious transfer primitive. It allows a sender, holding an affine function $f(x)=a+bx$ over a finite field or ring, to let a receiver learn $f(w)$ for a $w$ of the receiver's choice. In terms of security, the sender remains oblivious of the receiver's input $w$, whereas the receiver learns nothing beyond $f(w)$ about $f$. In recent years, OLE has emerged as an essential building block to construct efficient, reusable and maliciously-secure two-party computation. In this work, we present efficient two-round protocols for OLE over large fields based on the Learning with Errors (LWE) assumption, providing a full arithmetic generalization of the oblivious transfer protocol of Peikert, Vaikuntanathan and Waters (CRYPTO 2008). At the technical core of our work is a novel extraction technique which allows us to determine if a non-trivial multiple of some vector is close to a $q$-ary lattice.
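To see why OLE is such a convenient building block for two-party computation, note that a single call already yields an additive sharing of a product: the sender masks with a random a. A minimal Python illustration, with a trusted function standing in for the paper's actual two-round LWE-based protocol:

```python
import secrets

Q = 2**61 - 1  # field modulus (illustrative)

def ideal_ole(a, b, w):
    """Ideal OLE: the receiver, holding w, learns f(w) = a + b*w, while the
    sender (holding a, b) learns nothing about w. A trusted function stands
    in here for the real cryptographic protocol."""
    return (a + b * w) % Q

b, w = 1234, 5678                      # sender holds b, receiver holds w
r = secrets.randbelow(Q)               # sender's random mask a = r
sender_share = (-r) % Q                # sender keeps -r
receiver_share = ideal_ole(r, b, w)    # receiver obtains r + b*w

# The two shares add up to the product b*w, yet neither side saw it in full.
assert (sender_share + receiver_share) % Q == (b * w) % Q
```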
2022
PKC
On the security of OSIDH 📺
The Oriented Supersingular Isogeny Diffie-Hellman is a post-quantum key exchange scheme recently introduced by Colò and Kohel. It is based on the group action of an ideal class group of a quadratic imaginary order on a subset of supersingular elliptic curves, and in this sense it can be viewed as a generalization of the popular isogeny based key exchange CSIDH. From an algorithmic standpoint, however, OSIDH is quite different from CSIDH. In a sense, OSIDH uses class groups which are more structured than in CSIDH, creating a potential weakness that was already recognized by Colò and Kohel. To circumvent the weakness, they proposed an ingenious way to realize a key exchange by exchanging partial information on how the class group acts in the neighborhood of the public curves, and conjectured that this additional information would not impact security. In this work we revisit the security of OSIDH by presenting a new attack, building upon previous work of Onuki. Our attack has exponential complexity, but it practically breaks Colò and Kohel's parameters unlike Onuki's attack. We also discuss countermeasures to our attack, and analyze their impact on OSIDH, both from an efficiency and a functionality point of view.
2022
PKC
Storing and Retrieving Secrets on a Blockchain 📺
A secret sharing scheme enables one party to distribute shares of a secret to n parties and ensures that an adversary in control of t out of n parties will learn no information about the secret. However, traditional secret sharing schemes are often insufficient, especially for applications in which the set of parties who hold the secret shares might change over time. To achieve security in this setting, dynamic proactive secret sharing (DPSS) is used. DPSS schemes proactively update the secret shares held by the parties and allow changes to the set of parties holding the secrets. We propose FaB-DPSS (FAst Batched DPSS) -- a new and highly optimized batched DPSS scheme. While previous work on batched DPSS focuses on a single client submitting a batch of secrets and does not allow storing and releasing secrets independently, we allow multiple different clients to dynamically share and release secrets. FaB-DPSS is the most efficient robust DPSS scheme that supports the highest possible adversarial threshold of 1/2. We prove FaB-DPSS secure and implement it. All operations complete in seconds, and we outperform a prior state-of-the-art DPSS scheme by over 6 times. Additionally, we propose new applications of DPSS in the context of blockchains. Specifically, we propose a protocol that uses blockchains and FaB-DPSS to provide conditional secret storage. The protocol allows parties to store secrets along with a release condition, and once a (possibly different) party satisfies this release condition, the secret is privately released to that party. This functionality is similar to extractable witness encryption. While there are numerous compelling applications (e.g., time-lock encryption, one-time programs, and fair multi-party computation) which rely on extractable witness encryption, there are no known efficient constructions (or even constructions based on any well-studied assumptions) of extractable witness encryption. However, by utilizing blockchains and FaB-DPSS, we can easily build all those applications. We provide an implementation of our conditional secret storage protocol as well as several applications building on top of it.
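For orientation, the static (t, n) sharing that DPSS schemes proactivize is typically Shamir's: the secret sits at the constant term of a random degree-t polynomial, so any t shares are uniformly random while t+1 reconstruct. A compact Python sketch of just this layer (the batching, proactive refresh, and dynamic-membership machinery of FaB-DPSS is the actual contribution and is not shown):

```python
import secrets

P = 2**61 - 1  # prime field (illustrative)

def share(secret, t, n):
    """Shamir (t, n) sharing: t shares reveal nothing, t+1 reconstruct."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the shared polynomial at x = 0."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=31337, t=2, n=5)
assert reconstruct(shares[:3]) == 31337   # any t+1 = 3 shares suffice
```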
2022
TOSC
The Legendre Symbol and the Modulo-2 Operator in Symmetric Schemes over $\mathbb{F}_p^n$: Preimage Attack on Full Grendel 📺
Motivated by modern cryptographic use cases such as multi-party computation (MPC), homomorphic encryption (HE), and zero-knowledge (ZK) protocols, several symmetric schemes that are efficient in these scenarios have recently been proposed in the literature. Some of these schemes are instantiated with low-degree nonlinear functions, for example low-degree power maps (e.g., MiMC, HadesMiMC, Poseidon) or the Toffoli gate (e.g., Ciminion). Others (e.g., Rescue, Vision, Grendel) are instead instantiated via high-degree functions which are easy to evaluate in the target application. A recent example for the latter case is the hash function Grendel, whose nonlinear layer is constructed using the Legendre symbol. In this paper, we analyze high-degree functions such as the Legendre symbol or the modulo-2 operation as building blocks for the nonlinear layer of a cryptographic scheme over $\mathbb{F}_p^n$. Our focus is on the security analysis rather than on the efficiency in the mentioned use cases. For this purpose, we present several new invertible functions that make use of the Legendre symbol or of the modulo-2 operation. Even though these functions often provide strong statistical properties and ensure a high degree after a few rounds, the main problem regards their small number of possible outputs, that is, only three for the Legendre symbol and only two for the modulo-2 operation. By fixing them, it is possible to reduce the overall degree of the function significantly. We exploit this behavior by describing the first preimage attack on full Grendel, and we verify it in practice.
2022
TOSC
Weak Tweak-Keys for the CRAFT Block Cipher 📺
CRAFT is a lightweight tweakable Substitution-Permutation-Network (SPN) block cipher optimized for efficient protection of its implementations against Differential Fault Analysis (DFA) attacks. In this paper, we present an equivalent description of CRAFT up to a simple mapping on the plaintext, ciphertext and round tweakeys. We show that the new representation, for a sub-class of keys, leads to a new structure which is a Feistel network, with non-linear operation and key addition only on half the state. Consequently, it reveals a class of weak keys for which CRAFT is less resistant against differential and linear cryptanalyses. As a result, we present one weak-key single-tweak differential attack on 23 rounds (with time complexity of $2^{94}$ encryptions and data complexity of $2^{74}$ chosen plaintext/tweak/ciphertext tuples, working for $2^{112}$ weak keys) and one weak-key related-tweak attack on 26 rounds of the cipher (with time complexity of $2^{105}$ encryptions and data complexity of $2^{73}$ chosen plaintext/tweak/ciphertext tuples, working for $2^{108}$ weak keys). Note that these attacks do not break the security claim of the CRAFT block cipher.
2022
TOSC
Bounds for the Security of Ascon against Differential and Linear Cryptanalysis 📺
The NIST Lightweight Cryptography project aims to standardize symmetric cryptographic designs, including authenticated encryption and hashing, suitable for constrained devices. One essential criterion for the evaluation of the 10 finalists is the evidence for their security against attacks like linear and differential cryptanalysis. For Ascon, one of the finalists and previous winner of the CAESAR competition in the ‘lightweight’ category, there is a large gap between the proven bounds and the best known characteristics found with heuristic tools: the bounds only cover up to 3 rounds with 15 differentially and 13 linearly active S-boxes, insufficient for proving a level of security for the full constructions. In this paper, we propose a new modeling strategy for SAT solvers and derive strong bounds for the round-reduced Ascon permutation. We prove that 4 rounds already ensure that any single characteristic has a differential probability or squared correlation of at most $2^{-72}$, and 6 rounds at most $2^{-108}$. This is significantly below the bound that could be exploited within the query limit for keyed Ascon modes. These bounds are probably not tight. To achieve this result, we propose a new search strategy of dividing the search space into a large number of subproblems based on ‘girdle patterns’, and show how to exploit the rotational symmetry of Ascon using necklace theory. Additionally, we evaluate and optimize several aspects of the pure SAT model, including the counter implementation and parallelizability, which we expect to be useful for future applications to other models.
2022
TOSC
A Formal Analysis of Boomerang Probabilities 📺
In the past 20 years since their conception, boomerang attacks have become an important tool in the cryptanalysis of block ciphers. In the classical estimate of their success probability, assumptions are made about the independence of the underlying differential trails that are not well-founded. We underline the problems inherent in these independence assumptions by using them to prove that for any boomerang there exists a differential trail over the entire cipher with a higher probability than the boomerang. While cryptanalysts today have a clear understanding that the trails can be dependent, the focus of previous research has mostly gone into using these dependencies to improve attacks, but little effort has been put into giving boomerangs and their success probabilities a stronger theoretical underpinning. With this publication, we provide such a formalization. We provide a framework which allows us to formulate and prove rigorous statements about the probabilities involved in boomerang attacks without relying on independence assumptions of the trails. Among these statements is a proof that two-round boomerangs on SPNs with differentially 4-uniform S-boxes always deviate from the classical probability estimate to the largest degree possible. We applied the results of this formalization to analyze the validity of some of the first boomerang attacks. We show that the boomerang constructed in the amplified boomerang attack on Serpent by Kelsey, Kohno, and Schneier has probability zero. For the rectangle attack on Serpent by Dunkelman, Biham, and Keller, we demonstrate that a minuscule fraction of only $2^{-43.4}$ of all differential trail combinations used in the original attack have a non-zero probability. In spite of this, the probability of the boomerang is in fact a little higher than the original estimate suggests, as the non-zero trails have a vastly higher probability than the classical estimate predicts.
2022
TOSC
Influence of the Linear Layer on the Algebraic Degree in SP-Networks 📺
We consider SPN schemes, i.e., schemes whose non-linear layer is defined as the parallel application of $t \ge 1$ independent S-Boxes over $\mathbb{F}_{2^n}$ and whose linear layer is defined by the multiplication with a $(n \cdot t) \times (n \cdot t)$ matrix over $\mathbb{F}_2$. Even if the algebraic representation of a scheme depends on all its components, upper bounds on the growth of the algebraic degree in the literature usually only consider the details of the non-linear layer. Hence a natural question arises: (how) do the details of the linear layer influence the growth of the algebraic degree? We show that the linear layer plays a crucial role in the growth of the algebraic degree and present a new upper bound on the algebraic degree in SP-networks. As main results, we prove that in the case of low-degree round functions with large S-Boxes: (a) an initial exponential growth of the algebraic degree can be followed by a linear growth until the maximum algebraic degree is reached; (b) the rate of the linear growth is proportional to the degree of the linear layer over $\mathbb{F}_{2^n}^t$. Besides providing a theoretical insight, our analysis is particularly relevant for assessing the security of cryptographic permutations designed to be competitive in applications like MPC, FHE, SNARKs, and STARKs, including permutations based on the Hades design strategy. We have verified our findings on small-scale instances and we have compared them against the currently best results in the literature, showing a substantial improvement of upper bounds on the algebraic degree in case of low-degree round functions with large S-Boxes.
2022
TOSC
Security of COFB against Chosen Ciphertext Attacks 📺
COFB is a lightweight Authenticated Encryption with Associated Data (AEAD) mode based on block ciphers. It was proposed in CHES 2017 and is the basis for GIFT-COFB, a finalist in the NIST lightweight standardization project. It comes with provable security results that guarantee its security up to the birthday bound in the nonce-respecting model. However, the designers offer multiple versions of the analysis with different details and the implications of attacks against the scheme are not discussed deeply. In this article, we look at a group of possible forgery and privacy attacks against COFB. We show that the security for both forgery and privacy is bounded by the number of forgery attempts. We show the existence of forgery and privacy attacks with success probability $q_d/2^{n/2}$, given $q_d$ forgery attempts. In particular, we show an attack with $2^{n/2}$ attempts using only a single known-plaintext encryption query against COFB. While these attacks do not contradict the claims made by the designers of GIFT-COFB, they show its limitations in terms of the number of forgery attempts. They also show that, while COFB generates a 128-bit tag, it behaves in a very similar manner to an AEAD scheme with a 64-bit tag. As a result of independent interest, our analysis provides a contradiction to the main theorem of Journal of Cryptology volume 33, pages 703–741 (2020), which includes an improved security proof of COFB compared to the CHES 2017 version. Finally, we discuss the term $nq_d/2^{n/2}$ that appears in the security proof of GIFT-COFB and CHES 2017, showing why there is a security gap between the provable results and the attacks. We emphasize that the results in this article do not threaten the security of GIFT-COFB in the scope of the NIST lightweight cryptography requirements or the claims made by the designers in the specification document of the design.
2022
TOSC
Towards Low-Latency Implementation of Linear Layers 📺
Lightweight cryptography features a small footprint and/or low computational complexity. Low-cost implementations of linear layers usually play an important role in lightweight cryptography. Although it has been shown by Boyar et al. that finding the optimal implementation of a linear layer is a Shortest Linear Program (SLP) problem and NP-hard, there exist a variety of heuristic methods to search for near-optimal solutions. This paper considers the low-latency criteria and focuses on the heuristic search of lightweight implementation for linear layers. Most prior approaches iteratively combine the inputs (of linear layers) to reach the output, which can be regarded as forward search. To better adapt to the low-latency criteria, we propose a new framework of backward search that attempts to iteratively split every output (into an XORing of two bits) until all inputs appear. By bounding the time of splitting, the new framework can find a sub-optimal solution with a minimized depth of circuits. We apply our new search algorithm to linear layers of block ciphers and find many low-latency candidates for implementations. Notably, for AES MixColumns, we provide an implementation with 103 XOR gates with a depth of 3, which is among the best hardware implementations of the AES linear layer. Besides, we obtain better implementations in XOR gates for 54.3% of 4256 Maximum Distance Separable (MDS) matrices proposed by Li et al. at FSE 2019. We also achieve an involutory MDS matrix (in $\mathcal{M}_4(\mathrm{GL}(8, \mathbb{F}_2))$) whose implementation uses the lowest number (i.e., 86, saving 2 from the state-of-the-art result) of XORs with the minimum depth.
2022
TOSC
Quantum Period Finding is Compression Robust 📺
We study quantum period finding algorithms such as Simon and Shor (and its variant Ekerå-Håstad). For a periodic function $f$ these algorithms produce – via some quantum embedding of $f$ – a quantum superposition $\sum_x |x\rangle\,|f(x)\rangle$, which requires a certain amount of output qubits that represent $|f(x)\rangle$. We show that one can lower this amount to a single output qubit by hashing $f$ down to a single bit in an oracle setting. Namely, we replace the embedding of $f$ in quantum period finding circuits by oracle access to several embeddings of hashed versions of $f$. We show that in expectation this modification only doubles the required amount of quantum measurements, while significantly reducing the total number of qubits. For example, for Simon’s algorithm that finds periods in $f : \mathbb{F}_2^n \to \mathbb{F}_2^n$ our hashing technique reduces the required output qubits from $n$ down to 1, and therefore the total amount of qubits from $2n$ to $n + 1$. We also show that Simon’s algorithm admits real world applications with only $n + 1$ qubits by giving a concrete realization of a hashed version of the cryptographic Even-Mansour construction. Moreover, for a variant of Simon’s algorithm on Even-Mansour that requires only classical queries to Even-Mansour we save a factor of (roughly) 4 in the qubits. Our oracle-based hashed version of the Ekerå-Håstad algorithm for factoring $n$-bit RSA reduces the required qubits from $(3/2 + o(1))n$ down to $(1/2 + o(1))n$.
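For context, the Even-Mansour construction mentioned above whitens a public permutation with two keys, $E_{k_1,k_2}(x) = k_2 \oplus P(x \oplus k_1)$, and the Simon-style attack works because $E(x) \oplus P(x)$ has hidden period $k_1$. A toy Python version with a stand-in permutation (our choice, with no cryptographic merit) makes the period visible:

```python
N = 64  # toy block size in bits

def perm(x: int) -> int:
    """Public stand-in permutation on N-bit values: an invertible affine
    map followed by a rotation. A bijection, but not a secure permutation."""
    x = (x * 0x9E3779B97F4A7C15 + 1) % 2**N      # odd multiplier => invertible
    return ((x << 13) | (x >> (N - 13))) % 2**N  # bit rotation

def even_mansour(k1: int, k2: int, x: int) -> int:
    return k2 ^ perm(x ^ k1)

# Simon's algorithm exploits that f(x) = EM(x) xor perm(x) satisfies
# f(x) = f(x xor k1), i.e., f is periodic with (hidden) period k1.
k1, k2 = 0xDEADBEEF, 0x12345678
f = lambda x: even_mansour(k1, k2, x) ^ perm(x)
assert f(0xABCDEF) == f(0xABCDEF ^ k1)
```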
2021
ASIACRYPT
Generic Framework for Key-Guessing Improvements 📺
We propose a general technique to improve the key-guessing step of several attacks on block ciphers. This is achieved by defining and studying some new properties of the associated S-boxes and by representing them as a special type of decision trees that are crucial for finding fine-grained guessing strategies for various attack vectors. We have proposed and implemented the algorithm that efficiently finds such trees, and use it to provide several applications of this approach, which include the best known attacks on NOEKEON, GIFT, and RECTANGLE.
2021
ASIACRYPT
A Systematic Approach and Analysis of Key Mismatch Attacks on Lattice-Based NIST Candidate KEMs 📺
Research on key mismatch attacks against lattice-based KEMs is an important part of the cryptographic assessment of the ongoing NIST standardization of post-quantum cryptography. There have been a number of these attacks to date. However, a unified method to evaluate these KEMs' resilience under key mismatch attacks is still missing. Since the key measure of efficiency is the number of queries needed to successfully mount such an attack, in this paper, we propose and develop a systematic approach to find lower bounds on the minimum average number of queries needed for such attacks. Our basic idea is to transform the problem of finding the lower bound of queries into finding an optimal binary recovery tree (BRT), where the computations of the lower bounds become essentially the computations of a certain Shannon entropy. The optimal BRT approach also enables us to understand why, for some lattice-based NIST candidate KEMs, there is a big gap between the theoretical bounds and bounds observed in practical attacks, in terms of the number of queries needed. This further leads us to propose a generic improvement method for these existing attacks, which is confirmed by our experiments. Moreover, our proposed method could be directly used to improve the side-channel attacks against CCA-secure NIST candidate KEMs.
2021
ASIACRYPT
DEFAULT: Cipher Level Resistance Against Differential Fault Attack 📺
Differential Fault Analysis (DFA) is a well known cryptanalytic technique that exploits faulty outputs of an encryption device. Despite its popularity and similarity with the classical Differential Analysis (DA), a thorough analysis explaining DFA from a designer's point of view is missing in the literature. To the best of our knowledge, no DFA immune cipher at an algorithmic level has been proposed so far. Furthermore, all known DFA countermeasures somehow depend on the device/protocol or on the implementation such as duplication/comparison. As all of these are outside the scope of the cipher designer, we focus on designing a primitive which can protect from DFA on its own. We present the first concept of cipher level DFA resistance which does not rely on any device/protocol related assumption, nor does it depend on any form of duplication. Our construction is simple, software/hardware friendly and DFA security scales up with the state size. It can be plugged before and/or after (almost) any symmetric key cipher and will ensure a non-trivial search complexity against DFA. One key component in our DFA protection layer is an SBox with linear structures. Such SBoxes have never been used in cipher design as they generally perform poorly against differential attacks. We argue that they in fact represent an interesting trade-off between good cryptographic properties and DFA resistance. As a proof of concept, we construct a DFA protecting layer, named DEFAULT-LAYER, as well as a full-fledged block cipher DEFAULT. Our solutions compare favourably to the state-of-the-art, offering advantages over the sophisticated duplication based solutions like impeccable circuits/CRAFT or infective countermeasures.
2021
ASIACRYPT
Massive Superpoly Recovery with Nested Monomial Predictions 📺
Determining the exact algebraic structure or some partial information of the superpoly for a given cube is a necessary step in the cube attack -- a generic cryptanalytic technique for symmetric-key primitives with some secret and public tweakable inputs. Currently, the division property based approach is the most powerful tool for exact superpoly recovery. However, as the algebraic normal form (ANF) of the targeted output bit gets increasingly complicated as the number of rounds grows, existing methods for superpoly recovery quickly hit their bottlenecks. For example, previous methods were stuck at rounds 842, 190, and 892 for Trivium, Grain-128AEAD, and Kreyvium, respectively. In this paper, we propose a new framework for recovering the exact ANFs of massive superpolies based on the monomial prediction technique (ASIACRYPT 2020, an alternative language for the division property). In this framework, the targeted output bit is first expressed as a polynomial of the bits of some intermediate states. For each term appearing in the polynomial, the monomial prediction technique is applied to determine its superpoly if the corresponding MILP model can be solved within a preset time limit. Terms unresolved within the time limit are further expanded as polynomials of the bits of some deeper intermediate states with symbolic computation, whose terms are again processed with monomial predictions. The above procedure is iterated until all terms are resolved. Finally, all the sub-superpolies are collected and assembled into the superpoly of the targeted bit. We apply the new framework to Trivium, Grain-128AEAD, and Kreyvium. As a result, the exact ANFs of the superpolies for 843-, 844- and 845-round Trivium, 191-round Grain-128AEAD and 894-round Kreyvium are recovered. Moreover, with the help of the Möbius transform, we present a novel key-recovery technique based on superpolies involving \textit{all} key bits by exploiting the sparse structures, which leads to the best key-recovery attacks on the targets considered.
2021
ASIACRYPT
Gentry-Wichs Is Tight: A Falsifiable Non-Adaptively Sound SNARG 📺
By the impossibility result of Gentry and Wichs, non-falsifiable assumptions are needed to construct (even non-zero-knowledge) adaptively sound succinct non-interactive arguments (SNARGs) for hard languages. It is important to understand whether this impossibility result is tight. While it is known how to construct adaptively sound non-succinct non-interactive arguments for $\mathsf{NP}$ from falsifiable assumptions, adaptively sound SNARGs for $\mathsf{NP}$ from non-falsifiable assumptions, and adaptively sound SNARGs for $\mathsf{P}$ from falsifiable assumptions, there are no known non-adaptively sound SNARGs for $\mathsf{NP}$ from falsifiable assumptions. We show that Gentry-Wichs is tight by constructing the latter. In addition, we prove it is non-adaptively knowledge-sound in the algebraic group model and Sub-ZK (i.e., zero-knowledge even if the CRS is subverted) under a non-falsifiable assumption.
2021
ASIACRYPT
Fine-tuning the ISO/IEC Standard LightMAC 📺
LightMAC, by Luykx et al., is a block cipher based message authentication code (MAC). The simplicity of design and low overhead allows it to have very compact implementations. As a result, it has been recently chosen as an ISO/IEC standard MAC for lightweight applications. LightMAC has been shown to achieve a query-length independent security bound of $O(q^2/2^n)$ when instantiated with two independently keyed $n$-bit block ciphers, where $q$ denotes the number of MAC queries and the query-length is upper bounded by $(n-s)2^s$ bits for a fixed counter size $s$. In this paper, we aim to minimize the number of block cipher keys in LightMAC. First, we show that the original LightMAC instantiated with a single block cipher key, referred to as 1k-LightMAC, achieves a security bound of $O(q^2/2^n)$ while the query-length is at least $(n-s)$ bits and at most $(n-s)\min\{2^{n/4},2^s\}$ bits. Second, we show that a minor variant of 1k-LightMAC, dubbed LightMAC-ds, achieves a security bound of $O(q^2/2^n)$ while the query-length is upper bounded by $(n-s)2^{s-1}$ bits. Of independent interest, our security proof of 1k-LightMAC employs a novel sampling approach, called reset-sampling, as a subroutine within the H-coefficient proof setup.
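Since the mode itself is only a few lines, a structural sketch helps to see where the two keys (and hence the paper's question) enter: each (n-s)-bit message block is prefixed with an s-bit counter and enciphered under K1, the results are XORed together with the padded final block, and the sum is enciphered under K2. The Python toy below substitutes a hash-based PRF for the block cipher and always pads the last block, so it illustrates the structure only, not a standard-compliant instantiation.

```python
import hashlib

N = 16  # block size in bytes (n = 128 bits)
S = 4   # counter size in bytes (s = 32 bits)

def E(key: bytes, block: bytes) -> bytes:
    """Hash-based PRF stand-in for the n-bit block cipher."""
    return hashlib.sha256(key + block).digest()[:N]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def lightmac_toy(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    chunk = N - S  # each counter-prefixed block carries n - s message bits
    blocks = [msg[i:i + chunk] for i in range(0, len(msg), chunk)] or [b""]
    acc = bytes(N)
    for i, m in enumerate(blocks[:-1], start=1):
        acc = xor(acc, E(k1, i.to_bytes(S, "big") + m))   # counter || block
    last = (blocks[-1] + b"\x80").ljust(N, b"\x00")       # 10* padding
    return E(k2, xor(acc, last))                          # final encryption

print(lightmac_toy(b"K1" * 8, b"K2" * 8, b"attack at dawn").hex())
```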
2021
ASIACRYPT
Private Join and Compute from PIR with Default 📺
The private join and compute (PJC) functionality enables secure computation over data distributed across different databases, and is applicable to a wide range of applications, many of which address settings where the input databases are of significantly different sizes. We introduce the notion of private information retrieval (PIR) with default, which enables two-party PJC functionalities in a way that hides the size of the intersection of the two databases and incurs sublinear communication cost in the size of the bigger database. We provide two constructions for this functionality, one of which requires offline linear communication, which can be amortized across queries, and one that provides sublinear cost for each query but relies on more computationally expensive tools. We construct inner-product PJC, which has applications to ads conversion measurement and contact tracing, relying on an extension of PIR with default. We evaluate the efficiency of our constructions, which can enable $\mathbf{2^{8}}$ PIR with default lookups on a database of size $\mathbf{2^{25}}$ (or inner-product PJC on databases with such sizes) with the communication of $\mathbf{44}$MB, which costs less than $\mathbf{0.17}$c. for the client and $\mathbf{26.48}$c. for the server.
2021
ASIACRYPT
A Geometric Approach to Linear Cryptanalysis 📺
A new interpretation of linear cryptanalysis is proposed. This 'geometric approach' unifies all common variants of linear cryptanalysis, reveals links between various properties, and suggests additional generalizations. For example, new insights into invariants corresponding to non-real eigenvalues of correlation matrices and a generalization of the link between zero-correlation and integral attacks are obtained. Geometric intuition leads to a fixed-key motivation for the piling-up principle, which is illustrated by explaining and generalizing previous results relating invariants and linear approximations. Rank-one approximations are proposed to analyze cell-oriented ciphers, and used to resolve an open problem posed by Beierle, Canteaut and Leander at FSE 2019. In particular, it is shown how such approximations can be analyzed automatically using Riemannian optimization.
2021
ASIACRYPT
Gladius: LWR based efficient hybrid public key encryption with distributed decryption 📺
Standard hybrid encryption schemes based on the KEM-DEM framework are hard to implement efficiently in a distributed manner whilst maintaining the CCA security property of the scheme. This is because the DEM needs to be decrypted under the key encapsulated by the KEM, before the whole ciphertext is declared valid. In this paper we present a new variant of the KEM-DEM framework, closely related to Tag-KEMs, which sidesteps this issue. We then present a post-quantum KEM for this framework based on Learning-with-Rounding, which is designed specifically to have fast distributed decryption. Our combined construction of a hybrid encryption scheme with Learning-with-Rounding based KEM, called Gladius, is closely related to the NIST Round 3 candidate called Saber. Finally, we give a prototype distributed implementation that achieves a decapsulation time of 4.99 seconds for three parties.
2021
ASIACRYPT
Public Key Encryption with Flexible Pattern Matching 📺
Many interesting applications of pattern matching (e.g. deep-packet inspection or medical data analysis) target very sensitive data. In particular, spotting illegal behaviour in internet traffic conflicts with legitimate privacy requirements, which usually forces users (e.g. children, employees) to blindly trust an entity that fully decrypts their traffic in the name of security. The compromise between traffic analysis and privacy can be achieved through searchable encryption. However, as the traffic data is a stream and as the patterns to search are bound to evolve over time (e.g. new virus signatures), these applications require a kind of searchable encryption that provides more flexibility than the classical schemes. We indeed need to be able to search for patterns of variable sizes in an arbitrary long stream that has potentially been encrypted prior to pattern identification. To stress these specificities, we call such a scheme a stream encryption supporting pattern matching. Recent papers use bilinear groups to provide public key constructions supporting these features. These solutions are lighter than more generic ones (e.g. fully homomorphic encryption) while retaining the adequate expressivity to support pattern matching without harming privacy more than needed. However, all existing solutions in this family have weaknesses with respect to efficiency and security that need to be addressed. Regarding efficiency, their public key has a size linear in the size of the alphabet, which can be quite large, in particular for applications that naturally process data as bytestrings. Regarding security, they all rely on a very strong computational assumption that is both interactive and specially tailored for this kind of scheme. In this paper, we tackle these problems by providing two new constructions using bilinear groups to support pattern matching on encrypted streams. Our first construction shares the same strong assumption but dramatically reduces the size of the public key by removing the dependency on the size of the alphabet, while nearly halving the size of the ciphertext. On a typical application with large patterns, our public key is two orders of magnitude smaller than that of previous schemes, which demonstrates the practicality of our approach. Our second construction manages to retain most of the good features of the first one while exclusively relying on a simple (static) variant of DDH, which solves the security problem of previous works.
2021
ASIACRYPT
Algebraic Attacks on Rasta and Dasta Using Low-Degree Equations 📺
Rasta and Dasta are two fully homomorphic encryption friendly symmetric-key primitives proposed at CRYPTO 2018 and ToSC 2020, respectively. We point out that the designers of Rasta and Dasta neglected an important property of the $\chi$ operation. Combined with the special structure of Rasta and Dasta, this property directly leads to significantly improved algebraic cryptanalysis. Especially, it enables us to theoretically break 2 out of 3 instances of full Agrasta, which is the aggressive version of Rasta with the block size only slightly larger than the security level in bits. We further reveal that Dasta is more vulnerable against our attacks than Rasta for its usage of a linear layer composed of an ever-changing bit permutation and a deterministic linear transform. Based on our cryptanalysis, the security margins of Dasta and Rasta parameterized with $(n,\kappa,r)\in\{(327,80,4),(1877,128,4),(3545,256,5)\}$ are reduced to only 1 round, where $n$, $\kappa$ and $r$ denote the block size, the claimed security level and the number of rounds, respectively. These parameters are of particular interest as the corresponding ANDdepth is the lowest among those that can be implemented in reasonable time and target the same claimed security level.
2021
ASIACRYPT
Generalized Channels from Limited Blockchain Scripts and Adaptor Signatures 📺
Decentralized and permissionless ledgers offer an inherently low transaction rate, as a result of their consensus protocol demanding the storage of each transaction on-chain. A prominent proposal to tackle this scalability issue is to utilize off-chain protocols, where parties only need to post a limited number of transactions on-chain. Existing solutions can roughly be categorized into: (i) application-specific channels (e.g., payment channels), offering strictly weaker functionality than the underlying blockchain; and (ii) state channels, supporting arbitrary smart contracts at the cost of being compatible only with the few blockchains having Turing-complete scripting languages (e.g., Ethereum). In this work, we introduce and formalize the notion of generalized channels allowing users to perform any operation supported by the underlying blockchain in an off-chain manner. Generalized channels thus extend the functionality of payment channels and relax the definition of state channels. We present a concrete construction compatible with any blockchain supporting transaction authorization, time-locks and a constant number of Boolean AND and OR operations -- requirements fulfilled by many (non-Turing-complete) blockchains including the popular Bitcoin. To this end, we leverage adaptor signatures -- a cryptographic primitive already used in the cryptocurrency literature but formalized as a standalone primitive in this work for the first time. We formally prove the security of our generalized channel construction in the Universal Composability framework. As an important practical contribution, our generalized channel construction outperforms the state-of-the-art payment channel construction, the Lightning Network, in efficiency. Concretely, it halves the off-chain communication complexity and reduces the on-chain footprint in case of disputes from linear to constant in the number of off-chain applications funded by the channel. Finally, we evaluate the practicality of our construction via a prototype implementation and discuss various applications including financially secured fair two-party computation.
2021
ASIACRYPT
Key Encapsulation Mechanism with Tight Enhanced Security in the Multi-User Setting: Impossibility Result and Optimal Tightness 📺
For Key Encapsulation Mechanisms (KEMs) deployed in a multi-user setting, an adversary may corrupt some users to learn their secret keys, and obtain some encapsulated keys due to careless key management by users. To resist such attacks, we formalize Enhanced security against Chosen Plaintext/Ciphertext Attack (ECPA/ECCA), which asks for the pseudorandomness of unrevealed encapsulated keys under uncorrupted users. This enhanced security for KEMs serves well for the security of a class of Authenticated Key Exchange protocols built from KEMs. In this paper, we study the achievability of tight ECPA and ECCA security for KEMs in the multi-user setting, and present an impossibility result and an optimal security loss factor that can be obtained. The existing meta-reduction technique due to Bader et al. (EUROCRYPT 2016) rules out some KEMs, but many well-known KEMs, e.g., the Cramer-Shoup KEM (SIAM J. Comput. 2003) and the Kurosawa-Desmedt KEM (CRYPTO 2004), fall outside its reach. To solve this problem, we develop a new technical tool, the rank of a KEM, and a new secret key partitioning strategy for meta-reductions. With this new tool and new strategy, we prove that KEM schemes with polynomially-bounded ranks have no tight ECPA and ECCA security from non-interactive complexity assumptions, and the security loss is at least linear in the number n of users. This impossibility result covers many well-known KEMs, including the Cramer-Shoup KEM, the Kurosawa-Desmedt KEM and many others. Moreover, we show that the linear security loss is optimal by presenting concrete KEMs with security loss Θ(n). This is justified by a non-trivial security reduction with linear loss factor from ECPA/ECCA security to the traditional multi-challenge CPA/CCA security.
2021
ASIACRYPT
Balanced Non-Adjacent Forms 📺
Integers can be decomposed in multiple ways. The choice of a recoding technique is generally dictated by performance considerations. The usual metric for optimizing the decomposition is the Hamming weight. In this work, we consider a different metric and propose new modified forms (i.e., integer representations using signed digits) that satisfy minimality requirements under the new metric. Specifically, we introduce what we call balanced non-adjacent forms and prove that they feature a minimal Euclidean weight. We also present efficient algorithms to produce these new minimal forms. We analyze their asymptotic and exact distributions. We extend the definition to modular integers and show similar optimality results. Balanced non-adjacent forms find natural applications in fully homomorphic encryption as they optimally reduce the noise variance in LWE-type ciphertexts.
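For readers unfamiliar with signed-digit recodings, the classical non-adjacent form, whose minimality is measured in Hamming weight and which the paper's balanced variant refines under a Euclidean-weight metric, can be computed greedily. A quick Python sketch of the classical NAF (not the paper's balanced form):

```python
def naf(n: int) -> list[int]:
    """Classical non-adjacent form: digits in {-1, 0, 1}, least significant
    first, with no two adjacent nonzero digits; it minimizes the Hamming
    weight among signed-binary representations of n."""
    digits = []
    while n > 0:
        if n & 1:
            d = 2 - (n % 4)  # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

d = naf(7)  # 7 = -1 + 0*2 + 0*4 + 1*8  ->  [-1, 0, 0, 1]
assert sum(di * 2**i for i, di in enumerate(d)) == 7
```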
2021
ASIACRYPT
Garbling, Stacked and Staggered: Faster k-out-of-n Garbled Function Evaluation 📺
Stacked Garbling (SGC) is a Garbled Circuit (GC) improvement that efficiently and securely evaluates programs with conditional branching. SGC reduces bandwidth consumption such that communication is proportional to the size of the single longest program execution path, rather than to the size of the entire program. Crucially, the parties expend increased computational effort compared to classic GC. Motivated by procuring a subset in a menu of computational services or tasks, we consider GC evaluation of k-out-of-n branches, whose indices are known (or eventually revealed) to the GC evaluator E. Our stack-and-stagger technique amortizes GC computation in this setting. We retain the communication advantage of SGC, while significantly improving computation and wall-clock time. Namely, each GC party garbles (or evaluates) the total of n branches, a significant improvement over the O(nk) garblings/evaluations needed by standard SGC. We present our construction as a garbling scheme. Our technique brings significant overall performance improvement in various settings, including those typically considered in the literature: e.g. on a 1Gbps LAN we evaluate 16-out-of-128 functions ~7.68x faster than standard stacked garbling.
2021
ASIACRYPT
PrORAM: Fast O(log n) Authenticated Shares ZK ORAM 📺
We construct a concretely efficient Zero Knowledge (ZK) Oblivious RAM (ORAM) for ZK Proof (ZKP) systems based on authenticated sharings of arithmetic values. It consumes $2\log n$ oblivious transfers (OTs) of length-$2\sigma$ secrets per access of an arithmetic value, for statistical security parameter $\sigma$ and array size $n$. This is an asymptotic and concrete improvement over previous best (concretely efficient) ZK ORAM BubbleRAM of Heath and Kolesnikov ([HK20a], CCS 2020), whose access cost is $\frac{1}{2}\log^2 n$ OTs of length-$2\sigma$ secrets. ZK ORAM is essential for proving statements that are best expressed as RAM programs, rather than Boolean or arithmetic circuits. Our construction is private-coin ZK. We integrate it with [HK20a]’s ZKP protocol and prove the resulting ZKP system secure. We implemented PrORAM in C++. Compared to the state-of-the-art BubbleRAM, our PrORAM is ~10x faster for arrays of size $2^{20}$ of 40-bit values.
2021
ASIACRYPT
Redeeming Reset Indifferentiability and Applications to Post-Quantum Security 📺
Indifferentiability is used to analyze the security of constructions of idealized objects, such as random oracles or ideal ciphers. Reset indifferentiability is a strengthening of plain indifferentiability which is applicable in far more scenarios, but has largely been abandoned due to significant impossibility results and a lack of positive results. Our main results are:
- Under \emph{weak} reset indifferentiability, ideal ciphers imply (fixed size) random oracles, and domain shrinkage is possible. We thus show reset indifferentiability is more useful than previously thought.
- We lift our analysis to the quantum setting, showing that ideal ciphers imply random oracles under quantum indifferentiability.
- Despite Shor's algorithm, we observe that generic groups are still meaningful quantumly, showing that they are quantumly (reset) indifferentiable from ideal ciphers; combined with the above, cryptographic groups yield post-quantum \emph{symmetric} key cryptography. In particular, we obtain a plausible post-quantum random oracle that is a subset-product followed by two modular reductions.
2021
ASIACRYPT
Bounded Collusion ABE for TMs from IBE 📺
We give an attribute-based encryption system for Turing Machines that is provably secure assuming only the existence of identity-based encryption (IBE) for large identity spaces. Currently, IBE is known to be realizable from most mainstream number theoretic assumptions that imply public key cryptography including factoring, the search Diffie-Hellman assumption, and the Learning with Errors assumption. Our core construction provides security against an attacker that makes a single key query for a machine $T$ before declaring a challenge string $w^*$ that is associated with the challenge ciphertext. We build our construction by leveraging a Garbled RAM construction of Gentry, Halevi, Raykova and Wichs; however, to prove security we need to introduce a new notion of security called iterated simulation security. We then show how to transform our core construction into one that is secure for an a-priori bounded number $q = q(\lambda)$ of key queries that can occur either before or after the challenge ciphertext. We do this by first showing how one can use a special type of non-committing encryption to transform a system that is secure only if a single key is chosen before the challenge ciphertext is declared into one where the single key can be requested either before or after the challenge ciphertext. We give a simple construction of this non-committing encryption from public key encryption in the Random Oracle Model. Next, one can apply standard combinatorial techniques to lift from single-key adaptive security to $q$-key adaptive security.
2021
ASIACRYPT
Adaptive Security via Deletion in Attribute-Based Encryption: Solutions from Search Assumptions in Bilinear Groups 📺
One of the primary research challenges in Attribute-Based Encryption (ABE) is constructing and proving cryptosystems that are adaptively secure. To date the main paradigm for achieving adaptive security in ABE is dual system encryption. However, almost all such solutions in bilinear groups rely on (variants of) either the subgroup decision problem over composite order groups or the decision linear assumption. Both of these assumptions are decisional rather than search assumptions and the target of the assumption is a source or bilinear group element. This is in contrast to earlier selectively secure ABE systems which can be proven secure from either the decisional or search Bilinear Diffie-Hellman assumption. In this work we make progress on closing this gap by giving a new ABE construction for the subset functionality and proving security under the Search Bilinear Diffie-Hellman assumption. We first provide a framework for proving adaptive security in Attribute-Based Encryption systems. We introduce a concept of ABE with deletable attributes where any party can take a ciphertext encrypted under the attribute string x in {0, 1}^n and modify it into a ciphertext encrypted under any string x' in {0, 1, bot}^n where x' is derived by replacing any bits of x with bot symbols (i.e. ``deleting'' attributes of x). The semantics of the system are that any private key for a circuit C can be used to decrypt a ciphertext associated with x' if none of the input bits read by circuit C are bot symbols and C(x') = 1. We show a pathway for combining ABE with deletable attributes with constrained pseudorandom functions to obtain adaptively secure ABE building upon the recent work of [Tsabary19]. Our new ABE system will be adaptively secure and be a ciphertext-policy ABE that supports the same functionality as the underlying constrained PRF as long as the PRF is ``deletion conforming''. Here we also provide a simple constrained PRF construction that gives subset functionality. Our approach enables us to access a broader array of Attribute-Based Encryption schemes that support deletion of attributes. For example, we show that both the [GPSW06] and [Boyen13] ABE schemes can trivially handle a deletion operation. And, by using a hardcore bit variant of the GPSW scheme we obtain an adaptively secure ABE scheme under the Search Bilinear Diffie-Hellman assumption in addition to pseudorandom functions in NC1. This gives the first adaptively secure ABE from a search assumption, as all prior work relied on decision assumptions over source group elements.
2021
ASIACRYPT
Homomorphic Secret Sharing for Multipartite and General Adversary Structures Supporting Parallel Evaluation of Low-Degree Polynomials 📺
Homomorphic secret sharing (HSS) for a function $f$ allows input parties to distribute shares for their private inputs and then locally compute output shares from which the value of $f$ is recovered. HSS can be directly used to obtain a two-round multiparty computation (MPC) protocol for possibly non-threshold adversary structures whose communication complexity is independent of the size of $f$. In this paper, we propose two constructions of HSS schemes supporting parallel evaluation of a single low-degree polynomial and tolerating multipartite and general adversary structures. Our multipartite scheme tolerates a wider class of adversary structures than the previous multipartite one in the particular case of a single evaluation and has exponentially smaller share size than the general construction. While restricting the range of tolerable adversary structures (but still applicable to non-threshold ones), our schemes perform $\ell$ parallel evaluations with communication complexity approximately $\ell/\log\ell$ times smaller than simply using $\ell$ independent instances. We also formalize two classes of adversary structures taking into account real-world situations to which the previous threshold schemes are inapplicable. Our schemes then perform $O(m)$ parallel evaluations with almost the same communication cost as a single evaluation, where $m$ is the number of parties.
2021
ASIACRYPT
Onion Routing with Replies 📺
Onion routing (OR) protocols are a crucial tool for providing anonymous internet communication. An OR protocol enables a user to anonymously send requests to a server. A fundamental problem of OR protocols is how to deal with replies: ideally, we would want the server to be able to send a reply back to the anonymous user without knowing or disclosing the user's identity. Existing OR protocols do allow for such replies, but do not provably protect the payload (i.e., message) of replies against manipulation. Kuhn et al. (IEEE S&P 2020) show that such manipulations can in fact be leveraged to break anonymity of the whole protocol. In this work, we close this gap and provide the first framework and protocols for OR with protected replies. We define security in the sense of an ideal functionality in the universal composability model, and provide corresponding (less complex) game-based security notions for the individual properties. We also provide two secure instantiations of our framework: one based on updatable encryption, and one based on succinct non-interactive arguments (SNARGs) to authenticate payloads both in requests and replies. In both cases, our central technical handle is an implicit authentication of the transmitted payload data, as opposed to an explicit, but insufficient authentication (with MACs) in previous solutions. Our results exhibit a new and surprising application of updatable encryption outside of long-term data storage.
2021
ASIACRYPT
Partial Key Exposure Attack on Short Secret Exponent CRT-RSA 📺
Let $(N,e)$ be an RSA public key, where $N=pq$ is the product of equal bitsize primes $p,q$. Let $d_p, d_q$ be the corresponding secret CRT-RSA exponents. Using a Coppersmith-type attack, Takayasu, Lu and Peng (TLP) recently showed that one obtains the factorization of $N$ in polynomial time, provided that $d_p, d_q \leq N^{0.122}$. Building on the TLP attack, we show the first {\em Partial Key Exposure} attack on short secret exponent CRT-RSA. Namely, let $N^{0.122} \leq d_p, d_q \leq N^{0.5}$. Then we show that a constant known fraction of the least significant bits (LSBs) of both $d_p, d_q$ suffices to factor $N$ in polynomial time. Naturally, the larger $d_p,d_q$, the more LSBs are required. E.g., if $d_p, d_q$ are of size $N^{0.13}$, then we have to know roughly a $\frac{1}{5}$-fraction of their LSBs, whereas for $d_p, d_q$ of size $N^{0.2}$ we already require knowledge of a $\frac{2}{3}$-fraction of their LSBs. Eventually, if $d_p, d_q$ are of full size $N^{0.5}$, we have to know all of their bits. Notice that as a side product of our result we obtain a heuristic deterministic polynomial time factorization algorithm on input $(N,e,d_p,d_q)$.
2021
ASIACRYPT
Beyond Software Watermarking: Traitor-Tracing for Pseudorandom Functions 📺
Software watermarking schemes allow a user to embed an identifier into a piece of code such that the resulting program is nearly functionally equivalent to the original program, and yet, it is difficult to remove the identifier without destroying the functionality of the program. Such schemes are often considered for proving software ownership or for digital rights management. Existing constructions of watermarking have focused primarily on watermarking pseudorandom functions (PRFs). In this work, we revisit the definitional foundations of watermarking, and begin by highlighting a major flaw in existing security notions. Existing security notions for watermarking only require that the identifier be successfully extracted from programs that preserve the exact input/output behavior of the original program. In the context of PRFs, this means that an adversary that constructs a program which computes a quarter of the output bits of the PRF, or that is able to distinguish the outputs of the PRF from random, is considered to be outside the threat model. However, in any application (e.g., watermarking a decryption device or an authentication token) that relies on PRF security, an adversary that manages to predict a quarter of the bits or to distinguish the PRF outputs from random would be considered to have defeated the scheme. Thus, existing watermarking schemes provide very little security guarantee against realistic adversaries. None of the existing constructions of watermarkable PRFs would be able to extract the identifier from a program that only outputs a quarter of the bits of the PRF or one that perfectly distinguishes its outputs from random. To address the shortcomings in existing watermarkable PRF definitions, we introduce a new primitive called a traceable PRF. Our definitions are inspired by similar definitions from public-key traitor tracing, and aim to capture a very robust set of adversaries: namely, any adversary that produces a useful distinguisher (i.e., a program that can break PRF security) can be traced to a specific identifier. We provide a general framework for constructing traceable PRFs via an intermediate primitive called private linear constrained PRFs. Finally, we show how to construct traceable PRFs from a similar set of assumptions previously used to realize software watermarking. Namely, we obtain a single-key traceable PRF from standard lattice assumptions and a fully collusion-resistant traceable PRF from indistinguishability obfuscation (together with injective one-way functions).
2021
ASIACRYPT
Chain Reductions for Multi-Signatures and the HBMS Scheme 📺
Existing proofs for Discrete Log (DL) based multi-signature schemes give only weak guarantees if the schemes are implemented, as they are in practice, in 256-bit groups. This is because the underlying reductions, which are mostly in the standard model and from DL, are loose. We show that relaxing either the model or the assumption suffices to obtain tight reductions. Namely we give (1) tight proofs from DL in the Algebraic Group Model, and (2) tight, standard-model proofs from well-founded assumptions other than DL. We first do this for the classical 3-round schemes, namely BN and MuSig. Then we give a new 2-round multi-signature scheme, HBMS, as efficient as prior ones, for which we do the same. These multiple paths to security for a single scheme are made possible by a framework of chain reductions, in which a reduction is broken into a chain of sub-reductions involving intermediate problems. Overall our results improve the security guarantees for DL-based multi-signature schemes in the groups in which they are implemented in practice.
2021
ASIACRYPT
Transciphering Framework for Approximate Homomorphic Encryption 📺
Homomorphic encryption (HE) is a promising cryptographic primitive that enables computation over encrypted data, with a variety of applications including medical, genomic, and financial tasks. In Asiacrypt 2017, Cheon et al. proposed the CKKS scheme to efficiently support approximate computation over encrypted data of real numbers. HE schemes including CKKS, nevertheless, still suffer from slow encryption speed and large ciphertext expansion compared to symmetric cryptography. In this paper, we propose a novel hybrid framework, dubbed the RtF (Real-to-Finite-field) framework, that supports CKKS. The main idea behind this construction is to combine the CKKS and the FV homomorphic encryption schemes, and to use a stream cipher based on modular arithmetic in between. As a result, real numbers can be encrypted without significant ciphertext expansion or computational overhead on the client side. As an instantiation of the stream cipher in our framework, we propose a new HE-friendly cipher, dubbed HERA, and extensively analyze its security and efficiency. The main feature of HERA is that it uses a simple randomized key schedule. Compared to recent HE-friendly ciphers such as FLIP and Rasta using randomized linear layers, HERA requires a smaller number of random bits. For this reason, HERA significantly outperforms existing HE-friendly ciphers on both the client and the server sides. With the RtF transciphering framework combined with HERA at the 128-bit security level, we achieve a small ciphertext expansion ratio, ranging from 1.23 to 1.54, which is at least 23 times smaller than using (symmetric) CKKS-only, assuming the same precision bits and the same level of ciphertexts at the end of the framework. We also achieve 1.6 $\mu$s and 21.7 MB/s for latency and throughput on the client side, which are 9085 times and 17.8 times faster than the CKKS-only environment, respectively.
2021
ASIACRYPT
A New Variant of Unbalanced Oil and Vinegar Using Quotient Ring: QR-UOV 📺
The unbalanced oil and vinegar signature scheme (UOV) is a multivariate signature scheme that has essentially not been broken for over 20 years. However, it requires the use of a large public key; thus, various methods have been proposed to reduce its size. In this paper, we propose a new variant of UOV with a public key represented by block matrices whose components correspond to an element of a quotient ring. We discuss how the security of our proposed scheme is affected by whether or not the quotient ring is a field. Furthermore, we discuss its security against currently known and newly possible attacks and propose parameters for our scheme. We demonstrate that our proposed scheme can achieve a small public key size without significantly increasing the signature size compared with other UOV variants. For example, the public key size of our proposed scheme is 85.8 KB for NIST's Post-Quantum Cryptography Project (security level 3), whereas that of compressed Rainbow is 252.3 KB, where Rainbow is a variant of UOV and is one of the third-round finalists of the NIST PQC project.
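To illustrate the quotient-ring structure behind the compression, here is a small Python sketch (toy parameters of our choosing) that builds the $l \times l$ multiplication matrix of an element of $\mathbb{F}_q[x]/(f)$, i.e., the kind of block by which such a public key can be represented:

    # Toy parameters: q = 7, f(x) = x^3 + 2x + 1 over F_q (monic, l = 3).
    q = 7
    f = [1, 2, 0, 1]              # coefficients of x^0, x^1, x^2, x^3
    l = len(f) - 1

    def polymul_mod(a, b):
        """Multiply two polynomials over F_q and reduce modulo f."""
        res = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                res[i + j] = (res[i + j] + ai * bj) % q
        for i in range(len(res) - 1, l - 1, -1):  # eliminate x^i for i >= l
            c, res[i] = res[i], 0
            for j in range(l):
                res[i - l + j] = (res[i - l + j] - c * f[j]) % q
        return res[:l]

    def matrix_of(g):
        """The l x l matrix of 'multiply by g' in the basis 1, x, ..., x^(l-1);
        a single ring element thus stands for a whole l x l block."""
        cols = [polymul_mod(g, [0] * k + [1]) for k in range(l)]
        return [[cols[k][row] for k in range(l)] for row in range(l)]

    for row in matrix_of([3, 1, 0]):          # g(x) = x + 3
        print(row)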
2021
ASIACRYPT
FAST: Secure and High Performance Format-Preserving Encryption and Tokenization 📺
We propose a new construction for format-preserving encryption. Our design provides the flexibility for use in format-preserving encryption (FPE) and for static table-driven tokenization. Our algorithm is a substitution-permutation network based on random S-boxes. Using pseudorandom generators and pseudorandom functions, we prove strong adaptive security based on the assumption that our core design is a super-pseudorandom permutation, and we obtain empirical parameters for this assumption to hold. We also suggest parameters for quantum security. Our design accommodates very small domains, with a radix $a$ from 4 to the Unicode alphabet size and a block length $l$ starting at 2. The number of S-box evaluations per encryption is asymptotically $l^{\frac32}$, which is also the number of bytes we need to generate using AES in CTR mode for each tweak setup. For instance, we tokenize 10 decimal digits using 29 (parallel) AES computations to be done only once, when the tweak changes.
2021
ASIACRYPT
Bit Security as Computational Cost for Winning Games with High Probability 📺
We introduce a novel framework for quantifying the bit security of security games. Our notion is defined with an operational meaning: a $\lambda$-bit secure game requires a total computational cost of $2^\lambda$ for winning the game with high probability, e.g., 0.99. We define bit security both for search-type and decision-type games. Since these two types of games are structurally different, we treat them separately, but we define bit security within a unified framework so as to guarantee the same operational interpretation. The key novelty of our notion of bit security is to employ two types of adversaries: an inner adversary and an outer adversary. While the inner adversary plays a ``usual'' security game, the outer adversary invokes the inner adversary many times to amplify the winning probability for the security game. We find from our framework that the bit security for decision games can be characterized by the information measure called the \emph{R\'enyi divergence} of order $1/2$ of the inner adversary. The conventional ``advantage,'' defined as the probability of winning the game, characterizes our bit security for search-type games. We present several security reductions in our framework to justify our notion of bit security. Many of our results quantitatively match the results for the bit security notion proposed by Micciancio and Walter in 2018. In this sense, our bit security strengthens the previous notion of bit security by adding an operational meaning. A difference from their work is that, in our framework, the Goldreich-Levin theorem gives an optimal reduction only for ``balanced'' adversaries who output binary values in a balanced manner.
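For reference, the order-$1/2$ R\'enyi divergence invoked above is, in its standard form (our notation; the paper's normalization may differ slightly), $$ D_{1/2}(P \,\|\, Q) \;=\; -2 \log \sum_{x} \sqrt{P(x)\, Q(x)}, $$ a monotone function of the Bhattacharyya coefficient $\sum_x \sqrt{P(x)Q(x)}$, which measures the overlap between the two output distributions.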
2021
ASIACRYPT
Fault-Injection Attacks against NIST’s Post-Quantum Cryptography Round 3 KEM Candidates 📺
We investigate \emph{all} NIST PQC Round 3 KEM candidates from the viewpoint of fault-injection attacks: Classic McEliece, Kyber, NTRU, Saber, BIKE, FrodoKEM, HQC, NTRU Prime, and SIKE. All KEM schemes use variants of the Fujisaki-Okamoto transformation, so the equality test with re-encryption in decapsulation is critical. We survey effective key-recovery attacks for the case where the equality test can be skipped. We find existing key-recovery attacks against Kyber, NTRU, Saber, FrodoKEM, HQC, one of the two KEM schemes in NTRU Prime, and SIKE, and we propose a new key-recovery attack against the other KEM scheme in NTRU Prime. We also report an attack against BIKE that leads to leakage of information on secret keys. The open-source pqm4 library contains all KEM schemes except Classic McEliece and HQC. We show that injecting a single instruction-skipping fault in the decapsulation process \emph{virtually} skips the equality test for Kyber, NTRU, Saber, BIKE, and SIKE, and we report experimental attacks against them. We also report that the pqm4 implementation of NTRU Prime freely allows chosen-ciphertext attacks and that the timing side-channel of FrodoKEM reported by Guo, Johansson, and Nilsson (CRYPTO 2020) remains, while neither bug is present in the corresponding NIST PQC Round 3 submissions.
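The role of the equality test can be seen in a toy Python model (the ``PKE'' below is a contrived stand-in of our own, not any actual candidate): a single skipped comparison turns decapsulation into an oracle that answers on malformed ciphertexts.

    import hashlib

    def H(*parts):
        h = hashlib.sha256()
        for p in parts:
            h.update(p)
        return h.digest()

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    sk = b"0123456789abcdef"   # toy key; pk = sk only to keep the sketch
    pk = sk                    # self-contained (this is NOT a real PKE)

    def enc(m):                # derandomized toy encryption with redundancy
        return xor(m, H(b"pad", pk)[:16]) + H(b"tag", m)[:16]

    def dec(ct):
        return xor(ct[:16], H(b"pad", sk)[:16])

    def decaps(ct, skip_check=False):
        m = dec(ct)                          # 1. decrypt
        if not skip_check and enc(m) != ct:  # 2. re-encrypt and compare
            return H(b"reject", ct)          #    (the faultable equality test)
        return H(b"accept", m)               # 3. derive the session key

    ct = enc(b"session-seed-000")
    mauled = bytes([ct[0] ^ 1]) + ct[1:]
    print(decaps(mauled) == H(b"reject", mauled))   # True: test catches it
    # One skipped instruction (skip_check=True) and the oracle now answers
    # on malformed ciphertexts -- the entry point of the surveyed attacks.
    print(decaps(mauled, skip_check=True) != H(b"reject", mauled))  # True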
2021
ASIACRYPT
Toward a Fully Secure Authenticated Encryption Scheme From a Pseudorandom Permutation 📺
In this paper, we propose a new block cipher-based authenticated encryption scheme, dubbed the Synthetic Counter with Masking (SCM) mode. SCM follows the NSIV paradigm proposed by Peyrin and Seurin (CRYPTO 2016), where a keyed hash function accepts a nonce N with associated data and a message, yielding an authentication tag T, and then the message is encrypted by a counter-like mode using both T and N. Here we move one step further by encrypting nonces; in the encryption part, the inputs to the block cipher are determined by T, counters, and an encrypted nonce, and all its outputs are also masked by an (additional) encrypted nonce, yielding keystream blocks. As a result, we obtain, for the first time, a block cipher-based authenticated encryption scheme of rate 1/2 that provides n-bit security with respect to the query complexity (ignoring the influence of message length) in the nonce-respecting setting, and at the same time guarantees graceful security degradation in the faulty nonce model, when the underlying n-bit block cipher is modeled as a secure pseudorandom permutation. Seen as a slight variant of GCM-SIV, SCM is also parallelizable and inverse-free, and its performance is still comparable to GCM-SIV.
2021
ASIACRYPT
Strong and Tight Security Guarantees against Integral Distinguishers 📺
Integral attacks are among the classical attack vectors against block ciphers. However, providing arguments that a given cipher is resistant against those attacks is notoriously difficult. In this paper, based solely on the assumption of independent round keys, we develop significantly stronger arguments than what was possible before: our main result shows how to argue that the sum of ciphertexts over any possible subset of plaintexts is key-dependent, i.e., that no integral distinguisher exists.
2021
ASIACRYPT
Secure and Efficient Software Masking on Superscalar Pipelined Processors 📺
Physical side-channel attacks like power analysis pose a serious threat to cryptographic devices in real-world applications. Consequently, devices implement algorithmic countermeasures like masking. In the past, works on the design and verification of masked software implementations have mostly focused on simple microprocessors that find usage on smart cards. However, many other applications, such as in the automotive industry, require side-channel protected cryptographic computations on much more powerful CPUs. In such situations, the security loss due to complex architectural side-effects, the corresponding performance degradation, as well as the question of suitable probing models and verification techniques are still vastly unexplored. We answer these questions and perform a comprehensive analysis of more complex processor architectures in the context of masking-related side effects. First, we analyze the RISC-V SweRV core — featuring a 9-stage pipeline, two execution units, and load/store buffers — and point out a significant gap between security in a simple software probing model and practical security on such CPUs. More concretely, we show that architectural side effects of complex CPU architectures can significantly reduce the protection order of masked software, both via formal analysis in the hardware probing model and empirically via gate-level timing simulations. We then discuss the options of fixing these problems in hardware or leaving them as constraints to software. Based on these software constraints, we formulate general rules for the design of masked software on more complex CPUs. Finally, we compare several implementation strategies for masking schemes and show in a case study that designing secure masked software for complex CPUs is still possible, with overhead as low as 13%.
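As background, a minimal Python sketch of first-order Boolean masking, the countermeasure whose protection order such architectural side effects can reduce (illustrative only):

    import secrets

    def mask(x):
        """Split byte x into two shares whose XOR equals x."""
        r = secrets.randbits(8)
        return (r, x ^ r)

    def masked_xor(a, b):
        # XOR is linear, so it can be computed share-wise, without ever
        # recombining the shares (which would expose the secret value).
        return (a[0] ^ b[0], a[1] ^ b[1])

    def unmask(s):
        return s[0] ^ s[1]

    x, y = 0x3A, 0xC5
    print(hex(unmask(masked_xor(mask(x), mask(y)))))  # 0xff == 0x3a ^ 0xc5
    # The paper's concern: buffers and forwarding paths in superscalar cores
    # can combine shares in hardware, silently lowering the protection order.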
2021
ASIACRYPT
NTRU Fatigue: How Stretched is Overstretched? 📺
Until recently, lattice reduction attacks on NTRU lattices were thought to behave similarly to attacks on (ring-)LWE lattices with the same parameters. However, several works (Albrecht-Bai-Ducas 2016, Kirchner-Fouque 2017) showed a significant gap for large moduli $q$, the so-called overstretched regime of NTRU. With the NTRU scheme being a finalist of the NIST PQC competition, it is important to understand---both asymptotically and concretely---where the fatigue point lies exactly, i.e., at which $q$ the overstretched regime begins. Unfortunately, the analysis by Kirchner and Fouque is based on an impossibility argument, which only results in an asymptotic upper bound on the fatigue point. It also does not really {\em explain} how lattice reduction actually recovers secret-key information. We propose a new analysis that asymptotically improves on that of Kirchner and Fouque, narrowing down the fatigue point for ternary NTRU from $q \leq n^{2.783+o(1)}$ to $q=n^{2.484+o(1)}$, and finally explaining the mechanism behind this phenomenon. We push this analysis further to a concrete one, settling the fatigue point at $q \approx 0.004 \cdot n^{2.484}$, and allowing precise hardness predictions in the overstretched regime. These predictions are backed by extensive experiments.
2021
ASIACRYPT
On Time-Lock Cryptographic Assumptions in Abelian Hidden-Order Groups 📺
In this paper we study cryptographic finite abelian groups of unknown order and hardness assumptions in these groups. Abelian groups necessitate multiple group generators, which may be chosen at random. We formalize this setting and the hardness assumptions therein. Furthermore, we generalize the algebraic group model and the strong algebraic group model from cyclic groups to arbitrary finite abelian groups of unknown order. Building on these formalizations, we present techniques to deal with this new setting, and prove new reductions. These results are relevant for class groups of imaginary quadratic number fields and for time-lock cryptography built upon them.
2021
ASIACRYPT
Quantum Linearization Attacks 📺
Recent works have shown that quantum period-finding can be used to break many popular constructions (e.g., the Even-Mansour block cipher and multiple MAC and AE modes) in the superposition query model. So far, all the constructions broken exhibited a strong algebraic structure, which makes it possible to craft a periodic function of a single input block. Recovering the secret period allows one to recover a key, to distinguish, or to break the confidentiality or authenticity of these modes. In this paper, we introduce the \emph{quantum linearization attack}, a new way of using Simon's algorithm to target MACs in the superposition query model. Specifically, we use inputs of multiple blocks as an interface to a function hiding a linear structure; recovering this structure allows forgeries. We also present some variants of this attack that use other quantum algorithms, which are much less common in quantum symmetric cryptanalysis: Deutsch's, Bernstein-Vazirani's, and Shor's. To the best of our knowledge, this is the first time these algorithms have been used in quantum forgery or key-recovery attacks. Our attack breaks many parallelizable MACs such as {\sf LightMAC}, {\sf PMAC}, and numerous variants with (classical) beyond-birthday-bound security ({\sf LightMAC+}, {\sf PMAC+}) or using tweakable block ciphers ({\sf ZMAC}). More generally, it shows that constructing parallelizable quantum-secure PRFs might be a challenging task.
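For intuition, the single-block hidden-period structure that prior Simon-based attacks exploit can be checked classically in a few lines (the Even-Mansour example below is the well-known one from earlier work, not this paper's multi-block attack):

    import random

    n = 8
    P = list(range(2 ** n)); random.shuffle(P)   # public random permutation
    k1 = random.randrange(2 ** n)
    k2 = random.randrange(2 ** n)

    def EM(x):                                   # Even-Mansour: P(x ^ k1) ^ k2
        return P[x ^ k1] ^ k2

    def f(x):                                    # f(x) = EM(x) ^ P(x)
        return EM(x) ^ P[x]

    # f hides the period k1: f(x ^ k1) == f(x) for every x, which Simon's
    # algorithm recovers with O(n) superposition queries.
    print(all(f(x ^ k1) == f(x) for x in range(2 ** n)))   # True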
2021
ASIACRYPT
Giving an Adversary Guarantees (Or: How to Model Designated Verifier Signatures in a Composable Framework) 📺
When defining a security notion, one typically specifies what dishonest parties cannot achieve. For example, communication is confidential if a third party cannot learn anything about the messages being transmitted, and it is authentic if a third party cannot impersonate the real (honest) sender. For certain applications, however, security crucially relies on giving dishonest parties certain capabilities. As an example, in Designated Verifier Signature (DVS) schemes, one captures that only the designated verifier can be convinced of the authenticity of a message by guaranteeing that any dishonest party can forge signatures which look indistinguishable (to a third party) from original ones created by the sender. However, composable frameworks typically cannot model such guarantees, as they are only designed to bound what a dishonest party can do. In this paper we show how to model such guarantees---that dishonest parties must have some capability---in the Constructive Cryptography (CC) framework (Maurer and Renner, ICS 2011). More concretely, we give the first composable security definitions for Multi-Designated Verifier Signature (MDVS) schemes---a generalization of DVS schemes. The ideal world is defined as the intersection of two worlds: the first captures authenticity in the usual way, while the second provides the guarantee that a dishonest party can forge signatures. By taking the intersection we obtain an ideal world with the desired properties. We also compare our composable definitions to existing security notions for MDVS schemes from the literature. We find that only recently, 23 years after the introduction of MDVS schemes, sufficiently strong security notions were introduced (Damgård et al., TCC 2020). As we prove, however, these notions are still strictly stronger than necessary.
2021
ASIACRYPT
Tardigrade: An Atomic Broadcast Protocol for Arbitrary Network Conditions 📺
We study the problem of \emph{atomic broadcast}---the underlying problem addressed by blockchain protocols---in the presence of a malicious adversary who corrupts some fraction of the $n$ parties running the protocol. Existing protocols are either robust for any number of corruptions in a \emph{synchronous} network (where messages are delivered within some known time~$\Delta$) but fail if the synchrony assumption is violated, or tolerate fewer than $n/3$ corrupted parties in an \emph{asynchronous} network (where messages can be delayed arbitrarily) and cannot tolerate more corruptions even if the network happens to be well behaved. We design an atomic broadcast protocol (TARDIGRADE) that, for any $t_s \geq t_a$ with $2t_s + t_a < n$, provides security against $t_s$ corrupted parties if the network is synchronous, while remaining secure when $t_a$ parties are corrupted even in an asynchronous network. We show that TARDIGRADE achieves optimal tradeoffs between $t_s$ and~$t_a$. Finally, we show a second protocol (UPGRADE) with similar (but slightly weaker) guarantees that achieves per-transaction communication complexity linear in~$n$.
2021
ASIACRYPT
Revisiting Homomorphic Encryption Schemes for Finite Fields 📺
The Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski/Fan-Vercauteren (BFV) schemes are the two main homomorphic encryption (HE) schemes to perform exact computations over finite fields and integers. Although the schemes work with the same plaintext space, there are significant differences in their noise management, algorithms for the core homomorphic multiplication operation, message encoding, and practical usability. The main goal of our work is to revisit both schemes, focusing on closing the gap between the schemes by improving their noise growth, computational complexity of the core algorithms, and usability. The other goal of our work is to provide both theoretical and experimental performance comparison of BGV and BFV. More precisely, we propose an improved variant of BFV where the encryption operation is modified to significantly reduce the noise growth, which makes the BFV noise growth somewhat better than for BGV (in contrast to prior results showing that BGV has smaller noise growth for larger plaintext moduli). We also modify the homomorphic multiplication procedure, which is the main bottleneck in BFV, to reduce its algorithmic complexity. Our work introduces several other novel optimizations, including lazy scaling in BFV homomorphic multiplication and an improved BFV decryption procedure in the Residue Number System (RNS) representation. We also develop a usable variant of BGV as a more efficient alternative to BFV for common practical scenarios. We implement our improved variants of BFV and BGV in PALISADE and evaluate their experimental performance for several benchmark computations. The experimental results suggest that our BGV implementation is faster for intermediate and large plaintext moduli, which are often used in practical scenarios with ciphertext packing, while our BFV implementation is faster for small plaintext moduli.
2021
ASIACRYPT
Quantum Computationally Predicate-Binding Commitments with Application in Quantum Zero-Knowledge Arguments for NP 📺
A quantum bit commitment scheme realizes bit (rather than qubit) commitment by exploiting quantum communication and quantum computation. In this work, we study the binding property of the quantum string commitment scheme obtained by composing a generic quantum perfectly (resp. statistically) hiding, computationally binding bit commitment scheme (which can be realized based on quantum-secure one-way permutations (resp. functions)) in parallel. We show that the resulting scheme satisfies a stronger quantum computational binding property, which we call predicate-binding, than the trivial honest-binding. Intuitively and very roughly, the predicate-binding property guarantees that given any inconsistent predicate pair over a set of strings (i.e., no string in this set can satisfy both predicates), if a (claimed) quantum commitment can be opened so that the revealed string satisfies one predicate with certainty, then the same commitment cannot be opened so that the revealed string satisfies the other predicate (except with negligible probability). As an application, we plug a generic quantum perfectly (resp. statistically) hiding, computationally binding bit commitment scheme into Blum's zero-knowledge protocol for the NP-complete language Hamiltonian Cycle. This gives rise to the first quantum perfect (resp. statistical) zero-knowledge argument system (with soundness error 1/2) for all NP languages based solely on quantum-secure one-way permutations (resp. functions). The quantum computational soundness of this system follows immediately from the quantum computational predicate-binding property of the commitments.
2021
ASIACRYPT
(Compact) Adaptively Secure FE for Attribute-Weighted Sums from k-Lin 📺
This paper presents the first adaptively simulation-secure functional encryption (FE) schemes for attribute-weighted sums. In such an FE scheme, encryption takes as input $N$ attribute pairs $\{(x_i, z_i)\}_{i \in [N]}$ for some $N \in \mathbb{N}$, where the attributes $\{x_i\}_{i \in [N]}$ are public while the attributes $\{z_i\}_{i \in [N]}$ are private. The indices $i \in [N]$ are referred to as the slots. A secret key corresponds to some weight function $f$, and decryption recovers the weighted sum $\sum_{i \in [N]} f(x_i)z_i$. This is an important functionality with a wide range of potential real-life applications. In the proposed FE schemes, attributes are viewed as vectors and weight functions are arithmetic branching programs (ABP). We present two schemes with varying parameters and levels of adaptive security. (a) We first present a one-slot scheme that achieves adaptive security in the simulation-based security model against a bounded number of ciphertext queries and an arbitrary polynomial number of secret key queries both before and after the ciphertext queries. This is the best possible level of security one can achieve in the adaptive simulation-based framework. From the relations between the simulation-based and indistinguishability-based security frameworks for FE, it follows that the proposed FE scheme also achieves indistinguishability-based adaptive security against an a-priori unbounded number of ciphertext queries and an arbitrary polynomial number of secret key queries both before and after the ciphertext queries. Moreover, the scheme enjoys compact ciphertexts that do not grow with the number of appearances of the attributes within the weight functions. (b) Next, bootstrapping from the one-slot scheme, we present an unbounded-slot scheme that achieves simulation-based adaptive security against a bounded number of ciphertext and pre-ciphertext secret key queries while supporting an a-priori unbounded number of post-ciphertext secret key queries. The scheme achieves public parameters and secret key sizes independent of the number of slots $N$, and a secret key can decrypt a ciphertext for any a-priori unbounded $N$. Further, just like the one-slot scheme, this scheme has ciphertext size independent of the number of appearances of the attributes within the weight functions. However, all the parameters of the scheme, namely the master public key, ciphertexts, and secret keys, scale linearly with the bound on the number of pre-ciphertext secret key queries. Our schemes are built upon asymmetric bilinear groups of prime order, and security is derived under the standard (bilateral) $k$-Linear ($k$-Lin) assumption. Our work resolves an open problem posed by Abdalla, Gong, and Wee in CRYPTO 2020, where they presented an unbounded-slot FE scheme for attribute-weighted sums achieving only semi-adaptive simulation security. At a technical level, our work extends the recent adaptive security framework of Lin and Luo [EUROCRYPT 2020], devised to achieve compact ciphertexts in the context of indistinguishability-based payload-hiding security, into the setting of simulation-based adaptive attribute-hiding security.
2021
ASIACRYPT
Compressed Sigma-Protocols for Bilinear Group Arithmetic Circuits and Application to Logarithmic Transparent Threshold Signatures 📺
Lai et al. (CCS 2019) have shown how Bulletproof’s arithmetic circuit zero-knowledge protocol (Bootle et al., EUROCRYPT 2016 and Bünz et al., S\&P 2018) can be generalized to work for bilinear group arithmetic circuits directly, i.e., without requiring these circuits to be translated into arithmetic circuits. In a nutshell, a bilinear group arithmetic circuit is a standard arithmetic circuit augmented with special gates capturing group exponentiations or pairings. Such circuits are highly relevant, e.g., in the context of zero-knowledge statements over pairing-based languages. As expressing these special gates in terms of a standard arithmetic circuit results in a significant overhead in circuit size, an approach to zero-knowledge via standard arithmetic circuits may incur substantial additional costs. The approach due to Lai et al. shows how to avoid this by integrating additional zero-knowledge techniques into the Bulletproof framework so as to handle the special gates very efficiently. We take a different approach by generalizing {\em Compressed $\Sigma$-Protocol Theory} (CRYPTO 2020) from arithmetic circuit relations to bilinear group arithmetic circuit relations. Besides its conceptual simplicity, our approach has the practical advantage of reducing the communication costs of Lai et al.'s protocol by roughly a multiplicative factor $3$. Finally, we show an application of our results which may be of independent interest. We construct the first $k$-out-of-$n$ threshold signature scheme (TSS) that allows for transparent setup {\em and} that yields threshold signatures of size logarithmic in $n$. The threshold signature hides the identities of the $k$ signers and the threshold $k$ can be dynamically chosen at aggregation time.
2021
ASIACRYPT
Luby-Rackoff Backwards with More Users and More Security 📺
It is known, from the work of Dai \textit{et al.} (CRYPTO '17), that the PRF advantage of $\mathsf{XORP}$ (the bitwise xor of two outputs of an $n$-bit random permutation on domain-separated inputs), against an adversary making $q$ queries, is about $q/2^n$ for $q \leq 2^{n-5}$. The same bound can easily be shown to hold for $\mathsf{XORP}[k]$ (the bitwise xor of $k$ outputs of an $n$-bit random permutation on domain-separated inputs) for $k \geq 3$. In this work, we first consider the multi-user security of $\mathsf{XORP}[3]$. We show that the multi-user PRF advantage of $\mathsf{XORP}[3]$ is about $\sqrt{u q_{\max}}/2^n$ for all $q_{\max} \leq 2^{n}/12$, where $u$ is the number of users and $q_{\max}$ is the maximum number of queries the adversary can make to each user. In the multi-user setup, this implies that $\mathsf{XORP}[3]$ remains secure for $O(2^n)$ users even when almost $O(2^n)$ queries are allowed per user. This also indicates a significant improvement in the single-user setup ({\em i.e.}, when $u = 1$), where the distinguishing advantage of the adversary, even after making $O(2^n)$ queries, is $O(1/\sqrt{2^n})$, {\em i.e.}, negligible. Subsequently, we consider a simple efficient variant of $\mathsf{XORP}[3]$ in which we use five calls to produce a $2n$-bit output (instead of six calls in the case of $\mathsf{XORP}[3]$). This variant achieves a similar level of security. As an immediate application, we can construct a variant of block cipher based counter mode that provides much higher security (in both the single-user and the multi-user setup) than the security of the encryption part of GCM, at the cost of efficiency.
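A toy instantiation of $\mathsf{XORP}[k]$ with domain separation (illustrative parameters of our choosing; the index bits prepended to the input play the role of the domain separation):

    import random

    n = 10                                     # permutation width in bits
    perm = list(range(2 ** n)); random.shuffle(perm)

    def xorp(x, k=3):
        """XORP[k]: XOR the permutation outputs on k domain-separated
        copies of x; x must fit in n minus the separation bits."""
        sep = max(1, (k - 1).bit_length())     # bits reserved for the index
        assert 0 <= x < 2 ** (n - sep)
        out = 0
        for i in range(k):
            out ^= perm[(i << (n - sep)) | x]
        return out

    print(xorp(0x5A))   # one (10-bit) PRF output from 3 permutation calls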
2021
ASIACRYPT
Lattice-Based Group Encryption with Full Dynamicity and Message Filtering Policy 📺
Group encryption (GE) is a fundamental privacy-preserving primitive, the encryption analogue of group signatures, which allows users to decrypt specific ciphertexts while hiding themselves within a crowd. Since its introduction, numerous constructions have been proposed, among which the scheme of Libert et al. (Asiacrypt 2016) over lattices and that of Nguyen et al. (PKC 2021) over coding theory are post-quantum secure. Though the latter scheme achieved, for the first time, full dynamicity (allowing group users to join or leave the group at their ease) and a message filtering policy, which greatly improved the state of affairs of GE systems, its practical applications are still limited due to its rather complicated design, its inefficiency, and its weaker security (proven only in the random oracle model). The scheme of Libert et al., in turn, possesses solid security (proven in the standard model), but it lacks these functionalities and still suffers from inefficiency because of its extensive use of lattice trapdoors. In this work, we re-formalize the model and security definitions of fully dynamic group encryption (FDGE) in a way that is essentially equivalent to, but more succinct than, that of Nguyen et al. We then provide a generic and efficient zero-knowledge proof method for proving that a binary vector is non-zero over lattices, from which a proof for the Prohibitive message filtering policy in the lattice setting is achieved for the first time, and in a simple manner. Finally, by combining appropriate cryptographic materials and our zero-knowledge proofs, we achieve the first lattice-based FDGE scheme in a simpler manner; it needs no lattice trapdoor and is proved secure in the standard model (assuming interaction during the proof phase), outperforming existing post-quantum secure GE systems in terms of functionality, efficiency and security.
2021
ASIACRYPT
On the hardness of the NTRU problem 📺
The 25-year-old NTRU problem is an important computational assumption in public-key cryptography. However, from a reduction perspective, its relative hardness compared to other problems on Euclidean lattices is not well understood. Its decision version reduces to the search Ring-LWE problem, but this only provides a hardness upper bound. We provide two answers to the long-standing open problem of providing reduction-based evidence of the hardness of the NTRU problem. First, we reduce the worst-case approximate Shortest Vector Problem over ideal lattices to an average-case search variant of the NTRU problem. Second, we reduce another average-case search variant of the NTRU problem to the decision NTRU problem.
2021
ASIACRYPT
Snarky Ceremonies 📺
Succinct non-interactive arguments of knowledge (SNARKs) have found numerous applications in the blockchain setting and elsewhere. The most efficient SNARKs require a distributed ceremony protocol to generate public parameters, also known as a structured reference string (SRS). Our contributions are two-fold: – We give a security framework for non-interactive zero-knowledge arguments with a ceremony protocol. – We revisit the ceremony protocol of Groth's SNARK [Bowe et al., 2017]. We show that the original construction can be simplified and optimized, and then prove its security in our new framework. Importantly, our construction avoids the random beacon model used in the original work.
2021
ASIACRYPT
Reverse Firewalls for Adaptively Secure MPC without Setup 📺
We study multi-party computation (MPC) in the setting of subversion, where the adversary tampers with the machines of honest parties. Our goal is to construct actively secure MPC protocols where parties are corrupted adaptively by an adversary (as in the standard adaptive security setting), and in addition, honest parties' machines are compromised. The idea of reverse firewalls (RF) was introduced at EUROCRYPT'15 by Mironov and Stephens-Davidowitz as an approach to protecting protocols against corruption of honest parties' devices. Intuitively, an RF for a party $\mathcal{P}$ is an external entity that sits between $\mathcal{P}$ and the outside world and whose scope is to sanitize $\mathcal{P}$’s incoming and outgoing messages in the face of subversion of its machine. Mironov and Stephens-Davidowitz constructed a protocol for passively-secure two-party computation. At CRYPTO'20, Chakraborty, Dziembowski and Nielsen constructed a protocol for secure computation with firewalls that improved on this result, both by extending it to a \textit{multi}-party computation protocol, and by considering \textit{active} security in the presence of \textit{static} corruptions. In this paper, we initiate the study of RF for MPC in the \textit{adaptive} setting. We put forward a definition for adaptively secure MPC in the reverse firewall setting, explore relationships among the security notions, and then construct reverse firewalls for MPC in this stronger setting of adaptive security. We also resolve the open question of Chakraborty, Dziembowski and Nielsen by removing the need for a trusted setup in constructing RF for MPC. Towards this end, we construct reverse firewalls for adaptively secure augmented coin tossing and adaptively secure zero-knowledge protocols and obtain a constant round adaptively secure MPC protocol in the reverse firewall setting without setup. Along the way, we propose a new multi-party adaptively secure coin tossing protocol in the plain model, which is of independent interest.
2021
ASIACRYPT
Algebraic Adversaries in the Universal Composability Framework 📺
The algebraic-group model (AGM), which lies between the generic group model and the standard model of computation, provides a means by which to analyze the security of cryptosystems against so-called algebraic adversaries. We formalize the AGM within the framework of universal composability, providing formal definitions for this setting and proving an appropriate composition theorem. This extends the applicability of the AGM to more-complex protocols, and lays the foundations for analyzing algebraic adversaries in a composable fashion. Our results also clarify the meaning of composing proofs in the AGM with other proofs and they highlight a natural form of independence between idealized groups that seems inherent to the AGM and has not been made formal before---these insights also apply to the composition of game-based proofs in the AGM. We show the utility of our model by proving several important protocols universally composable for algebraic adversaries, specifically: (1) the Chou-Orlandi protocol for oblivious transfer, and (2) the SPAKE2 and CPace protocols for password-based authenticated key exchange.
2021
ASIACRYPT
Categorization of Faulty Nonce Misuse Resistant Message Authentication 📺
A growing number of lightweight block ciphers are proposed for environments such as the Internet of Things. An important contribution to their reduced implementation cost is a block length $n$ of 64 or 96 bits rather than 128 bits. As a consequence, encryption modes and message authentication code (MAC) algorithms require security beyond the $2^{n/2}$ birthday bound. This paper provides an extensive treatment of MAC algorithms that offer beyond-birthday-bound PRF security for both nonce-respecting and nonce-misusing adversaries. We study constructions that use two block cipher calls, one universal hash function call, and an arbitrary number of XOR operations. We start with the separate problem of generically identifying all possible secure $n$-to-$n$-bit pseudorandom functions (PRFs) based on two block cipher calls. The analysis shows that the existing constructions EDM, SoP, and EDMD are the only constructions of this kind that achieve beyond-birthday-bound security. Subsequently, we deliver an exhaustive treatment of MAC algorithms where the outcome of a universal hash function evaluation on the message may be entered at any point in the computation of the PRF. We conclude that there are nine schemes in total that achieve beyond-birthday-bound security, and a tenth construction that cannot be proven using currently known proof techniques. Of these nine MAC algorithms, three achieve optimal $n$-bit security in the nonce-respecting setting but are completely insecure if the nonce is reused. The remaining six constructions have $3n/4$-bit security in the nonce-respecting setting, and only four of these six constructions still achieve beyond-birthday-bound security in the case of nonce misuse.
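For concreteness, toy versions of two of the two-call PRFs named above, built from random permutations (tiny illustrative block size):

    import random

    n = 8
    E1 = list(range(2 ** n)); random.shuffle(E1)   # two independent
    E2 = list(range(2 ** n)); random.shuffle(E2)   # random permutations

    def edm(x):     # Encrypted Davies-Meyer: E2(E1(x) ^ x)
        return E2[E1[x] ^ x]

    def sop(x):     # Sum of Permutations: E1(x) ^ E2(x)
        return E1[x] ^ E2[x]

    print(edm(0x21), sop(0x21))
    # In the MACs studied above, a universal-hash image of the message is
    # XORed in at some point of such a two-block-cipher-call structure.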
2021
ASIACRYPT
Better Security-Efficiency Trade-Offs in Permutation-Based Two-Party Computation 📺
We improve upon the security of (tweakable) correlation-robust hash functions, which are essential components of garbling schemes and oblivious-transfer extension schemes. We focus in particular on constructions from permutations, and improve upon the work by Guo et al. (IEEE S\&P '20) in terms of security and efficiency. We present a tweakable one-call construction which matches the security of the most secure two-call construction -- the resulting security bound takes the form $O((p+q)q/2^n)$, where $q$ is the number of construction evaluations and $p$ is the number of direct adversarial queries to the underlying $n$-bit permutation, which is modeled as random. Moreover, we present a new two-call construction with much better security degradation -- in particular, for applications of interest, where only a constant number of evaluations per tweak are made, the security degrades as $O((\sqrt{q}\,p+q^2)/2^n)$. Our security proof relies on the sum-capture theorems (Babai '02; Steinberger '12; Cogliati and Seurin '18), as well as on new balls-into-bins combinatorial lemmas for limited-independence ball throws. Of independent interest, we also provide a self-contained concrete security treatment of oblivious transfer extension.
2021
ASIACRYPT
A Practical Key-Recovery Attack on 805-Round Trivium 📺
The cube attack is one of the most important cryptanalytic techniques against Trivium. Many key-recovery attacks based on cube attacks have been established, yet few can recover the 80-bit full key in practice. In particular, the previous best practical key-recovery attack, by Fouque and Vannet at FSE 2013, targeted 784-round Trivium. Mounting practical key-recovery attacks requires a sufficient number of low-degree superpolies, which is difficult to obtain both for experimental cube attacks and for division-property-based cube attacks with randomly selected cubes, due to their limited efficiency. In this paper, we give a new algorithm to construct candidate cubes targeting linear superpolies. Our experiments show a 100% success rate for finding linear superpolies using the constructed cubes, and we obtain over 1000 linear superpolies for 805-round Trivium. With 42 independent linear superpolies, we mount a practical key-recovery attack on 805-round Trivium, increasing the number of attacked rounds by 21. The complexity of our attack is $2^{41.40}$, and it can be carried out on a PC with a GTX-1080 GPU in several hours.
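The core cube-attack step, summing the output over a cube to obtain its superpoly, can be sketched as follows (the Boolean function below is a hypothetical stand-in for round-reduced Trivium's output bit):

    from itertools import product

    def f(key, iv):
        # hypothetical stand-in: the superpoly of cube {iv0, iv1} is k0 ^ k2
        return (iv[0] & iv[1] & (key[0] ^ key[2])) ^ (iv[0] & key[1]) ^ iv[2]

    def superpoly(key, cube, iv_len=3):
        """Sum (XOR) f over all assignments of the cube bits, with the
        non-cube IV bits fixed to zero."""
        acc = 0
        for bits in product([0, 1], repeat=len(cube)):
            iv = [0] * iv_len
            for pos, b in zip(cube, bits):
                iv[pos] = b
            acc ^= f(key, iv)
        return acc

    key = [1, 0, 1]
    print(superpoly(key, [0, 1]))   # equals k0 ^ k2 = 0: a linear equation
    # With enough independent linear superpolies (42 in the attack above),
    # the key is pinned down up to a practically searchable set.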
2021
ASIACRYPT
Automatic Classical and Quantum Rebound Attacks on AES-like Hashing by Exploiting Related-key Differentials 📺
Collision attacks on AES-like hashing (hash functions constructed by plugging AES-like ciphers or permutations into the famous PGV modes or their variants) can be reduced to the problem of finding a pair of inputs respecting a differential of the underlying AES-like primitive whose input and output differences are the same. The rebound attack due to Mendel et al. is a powerful tool for achieving this goal; its quantum version was first considered by Hosoyamada and Sasaki at EUROCRYPT 2020. In this work, we automate the process of searching for rebound-attack configurations by taking related-key differentials of the underlying block cipher into account with an MILP-based approach. In the quantum setting, our model guides the search towards characteristics that minimize the resources (e.g., QRAM) and complexities of the resulting rebound attacks. We apply our method to Saturnin-hash, Skinny, and Whirlpool and obtain improved results.
2021
ASIACRYPT
On the non-tightness of measurement-based reductions for key encapsulation mechanism in the quantum random oracle model 📺
Key encapsulation mechanism (KEM) variants of the Fujisaki-Okamoto (FO) transformation (TCC 2017), which turn a weakly secure public-key encryption (PKE) scheme into an IND-CCA-secure KEM, were widely used among the KEM submissions to the NIST Post-Quantum Cryptography Standardization Project. Under the standard CPA security assumptions, i.e., OW-CPA and IND-CPA, the security of these variants in the quantum random oracle model (QROM) has been proved by black-box reductions, e.g., Jiang et al. (CRYPTO 2018), and by non-black-box reductions (EUROCRYPT 2020). The non-black-box reductions (EUROCRYPT 2020) have a linear security loss, but only apply to specific \emph{reversible} adversaries with strictly \emph{reversible} implementations. In contrast, the existing black-box reductions in the literature apply to arbitrary adversaries with arbitrary implementations, but suffer a quadratic security loss. In this paper, for KEM variants of the FO transformation, we first show the tightness limits of black-box reductions: we prove that a \emph{measurement-based} reduction in the QROM from breaking the standard OW-CPA (or IND-CPA) security of the underlying PKE to breaking the IND-CCA security of the resulting KEM will \emph{inevitably} incur a quadratic loss of security, where ``measurement-based'' means that the reduction measures a hash query from the adversary and uses the measurement outcome to break the underlying security of the PKE. In particular, most black-box reductions for these FO-like KEM variants are of this type, and our results suggest an explanation for the lack of progress in improving the tightness of these reductions in terms of the degree of security loss. We then further show that the quadratic loss is also unavoidable when one turns a search problem into a decision problem using the one-way to hiding technique in a black-box manner, which has been recognized as an essential technique for proving the security of cryptosystems involving quantum random oracles.
2021
ASIACRYPT
Simulation-Based Bi-Selective Opening Security for Public Key Encryption 📺
Selective opening attacks (SOA) (for public-key encryption, PKE) concern a multi-user scenario in which an adversary adaptively corrupts some fraction of the users to break into a subset of honestly created ciphertexts, and tries to learn information on the messages of some unopened (but potentially related) ciphertexts. Until now, the notion of selective opening attacks has been considered in only two settings: sender selective opening (SSO), where some senders are corrupted and messages together with the randomness used for encryption are revealed; and receiver selective opening (RSO), where some receivers are corrupted and messages together with the secret keys for decryption are revealed. In this paper, we consider a more natural and general setting for selective opening security, in which the adversary may adaptively corrupt senders and receivers \emph{simultaneously}, obtaining the plaintext messages together with the internal randomness for encryption and the secret keys for decryption, while the messages of uncorrupted parties should remain protected. We denote this as Bi-SO security, since it is reminiscent of Bi-Deniability for PKE. We first formalize the requirement of Bi-SO security in the simulation-based (SIM) style, and prove that some practical PKE schemes achieve SIM-Bi-$\text{SO}$-CCA security in the random oracle model. Then, we suggest a weaker model of Bi-SO security, denoted SIM-wBi-$\text{SO}$-CCA security, and argue that it is still meaningful and useful. We propose a generic construction of PKE schemes achieving SIM-wBi-$\text{SO}$-CCA security in the standard model and instantiate it from various standard assumptions. Our generic construction is built on a newly presented primitive, namely a universal$_{\kappa}$ hash proof system with key equivocability, which may be of independent interest.
2021
ASIACRYPT
Modular Design of Role-Symmetric Authenticated Key Exchange Protocols 📺
Authenticated Key Exchange (AKE) is an important primitive in applied cryptography. Previously, several strong models of AKE were introduced, e.g., CK, CK+, eCK, and their extended versions considering perfect forward secrecy (PFS) (denoted by a ``-PFS'' suffix). These models provide different security guarantees and are incomparable. Hence, one still lacks a systematic understanding of the prerequisites for secure AKE and a modular design for AKE protocols. In this paper, we investigate this issue in the context of One-Round Authenticated Key Exchange (ORKE), which is role-symmetric for players and needs only a single round to establish a session key. Our treatment is as follows. First, we reformulate the CK, CK-PFS, CK+, CK+-PFS, eCK and eCK-PFS models in the context of ORKE, some of which are formulated for the first time in the literature. Next, we introduce a new tool, the Key-wise Recoverable Function (KRF). With merely black-box calls to KRFs, we build modular constructions for ORKEs. As an immediate application, many previous protocols can be explained naturally by the construction. We further build a protocol with CK, CK+, eCK, CK-PFS, CK+-PFS and eCK-PFS security simultaneously, by properly instantiating the underlying KRF. As a by-product, we obtain simplified proofs for a few known protocols, while avoiding non-standard assumptions.
2021
ASIACRYPT
A formula for disaster: a unified approach to elliptic curve special-point-based attacks 📺
The Refined Power Analysis, Zero-Value Point, and Exceptional Procedure attacks introduced side-channel techniques against specific cases of elliptic curve cryptography. The three attacks recover bits of a static ECDH key adaptively, collecting information on whether a certain multiple of the input point was computed. We unify and generalize these attacks in a common framework, and solve the corresponding problem for a broader class of inputs. We also introduce a version of the attack against windowed scalar multiplication methods, recovering the full scalar instead of just a part of it. Finally, we systematically analyze elliptic curve point addition formulas from the Explicit-Formulas Database, classify all non-trivial exceptional points, and find them in new formulas. These results indicate the usefulness of our tooling, which we released publicly, for unrolling formulas and finding special points, and potentially for independent future work.
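To illustrate what an exceptional input looks like, consider the textbook affine addition formula on a short Weierstrass curve (toy curve of our choosing); it divides by $x_2 - x_1$ and thus fails exactly on inputs with $x_1 = x_2$:

    p = 97
    # toy curve y^2 = x^3 + 2x + 3 over F_p

    def affine_add(P, Q):
        (x1, y1), (x2, y2) = P, Q
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p   # breaks when x1 == x2
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    P, Q = (0, 10), (3, 6)            # both satisfy the curve equation mod 97
    print(affine_add(P, Q))           # fine: x1 != x2
    try:
        affine_add(P, P)              # exceptional input: inverse of 0 mod p
    except ValueError as e:
        print("exceptional point:", e)
    # Special-point attacks steer a scalar multiplication into such cases and
    # observe, via side channels or faults, whether the computation misbehaves.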
2021
ASIACRYPT
Security Analysis of CPace 📺
In response to standardization requests regarding password-authenticated key exchange (PAKE) protocols, the IRTF working group CFRG set up a PAKE selection process in 2019, which led to the selection of the CPace protocol in the balanced setting, in which parties share a common password. In subsequent standardization efforts, the CPace protocol was further developed, yielding a protocol family whose actual security guarantees in practical settings are not well understood. In this paper, we provide a comprehensive security analysis of CPace in the universal composability framework. Our analysis is realistic in the sense that it captures adaptive corruptions and refrains from modeling CPace's MapToPoint function, which maps field elements to curve points, as an idealized function. In order to extend our proofs to different CPace variants optimized for specific elliptic-curve ecosystems, we employ a new approach which represents the assumptions required by the proof as libraries accessed by a simulator. By allowing for the modular replacement of assumptions used in the proof, this new approach avoids a repeated analysis of unchanged protocol parts and lets us efficiently analyze the security guarantees of all the different CPace variants. As a result of our analysis, all of the investigated practical CPace variants enjoy adaptive UC security.
2021
ASIACRYPT
Cryptanalysis of an oblivious PRF from supersingular isogenies 📺
We cryptanalyse the SIDH-based oblivious pseudorandom function from supersingular isogenies proposed at Asiacrypt'20 by Boneh, Kogan and Woo. To this end, we give an attack on an assumption, the auxiliary one-more assumption, that was introduced by Boneh et al. and we show that this leads to an attack on the oblivious PRF itself. The attack breaks the pseudorandomness as it allows adversaries to evaluate the OPRF without further interactions with the server after some initial OPRF evaluations and some offline computations. More specifically, we first propose a polynomial-time attack. Then, we argue it is easy to change the OPRF protocol to include some countermeasures, and present a second subexponential attack that succeeds in the presence of said countermeasures. Both attacks break the security parameters suggested by Boneh et al. Furthermore, we provide a proof of concept implementation as well as some timings of our attack. Finally, we examine the generation of one of the OPRF parameters and argue that a trusted third party is needed to guarantee provable security.
2021
ASIACRYPT
Boosting the Security of Blind Signature Schemes 📺
Existing blind signature schemes that are secure for polynomially many concurrent executions of the signing protocol are either inefficient or rely on non-standard assumptions (even in the random-oracle model). We show the first efficient blind signature schemes achieving this level of security based on the RSA, quadratic residuosity, and discrete logarithm assumptions (in the random-oracle model). Our core technique involves an extension and generalization of a transform due to Pointcheval (Eurocrypt '98) that allows us to convert certain blind signature schemes that are secure for (concurrently) issuing logarithmically many signatures into ones secure for (concurrently) issuing polynomially many signatures.
2021
ASIACRYPT
The One-More Discrete Logarithm Assumption in the Generic Group Model 📺
The one-more discrete logarithm assumption (OMDL) underlies the security analysis of identification protocols, blind signature and multi-signature schemes, such as blind Schnorr signatures and the recent MuSig2 multi-signatures. As these schemes produce standard Schnorr signatures, they are compatible with existing systems, e.g. in the context of blockchains. OMDL is moreover assumed for many results on the impossibility of certain security reductions. Despite its wide use, OMDL surprisingly lacks any rigorous analysis; there is not even a proof that it holds in the generic group model (GGM). (We show that a claimed proof is flawed.) In this work we give a formal proof of OMDL in the GGM. We also prove a related assumption, the one-more computational Diffie-Hellman assumption, in the GGM. Our proofs deviate from prior GGM proofs and replace the use of the Schwartz-Zippel Lemma by a new argument.
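Since the assumption itself is easy to state, here is a toy rendering of the OMDL experiment over a tiny schoolbook group; it is purely didactic and unrelated to the paper's generic-group proof technique.

import secrets

p, q, g = 23, 11, 2               # g generates the order-q subgroup of Z_p^*

def omdl_experiment(adversary, n=3):
    xs = [secrets.randbelow(q) for _ in range(n + 1)]
    challenges = [pow(g, x, p) for x in xs]
    budget = {'left': n}          # at most n discrete-log oracle queries

    def dlog_oracle(h):
        assert budget['left'] > 0, 'oracle budget exhausted'
        budget['left'] -= 1
        return next(x for x in range(q) if pow(g, x, p) == h)  # toy brute force

    # The adversary wins by returning ALL n+1 discrete logs after only
    # n oracle queries: "one more" dlog than it was handed.
    return adversary(challenges, dlog_oracle) == xs

def lazy_adversary(challenges, oracle):
    # spends its whole budget, then guesses the last discrete log
    return [oracle(h) for h in challenges[:-1]] + [secrets.randbelow(q)]

print(omdl_experiment(lazy_adversary))   # True only with probability 1/q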
2021
ASIACRYPT
New Attacks on LowMC instances with a Single Plaintext/Ciphertext pair 📺
Cryptanalysis of the LowMC block cipher when the attacker has access to a single known plaintext/ciphertext pair is a mathematically challenging problem. This is because the attacker is unable to employ most of the standard techniques in symmetric cryptography like linear and differential cryptanalysis. This scenario is particularly relevant while arguing the security of the Picnic digital signature scheme in which the plaintext/ciphertext pair generated by the LowMC block cipher serves as the public (verification) key and the corresponding LowMC encryption key also serves as the secret (signing) key of the signature scheme. In the paper by Banik et al. (IACR ToSC 2020:4), the authors used a linearization technique of the LowMC S-box to mount attacks on some instances of the block cipher. In this paper, we first make a more precise complexity analysis of the linearization attack. Then, we show how to perform a 2-stage MITM attack on LowMC. The first stage reduces the key candidates corresponding to a fraction of key bits of the master key. The second MITM stage between this reduced candidate set and the remaining fraction of key bits successfully recovers the master key. We show that the combined computational complexity of both these stages is significantly lower than those reported in the ToSC paper by Banik et al.
2021
ASIACRYPT
Tight Security for Key-Alternating Ciphers with Correlated Sub-Keys 📺
A substantial effort has been devoted to proving optimal bounds for the security of key-alternating ciphers with independent sub-keys in the random permutation model (e.g., Chen and Steinberger, EUROCRYPT '14; Hoang and Tessaro, CRYPTO '16). While common in the study of multi-round constructions, the assumption that sub-keys are truly independent is not realistic, as these are generally highly correlated and generated from shorter keys. In this paper, we show the existence of non-trivial distributions of limited independence for which a t-round key-alternating cipher achieves optimal security. Our work is a natural continuation of the work of Chen et al. (CRYPTO '14), which considered the case of t = 2 when all sub-keys are identical. Here, we show that key-alternating ciphers remain secure for a large class of (t-1)-wise and (t-2)-wise independent distributions of sub-keys. Our proofs proceed by generalizations of the so-called Sum-Capture Theorem, which we prove using Fourier-analytic techniques.
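For reference, the object under study is easy to write down; the sketch below is a toy t-round key-alternating cipher over 8-bit blocks with a deliberately correlated sub-key schedule (a stand-in for the limited-independence distributions in the paper; the schedule itself is hypothetical and makes no security claim).

import random

BLOCK, T = 256, 4
rng = random.Random(0)
perms = [rng.sample(range(BLOCK), BLOCK) for _ in range(T)]  # public P_1..P_t

def subkeys(master, t=T):
    # correlated schedule: every sub-key is derived from one master byte
    return [master ^ ((17 * i) % BLOCK) for i in range(t + 1)]

def kac_encrypt(master, x, t=T):
    ks = subkeys(master, t)
    y = x ^ ks[0]                    # initial whitening
    for i in range(t):
        y = perms[i][y] ^ ks[i + 1]  # permute, then add the next sub-key
    return y

print(hex(kac_encrypt(0x2A, 0x17)))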
2021
ASIACRYPT
Astrolabous: A Universally Composable Time Lock Encryption Scheme 📺
In this work, we study the cryptographic primitive called time-lock encryption (TLE). The concept of TLE involves a party initiating the encryption of a message that one can only decrypt after a certain amount of time has elapsed. Following the universal composability (UC) paradigm introduced by Canetti [IEEE FOCS 2001], we formally abstract the concept of TLE into an ideal functionality in a flexible way. In addition, we provide a standalone definition for secure TLE schemes in a game-based style and we devise a hybrid protocol that relies on such a secure TLE scheme. We show that if the underlying TLE scheme satisfies the standalone game-based security definition, then our hybrid protocol UC realises the TLE functionality in the random oracle model. Finally, we present Astrolabous, a TLE construction that satisfies our security definition, leading to the first UC realization of the TLE functionality. Interestingly, it is hard to prove any of the TLE constructions proposed in the literature UC secure. The reason behind this difficulty relates to the UC framework itself. Intuitively, to capture semantic security, no information should be leaked regarding the plaintext in the ideal world, thus the ciphertext should not contain any information relating to the message. On the other hand, all ciphertexts will eventually open, resulting in a trivial distinction of the real from the ideal world in the standard model. We overcome this limitation by extending any secure TLE construction with the techniques of Nielsen [CRYPTO 2002] in the random oracle model. Specifically, the description of the extended TLE algorithms includes calls to the random oracle, allowing our simulator to equivocate. This extension can be applied to any TLE algorithm that satisfies our standalone game-based security definition, and in particular to Astrolabous.
2021
ASIACRYPT
Two-Round Adaptively Secure MPC from Isogenies, LPN, or CDH 📺
We present a new framework for building round-optimal (two-round) adaptively secure MPC. We show that a relatively weak notion of OT that we call indistinguishability OT with receiver oblivious sampleability (r-iOT) is enough to build two-round, adaptively secure MPC against malicious adversaries in the CRS model. We then show how to construct r-iOT from CDH, LPN, or isogeny-based assumptions that can be viewed as group actions (such as CSIDH and CSI-FiSh). This yields the first concrete constructions of two-round adaptively secure MPC against malicious adversaries from CDH, LPN, or isogeny-based assumptions. We further extend our non-isogeny results to the plain model, achieving (to the best of our knowledge) the first construction of two-round adaptively secure MPC against semi-honest adversaries in the plain model from LPN. Our results allow us to build two-round adaptively secure MPC against malicious adversaries from essentially all of the well-studied assumptions in cryptography. In addition, our constructions from isogenies or LPN provide the first post-quantum alternatives to LWE-based constructions for round-optimal adaptively secure MPC. Along the way, we show that r-iOT also implies non-committing encryption (NCE), thereby yielding the first constructions of NCE from isogenies or LPN.
2021
ASIACRYPT
Improved single-round secure multiplication using regenerating codes 📺
In 2016, Guruswami and Wootters showed Shamir's secret-sharing scheme defined over an extension field has a regenerating property. Namely, we can compress each share to an element of the base field by applying a linear form, such that the secret is determined by a linear combination of the compressed shares. Immediately it seemed like an application to improve the complexity of unconditionally secure multiparty computation must be imminent; however, thus far, no result has been published. We present the first application of regenerating codes to MPC, and show that its utility lies in reducing the number of rounds. Concretely, we present a protocol that obliviously evaluates a depth-$d$ arithmetic circuit in $d + O(1)$ rounds, in the amortized setting of parallel evaluations, with $o(n^2)$ ring elements communicated per multiplication. Our protocol makes use of function-dependent preprocessing, and is secure against the maximal adversary corrupting $t < n/2$ parties. All existing approaches in this setting have complexity $\Omega(n^2)$. Moreover, we extend some of the theory on regenerating codes to Galois rings. It was already known that the repair property of MDS codes over fields can be fully characterized in terms of its dual code. We show this characterization extends to linear codes over Galois rings, and use it to show the result of Guruswami and Wootters also holds true for Shamir's scheme over Galois rings.
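Schematically, the regenerating property invoked above looks as follows (a sketch with illustrative names: the $L_i$ are $\mathbb{F}_q$-linear forms, concretely field traces $x \mapsto \mathrm{Tr}(\mu_i x)$, and the $\lambda_i$ are public reconstruction coefficients):
\[
  c_i \in \mathbb{F}_{q^m} \;\longmapsto\; \gamma_i = L_i(c_i) \in \mathbb{F}_q,
  \qquad
  s = \sum_i \lambda_i \, \gamma_i ,
\]
so each party sends a single base-field symbol $\gamma_i$ instead of its full share $c_i$, which is the saving the round-reduced multiplication protocol exploits.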
2021
ASIACRYPT
Identity-Based Encryption for Fair Anonymity Applications: Defining, Implementing, and Applying Rerandomizable RCCA-secure IBE 📺
Our context is anonymous encryption schemes hiding their receiver, but in a setting which allows authorities to reveal the receiver when needed. While anonymous Identity-Based Encryption (IBE) is a natural candidate for such fair anonymity (it gives trusted authority access by design), the de facto security standard (a.k.a. IND-ID-CCA) is incompatible with the ciphertext rerandomizability which is crucial to anonymous communication. Thus, we seek to extend IND-ID-CCA security for IBE to a notion that can be meaningfully relaxed for rerandomizability while it still protects against active adversaries. To this end, inspired by the notion of replayable adaptive chosen-ciphertext attack (RCCA) security (Canetti et al., Crypto'03), we formalize a new security notion called Anonymous Identity-Based RCCA (ANON-ID-RCCA) security for rerandomizable IBE and propose the first construction with rigorous security analysis. The core of our scheme is a novel extension of the double-strand paradigm, which was originally proposed by Golle et al. (CT-RSA'04) and later extended by Prabhakaran and Rosulek (Crypto'07), to the well-known Gentry-IBE (Eurocrypt'06). Notably, our scheme is the first IBE that simultaneously satisfies adaptive security, rerandomizability, and recipient-anonymity to date. As an application of our new notion, we design a new universal mixnet in the identity-based setting that does not require public key distribution (with fair anonymity). More generally, our new notion is also applicable to most existing rerandomizable RCCA-secure applications to eliminate the need for public key distribution infrastructure while allowing fairness.
2021
ASIACRYPT
Double-Block-Length Hash Function for Minimum Memory Size 📺
Sharing a common primitive for multiple functionalities is essential for lightweight cryptography, and NIST's lightweight cryptography competition (LWC) considers the integration of hashing to AEAD. While permutations are natural primitive choices for such a goal, for design diversity it is interesting to investigate how small block-cipher (BC) based and tweakable block-cipher (TBC) based schemes can be. Double-block-length (DBL) hash function modes are suitable to ensure the same security level for AEAD and hashing, but it is hard to achieve a small memory size with them. Romulus, a TBC-based finalist in NIST LWC, introduced the DBL hashing scheme Romulus-H, but it requires $3n+k$ bits of memory using an underlying primitive with an $n$-bit block and a $k$-bit (twea)key. Even the smallest DBL modes in the literature require $2n+k$ bits of memory. Addressing this issue, we present new DBL modes EXEX-NI and EXEX-I achieving an $(n+k)$-bit state size, i.e., no extra memory in addition to the $n+k$ bits needed within the primitive. EXEX-NI is indifferentiable from a random oracle up to $n - \log n$ bits. By instantiating it with SKINNY, we can provide hashing to Romulus with zero memory overhead. EXEX-I is an optimized mode with collision resistance. We finally compare the hardware performances of EXEX-NI, EXEX-I, and Romulus-H with SKINNY-128-384. EXEX-NI and EXEX-I reduce the circuit area by more than 2,000 GE, yielding total areas smaller than 70% of that of Romulus-H.
2021
ASIACRYPT
Proofs for Inner Pairing Products and Applications 📺
We present a generalized inner product argument and demonstrate its applications to pairing-based languages. We apply our generalized argument to prove that an inner pairing product is correctly evaluated with respect to committed vectors of $n$ source group elements. With a structured reference string (SRS), we achieve a logarithmic-time verifier whose work is dominated by $6 \log n$ target group exponentiations. Proofs are of size $6 \log n$ target group elements, computed using $6n$ pairings and $4n$ exponentiations in each source group. We apply our inner product arguments to build the first polynomial commitment scheme with succinct (logarithmic) verification, $O(\sqrt{d})$ prover complexity for degree $d$ polynomials (not including the cost to evaluate the polynomial), and an SRS of size $O(\sqrt{d})$. Concretely, this means that for $d=2^{28}$, producing an evaluation proof in our protocol is $76\times$ faster than doing so in the KZG commitment scheme, and the SRS in our protocol is $1000\times$ smaller: $13$MB vs $13$GB for KZG. As a second application, we introduce an argument for aggregating $n$ Groth16 zkSNARKs into an $O(\log n)$ sized proof. Our protocol is significantly faster ($>1000\times$) than aggregating SNARKs via recursive composition: we aggregate $\sim 130,000$ proofs in $25$ minutes, versus $90$ proofs via recursive composition. Finally, we further apply our aggregation protocol to construct a low-memory SNARK for machine computations that does not rely on recursive composition. For a computation that requires time $T$ and space $S$, our SNARK produces proofs in space $\tilde{\mathcal{O}}(S+T)$, which is significantly more space efficient than a monolithic SNARK, which requires space $\tilde{\mathcal{O}}(S \cdot T)$.
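The recursive halving that makes such arguments logarithmic is illustrated by the toy below: an inner-product claim <a, b> = c is folded to half the length each round, over a prime field and with no commitments (so this shows only the bookkeeping identity, not the cryptography).

import secrets

P = 2**61 - 1                      # toy prime field modulus

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % P

def fold_round(a, b, c):
    n = len(a) // 2
    aL, aR, bL, bR = a[:n], a[n:], b[:n], b[n:]
    cl, cr = inner(aL, bR), inner(aR, bL)   # prover's cross terms
    x = secrets.randbelow(P - 1) + 1        # verifier's challenge
    xinv = pow(x, P - 2, P)
    a2 = [(l * x + r) % P for l, r in zip(aL, aR)]
    b2 = [(l * xinv + r) % P for l, r in zip(bL, bR)]
    return a2, b2, (c + cl * x + cr * xinv) % P

a, b = [3, 1, 4, 1], [2, 7, 1, 8]
c = inner(a, b)
while len(a) > 1:
    a, b, c = fold_round(a, b, c)
assert a[0] * b[0] % P == c        # the folded claim still holds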
2021
ASIACRYPT
Batching Base Oblivious Transfers 📺
Protocols that make use of oblivious transfer (OT) rarely require just one instance. Usually a batch of OTs is required — notably, when generating base OTs for OT extension. There is a natural way to optimize 2-round OT protocols when generating a batch, by reusing certain protocol messages across all instances. In this work we show that this batch optimization is error-prone. We catalog many implementations and papers that have an incorrect treatment of this batch optimization, some of them leading to catastrophic leakage in OT extension protocols. We provide a full treatment of how to properly optimize recent 2-round OT protocols for the batch setting. Along the way we show several performance improvements to the OT protocol of McQuoid, Rosulek, and Roy (ACM CCS 2020). In particular, we show an extremely simple OT construction that may be of pedagogical interest.
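For intuition, the kind of 2-round OT at stake looks like the textbook Diffie-Hellman construction below (roughly in the Chou-Orlandi style; a toy over a tiny group, and not claimed to be the paper's construction). The batch optimization the paper scrutinizes is precisely the reuse of the sender message A across many instances, which must be done with care.

import hashlib, secrets

p, q, g = 23, 11, 2                        # toy order-q subgroup of Z_p^*

def H(*xs):
    return hashlib.sha256(repr(xs).encode()).digest()

# Round 1 (sender): A = g^a
a = secrets.randbelow(q); A = pow(g, a, p)

# Round 2 (receiver, choice bit ch): B = g^b * A^ch
ch = 1
b = secrets.randbelow(q); B = pow(g, b, p) * pow(A, ch, p) % p
k_recv = H(pow(A, b, p))                   # receiver's key = H(A^b)

# Sender derives both message keys: k0 = H(B^a), k1 = H((B/A)^a)
k0 = H(pow(B, a, p))
k1 = H(pow(B * pow(A, p - 2, p) % p, a, p))

assert k_recv == (k1 if ch else k0)        # receiver learns only k_ch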
2021
ASIACRYPT
Lunar: a Toolbox for More Efficient Universal and Updatable zkSNARKs and Commit-and-Prove Extensions 📺
We study how to construct zkSNARKs whose SRS is universal and updatable, i.e., valid for all relations within a size bound and such that a dynamic set of participants can indefinitely add secret randomness to it. Our focus is on efficient universal updatable zkSNARKs with linear-size SRS and their commit-and-prove variants. We both introduce new formal frameworks and techniques, and systematize existing ones. We achieve a collection of zkSNARKs with different tradeoffs. One of our schemes achieves the smallest proof size and proving time compared to the state of the art for proofs for arithmetic circuits. The language supported by this scheme is a variant of R1CS that we introduce, called R1CS-lite. Another of our constructions directly supports standard R1CS and achieves the fastest proving time for this type of constraints. These results stem from different contributions: (1) a new algebraically-flavored variant of IOPs that we call Polynomial Holographic IOPs (PHPs); (2) a new compiler that combines our PHPs with commit-and-prove zkSNARKs (CP-SNARKs) for committed polynomials; (3) pairing-based realizations of these CP-SNARKs for polynomials; (4) constructions of PHPs for R1CS and R1CS-lite. Finally, we extend the compiler in item (2) to yield commit-and-prove universal zkSNARKs.
2021
ASIACRYPT
Clustering Effect in Simon and Simeck 📺
Simon and Simeck are two lightweight block ciphers with a simple round function using only word rotations and a bit-wise AND operation. Previous work has shown a strong clustering effect for differential and linear cryptanalysis, due to the existence of many trails with the same inputs and outputs. In this paper, we explore this clustering effect by exhibiting a class of high probability differential and linear trails where the active bits stay in a fixed window of w bits. Instead of enumerating a set of good trails contributing to a differential or a linear approximation, we compute the probability distribution over this space, including all trails in the class. This results in stronger distinguishers than previously proposed, and we describe key recovery attacks against Simon and Simeck improving the previous results by up to 7 rounds. In particular, we obtain an attack against 42-round Simeck-64, leaving only two rounds of security margin, and an attack against 45-round Simon-96/144, reducing the security margin from 16 rounds to 9 rounds.
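The round function in question is tiny; here it is for a toy 16-bit word size (this matches the well-known Simon specification, while Simeck uses the same skeleton with rotation amounts (0, 5, 1) and a different key schedule).

W = 16
MASK = (1 << W) - 1

def rol(x, r):
    return ((x << r) | (x >> (W - r))) & MASK

def simon_round(x, y, k):
    # f(x) = (x <<< 1 AND x <<< 8) XOR (x <<< 2), then a Feistel swap
    f = (rol(x, 1) & rol(x, 8)) ^ rol(x, 2)
    return y ^ f ^ k, x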
2021
ASIACRYPT
Quantum Encryption with Certified Deletion, Revisited: Public Key, Attribute-Based, and Classical Communication 📺
Broadbent and Islam (TCC '20) proposed a quantum cryptographic primitive called quantum encryption with certified deletion. In this primitive, a receiver in possession of a quantum ciphertext can generate a classical certificate that the encrypted message is deleted. Although their construction is information-theoretically secure, it is limited to the setting of one-time symmetric key encryption (SKE), where a sender and receiver have to share a common key in advance and the key can be used only once. Moreover, the sender has to generate a quantum state and send it to the receiver over a quantum channel in their construction. In the definition by Broadbent and Islam, deletion certificates are privately verifiable, which means a verification key for a certificate must be kept secret. However, we can also consider public verifiability. In this work, we present various constructions of encryption with certified deletion. - Quantum communication case: We achieve (reusable-key) public key encryption (PKE) and attribute-based encryption (ABE) with certified deletion. Our PKE scheme with certified deletion is constructed assuming the existence of IND-CPA secure PKE, and our ABE scheme with certified deletion is constructed assuming the existence of indistinguishability obfuscation and one-way functions. These two schemes are privately verifiable. - Classical communication case: We also achieve interactive encryption with certified deletion that uses only classical communication. We give two schemes, a privately verifiable one and a publicly verifiable one. The former is constructed assuming the LWE assumption in the quantum random oracle model. The latter is constructed assuming the existence of one-shot signatures and extractable witness encryption.
2021
ASIACRYPT
Faster Dual Lattice Attacks for Solving LWE -- with applications to CRYSTALS 📺
Cryptosystems based on the learning with errors (LWE) problem are assigned a security level that relates to the cost of generic algorithms for solving the LWE problem. This includes at least the so-called primal and dual lattice attacks. In this paper, we present an improvement of the dual lattice attack using an idea that can be traced back to work by Bleichenbacher. We present an improved distinguisher that, in combination with a guessing step, shows a reduction in the overall complexity of the dual attack on all schemes. Our second contribution is a new two-step lattice reduction strategy that allows the new dual lattice attack to exploit two recent techniques in lattice reduction algorithms, i.e., the "dimensions for free" trick and the trick of producing many short vectors in one sieving. The incompatibility of these two tricks was believed to be the main reason that dual attacks were less interesting; our new reduction strategy makes dual approaches more efficient than primal attacks for important cryptographic parameter sets. We apply the proposed attacks to CRYSTALS-Kyber and CRYSTALS-Dilithium, two of the finalists in the NIST post-quantum cryptography project, and present new lower complexity numbers, both classically and quantumly, in the core-SVP model. Most importantly, for the proposed security parameters, our new dual attack with refined lattice reduction strategy greatly improves on the state-of-the-art primal attack in the classical gate-count metric, i.e., the classical Random Access Machine (RAM) model, indicating that some parameters are really on the edge for their claimed security level. Specifically, the improvement factor can be as large as 15 bits for Kyber1024 with an extrapolation model (Albrecht et al. at Eurocrypt 2019). Also, we show that Kyber768 could be solved with classical gate complexity below its claimed security level. Last, we apply the new attack to the proposed parameters in a draft version of the Homomorphic Encryption Standard (see https://homomorphicencryption.org) and obtain significant gains. For instance, we could solve a parameter set aiming for 192-bit security in $2^{187.0}$ operations in the classical RAM model. Note that these parameters are deployed in well-known Fully Homomorphic Encryption libraries.
2021
ASIACRYPT
Symmetric Key Exchange with Full Forward Security and Robust Synchronization 📺
We construct lightweight authenticated key exchange protocols based on pre-shared keys, which achieve full forward security and rely only on simple and efficient symmetric-key primitives. All of our protocols have rigorous security proofs in a strong security model, all have low communication complexity, and are particularly suitable for resource-constrained devices. We describe three protocols that apply linear key evolution to provide different performance and security properties. Correctness in parallel and concurrent protocol sessions is difficult to achieve for linearly key-evolving protocols, emphasizing the need for assurance of availability alongside the usual confidentiality and authentication security goals. We introduce synchronization robustness as a new formal security goal, which essentially guarantees that parties can re-synchronize efficiently. All of our new protocols achieve this property. Since protocols based on linear key evolution cannot guarantee that all concurrently initiated sessions successfully derive a key, we also propose two constructions with non-linear key evolution based on puncturable PRFs. These are instantiable from standard hash functions and require O(C log(|CTR|)) memory, where C is the number of concurrent sessions and |CTR| is an upper bound on the total number of sessions per party. These are the first protocols to simultaneously achieve full forward security, synchronization robustness, and concurrent correctness.
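Linear key evolution in its simplest form is a one-way hash ratchet: once the old key is deleted, a later compromise reveals nothing about earlier epochs, which is the forward-security core of these protocols. The sketch below is a generic illustration, not one of the paper's three protocols.

import hashlib

def evolve(k: bytes) -> bytes:       # next epoch's master key
    return hashlib.sha256(b'evolve' + k).digest()

def session_key(k: bytes) -> bytes:  # key used within the current epoch
    return hashlib.sha256(b'session' + k).digest()

k = bytes(32)                        # pre-shared key (toy value)
for epoch in range(3):
    sk = session_key(k)              # ... run the epoch's key exchange ...
    k = evolve(k)                    # then delete the old k: forward security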
2021
ASIACRYPT
Séta: Supersingular Encryption from Torsion Attacks 📺
We present Séta, a new family of public-key encryption schemes with post-quantum security based on isogenies of supersingular elliptic curves. It is constructed from a new family of trapdoor one-way functions, where the inversion algorithm uses Petit's so-called torsion attacks on SIDH to compute an isogeny between supersingular elliptic curves given an endomorphism of the starting curve and images of torsion points. We prove the OW-CPA security of Séta and present an IND-CCA variant using the post-quantum OAEP transformation. Several variants for key generation are explored together with their impact on the selection of parameters, such as the base prime of the scheme. We furthermore formalise an "uber" isogeny assumption framework which aims to generalize computational isogeny problems encountered in schemes including SIDH, CSIDH, OSIDH and ours. Finally, we carefully select parameters to achieve a balance between security and run-times and present experimental results from our implementation.
2021
ASIACRYPT
How to Build a Trapdoor Function from an Encryption Scheme 📺
In this work we ask the following question: Can we transform any encryption scheme into a trapdoor function (TDF)? Alternatively stated, can we make any encryption scheme randomness recoverable? We propose a generic compiler that takes as input any encryption scheme with pseudorandom ciphertexts and adds a trapdoor to invert the encryption, recovering also the random coins. This universal TDFier only assumes, in addition, the existence of a hinting pseudorandom generator (PRG). Despite its simplicity, our transformation is quite general and we establish a series of new feasibility results: - The first identity-based TDF [Bellare et al. EUROCRYPT 2012] from the CDH assumption in pairing-free groups (or from factoring), thus matching the state of the art for identity-based encryption schemes. Prior works required pairings or LWE. - The first collusion-resistant attribute-based TDF (AB-TDF) for all ($NC^1$, resp.) circuits from LWE (bilinear maps, resp.). Moreover, the first single-key AB-TDF from CDH. To the best of our knowledge, no AB-TDF was known in the literature (not even for a single key) from any assumption. We obtain the same results for predicate encryption. As an additional contribution, we define and construct a trapdoor garbling scheme: a simulation-secure garbling scheme with a hidden "trigger" that allows the evaluator to fully recover the randomness used by the garbling algorithm. We show how to construct trapdoor garbling from the DDH or LWE assumption with an interplay of key-dependent message (KDM) and randomness-dependent message (RDM) techniques. Trapdoor garbling allows us to obtain alternative constructions of (single-key) AB-TDFs with additional desirable properties, such as adaptive security (in the choice of the attribute) and projective keys. We expect trapdoor garbling to be useful in other contexts, e.g. in cases where, upon successful execution, the evaluator needs to immediately verify that the garbled circuit was well-formed.
2021
ASIACRYPT
Lattice sieving via quantum random walks 📺
Lattice-based cryptography is one of the leading proposals for post-quantum cryptography. The Shortest Vector Problem (SVP) is arguably the most important problem for the cryptanalysis of lattice-based cryptography, and many lattice-based schemes have security claims based on its hardness. The best quantum algorithm for the SVP is due to Laarhoven [Laa16 PhD] and runs in (heuristic) time $2^{0.2653d + o(d)}$. In this article, we present an improvement over Laarhoven's result, namely an algorithm that has a (heuristic) running time of $2^{0.2570 d + o(d)}$ where $d$ is the lattice dimension. We also present time-memory trade-offs where we quantify the amount of quantum memory and quantum random access memory of our algorithm. The core idea is to replace Grover's algorithm used in [Laa16 PhD] in a key part of the sieving algorithm by a quantum random walk in which we add a layer of locality-sensitive filtering.
2021
ASIACRYPT
Shorter Lattice-Based Group Signatures via "Almost Free" Encryption and Other Optimizations 📺
We present an improved lattice-based group signature scheme whose parameter sizes and running times are independent of the group size. The signature length in our scheme is around $200$KB, which is approximately a 3× reduction over the previously most compact such scheme based on any quantum-safe assumption, due to del Pino et al. (CCS 2018). The improvement comes via several optimizations of some basic cryptographic components that make up group signature schemes, and we think that they will find other applications in privacy-based lattice cryptography.
2021
ASIACRYPT
Divided We Stand, United We Fall: Security Analysis of Some SCA+SIFA Countermeasures Against SCA-Enhanced Fault Template Attacks 📺
Protection against side-channel attacks (SCA) and fault attacks (FA) requires two classes of countermeasures to be simultaneously embedded in a cryptographic implementation. It has already been shown that a straightforward combination of SCA and FA countermeasures is vulnerable to FAs, such as Statistical Ineffective Fault Analysis (SIFA) and Fault Template Attacks (FTA). Consequently, new classes of countermeasures have been proposed which protect against SIFA and also include masking for SCA protection. While they are secure against SIFA and SCA individually, one important question is whether the security claim still holds in the presence of a combined SCA and FA adversary. Security against combined attacks is, however, desired, as countermeasures for both threats are included in such implementations. In this paper, we show that some of the recently proposed combined SIFA and SCA countermeasures fall prey to combined attacks. To this end, we enhance the FTA attacks by considering side-channel information during fault injection. The success of the proposed attacks stems from some non-trivial fault propagation properties of S-Boxes, which remained unexplored in the original FTA proposal. The proposed attacks are validated on an open-source software implementation of Keccak with a SIFA-protected χ5 S-Box with laser fault injection and power measurement, and a hardware implementation of a SIFA-protected χ3 S-Box through gate-level power trace simulation. Finally, we discuss some mitigation strategies to strengthen existing countermeasures.
2021
ASIACRYPT
Improved Programmable Bootstrapping with Larger Precision and Efficient Arithmetic Circuits for TFHE 📺
Fully Homomorphic Encryption (FHE) schemes enable computation over encrypted data. Among them, TFHE [CGGI17] has the great advantage of offering an efficient method for bootstrapping noisy ciphertexts, i.e., reducing the noise. Indeed, homomorphic computation increases the noise in ciphertexts and might compromise the encrypted message. TFHE bootstrapping, in addition to reducing the noise, also evaluates (for free) univariate functions expressed as look-up tables. It however requires the most significant bit of the plaintext to be known a priori, resulting in the loss of one bit of space to store messages. Furthermore, it represents a non-negligible overhead in terms of computation in many use cases. In this paper, we propose a solution to overcome this limitation, which we call Programmable Bootstrapping Without Padding (WoP-PBS). This approach relies on two building blocks. The first one is the multiplication à la BFV [FV12] that we incorporate into TFHE. This is possible thanks to a thorough noise analysis showing that correct multiplications can be computed using practical TFHE parameters. The second building block is the generalization of TFHE bootstrapping introduced in this paper. It offers the flexibility to select any chunk of bits in an encrypted plaintext during a bootstrap. It also enables evaluating many LUTs at the same time when working with small enough precision. All these improvements are particularly helpful in some applications such as the evaluation of Boolean circuits (where a bootstrap is no longer required in each evaluated gate) and, more generally, in the efficient evaluation of arithmetic circuits even with large integers. Those results improve TFHE circuit bootstrapping as well. Moreover, we show that bootstrapping large-precision integers is now possible using much smaller parameters than those obtained by scaling TFHE ones.
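Stripped of all encryption, what programmable bootstrapping computes is "round the noisy phase, then index a look-up table". The plaintext-only caricature below conveys just that interface and is nothing like a real TFHE implementation.

def pbs_toy(noisy_phase, lut, delta):
    m = round(noisy_phase / delta) % len(lut)   # rounding removes the noise
    return lut[m]                               # the programmed function

delta = 2**10
lut = [x * x % 16 for x in range(16)]           # programmed: x -> x^2 mod 16
print(pbs_toy(5 * delta + 13, lut, delta))      # -> 9, i.e. 5^2 mod 16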
2021
ASIACRYPT
SHealS and HealS: isogeny-based PKEs from a key validation method for SIDH 📺
In 2016, Galbraith et al. presented an adaptive attack on the SIDH key exchange protocol. In SIKE, one applies a variant of the Fujisaki-Okamoto transform to force Bob to reveal his encryption key to Alice, which Alice then uses to re-encrypt Bob's ciphertext and verify its validity. Therefore, Bob cannot reuse his encryption keys. There have been two other proposed countermeasures enabling static-static private keys: k-SIDH and its variant by Jao and Urbanik. These countermeasures are relatively expensive since they consist of running multiple parallel instances of SIDH. In this paper, firstly, we propose a new countermeasure to the GPST adaptive attack on SIDH. Our countermeasure does not require key disclosure as in SIKE, nor multiple parallel instances as in k-SIDH. We translate our countermeasure into a key validation method for SIDH-type schemes. Secondly, we use our key validation to design HealSIDH, an efficient interactive SIDH-type static-static key exchange protocol. Thirdly, we derive a PKE scheme SHealS using HealSIDH. SHealS uses larger primes compared to SIKE, and has larger keys and ciphertexts, but only $4$ isogenies are computed in a full execution of the scheme, as opposed to $5$ isogenies in SIKE. We prove that SHealS is IND-CPA secure relying on a new assumption we introduce, and we conjecture its IND-CCA security. We suggest HealS, a variant of SHealS using a smaller prime, providing smaller keys and ciphertexts. As a result, HealSIDH is a practically efficient SIDH-based (interactive) key exchange incorporating a "direct" countermeasure to the GPST adaptive attack.
2021
ASIACRYPT
Digital Signatures with Memory-Tight Security in the Multi-Challenge Setting 📺
The standard security notion for digital signatures is "single-challenge" (SC) EUF-CMA security, where the adversary outputs a single message-signature pair and "wins" if it is a forgery. Auerbach et al. (CRYPTO 2017) introduced memory-tightness of reductions and argued that the right security goal in this setting is actually a stronger "multi-challenge" (MC) definition, where an adversary may output many message-signature pairs and "wins" if at least one is a forgery. Currently, no construction from simple standard assumptions is known to achieve full tightness with respect to time, success probability, and memory simultaneously. Previous works showed that memory-tight signatures cannot be achieved via certain natural classes of reductions (Auerbach et al., CRYPTO 2017; Wang et al., EUROCRYPT 2018). These impossibility results may give the impression that the construction of memory-tight signatures is difficult or even impossible. We show that this impression is false, by giving the first constructions of signature schemes with full tightness in all dimensions in the MC setting. To circumvent the known impossibility results, we first introduce the notion of canonical reductions in the SC setting. We prove a general theorem establishing that every signature scheme with a canonical reduction is already memory-tightly secure in the MC setting, provided that it is strongly unforgeable, the adversary receives only one signature per message, and assuming the existence of a tightly-secure pseudorandom function. We then achieve memory-tight many-signatures-per-message security in the MC setting by a simple additional generic transformation. This yields the first memory-tightly, strongly EUF-CMA-secure signature schemes in the MC setting. Finally, we show that standard security proofs often already can be viewed as canonical reductions. Concretely, we show this for signatures from lossy identification schemes (Abdalla et al., EUROCRYPT 2012), two variants of RSA Full-Domain Hash (Bellare and Rogaway, EUROCRYPT 1996), and two variants of BLS signatures (Boneh et al., ASIACRYPT 2001).
2021
ASIACRYPT
Dynamic Random Probing Expansion with Quasi Linear Asymptotic Complexity 📺
The masking countermeasure is widely used to protect cryptographic implementations against side-channel attacks. While many masking schemes are shown to be secure in the widely deployed probing model, the latter raised a number of concerns regarding its relevance in practice. Offering the adversary the knowledge of a fixed number of intermediate variables, it does not capture the so-called horizontal attacks which exploit the repeated manipulation of sensitive variables. Therefore, recent works have focused on the random probing model in which each computed variable leaks with some given probability p. This model benefits from fitting better the reality of embedded devices. In particular, Belaïd, Coron, Prouff, Rivain, and Taleb (CRYPTO 2020) introduced a framework to generate random probing circuits. Their compiler expands base gadgets as soon as they satisfy a notion called random probing expandability (RPE). A subsequent work by Belaïd, Rivain, and Taleb (EUROCRYPT 2021) went a step further with tighter properties and improved complexities. In particular, their construction reaches a complexity of O(κ^{3.9}), for κ-bit security, while tolerating a leakage probability of p = 2^{−7.5}. In this paper, we generalize the random probing expansion approach by considering a dynamic choice of the base gadgets at each step of the expansion. This approach makes it possible to use gadgets with a high number of shares (which enjoy better asymptotic complexity in the expansion framework) while still tolerating the best leakage rate usually obtained for small gadgets. We investigate strategies for the choice of the sequence of compilers and show that it can reduce the complexity of an AES implementation by a factor of 10. We also significantly improve the asymptotic complexity of the expanding compiler by exhibiting new asymptotic gadget constructions. Specifically, we introduce RPE gadgets for linear operations featuring quasi-linear complexity, as well as an RPE multiplication gadget with a linear number of multiplications. These new gadgets drop the complexity of the expanding compiler from quadratic to quasi-linear.
2021
ASIACRYPT
Verifiably-Extractable OWFs and Their Applications to Subversion Zero-Knowledge 📺
An extractable one-way function (EOWF), introduced by Canetti and Dakdouk (ICALP 2008) and generalized by Bitansky et al. (SIAM Journal on Computing vol. 45), is an OWF that allows for efficient extraction of a preimage for the function. We study (generalized) EOWFs that have a public image verification algorithm. We call such OWFs verifiably-extractable and show that several previously known constructions satisfy this notion. We study how such OWFs relate to subversion zero-knowledge (Sub-ZK) NIZKs by using them to generically construct a Sub-ZK NIZK from a NIZK satisfying certain additional properties, and conversely show how to obtain them from any Sub-ZK NIZK. Prior to our work, the Sub-ZK property of NIZKs was achieved using concrete knowledge assumptions.
2021
ASIACRYPT
QCB: Efficient Quantum-secure Authenticated Encryption 📺
It was long thought that symmetric cryptography was only mildly affected by quantum attacks, and that doubling the key length was sufficient to restore security. However, recent works have shown that Simon's quantum period finding algorithm breaks a large number of MAC and authenticated encryption algorithms when the adversary can query the MAC/encryption oracle with a quantum superposition of messages. In particular, the OCB authenticated encryption mode is broken in this setting, and no quantum-secure mode is known with the same efficiency (rate-one and parallelizable). In this paper we generalize the previous attacks, show that a large class of OCB-like schemes is unsafe against superposition queries, and discuss the quantum security notions for authenticated encryption modes. We propose a new rate-one parallelizable mode named QCB inspired by TAE and OCB and prove its security against quantum superposition queries.
2021
ASIACRYPT
Efficient NIZKs for Algebraic Sets 📺
Significantly extending the framework of (Couteau and Hartmann, Crypto 2020), we propose a general methodology to construct NIZKs for showing that an encrypted vector $\vec{\chi}$ belongs to an algebraic set, i.e., is in the zero locus of an ideal $\mathscr{I}$ of a polynomial ring. In the case where $\mathscr{I}$ is principal, i.e., generated by a single polynomial $F$, we first construct a matrix that is a "quasideterminantal representation" of $F$ and then a NIZK argument to show that $F (\vec{\chi}) = 0$. This leads to compact NIZKs for general computational structures, such as polynomial-size algebraic branching programs. We extend the framework to the case where $\mathscr{I}$ is non-principal, obtaining efficient NIZKs for R1CS, arithmetic constraint satisfaction systems, and thus for $\mathsf{NP}$. As an independent result, we explicitly describe the corresponding language of ciphertexts as an algebraic language, with smaller parameters than in previous constructions that were based on the disjunction of algebraic languages. This results in an efficient GL-SPHF for algebraic branching programs.
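For a feel of the principal case, the simplest example (illustrative, not taken from the paper) is the language of encrypted bits:
\[
  \mathscr{I} = (F), \qquad F(\chi) = \chi(\chi - 1), \qquad Z(\mathscr{I}) = \{0, 1\},
\]
so a NIZK for membership in the zero locus $Z(\mathscr{I})$ proves that an encrypted value is a bit, and richer ideals capture languages such as acceptance by an algebraic branching program.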
2021
ASIACRYPT
ConTra Corona: Contact Tracing against the Coronavirus by Bridging the Centralized–Decentralized Divide for Stronger Privacy 📺
Contact tracing, usually in the form of manual contact tracing, is among the most important interventions to mitigate the spread of any pandemic. Smartphone-facilitated digital contact tracing may help to increase tracing capabilities and extend the coverage to those contacts one does not know in person. Most implemented protocols use local Bluetooth Low Energy (BLE) communication to detect contagion-relevant proximity, together with cryptographic protections, as necessary to improve the privacy of the users of such a system. However, current decentralized protocols, including DP3T, do not sufficiently protect infected users from having their status revealed to their contacts, which raises fear of stigmatization. We alleviate this by proposing a new and practical solution with stronger privacy guarantees against active adversaries. It is based on the upload-what-you-observed paradigm, includes a separation of duties on the server side, and a mechanism to ensure that users cannot deduce which encounter caused a warning with high time resolution. Finally, we present a simulation-based security notion of digital contact tracing in the real–ideal setting, and prove the security of our protocol in this framework.
2021
ASIACRYPT
Franchised Quantum Money 📺
The construction of public key quantum money based on standard cryptographic assumptions is a longstanding open question. Here we introduce franchised quantum money, an alternative form of quantum money that is easier to construct. Franchised quantum money retains the features of a useful quantum money scheme, namely unforgeability and local verification: anyone can verify banknotes without communicating with the bank. In franchised quantum money, every user gets a unique secret verification key, and the scheme is secure against counterfeiting and sabotage, a new security notion that appears in the franchised model. Finally, we construct franchised quantum money and prove security assuming one-way functions.
2021
ASIACRYPT
Cryptographic Analysis of the Bluetooth Secure Connection Protocol Suite 📺
We give a cryptographic analysis of the Bluetooth Secure Connection Protocol Suite. Bluetooth supports several subprotocols, such as numeric comparison, passkey entry, and just works, in order to match the devices' different input/output capabilities. Previous analyses (e.g., Lindell, CT-RSA '09, or Troncoso and Hale, NDSS '21) often considered (and confirmed) the security of single subprotocols only. Recent practically verified attacks, however, such as the Method Confusion Attack (von Tschirschnitz et al., S&P '21) against Bluetooth's authentication and key secrecy property, often exploit the bad interplay of different subprotocols. Even worse, some of these attacks demonstrate that one cannot prove the Bluetooth protocol suite to be a secure authenticated key exchange protocol. We therefore aim for the best we can hope for, and show that the protocol still matches the common key secrecy requirements of a key exchange protocol if one assumes a trust-on-first-use relationship. This means that the adversary needs to mount an active attack during the first connection; otherwise the subsequent reconnections remain secure. Investigating the cryptographic strength of the Bluetooth protocol, we also look into the privacy mechanism of address randomization in Bluetooth (which is only available in the Low Energy version). We show that the cryptography indeed provides a decent level of address privacy, although this does not rule out identification of devices via other means, such as physical characteristics.
2021
ASIACRYPT
Efficient Boolean Search over Encrypted Data with Reduced Leakage 📺
Encrypted multi-maps enable outsourcing the storage of a multi-map to an untrusted server while maintaining the ability to query privately. We focus on encrypted Boolean multi-maps that support arbitrary Boolean queries over the multi-map. Kamara and Moataz [Eurocrypt'17] presented the first encrypted multi-map, BIEX, that supports CNF queries with optimal communication, worst-case sublinear search time and non-trivial leakage. We improve on previous work by presenting a new construction CNFFilter for CNF queries with significantly less leakage than BIEX, while maintaining both optimal communication and worst-case sublinear search time. As a direct consequence, our construction shows additional resistance to leakage-abuse attacks in comparison to prior works. For most CNF queries, CNFFilter avoids leaking the result sets for any singleton queries for labels appearing in the CNF query. As an example, for the CNF query of the form (l1 ∨ l2) ∧ l3, our scheme does not leak the result sizes of queries to l1, l2 or l3 individually. On the other hand, BIEX does leak some of this information. This is just an example of the reduced leakage obtained by CNFFilter. The core of CNFFilter is a new filtering algorithm that performs set intersections with significantly less leakage compared to prior works. We implement CNFFilter and show that CNFFilter achieves faster search times and similar communication overhead compared to BIEX at the cost of a small increase in server storage.
2021
ASIACRYPT
Efficient Leakage-Resilient MACs without Idealized Assumptions 📺
The security proofs of leakage-resilient MACs based on symmetric building blocks currently rely on idealized assumptions that hardly translate into interpretable guidelines for the cryptographic engineers implementing these schemes. In this paper, we first present a leakage-resilient MAC that is both efficient and secure under standard and easily interpretable black-box and physical assumptions. It only requires a collision-resistant hash function and a single call per message authentication to a Tweakable Block Cipher (TBC) that is unpredictable with leakage. This construction leverages two design twists: large tweaks for the TBC, and a verification process that checks the inverse TBC against a constant. It enjoys beyond-birthday security bounds. We then discuss the cost of getting rid of these design twists. We show that security can be proven without them as well. Yet, a construction without large tweaks requires stronger (non-idealized) assumptions and inevitably incurs performance overheads if specialized TBCs can be exploited, and a construction without twisted verification requires even stronger assumptions (still non-idealized) and leads to more involved bounds. The combination of these results makes a case for our first pragmatic construction and suggests the design of TBCs with large tweaks and good properties for side-channel countermeasures as an interesting challenge.
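The first construction's outline is explicit enough to sketch: hash the message into a large tweak, make one TBC call on a public constant, and verify by inverting the TBC and comparing with that constant. The stand-in "TBC" below is an insecure XOR toy used only to show the data flow.

import hashlib

CONST = bytes(16)

def pad(key, tweak):
    return hashlib.sha256(key + tweak).digest()[:16]

def tbc_enc(key, tweak, block):      # toy TBC: XOR a (key, tweak) pad
    return bytes(x ^ y for x, y in zip(block, pad(key, tweak)))

tbc_dec = tbc_enc                    # the toy cipher is an involution

def mac(key, msg):
    tweak = hashlib.sha256(msg).digest()      # large tweak = H(m)
    return tbc_enc(key, tweak, CONST)         # single TBC call per message

def verify(key, msg, tag):
    tweak = hashlib.sha256(msg).digest()
    return tbc_dec(key, tweak, tag) == CONST  # inverse-then-compare check

k = b'k' * 16
assert verify(k, b'hello', mac(k, b'hello'))
assert not verify(k, b'world', mac(k, b'hello'))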
2021
ASIACRYPT
Tight adaptive reprogramming in the QROM 📺
The random oracle model (ROM) enjoys widespread popularity, mostly because it tends to allow for tight and conceptually simple proofs where provable security in the standard model is elusive or costly. While being the adequate replacement of the ROM in the post-quantum security setting, the quantum-accessible random oracle model (QROM) has thus far failed to provide these advantages in many settings. In this work, we focus on adaptive reprogrammability, a feature of the ROM enabling tight and simple proofs in many settings. We show that the straightforward quantum-accessible generalization of adaptive reprogramming is feasible by proving a bound on the adversarial advantage in distinguishing whether a random oracle has been reprogrammed or not. We show that our bound is tight by providing a matching attack. We go on to demonstrate that our technique recovers the mentioned advantages of the ROM in three QROM applications: 1) We give a tighter proof of security of the message compression routine as used by XMSS. 2) We show that the standard ROM proof of chosen-message security for Fiat-Shamir signatures can be lifted to the QROM, straightforwardly, achieving a tighter reduction than previously known. 3) We give the first QROM proof of security against fault injection and nonce attacks for the hedged Fiat-Shamir transform.
2021
ASIACRYPT
Convexity of division property transitions: theory, algorithms and compact models 📺
Integral cryptanalysis is a powerful tool for attacking symmetric primitives, and division property is a state-of-the-art framework for finding integral distinguishers. This work describes new theoretical and practical insights into traditional bit-based division property. We focus on analyzing and exploiting monotonicity/convexity of division property and its relation to the graph indicator. In particular, our investigation leads to a new compact representation of propagation, which allows CNF/MILP modeling for larger S-boxes, such as 16-bit Super-Sboxes of lightweight block ciphers or even 32-bit random S-boxes. This solves the challenge posed by Derbez and Fouque (ToSC 2020), who questioned the possibility of SAT/SMT/MILP modeling of 16-bit Super-Sboxes. As a proof of concept, we model the Super-Sboxes of the 8-round LED by CNF formulas, which was not feasible by any previous approach. Our analysis is further supported by an elegant algorithmic framework. We describe simple algorithms for computing the division property of a set of $n$-bit vectors in time $O(n2^n)$, reducing such sets to minimal/maximal elements in time $O(n2^n)$, and computing the division property propagation table of an $n\times m$-bit S-box and its compact representation in time $O((n+m)2^{n+m})$. In addition, we develop an advanced algorithm tailored to "heavy" bijections, allowing one to model, for example, a randomly generated 32-bit S-box.
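The O(n 2^n) set computation mentioned above is, in its parity-set formulation (the view of Boura and Canteaut), a superset transform over F_2; the sketch below shows the standard technique (a minimal illustration, not the paper's code).

def parity_set(X, n):
    f = [0] * (1 << n)
    for x in X:
        f[x] ^= 1                    # indicator of X, modulo 2
    for i in range(n):               # superset zeta transform over F_2
        bit = 1 << i
        for u in range(1 << n):
            if not u & bit:
                f[u] ^= f[u | bit]   # f[u] = |{x in X : u covered by x}| mod 2
    return [u for u in range(1 << n) if f[u]]

# For X = F_2^2, only the all-ones mask has odd parity (x = 11 alone covers it):
print(parity_set([0, 1, 2, 3], 2))   # -> [3]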
2021
ASIACRYPT
Hierarchical Integrated Signature and Encryption 📺
In this work, we introduce the notion of hierarchical integrated signature and encryption (HISE), wherein a single public key is used for both signature and encryption, and one can derive a secret key used only for decryption from the signing key, which enables secure delegation of decryption capability. HISE enjoys the benefit of key reuse, and admits individual key escrow. We present two generic constructions of HISE. One is from (constrained) identity-based encryption. The other is from uniform one-way functions, public-key encryption, and general-purpose public-coin zero-knowledge proofs of knowledge. To further attain global key escrow, we take a little detour to revisit global escrow PKE, an object of independent interest with many applications. We formalize the syntax and security model of global escrow PKE, and provide two generic constructions. The first embodies a generic approach to compile any PKE into one with the global escrow property. The second establishes a connection between three-party non-interactive key exchange and global escrow PKE. Combining the results developed above, we obtain HISE schemes that support both individual and global key escrow. We instantiate our generic constructions of (global escrow) HISE and implement all the resulting concrete schemes for 128-bit security. Our schemes have performance that is comparable to the best Cartesian-product combined public-key scheme, and exhibit advantages in terms of richer functionality and public key reuse. As a byproduct, we obtain a new global escrow PKE scheme that outperforms the best prior work in speed by several orders of magnitude, which might be of independent interest.
2021
ASIACRYPT
Lattice Enumeration for Tower NFS: a 521-bit Discrete Logarithm Computation 📺
The Tower variant of the Number Field Sieve (TNFS) is known to be asymptotically the most efficient algorithm to solve the discrete logarithm problem in finite fields of medium characteristic, when the extension degree is composite. A major obstacle to an efficient implementation of TNFS is the collection of algebraic relations, as it happens in dimension greater than 2. This requires the construction of new sieving algorithms which remain efficient as the dimension grows. In this article, we overcome this difficulty by considering a lattice enumeration algorithm which we adapt to this specific context. We also consider a new sieving area, a high-dimensional sphere, whereas previous sieving algorithms for the classical NFS considered an orthotope. Our new sieving technique leads to a much smaller running time, despite the larger dimension of the search space, and even when considering a larger target, as demonstrated by a record computation we performed in a 521-bit finite field GF(p^6). The target finite field is of the same form as finite fields used in recent zero-knowledge proofs in some blockchains. This is the first reported implementation of TNFS.
2021
ASIACRYPT
Promise $\Sigma$-protocol: How to Construct Efficient Threshold ECDSA from Encryptions Based on Class Groups 📺
Threshold signatures allow $n$ parties to share the ability to issue digital signatures so that any coalition of size at least $t+1$ can sign, whereas groups of $t$ or fewer players cannot. The currently known class-group-based threshold ECDSA constructions are either inefficient (requiring parallel repetition of the underlying zero-knowledge proof with small challenge space) or require rather non-standard assumptions. In this paper, under standard assumptions we present efficient threshold ECDSA protocols from encryption schemes based on class groups without parallel repetition of the underlying zero-knowledge proof, yielding a significant efficiency improvement in the key generation over previous constructions (even those based on non-standard assumptions). Along the way we introduce a new notion of promise $\Sigma$-protocol that satisfies only a weaker soundness called promise extractability. An accepting promise $\Sigma$-proof for statements related to class-group-based encryptions does not establish the truth of the statement but provides security guarantees (promise extractability) that are sufficient for our applications. We also show how to simulate homomorphic operations on a (possibly invalid) class-group-based encryption whose correctness has been proven via our promise $\Sigma$-protocol. We believe that these techniques are of independent interest and applicable to other scenarios where efficient zero-knowledge proofs for statements related to class groups are required.
2021
CRYPTO
The $t$-wise Independence of Substitution-Permutation Networks 📺
Block ciphers such as the Advanced Encryption Standard (Rijndael) are used extensively in practice, yet our understanding of their security continues to be highly incomplete. This paper promotes and continues a research program aimed at proving the security of block ciphers against important and well-studied classes of attacks. In particular, we initiate the study of (almost) $t$-wise independence of concrete block-cipher construction paradigms such as substitution-permutation networks and key-alternating ciphers. Sufficiently strong (almost) pairwise independence already suffices to resist (truncated) differential attacks and linear cryptanalysis, and hence this is a relevant and meaningful target. Our results are two-fold. Our first result concerns substitution-permutation networks (SPNs) that model ciphers such as AES. We prove the almost pairwise-independence of an SPN instantiated with concrete S-boxes together with an appropriate linear mixing layer, given sufficiently many rounds and independent sub-keys. Our proof relies on a characterization of S-box computation on input differences in terms of sampling output differences from certain subspaces, and a new randomness extraction lemma (which we prove with Fourier-analytic techniques) that establishes when such sampling yields uniformity. We use our techniques in particular to prove almost pairwise-independence for sufficiently many rounds of both the AES block cipher (which uses a variant of the patched inverse function $x \mapsto x^{-1}$ as the $S$-box) and the MiMC block cipher (which uses the cubing function $x \mapsto x^3$ as the $S$-box), assuming independent sub-keys. Secondly, we show that instantiating a key-alternating cipher (which can be thought of as a degenerate case of SPNs) with most permutations gives us (almost) $t$-wise independence in $t + o(t)$ rounds. In order to do this, we use the probabilistic method to develop two new lemmas, an independence-amplification lemma and a distance-amplification lemma, that allow us to reason about the evolution of key-alternating ciphers.
2021
CRYPTO
Time- and Space-Efficient Arguments from Groups of Unknown Order 📺
We construct public-coin time- and space-efficient zero-knowledge arguments for NP. For every non-deterministic RAM computation running in time T and space S, the prover runs in time T * polylog(T) and space S * polylog(T), and the verifier runs in time n * polylog(T), where n is the input length. Our protocol relies on hidden-order groups, which can be instantiated with a trusted setup from the hardness of factoring (products of safe primes), or without a trusted setup using class groups. The argument system can heuristically be made non-interactive using the Fiat-Shamir transform. Our proof builds on DARK (Bünz et al., Eurocrypt 2020), a recent succinct and efficiently verifiable polynomial commitment scheme. We show how to implement a variant of DARK in a time- and space-efficient way. Along the way we: 1. Identify a significant gap in the proof of security of DARK. 2. Give a non-trivial modification of the DARK scheme that overcomes the aforementioned gap. The modified version also relies on significantly weaker cryptographic assumptions than those in the original DARK scheme. Our proof utilizes ideas from the theory of integer lattices in a novel way. 3. Generalize Pietrzak's (ITCS 2019) proof of exponentiation (PoE) protocol to work with general groups of unknown order (without relying on any cryptographic assumption). In proving these results, we develop general-purpose techniques for working with (hidden-order) groups, which may be of independent interest.
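Item 3 refers to Pietrzak's PoE; for intuition, here is a hedged toy sketch in Python, with prover and verifier folded together and an insecure toy modulus (the paper's generalization targets arbitrary groups of unknown order):

```python
# Toy sketch of Pietrzak's proof of exponentiation: the claim is
# y = x^(2^T) in a group G, here a toy RSA-style group Z_N^*
# (N below is an illustration only, not a secure modulus).

import secrets

N = 3233  # toy modulus (61 * 53); real instantiations use an RSA modulus or class group

def poe_interactive(x: int, y: int, T: int) -> bool:
    """Interactive PoE with prover and verifier folded together; T a power of 2."""
    while T > 1:
        half = T // 2
        mu = pow(x, pow(2, half), N)       # prover's message: x^(2^(T/2))
        r = secrets.randbelow(2**16) + 1   # verifier's random challenge
        x = (pow(x, r, N) * mu) % N        # x' = x^r * mu
        y = (pow(mu, r, N) * y) % N        # y' = mu^r * y; y' = x'^(2^(T/2)) if claim holds
        T = half
    return y == pow(x, 2, N)               # base case: check y = x^2

x = 5
y = pow(x, pow(2, 16), N)
assert poe_interactive(x, y, 16)
```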
2021
CRYPTO
Three Halves Make a Whole? Beating the Half-Gates Lower Bound for Garbled Circuits 📺
Honorable mention for best paper
We describe a garbling scheme for boolean circuits, in which XOR gates are free and AND gates require communication of $1.5\kappa + 5$ bits. This improves over the state-of-the-art ``half-gates'' scheme of Zahur, Rosulek, and Evans (Eurocrypt 2015), in which XOR gates are free and AND gates cost $2\kappa$ bits. The half-gates paper proved a lower bound of $2\kappa$ bits per AND gate, in a model that captured all known garbling techniques at the time. We bypass this lower bound with a novel technique that we call \textbf{slicing and dicing}, which involves slicing wire labels in half and operating separately on those halves. Ours is the first scheme to bypass the lower bound while being fully compatible with free-XOR, making it a drop-in replacement for half-gates. Our construction is proven secure from a similar assumption to prior free-XOR garbling (circular correlation-robust hash), and uses only slightly more computation than half-gates.
2021
CRYPTO
On the Round Complexity of Black-Box Secure MPC 📺
We consider the question of minimizing the round complexity of secure multiparty computation (MPC) protocols that make a black-box use of simple cryptographic primitives in the setting of security against any number of malicious parties. In the plain model, previous black-box protocols required a high constant number of rounds (>15). This is far from the known lower bound of 4 rounds for protocols with black-box simulators. When allowing a random oblivious transfer (OT) correlation setup, 2-round protocols making a black-box use of a pseudorandom generator were previously known. However, such protocols were obtained via a round-collapsing ``protocol garbling'' technique that has poor concrete efficiency and makes a non-black-box use of an underlying malicious-secure protocol. We improve this state of affairs by presenting the following types of black-box protocols. a. 4-round ``pairwise MPC'' in the plain model. This round-optimal protocol enables each ordered pair of parties to compute a function of both inputs whose output is delivered to the second party. The protocol makes black-box use of any public-key encryption (PKE) with pseudorandom public keys. As a special case, we get a black-box round-optimal realization of secure (copies of) OT between every ordered pair of parties. b. 2-round MPC from OT correlations. This round-optimal protocol makes a black-box use of any general 2-round MPC protocol satisfying an augmented notion of semi-honest security. In the two-party case, this yields new kinds of 2-round black-box protocols. c. 5-round MPC in the plain model. This protocol makes a black-box use of PKE with pseudorandom public keys, and 2-round oblivious transfer with ``semi-malicious'' security. A key technical tool for the first result is a novel combination of split-state non-malleable codes (Dziembowski, Pietrzak, and Wichs, JACM '18) with standalone secure {\em two-party} protocols. The second result is based on a new round-optimized variant of the ``IPS compiler'' (Ishai, Prabhakaran and Sahai, Crypto '08). The third result is obtained via a specialized combination of these two techniques.
2021
CRYPTO
Fine-grained Secure Attribute-based Encryption 📺
Fine-grained cryptography constructs cryptosystems in a setting where an adversary's resources are a priori bounded and an honest party has fewer resources than the adversary. Currently, only simple forms of encryption schemes, such as secret-key and public-key encryption, have been constructed in this setting. In this paper, we enrich the available tools in fine-grained cryptography by proposing the first fine-grained secure attribute-based encryption (ABE) scheme. Our construction is adaptively secure under the widely accepted worst-case assumption, $NC1 \subsetneq \oplus L/poly$, and it is presented in a generic manner using the notion of predicate encodings (Wee, TCC'14). By properly instantiating the underlying encoding, we can obtain different types of ABE schemes, including identity-based encryption. Previously, all of these schemes were unknown in fine-grained cryptography. Our main technical contribution is constructing ABE schemes without using pairings or the Diffie-Hellman assumption. Hence, our results show that, even if one-way functions do not exist, we still have ABE schemes with meaningful security. As a further application of our techniques, we construct an efficient (quasi-adaptive) non-interactive zero-knowledge (QA-NIZK) proof system.
2021
CRYPTO
Computational Hardness of Optimal Fair Computation: Beyond Minicrypt 📺
Secure multi-party computation allows mutually distrusting parties to compute securely over their private data. However, guaranteeing output delivery to honest parties when the adversarial parties may abort the protocol has been a challenging objective. As a representative task, this work considers two-party coin-tossing protocols with guaranteed output delivery, a.k.a. fair coin-tossing. In the information-theoretic plain model, as in two-party zero-sum games, one of the parties can force an output with certainty. In the commitment-hybrid, any $r$-message coin-tossing protocol is ${1/\sqrt r}$-unfair, i.e., the adversary can change the honest party's output distribution by $1/\sqrt r$ in the statistical distance. Moran, Naor, and Segev (TCC--2009) constructed the first $1/r$-unfair protocol in the oblivious transfer-hybrid. No further security improvement is possible because Cleve (STOC--1986) proved that $1/r$-unfairness is unavoidable. Therefore, Moran, Naor, and Segev's coin-tossing protocol is optimal. However, is oblivious transfer necessary for optimal fair coin-tossing? Maji and Wang (CRYPTO--2020) proved that any coin-tossing protocol using one-way functions in a black-box manner is at least $1/\sqrt r$-unfair. That is, optimal fair coin-tossing is impossible in Minicrypt. Our work focuses on tightly characterizing the hardness-of-computation assumption necessary and sufficient for optimal fair coin-tossing within Cryptomania, outside Minicrypt. Haitner, Makriyannis, Nissim, Omri, Shaltiel, and Silbak (FOCS--2018 and TCC--2018) proved that better than $1/\sqrt r$-unfairness, for any constant $r$, implies the existence of a key-agreement protocol. We prove that any coin-tossing protocol using public-key encryption (or, multi-round key-agreement protocols) in a black-box manner must be $1/\sqrt r$-unfair. Next, our work entirely characterizes the additional power of secure function evaluation functionalities for optimal fair coin-tossing. We augment the model with an idealized secure function evaluation of $f$, a.k.a. the $f$-hybrid. If $f$ is complete, that is, oblivious transfer is possible in the $f$-hybrid, then optimal fair coin-tossing is also possible in the $f$-hybrid. On the other hand, if $f$ is not complete, then a coin-tossing protocol using public-key encryption in a black-box manner in the $f$-hybrid is at least $1/\sqrt r$-unfair.
2021
CRYPTO
Constructing Locally Leakage-resilient Linear Secret-sharing Schemes 📺
Innovative side-channel attacks have repeatedly falsified the assumption that cryptographic implementations are opaque black-boxes. Therefore, it is essential to ensure cryptographic constructions' security even when information leaks via unforeseen avenues. One such fundamental cryptographic primitive is the secret-sharing scheme, which underlies nearly all of threshold cryptography. Our understanding of the leakage-resilience of secret-sharing schemes is still in its preliminary stage. This work studies locally leakage-resilient linear secret-sharing schemes. An adversary can obtain $m$ bits of arbitrary local leakage from each of the $n$ secret shares. However, in a locally leakage-resilient secret-sharing scheme, the leakage's joint distribution reveals no additional information about the secret. For every constant $m$, we prove that the Massey secret-sharing scheme corresponding to a random linear code of dimension $k$ (over sufficiently large prime fields) is locally leakage-resilient, where $k/n > 1/2$ is a constant. The previous best construction by Benhamouda, Degwekar, Ishai, Rabin (CRYPTO--2018) needed $k/n > 0.907$. A technical challenge arises because the number of all possible $m$-bit local leakage functions is exponentially larger than the number of random linear codes. Our technical innovation begins with identifying an appropriate pseudorandomness-inspired family of tests; passing them suffices to ensure leakage-resilience. We show that most linear codes pass all tests in this family. This Monte Carlo construction of a locally leakage-resilient linear secret-sharing scheme has applications to leakage-resilient secure computation. Furthermore, we highlight a crucial bottleneck for all the analytical approaches in this line of work. Benhamouda et al. introduced an analytical proxy to study the leakage-resilience of secret-sharing schemes; if the proxy is small, then the scheme is leakage-resilient. However, we present a one-bit local leakage function demonstrating that the converse is false, motivating the need for new analytically well-behaved functions that capture leakage-resilience more accurately. Technically, the analysis involves probabilistic and combinatorial techniques and (discrete) Fourier analysis. The family of new ``tests'' capturing local leakage functions is, we believe, of independent and broader interest.
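A minimal sketch of the Massey scheme underlying this result may help (toy parameters; the theorem concerns random linear codes over much larger prime fields):

```python
# Toy sketch of Massey secret sharing: the shares are the coordinates
# (beyond the first) of a random codeword whose first coordinate is the secret.

import secrets

P = 2**31 - 1   # prime field; the theorem requires "sufficiently large" primes
K, N = 6, 10    # code dimension k and number of parties n, with k/n > 1/2

# Random k x (n+1) generator matrix, row-reduced so its first column is
# (1, 0, ..., 0); row operations leave the code unchanged, and then the
# first coordinate of the codeword m*G equals m[0].
G = [[secrets.randbelow(P) for _ in range(N + 1)] for _ in range(K)]
for i in range(K):
    G[i][0] = 1 if i == 0 else 0

def share(secret: int) -> list[int]:
    """Shares = coordinates 1..n of a random codeword with coordinate 0 = secret."""
    m = [secret] + [secrets.randbelow(P) for _ in range(K - 1)]
    codeword = [sum(m[i] * G[i][j] for i in range(K)) % P for j in range(N + 1)]
    return codeword[1:]

shares = share(42)  # one share per party; reconstruction solves a linear system
```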
2021
CRYPTO
No Time to Hash: On Super-Efficient Entropy Accumulation 📺
Real-world random number generators (RNGs) cannot afford to use (slow) cryptographic hashing every time they refresh their state R with a new entropic input X. Instead, they use ``super-efficient'' simple entropy-accumulation procedures, such as R <- rot_{alpha, n}(R) XOR X where rot_{alpha,n} rotates an n-bit state R by some fixed number alpha. For example, Microsoft's RNG uses alpha=5 for n=32 and alpha=19 for n=64. Where do these numbers come from? Are they good choices? Should rotation be replaced by a better permutation pi of the input bits? In this work we initiate a rigorous study of these pragmatic questions, by modeling the sequence of successive entropic inputs X_1,X_2, ... as independent (but otherwise adversarial) samples from some natural distribution family D. We show a simple but surprisingly powerful connection between entropy accumulation and understanding the Fourier spectrum of distributions in D. Our contribution is as follows. - We define 2-monotone distributions as a rich family D that includes relevant real-world distributions (Gaussian, exponential, etc.), but avoids trivial impossibility results. - For any alpha with gcd(alpha,n)=1, we show that rotation accumulates Omega(n) bits of entropy from n independent samples X_1,...,X_n from any (unknown) 2-monotone distribution with entropy k > 1. - However, we also show some choices of alpha perform much better than others for a given n. E.g., we show alpha=19 is one of the best choices for n=64; in contrast, alpha=5 is good, but generally worse than alpha=7, for n=32. - More generally, given a permutation pi and k > 1, we define a simple parameter, the covering number C_{pi,k}, and show that it characterizes the number of steps before the rule (R_1,...,R_n) <- (R_{pi(1)},..., R_{pi(n)}) XOR X accumulates nearly n bits of entropy from independent, 2-monotone samples of min-entropy k each. - We build a simple permutation pi^*, which achieves nearly optimal C_{pi^*,k} \approx n/k for all values of k simultaneously, and experimentally validate that it compares favorably with all rotations rot_{alpha,n}.
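The accumulation rule under study is simple enough to state in a few lines of Python; the following minimal sketch uses the Microsoft parameters cited above (alpha = 5 for n = 32, alpha = 19 for n = 64):

```python
# Minimal sketch of the "super-efficient" rule R <- rot_{alpha,n}(R) XOR X.

def rot(r: int, alpha: int, n: int) -> int:
    """Rotate the n-bit state r left by alpha positions."""
    mask = (1 << n) - 1
    return ((r << alpha) | (r >> (n - alpha))) & mask

def accumulate(state: int, samples: list[int], alpha: int, n: int) -> int:
    """Fold each entropic sample into the state with one rotation and one XOR."""
    mask = (1 << n) - 1
    for x in samples:
        state = rot(state, alpha, n) ^ (x & mask)
    return state

# e.g., n = 64 with alpha = 19, one of the best choices per the analysis above
s = accumulate(0, [0xDEADBEEF, 0x1234, 0xCAFEF00D], alpha=19, n=64)
```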
2021
CRYPTO
White Box Traitor Tracing 📺
Traitor tracing aims to identify the source of leaked decryption keys. Since the ``traitor'' can try to hide their key within obfuscated code in order to evade tracing, the tracing algorithm should work for general, potentially obfuscated, decoder \emph{programs}. In the setting of such general decoder programs, prior work uses \emph{black box} tracing: the tracing algorithm ignores the implementation of the decoder, and instead traces just by making queries to the decoder and observing the outputs. We observe that, in some settings, such black box tracing leads to consistency and user privacy issues. On the other hand, these issues do not appear inherent to \emph{white box} tracing, where the tracing algorithm actually inspects the decoder implementation. We therefore develop new white box traitor tracing schemes providing consistency and/or privacy. Our schemes can be instantiated under various assumptions ranging from public key encryption to indistinguishability obfuscation, with different trade-offs. To the best of our knowledge, ours is the first work to consider white box tracing in the general decoder setting.
2021
CRYPTO
Oblivious Key-Value Stores and Amplification for Private Set Intersection 📺
Many recent private set intersection (PSI) protocols encode input sets as polynomials. We consider the more general notion of an oblivious key-value store (OKVS), which is a data structure that compactly represents a desired mapping from keys $k_i$ to values $v_i$. When the $v_i$ values are random, the OKVS data structure hides the $k_i$ values that were used to generate it. The simplest (and size-optimal) OKVS is a polynomial $p$ that is chosen using interpolation such that $p(k_i)=v_i$. We initiate the formal study of oblivious key-value stores, and show new constructions resulting in the fastest OKVS to date. Similarly to cuckoo hashing, current analysis techniques are insufficient for finding *concrete* parameters to guarantee a small failure probability for our OKVS constructions. Moreover, it would cost too much to run experiments to validate a small upper bound on the failure probability. We therefore show novel techniques to amplify an OKVS construction which has a failure probability $p$, to an OKVS with a similar overhead and failure probability $p^c$. Setting $p$ to be moderately small makes it feasible to validate it by running a relatively small number, $O(1/p)$, of experiments. This validates a $p^c$ failure probability for the amplified OKVS. Finally, we describe how OKVS can significantly improve the state of the art of essentially all variants of PSI. This leads to the fastest two-party PSI protocols to date, for both the semi-honest and the malicious settings. Specifically, in networks with moderate bandwidth (e.g., 30-300 Mbps) our malicious two-party PSI protocol has 40% less communication and is 20-40% faster than the previous state-of-the-art protocol, even though the latter only has heuristic confidence.
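The size-optimal polynomial OKVS described above is easy to sketch; this toy Python version implements exactly that interpolation idea (toy field size, and none of the paper's failure-probability or amplification machinery):

```python
# Minimal polynomial OKVS: interpolate p over a prime field with p(k_i) = v_i.

P = 2**61 - 1  # prime field

def _mul_linear(b: list[int], c: int) -> list[int]:
    """Multiply polynomial b(X) (coefficients, low degree first) by (X - c)."""
    out = [0] * (len(b) + 1)
    for j, bj in enumerate(b):
        out[j + 1] = (out[j + 1] + bj) % P
        out[j] = (out[j] - c * bj) % P
    return out

def okvs_encode(pairs: dict[int, int]) -> list[int]:
    """Lagrange interpolation: the OKVS is the coefficient vector of p."""
    ks = list(pairs)
    coeffs = [0] * len(ks)
    for k, v in pairs.items():
        basis, denom = [1], 1
        for k2 in ks:
            if k2 != k:
                basis = _mul_linear(basis, k2)
                denom = denom * (k - k2) % P
        scale = v * pow(denom, P - 2, P) % P  # divide by denom via Fermat inversion
        for j, bj in enumerate(basis):
            coeffs[j] = (coeffs[j] + scale * bj) % P
    return coeffs

def okvs_decode(coeffs: list[int], k: int) -> int:
    y = 0
    for c in reversed(coeffs):  # Horner evaluation of p(k)
        y = (y * k + c) % P
    return y

table = okvs_encode({3: 10, 7: 20, 11: 30})
assert okvs_decode(table, 7) == 20
```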
2021
CRYPTO
MPC-Friendly Symmetric Cryptography from Alternating Moduli: Candidates, Protocols, and Applications 📺
We study new candidates for symmetric cryptographic primitives that leverage alternation between linear functions over $\mathbb{Z}_2$ and $\mathbb{Z}_3$ to support fast protocols for secure multiparty computation (MPC). This continues the study of weak pseudorandom functions of this kind initiated by Boneh et al. (TCC 2018) and Cheon et al. (PKC 2021). We make the following contributions. (Candidates). We propose new designs of symmetric primitives based on alternating moduli. These include candidate one-way functions, pseudorandom generators, and weak pseudorandom functions. We propose concrete parameters based on cryptanalysis. (Protocols). We provide a unified approach for securely evaluating modulus-alternating primitives in different MPC models. For the original candidate of Boneh et al., our protocols obtain at least 2x improvement in all performance measures. We report efficiency benchmarks of an optimized implementation. (Applications). We showcase the usefulness of our candidates for a variety of applications. This includes short ``Picnic-style'' signature schemes, as well as protocols for oblivious pseudorandom functions, hierarchical key derivation, and distributed key generation for function secret sharing.
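For a feel of the alternating-moduli structure, here is a hedged, simplified rendition in Python (in the style of the mod-2/mod-3 wPRF candidate of Boneh et al.; the paper's candidates and parameters differ): a secret $\mathbb{Z}_2$-linear map followed by a $\mathbb{Z}_3$-linear output map.

```python
# Simplified alternating-moduli wPRF sketch (illustration only):
# w = A*x over Z_2, then output sum(w) mod 3.

import secrets

n, m = 256, 256

# Secret key: a random m x n matrix over Z_2, packed as m n-bit integers.
A = [secrets.randbits(n) for _ in range(m)]

def f(x: int) -> int:
    """Inner products mod 2 (the Z_2-linear layer), then a sum mod 3."""
    w = [bin(row & x).count("1") & 1 for row in A]
    return sum(w) % 3

y = f(secrets.randbits(n))  # evaluated on uniformly random inputs, per the wPRF game
```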
2021
CRYPTO
Counterexamples to New Circular Security Assumptions Underlying iO 📺
We study several strengthenings of classical circular security assumptions which were recently introduced in four new lattice-based constructions of indistinguishability obfuscation: Brakerski-D\"ottling-Garg-Malavolta (Eurocrypt 2020), Gay-Pass (STOC 2021), Brakerski-D\"ottling-Garg-Malavolta (Eprint 2020) and Wee-Wichs (Eprint 2020). We provide explicit counterexamples to the {\em $2$-circular shielded randomness leakage} assumption w.r.t.\ the Gentry-Sahai-Waters fully homomorphic encryption scheme proposed by Gay-Pass, and the {\em homomorphic pseudorandom LWE samples} conjecture proposed by Wee-Wichs. Our work suggests a separation between classical circular security of the kind underlying un-levelled fully-homomorphic encryption and the strengthened versions underlying recent iO constructions, showing that they are not (yet) on the same footing. Our counterexamples exploit the flexibility to choose specific implementations of circuits, which is explicitly allowed in the Gay-Pass assumption and unspecified in the Wee-Wichs assumption. Their indistinguishability obfuscation schemes are still unbroken. Our work shows that the assumptions, at least, need refinement. In particular, generic leakage-resilient circular security assumptions are delicate, and their security is sensitive to the specific structure of the leakages involved.
2021
CRYPTO
Halo Infinite: Proof-Carrying Data from Additive Polynomial Commitments 📺
Polynomial commitment schemes (PCS) have recently been in the spotlight for their key role in building SNARKs. A PCS provides the ability to commit to a polynomial over a finite field and prove its evaluation at points. A *succinct* PCS has commitment size and evaluation proof size sublinear in the degree of the polynomial. An *efficient* PCS has sublinear proof verification. Any efficient and succinct PCS can be used to construct a SNARK with similar security and efficiency characteristics (in the random oracle model). Proof-carrying data (PCD) enables a set of parties to carry out an indefinitely long distributed computation where every step along the way is accompanied by a proof of correctness. It generalizes *incrementally verifiable computation* and can even be used to construct SNARKs. Until recently, however, the only known method for constructing PCD required expensive SNARK recursion. A system called *Halo* first demonstrated a new methodology for building PCD without SNARKs, exploiting an aggregation property of the *Bulletproofs* inner-product argument. The construction is *heuristic* because it makes non-black-box use of a concrete instantiation of the Fiat-Shamir transform. We expand upon this methodology to show that PCD can be (heuristically) built from any homomorphic polynomial commitment scheme (PCS), even if the PCS evaluation proofs are neither succinct nor efficient. In fact, the Halo methodology extends to any PCS that has an even more general property, namely the ability to aggregate linear combinations of commitments into a new succinct commitment that can later be opened to this linear combination. Our results thus imply new constructions of SNARKs and PCD that were not previously described in the literature and serve as a blueprint for future constructions as well.
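The aggregation property in question is easy to illustrate with a Pedersen-style polynomial commitment, which is homomorphic in the committed polynomial. A hedged toy sketch (tiny, insecure parameters; no hiding randomness, which a real Pedersen commitment would add):

```python
# Linear combinations of commitments open to the same linear combination
# of polynomials -- the "additive" property the Halo methodology needs.

import secrets

Q = 1019                       # prime order of the commitment group (toy)
P = 2 * Q + 1                  # 2039, also prime: squares mod P form a group of order Q
D = 4                          # committing to polynomials of degree < D

# Public generators g_0..g_{D-1} of the order-Q subgroup (4 generates it).
g = [pow(4, secrets.randbelow(Q - 1) + 1, P) for _ in range(D)]

def commit(f: list[int]) -> int:
    """Pedersen-style commitment: prod g_i^{f_i} mod P (homomorphic in f)."""
    c = 1
    for gi, fi in zip(g, f):
        c = c * pow(gi, fi % Q, P) % P
    return c

# For any scalar r, commit(f1) * commit(f2)^r commits to f1 + r*f2.
f1 = [secrets.randbelow(Q) for _ in range(D)]
f2 = [secrets.randbelow(Q) for _ in range(D)]
r = secrets.randbelow(Q)
lhs = commit(f1) * pow(commit(f2), r, P) % P
rhs = commit([(a + r * b) % Q for a, b in zip(f1, f2)])
assert lhs == rhs
```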
2021
CRYPTO
Large Message Homomorphic Secret Sharing from DCR and Applications 📺
We present the first homomorphic secret sharing (HSS) construction that simultaneously (1) has negligible correctness error, (2) supports integers from an exponentially large range, and (3) relies on an assumption not known to imply FHE --- specifically, the Decisional Composite Residuosity (DCR) assumption. This resolves an open question posed by Boyle, Gilboa, and Ishai (Crypto 2016). Homomorphic secret sharing is analogous to fully-homomorphic encryption, except the ciphertexts are shared across two non-colluding evaluators. Previous constructions of HSS either had non-negligible correctness error and polynomial-size plaintext space or were based on the stronger LWE assumption. We also present two applications of our technique: a multi-server ORAM with constant bandwidth overhead, and a rate-$1$ trapdoor hash function with negligible error rate.
2021
CRYPTO
Towards faster polynomial-time lattice reduction 📺
The LLL algorithm is a polynomial-time algorithm for reducing d-dimensional lattices with an exponential approximation factor. Currently, the most efficient variant of LLL, by Neumaier and Stehl\'e, has a theoretical running time in $d^4\cdot B^{1+o(1)}$ where $B$ is the bitlength of the entries, but has never been implemented. This work introduces new asymptotically fast, parallel, yet heuristic, reduction algorithms with their optimized implementations. Our algorithms are recursive and fully exploit fast block matrix multiplication. We experimentally demonstrate that by carefully controlling the floating-point precision during the recursion steps, we can reduce Euclidean lattices of rank d in time $\tilde{O}(d^\omega\cdot C)$, i.e., almost a constant number of matrix multiplications, where $\omega$ is the exponent of matrix multiplication and C is the log of the condition number of the matrix. For cryptographic applications, C is close to B, while it can be up to d times larger in the worst case. This improves the running time of the state-of-the-art implementation fplll by a multiplicative factor of order $d^2\cdot B$. Further, we show that we can reduce structured lattices, the so-called knapsack lattices, in time $\tilde{O}(d^{\omega-1}\cdot C)$ with a progressive reduction strategy. Besides allowing the reduction of huge lattices, our implementation can break several instances of fully homomorphic encryption schemes based on large integers in dimension 2,230 with 4 million bits.
2021
CRYPTO
Composition with Knowledge Assumptions 📺
Zero-knowledge succinct non-interactive arguments (zk-SNARKs) rely on knowledge assumptions for their security. Meanwhile, as the complexity and scale of cryptographic systems continue to grow, the composition of secure protocols is of vital importance. The current gold standards of composable security, the Universal Composability and Constructive Cryptography frameworks, cannot capture knowledge assumptions, as their core proofs of composition prohibit white-box extraction. In this paper, we present a formal model allowing the composition of knowledge assumptions. Despite showing impossibility for the general case, we demonstrate the model's usefulness when limiting knowledge assumptions to a few instances of protocols at a time. We finish by providing the first instance of a simultaneously succinct and composable zk-SNARK, by using existing results within our framework.
2021
CRYPTO
Low-Complexity Weak Pseudorandom Functions in AC0[MOD2] 📺
A *weak pseudorandom function* (WPRF) is a keyed function $f_k:\{0,1\}^n\to\{0,1\}$ such that, for a random key $k$, a collection of samples $(x, f_k(x))$, for {\em uniformly random} inputs $x$, cannot be efficiently distinguished from totally random input-output pairs $(x,y)$. We study WPRFs in AC0[MOD2], the class of functions computable by AC0 circuits with parity gates, making the following contributions.  - *Between Lapland and Cryptomania.* We show that WPRFs in AC0[MOD2] imply a variant of the Learning Parity with Noise (LPN) assumption. This gives an unconditional version of an earlier conditional result of Akavia et al. (ITCS 2014). We further show that WPRFs in a subclass of AC0[MOD2] that includes a recent WPRF candidate by Boyle et al. (FOCS 2020) imply, under a seemingly weak additional conjecture, public-key encryption.  - *WPRF by sparse polynomials.* We propose the first WPRF candidate that can be computed by sparse multivariate polynomials over $\mathbb{F}_2$. We prove that it has subexponential security against linear and algebraic attacks. - *WPRF in AC0 ◦ MOD2.* We study the existence of WPRFs computed by AC0 circuits \emph{over} parity gates. We propose a modified version of a previous WPRF candidate of Akavia et al., and prove that it resists the algebraic attacks that were used by Bogdanov and Rosen (ECCC 2017) to break the original candidate in quasipolynomial time. We give evidence against the possibility of using {\em public} parity gates and relate this question to other conjectures.
2021
CRYPTO
Asymptotically-Good Arithmetic Secret Sharing over Z/p^{\ell}Z with Strong Multiplication and Its Applications to Efficient MPC 📺
The current paper studies information-theoretically secure multiparty computation (MPC) over rings $\mathbb{Z}/p^{\ell}\mathbb{Z}$, following up on recent work in this setting. In a TCC 2019 work, a protocol based on Shamir secret sharing over $\mathbb{Z}/p^{\ell}\mathbb{Z}$ was presented. As in the field case, its limitation is that the share size has to grow as the number of players increases. Several MPC protocols were then developed in an Asiacrypt 2020 work to overcome this limitation. However, those MPC protocols suffer from several drawbacks: (i) the offline multiplication gate has super-linear communication complexity; (ii) the share size is doubled for the most important case, namely over $\mathbb{Z}/2^{\ell}\mathbb{Z}$, due to infeasible lifting of self-orthogonal codes from fields to rings; (iii) most importantly, the BGW model could not be applied via the secret sharing given in that work due to a lack of strong multiplication. Our contribution in this paper is threefold. Firstly, we overcome all the drawbacks mentioned above. Secondly, we establish an arithmetic secret sharing with strong multiplication, which is the most important primitive in the BGW model. Thirdly, we lift Reverse Multiplication Friendly Embeddings (RMFE) from fields to rings, with the same (linear) complexity. Note that RMFE has become a standard technique for amortizing communication complexity in MPC, as in works at CRYPTO 2018 and CRYPTO 2019. To obtain our theoretical results, we use the existence of lifts of curves over rings, then use the known results stating that Riemann-Roch spaces are free modules. To make our scheme practical, we start from good algebraic geometry codes over finite fields obtained from existing computational techniques. Then we present, and implement, an efficient algorithm to Hensel-lift the generating matrix of the code, such that the multiplicative conditions are preserved over rings. Existence of this specific lift is guaranteed by the previous theory. On the other hand, a random lifting of codes from fields to Galois rings does not preserve multiplicativity in general. (Notice that our indirect method is motivated by the fact that following the theory directly would require ``preprocessing'' the curve into a form with ``smooth'' equations, in particular with many variables, before lifting it; computing on these objects over rings is out of the scope of existing research.) Finally we provide efficient elementary methods for sharing and (robust) reconstruction of secrets over rings. As a result, arithmetic secret sharing over $\mathbb{Z}/p^{\ell}\mathbb{Z}$ with strong multiplication can be efficiently constructed and practically applied.
2021
CRYPTO
Separating Adaptive Streaming from Oblivious Streaming using the Bounded Storage Model 📺
Streaming algorithms are algorithms for processing large data streams, using only a limited amount of memory. Classical streaming algorithms typically work under the assumption that the input stream is chosen independently from the internal state of the algorithm. Algorithms that utilize this assumption are called oblivious algorithms. Recently, there has been growing interest in studying streaming algorithms that maintain utility also when the input stream is chosen by an adaptive adversary, possibly as a function of previous estimates given by the streaming algorithm. Such streaming algorithms are said to be adversarially robust. By combining techniques from learning theory with cryptographic tools from the bounded storage model, we separate the oblivious streaming model from the adversarially robust streaming model. Specifically, we present a streaming problem for which every adversarially robust streaming algorithm must use polynomial space, while there exists a classical (oblivious) streaming algorithm that uses only polylogarithmic space. This is the first general separation between the capabilities of these two models, resolving one of the central open questions in adversarially robust streaming.
2021
CRYPTO
Sublinear GMW-Style Compiler for MPC with Preprocessing 📺
We consider the efficiency of protocols for secure multiparty computation (MPC) with a dishonest majority. A popular approach for the design of such protocols is to employ {\em preprocessing}. Before the inputs are known, the parties generate correlated secret randomness, which is consumed by a fast and ``information-theoretic'' online protocol. A powerful technique for securing such protocols against malicious parties uses {\em homomorphic MACs} to authenticate the values produced by the online protocol. Compared to a baseline protocol, which is only secure against semi-honest parties, this involves a significant increase in the size of the correlated randomness, by a factor of up to a statistical security parameter. Different approaches for partially mitigating this extra storage cost come at the expense of increasing the online communication. In this work we propose a new technique for protecting MPC with preprocessing against malicious parties. We show that for circuit evaluation protocols that satisfy mild security and structural requirements, that are met by almost all standard protocols with semi-honest security, the extra {\em additive} storage and online communication costs are both {\em logarithmic} in the circuit size. This applies to Boolean circuits and to arithmetic circuits over fields or rings, and to both information-theoretic and computationally secure protocols. Our protocol can be viewed as a sublinear information-theoretic variant of the celebrated ``GMW compiler'' that applies to MPC with preprocessing. Our compiler makes a novel use of the techniques of Boneh et al. (Crypto 2019) for sublinear distributed zero knowledge, which were previously only used in the setting of {\em honest-majority} MPC.
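The homomorphic-MAC idea referenced above can be illustrated with a minimal SPDZ-style sketch (a toy illustration, not this paper's compiler; in a real protocol the key and MACs are secret-shared among the parties):

```python
# Every value x is authenticated as m = alpha * x for a global secret key
# alpha, so MACs of linear combinations can be derived without interaction.

import secrets

P = 2**61 - 1                   # field for the toy example
alpha = secrets.randbelow(P)    # global MAC key (secret-shared in a real protocol)

def mac(x: int) -> int:
    return alpha * x % P

# Homomorphism: the MAC of a linear combination is the same linear combination
# of the MACs, so authenticated values support addition and multiplication by
# public constants "for free" in the online phase.
x, y, c = 11, 22, 7
assert (mac(x) + c * mac(y)) % P == mac((x + c * y) % P)
```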
2021
CRYPTO
You Only Speak Once: Secure MPC with Stateless Ephemeral Roles 📺
The inherent difficulty of maintaining stateful environments over long periods of time gave rise to the paradigm of serverless computing, where mostly-stateless components are deployed on demand to handle computation tasks, and are torn down once their task is complete. Serverless architecture could offer the added benefit of improved resistance to targeted denial-of-service attacks, by hiding from the attacker the physical machines involved in the protocol until after they complete their work. Realizing such protection, however, requires that the protocol only uses stateless parties, where each party sends only one message and never needs to speak again. Perhaps the most famous example of this style of protocol is the Nakamoto consensus protocol used in Bitcoin: a peer can win the right to produce the next block by running a local lottery (mining), all while staying covert. Once the right has been won, it is exercised by sending a single message. After that, the physical entity never needs to send more messages. We refer to this as the You-Only-Speak-Once (YOSO) property, and initiate the formal study of it within a new model that we call the YOSO model. Our model is centered around the notion of roles, which are stateless parties that can only send a single message. Crucially, our modelling separates the protocol design, which only uses roles, from the role-assignment mechanism, which assigns roles to actual physical entities. This separation enables studying these two aspects separately, and our YOSO model in this work only deals with the protocol-design aspect. We describe several techniques for achieving YOSO MPC, both computational and information-theoretic. Our protocols are synchronous and provide guaranteed output delivery (which is important for application domains such as blockchains), assuming an honest majority of roles in every time step. We describe a practically efficient computationally-secure protocol, as well as a proof-of-concept information-theoretically secure protocol.
2021
CRYPTO
Two-Round Trip Schnorr Multi-Signatures via Delinearized Witnesses 📺
We construct a two-round Schnorr-based signature scheme (DWMS) by delinearizing two pre-witnesses supplied by each signer. DWMS is a secure signature scheme in the algebraic group model (AGM) and the random oracle model (ROM) under the assumption of the hardness of the one-more discrete logarithm problem and the 2-entwined sum problem that we introduce in this paper. Our new m-entwined sum problem tweaks the k-sum problem in a scalar field using the associated group. We prove the hardness of our new problem in the AGM assuming the hardness of the discrete logarithm problem in the associated group. We believe that our new problem simplifies the security proofs of multi-signature schemes that use the delinearization of witnesses.
2021
CRYPTO
Compact Ring Signatures from Learning With Errors 📺
Ring signatures allow a user to sign a message on behalf of a ``ring'' of signers, while hiding the true identity of the signer. As the degree of anonymity guaranteed by a ring signature is directly proportional to the size of the ring, an important goal in cryptography is to study constructions that minimize the size of the signature as a function of the number of ring members. In this work, we present the first compact ring signature scheme (i.e., where the size of the signature grows logarithmically with the size of the ring) from the (plain) learning with errors (LWE) problem. The construction is in the standard model and it does not rely on a trusted setup or on the random oracle heuristic. In contrast with the prior work of Backes et al. [EUROCRYPT 2019], our scheme does not rely on bilinear pairings, which allows us to show that the scheme is post-quantum secure assuming the quantum hardness of LWE. At the heart of our scheme is a new construction of compact and statistically witness-indistinguishable ZAP arguments for NP $\cap$ coNP, that we show to be sound based on the plain LWE assumption. Prior to our work, statistical ZAPs (for all of NP) were known to exist only assuming \emph{sub-exponential} LWE. We believe that this scheme might find further applications in the future.
2021
CRYPTO
Threshold Schnorr with Stateless Deterministic Signing from Standard Assumptions 📺
Schnorr's signature scheme permits an elegant threshold signing protocol due to its linear signing equation. However, each new signature consumes fresh randomness, which can be a major attack vector in practice. Sources of randomness in deployments are frequently either unreliable, or require state continuity, i.e., reliable fresh state resilient to rollbacks. State continuity is a notoriously difficult guarantee to achieve in practice, due to system crashes caused by software errors, malicious actors, or power supply interruptions (Parno et al., S&P '11). This is a non-issue for Schnorr variants such as EdDSA, which is specified to derive nonces deterministically as a function of the message and the secret key (a sketch of this derivation follows below). However, it is challenging to translate these benefits to the threshold setting, specifically to construct a threshold Schnorr scheme where signing neither requires parties to consume fresh randomness nor update long-term secret state. In this work, we construct a dishonest-majority threshold Schnorr protocol that enables such stateless deterministic nonce derivation using standardized block ciphers. Our core technical ingredients are new tools for the zero-knowledge from garbled circuits (ZKGC) paradigm to aid in verifying correct nonce derivation: - A mechanism based on UC Commitments that allows a prover to commit once to a witness, and prove an unbounded number of statements online with only cheap symmetric-key operations. - A garbling gadget to translate intermediate garbled circuit wire labels to arithmetic encodings. A proof per our scheme requires only a small constant number of exponentiations.
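A hedged sketch of the deterministic nonce derivation mentioned above, in the style of single-party EdDSA (RFC 8032); this illustrates the non-threshold baseline, not this paper's protocol:

```python
# The nonce is a hash of a secret prefix and the message, so no fresh
# randomness or updated state is needed per signature.

import hashlib

GROUP_ORDER = 2**252 + 27742317777372353535851937790883648493  # Ed25519 subgroup order

def deterministic_nonce(secret_prefix: bytes, message: bytes) -> int:
    h = hashlib.sha512(secret_prefix + message).digest()
    return int.from_bytes(h, "little") % GROUP_ORDER

# The same (key, message) pair always yields the same nonce -- a rollback or
# crash can never cause two different signatures with a reused nonce.
assert deterministic_nonce(b"k" * 32, b"msg") == deterministic_nonce(b"k" * 32, b"msg")
```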
2021
CRYPTO
Cryptanalysis of Full LowMC and LowMC-M with Algebraic Techniques 📺
In this paper, we revisit the difference enumeration techniques for LowMC and develop new algebraic techniques to achieve efficient key-recovery attacks with negligible memory complexity. Benefiting from our technique to reduce the memory complexity, we can significantly improve the attacks on LowMC when the block size is much larger than the key size, and can even break LowMC for such parameters. On the other hand, with our new key-recovery technique, we can significantly improve the time to retrieve the full key if given only a single pair of input and output messages together with the difference trail that they take, which was stated as an interesting question by Rechberger et al. in ToSC 2018. Combining both techniques, with only 2 chosen plaintexts, we can break 4 rounds of LowMC adopting a full S-box layer with block sizes of 129, 192 and 255 bits, respectively, which are the 3 recommended parameters for Picnic3, an alternative third-round candidate in NIST's Post-Quantum Cryptography competition. We have to emphasize that our attacks do not indicate that Picnic3 is broken, as the Picnic use-case is very different and an attacker cannot even freely choose 2 plaintexts to encrypt for a concrete LowMC instance. However, such parameters are deemed secure in the latest LowMC. Moreover, many more rounds of the seven instances of the backdoored cipher LowMC-M proposed by Peyrin and Wang at CRYPTO 2020 can be broken without finding the backdoor, by making full use of the allowed $2^{64}$ data. All of the above-mentioned attacks are achieved with negligible memory.
2021
CRYPTO
Mac'n'Cheese: Zero-Knowledge Proofs for Boolean and Arithmetic Circuits with Nested Disjunctions 📺
Zero knowledge proofs are an important building block in many cryptographic applications. Unfortunately, when the proof statements become very large, existing zero-knowledge proof systems easily reach their limits: either the computational overhead, the memory footprint, or the required bandwidth exceed levels that would be tolerable in practice. We present an interactive zero-knowledge proof system for boolean and arithmetic circuits, called Mac'n'Cheese, with a focus on supporting large circuits. Our work follows the commit-and-prove paradigm instantiated using information-theoretic MACs based on vector oblivious linear evaluation to achieve high efficiency. We additionally show how to optimize disjunctions, with a general OR transformation for proving the disjunction of $m$ statements that has communication complexity proportional to the longest statement (plus an additive term logarithmic in $m$). These disjunctions can further be \emph{nested}, allowing efficient proofs about complex statements with many levels of disjunctions. We also show how to make Mac'n'Cheese non-interactive (after a preprocessing phase) using the Fiat-Shamir transform, and with only a small degradation in soundness. We have implemented the online phase of Mac'n'Cheese and achieve a runtime of 144 ns per AND gate and 1.5 $\mu$s per multiplication gate in $\mathbb{F}_{2^{61}-1}$ when run over a network with a 95 ms latency and a bandwidth of 31.5 Mbps. In addition, we show that the disjunction optimization improves communication as expected: when proving a boolean circuit with eight branches and each branch containing roughly 1 billion multiplications, Mac'n'Cheese requires only 75 more bytes to communicate than in the single-branch case.
2021
CRYPTO
Thinking Outside the Superbox 📺
Designing a block cipher or cryptographic permutation can be approached in many different ways. One such approach, popularized by AES, consists in grouping the bits along the S-box boundaries, e.g., in bytes, and in consistently processing them in these groups. This aligned approach leads to hierarchical structures like superboxes that make it possible to reason about the differential and linear propagation properties using combinatorial arguments. In contrast, an unaligned approach avoids any such grouping in the design of transformations. However, without hierarchical structure, sophisticated computer programs are required to investigate the differential and linear propagation properties of the primitive. In this paper, we formalize this notion of alignment and study four primitives that are exponents of different design strategies. We propose a way to analyze the interactions between the linear and the nonlinear layers w.r.t. the differential and linear propagation, and we use it to systematically compare the four primitives using non-trivial computer experiments. We show that alignment naturally leads to different forms of clustering, e.g., of active bits in boxes, of two-round trails in activity patterns, and of trails in differentials and linear approximations.
2021
CRYPTO
Silver: Silent VOLE and Oblivious Transfer from Hardness of Decoding Structured LDPC Codes 📺
We put forth new protocols for oblivious transfer extension and vector OLE, called \emph{Silver}, for SILent Vole and oblivious transfER. Silver offers extremely high performance: generating 10 million random OTs on one core of a standard laptop requires only 300ms of computation and 122KB of communication. This represents 37% less computation and ~1300x less communication than the standard IKNP protocol, as well as ~4x less computation and ~4x less communication than the recent protocol of Yang et al. (CCS 2020). Silver is \emph{silent}: after a one-time cheap interaction, two parties can store small seeds, from which they can later \emph{locally} generate a large number of OTs \emph{while remaining offline}. Neither IKNP nor Yang et al. enjoys this feature; compared to the best known silent OT extension protocol of Boyle et al. (CCS 2019), upon which we build, Silver has 19x less computation and the same communication. Due to its attractive efficiency features, Silver yields major efficiency improvements in numerous MPC protocols. Our approach is a radical departure from the standard paradigm for building MPC protocols, in that we do \emph{not} attempt to base our constructions on a well-studied assumption. Rather, we follow an approach closer in spirit to the standard paradigm in the design of symmetric primitives: we identify a set of fundamental structural properties that allow us to withstand all known attacks, and put forth a candidate design guided by our analysis. We also rely on extensive experiments to analyze our candidate and experimentally validate its properties. In essence, our approach boils down to constructing new families of linear codes with (plausibly) high minimum distance and extremely low encoding time. While further analysis is of course warranted to confidently assess the security of Silver, we hope and believe that initiating this approach to the design of MPC primitives will pave the way to new secure primitives with extremely attractive efficiency features.
2021
CRYPTO
A World of SNARKS 📺
Invited talk
2021
CRYPTO
Subtractive Sets over Cyclotomic Rings: Limits of Schnorr-like Arguments over Lattices 📺
We study when (dual) Vandermonde systems of the form `V_T ⋅ z = s⋅w` admit a solution `z` over a ring `R`, where `V_T` is the Vandermonde matrix defined by a set `T` and where the “slack” `s` is a measure of the quality of solutions. To this end, we propose the notion of `(s,t)`-subtractive sets over a ring `R`, with the property that if `S` is `(s,t)`-subtractive then the above (dual) Vandermonde systems defined by any `t`-subset `T ⊆ S` are solvable over `R`. The challenge is then to find large sets `S` while minimising (the norm of) `s` when given a ring `R`. By constructing families of `(s,t)`-subtractive sets `S` of size `n = poly(λ)` over cyclotomic rings `R = ZZ[ζ_{p^ℓ}]` for prime `p`, we construct Schnorr-like lattice-based proofs of knowledge for the SIS relation `A ⋅ x = s ⋅ y mod q` with `O(1/n)` knowledge error, and `s=1` in case `p = poly(λ)`. Our technique slots naturally into the lattice Bulletproof framework from Crypto’20, producing lattice-based succinct arguments for NP with better parameters. We then give matching impossibility results constraining `n` relative to `s`, which suggest that our Bulletproof-compatible protocols are optimal unless fundamentally new techniques are discovered. Noting that the knowledge error of lattice Bulletproofs is `Ω(log k/n)` for witnesses in `R^k` and subtractive set size `n`, our result represents a barrier to practically efficient lattice-based succinct arguments in the Bulletproof framework. Beyond these main results, the concept of `(s,t)`-subtractive sets bridges group-based threshold cryptography to the lattice settings, which we demonstrate by relating it to distributed pseudorandom functions.
2021
CRYPTO
Lower bounds on lattice sieving and information set decoding 📺
In two of the main areas of post-quantum cryptography, based on lattices and codes, nearest neighbor techniques have been used to speed up state-of-the-art cryptanalytic algorithms, and to obtain the lowest asymptotic cost estimates to date [May--Ozerov, Eurocrypt'15; Becker--Ducas--Gama--Laarhoven, SODA'16]. These upper bounds are useful for assessing the security of cryptosystems against known attacks, but to guarantee long-term security one would like to have closely matching lower bounds, showing that improvements on the algorithmic side will not drastically reduce the security in the future. As existing lower bounds from the nearest neighbor literature do not apply to the nearest neighbor problems appearing in this context, one might wonder whether further speedups to these cryptanalytic algorithms can still be found by only improving the nearest neighbor subroutines. We derive new lower bounds on the costs of solving the nearest neighbor search problems appearing in these cryptanalytic settings. For the Euclidean metric we show that for random data sets on the sphere, the locality-sensitive filtering approach of [Becker--Ducas--Gama--Laarhoven, SODA 2016] using spherical caps is optimal, and hence within a broad class of lattice sieving algorithms covering almost all approaches to date, their asymptotic time complexity of $2^{0.292d + o(d)}$ is optimal. Similar conditional optimality results apply to lattice sieving variants, such as the $2^{0.265d + o(d)}$ complexity for quantum sieving [Laarhoven, PhD thesis 2016] and previously derived complexity estimates for tuple sieving [Herold--Kirshanova--Laarhoven, PKC 2018]. For the Hamming metric we derive new lower bounds for nearest neighbor searching which almost match the best upper bounds from the literature [May--Ozerov, Eurocrypt 2015]. As a consequence we derive conditional lower bounds on decoding attacks, showing that also here one should search for improvements elsewhere to significantly undermine security estimates from the literature.
2021
CRYPTO
Non-Interactive Secure Multiparty Computation for Symmetric Functions, Revisited: More Efficient Constructions and Extensions 📺
Non-interactive secure multiparty computation (NIMPC) is a variant of secure computation which allows each of $n$ players to send only a single message depending on his input and correlated randomness. Abelian programs, which can realize any symmetric function, are defined as functions on the sum of the players' inputs over an abelian group and provide useful functionalities for real-world applications. We improve and extend the previous results in the following ways: - We present NIMPC protocols for abelian programs that improve the best known communication complexity. If inputs take any value of an abelian group $\mathbb{G}$, our protocol achieves the communication complexity $O(|\mathbb{G}|(\log|\mathbb{G}|)^2)$ improving $O(|\mathbb{G}|^2n^2)$ of Beimel et al. (Crypto 2014). If players are limited to inputs from subsets of size at most $d$, our protocol achieves $|\mathbb{G}|(\log|\mathbb{G}|)^2(\max\{n,d\})^{(1+o(1))t}$ where $t$ is a corruption threshold. This result improves $|\mathbb{G}|^3(nd)^{(1+o(1))t}$ of Beimel et al. (Crypto 2014), and even $|\mathbb{G}|^{\log n+O(1)}n$ of Benhamouda et al. (Crypto 2017) if $t=o(\log n)$ and $|\mathbb{G}|=n^{\Theta(1)}$. - We propose for the first time NIMPC protocols for linear classifiers that are more efficient than those obtained from the generic construction. - We revisit a known transformation of Benhamouda et al. (Crypto 2017) from Private Simultaneous Messages (PSM) to NIMPC, which we repeatedly use in the above results. We reveal that a sub-protocol used in the transformation does not satisfy the specified security. We also fix their protocol with only constant overhead in the communication complexity. As a byproduct, we obtain an NIMPC protocol for indicator functions with asymptotically optimal communication complexity with respect to the input length.
2021
CRYPTO
Linear Cryptanalysis of FF3-1 and FEA 📺
Best Paper by Early Career Researchers Award, Honorable mention for best paper
Improved attacks on generic small-domain Feistel ciphers with alternating round tweaks are obtained using linear cryptanalysis. This results in practical distinguishing and message-recovery attacks on the United States format-preserving encryption standard FF3-1 and the South-Korean standards FEA-1 and FEA-2. The data-complexity of the proposed attacks on FF3-1 and FEA-1 is $O(N^{r/2 - 1.5})$, where $N^2$ is the domain size and $r$ is the number of rounds. For example, FF3-1 with $N = 10^3$ can be distinguished from an ideal tweakable block cipher with advantage $\ge 1/10$ using $2^{23}$ encryption queries. Recovering the left half of a message with similar advantage requires $2^{24}$ data. The analysis of FF3-1 serves as an interesting real-world application of (generalized) linear cryptanalysis over the group $\mathbb{Z}/N\mathbb{Z}$.
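As a rough sanity check on these numbers (using the $r = 8$ rounds that FF3-1 specifies, and ignoring the constant hidden in the $O(\cdot)$): $N^{r/2 - 1.5} = (10^3)^{2.5} = 10^{7.5} \approx 2^{24.9}$, which matches the order of magnitude of the $2^{23}$ distinguishing and $2^{24}$ message-recovery data counts quoted above.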
2021
CRYPTO
Multi-Input Quadratic Functional Encryption from Pairings 📺
We construct the first multi-input functional encryption (MIFE) scheme for quadratic functions from pairings. Our construction supports a polynomial number of users, where user $i$, for $i \in [n]$, encrypts input $\mathbf{x}_i \in \mathbb{Z}^m$ to obtain ciphertext $\mathsf{ct}_i$; the key generator provides a key $\mathsf{sk}_{\mathbf{c}}$ for a vector $\mathbf{c} \in \mathbb{Z}^{(mn)^2}$; and decryption, given $\mathsf{ct}_1,\ldots,\mathsf{ct}_n$ and $\mathsf{sk}_{\mathbf{c}}$, recovers $\langle \mathbf{c}, \mathbf{x} \otimes \mathbf{x} \rangle$ and nothing else, where $\mathbf{x} = (\mathbf{x}_1,\ldots,\mathbf{x}_n)$. We achieve indistinguishability-based (selective) security against unbounded collusions under the standard bilateral matrix Diffie-Hellman assumption. All previous MIFE schemes either support only inner products (linear functions) or rely on strong cryptographic assumptions such as indistinguishability obfuscation or multi-linear maps.
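A small Python illustration of the *functionality* computed here (the plaintext quantity that decryption recovers, not the pairing-based scheme itself):

```python
# Decryption with a key for c recovers <c, x (x) x>, the inner product of c
# with the tensor square of the concatenated inputs x = (x_1, ..., x_n).

def mife_quadratic_output(inputs: list[list[int]], c: list[int]) -> int:
    x = [v for xi in inputs for v in xi]       # concatenation, length m*n
    tensor = [a * b for a in x for b in x]     # x (x) x, length (m*n)^2
    assert len(c) == len(tensor)
    return sum(ci * ti for ci, ti in zip(c, tensor))

# Two users with m = 2: this key vector c selects the cross term x[0]*x[3].
out = mife_quadratic_output([[1, 2], [3, 4]], [0] * 3 + [1] + [0] * 12)
assert out == 1 * 4
```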
2021
CRYPTO
Deniable Fully Homomorphic Encryption from Learning With Errors 📺
We define and construct {\it Deniable Fully Homomorphic Encryption} based on the Learning With Errors (LWE) polynomial hardness assumption. Deniable FHE enables storing encrypted data in the cloud to be processed securely without decryption while maintaining deniability of the encrypted data, and enables the prevention of vote-buying in electronic voting schemes where encrypted votes can be tallied without decryption. Our constructions achieve {\it compactness} independently of the level of deniability: both the size of the public key and the size of the ciphertexts are bounded by a fixed polynomial, independent of the faking probability achieved by the scheme. This is in contrast to all previous constructions of deniable encryption schemes (even without requiring homomorphisms) which are based on polynomial hardness assumptions, originating with the seminal work of Canetti, Dwork, Naor and Ostrovsky (CRYPTO 1997), in which the ciphertext size grows with the inverse of the faking probability. Canetti {\it et al.} argued that this dependence ``seems inherent'', but our constructions illustrate that this is not the case. We note that the Sahai-Waters (STOC 13) construction of deniable encryption from indistinguishability obfuscation achieves compactness and can be easily modified to achieve deniable FHE as well, but it requires multiple, stronger sub-exponential hardness assumptions, which are furthermore not post-quantum secure. In contrast, our constructions rely only on the LWE polynomial hardness assumption, as currently required for FHE even without deniability. The running time of our encryption algorithm depends on the inverse of the faking probability; thus the scheme falls short of achieving simultaneously compactness, negligible faking probability, {\it and} polynomial encryption time. Yet, we believe that achieving compactness is a fundamental step on the way to achieving all properties simultaneously, as has been the historical journey for other primitives such as functional encryption. Interestingly, we note that our constructions support large message spaces, whereas previous constructions were bit-by-bit, and can be run in an online-offline model of encryption, where the bulk of the computation is independent of the message and may be performed in an offline pre-processing phase. The running time of the online phase is independent of the faking probability, whereas the offline encryption run-time grows with the inverse of the faking probability. At the heart of our constructions is a new way to use bootstrapping to obliviously generate FHE ciphertexts so that it supports faking under coercion.
2021
CRYPTO
Revisiting the Security of DbHtS MACs: Beyond-Birthday-Bound in the Multi-User Setting 📺
Double-block Hash-then-Sum (\textsf{DbHtS}) MACs are a class of MACs that aim to achieve beyond-birthday-bound security, including \textsf{SUM-ECBC}, \textsf{PMAC\_Plus}, \textsf{3kf9} and \textsf{LightMAC\_Plus}. Recently Datta et al. (FSE'19), and then Kim et al. (Eurocrypt'20), proved that \textsf{DbHtS} constructions are secure beyond the birthday bound in the single-user setting. However, by a generic reduction, their results degrade to (or even become worse than) the birthday bound in the multi-user setting. In this work, we revisit the security of \textsf{DbHtS} MACs in the multi-user setting. We propose a generic framework to prove beyond-birthday-bound security for \textsf{DbHtS} constructions. We demonstrate the usability of this framework with applications to key-reduced variants of \textsf{DbHtS} MACs, including \textsf{2k-SUM-ECBC}, \textsf{2k-PMAC\_Plus} and \textsf{2k-LightMAC\_Plus}. Our results show that the security of these constructions will not degrade as the number of users grows. On the other hand, our results also indicate that these constructions are secure beyond the birthday bound in both the single-user and multi-user settings without additional domain separation, which was used in prior work to simplify the analysis. Moreover, we find a critical flaw in \textsf{2kf9}, which was proved to be secure beyond the birthday bound by Datta et al. (FSE'19). We can successfully forge a tag with probability 1 without making any queries. We go further to show attacks with birthday-bound complexity on several variants of \textsf{2kf9}.
2021
CRYPTO
A New Simple Technique to Bootstrap Various Lattice Zero-Knowledge Proofs to QROM Secure NIZKs 📺
Many of the recent advanced lattice-based Sigma-/public-coin honest-verifier zero-knowledge (HVZK) interactive protocols based on the techniques developed by Lyubashevsky (Asiacrypt'09, Eurocrypt'12) can be transformed into a non-interactive zero-knowledge (NIZK) proof in the random oracle model (ROM) using the Fiat-Shamir transform. Unfortunately, although they are known to be secure in the __classical__ ROM, existing proof techniques are incapable of proving them secure in the __quantum__ ROM (QROM). Alternatively, while we could instead rely on the Unruh transform (Eurocrypt'15), the resulting QROM-secure NIZK incurs a large overhead compared to the underlying interactive protocol. In this paper, we present a new simple semi-generic transform that compiles many existing lattice-based Sigma-/public-coin HVZK interactive protocols into QROM-secure NIZKs. Our transform builds on a new primitive called an __extractable linear homomorphic commitment__ protocol. The resulting NIZK has several appealing features: it is not only a proof of knowledge but also straight-line extractable; the proof overhead is smaller than that of the Unruh transform; it enjoys a relatively small reduction loss; and it requires minimal background on quantum computation. To illustrate the generality of our technique, we show how to transform the recent 5-round protocol with an exact sound proof by Bootle et al. (Crypto'19) into a QROM-secure NIZK while increasing the proof size by only a factor of 2.6. This compares favorably to the Unruh transform, which requires a factor of more than 50.
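For background, the Fiat-Shamir transform referenced here replaces the verifier's random challenge with a hash of the transcript. Below is a minimal Python sketch of the generic (classical) transform, with the underlying Sigma-protocol supplied as callbacks; all function names are illustrative, and this is not the paper's new QROM-secure compiler.

import hashlib

def fiat_shamir_prove(statement: bytes, prover_commit, prover_respond):
    # First move of the Sigma-protocol: commitment plus prover state.
    commitment, state = prover_commit(statement)
    # The verifier's random challenge is replaced by a hash of the
    # transcript so far; in the ROM this hash is modeled as random.
    challenge = hashlib.sha256(statement + commitment).digest()
    response = prover_respond(state, challenge)
    return commitment, response

def fiat_shamir_verify(statement, commitment, response, sigma_verify):
    # Recompute the challenge and run the Sigma-protocol verifier.
    challenge = hashlib.sha256(statement + commitment).digest()
    return sigma_verify(statement, commitment, challenge, response)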
2021
CRYPTO
Subquadratic SNARGs in the Random Oracle Model 📺
In a seminal work, Micali (FOCS 1994) gave the first succinct non-interactive argument (SNARG) in the random oracle model (ROM). The construction combines a PCP and a cryptographic commitment, and has several attractive features: it is plausibly post-quantum; it can be heuristically instantiated via lightweight cryptography; and it has a transparent (public-coin) parameter setup. However, it also has a significant drawback: a large argument size. In this work, we provide a new construction that achieves a smaller argument size. This is the first progress on the Micali construction since it was introduced over 25 years ago. A SNARG in the ROM is (t,ε)-secure if every t-query malicious prover can convince the verifier of a false statement with probability at most ε. For (t,ε)-security, the argument size of all known SNARGs in the ROM (including Micali's) is Õ((log (t/ε))^2) bits, *even* if one were to rely on conjectured probabilistic proofs well beyond current techniques. In practice, these costs lead to SNARGs that are much larger than constructions based on other (pre-quantum and costly) tools. This has led many to believe that SNARGs in the ROM are inherently quadratic. We show that this is not the case. We present a SNARG in the ROM with a sub-quadratic argument size: Õ(log (t/ε) * log t). Our construction relies on a strong soundness notion for PCPs and a weak binding notion for commitments. We hope that our work paves the way for understanding if a linear argument size, that is O(log (t/ε)), is achievable in the ROM.
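For intuition about the quantitative gap, the following back-of-the-envelope computation compares the two bounds at an illustrative (t, ε) target, ignoring the polylogarithmic factors hidden by the O-tilde notation; the parameter values are assumptions of this sketch, not taken from the paper.

from math import log2

t, eps = 2**64, 2**-128                  # illustrative security target
quadratic    = log2(t / eps) ** 2        # Micali-style: (log(t/eps))^2
subquadratic = log2(t / eps) * log2(t)   # this work: log(t/eps) * log t
print(quadratic, subquadratic, quadratic / subquadratic)
# 36864.0 12288.0 3.0 -> roughly a 3x size reduction at these parameters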
2021
CRYPTO
Lattice Reduction with Approximate Enumeration Oracles: Practical Algorithms and Concrete Performance 📺
This work provides a systematic investigation of the use of approximate enumeration oracles in BKZ, building on recent technical progress in speeding up lattice enumeration: relaxing (the search radius of) enumeration, and extended preprocessing, which preprocesses in a larger rank than the enumeration rank. First, we heuristically justify that relaxed enumeration with certain extreme pruning asymptotically achieves an exponential speed-up for reaching the same root Hermite factor (RHF). Second, we perform simulations/experiments to validate this and the performance of relaxed enumeration with numerically optimised pruning, for both regular and extended preprocessing. Upgrading BKZ with such approximate enumeration oracles gives rise to our main result: a practical and (compared to previous work) faster polynomial-space lattice reduction algorithm for reaching the same RHF in practical and cryptographic parameter ranges. We assess its concrete time/quality performance with extensive simulations and experiments. As a consequence, we update the extrapolation of the crossover rank between a square-root cost estimate for quantum enumeration using our algorithm and the Core-SVP cost estimate for quantum sieving to 547.
2021
CRYPTO
Secure Wire Shuffling in the Probing Model 📺
In this paper we describe the first improvement of the wire shuffling countermeasure against side-channel attacks described by Ishai, Sahai and Wagner at Crypto 2003. More precisely, we show how to get worst-case statistical security against t probes with running time O(t) instead of O(t log t); our construction is also much simpler. Recall that the classical masking countermeasure achieves perfect security, but with running time O(t^2). We also describe a practical implementation for AES that outperforms the masking countermeasure for t ≥ 6000.
2021
CRYPTO
SSE and SSD: Page-Efficient Searchable Symmetric Encryption 📺
Searchable Symmetric Encryption (SSE) enables a client to outsource a database to an untrusted server, while retaining the ability to securely search the data. The performance bottleneck of classic SSE schemes typically does not come from their fast, symmetric cryptographic operations, but rather from the cost of memory accesses. To address this issue, many works in the literature have considered the notion of locality, a simple design criterion that helps capture the cost of memory accesses in traditional storage media, such as Hard Disk Drives. A common thread among many SSE schemes aiming to improve locality is that they are built on top of new memory allocation schemes, which form the technical core of the constructions. The starting observation of this work is that for newer storage media such as Solid State Drives (SSDs), which have become increasingly common, locality is not a good predictor of practical performance. Instead, SSD performance mainly depends on page efficiency, that is, reading as few pages as possible. We define this notion, and identify a simple allocation problem, Data-Independent Packing, that captures the main technical challenge required to build page-efficient SSE. As our main result, we build a page-efficient and storage-efficient data-independent packing scheme, and deduce an SSE scheme with the same properties. The technical core of the result is a new generalization of cuckoo hashing to items of variable size. Practical experiments show that this approach achieves excellent performance.
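As background for the technical core, standard cuckoo hashing for fixed-size items places each key in one of two tables, evicting and re-inserting on collision. The paper's contribution generalizes this to items of variable size, which the following minimal sketch (with illustrative hash choices) does not attempt to capture.

class Cuckoo:
    def __init__(self, size: int):
        self.t1, self.t2 = [None] * size, [None] * size
        self.size = size

    def _h1(self, k): return hash(("h1", k)) % self.size
    def _h2(self, k): return hash(("h2", k)) % self.size

    def insert(self, key, max_kicks: int = 50):
        for _ in range(max_kicks):
            i = self._h1(key)
            if self.t1[i] is None:
                self.t1[i] = key
                return
            self.t1[i], key = key, self.t1[i]   # evict occupant of table 1
            j = self._h2(key)
            if self.t2[j] is None:
                self.t2[j] = key
                return
            self.t2[j], key = key, self.t2[j]   # evict occupant of table 2
        raise RuntimeError("insertion failed; a table rebuild would be needed")

c = Cuckoo(16)
for k in range(10):
    c.insert(k)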
2021
CRYPTO
ATLAS: Efficient and Scalable MPC in the Honest Majority Setting 📺
In this work, we address the communication, computation, and round efficiency of unconditionally secure multi-party computation for arithmetic circuits in the honest majority setting. We achieve both algorithmic and practical improvements: - The best known result in the semi-honest setting has been due to Damgard and Nielsen (CRYPTO 2007). Over the last decade, their construction has played an important role in the progress of efficient secure computation. However, despite a number of follow-up works, significant improvements to the basic semi-honest protocol have been hard to come by. We show a 33% improvement in the communication complexity of this protocol. We show how to generalize this result to the malicious setting, leading to the best known unconditional honest majority MPC with malicious security. - We focus on the round complexity of the Damgard and Nielsen protocol and improve it by a factor of 2. Our improvement relies on a novel observation about the interplay between Damgard-Nielsen multiplication and Beaver triple multiplication (a sketch of the latter follows below). An implementation of our constructions shows an execution run-time improvement over the state of the art ranging from 30% to 50%.
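For background on the second item, Beaver-triple multiplication consumes a preprocessed triple c = a·b to multiply secret-shared values using only two openings. A minimal two-party additive-sharing sketch with an illustrative modulus; this is textbook background, not the ATLAS protocol itself.

import random
P = 2**61 - 1  # illustrative prime modulus

def share(x):                       # additive 2-out-of-2 sharing mod P
    r = random.randrange(P)
    return [r, (x - r) % P]

def reconstruct(sh):
    return sum(sh) % P

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    # Preprocessed triple: c = a * b. The parties open only d = x - a
    # and e = y - b, then locally compute shares of
    # x*y = c + d*b + e*a + d*e.
    d_sh = [(x_sh[i] - a_sh[i]) % P for i in range(2)]
    e_sh = [(y_sh[i] - b_sh[i]) % P for i in range(2)]
    d, e = reconstruct(d_sh), reconstruct(e_sh)   # the only opened values
    z_sh = [(c_sh[i] + d * b_sh[i] + e * a_sh[i]) % P for i in range(2)]
    z_sh[0] = (z_sh[0] + d * e) % P               # public term, added once
    return z_sh

a, b = random.randrange(P), random.randrange(P)
z = beaver_mul(share(7), share(9), share(a), share(b), share(a * b % P))
assert reconstruct(z) == 63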
2021
CRYPTO
Unconditional Communication-Efficient MPC via Hall's Marriage Theorem 📺
The best known $n$-party unconditional multiparty computation protocols with an optimal corruption threshold communicate O(n) field elements per gate. This has been the case even in the semi-honest setting, despite over a decade of research on communication complexity in this setting. Moving to the slightly sub-optimal corruption setting, the work of Damgard, Ishai, and Kroigaard (EUROCRYPT 2010) provided the first protocol for a single circuit achieving communication complexity of O(log |C|) elements per gate. While a number of works have improved upon this result, obtaining a protocol with O(1) field elements per gate has remained an open problem. In this work, we construct the first unconditional multi-party computation protocol evaluating a single arithmetic circuit with amortized communication complexity of O(1) elements per gate.
2021
CRYPTO
Puncturable Pseudorandom Sets and Private Information Retrieval with Near-Optimal Online Bandwidth and Time 📺
Imagine one or more non-colluding servers each holding a large public database, e.g., the repository of DNS entries. Clients would like to access entries in this database without disclosing their queries to the servers. Classical private information retrieval (PIR) schemes achieve polylogarithmic bandwidth per query, but require the server to perform linear computation per query, which is a significant barrier towards deployment. Several recent works showed, however, that by introducing a one-time, per-client, off-line preprocessing phase, an \emph{unbounded} number of client queries can be subsequently served with sublinear online computation time per query (and the cost of the preprocessing can be amortized over the unboundedly many queries). Existing preprocessing PIR schemes (supporting unbounded queries), unfortunately, make undesirable tradeoffs to achieve sublinear online computation: they are either significantly non-optimal in online time or bandwidth, or require the servers to store a linear amount of state per client or even per query, or require polylogarithmically many non-colluding servers. We propose a novel 2-server preprocessing PIR scheme that achieves $\widetilde{O}(\sqrt{n})$ online computation per query and $\widetilde{O}(\sqrt{n})$ client storage, while preserving the polylogarithmic online bandwidth of classical PIR schemes. Both the online bandwidth and computation are optimal up to a poly-logarithmic factor. In our construction, each server stores only the original database and nothing extra, and each online query is served within a single round trip. Our construction relies on the standard LWE assumption. As an important stepping stone, we propose new, more generalized definitions for a cryptographic object called a Privately Puncturable Pseudorandom Set, and give novel constructions that depart significantly from prior approaches.
2021
CRYPTO
On the Concurrent Composition of Quantum Zero-Knowledge 📺
We study the notion of zero-knowledge secure against quantum polynomial-time verifiers (referred to as quantum zero-knowledge) in the concurrent composition setting. Despite being extensively studied in the classical setting, concurrent composition in the quantum setting has hardly been studied. We initiate a formal study of concurrent quantum zero-knowledge. Our results are as follows: - Bounded Concurrent QZK for NP and QMA: Assuming post-quantum one-way functions, there exists a quantum zero-knowledge proof system for NP in the bounded concurrent setting. In this setting, we fix a priori the number of verifiers that can simultaneously interact with the prover. Under the same assumption, we also show that there exists a quantum zero-knowledge proof system for QMA in the bounded concurrency setting. - Quantum Proofs of Knowledge: Assuming quantum hardness of learning with errors (QLWE), there exists a bounded concurrent zero-knowledge proof system for NP satisfying the quantum proof-of-knowledge property. Our extraction mechanism simultaneously allows the extraction probability to be negligibly close to the acceptance probability (extractability) and ensures that the prover's state after extraction is statistically close to the prover's state after interacting with the verifier (simulatability). Even in the standalone setting, the seminal work of [Unruh EUROCRYPT'12], and all its follow-ups, satisfied a weaker version of the extractability property and, moreover, did not achieve simulatability. Our result yields a proof of {\em quantum knowledge} system for QMA with better parameters than prior works.
2021
CRYPTO
SMILE: Set Membership from Ideal Lattices with Applications to Ring Signatures and Confidential Transactions 📺
In a set membership proof, the public information consists of a set of elements and a commitment. The prover then produces a zero-knowledge proof showing that the commitment is indeed to some element from the set. This primitive is closely related to concepts like ring signatures and ``one-out-of-many'' proofs that underlie many anonymity and privacy protocols. The main result of this work is a new succinct lattice-based set membership proof whose size is logarithmic in the size of the set. We also give transformations of our set membership proof to a ring signature scheme and to a confidential transaction payment system. The ring signature size is also logarithmic in the size of the public key set and has size $16$~KB for a set of $2^5$ elements, and $22$~KB for a set of size $2^{25}$. At an approximately $128$-bit security level, these outputs are between 1.5X and 7X smaller than the current state of the art succinct ring signatures of Beullens et al. (Asiacrypt 2020) and Esgin et al. (CCS 2019). We then show that our ring signature, combined with a few other techniques and optimizations, can be turned into a fairly efficient Monero-like confidential transaction system based on the MatRiCT framework of Esgin et al. (CCS 2019). With our new techniques, we are able to reduce the transaction proof size by factors of about 4X - 10X over the aforementioned work. For example, a transaction with two inputs and two outputs, where each input is hidden among $2^{15}$ other accounts, requires approximately $30$KB in our protocol.
2021
CRYPTO
Functional Encryption for Turing Machines with Dynamic Bounded Collusion from LWE 📺
The classic work of Gorbunov, Vaikuntanathan and Wee (CRYPTO 2012) and follow-ups provided constructions of bounded collusion Functional Encryption (FE) for circuits from mild assumptions. In this work, we improve the state of affairs for bounded collusion FE in several ways: 1. {\it New Security Notion.} We introduce the notion of {\it dynamic} bounded collusion FE, where the declaration of the collusion bound is delayed to the time of encryption. This enables the encryptor to dynamically choose the collusion bound for different ciphertexts depending on their individual level of sensitivity. Hence, the ciphertext size grows linearly with its own collusion bound and the public key size is independent of the collusion bound. In contrast, all prior constructions have public key and ciphertext sizes that grow at least linearly with a fixed bound $Q$. 2. {\it CPFE for circuits with Dynamic Bounded Collusion.} We provide the first CPFE schemes for circuits enjoying dynamic bounded collusion security. By assuming identity based encryption (IBE), we construct CPFE for circuits of {\it unbounded} size satisfying {\it non-adaptive} simulation based security. By strengthening the underlying assumption to IBE with receiver selective opening security, we obtain CPFE for circuits of {\it bounded} size enjoying {\it adaptive} simulation based security. Moreover, we show that IBE is a necessary assumption for these primitives. Furthermore, by relying on the Learning With Errors (LWE) assumption, we obtain the first {\it succinct} CPFE for circuits, i.e. supporting circuits with unbounded size, but fixed output length and depth. This scheme achieves {\it adaptive} simulation based security. 3. {\it KPFE for circuits with dynamic bounded collusion.} We provide the first KPFE for circuits of unbounded size, but bounded depth and output length, satisfying dynamic bounded collusion security from LWE. Our construction achieves {\it adaptive} simulation security, improving the security of \cite{GKPVZ13a}. 4. {\it KP and CP FE for TM/NL with dynamic bounded collusion.} We provide the first KPFE and CPFE constructions of bounded collusion functional encryption for Turing machines in the public key setting from LWE. Our constructions achieve non-adaptive simulation based security. Both the input and the machine in our construction can be of {\it unbounded} polynomial length. We provide a variant of the above scheme that satisfies {\it adaptive} security, but at the cost of supporting a smaller class of computation, namely Nondeterministic Logarithmic-space (NL). Since NL contains Nondeterministic Finite Automata (NFA), this result subsumes {\it all} prior work on bounded collusion FE for uniform models from standard assumptions \cite{AMY19,AS17}.
2021
CRYPTO
Game-Theoretic Fairness Meets Multi-Party Protocols: The Case of Leader Election 📺
Suppose that $n$ players want to elect a random leader and they communicate by posting messages to a common broadcast channel. This problem is called leader election, and it is fundamental to the distributed systems and cryptography literature. Recently, it has attracted renewed interest due to its promised applications in decentralized environments. In a game-theoretically fair leader election protocol, roughly speaking, we want that even a majority coalition cannot increase its own chance of getting elected, nor hurt the chance of any honest individual. The folklore tournament-tree protocol, which completes in logarithmically many rounds, can easily be shown to satisfy game-theoretic security (a sketch of the protocol follows below). To the best of our knowledge, no sub-logarithmic-round protocol was known in the setting that we consider. We show that by adopting an appropriate notion of approximate game-theoretic fairness, and under standard cryptographic assumptions, we can achieve $(1-1/2^{\Theta(r)})$-fairness in $r$ rounds for $\Theta(\log \log n) \leq r \leq \Theta(\log n)$, where $n$ denotes the number of players. In particular, this means that we can approximately match the fairness of the tournament-tree protocol using as few as $O(\log \log n)$ rounds. We also prove a lower bound showing that logarithmically many rounds are necessary if we restrict ourselves to ``perfect'' game-theoretic fairness and protocols that are ``very similar in structure'' to the tournament-tree protocol. Although leader election is a well-studied problem in other contexts in distributed computing, our work is the first exploration of the round complexity of {\it game-theoretically fair} leader election in the presence of a possibly majority coalition. As a by-product of our exploration, we suggest a new, approximate game-theoretic fairness notion, called ``approximate sequential fairness'', which provides a more desirable solution concept than some previously studied approximate fairness notions.
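For reference, a compressed sketch of the folklore tournament-tree protocol: players are paired, each pair flips a joint coin via commit-then-reveal, and winners advance for log n rounds. The hash-based commitment and the omitted opening check are simplifications of this sketch, not a faithful security treatment.

import hashlib, os

def commit(bit: int):
    r = os.urandom(16)
    return hashlib.sha256(bytes([bit]) + r).digest(), (bit, r)

def tournament(players):
    while len(players) > 1:
        winners = []
        for p, q in zip(players[::2], players[1::2]):
            cp, opening = commit(os.urandom(1)[0] & 1)  # p commits first
            bq = os.urandom(1)[0] & 1                   # q replies in the clear
            bp, _ = opening            # p opens; omitted: q verifies cp
            winners.append(p if bp ^ bq == 0 else q)
        if len(players) % 2:
            winners.append(players[-1])                 # odd player gets a bye
        players = winners
    return players[0]

print(tournament(list(range(8))))   # log2(8) = 3 rounds of pairwise flips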
2021
CRYPTO
The Cost to Break SIKE: A Comparative Hardware-Based Analysis with AES and SHA-3 📺
This work presents a detailed study of the classical security of the post-quantum supersingular isogeny key encapsulation (SIKE) protocol using a realistic budget-based cost model that considers the actual computing and memory costs needed for cryptanalysis. In this effort, we design specially tailored hardware accelerators for the time-critical multiplication and isogeny computations, which we use to model an ASIC-powered instance of the van Oorschot-Wiener (vOW) parallel collision search algorithm. We then extend the analysis to AES and SHA-3 in the context of the NIST post-quantum cryptography standardization process to carry out a parameter analysis based on our cost model. This analysis, together with the state-of-the-art quantum security analysis of SIKE, indicates that the current SIKE parameters offer higher practical security than previously believed, closing an open issue on the suitability of the parameters to match NIST's security levels. In addition, we explore the possibility of using significantly smaller primes to enable more efficient and compact implementations with reduced bandwidth. Our improved cost model and analysis can be applied to other cryptographic settings and primitives, and can have implications for other post-quantum candidates in the NIST process.
2021
CRYPTO
Upslices, Downslices, and Secret-Sharing with Complexity of $1.5^n$ 📺
A secret-sharing scheme allows a dealer to distribute a secret $s$ among $n$ parties such that only some predefined ``authorized'' sets of parties can reconstruct the secret, and all other ``unauthorized'' sets learn nothing about $s$. The collection of authorized/unauthorized sets can be captured by a monotone function $f:\{0,1\}^n\rightarrow \{0,1\}$. In this paper, we focus on monotone functions all of whose min-terms are sets of size $a$, and on their duals -- monotone functions whose max-terms are of size $b$. We refer to these classes as $(a,n)$-\emph{upslices} and $(b,n)$-\emph{downslices}, and note that these natural families correspond to monotone $a$-regular DNFs and monotone $(n-b)$-regular CNFs (the naive term-by-term sharing of such a DNF is sketched below). We derive the following results. - (General downslices) Every downslice can be realized with total share size of $1.5^{n+o(n)}<2^{0.585 n}$. Since every monotone function can be cheaply decomposed into $n$ downslices, we obtain a similar result for general access structures, improving the previously known $2^{0.637n+o(n)}$ complexity of Applebaum, Beimel, Nir and Peter (STOC 2020). We also achieve a minor improvement in the exponent of linear secret-sharing schemes. - (Random mixture of upslices) Following Beimel and Farràs (TCC 2020), who studied the complexity of random DNFs with constant-size terms, we consider the following general distribution $F$ over monotone DNFs: for each width value $a\in [n]$, uniformly sample $k_a$ monotone terms of size $a$, where $\vec{k}=(k_1,\ldots,k_n)$ is an arbitrary vector of non-negative integers. We show that, except with exponentially small probability, $F$ can be realized with share size of $2^{0.5 n+o(n)}$ and can be linearly realized with an exponent strictly smaller than $2/3$. Our proof also provides a candidate distribution for an ``exponentially-hard'' access structure. We use our results to explore connections between several seemingly unrelated questions about the complexity of secret-sharing schemes, such as worst-case vs. average-case, linear vs. non-linear, and primal vs. dual access structures. We prove that, in at least one of these settings, there is a significant gap in secret-sharing complexity.
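As a baseline for these share-size bounds, the naive scheme for any monotone DNF hands each min-term an independent additive sharing of the secret, so the total share size grows with the number and width of the terms; the results above are about beating such naive bounds. A minimal sketch with an illustrative modulus:

import random
P = 2**61 - 1  # illustrative prime modulus

def share_dnf(secret: int, minterms, n: int):
    # One fresh additive sharing per min-term (authorized set).
    shares = {i: [] for i in range(n)}
    for term in minterms:                     # e.g. [(0, 1), (1, 2, 3)]
        pieces = [random.randrange(P) for _ in term[:-1]]
        pieces.append((secret - sum(pieces)) % P)   # pieces sum to secret
        for party, piece in zip(term, pieces):
            shares[party].append((tuple(term), piece))
    return shares

def reconstruct(term, shares):
    # Any min-term's parties can add up their pieces for that term.
    return sum(dict(shares[p])[tuple(term)] for p in term) % P

sh = share_dnf(7, [(0, 1), (1, 2, 3)], n=4)
assert reconstruct((1, 2, 3), sh) == 7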
2021
CRYPTO
Impossibility of Quantum Virtual Black-Box Obfuscation of Classical Circuits 📺
Virtual black-box obfuscation is a strong cryptographic primitive: it encrypts a circuit while maintaining its full input/output functionality. A remarkable result by Barak et al. (Crypto 2001) shows that a general obfuscator that obfuscates classical circuits into classical circuits cannot exist. A promising direction that circumvents this impossibility result is to obfuscate classical circuits into quantum states, which would potentially be better capable of hiding information about the obfuscated circuit. We show that, under the assumption that Learning With Errors (LWE) is hard for quantum computers, this quantum variant of virtual black-box obfuscation of classical circuits is generally impossible. On the way, we show that under the presence of dependent classical auxiliary input, even the small class of classical point functions cannot be quantum virtual black-box obfuscated.
2021
CRYPTO
Towards a Unified Approach to Black-Box Constructions of Zero-Knowledge Proofs 📺
General-purpose zero-knowledge proofs for all NP languages greatly simplify secure protocol design. However, they inherently require the code of the underlying relation. If the relation contains black-box calls to a cryptographic function, the code of that function must be known to use the ZK proof, even if both the relation and the proof require only black-box access to the function. Rosulek (Crypto'12) shows that non-trivial proofs for even simple statements, such as membership in the range of a one-way function, require non-black-box access. We propose an alternative approach to bypass Rosulek's impossibility result. Instead of asking for a ZK proof directly for the given one-way function $f$, we seek to construct a {\em new} one-way function $F$ given only black-box access to $f$, {\em and} an associated ZK protocol for proving non-trivial statements, such as range membership, over its output. We say that $F$, along with its proof system, is a {\em proof-based} one-way function. We similarly define proof-based versions of other primitives, specifically pseudo-random generators and collision-resistant hash functions. We show how to construct proof-based versions of each of the primitives mentioned above from their ordinary counterparts under mild but necessary restrictions over the input. More specifically: - We first show that if the prover entirely chooses the input, then proof-based pseudo-random generators cannot be constructed from ordinary ones in a black-box manner, thus establishing that some restrictions over the input are necessary. - We next present black-box constructions handling inputs of the form $(x,r)$ where $r$ is chosen uniformly by the verifier. This is similar to the restrictions in the widely used Goldreich-Levin theorem. The associated ZK proofs support range membership over the output as well as arbitrary predicates over prefixes of the input. Our results open up the possibility that general-purpose ZK proofs for relations that require black-box access to the primitives above may be possible in the future without violating their black-box nature, by instantiating them using proof-based primitives instead of ordinary ones.
2021
CRYPTO
Does Fiat-Shamir Require a Cryptographic Hash Function? 📺
The Fiat-Shamir transform is a general method for reducing interaction in public-coin protocols by replacing the random verifier messages with deterministic hashes of the protocol transcript. The soundness of this transformation is usually heuristic and lacks a formal security proof. Instead, to argue security, one can rely on the random oracle methodology, which informally states that whenever a random oracle soundly instantiates Fiat-Shamir, a hash function that is ``sufficiently unstructured'' (such as fixed-length SHA-2) should suffice. Finally, for some special interactive protocols, it is known how to (1) isolate a concrete security property of a hash function that suffices to instantiate Fiat-Shamir and (2) build a hash function satisfying this property under a cryptographic assumption such as Learning with Errors. In this work, we abandon this methodology and ask whether Fiat-Shamir truly requires a cryptographic hash function. Perhaps surprisingly, we show that in two of its most common applications --- building signature schemes as well as (general-purpose) non-interactive zero-knowledge arguments --- there are sound Fiat-Shamir instantiations using extremely simple and non-cryptographic hash functions such as sum-mod-$p$ or bit decomposition. In some cases, we make idealized assumptions (i.e., we invoke the generic group model), while in others, we prove soundness in the plain model. On the negative side, we also identify important cases in which a cryptographic hash function is provably necessary to instantiate Fiat-Shamir. We hope this work leads to an improved understanding of the precise role of the hash function in the Fiat-Shamir transformation.
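To see just how non-cryptographic such a hash can be, here is the sum-mod-$p$ function written out as a challenge map; a toy rendering of the idea with an illustrative modulus and encoding, not code from the paper.

P = 2**61 - 1  # illustrative prime modulus

def sum_mod_p(field_elements):
    # Linear, trivially invertible, and not collision-resistant; the
    # surprising positive results show that hashes of this flavor can
    # nonetheless soundly instantiate Fiat-Shamir in special cases.
    return sum(field_elements) % P

print(sum_mod_p([3, 141, 59]))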
2021
CRYPTO
A Logarithmic Lower Bound for Oblivious RAM (for all parameters) 📺
An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (J. ACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e., for every input the observed locations accessed are similarly distributed. In recent years there has been great progress both in terms of upper bounds and in terms of lower bounds, essentially pinning down the smallest overhead possible in various settings of parameters. We observe that there is a very natural setting of parameters in which \emph{no} non-trivial lower bound is known, not even ones in restricted models of computation (like the so-called balls-and-bins model). Let $N$ and $w$ be the number of cells and the bit-size of cells, respectively, in the RAM that we wish to simulate obliviously. Denote by $b$ the cell bit-size of the ORAM. \emph{All} previous ORAM lower bounds have a multiplicative $w/b$ factor, which makes them trivial in many settings of parameters of interest. In this work, we prove a new ORAM lower bound that captures this setting (and in all other settings it is at least as good as previous ones, quantitatively). We show that any ORAM must make (amortized) $$ \Omega\left(\log \left(\frac{Nw}{m}\right)/\log\left(\frac{b}{w}\right)\right) $$ memory probes for every logical operation. Here, $m$ denotes the bit-size of the local storage of the ORAM. Our lower bound implies that logarithmic overhead in accesses is necessary, even if $b \gg w$. Our lower bound is tight for \emph{all} settings of parameters, up to the $\log(b/w)$ factor. Our bound also extends to the non-colluding multi-server setting. As an application, we derive the first (unconditional) separation between the overhead needed for ORAMs in the \emph{online} vs.\ \emph{offline} models. Specifically, we show that when $w=\log N$ and $b,m \in poly\log N$, there exists an offline ORAM that makes (on average) $o(1)$ memory probes per logical operation while every online one must make $\Omega(\log N/\log\log N)$ memory probes per logical operation. No such separation was previously known for any setting of parameters, not even in the balls-and-bins model.
2021
CRYPTO
Meet-in-the-Middle Attacks Revisited: Key-recovery, Collision, and Preimage Attacks 📺
At EUROCRYPT 2021, Bao et al. proposed an automatic method for systematically exploring the configuration space of meet-in-the-middle (MITM) preimage attacks. We further extend it into a constraint-based framework for finding exploitable MITM characteristics in the context of key-recovery and collision attacks by taking the subtle peculiarities of both scenarios into account. Moreover, to perform attacks based on MITM characteristics with nonlinearly constrained neutral words, which have not been seen before, we present a procedure for deriving the solution spaces of neutral words without solving the corresponding nonlinear equations or increasing the overall time complexity of the attack. We apply our method to concrete symmetric-key primitives, including SKINNY, ForkSkinny, Romulus-H, Saturnin, Grøstl, Whirlpool, and hashing modes with AES-256. As a result, we identify the first 23-round key-recovery attack on SKINNY-$n$-$3n$ and the first 24-round key-recovery attack on ForkSkinny-$n$-$3n$ in the single-key model. Moreover, improved (pseudo) preimage or collision attacks on round-reduced Whirlpool, Grøstl, and hashing modes with AES-256 are obtained. In particular, employing the new representation of the AES key schedule due to Leurent and Pernot (EUROCRYPT 2021), we identify the first preimage attack on 10-round AES-256 hashing.
2021
CRYPTO
Tighter Security for Schnorr Identification and Signatures: A High-Moment Forking Lemma for ${\Sigma}$-Protocols 📺
The Schnorr identification and signature schemes have been amongst the most influential cryptographic protocols of the past three decades. Unfortunately, although the best-known attacks on these two schemes are via discrete-logarithm computation, the known approaches for basing their security on the hardness of the discrete logarithm problem encounter the ``square-root barrier''. In particular, in any group of order $p$ where Shoup's generic hardness result for the discrete logarithm problem is believed to hold (and is thus used for setting concrete security parameters), the best-known $t$-time attacks on the Schnorr identification and signature schemes have success probability $t^2/p$, whereas existing proofs of security only rule out attacks with success probabilities $(t^2/p)^{1/2}$ and $(q_{\mathsf{Hash}} \cdot t^2/p)^{1/2}$, respectively, where $q_{\mathsf{Hash}}$ denotes the number of random-oracle queries issued by the attacker. We establish tighter security guarantees for identification and signature schemes which result from $\Sigma$-protocols with special soundness, based on the hardness of their underlying relation, and in particular for Schnorr's schemes based on the hardness of the discrete logarithm problem. We circumvent the square-root barrier by introducing a high-moment generalization of the classic forking lemma, relying on the assumption that the underlying relation is ``$d$-moment hard'': the success probability of any algorithm in the task of producing a witness for a random instance is dominated by the $d$-th moment of the algorithm's running time. In the concrete context of the discrete logarithm problem, already Shoup's original proof shows that the discrete logarithm problem is $2$-moment hard in the generic-group model, and thus our assumption can be viewed as a highly plausible strengthening of the discrete logarithm assumption in any group where no better-than-generic algorithms are currently known. Applying our high-moment forking lemma in this context shows that, assuming the $2$-moment hardness of the discrete logarithm problem, any $t$-time attacker breaks the security of the Schnorr identification and signature schemes with probability at most $(t^2/p)^{2/3}$ and $(q_{\mathsf{Hash}} \cdot t^2/p)^{2/3}$, respectively.
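As background for the schemes under discussion, here is a toy rendering of Schnorr identification compiled into a signature via Fiat-Shamir. The modulus and base below are illustrative, not secure parameter choices; since p is prime, exponent arithmetic works mod p-1 by Fermat's little theorem, so the verification equation holds for any base g.

import hashlib, random

p = 2**127 - 1                    # a Mersenne prime (toy size)
g = 3                             # illustrative base; order not verified

def H(*parts) -> int:             # Fiat-Shamir challenge as an exponent
    h = hashlib.sha256()
    for v in parts:
        h.update(str(v).encode())
    return int.from_bytes(h.digest(), "big") % (p - 1)

def keygen():
    x = random.randrange(1, p - 1)        # secret key
    return x, pow(g, x, p)                # (sk, pk = g^x)

def sign(x, msg):
    k = random.randrange(1, p - 1)        # commitment nonce
    a = pow(g, k, p)                      # a = g^k
    c = H(a, msg)                         # challenge = hash of transcript
    s = (k + c * x) % (p - 1)             # response
    return a, s

def verify(y, msg, sig):
    a, s = sig
    c = H(a, msg)
    return pow(g, s, p) == (a * pow(y, c, p)) % p   # g^s ?= a * y^c

x, y = keygen()
assert verify(y, "hello", sign(x, "hello"))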
2021
CRYPTO
Three-Round Secure Multiparty Computation from Black-Box Two-Round Oblivious Transfer 📺
We give constructions of three-round secure multiparty computation (MPC) protocols for general functions that make {\it black-box} use of a two-round oblivious transfer (OT). For the case of semi-honest adversaries, we make use of a two-round, semi-honest secure OT in the plain model. This resolves the round-complexity of black-box (semi-honest) MPC protocols from minimal assumptions and answers an open question of Applebaum et al. (ITCS 2020). For the case of malicious adversaries, we make use of a two-round maliciously-secure OT in the common random/reference string model that satisfies a (mild) variant of adaptive security for the receiver.
2021
CRYPTO
On Tight Quantum Security of HMAC and NMAC in the Quantum Random Oracle Model 📺
HMAC and NMAC are the most basic and important constructions for converting Merkle-Damgård hash functions into message authentication codes (MACs) or pseudorandom functions (PRFs). In the quantum setting, at CRYPTO 2017, Song and Yun showed that HMAC and NMAC are quantum pseudorandom functions (qPRFs) under the standard assumption that the underlying compression function is a qPRF. Their proof guarantees security up to $O(2^{n/5})$ or $O(2^{n/8})$ quantum queries when the output length of HMAC and NMAC is $n$ bits. However, there is a gap between this provable security bound and a simple distinguishing attack that uses $O(2^{n/3})$ quantum queries. This paper settles the problem of closing the gap. We show that the tight bound on the number of quantum queries needed to distinguish HMAC or NMAC from a random function is $\Theta(2^{n/3})$ in the quantum random oracle model, where compression functions are modeled as quantum random oracles. To give the tight quantum bound, based on an alternative formalization of Zhandry's compressed oracle technique, we introduce a new proof technique focusing on the symmetry of quantum query records.
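For reference, HMAC's nested construction written out from its definition, HMAC(K, M) = H((K' xor opad) || H((K' xor ipad) || M)) with K' the key padded (or hashed) to the block size. This is the standard construction, cross-checked against Python's hmac module, and carries nothing specific to the paper's quantum analysis.

import hashlib, hmac

def my_hmac_sha256(key: bytes, msg: bytes) -> bytes:
    block = 64                                   # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()       # long keys are hashed first
    key = key.ljust(block, b"\x00")              # then zero-padded
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5c for b in key)
    inner = hashlib.sha256(ipad + msg).digest()  # inner hash
    return hashlib.sha256(opad + inner).digest() # outer hash

assert my_hmac_sha256(b"k", b"m") == hmac.new(b"k", b"m", hashlib.sha256).digest()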
2021
CRYPTO
Quantum Collision Attacks on Reduced SHA-256 and SHA-512 📺
In this paper, we study dedicated quantum collision attacks on SHA-256 and SHA-512 for the first time. The attacks reach 38 and 39 steps, respectively, significantly improving on the classical attacks, which reach 31 and 27 steps. Both attacks adopt the framework of previous work that converts many semi-free-start collisions into a 2-block collision, and both are faster than the generic attack in the cost metric of time-space tradeoff. We observe that the number of required semi-free-start collisions can be reduced in the quantum setting, which allows us to convert the previous classical 38- and 39-step semi-free-start collisions into a collision. The idea behind our attacks is simple and should also be applicable to other cryptographic hash functions.
2021
CRYPTO
Efficient Information-Theoretic Multi-Party Computation over Non-Commutative Rings 📺
We construct the first efficient MPC protocol that only requires black-box access to a non-commutative ring $R$. Previous results in the same setting were efficient only either for a constant number of corruptions or when computing branching programs and formulas. Our techniques are based on a generalization of Shamir's secret sharing to non-commutative rings (the standard field version is sketched below), which we derive from the work on Reed-Solomon codes by Quintin, Barbier and Chabot (\textit{IEEE Transactions on Information Theory, 2013}). When the center of the ring contains a set $A = \{\alpha_0, \ldots, \alpha_n\}$ such that $\forall i \neq j, \alpha_i - \alpha_j \in R^*$, the resulting secret sharing scheme is strongly multiplicative and we can generalize existing constructions over finite fields without much trouble. Most of our work is devoted to the case where the elements of $A$ do not commute with all of $R$, but only commute with each other. For such rings, the secret sharing scheme cannot be linear ``on both sides'' and, furthermore, it is not multiplicative. Nevertheless, we are still able to build MPC protocols with a concretely efficient online phase and black-box access to $R$. As an example we consider the ring $\mathcal{M}_{m\times m}(\mathbb{Z}/2^k\mathbb{Z})$, for which, when $m > \log(n+1)$, we obtain protocols that require around $\lceil\log(n+1)\rceil/2$ times less communication and $2\lceil\log(n+1)\rceil$ times less computation than the state-of-the-art protocol based on Circuit Amortization Friendly Encodings (Dalskov, Lee and Soria-Vazquez, \textit{ASIACRYPT 2020}). In this setting with a ``less commutative'' $A$, our black-box preprocessing phase has a less practical complexity of $\mathrm{poly}(n)$. Due to this, we additionally provide specialized, concretely efficient preprocessing protocols for $R = \mathcal{M}_{m\times m}(\mathbb{Z}/2^k\mathbb{Z})$ that exploit the structure of the matrix ring.
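For background, the standard field version of Shamir's t-out-of-n scheme that the paper generalizes: the secret is the constant term of a random degree-(t-1) polynomial, shares are evaluations, and reconstruction is Lagrange interpolation at 0. A minimal sketch with an illustrative prime modulus:

import random
P = 2**61 - 1  # illustrative prime modulus

def shamir_share(secret, t, n):
    # Random polynomial of degree t-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def shamir_reconstruct(points):
    # Lagrange interpolation evaluated at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = shamir_share(42, t=3, n=5)
assert shamir_reconstruct(shares[:3]) == 42   # any 3 of 5 shares suffice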
2021
CRYPTO
Adaptive Extractors and their Application to Leakage Resilient Secret Sharing 📺
We introduce Adaptive Extractors, which, unlike traditional randomness extractors, guarantee security even when an adversary obtains leakage on the source \textit{after} observing the extractor output. We make a compelling case for the study of such extractors by demonstrating their use in obtaining adaptive leakage resilience in secret sharing schemes. Specifically, at FOCS 2020, Chattopadhyay, Goodman, Goyal, Kumar, Li, Meka, and Zuckerman built an adaptively secure leakage resilient secret sharing scheme (LRSS) with both rate and leakage rate being $\mathcal{O}(1/n)$, where $n$ is the number of parties. In this work, we build an adaptively secure LRSS that offers an interesting trade-off between rate, leakage rate, and the total number of shares from which an adversary can obtain leakage. As a special case, when considering $t$-out-of-$n$ secret sharing schemes for threshold $t = \alpha n$ (constant $0<\alpha<1$), we build a scheme with constant rate and constant leakage rate, and allow the adversary leakage from all but $t-1$ of the shares, while giving her the remaining $t-1$ shares completely in the clear. (Prior to this, a constant-rate LRSS scheme tolerating adaptive leakage was unknown for \textit{any} threshold.) Finally, we show applications of our techniques to both non-malleable secret sharing and secure message transmission.
2021
CRYPTO
Pushing the Limits of Valiant's Universal Circuits: Simpler, Tighter and More Compact 📺
A universal circuit (UC) is a general-purpose circuit that can simulate arbitrary circuits (up to a certain size $n$). Valiant provides a $k$-way recursive construction of UCs (STOC 1976), where $k$ tunes the complexity of the recursion. More concretely, Valiant gives theoretical constructions of 2-way and 4-way UCs of asymptotic (multiplicative) sizes $5n\log n$ and $4.75 n\log n$ respectively, which matches the asymptotic lower bound $\Omega(n\log n)$ up to some constant factor. Motivated by various privacy-preserving cryptographic applications, Kiss et al. (Eurocrypt 2016) validated the practicality of $2$-way universal circuits by giving example implementations for private function evaluation. Günther et al. (Asiacrypt 2017) and Alhassan et al. (J. Cryptology 2020) implemented the 2-way/4-way hybrid UCs with various optimizations in place towards making universal circuits more practical. Zhao et al. (Asiacrypt 2019) optimized Valiant's 4-way UC to asymptotic size $4.5 n\log n$ and proved a lower bound of $3.64 n\log n$ for UCs under the Valiant framework. As the scale of computation goes beyond the 10-million-gate ($n=10^7$) or even billion-gate level ($n=10^9$), the constant factor in UC size plays an increasingly important role in application performance (see the comparison sketched below). In this work, we investigate Valiant's universal circuits and present an improved framework for constructing universal circuits with the following advantages. - Simplicity: parameterization is no longer needed. Whereas previous implementations resorted to a hybrid construction combining $k=2$ and $k=4$ for a tradeoff between fine granularity and asymptotic size-efficiency, our construction gets the best of both worlds when configured at the lowest complexity (i.e., $k=2$). - Compactness: our universal circuits have asymptotic size $3n\log n$, improving upon the best previously known $4.5n\log n$ by 33\% and beating the $3.64n\log n$ lower bound for UCs constructed under Valiant's framework (Zhao et al., Asiacrypt 2019). - Tightness: we show that under our new framework the UC size is lower bounded by $2.95 n\log n$, which almost matches the $3n\log n$ circuit size of our $2$-way construction. We implement the 2-way universal circuits and compare their performance with other implementations, which confirms our theoretical analysis.
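To see the role of the constant factor at these scales, one can evaluate the leading terms quoted above at n = 10^9; this rough illustration ignores the polylog and lower-order terms, so the absolute numbers are indicative only.

from math import log2

n = 10**9   # billion-gate level
for label, c in [("Valiant 2-way", 5.0),
                 ("Zhao et al. 4-way", 4.5),
                 ("this work", 3.0)]:
    print(f"{label}: ~{c * n * log2(n):.2e} gates")   # ~ c * n * log n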
2021
CRYPTO
Smoothing Out Binary Linear Codes and Worst-case Sub-exponential Hardness for LPN 📺
Learning parity with noise (LPN) is a notorious (average-case) hard problem that has been well studied in learning theory, coding theory and cryptography since the early 1990s. It further inspired the Learning with Errors (LWE) problem [Regev, STOC 2005], which has become one of the central building blocks for post-quantum cryptography and advanced cryptographic primitives. Unlike LWE, whose hardness can be reduced from worst-case lattice problems, no corresponding worst-case hardness results were known for LPN until very recently. At Eurocrypt 2019, Brakerski et al. [BLVW19] established the first feasibility result: the worst-case hardness of the nearest codeword problem (NCP) (on balanced linear codes) at the extremely low noise rate $\frac{\log^2 n}{n}$ implies the quasi-polynomial hardness of LPN at the extremely high noise rate $1/2-1/\mathrm{poly}(n)$. It remained open whether a worst-case to average-case reduction can be established for standard (constant-noise) LPN, ideally with sub-exponential hardness. We start with a simple observation that the hardness of high-noise LPN over large fields is implied by that of LWE with the same modulus, and is thus reducible from the worst-case hardness of lattice problems. We then revisit [BLVW19], which is the main focus of this work. We first expand the underlying binary linear codes (of the NCP) from the balanced codes considered in [BLVW19] to another code as well (in some sense dual to balanced codes). At the core of our reduction is a new variant of the smoothing lemma (for both kinds of binary codes) that circumvents the barriers (inherent in the underlying worst-case randomness extraction) and admits tradeoffs for a wider spectrum of parameter choices. In addition to the worst-case hardness result obtained in [BLVW19], we show that for any constant $0<c<1$ the constant-noise LPN problem is ($T=2^{\Omega(n^{1-c})},\epsilon=2^{-\Omega(n^{\min(c,1-c)})},q=2^{\Omega(n^{\min(c,1-c)})}$)-hard assuming that the NCP at the low noise rate $\tau=n^{-c}$ is ($T'={2^{\Omega(\tau n)}}$, $\epsilon'={2^{-\Omega(\tau n)}}$, $m={2^{\Omega(\tau n)}}$)-hard in the worst case, where $T$, $\epsilon$, $q$ and $m$ are time complexity, success rate, sample complexity, and codeword length, respectively. Moreover, refuting the worst-case hardness assumption would imply arbitrary polynomial speedups over the current state-of-the-art algorithms for solving the NCP (and LPN), which is a win-win result. Unfortunately, public-key encryption and collision-resistant hash functions need constant-noise LPN with ($T={2^{\omega(\sqrt{n})}}$, $\epsilon'={2^{-\omega(\sqrt{n})}}$, $q={2^{\sqrt{n}}}$)-hardness (Yu et al., CRYPTO 2016 \& ASIACRYPT 2019), which is almost (up to an arbitrary $\omega(1)$ factor in the exponent) what is reducible from the worst-case NCP when $c= 0.5$. We leave it as an open problem whether the gap can be closed or whether there is an inherent separation.
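For readers unfamiliar with the problem, an LPN instance at noise rate τ hands out samples (a, ⟨a, s⟩ + e mod 2) with a uniform and e Bernoulli-distributed; the task is to recover s. A toy generator with illustrative parameters:

import random

def lpn_samples(s, tau, m):
    # m samples (a, <a, s> + e mod 2), a uniform in {0,1}^n,
    # e = 1 with probability tau (the noise rate).
    n = len(s)
    out = []
    for _ in range(m):
        a = [random.randrange(2) for _ in range(n)]
        e = 1 if random.random() < tau else 0
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
        out.append((a, b))
    return out

secret = [random.randrange(2) for _ in range(128)]
samples = lpn_samples(secret, tau=0.125, m=10)   # constant-noise LPN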
2021
CRYPTO
Improved Computational Extractors and their Applications 📺
Recent exciting breakthroughs have achieved the first two-source extractors that operate in the low min-entropy regime. Unfortunately, these constructions suffer from non-negligible error, and reducing the error to negligible remains an important open problem. In recent work, Garg, Kalai, and Khurana (GKK, Eurocrypt 2020) investigated a meaningful relaxation of this problem to the computational setting, in the presence of a common random string (CRS). In this relaxed model, their work built explicit two-source extractors for a restricted class of unbalanced sources with min-entropy n^{\gamma} (for some constant \gamma) and negligible error, under the sub-exponential DDH assumption. In this work, we investigate whether computational extractors in the CRS model can be applied to more challenging environments. Specifically, we study network extractor protocols (Kalai et al., FOCS 2008) and extractors for adversarial sources (Chattopadhyay et al., STOC 2020) in the CRS model. We observe that these settings require extractors that work well for balanced sources, making the GKK results inapplicable. We remedy this situation by obtaining the following results, all of which are in the CRS model and assume the sub-exponential hardness of DDH. - We obtain ``optimal'' computational two-source and non-malleable extractors for balanced sources: requiring both sources to have only poly-logarithmic min-entropy, and achieving negligible error. To obtain this result, we perform a tighter and arguably simpler analysis of the GKK extractor. - We obtain a single-round network extractor protocol for poly-logarithmic min-entropy sources that tolerates an optimal number of adversarial corruptions. Prior work in the information-theoretic setting required sources with high min-entropy rates, and in the computational setting had round complexity that grew with the number of parties, required sources with linear min-entropy, and relied on exponential hardness (albeit without a CRS). - We obtain an ``optimal'' adversarial source extractor for poly-logarithmic min-entropy sources, where the number of honest sources is only 2 and each corrupted source can depend on either one of the honest sources. Prior work in the information-theoretic setting had to assume a large number of honest sources.
2021
CRYPTO
Broadcast-Optimal Two Round MPC with an Honest Majority 📺
This paper closes the question of the possibility of two-round MPC protocols achieving different security guarantees with and without the availability of broadcast in any given round. Cohen et al. (Eurocrypt 2020) study this question in the dishonest majority setting; we complete the picture by studying the honest majority setting. In the honest majority setting, given broadcast in both rounds, it is known that the strongest guarantee, guaranteed output delivery, is achievable (Gordon et al., Crypto 2015). We show that, given broadcast in the first round only, guaranteed output delivery is still achievable. Given broadcast in the second round only, we give a new construction that achieves identifiable abort, and we show that fairness, and thus guaranteed output delivery, is not achievable in this setting. Finally, if only peer-to-peer channels are available, we show that the weakest guarantee, selective abort, is the only one achievable for corruption thresholds t > 1 and for t = 1 and n = 3; on the other hand, it is already known that selective abort can be achieved in these cases. In the remaining cases, i.e., t = 1 and n > 3, it is known (from the work of Ishai et al. at Crypto 2010, and Ishai et al. at Crypto 2015) that guaranteed output delivery (and thus all weaker guarantees) is possible.
2021
CRYPTO
Compressing Proofs of k-Out-Of-n Partial Knowledge 📺
In a proof of partial knowledge, introduced by Cramer, Damgård and Schoenmakers (CRYPTO 1994), a prover knowing witnesses for some $k$-subset of $n$ given public statements can convince the verifier of this claim without revealing which $k$-subset. Their solution combines $\Sigma$-protocol theory and linear secret sharing, and achieves linear communication complexity for general $k,n$. Especially the ``one-out-of-$n$'' case $k=1$ has seen myriad applications during the last decades, e.g., in electronic voting, ring signatures, and confidential transaction systems. In this paper we focus on the discrete logarithm (DL) setting, where the prover claims knowledge of DLs of $k$-out-of-$n$ given elements. Groth and Kohlweiss (EUROCRYPT 2015) have shown how to solve the special case $k=1$, yet arbitrary $n$, with {\em logarithmic} (in $n$) communication, instead of linear as in prior work. However, their method takes explicit advantage of $k=1$ and does not generalize to $k>1$. Alternatively, an {\em indirect} approach for solving the considered problem is to translate the $k$-out-of-$n$ relation into a circuit and then apply communication-efficient circuit ZK. Indeed, for the $k=1$ case this approach has been highly optimized, e.g., in ZCash. Our main contribution is a new, simple honest-verifier zero-knowledge proof protocol for proving knowledge of $k$ out of $n$ DLs with {\em logarithmic} communication and {\em for general $k$ and $n$}, without requiring any generic circuit ZK machinery (the classic $k=1$, $n=2$ case is sketched below). Our solution puts forward a novel extension of the {\em compressed} $\Sigma$-protocol theory (CRYPTO 2020), which we then utilize to compress a new $\Sigma$-protocol for proving knowledge of $k$-out-of-$n$ DLs down to logarithmic size. The latter $\Sigma$-protocol is inspired by the CRYPTO 1994 approach, but a careful re-design of the original protocol is necessary for the compression technique to apply. Interestingly, {\em even for $k=1$ and general $n$} our approach improves on prior {\em direct} approaches, as it reduces prover complexity without increasing the communication complexity. Besides the conceptual simplicity, we also identify regimes of practical relevance where our approach achieves asymptotic and concrete improvements, e.g., in proof size and prover complexity, over the generic approach based on circuit-ZK. Finally, we show various extensions and generalizations of our core result. For instance, we extend our protocol to proofs of partial knowledge of Pedersen (vector) commitment openings, and/or to include a proof that the witness satisfies some additional constraint, and we show how to extend our results to non-threshold access structures.
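For orientation, the classic CRYPTO 1994 technique in its simplest case (k=1 of n=2 discrete logs) simulates the branch the prover cannot prove and splits the challenge so that the two sub-challenges sum to the hash. A toy Fiat-Shamir rendering with illustrative parameters; p is prime, so exponent arithmetic mod p-1 is valid by Fermat's little theorem, and this is not the paper's compressed protocol.

import hashlib, random

p, g = 2**127 - 1, 3   # toy modulus and base, for illustration only

def H(*parts):
    h = hashlib.sha256()
    for v in parts:
        h.update(str(v).encode())
    return int.from_bytes(h.digest(), "big") % (p - 1)

def prove_or(x1, y1, y2):              # prover knows x1 = dlog(y1) only
    c2 = random.randrange(p - 1)       # simulate the unprovable branch
    s2 = random.randrange(p - 1)
    a2 = pow(g, s2, p) * pow(y2, -c2, p) % p
    k = random.randrange(p - 1)        # run the real branch honestly
    a1 = pow(g, k, p)
    c = H(y1, y2, a1, a2)
    c1 = (c - c2) % (p - 1)            # sub-challenges must add up to c
    s1 = (k + c1 * x1) % (p - 1)
    return a1, a2, c1, c2, s1, s2

def verify_or(y1, y2, proof):
    a1, a2, c1, c2, s1, s2 = proof
    return ((c1 + c2) % (p - 1) == H(y1, y2, a1, a2)
            and pow(g, s1, p) == a1 * pow(y1, c1, p) % p
            and pow(g, s2, p) == a2 * pow(y2, c2, p) % p)

x1 = random.randrange(1, p - 1)
y1, y2 = pow(g, x1, p), pow(g, random.randrange(1, p - 1), p)
assert verify_or(y1, y2, prove_or(x1, y1, y2))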
2021
CRYPTO
A Compressed Sigma-Protocol Theory for Lattices 📺
We show a \emph{lattice-based} solution for commit-and-prove transparent circuit zero-knowledge (ZK) with \emph{polylog-communication}, the \emph{first} not depending on PCPs. We start from \emph{compressed $\Sigma$-protocol theory} (CRYPTO 2020), which is built around basic $\Sigma$-protocols for opening an arbitrary linear form on a long secret vector that is compactly committed to. These protocols are first compressed using a recursive ``folding technique'' adapted from Bulletproofs, at the expense of logarithmic rounds. Proving in ZK that the secret vector satisfies a given constraint -- captured by a circuit -- is then done by (blackbox) reduction to the linear case, via arithmetic secret-sharing techniques adapted from MPC. Commit-and-prove is also facilitated, i.e., when commitment(s) to the secret vector are created ahead of any circuit-ZK proof. On several platforms (incl. DL) this leads to logarithmic communication. Non-interactive versions follow from Fiat-Shamir. This abstract modular theory strongly suggests that it should somehow be supported by a lattice platform \emph{as well}. However, when going through the motions and trying to establish low communication (on a SIS platform), a certain significant lack in the current understanding of multi-round protocols is exposed. Namely, as opposed to the DL case, the basic $\Sigma$-protocol in question typically has a \emph{poly-small challenge space}. Taking into account the compression step -- which yields \emph{non-constant} rounds -- and the necessity of parallelization to reduce error, there is no known tight result showing that the compound protocol admits an efficient knowledge extractor. We resolve this state of affairs by a combination of two novel results which are fully general and of independent interest. The first gives a tight analysis of efficient knowledge extraction in the case of non-constant rounds combined with a poly-small challenge space, whereas the second shows that parallel repetition indeed forces a rapid decrease of the knowledge error. Moreover, in our present context, arithmetic secret sharing is not defined over a large finite field but over a quotient of a number ring, and this forces our careful adaptation of how the linearization techniques are deployed. We develop our protocols in an abstract framework that is conceptually simple and can be flexibly instantiated. In particular, the framework applies to arbitrary rings and norms.
2021
CRYPTO
Authenticated Key Exchange and Signatures with Tight Security in the Standard Model 📺
We construct the first authenticated key exchange protocols that achieve tight security in the standard model. Previous works either relied on techniques that seem to inherently require a random oracle, or achieved only “Multi-Bit-Guess” security, which is not known to compose tightly, for instance, to build a secure channel. Our constructions are generic, based on digital signatures and key encapsulation mechanisms (KEMs). The main technical challenge we resolve is to determine suitable KEM security notions which are, on the one hand, strong enough to yield tight security, but at the same time weak enough to be efficiently instantiable in the standard model, based on standard techniques such as universal hash proof systems. Digital signature schemes with tight multi-user security in the presence of adaptive corruptions are a central building block, used in all known constructions of tightly-secure AKE with full forward security. We identify a subtle gap in the security proof of the only previously known efficient standard-model scheme by Bader et al. (TCC 2015). We develop a new variant, which yields the currently most efficient signature scheme that achieves this strong security notion without random oracles and based on standard hardness assumptions.
2021
CRYPTO
Witness Authenticating NIZKs and Applications 📺
We initiate the study of witness authenticating NIZK proof systems (waNIZKs), in which one can use a witness $w$ of a statement $x$ to identify whether a valid proof for $x$ was indeed generated using $w$. Such an identification functionality enables more diverse applications, and it also puts new requirements on soundness: (1) no adversary can generate a valid proof that will not be identified by any witness; and (2) no adversary can forge a proof using her valid witness to frame others. To work around the obvious obstacle towards conventional zero-knowledgeness, we define entropic zero-knowledgeness, which requires the proof to leak no partial information if the witness has sufficient computational entropy. We give a formal treatment of this new primitive. The modeling turns out to be quite involved; multiple subtle points arise and particular care is required. We present general constructions from standard assumptions. We also demonstrate three applications: non-malleable (perfect one-way) hash functions, group signatures with verifier-local revocation, and plaintext-checkable public-key encryption. Our waNIZKs provide a new tool to advance the state of the art in all these applications.
2021
CRYPTO
Receiver-Anonymity in Rerandomizable RCCA-Secure Cryptosystems Resolved 📺
In this work, we resolve the open problem raised by Prabhakaran and Rosulek at CRYPTO 2007, and present the first anonymous, rerandomizable, Replayable-CCA (RCCA) secure public-key encryption scheme. This solution opens the door to numerous privacy-oriented applications with a highly desired RCCA security level. At the core of our construction is a non-trivial extension of smooth projective hash functions (Cramer and Shoup, EUROCRYPT 2002), and a modular generic framework developed for constructing Rand-RCCA-secure encryption schemes with receiver-anonymity. The framework gives an enhanced abstraction of Prabhakaran and Rosulek's original scheme (the first construction of Rand-RCCA-secure encryption in the standard model), where the most crucial enhancement is the first realization of the desirable property of receiver-anonymity, essential to privacy settings. It also serves as a conceptually more intuitive and generic understanding of RCCA security, which leads, for example, to new implementations of the notion. Finally, note that (since CCA security is not applicable to the privacy applications motivating our work) the concrete results and the conceptual advancement presented here seem to substantially expand the power and relevance of the notion of Rand-RCCA-secure encryption.
2021
CRYPTO
Traceable Secret Sharing and Applications 📺
Consider a scenario where Alice stores some secret data $s$ on $n$ servers using a $t$-out-of-$n$ secret sharing scheme. Trudy (the collector) is interested in the secret data of Alice and is willing to pay for it. Trudy publishes an advertisement on the internet which describes an elaborate cryptographic scheme to collect the shares from the $n$ servers. Each server who decides to submit its share is paid a hefty monetary reward and is guaranteed ``immunity'' from being caught or prosecuted in a court for violating its service agreement with Alice. Bob is one of the servers and sees this advertisement. On examining the collection scheme closely, Bob concludes that there is no way for Alice to prove anything in a court that he submitted his share. Indeed, if Bob is rational, he might use the cryptographic scheme in the advertisement and submit his share, since there are no penalties and no fear of being caught and prosecuted. Can we design a secret sharing scheme which Alice can use to avoid such a scenario? We introduce a new primitive called \textit{Traceable Secret Sharing} to tackle this problem. In particular, a traceable secret sharing scheme guarantees that a cheating server always runs the risk of getting traced and prosecuted, by providing valid evidence (which can be examined in a court of law) implicating its dishonest behavior. We explore various definitional aspects and show that such schemes are highly non-trivial to construct (even ignoring efficiency aspects). We then give an efficient construction of traceable secret sharing assuming the existence of a secure two-party computation protocol. We also show an application of this primitive in constructing traceable protocols for multi-server delegation of computation.
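For context, the $t$-out-of-$n$ sharing the scenario presupposes is typically Shamir's scheme; a minimal sketch follows. The paper's tracing mechanism is an extra layer on top of such a scheme and is not shown here, and the field size and API are illustrative choices.

```python
# Minimal Shamir t-out-of-n secret sharing over a prime field.

import random

P = 2**31 - 1  # prime field

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the degree-(t-1) polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(12345, t=3, n=5)
print(reconstruct(shares[:3]) == 12345)   # any 3 of the 5 shares suffice
```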
2021
CRYPTO
On the Possibility of Basing Cryptography on $\EXP \neq \BPP$ 📺
Best Paper
Liu and Pass (FOCS'20) recently demonstrated an equivalence between the existence of one-way functions and mild average-case hardness of the time-bounded Kolmogorov complexity problem. In this work, we establish a similar equivalence but to a different form of time-bounded Kolmogorov Complexity---namely, Levin's notion of Kolmogorov Complexity---whose hardness is closely related to the problem of whether $\EXP \neq \BPP$. In more detail, let $Kt(x)$ denote the Levin-Kolmogorov Complexity of the string $x$; that is, $Kt(x) = \min_{\Pi \in \{0,1\}^*, t \in \mathbb{N}}\{|\Pi| + \lceil \log t \rceil: U(\Pi, 1^t) = x\}$, where $U$ is a universal Turing machine, and let $\mathsf{MKtP}$ denote the language of pairs $(x,k)$ having the property that $Kt(x) \leq k$. We demonstrate that: - $\mathsf{MKtP}$ is \emph{two-sided error} mildly average-case hard (i.e., $\mathsf{MKtP} \notin \mathsf{Heur}_{p}\mathsf{BPP}$) iff infinitely-often one-way functions exist. - $\mathsf{MKtP}$ is \emph{errorless} mildly average-case hard (i.e., $\mathsf{MKtP} \notin \mathsf{Avg}_{p}\mathsf{BPP}$) iff $\EXP \neq \BPP$. Thus, the only ``gap'' towards getting (infinitely-often) one-way functions from the assumption that $\EXP \neq \BPP$ is the seemingly ``minor'' technical gap between two-sided error and errorless average-case hardness of the $\mathsf{MKtP}$ problem. As a corollary of this result, we additionally demonstrate that any reduction from errorless to two-sided error average-case hardness for $\mathsf{MKtP}$ implies (unconditionally) that $\NP \neq \P$. We finally consider other alternative notions of Kolmogorov complexity---including space-bounded Kolmogorov complexity and conditional Kolmogorov complexity---and show how average-case hardness of problems related to them characterizes log-space computable one-way functions, or one-way functions in $\NC^0$.
2021
CRYPTO
Oblivious RAM with Worst-Case Logarithmic Overhead 📺
We present the first Oblivious RAM (ORAM) construction that for $N$ memory blocks supports accesses with \emph{worst-case} $O(\log N)$ overhead for any block size $\Omega(\log N)$ while requiring a client memory of only a constant number of memory blocks. We rely on the existence of one-way functions and guarantee computational security. Our result closes a long line of research on fundamental feasibility results for ORAM constructions, as logarithmic overhead is necessary. The previous best logarithmic-overhead construction only guarantees it in an \emph{amortized} sense, i.e., logarithmic overhead is achieved only for long enough access sequences, where some of the individual accesses incur $\Theta(N)$ overhead. The previously best ORAM in terms of \emph{worst-case} overhead achieves $O(\log^2 N/\log\log N)$ overhead. Technically, we design a novel de-amortization framework for modern ORAM constructions that use the ``shuffled inputs'' assumption. Our framework significantly departs from all previous de-amortization frameworks, originating from Ostrovsky and Shoup (STOC~'97), which seem to be fundamentally too weak to be applied to modern ORAM constructions.
2021
CRYPTO
Provable Security Analysis of FIDO2 📺
We carry out the first provable security analysis of the new FIDO2 protocols, the FIDO Alliance’s promising proposal for a standard for passwordless user authentication. Our analysis covers the core components of FIDO2: the W3C’s Web Authentication (WebAuthn) specification and the new Client-to-Authenticator Protocol (CTAP2). Our analysis is modular. For WebAuthn and CTAP2, in turn, we propose appropriate security models that aim to capture their intended security goals and use the models to analyze their security. First, our proof confirms the authentication security of WebAuthn. Then, we show that CTAP2 can only be proved secure in a weak sense; meanwhile, we identify a series of its design flaws and provide suggestions for improvement. To withstand stronger yet realistic adversaries, we propose a generic protocol called sPACA and prove its strong security; with proper instantiations, sPACA is also more efficient than CTAP2. Finally, we analyze the overall security guarantees provided by FIDO2 and WebAuthn+sPACA based on the security of their components. We expect that our models and provable security results will help clarify the security guarantees of the FIDO2 protocols. In addition, we advocate the adoption of our sPACA protocol as a substitute for CTAP2 for both stronger security and better performance.
2021
CRYPTO
MoSS: Modular Security Specifications Framework 📺
Applied cryptographic protocols have to meet a rich set of security requirements under diverse environments and against diverse adversaries. However, currently used security specifications, based on either simulation (e.g., `ideal functionality' in UC) or games, are monolithic, combining different aspects of protocol requirements, environment and assumptions. Such security specifications are complex, error-prone, and foil reusability, modular analysis and incremental design. We present the Modular Security Specifications (MoSS) framework, which cleanly separates the security requirements (goals) which a protocol should achieve, from the models (assumptions) under which each requirement should be ensured. This modularity allows us to reuse individual models and requirements across different protocols and tasks, and to compare protocols for the same task, either under different assumptions or satisfying different sets of requirements. MoSS is flexible and extendable, e.g., it can support both asymptotic and concrete definitions for security. So far, we confirmed the applicability of MoSS to two applications: secure broadcast protocols and PKI schemes.
2021
CRYPTO
MHz2k: MPC from HE over $\mathbb{Z}_{2^k}$ with New Packing, Simpler Reshare, and Better ZKP 📺
We propose a multi-party computation (MPC) protocol over $\mathbb{Z}_{2^k}$ secure against an actively corrupted majority from somewhat homomorphic encryption. The main technical contributions are: (i) a new efficient packing method for $\mathbb{Z}_{2^k}$-messages in lattice-based somewhat homomorphic encryption schemes, (ii) a simpler reshare protocol for level-dependent packings, (iii) a more efficient zero-knowledge proof of plaintext knowledge on cyclotomic rings $\mathbb{Z}[X]/\Phi_M(X)$ with $M$ being a prime. Integrating them, our protocol shows improvements of 2.2x up to 4.8x in amortized communication costs compared to the previous best results. Our techniques not only improve the efficiency of MPC over $\mathbb{Z}_{2^k}$ considerably, but also provide a toolkit that can be leveraged when designing other cryptographic primitives over $\mathbb{Z}_{2^k}$.
2021
CRYPTO
DualRing: Generic Construction of Ring Signatures with Efficient Instantiations 📺
We introduce a novel generic ring signature construction, called DualRing, which can be built from several canonical identification schemes (such as Schnorr identification). DualRing differs from classical ring signatures by its formation of two rings: a ring of commitments and a ring of challenges. It has a structural difference from the common ring signature approaches based on accumulators or zero-knowledge proofs of the signer index. Comparatively, DualRing has a number of unique advantages. In the DL-based setting, using the Schnorr identification scheme, our DualRing structure allows the signature size to be compressed into logarithmic size via an argument of knowledge system such as Bulletproofs. We further improve on the Bulletproofs argument system to eliminate about half of the computation while maintaining the same proof size. We call this the Sum Argument, and it can be of independent interest. This DL-based construction, named DualRing-EC, using Schnorr identification with the Sum Argument has the shortest ring signature size in the literature without using a trusted setup. In the lattice-based setting, we instantiate DualRing by a canonical identification based on M-LWE and M-SIS. In practice, we achieve the shortest lattice-based ring signature, named DualRing-LB, when the ring size is between 4 and 2000. DualRing-LB is also 5x faster in signing and verification than the fastest lattice-based scheme by Esgin et al. (CRYPTO'19).
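As background, below is a toy sketch of the canonical Schnorr identification scheme named in the abstract (commit, challenge, respond). The group parameters are illustrative and insecure, and none of DualRing's two-ring structure is shown.

```python
# Toy Schnorr identification: commit, challenge, respond.
# Multiplicative group mod a 64-bit prime; NOT secure parameters.

import random

p = 18446744073709551557      # prime 2**64 - 59
q = p - 1                     # exponents reduced mod p-1 (a multiple of the group order)
g = 5

sk = random.randrange(2, q)   # prover's secret key
pk = pow(g, sk, p)            # public key

r = random.randrange(2, q)
A = pow(g, r, p)              # 1. prover commits
c = random.randrange(2, q)    # 2. verifier challenges
z = (r + c * sk) % q          # 3. prover responds

# Verifier accepts iff g^z == A * pk^c (mod p).
print(pow(g, z, p) == A * pow(pk, c, p) % p)
```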
2021
CRYPTO
A Black-Box Approach to Post-Quantum Zero-Knowledge in Constant Rounds 📺
In a recent seminal work, Bitansky and Shmueli (STOC '20) gave the first construction of a constant-round zero-knowledge argument for NP secure against quantum attacks. However, their construction has several drawbacks compared to the classical counterparts. Specifically, their construction only achieves computational soundness, requires the strong assumptions of the quantum hardness of learning with errors (the QLWE assumption) and the existence of quantum fully homomorphic encryption (QFHE), and relies on non-black-box simulation. In this paper, we resolve these issues at the cost of weakening the notion of zero-knowledge to what is called ϵ-zero-knowledge. Concretely, we construct the following protocols: - We construct a constant-round interactive proof for NP that satisfies statistical soundness and black-box ϵ-zero-knowledge against quantum attacks assuming the existence of collapsing hash functions, which are a quantum counterpart of collision-resistant hash functions. Interestingly, this construction is just an adapted version of the classical protocol by Goldreich and Kahan (JoC '96), though the proof of the ϵ-zero-knowledge property against quantum adversaries requires novel ideas. - We construct a constant-round interactive argument for NP that satisfies computational soundness and black-box ϵ-zero-knowledge against quantum attacks only assuming the existence of post-quantum one-way functions. At the heart of our results is a new quantum rewinding technique that enables a simulator to extract a committed message of a malicious verifier while simulating the verifier's internal state in an appropriate sense.
2021
CRYPTO
A Rational Protocol Treatment of 51% Attacks 📺
Game-theoretic analysis of cryptocurrencies and, more generally, blockchain-based decentralized ledgers offers insight into their economic robustness and their behavior when even the cryptographic assumptions that underpin their security fail. In this work we utilize the recently proposed blockchain adaptation of the rational protocol design (RPD) framework [EUROCRYPT~'18] to analyze 51\% double-spending attacks against Nakamoto-style cryptocurrencies. We observe a property of the originally proposed utility class that yields an unnatural behavior against such attacks, and show how to devise a utility that avoids this pitfall and makes predictions that match the observable behavior---i.e., that renders attacking a dominant strategy in settings where an attack was indeed observed. We then propose a generic modification to the underlying protocol which deters attacks on consistency by adversaries controlling a majority of the system's resources, including the 51\% double-spending attack. This can be used as guidance to patch systems that have suffered such attacks, e.g., Ethereum Classic and Bitcoin Cash, and serves as a demonstration of the power of game-theoretic analyses.
2021
CRYPTO
Differential-Linear Cryptanalysis from an Algebraic Perspective 📺
Differential-linear cryptanalysis is an important cryptanalytic tool, and has been extensively researched since its introduction by Langford and Hellman in 1994. There are nevertheless very few methods to study the middle part where the differential and linear trails connect, besides the Differential-Linear Connectivity Table (Bar-On et al., EUROCRYPT 2019) and the experimental approach. In this paper, we study differential-linear cryptanalysis from an algebraic perspective. We first introduce a technique called Differential Algebraic Transitional Form (DATF) for differential-linear cryptanalysis, then develop a new theory of estimation of the differential-linear bias and techniques for key recovery in differential-linear cryptanalysis. The techniques are applied to the CAESAR finalist ASCON, the AES finalist SERPENT, and the eSTREAM finalist Grain v1. The bias of the differential-linear approximation is estimated for ASCON and SERPENT. The theoretical estimates of the bias are more accurate than those obtained by the DLCT, and the techniques can be applied with more rounds. Our general techniques can also be used to estimate the bias of Grain v1 in differential cryptanalysis, and have a markedly better performance than the Differential Engine tool tailor-made for the cipher. The improved key recovery attacks on round-reduced variants of these ciphers are then proposed. To the best of our knowledge, they are thus far the best known cryptanalysis of SERPENT, as well as the best differential-linear cryptanalysis of ASCON and the best initialization analysis of Grain v1. The results have been fully verified by experiments. Notably, security analysis of SERPENT is one of the most important applications of differential-linear cryptanalysis in the last two decades. The results in this paper update the differential-linear cryptanalysis of SERPENT-128 and SERPENT-256 with one more round after the work of Biham, Dunkelman and Keller in 2003.
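The kind of quantity the DATF theory predicts can also be measured empirically; the sketch below estimates the differential-linear bias of a stand-in toy permutation, with the rounds, input difference, and output mask all chosen arbitrarily for illustration.

```python
# Empirical estimate of a differential-linear bias on a toy 16-bit
# ARX-like permutation. Difference, mask, and rounds are arbitrary.

import random

def toy_perm(x):
    for k in (0x9E37, 0x7F4A, 0x1234):       # illustrative round constants
        x = (x + k) & 0xFFFF                 # modular addition
        x = ((x << 5) | (x >> 11)) & 0xFFFF  # 16-bit rotation
        x ^= x >> 3                          # xor-shift diffusion
    return x

def parity(v):
    return bin(v).count("1") & 1

delta, mask = 0x0040, 0x8001                 # input difference, output mask
N = 1 << 16
hits = 0
for _ in range(N):
    x = random.randrange(1 << 16)
    # Parity of the output mask applied to E(x) XOR E(x ^ delta).
    hits += parity(mask & (toy_perm(x) ^ toy_perm(x ^ delta))) == 0
print(f"estimated bias: {hits / N - 0.5:+.4f}")
```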
2021
CRYPTO
Round Efficient Secure Multiparty Quantum Computation with Identifiable Abort 📺
A recent result by Dulek et al. (EUROCRYPT 2020) showed a secure protocol for computing any quantum circuit even without the presence of an honest majority. Their protocol, however, is susceptible to a ``denial of service'' attack and allows even a single corrupted party to force an abort. We propose the first quantum protocol that admits security-with-identifiable-abort, which allows the honest parties to agree on the identity of a corrupted party in case of an abort. Additionally, our protocol is the first to have the property that the number of rounds where quantum communication is required is independent of the circuit complexity. Furthermore, if there exists a post-quantum secure classical protocol whose round complexity is independent of the circuit complexity, then our protocol has this property as well. Our protocol is secure under the assumption that classical quantum-resistant fully homomorphic encryption schemes with a decryption circuit of logarithmic depth exist. Interestingly, our construction also admits a reduction from quantum fair secure computation to classical fair secure computation.
2021
CRYPTO
How to Meet Ternary LWE Keys 📺
The LWE problem with its ring variants is today the most prominent candidate for building efficient public-key cryptosystems resistant to quantum computers. NTRU-type cryptosystems use an LWE-type variant with small max-norm secrets, usually with ternary coefficients from the set $\{-1,0,1\}$. The presumably best attack on these schemes is a hybrid attack that combines lattice reduction techniques with Odlyzko's Meet-in-the-Middle approach. Odlyzko's algorithm is a classical combinatorial attack that for key space size $\mathcal{S}$ runs in time $\mathcal{S}^{0.5}$. We substantially improve on this Meet-in-the-Middle approach, using the representation technique developed for subset sum algorithms. Asymptotically, our heuristic Meet-in-the-Middle attack runs in time roughly $\mathcal{S}^{0.25}$, which also beats the $\mathcal{S}^{\frac 1 3}$ complexity of the best known quantum algorithm. For the round-3 NIST post-quantum encryption schemes NTRU and NTRU Prime we obtain non-asymptotic instantiations of our attack with complexity roughly $\mathcal{S}^{0.3}$. As opposed to other combinatorial attacks, our attack benefits from larger LWE field sizes $q$, as they are often used in modern lattice-based signatures. For example, for BLISS and GLP signatures we obtain non-asymptotic combinatorial attacks around $\mathcal{S}^{0.28}$. Our attacks do not invalidate the security claims of the aforementioned schemes. However, they establish improved combinatorial upper bounds for their security. We leave it as an open question whether our new Meet-in-the-Middle attack in combination with lattice reduction can be used to speed up the hybrid attack.
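To illustrate the baseline being improved on, here is a toy, noiseless Meet-in-the-Middle over ternary secrets in the spirit of Odlyzko's attack: enumerate each half of the secret and meet on $A_1 s_1 = b - A_2 s_2 \pmod q$. Parameters are tiny and illustrative, and the paper's representation technique is not shown.

```python
# Noiseless toy Meet-in-the-Middle on a ternary secret s with A*s = b (mod q):
# enumerate each half and meet on A1*s1 == b - A2*s2. Time ~ S^0.5 for key
# space size S = 3^n; the paper's representation technique reaches ~S^0.25.

import itertools, random

q, n = 257, 10
A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
s = [random.choice((-1, 0, 1)) for _ in range(n)]
b = [sum(A[i][j] * s[j] for j in range(n)) % q for i in range(n)]

h = n // 2
table = {}
for s1 in itertools.product((-1, 0, 1), repeat=h):
    key = tuple(sum(A[i][j] * s1[j] for j in range(h)) % q for i in range(n))
    table[key] = s1                                  # store A1*s1

for s2 in itertools.product((-1, 0, 1), repeat=n - h):
    key = tuple((b[i] - sum(A[i][h + j] * s2[j] for j in range(n - h))) % q
                for i in range(n))
    if key in table:                                 # collision => secret found
        print("recovered:", list(table[key]) + list(s2) == s)
        break
```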
2021
CRYPTO
Quadratic Secret Sharing and Conditional Disclosure of Secrets 📺
There is a huge gap between the upper and lower bounds on the share size of secret-sharing schemes for arbitrary $n$-party access structures, and, consistent with our current knowledge, the optimal share size can be anywhere between polynomial in $n$ and exponential in $n$. For linear secret-sharing schemes, we know that the share size for almost all $n$-party access structures must be exponential in $n$. Furthermore, most constructions of efficient secret-sharing schemes are linear. We would like to study larger classes of secret-sharing schemes with two goals. On one hand, we want to prove lower bounds for larger classes of secret-sharing schemes, possibly shedding some light on the share size of general secret-sharing schemes. On the other hand, we want to construct efficient secret-sharing schemes for access structures that do not have efficient linear secret-sharing schemes. Given this motivation, Paskin-Cherniavsky and Radune (ITC'20) defined and studied a new class of secret-sharing schemes in which the shares are generated by applying degree-$d$ polynomials to the secret and some random field elements. The special case $d=1$ corresponds to linear and multi-linear secret-sharing schemes. We define and study two additional classes of polynomial secret-sharing schemes: (1) schemes in which for every authorized set the reconstruction of the secret is done using polynomials and (2) schemes in which both sharing and reconstruction are done by polynomials. For linear secret-sharing schemes, schemes with linear sharing and schemes with linear reconstruction are equivalent. We give evidence that for polynomial secret-sharing schemes, schemes with polynomial sharing are probably stronger than schemes with polynomial reconstruction. We also prove lower bounds on the share size for schemes with polynomial reconstruction. On the positive side, we provide constructions of secret-sharing schemes and conditional disclosure of secrets (CDS) protocols with quadratic sharing and reconstruction. We extend a construction of Liu et al. (CRYPTO'17) and construct optimal quadratic $k$-server CDS protocols for functions $f:[N]^k\rightarrow \{0,1\}$ with message size $O(N^{(k-1)/3})$. We show how to transform our quadratic $k$-server CDS protocol to a robust CDS protocol, and use the robust CDS protocol to construct quadratic secret-sharing schemes for arbitrary access structures with share size $O(2^{0.705n})$; this is better than the best known share size of $O(2^{0.7576n})$ for linear secret-sharing schemes and worse than the best known share size of $O(2^{0.585n})$ for general secret-sharing schemes.
2021
CRYPTO
Non-Malleable Codes for Bounded Parallel-Time Tampering 📺
Non-malleable codes allow one to encode data in such a way that once a codeword is tampered with, the modified codeword is either an encoding of the original message, or a completely unrelated one. Since the introduction of this notion by Dziembowski, Pietrzak, and Wichs (ICS '10 and J. ACM '18), there has been a large body of work realizing such coding schemes secure against various classes of tampering functions. It is well known that there is no efficient non-malleable code secure against all polynomial-size tampering functions. Nevertheless, no code which is non-malleable for \emph{bounded} polynomial-size attackers is known, and obtaining such a code has been a major open problem. We present the first construction of a non-malleable code secure against all polynomial-size tampering functions that have \emph{bounded} parallel time. This is an even larger class than all bounded polynomial-size functions. In particular, this class includes all functions in non-uniform $\mathbf{NC}$ (and much more). Our construction is in the plain model (i.e., no trusted setup) and relies on several cryptographic assumptions such as keyless hash functions, time-lock puzzles, as well as other standard assumptions. Additionally, our construction has several appealing properties: the complexity of encoding is independent of the class of tampering functions, and we can obtain (sub-)exponentially small error.
2021
CRYPTO
Sumcheck Arguments and their Applications 📺
We introduce a class of interactive protocols, which we call *sumcheck arguments*, that establishes a novel connection between the sumcheck protocol (Lund et al. JACM 1992) and folding techniques for Pedersen commitments (Bootle et al. EUROCRYPT 2016). Informally, we consider a general notion of bilinear commitment over modules, and show that the sumcheck protocol applied to a certain polynomial associated with the commitment scheme yields a succinct argument of knowledge for openings of the commitment. Building on this, we additionally obtain succinct arguments for the NP-complete language R1CS over certain rings. Sumcheck arguments enable us to recover as a special case numerous prior works in disparate cryptographic settings (such as discrete logarithms, pairings, RSA groups, lattices), providing one abstract framework to understand them all. Further, we answer open questions raised in prior works, such as obtaining a lattice-based succinct argument from the SIS assumption for satisfiability problems over rings.
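For readers who have not seen it, a toy implementation of the underlying sumcheck protocol (Lund et al.) follows: the prover reduces a claim about a sum over $\{0,1\}^n$ to a single evaluation of the polynomial at a random point, one variable per round. The polynomial and field are arbitrary illustrative choices, and no commitments are involved.

```python
# Toy sumcheck protocol: prove that the sum of g over {0,1}^3 equals
# `claimed`, with the verifier evaluating g only once, at a random point.

import random

P = 2**31 - 1  # prime field

def g(xs):                     # example polynomial, degree 1 per variable
    x, y, z = xs
    return (2*x*y + 3*y*z + x + 5) % P

n = 3
claimed = sum(g(((b >> 2) & 1, (b >> 1) & 1, b & 1)) for b in range(2**n)) % P

point = []
for i in range(n):             # one round per variable
    rest = n - i - 1
    # Prover's message: the univariate slice h(X), summed over the remaining
    # Boolean variables (sent as a low-degree polynomial in the real
    # protocol; represented here as a callable for brevity).
    def h(X, rest=rest):
        return sum(g(point + [X] + [(b >> k) & 1 for k in range(rest - 1, -1, -1)])
                   for b in range(2**rest)) % P
    assert (h(0) + h(1)) % P == claimed   # verifier's consistency check
    r = random.randrange(P)               # verifier's challenge
    claimed, point = h(r), point + [r]    # claim carried into the next round
print("final oracle check:", g(point) % P == claimed)
```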
2021
CRYPTO
Secure Computation from One-Way Noisy Communication, or: Anti-Correlation via Anti-Concentration 📺
Can a sender encode a pair of messages $(m_0,m_1)$ jointly, and send their encoding over (say) a binary erasure channel, so that the receiver can decode exactly one of the two messages and the sender does not know which one? Garg et al. (Crypto 2015) showed that this is information-theoretically impossible. We show how to circumvent this impossibility by assuming that the receiver is computationally bounded, settling for an inverse-polynomial security error (which is provably necessary), and relying on ideal obfuscation. Our solution creates a ``computational anti-correlation'' between the events of receiving $m_0$ and receiving $m_1$ by exploiting the anti-concentration of the binomial distribution. The ideal obfuscation primitive in our construction can either be directly realized using (stateless) tamper-proof hardware, yielding an unconditional result, or heuristically instantiated using existing indistinguishability obfuscation schemes. We put forward a new notion of obfuscation that suffices to securely instantiate our construction. As a corollary, we get similar feasibility results for general secure computation of sender-receiver functionalities by leveraging the completeness of the above ``random oblivious transfer'' functionality.
2021
CRYPTO
Proof-Carrying Data without Succinct Arguments 📺
Proof-carrying data (PCD) is a powerful cryptographic primitive that enables mutually distrustful parties to perform distributed computations that run indefinitely. Known approaches to construct PCD are based on succinct non-interactive arguments of knowledge (SNARKs) that have a succinct verifier or a succinct accumulation scheme. In this paper we show how to obtain PCD without relying on SNARKs. We construct a PCD scheme given any non-interactive argument of knowledge (e.g., with linear-size arguments) that has a *split accumulation scheme*, which is a weak form of accumulation that we introduce. Moreover, we construct a transparent non-interactive argument of knowledge for R1CS whose split accumulation is verifiable via a (small) *constant number of group and field operations*. Our construction is proved secure in the random oracle model based on the hardness of discrete logarithms, and it leads, via the random oracle heuristic and our result above, to concrete efficiency improvements for PCD. Along the way, we construct a split accumulation scheme for Hadamard products under Pedersen commitments and for a simple polynomial commitment scheme based on Pedersen commitments. Our results are supported by a modular and efficient implementation.
2021
CRYPTO
Efficient Key Recovery for all HFE Signature Variants 📺
Honorable mention for best paper
The HFE cryptosystem is one of the best known multivariate schemes. Especially in the area of digital signatures, the HFEv- variant offers short signatures and high performance. Recently, an instance of the HFEv- signature scheme called GeMSS was selected as one of the alternative candidates for signature schemes in the third round of the NIST Post Quantum Crypto (PQC) Standardization Project. In this paper, we propose a new key recovery attack on the HFEv- signature scheme. Our attack shows that neither the Minus nor the Vinegar modification enhances the security of the basic HFE scheme significantly. This shows that it is very difficult to build a secure and efficient signature scheme on the basis of HFE. In particular, we use our attack to show that the proposed parameters of the GeMSS scheme are not as secure as claimed.
2021
CRYPTO
New Approaches for Quantum Copy-Protection 📺
Quantum copy protection uses the unclonability of quantum states to construct quantum software that provably cannot be pirated. Copy protection would be immensely useful, but unfortunately little is known about how to achieve it in general. In this work, we make progress on this goal, by giving the following results: * We show how to copy protect any program that cannot be learned from its input-output behavior, relative to a classical oracle. This improves on Aaronson (CCC 2009), which achieves the same relative to a quantum oracle. By instantiating the oracle with post-quantum candidate obfuscation schemes, we obtain a heuristic construction of copy protection. * We show, roughly, that any program which can be watermarked can be copy detected, a weaker version of copy protection that does not prevent copying, but guarantees that any copying can be detected. Our scheme relies on the security of the assumed watermarking, plus the assumed existence of public key quantum money. Our construction is general, applicable to many recent watermarking schemes.
2021
CRYPTO
Multi-theorem Designated-Verifier NIZK for QMA 📺
We present a designated-verifier non-interactive zero-knowledge argument system for QMA with multi-theorem security under the Learning with Errors Assumption. All previous such protocols for QMA are only single-theorem secure. We also relax the setup assumption required in previous works. We prove security in the malicious designated-verifier (MDV-NIZK) model (Quach, Rothblum, and Wichs, EUROCRYPT 2019), where the setup consists of a mutually trusted random string and an untrusted verifier public key. Our main technical contribution is a general compiler that, given a NIZK for NP and a quantum sigma protocol for QMA, generates an MDV-NIZK protocol for QMA.
2021
CRYPTO
Hidden Cosets and Applications to Unclonable Cryptography 📺
In 2012, Aaronson and Christiano introduced the idea of hidden subspace states to build public-key quantum money [STOC '12]. Since then, this idea has been applied to realize several other cryptographic primitives which enjoy some form of unclonability. In this work, we propose a generalization of hidden subspace states to hidden coset states. We study different unclonable properties of coset states and several applications: * We show that, assuming indistinguishability obfuscation (iO), hidden coset states possess a certain direct product hardness property, which immediately implies a tokenized signature scheme in the plain model. Previously, a tokenized signature scheme was known only relative to an oracle, from a work of Ben-David and Sattath [QCrypt '17]. * Combining a tokenized signature scheme with extractable witness encryption, we give a construction of an unclonable decryption scheme in the plain model. The latter primitive was recently proposed by Georgiou and Zhandry [ePrint '20], who gave a construction relative to a classical oracle. * We conjecture that coset states satisfy a certain natural monogamy-of-entanglement property. Assuming this conjecture is true, we remove the requirement for extractable witness encryption in our unclonable decryption construction. As potential evidence in support of the conjecture, we prove a weaker version of this monogamy property, which we believe will still be of independent interest. * Finally, we give the first construction of a copy-protection scheme for pseudorandom functions (PRFs) in the plain model. Our scheme is secure either assuming iO and extractable witness encryption, or iO, LWE and the conjectured monogamy property mentioned above. This is the first example of a copy-protection scheme with provable security in the plain model for a class of functions that is not evasive.
2021
CRYPTO
KHAPE: Asymmetric PAKE from Key-Hiding Key Exchange 📺
OPAQUE [Jarecki et al., Eurocrypt 2018] is an asymmetric password-authenticated key exchange (aPAKE) protocol that is being developed as an Internet standard and for use within TLS 1.3. OPAQUE combines an Oblivious PRF (OPRF) with an authenticated key exchange to provide strong security properties, including security against pre-computation attacks (called saPAKE security). However, the security of OPAQUE relies crucially on the integrity of the OPRF. If the latter breaks (by cryptanalysis, quantum attacks or security compromise), the user's password is immediately exposed to an offline dictionary attack. To address this weakness, we present KHAPE, a variant of OPAQUE that does not require the use of an OPRF to achieve aPAKE security, resulting in improved resilience and performance. An OPRF can be optionally added to KHAPE, for enhanced saPAKE security, but without opening the password to an offline dictionary attack upon OPRF compromise. In addition to resilience to OPRF compromise, a DH-based implementation of KHAPE (using HMQV) offers the best performance among aPAKE protocols, adding less than the cost of one exponentiation on top of an unauthenticated Diffie-Hellman exchange. KHAPE uses three messages with explicit client authentication and four with explicit server authentication (one more than OPAQUE in the latter case). All results in the paper are proven within the UC framework in the ideal cipher model. Of independent interest is our treatment of "key-hiding AKE", which KHAPE uses as a main component, and our UC proofs of AKE security for the protocols 3DH (a basis of Signal) and HMQV that we use as efficient instantiations of KHAPE.
2021
CRYPTO
An Algebraic Framework for Universal and Updatable SNARKs 📺
We introduce Checkable Subspace Sampling Arguments, a new information theoretic interactive proof system in which the prover shows that a vector has been sampled in a subspace according to the verifier's coins. We show that this primitive provides a unifying view that explains the technical core of most of the constructions of universal and updatable pairing-based (zk)SNARKs. This characterization is extended to a fully algebraic framework for designing such SNARKs in a modular way. We propose new constructions of CSS arguments that lead to SNARKs with different performance trade-offs.
2021
CRYPTO
Improved torsion-point attacks on SIDH variants 📺
SIDH is a post-quantum key exchange algorithm based on the presumed difficulty of finding isogenies between supersingular elliptic curves. However, SIDH and related cryptosystems also reveal additional information: the restriction of a secret isogeny to a subgroup of the curve (torsion-point information). Petit [31] was the first to demonstrate that torsion-point information could noticeably lower the difficulty of finding secret isogenies. In particular, Petit showed that "overstretched'' parameterizations of SIDH could be broken in polynomial time. However, this did not impact the security of any cryptosystems proposed in the literature. The contribution of this paper is twofold: First, we strengthen the techniques of [31] by exploiting additional information coming from a dual and a Frobenius isogeny. This extends the impact of torsion-point attacks considerably. In particular, our techniques yield a classical attack that completely breaks the $n$-party group key exchange of [2], first introduced as GSIDH in [17], for 6 parties or more, and a quantum attack for 3 parties or more that improves on the best known asymptotic complexity. We also provide a Magma implementation of our attack for 6 parties. We give the full range of parameters for which our attacks apply. Second, we construct SIDH variants designed to be weak against our attacks; this includes backdoor choices of starting curve, as well as backdoor choices of base-field prime. We stress that our results do not degrade the security of, or reveal any weakness in, the NIST submission SIKE [20].
2021
CRYPTO
MuSig2: Simple Two-Round Schnorr Multi-Signatures 📺
Multi-signatures enable a group of signers to produce a joint signature on a joint message. Recently, Drijvers et al. (S\&P'19) showed that all thus far proposed two-round multi-signature schemes in the pure DL setting (without pairings) are insecure under concurrent signing sessions. While Drijvers et al. proposed a secure two-round scheme, this efficiency in terms of rounds comes at the price of signatures that are more than twice as large as Schnorr signatures, which are becoming popular in cryptographic systems due to their practicality (e.g., they will likely be adopted in Bitcoin). If one needs a multi-signature scheme that can be used as a drop-in replacement for Schnorr signatures, then one is forced to resort either to a three-round scheme or to sequential signing sessions, both of which are undesirable options in practice. In this work, we propose MuSig2, a simple and highly practical two-round multi-signature scheme. This is the first scheme that simultaneously i) is secure under concurrent signing sessions, ii) supports key aggregation, iii) outputs ordinary Schnorr signatures, iv) needs only two communication rounds, and v) has similar signer complexity as ordinary Schnorr signatures. Furthermore, it is the first multi-signature scheme in the pure DL setting that supports preprocessing of all but one round, effectively enabling a non-interactive signing process without forgoing security under concurrent sessions. We prove the security of MuSig2 in the random oracle model, and the security of a more efficient variant in the combination of the random oracle and the algebraic group model. Both our proofs rely on a weaker variant of the OMDL assumption.
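A sketch of the key-aggregation step (shared by MuSig and MuSig2) may help: each key enters the aggregate with a hash-derived coefficient, which is what blocks rogue-key attacks. The group here is a toy multiplicative group mod a prime, and MuSig2's actual contribution, the two-round nonce handling, is not shown.

```python
# MuSig-style key aggregation: aggregate key = prod_i X_i^{a_i} with
# a_i = H(L, X_i), so no signer can cancel out the others' keys.
# Toy group mod a 64-bit prime; NOT secure parameters.

import hashlib, random

p = 18446744073709551557   # prime 2**64 - 59
g = 5

def H(*parts):
    data = b"|".join(str(part).encode() for part in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1)

keys = [pow(g, random.randrange(2, p - 1), p) for _ in range(3)]
L = sorted(keys)           # canonical encoding of the key multiset

agg = 1
for X in keys:
    agg = agg * pow(X, H(L, X), p) % p   # coefficient a_i = H(L, X_i)
print("aggregate key:", hex(agg))
```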
2021
CRYPTO
Limits on the Adaptive Security of Yao’s Garbling 📺
Yao’s garbling scheme is one of the most fundamental cryptographic constructions. Lindell and Pinkas (Journal of Cryptology 2009) gave a formal proof of security in the selective setting assuming secure symmetric-key encryption (and hence one-way functions). This was followed by results, both positive and negative, concerning its security in the stronger, adaptive setting. Applebaum et al. (Crypto 2013) showed that it cannot satisfy adaptive security as is, due to a simple incompressibility argument. Jafargholi and Wichs (TCC 2017) considered a natural adaptation of Yao’s scheme that circumvents this negative result, and proved that it is adaptively secure, at least for shallow circuits. In particular, they showed that for the class of circuits of depth d, the loss in security is at most exponential in d. The above results all concern the simulation-based notion of security. In this work, we show that the upper bound of Jafargholi and Wichs is more or less optimal in a strong sense. As our main result, we show that there exists a family of Boolean circuits, one for each depth d ∈ N, such that any black-box reduction proving the adaptive indistinguishability-security of the natural adaptation of Yao’s scheme from any symmetric-key encryption has to lose a factor that is sub-exponential in d. Since indistinguishability is a weaker notion than simulation, our bound also applies to adaptive simulation. To establish our results, we build on the recent approach of Kamath et al. (Eprint 2021), which uses pebbling lower bounds in conjunction with oracle separations to prove fine-grained lower bounds on loss in cryptographic security.
2021
CRYPTO
Fluid MPC: Secure Multiparty Computation with Dynamic Participants 📺
Existing approaches to secure multiparty computation (MPC) require all participants to commit to the entire duration of the protocol. As interest in MPC continues to grow, it is inevitable that there will be a desire to use it to evaluate increasingly complex functionalities, resulting in computations spanning several hours or days. Such scenarios call for a *dynamic* participation model for MPC where participants have the flexibility to go offline as needed and (re)join when they have available computational resources. Such a model would also democratize access to privacy-preserving computation by facilitating an ``MPC-as-a-service'' paradigm --- the deployment of MPC in volunteer-operated networks (such as blockchains, where dynamism is inherent) that perform computation on behalf of clients. In this work, we initiate the study of *fluid MPC*, where parties can dynamically join and leave the computation. The minimum commitment required from each participant is referred to as *fluidity*, measured in the number of rounds of communication that it must stay online. Our contributions are threefold: - We provide a formal treatment of fluid MPC, exploring various possible modeling choices. - We construct information-theoretic fluid MPC protocols in the honest-majority setting. Our protocols achieve *maximal fluidity*, meaning that a party can exit the computation after receiving and sending messages in one round. - We implement our protocol and test it in multiple network settings.
2021
CRYPTO
Targeted Lossy Functions and Applications 📺
Lossy trapdoor functions, introduced by Peikert and Waters (STOC '08), can be initialized in one of two indistinguishable modes: in injective mode, the function preserves all information about its input, and can be efficiently inverted given a trapdoor, while in lossy mode, the function loses some information about its input. Such functions have found countless applications in cryptography, and can be constructed from a variety of Cryptomania assumptions. In this work, we introduce \emph{targeted lossy functions (TLFs)}, which relax lossy trapdoor functions along two orthogonal dimensions. Firstly, they do not require an inversion trapdoor in injective mode. Secondly, the lossy mode of the function is initialized with some target input, and the function is only required to lose information about this particular target. The injective and lossy modes should be indistinguishable even given the target. We construct TLFs from Minicrypt assumptions, namely, injective pseudorandom generators, or even one-way functions under a natural relaxation of injectivity. We then generalize TLFs to incorporate \emph{branches}, and construct \emph{all-injective-but-one} and \emph{all-lossy-but-one} variants. We show a wide variety of applications of targeted lossy functions. In several cases, we get the first Minicrypt constructions of primitives that were previously only known under Cryptomania assumptions. Our applications include: - \emph{Pseudo-entropy functions} from one-way functions. - Deterministic leakage-resilient message-authentication codes and improved leakage-resilient symmetric-key encryption from one-way functions. - Extractors for \emph{extractor-dependent sources} from one-way functions. - Selective-opening secure symmetric-key encryption from one-way functions. - A new construction of CCA PKE from (exponentially secure) trapdoor functions and injective pseudorandom generators. We also discuss a fascinating connection to distributed point functions.
2021
CRYPTO
On the Round Complexity of Secure Quantum Computation 📺
We construct the first constant-round protocols for secure quantum computation in the two-party (2PQC) and multi-party (MPQC) settings with security against malicious adversaries. Our protocols are in the common random string (CRS) model. - Assuming two-message oblivious transfer (OT), we obtain (i) three-message 2PQC, and (ii) five-round MPQC with only three rounds of online (input-dependent) communication; such OT is known from quantum-hard Learning with Errors (QLWE). - Assuming sub-exponential hardness of QLWE, we obtain (i) three-round 2PQC with two online rounds and (ii) four-round MPQC with two online rounds. - When only one (out of two) parties receives output, we achieve minimal interaction (two messages) from two-message OT; classically, such protocols are known as non-interactive secure computation (NISC), and our result constitutes the first maliciously-secure quantum NISC. Additionally assuming reusable malicious designated-verifier NIZK arguments for NP (MDV-NIZKs), we give the first MDV-NIZK for QMA that only requires one copy of the quantum witness. Finally, we perform a preliminary investigation into two-round secure quantum computation where each party must obtain output. On the negative side, we identify a broad class of simulation strategies that suffice for classical two-round secure computation that are unlikely to work in the quantum setting. Next, as a proof-of-concept, we show that two-round secure quantum computation exists with respect to a quantum oracle.
2021
CRYPTO
Towards Tight Random Probing Security 📺
Proving the security of masked implementations in theoretical models that are relevant to practice and match the best known attacks of the side-channel literature is a notoriously hard problem. The random probing model is a good candidate to contribute to this challenge, due to its ability to capture the continuous nature of physical leakage (contrary to the threshold probing model), while also being convenient to manipulate in proofs and to automate with verification tools. Yet, despite recent progress in the design of masked circuits with good asymptotic security guarantees in this model, existing results still fall short when it comes to analyzing the security of concretely useful circuits under realistic noise levels and with a low number of shares. In this paper, we contribute to this issue by introducing a new composability notion, the Probe Distribution Table (PDT), and a new tool (called STRAPS, for the Sampled Testing of the RAndom Probing Security). Their combination allows us to significantly improve the tightness of existing analyses in the most practical (low noise, low number of shares) region of the design space. We illustrate these improvements by quantifying the random probing security of an AES S-box circuit, masked with the popular multiplication gadget of Ishai, Sahai and Wagner from Crypto 2003, with up to six shares.
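For intuition about the model itself, here is a Monte Carlo sketch of random probing for the simplest possible "gadget", a plain additive $n$-sharing, where the secret is determined only if all $n$ shares leak (probability $p^n$). Real gadgets expose many intermediate wires, which is exactly the subtlety the PDT/STRAPS machinery quantifies; all parameters below are illustrative.

```python
# Monte Carlo flavor of the random probing model: each wire leaks
# independently with probability p. For a bare n-share additive sharing,
# the secret is determined only when all n shares leak.

import random

def all_shares_leak(n, p):
    return all(random.random() < p for _ in range(n))

p, trials = 0.1, 200_000
for n in (1, 2, 3, 4):
    rate = sum(all_shares_leak(n, p) for _ in range(trials)) / trials
    print(f"shares={n}: empirical {rate:.2e}  vs  p^n = {p**n:.1e}")
```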
2021
CRYPTO
One-Way Functions Imply Secure Computation in a Quantum World 📺
We prove that quantum-hard one-way functions imply simulation-secure quantum oblivious transfer (QOT), which is known to suffice for secure computation of arbitrary quantum functionalities. Furthermore, our construction only makes black-box use of the quantum-hard one-way function. Our primary technical contribution is a construction of extractable and equivocal quantum bit commitments based on the black-box use of quantum-hard one-way functions in the standard model. Instantiating the Crépeau-Kilian (FOCS 1988) framework with these commitments yields simulation-secure quantum oblivious transfer.
2021
CRYPTO
Non-Interactive Batch Arguments for NP from Standard Assumptions 📺
We study the problem of designing *non-interactive batch arguments* for NP. Such an argument system allows an efficient prover to prove multiple NP statements, with size much smaller than the combined witness length. We provide the first construction of such an argument system for NP in the common reference string model based on standard cryptographic assumptions. Prior works either require non-falsifiable assumptions (or the random oracle model) or can only support private verification. At the heart of our result is a new *dual mode* interactive batch argument system for NP. We show how to apply the correlation-intractability framework for Fiat-Shamir -- which has primarily been applied to proof systems -- to such interactive arguments.
2021
CRYPTO
Tight State-Restoration Soundness in the Algebraic Group Model 📺
Most efficient zero-knowledge arguments lack a concrete security analysis, making parameter choices and efficiency comparisons challenging. This is even more true for non-interactive versions of these systems obtained via the Fiat-Shamir transform, for which the security guarantees generically derived from the interactive protocol are often too weak, even when assuming a random oracle. This paper initiates the study of {\em state-restoration soundness} in the algebraic group model (AGM) of Fuchsbauer, Kiltz, and Loss (CRYPTO '18). This is a stronger notion of soundness for an interactive proof or argument which allows the prover to rewind the verifier, and which is tightly connected with the concrete soundness of the non-interactive argument obtained via the Fiat-Shamir transform. We propose a general methodology to prove tight bounds on state-restoration soundness, and apply it to variants of Bulletproofs (Bootle et al, S\&P '18) and Sonic (Maller et al., CCS '19). To the best of our knowledge, our analysis of Bulletproofs gives the {\em first} non-trivial concrete security analysis for a non-constant round argument combined with the Fiat-Shamir transform.
2021
EUROCRYPT
Classical proofs of quantum knowledge 📺
We define the notion of a proof of knowledge in the setting where the verifier is classical, but the prover is quantum, and where the witness that the prover holds is in general a quantum state. We establish simple properties of our definition, including that, if a nondestructive classical proof of quantum knowledge exists for some state, then that state can be cloned by an unbounded adversary, and that, under certain conditions on the parameters in our definition, a proof of knowledge protocol for a hard-to-clone state can be used as a (destructive) quantum money verification protocol. In addition, we provide two examples of protocols (both inspired by private-key classical verification protocols for quantum money schemes) which we can show to be proofs of quantum knowledge under our definition. In so doing, we introduce techniques for the analysis of such protocols which build on results from the literature on nonlocal games. Finally, we show that, under our definition, the verification protocol introduced by Mahadev (FOCS 2018) is a classical argument of quantum knowledge for QMA relations. In all cases, we construct an explicit quantum extractor that is able to produce a quantum witness given black-box quantum (rewinding) access to the prover, the latter of which includes the ability to coherently execute the prover's black-box circuit controlled on a superposition of messages from the verifier.
2021
EUROCRYPT
Large Scale, Actively Secure Computation from LPN and Free-XOR Garbled Circuits 📺
Whilst secure multiparty computation (MPC) based on garbled circuits is concretely efficient for a small number of parties $n$, the gap between the complexity of practical protocols, which is $O(n^2)$ per party, and the theoretical complexity, which is $O(n)$ per party, is prohibitive for large values of $n$. In order to bridge this gap, Ben-Efraim, Lindell and Omri (ASIACRYPT 2017) introduced a garbled-circuit-based MPC protocol with an almost-practical pre-processing, yielding $O(n)$ complexity per party. However, this protocol is only passively secure and does not support the free-XOR technique by Kolesnikov and Schneider (ICALP 2008), on which almost all practical garbled-circuit-based protocols rely for their efficiency. In this work, to further bridge the gap between theory and practice, we present a new $n$-party garbling technique based on a new variant of standard LPN-based encryption. Using this approach we describe two new garbled-circuit-based protocols, which have practical evaluation phases. Both protocols are in the preprocessing model, have $O(n)$ complexity per party, are actively secure and support the free-XOR technique. The first protocol tolerates full-threshold corruption and ensures the garbled circuit contains no adversarially introduced errors, using a rather expensive garbling phase. The second protocol assumes that at least $n/c$ of the parties are honest (for an arbitrary fixed value $c$) and allows a significantly lighter preprocessing, at the cost of a small sacrifice in online efficiency. We demonstrate the practicality of our approach with an implementation of the evaluation phase using different circuits. We show that, like the passively-secure protocol of Ben-Efraim, Lindell and Omri, our approach starts to improve upon other practical protocols with $O(n^2)$ complexity when the number of parties is around $100$.
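Since compatibility with free-XOR is the crux, here is a minimal sketch of the technique itself: all wire labels come in pairs offset by one global secret $\Delta$, so XOR gates need no garbled ciphertexts at all. The label size is an arbitrary illustrative choice.

```python
# Free-XOR in a nutshell: every wire's two labels differ by one global
# secret Delta, so the XOR gate's output label is just the XOR of the
# input labels, with zero garbled ciphertexts. Toy 128-bit labels.

import secrets

Delta = secrets.randbits(128)          # global offset, known to the garbler

def new_wire():
    w0 = secrets.randbits(128)
    return (w0, w0 ^ Delta)            # (label for 0, label for 1)

a, b = new_wire(), new_wire()

c0 = a[0] ^ b[0]                       # garbling an XOR gate: one XOR
c = (c0, c0 ^ Delta)

for x in (0, 1):                       # evaluation is also just one XOR
    for y in (0, 1):
        assert a[x] ^ b[y] == c[x ^ y]
print("XOR gates cost zero ciphertexts")
```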
2021
EUROCRYPT
Pre-Computation Scheme of Window $\tau$NAF for Koblitz Curves Revisited 📺
Let $E_a/ \mathbb{F}_{2}: y^2+xy=x^3+ax^2+1$ be a Koblitz curve. The window $\tau$-adic non-adjacent form (window $\tau$NAF) is currently the standard representation system to perform scalar multiplications on $E_a/ \mathbb{F}_{2^m}$ utilizing the Frobenius map $\tau$. This work focuses on the pre-computation part of scalar multiplication. We first introduce $\mu\bar{\tau}$-operations, where $\mu=(-1)^{1-a}$ and $\bar{\tau}$ is the complex conjugate of $\tau$. Efficient formulas for $\mu\bar{\tau}$-operations are then derived and used in a novel pre-computation scheme. Our pre-computation scheme requires $6\mathbf{M}+6\mathbf{S}$, $18\mathbf{M}+17\mathbf{S}$, $44\mathbf{M}+32\mathbf{S}$, and $88\mathbf{M}+62\mathbf{S}$ ($a=0$) and $6\mathbf{M}+6\mathbf{S}$, $19\mathbf{M}+17\mathbf{S}$, $46\mathbf{M}+32\mathbf{S}$, and $90\mathbf{M}+62\mathbf{S}$ ($a=1$) for window $\tau$NAF with widths from $4$ to $7$ respectively. This is about twice as fast as the state-of-the-art pre-computation technique in the literature. The impact of our new efficient pre-computation is also reflected in a significant improvement of scalar multiplication. Traditionally, window $\tau$NAF with width at most $6$ is used to achieve the best scalar multiplication. Because of the dramatic cost reduction of the proposed pre-computation, we are able to increase the width for window $\tau$NAF to $7$ for a better scalar multiplication. This indicates that the pre-computation part becomes more important in performing scalar multiplication. With our efficient pre-computation and the new window width, our scalar multiplication runs in at least 85.2\% of the time of Kohel's work (Eurocrypt'2017) combined with the best previous pre-computation. Our results push the scalar multiplication of Koblitz curves, a very well-studied and long-standing research area, to a significant new stage.
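For readers unfamiliar with windowed recodings, below is the classical width-$w$ NAF recoding for ordinary integers, the analogue of the window $\tau$NAF above (window $\tau$NAF replaces doubling by the Frobenius map $\tau$, but the recoding logic has the same shape). The inputs are illustrative.

```python
# Classical width-w NAF recoding of an integer scalar: nonzero digits are
# odd, lie in (-2^(w-1), 2^(w-1)), and are separated by >= w-1 zeros, so
# only 2^(w-2) precomputed multiples are needed.

def wnaf(k, w):
    digits = []
    while k:
        if k & 1:
            d = k % (1 << w)
            if d >= 1 << (w - 1):
                d -= 1 << w            # pick the negative representative
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits                      # least-significant digit first

d = wnaf(1234567, 5)
print(d)
print(sum(di << i for i, di in enumerate(d)) == 1234567)   # sanity check
```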
2021
EUROCRYPT
A $2^{n/2}$-Time Algorithm for $\sqrt{n}$-SVP and $\sqrt{n}$-Hermite SVP, and an Improved Time-Approximation Tradeoff for (H)SVP 📺
We show a $2^{n/2+o(n)}$-time algorithm that, given as input a basis of a lattice $\mathcal{L} \subset \mathbb{R}^n$, finds a (non-zero) vector in $\mathcal{L}$ whose length is at most $\widetilde{O}(\sqrt{n})\cdot \min\{\lambda_1(\mathcal{L}), \det(\mathcal{L})^{1/n}\}$, where $\lambda_1(\mathcal{L})$ is the length of a shortest non-zero lattice vector and $\det(\mathcal{L})$ is the lattice determinant. Minkowski showed that $\lambda_1(\mathcal{L}) \leq \sqrt{n} \det(\mathcal{L})^{1/n}$ and that there exist lattices with $\lambda_1(\mathcal{L}) \geq \Omega(\sqrt{n}) \cdot \det(\mathcal{L})^{1/n}$, so that our algorithm finds vectors that are as short as possible relative to the determinant (up to a polylogarithmic factor). The main technical contribution behind this result is a new analysis of (a simpler variant of) a $2^{n/2 + o(n)}$-time algorithm from [ADRS15], which was previously only known to solve less useful problems. To achieve this, we rely crucially on the ``reverse Minkowski theorem'' (conjectured by Dadush [DR16] and proven by [RS17]), which can be thought of as a partial converse to the fact that $\lambda_1(\mathcal{L}) \leq \sqrt{n} \det(\mathcal{L})^{1/n}$. Previously, the fastest known algorithm for finding such a vector was the $2^{0.802n + o(n)}$-time algorithm due to [LWXZ11], which actually finds a non-zero lattice vector with length $O(1) \cdot \lambda_1(\mathcal{L})$. Though we do not show how to find lattice vectors with this length in time $2^{n/2+o(n)}$, we do show that our algorithm suffices for the most important application of such algorithms: basis reduction. In particular, we show a modified version of Gama and Nguyen's slide-reduction algorithm [GN08], which can be combined with the algorithm above to improve the time-length tradeoff for shortest-vector algorithms in nearly all regimes---including the regimes relevant to cryptography.
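As a quick numeric illustration of the benchmark quantity in the theorem, this snippet computes the Minkowski bound $\sqrt{n}\cdot\det(\mathcal{L})^{1/n}$ for a random toy basis and compares it to the shortest basis vector, which only upper-bounds $\lambda_1(\mathcal{L})$. The basis distribution and dimension are arbitrary choices.

```python
# Compute sqrt(n) * det(L)^(1/n) for a random toy basis and compare it to
# the shortest *basis* vector (an upper bound on lambda_1 only).

import numpy as np

rng = np.random.default_rng(0)
n = 8
B = rng.integers(-5, 6, size=(n, n))
while abs(np.linalg.det(B)) < 1:       # re-sample until clearly full rank
    B = rng.integers(-5, 6, size=(n, n))

det = abs(np.linalg.det(B))
bound = np.sqrt(n) * det ** (1 / n)    # Minkowski's bound on lambda_1
shortest = min(np.linalg.norm(row) for row in B)
print(f"sqrt(n)*det^(1/n) = {bound:.2f},  shortest basis vector = {shortest:.2f}")
```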
2021
EUROCRYPT
Rotational Cryptanalysis From a Differential-Linear Perspective - Practical Distinguishers for Round-reduced FRIET, Xoodoo, and Alzette 📺
The differential-linear attack, combining the power of the two most effective techniques for symmetric-key cryptanalysis, was proposed by Langford and Hellman at CRYPTO 1994. From the exact formula for evaluating the bias of a differential-linear distinguisher (JoC 2017), to the differential-linear connectivity table (DLCT) technique for dealing with the dependencies in the switch between the differential and linear parts (EUROCRYPT 2019), and to the improvements in the context of cryptanalysis of ARX primitives (CRYPTO 2020), we have seen significant development of the differential-linear attack during the last four years. In this work, we further extend this framework by replacing the differential part of the attack with rotational-xor differentials. Along the way, we establish the theoretical link between rotational-xor differentials and linear approximations, revealing that it is nontrivial to directly apply the closed formula for the bias of the ordinary differential-linear attack to rotational differential-linear cryptanalysis. We then revisit rotational cryptanalysis from the perspective of differential-linear cryptanalysis and generalize Morawiecki et al.’s technique for analyzing Keccak, which leads to a practical method for estimating the bias of a (rotational) differential-linear distinguisher in the special case where the output linear mask is a unit vector. Finally, we apply the rotational differential-linear technique to the permutations involved in FRIET, Xoodoo, Alzette, and SipHash. This gives significant improvements over existing cryptanalytic results, or offers explanations for previous experimental distinguishers without a theoretical foundation. To confirm the validity of our analysis, all distinguishers with practical complexities are verified experimentally.
2021
EUROCRYPT
On the Security of Homomorphic Encryption on Approximate Numbers 📺
We present passive attacks against CKKS, the homomorphic encryption scheme for arithmetic on approximate numbers presented at Asiacrypt 2017. The attack is both theoretically efficient (running in expected polynomial time) and very practical, leading to complete key recovery with high probability and very modest running times. We implemented and tested the attack against major open source homomorphic encryption libraries, including HEAAN, SEAL, HElib and PALISADE, and when computing several functions that often arise in applications of the CKKS scheme to machine learning on encrypted data, like mean and variance computations, and approximation of logistic and exponential functions using their Maclaurin series. The attack shows that the traditional formulation of IND-CPA security (or indistinguishability against chosen plaintext attacks) achieved by CKKS does not adequately capture security against passive adversaries when applied to approximate encryption schemes, and that a different, stronger definition is required to evaluate the security of such schemes. We provide a solid theoretical basis for the security evaluation of homomorphic encryption on approximate numbers (against passive attacks) by proposing new definitions that naturally extend the traditional notion of IND-CPA security to the approximate computation setting. We propose both indistinguishability-based and simulation-based variants, as well as restricted versions of the definitions that limit the order and number of adversarial queries (as may be enforced by some applications). We prove implications and separations among different definitional variants, and discuss possible modifications to CKKS that may serve as a countermeasure to our attacks.
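The shape of such a passive key recovery can be conveyed by a deliberately simplified sketch (ours; a hypothetical scalar ring Z_q in place of the CKKS polynomial ring and a single sample in place of an encoding): an approximate decryption result m + e, combined with the public ciphertext, pins down the secret key by linear algebra.

import secrets

q = 2**61 - 1                            # toy prime modulus
s = secrets.randbelow(q)                 # secret key
m, e = 424242, 17                        # plaintext and small noise

c1 = secrets.randbelow(q)                # "ciphertext" (c0, c1) with c0 + c1*s = m + e
c0 = (m + e - c1 * s) % q

shared = (c0 + c1 * s) % q               # the approximate decryption output m + e
# a passive adversary who sees (c0, c1) and the decryption output recovers s:
s_recovered = ((shared - c0) * pow(c1, -1, q)) % q
print(s_recovered == s)                  # True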
2021
EUROCRYPT
The Mother of All Leakages: How to Simulate Noisy Leakages via Bounded Leakage (Almost) for Free 📺
We show that noisy leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, up to a small statistical simulation error and a slight loss in the leakage parameter. The latter holds true in particular for one of the most used noisy-leakage models, where the noisiness is measured using the conditional average min-entropy (Naor and Segev, CRYPTO'09 and SICOMP'12). Our reductions between noisy and bounded leakage are achieved in two steps. First, we put forward a new leakage model (dubbed the dense leakage model) and prove that dense leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, up to small statistical distance. Second, we show that the most common noisy-leakage models fall within the class of dense leakage, with good parameters. We also provide a complete picture of the relationships between different noisy-leakage models, and prove lower bounds showing that our reductions are nearly optimal. Our result finds applications to leakage-resilient cryptography, where we are often able to lift security in the presence of bounded leakage to security in the presence of noisy leakage, both in the information-theoretic and in the computational setting. Additionally, we show how to use lower bounds in communication complexity to prove that bounded-collusion protocols (Kumar, Meka, and Sahai, FOCS'19) for certain functions not only require long transcripts, but also necessarily reveal enough information about the inputs.
2021
EUROCRYPT
Message-recovery Laser Fault Injection Attack on the Classic McEliece Cryptosystem 📺
Code-based public-key cryptosystems are promising candidates for standardization as quantum-resistant public-key cryptographic algorithms. Their security is based on the hardness of the syndrome decoding problem. Computing the syndrome in a finite field, usually $\F_{2}$, guarantees the security of the constructions. We show in this article that the problem becomes considerably easier to solve if the syndrome is computed in $\mathbb{N}$ instead. By means of laser fault injection, we illustrate how to force the matrix-vector product to be computed in $\mathbb{N}$ by corrupting specific instructions, and we validate this experimentally. To solve the syndrome decoding problem in $\mathbb{N}$, we propose a reduction to an integer linear programming problem. We leverage the computational efficiency of linear programming solvers to obtain real-time message recovery attacks against all the code-based proposals to the NIST Post-Quantum Cryptography standardization challenge. We perform our attacks on worst-case scenarios, i.e. random binary codes, and retrieve the initial message within minutes on a desktop computer. Our practical evaluation of the attack targets the reference implementation of the Niederreiter cryptosystem in the NIST finalist \textit{Classic McEliece} and is feasible for all proposed parameter sets of this submission. For example, for the 256-bit security parameter sets, we successfully recover the plaintext in a couple of seconds on a desktop computer. Finally, we highlight the fact that the attack is still possible if only a fraction of the syndrome entries are faulty. This makes the attack feasible even though the fault injection does not have perfect repeatability, and it reduces the computational complexity of the attack, making it even more practical overall.
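A toy version of the resulting integer syndrome decoding step (ours, not the authors' code; the sizes are illustrative and SciPy's MILP solver stands in for the ILP solver used in the paper):

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(7)
n, k, w = 40, 20, 5                      # toy sizes; real parameters are far larger
H = rng.integers(0, 2, size=(k, n))     # binary parity-check matrix
x = np.zeros(n, dtype=int)
x[rng.choice(n, w, replace=False)] = 1  # sparse secret vector
s = H @ x                               # syndrome computed over the integers, not GF(2)

res = milp(c=np.ones(n),                # minimize the Hamming weight
           constraints=LinearConstraint(H, s, s),
           integrality=np.ones(n), bounds=Bounds(0, 1))
recovered = np.round(res.x).astype(int)
print("recovered the vector:", np.array_equal(recovered, x))   # typically True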
2021
EUROCRYPT
Improved Linear Approximations to ARX Ciphers and Attacks Against ChaCha 📺
In this paper, we present a new technique which can be used to find better linear approximations in ARX ciphers. Using this technique, we present the first explicitly derived linear approximations for 3 and 4 rounds of ChaCha and, as a consequence, improve the recent attacks against ChaCha. Additionally, we present new differentials for 3 and 3.5 rounds of ChaCha that, when combined with the proposed technique, lead to further improvement in the complexity of the differential-linear attacks against ChaCha.
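For readers new to linear cryptanalysis of ARX, the basic object is an approximation of modular addition by XOR. A small Python experiment (ours; this is the classical single-bit approximation with correlation 1/2, not one of the paper's multi-round approximations):

import random

random.seed(0)
MASK = (1 << 32) - 1

def bit(v, i):
    return (v >> i) & 1

trials, match = 200_000, 0
for _ in range(trials):
    x, y = random.getrandbits(32), random.getrandbits(32)
    z = (x + y) & MASK
    # linear approximation: bit 1 of (x + y) equals x_1 xor y_1
    # (it ignores the carry bit x_0 & y_0)
    match += bit(z, 1) == bit(x, 1) ^ bit(y, 1)

p = match / trials
print("probability:", p, "correlation:", 2 * p - 1)   # about 0.75 and 0.5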
2021
EUROCRYPT
New Lattice Two-Stage Sampling Technique and its Applications to Functional Encryption – Stronger Security and Smaller Ciphertexts 📺
This work proposes a new lattice two-stage sampling technique, generalizing the prior two-stage sampling method of Gentry, Peikert, and Vaikuntanathan (STOC '08). By using our new technique as a key building block, we can significantly improve the security and efficiency of the current state of the art in simulation-based functional encryption. Particularly, our functional encryption achieves $(Q,\mathrm{poly})$ simulation-based semi-adaptive security that allows arbitrary pre- and post-challenge key queries, and has succinct ciphertexts with only an additive $O(Q)$ overhead. Additionally, our two-stage sampling technique yields new feasibility results for indistinguishability-based adaptively-secure IB-FE for inner products and semi-adaptively-secure AB-FE for inner products, breaking several technical limitations of the recent work by Abdalla, Catalano, Gay, and Ursu (Asiacrypt '20).
2021
EUROCRYPT
On the ideal shortest vector problem over random rational primes 📺
Any non-zero ideal in a number field can be factored into a product of prime ideals. In this paper we report a surprising connection between the complexity of the shortest vector problem (SVP) of prime ideals in number fields and their decomposition groups. When applying the result to number fields popular in lattice-based cryptosystems, such as power-of-two cyclotomic fields, we show that a majority of rational primes lie under prime ideals admitting a polynomial-time algorithm for SVP. Although the shortest vector problem of ideal lattices underpins the security of the Ring-LWE cryptosystem, this work does not break Ring-LWE, since the security reduction is from the worst-case ideal SVP to the average-case Ring-LWE, and it is one-way.
2021
EUROCRYPT
Delay Encryption 📺
We introduce a new primitive named Delay Encryption, and give an efficient instantiation based on isogenies of supersingular curves and pairings. Delay Encryption is related to Time-lock Puzzles and Verifiable Delay Functions, and can be roughly described as ``time-lock identity based encryption''. It has several applications in distributed protocols, such as sealed-bid Vickrey auctions and electronic voting. We give an instantiation of Delay Encryption by modifying Boneh and Franklin's IBE scheme, where we replace the master secret key by a long chain of isogenies, as in the isogeny VDF of De Feo, Masson, Petit and Sanso. Similarly to the isogeny-based VDF, our Delay Encryption requires a trusted setup before parameters can be safely used; our trusted setup is identical to that of the VDF, thus the same parameters can be generated once and shared for many executions of both protocols, with possibly different delay parameters. We also discuss several topics around delay protocols based on isogenies that were left untreated by De Feo et al., namely: distributed trusted setup, watermarking, and implementation issues.
2021
EUROCRYPT
Improved cryptanalysis of UOV and Rainbow 📺
The contributions of this paper are twofold. First, we simplify the description of the Unbalanced Oil and Vinegar scheme (UOV) and its Rainbow variant, which makes it easier to understand the scheme and the existing attacks. We hope that this will make UOV and Rainbow more approachable for cryptanalysts. Secondly, we give two new attacks against the UOV and Rainbow signature schemes: the intersection attack, which applies to both UOV and Rainbow, and the rectangular MinRank attack, which applies only to Rainbow. Our attacks are more powerful than existing attacks. In particular, we estimate that compared to previously known attacks, our new attacks reduce the cost of a key recovery by a factor of 2^17, 2^53, and 2^73 for the parameter sets submitted to the second round of the NIST PQC standardization project targeting the security levels I, III, and V respectively. For the third round parameters, the cost is reduced by a factor of 2^20, 2^40, and 2^55 respectively. This means all these parameter sets fall short of the security requirements set out by NIST.
2021
EUROCRYPT
Dummy Shuffling against Algebraic Attacks in White-box Implementations 📺
At CHES 2016, Bos et al. showed that most existing white-box implementations are easily broken by standard side-channel attacks. A natural idea is to apply the well-developed side-channel countermeasure of linear masking schemes; however, this leaves implementations vulnerable to linear algebraic attacks, which exploit the absence of noise in the white-box setting and are applicable for any order of linear masking. At ASIACRYPT 2018, Biryukov and Udovenko proposed a security model (BU-model for short) for protection against linear algebraic attacks and a new quadratic masking scheme which is provably secure in this model. However, countermeasures against higher-degree attacks were left as an open problem. In this work, we study the effectiveness of another well-known side-channel countermeasure, shuffling, against linear and higher-degree algebraic attacks in the white-box setting. First, we extend classic shuffling to include dummy computation slots and show that this is a crucial component for protecting against the algebraic attacks. We quantify and prove the security of dummy shuffling against the linear algebraic attack in the BU-model. We introduce a refreshing technique for dummy shuffling and show that it allows achieving close to optimal protection in the model for arbitrary degrees of the attack, thus solving the open problem of protection against the algebraic attack in the BU-model. Furthermore, we describe an interesting proof-of-concept construction that makes the slot function public (while keeping the shuffling indexes private).
2021
EUROCRYPT
Non-Interactive Anonymous Router 📺
Anonymous routing is one of the most fundamental online privacy problems and has been studied extensively for decades. Almost all known approaches that achieve anonymous routing (e.g., mix-nets, DC-nets, and numerous other systems) rely on multiple servers or routers to engage in some interactive protocol; and anonymity is guaranteed in the threshold model, i.e., if one or more of the servers/routers behave honestly. Departing from all prior approaches, we propose a novel non-interactive abstraction called a Non-Interactive Anonymous Router (NIAR), that works even with a single untrusted router. In a NIAR scheme, suppose that n senders each want to talk to a distinct receiver. A one-time trusted setup is performed such that each sender obtains a sending key, each receiver obtains a receiving key, and the router receives a token that “encrypts” the permutation mapping the senders to receivers. In every time step, each sender can encrypt its message using its sending key, and the router can use its token to convert the n ciphertexts received from the senders into n transformed ciphertexts. Each transformed ciphertext is delivered to the corresponding receiver, and the receiver can decrypt the message using its receiving key. Imprecisely speaking, security requires that the untrusted router, even when colluding with a subset of corrupt senders and/or receivers, should not be able to break the privacy of honest parties, including who is talking to whom, and the messages they exchange. We show how to construct a communication-efficient NIAR scheme with provable security guarantees based on the SXDH assumption in suitable bilinear groups and assuming Random Oracles (RO); further, the RO assumption can be removed if we allow a public key that is as large as the number of time steps supported. We also define a paranoid notion of security that achieves full insider protection, and show that if we additionally assume sub-exponentially secure Indistinguishability Obfuscation and sub-exponentially secure one-way functions, one can construct a NIAR scheme with paranoid security. We show that a compelling application of NIAR is to realize a Non-Interactive Anonymous Shuffler (NIAS), where an untrusted server or data analyst can only decrypt a shuffled version of the messages coming from n senders where the permutation is hidden. NIAS can be adopted to construct privacy-preserving surveys, differentially private protocols in the shuffle model, and pseudonymous bulletin boards.
2021
EUROCRYPT
Fast verification of masking schemes in characteristic two 📺
We revisit the matrix model for non-interference (NI) probing security of masking gadgets introduced by Belaïd et al. at CRYPTO 2017. This leads to two main results. 1) We generalise the theorems on which this model is based, so as to be able to apply them to masking schemes over any finite field, in particular GF(2), and to be able to analyse the *strong* non-interference (SNI) security notion. We also follow Faust et al. (TCHES 2018) to additionally consider a *robust* probing model that takes hardware defects such as glitches into account. 2) We exploit this improved model to implement a very efficient verification algorithm that improves the performance of state-of-the-art software by three orders of magnitude. We show applications to variants of NI and SNI multiplication gadgets from Barthe et al. (EUROCRYPT 2017), which we verify to be secure up to order 11 after a significant parallel computation effort, whereas the previous largest proven order was 7; SNI refreshing gadgets (ibid.); and NI multiplication gadgets from Gross et al. (TIS@CCS 2016) secure in presence of glitches. We also reduce the randomness cost of some existing gadgets, notably for the implementation-friendly case of 8 shares, improving the previous best results by 17% (resp. 19%) for SNI multiplication (resp. refreshing).
2021
EUROCRYPT
Function Secret Sharing for Mixed-Mode and Fixed-Point Secure Computation 📺
Recently Boyle et al. (TCC 2019) proposed a new approach for secure computation in the {\em preprocessing model} building on {\em function secret sharing} (FSS). This approach can be used to realize any circuit containing gates that admit efficient FSS schemes. In this work, we make the following three technical contributions:
– Improved Key Size. The complexity of the preprocessing phase directly depends on the FSS key size. We improve the size of FSS keys for several existing FSS constructions through two important steps. First, we present a roughly $4\times$ reduction in FSS key size for the Distributed Comparison Function (DCF), i.e., the function $f_{\alpha,\beta}$ with $f_{\alpha,\beta}(x) = \beta$ for all $x < \alpha$, and $0$ otherwise. Second, prior FSS schemes for many important function classes are obtained via reductions to multiple instances of DCF; for example, 2 instances for interval containment and $2m$ for splines with $m$ pieces. We significantly improve these reductions for public intervals and obtain {\em optimal} FSS schemes, i.e., through a {\em single instance of DCF}, thereby reducing the key sizes by up to $6$-$22\times$ for commonly used functions in mixed-mode secure computation such as ReLU and sigmoid.
– FSS for New Function Families. We present the first constructions of FSS schemes for arithmetic and logical right shift, as well as for bit-decomposition, where the output bits must be secret-shared in a larger ring. These functions are crucial for many applications such as fixed-point arithmetic and machine learning.
– FSS for Fixed-Point Arithmetic and a Barrier. One of the important functions in the realization of secure fixed-point arithmetic is multiply-then-truncate. While our work shows how to obtain a construction for this function in 2 rounds using sequential calls to FSS schemes for multiply and shift, we demonstrate a barrier towards improving this via FSS beyond what we achieve. Specifically, we show that a 1-round solution would require settling a major open problem in the area of FSS: namely, building an FSS scheme for the class of bit-conjunction functions based on only symmetric-key cryptographic assumptions.
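To fix intuition for the FSS correctness and privacy requirements, here is a deliberately naive Python sketch (ours; it additively shares the entire truth table, so keys grow linearly with the domain, whereas the point of DCF constructions is keys logarithmic in the domain):

import secrets

P = 2**61 - 1   # output group Z_P

def gen(alpha, beta, domain):
    # Naive FSS for f_{alpha,beta}(x) = beta if x < alpha else 0:
    # additively share the whole truth table between the two parties.
    table = [beta if x < alpha else 0 for x in range(domain)]
    k0 = [secrets.randbelow(P) for _ in range(domain)]   # uniform, hides the table
    k1 = [(t - r) % P for t, r in zip(table, k0)]
    return k0, k1

def evaluate(key, x):
    return key[x]

k0, k1 = gen(alpha=10, beta=1, domain=32)
for x in (3, 10, 25):
    print(x, (evaluate(k0, x) + evaluate(k1, x)) % P)    # 1, 0, 0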
2021
EUROCRYPT
Secure Software Leasing 📺
Formulating cryptographic definitions to protect against software piracy is an important research direction that has not received much attention. Since natural definitions using classical cryptography are impossible to achieve (as classical programs can always be copied), this directs us towards using techniques from quantum computing. The seminal work of Aaronson [CCC'09] introduced the notion of quantum copy-protection precisely to address the problem of software anti-piracy. However, despite being one of the most important problems in quantum cryptography, there are no provably secure solutions of quantum copy-protection known for {\em any} class of functions. We formulate an alternative definition for tackling software piracy, called quantum secure software leasing (QSSL). While weaker than quantum copy-protection, QSSL is still meaningful and has interesting applications in software anti-piracy. We present a construction of QSSL for a subclass of evasive circuits (that includes natural implementations of point functions, conjunctions with wild cards, and affine testers) based on concrete cryptographic assumptions. Our construction is the first provably secure solution, based on concrete cryptographic assumptions, for software anti-piracy. To complement our positive result, we show, based on cryptographic assumptions, that there is a class of quantum unlearnable functions for which QSSL does not exist. In particular, our impossibility result also rules out quantum copy-protection [Aaronson CCC'09] for an arbitrary class of quantum unlearnable functions; resolving an important open problem on the possibility of constructing copy-protection for arbitrary quantum unlearnable circuits.
2021
EUROCRYPT
Public-Coin Statistical Zero-Knowledge Batch Verification against Malicious Verifiers 📺
Suppose that a problem $\Pi$ has a statistical zero-knowledge (SZK) proof with communication complexity $m$. The question of batch verification for SZK asks whether one can prove that $k$ instances $x_1,\dots,x_k$ all belong to $\Pi$ with a statistical zero-knowledge proof whose communication complexity is better than $k \cdot m$ (which is the complexity of the trivial solution of executing the original protocol independently on each input). In a recent work, Kaslasi et al. (TCC, 2020) constructed such a batch verification protocol for any problem having a non-interactive SZK (NISZK) proof-system. Two drawbacks of their result are that their protocol is private-coin and is only zero-knowledge with respect to the honest verifier. In this work, we eliminate these two drawbacks by constructing a public-coin malicious-verifier SZK protocol for batch verification of NISZK. Similarly to the aforementioned prior work, the communication complexity of our protocol is $(k+\mathrm{poly}(m)) \cdot \mathrm{polylog}(k,m)$.
2021
EUROCRYPT
Decentralized Multi-Authority ABE for DNFs from LWE 📺
We construct the first decentralized multi-authority attribute-based encryption (MA-ABE) scheme for a non-trivial class of access policies whose security is based (in the random oracle model) solely on the Learning With Errors (LWE) assumption. The supported access policies are ones described by DNF formulas. All previous constructions of MA-ABE schemes supporting any non-trivial class of access policies were proven secure (in the random oracle model) assuming various assumptions on bilinear maps. In our system, any party can become an authority and there is no requirement for any global coordination other than the creation of an initial set of common reference parameters. A party can simply act as a standard ABE authority by creating a public key and issuing private keys to different users that reflect their attributes. A user can encrypt data in terms of any DNF formulas over attributes issued from any chosen set of authorities. Finally, our system does not require any central authority. In terms of efficiency, when instantiating the scheme with a global bound $s$ on the size of access policies, the sizes of public keys, secret keys, and ciphertexts all grow with $s$. Technically, we develop new tools for building ciphertext-policy ABE (CP-ABE) schemes using LWE. Along the way, we construct the first provably secure CP-ABE scheme supporting access policies in NC^1 under the LWE assumption that avoids the generic universal-circuit-based key-policy to ciphertext-policy transformation. In particular, our construction relies on linear secret sharing schemes with new properties and in some sense is more similar to CP-ABE schemes that rely on bilinear maps. While our CP-ABE construction is not more efficient than existing ones, it is conceptually intriguing, and we further show how to extend it to get the MA-ABE scheme described above.
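The role of linear secret sharing for DNF policies can be illustrated with a toy Python sketch (ours; replicated additive sharing per clause, which is a valid linear secret-sharing scheme for DNF formulas, with none of the lattice machinery):

import secrets

P = 2**61 - 1

def share_dnf(secret, clauses):
    # clauses: list of attribute lists; the policy is the OR over clauses
    # of the AND over its attributes. Each clause gets an independent
    # additive sharing of the secret among its attributes.
    shares = []
    for clause in clauses:
        parts = [secrets.randbelow(P) for _ in clause[:-1]]
        parts.append((secret - sum(parts)) % P)
        shares.append(dict(zip(clause, parts)))
    return shares

def reconstruct(shares, attrs):
    for clause_shares in shares:
        if set(clause_shares) <= set(attrs):   # user holds a full clause
            return sum(clause_shares.values()) % P
    return None

policy = [["doctor", "hospitalA"], ["admin"]]  # (doctor AND hospitalA) OR admin
sh = share_dnf(123456789, policy)
print(reconstruct(sh, ["doctor", "hospitalA"]))  # recovers the secret
print(reconstruct(sh, ["doctor"]))               # None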
2021
EUROCRYPT
Compact, Efficient and UC-Secure Isogeny-Based Oblivious Transfer 📺
Oblivious transfer (OT) is an essential cryptographic tool that can serve as a building block for almost all secure multiparty functionalities. The strongest security notion against malicious adversaries is universal composability (UC security). An important goal is to have post-quantum OT protocols. One area of interest for post-quantum cryptography is isogeny-based cryptography, which has some similarities to Diffie-Hellman but lacks some algebraic properties that are needed for discrete-log-based OT protocols. Hence it is not always possible to directly adapt existing protocols to the isogeny setting. We propose the first practical isogeny-based UC-secure oblivious transfer protocol in the presence of malicious adversaries. Our scheme uses the CSIDH framework and does not have an analogue in the Diffie-Hellman setting. The scheme consists of a constant number of isogeny computations. The underlying computational assumption is a problem that we call the computational reciprocal CSIDH problem, and that we prove polynomial-time equivalent to the computational CSIDH problem.
2021
EUROCRYPT
Tightly-Secure Authenticated Key Exchange, Revisited 📺
We introduce new tightly-secure authenticated key exchange (AKE) protocols that are extremely efficient, yet have only a constant security loss and can be instantiated in the random oracle model both from the standard DDH assumption and from a subgroup assumption over RSA groups. These protocols can be deployed with optimal parameters, independent of the number of users or sessions, without the need to compensate for a security loss with increased parameters and thus decreased computational efficiency. We use the standard “Single-Bit-Guess” AKE security (with forward secrecy and state corruption) requiring all challenge keys to be simultaneously pseudo-random. In contrast, most previous papers on tightly secure AKE protocols (Bader et al., TCC 2015; Gjøsteen and Jager, CRYPTO 2018; Liu et al., ASIACRYPT 2020) concentrated on a non-standard “Multi-Bit-Guess” AKE security, which is known not to compose tightly with symmetric primitives to build a secure communication channel. Our key technical contribution is a new generic approach to construct tightly-secure AKE protocols based on non-committing key encapsulation mechanisms. The resulting DDH-based protocols are considerably more efficient than all previous constructions.
2021
EUROCRYPT
Leakage Resilient Value Comparison With Application to Message Authentication 📺
Side-channel attacks are a threat to secrets stored on a device, especially if an adversary has physical access to the device. As a consequence, countermeasures against such attacks for cryptographic algorithms are a well-researched topic. In this work, we deviate from the study of cryptographic algorithms and instead focus on the side-channel protection of a much more basic operation, the comparison of a known attacker-controlled value with a secret one. Comparisons sensitive to side-channel leakage occur in tag comparisons during the verification of message authentication codes (MACs) or authenticated encryption, but are typically omitted in security analyses. Comparisons performed as part of fault countermeasures might likewise be sensitive to side-channel attacks. In this work, we present a formal analysis of comparing values in a leakage-resilient manner by utilizing cryptographic building blocks that are typically part of an implementation anyway. Our results indicate that there is no need to invest additional resources into implementing a protected comparison operation itself if a sufficiently protected implementation of a public cryptographic permutation, or a (tweakable) block cipher, is already available. We complement our contribution by applying our findings to the SuKS message authentication code used by the lightweight authenticated encryption scheme ISAP, and to the classical Hash-then-PRF construction.
2021
EUROCRYPT
Sieving for twin smooth integers with solutions to the Prouhet-Tarry-Escott problem 📺
We give a sieving algorithm for finding pairs of consecutive smooth numbers that utilizes solutions to the Prouhet-Tarry-Escott (PTE) problem. Any such solution induces two degree-$n$ polynomials, $a(x)$ and $b(x)$, that differ by a constant integer $C$ and completely split into linear factors in $\mathbb{Z}[x]$. It follows that for any $\ell \in \mathbb{Z}$ such that $a(\ell) \equiv b(\ell) \equiv 0 \bmod C$, the two integers $a(\ell)/C$ and $b(\ell)/C$ differ by 1 and necessarily contain $n$ factors of roughly the same size. For a fixed smoothness bound $B$, restricting the search to pairs of integers that are parameterized in this way increases the probability that they are $B$-smooth. Our algorithm combines a simple sieve with parametrizations given by a collection of solutions to the PTE problem. The motivation for finding large twin smooth integers lies in their application to compact isogeny-based post-quantum protocols. The recent key exchange scheme B-SIDH and the recent digital signature scheme SQISign both require large primes that lie between two smooth integers; finding such a prime can be seen as a special case of finding twin smooth integers under the additional stipulation that their sum is a prime $p$. When searching for cryptographic parameters with $2^{240} \leq p < 2^{256}$, an implementation of our sieve found primes $p$ where $p+1$ and $p-1$ are $2^{15}$-smooth; the smoothest prior parameters had a similar-sized prime for which $p-1$ and $p+1$ were $2^{19}$-smooth. In targeting higher security levels, our sieve found a 376-bit prime lying between two $2^{21}$-smooth integers, a 384-bit prime lying between two $2^{22}$-smooth integers, and a 512-bit prime lying between two $2^{29}$-smooth integers. Our analysis shows that using previously known methods to find high-security instances subject to these smoothness bounds is computationally infeasible.
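A miniature Python version of the parametrized search (ours, using the classical degree-2 PTE solution $a(x)=x(x-3)$, $b(x)=(x-1)(x-2)$ with $C=2$; the paper uses higher-degree solutions and a proper sieve rather than trial division):

def smooth(n, B):
    # trial-division B-smoothness test; fine for toy sizes
    for p in range(2, B + 1):
        while n % p == 0:
            n //= p
    return n == 1

# a(l) = l(l-3) is always even, so m = a(l)/2 and m+1 = b(l)/2 are consecutive
# integers whose factors all come from l, l-1, l-2, l-3.
B = 50
hits = []
for l in range(4, 2000):
    m = l * (l - 3) // 2
    if smooth(m, B) and smooth(m + 1, B):
        hits.append((m, m + 1))
print(hits[:8])   # twin B-smooth pairs; for B-SIDH one also wants 2m+1 prime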
2021
EUROCRYPT
Cryptanalytic Applications of the Polynomial Method for Solving Multivariate Equation Systems over GF(2) 📺
At SODA 2017, Lokshtanov et al. presented the first worst-case algorithms with exponential speedup over exhaustive search for solving polynomial equation systems of degree $d$ in $n$ variables over finite fields. These algorithms were based on the polynomial method in circuit complexity, which is a technique for proving circuit lower bounds that has recently been applied in algorithm design. Subsequent works further improved the asymptotic complexity of polynomial method-based algorithms for solving equations over the field $\mathbb{F}_2$. However, the asymptotic complexity formulas of these algorithms hide significant low-order terms, and hence they outperform exhaustive search only for very large values of $n$. In this paper, we devise a concretely efficient polynomial method-based algorithm for solving multivariate equation systems over $\mathbb{F}_2$. We analyze our algorithm's performance for solving random equation systems, and bound its complexity by about $n^2 \cdot 2^{0.815n}$ bit operations for $d = 2$ and $n^2 \cdot 2^{(1 - 1/(2.7d))n}$ for any $d \geq 2$. We apply our algorithm in cryptanalysis of recently proposed instances of the Picnic signature scheme (an alternate third-round candidate in NIST's post-quantum standardization project) that are based on the security of the LowMC block cipher. Consequently, we show that 2 out of 3 new instances do not achieve their claimed security level. As a secondary application, we also improve the best-known preimage attacks on several round-reduced variants of the Keccak hash function. Our algorithm combines various techniques used in previous polynomial method-based algorithms with new optimizations, some of which exploit randomness assumptions about the system of equations. In its cryptanalytic application to Picnic, we demonstrate how to further optimize the algorithm for solving structured equation systems that are constructed from specific cryptosystems.
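For scale, the baseline these algorithms beat is plain exhaustive search over all $2^n$ assignments. A small self-contained Python sketch of that baseline (ours; toy parameters, dense random quadratics with a planted solution):

import itertools, random

random.seed(1)
n = 12   # exhaustive search costs 2^n evaluations of each polynomial

def random_quadratic():
    # a polynomial over GF(2) as a list of monomials; each monomial is a
    # tuple of variable indices, with () denoting the constant term
    monos = [()] + [(i,) for i in range(n)] + list(itertools.combinations(range(n), 2))
    return [m for m in monos if random.random() < 0.5]

def evaluate(poly, x):
    return sum(all(x[i] for i in m) for m in poly) & 1

solution = [random.randrange(2) for _ in range(n)]
system = []
for _ in range(n):
    p = random_quadratic()
    if evaluate(p, solution):   # flip the constant term so `solution` satisfies p
        p = [m for m in p if m != ()] if () in p else p + [()]
    system.append(p)

for x in itertools.product((0, 1), repeat=n):
    if all(evaluate(p, x) == 0 for p in system):
        print("solution:", x)
        break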
2021
EUROCRYPT
Round-Optimal Blind Signatures in the Plain Model from Classical and Quantum Standard Assumptions 📺
Blind signatures, introduced by Chaum (Crypto'82), allow a user to obtain a signature on a message without revealing the message itself to the signer. Thus far, all existing constructions of round-optimal blind signatures are known to require one of the following: a trusted setup, an interactive assumption, or complexity leveraging. This state of affairs is somewhat justified by the few known impossibility results on constructions of round-optimal blind signatures in the plain model (i.e., without trusted setup) from standard assumptions. However, since all of these impossibility results only hold \emph{under some conditions}, fully (dis)proving the existence of such round-optimal blind signatures has remained open. In this work, we provide an affirmative answer to this problem and construct the first round-optimal blind signature scheme in the plain model from standard polynomial-time assumptions. Our construction is based on various standard cryptographic primitives and also on new primitives that we introduce in this work, all of which are instantiable from classical and post-quantum standard polynomial-time assumptions. The main building block of our scheme is a new primitive called a blind-signature-conforming zero-knowledge (ZK) argument system. The distinguishing feature is that the ZK property holds by using a quantum polynomial-time simulator against non-uniform classical polynomial-time adversaries. Syntactically, one can view this as a delayed-input three-move ZK argument with a reusable first message, and we believe it would be of independent interest.
2021
EUROCRYPT
Leakage-resilience of the Shamir Secret-sharing Scheme against Physical-bit Leakages 📺
Efficient Reed-Solomon code reconstruction algorithms, for example, by Guruswami and Wootters (STOC 2016), translate into local leakage attacks on Shamir secret-sharing schemes over characteristic-2 fields. However, Benhamouda, Degwekar, Ishai, and Rabin (CRYPTO 2018) showed that the Shamir secret-sharing scheme over prime fields is leakage resilient to one-bit local leakage if the reconstruction threshold is roughly 0.87 times the total number of parties. In several application scenarios, like secure multi-party multiplication, the reconstruction threshold must be at most half the number of parties. Furthermore, the number of leakage bits that the Shamir secret-sharing scheme is resilient to is also unclear. Towards this objective, we study the Shamir secret-sharing scheme's leakage-resilience over a prime field $F$ of order $p$. The parties' secret shares, which are elements of the finite field $F$, are naturally represented as $\lambda$-bit binary strings representing the elements $\{0,1,\dotsc,p-1\}$. In our leakage model, the adversary can independently probe $m$ bit-locations from each secret share. The inspiration for considering this leakage model stems from the impact that the study of oblivious transfer combiners had on general correlation extraction algorithms, and the significant influence that protecting circuits from probing attacks has on leakage-resilient secure computation. Consider an arbitrary reconstruction threshold $k\geq 2$, physical bit-leakage parameter $m\geq 1$, and number of parties $n\geq 1$. We prove that Shamir's secret-sharing scheme with random evaluation places is leakage-resilient with high probability when the order of the field $F$ is sufficiently large; ignoring polylogarithmic factors, one needs to ensure that $\log |F| \geq n/k$. Our result, excluding polylogarithmic factors, states that Shamir's scheme is secure as long as the total amount of leakage $m\cdot n$ is less than the entropy $k\cdot\lambda$ introduced by the Shamir secret-sharing scheme. Note that our result holds even for small constant values of the reconstruction threshold $k$, which is essential to several application scenarios. To complement this positive result, we present a physical-bit leakage attack for $m=1$ physical bit-leakage from $n=k$ secret shares and any prime field $F$ satisfying $|F| \equiv 1 \bmod k$. In particular, there are (roughly) $|F|^{n-k+1}$ such vulnerable choices for the $n$-tuple of evaluation places. We lower-bound the advantage of this attack for small values of the reconstruction threshold, like $k=2$ and $k=3$, and any $|F| \equiv 1 \bmod k$. In general, we present a formula calculating our attack's advantage for every $k$ as $|F|\rightarrow\infty$. Technically, our positive result relies on Fourier analysis, analytic properties of proper rank-$r$ generalized arithmetic progressions, and B\'ezout's theorem to bound the number of solutions to an equation over finite fields. The analysis of our attack relies on determining the ``discrepancy'' of the Irwin-Hall distribution. A probability distribution's discrepancy is a new property of distributions that our work introduces, which is of potential independent interest.
2021
EUROCRYPT
Robust Property-Preserving Hash Functions for Hamming Distance and More 📺
Robust property-preserving hash (PPH) functions, recently introduced by Boyle, Lavigne, and Vaikuntanathan [ITCS 2019], compress large inputs $x$ and $y$ into short digests $h(x)$ and $h(y)$ in a manner that allows for computing a predicate $P$ on $x$ and $y$ while only having access to the corresponding hash values. In contrast to locality-sensitive hash functions, a robust PPH function guarantees to correctly evaluate a predicate on $h(x)$ and $h(y)$ even if $x$ and $y$ are chosen adversarially \emph{after} seeing $h$. Our main result is a robust PPH function for the exact Hamming distance predicate \[ \mathsf{HAM}^t(x, y) = \begin{cases} 1 & \text{if } d(x, y) \geq t \\ 0 & \text{otherwise} \end{cases} \] where $d(x, y)$ is the Hamming distance between $x$ and $y$. Our PPH function compresses $n$-bit strings into $\mathcal{O}(t \lambda)$-bit digests, where $\lambda$ is the security parameter. The construction is based on the q-strong bilinear discrete logarithm assumption. Along the way, we construct a robust PPH function for the set intersection predicate \[ \mathsf{INT}^t(X, Y) = \begin{cases} 1 & \text{if } \vert X \cap Y \vert > n - t \\ 0 & \text{otherwise} \end{cases} \] which compresses sets $X$ and $Y$ of size $n$ with elements from some arbitrary universe $U$ into $\mathcal{O}(t\lambda)$-bit long digests. This PPH function may be of independent interest. We present an almost matching lower bound of $\Omega(t \log t)$ on the digest size of any PPH function for the intersection predicate, which indicates that our compression rate is close to optimal. Finally, we also show how to extend our PPH function for the intersection predicate to more than two inputs.
2021
EUROCRYPT
Alibi: A Flaw in Cuckoo-Hashing based Hierarchical ORAM Schemes and a Solution 📺
There once was a table of hashes
That held extra items in stashes
It all seemed like bliss
But things went amiss
When the stashes were stored in the caches

The first Oblivious RAM protocols introduced the ``hierarchical solution'' (STOC '90), where the server stores a series of hash tables of geometrically increasing capacities. Each ORAM query would read a small number of locations from each level of the hierarchy, and each level of the hierarchy would be reshuffled and rebuilt at geometrically increasing intervals to ensure that no single query was ever repeated twice at the same level. This yielded an ORAM protocol with polylogarithmic overhead. Future works extended and improved the hierarchical solution, replacing traditional hashing with cuckoo hashing (ICALP '11) and cuckoo hashing with a combined stash (Goodrich et al., SODA '12). In this work, we identify a subtle flaw in the protocol of Goodrich et al. (SODA '12) that uses cuckoo hashing with a stash in the hierarchical ORAM solution. We give a concrete distinguishing attack against this type of hierarchical ORAM that uses cuckoo hashing with a \emph{combined} stash. This security flaw has propagated to at least 5 subsequent hierarchical ORAM protocols, including the recent optimal ORAM scheme, OptORAMa (Eurocrypt '20). In addition to our attack, we identify a simple fix that does not increase the asymptotic complexity. We note, however, that our attack only affects more recent \emph{hierarchical ORAMs}, but does not affect the early protocols that predate the use of cuckoo hashing, or other types of ORAM solutions (e.g. Path ORAM or Circuit ORAM).
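For readers unfamiliar with the data structure at the center of the flaw, here is a short Python sketch of two-table cuckoo hashing with a stash (ours; it illustrates the hash-table mechanics only, not the ORAM protocol or the attack):

import random

class CuckooWithStash:
    def __init__(self, size, stash_cap=4, max_kicks=50):
        self.tables = [[None] * size for _ in range(2)]
        self.size, self.stash = size, []
        self.stash_cap, self.max_kicks = stash_cap, max_kicks
        self.seeds = (random.random(), random.random())

    def _h(self, which, key):
        return hash((self.seeds[which], key)) % self.size

    def insert(self, key):
        for _ in range(self.max_kicks):
            for which in (0, 1):                      # try both candidate slots
                slot = self._h(which, key)
                if self.tables[which][slot] is None:
                    self.tables[which][slot] = key
                    return
            which = random.randrange(2)               # evict an occupant, keep going
            slot = self._h(which, key)
            self.tables[which][slot], key = key, self.tables[which][slot]
        if len(self.stash) >= self.stash_cap:
            raise RuntimeError("stash overflow: rebuild with fresh hash functions")
        self.stash.append(key)                        # displaced item lands in the stash

    def lookup(self, key):
        return (self.tables[0][self._h(0, key)] == key
                or self.tables[1][self._h(1, key)] == key
                or key in self.stash)

t = CuckooWithStash(64)
for k in range(40):
    t.insert(k)
print(all(t.lookup(k) for k in range(40)), len(t.stash))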
2021
EUROCRYPT
VOLE-PSI: Fast OPRF and Circuit-PSI from Vector-OLE 📺
In this work we present a new construction for a batched Oblivious Pseudorandom Function (OPRF) based on Vector-OLE and the PaXoS data structure. We then use it in the standard transformation for achieving Private Set Intersection (PSI) from an OPRF. Our overall construction is highly efficient with $O(n)$ communication and computation. We demonstrate that our protocol can achieve malicious security at only a very small overhead compared to the semi-honest variant. For input sizes $n = 2^{20}$, our malicious protocol needs 6.2 seconds and less than 59 MB communication. This corresponds to under 450 bits per element, which is the lowest number for any published PSI protocol (semi-honest or malicious) to date. Moreover, in theory our semi-honest (resp. malicious) protocol can achieve as low as 219 (resp. 260) bits per element for $n=2^{20}$ at the added cost of interpolating a polynomial over $n$ elements. As a second contribution, we present an extension where the output of the PSI is secret-shared between the two parties. This functionality is generally referred to as Circuit-PSI. It allows the parties to perform a subsequent MPC protocol on the secret-shared outputs, e.g., train a machine learning model. Our circuit PSI protocol builds on our OPRF construction along with another application of the PaXoS data structure. It achieves semi-honest security and allows for a highly efficient implementation, up to 3x faster than previous work.
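The OPRF-to-PSI transformation the paper builds on has a very simple shape, which this Python sketch conveys (ours; the keyed hash is evaluated in the clear here, whereas in the actual protocol the receiver learns F_k(x) obliviously via Vector-OLE and PaXoS, so the sender never sees x and the receiver never sees k):

import hashlib, secrets

def f(k, x):
    # stand-in for the OPRF; the receiver would obtain these values obliviously
    return hashlib.sha256(k + x.encode()).digest()

sender_set   = {"alice@example.com", "bob@example.com", "carol@example.com"}
receiver_set = {"bob@example.com", "dave@example.com"}

k = secrets.token_bytes(16)                          # sender's OPRF key
receiver_prfs = {f(k, x): x for x in receiver_set}   # output of the OPRF phase
sender_msg = {f(k, y) for y in sender_set}           # sender sends PRF values only

intersection = {x for h, x in receiver_prfs.items() if h in sender_msg}
print(intersection)                                  # {'bob@example.com'}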
2021
EUROCRYPT
Multi-Source Non-Malleable Extractors and Applications 📺
We introduce a natural generalization of two-source non-malleable extractors (Cheraghchi and Guruswami, TCC 2014) called \textit{multi-source non-malleable extractors}. Multi-source non-malleable extractors are special independent-source extractors which satisfy an additional non-malleability property. This property requires that the output of the extractor remains close to uniform even conditioned on its output generated by tampering {\it several sources together}. We formally define this primitive, give a construction that is secure against a wide class of tampering functions, and provide applications. More specifically, we obtain the following results:
– For any $s \geq 2$, we give an explicit construction of an $s$-source non-malleable extractor for min-entropy $\Omega(n)$ and error $2^{-n^{\Omega(1)}}$ in the {\it overlapping joint tampering model}. This means that each tampered source could depend on any strict subset of all the sources, and the sets corresponding to each tampered source could be overlapping in a way that we define. Prior to our work, there were no known explicit constructions that were secure even against disjoint tampering (where the sets are required to be disjoint without any overlap).
– We adapt the techniques used in the above construction to give a $t$-out-of-$n$ non-malleable secret sharing scheme (Goyal and Kumar, STOC 2018) for any $t \leq n$ in the \emph{disjoint tampering model}. This is the first general construction of a threshold non-malleable secret sharing (NMSS) scheme in the disjoint tampering model. All prior constructions had a restriction that the sizes of the tampered subsets could not be equal.
– We further adapt the techniques used in the above construction to give a $t$-out-of-$n$ non-malleable secret sharing scheme (Goyal and Kumar, STOC 2018) for any $t \leq n$ in the \emph{overlapping joint tampering model}. This is the first construction of a threshold NMSS in the overlapping joint tampering model.
– We show that a stronger notion of $s$-source non-malleable extractor that is multi-tamperable against disjoint tampering functions gives a single-round network extractor protocol (Kalai et al., FOCS 2008) with attractive features. Plugging in a new construction of multi-tamperable, 2-source non-malleable extractors provided in our work, we get a network extractor protocol for min-entropy $\Omega(n)$ that tolerates an {\it optimum} number ($t = p-2$) of faulty processors and extracts random bits for {\it every} honest processor. The prior network extractor protocols could only tolerate $t = \Omega(p)$ faulty processors and failed to extract uniform random bits for a fraction of the honest processors.
2021
EUROCRYPT
LogStack: Stacked Garbling with O(b log b) Computation 📺
Secure two party computation (2PC) of arbitrary programs can be efficiently achieved using garbled circuits (GC). Until recently, it was widely believed that a GC proportional to the entire program, including parts of the program that are entirely discarded due to conditional branching, must be transmitted over a network. Recent work shows that this belief is false, and that communication proportional only to the longest program execution path suffices (Heath and Kolesnikov, CRYPTO 20, [HK20a]). Although this recent work reduces needed communication, it increases computation. For a conditional with b branches, the players use O(b^2) computation (traditional GC uses only O(b)). Our scheme LogStack reduces stacked garbling computation from O(b^2) to O(b log b) with no increase in communication over [HK20a]. The cause of [HK20a]'s increased computation is the oblivious collection of garbage labels that emerge during the evaluation of inactive branches. Garbage is collected by a multiplexer that is costly to generate. At a high level, we redesign stacking and garbage collection to avoid quadratic scaling. Our construction is also more space efficient: [HK20a] algorithms require O(b) space, while ours use only O(log b) space. This space efficiency allows even modest setups to handle large numbers of branches. [HK20a] assumes a random oracle (RO). We track the source of this need, formalize a simple and natural added assumption on the base garbling scheme, and remove reliance on RO: LogStack is secure in the standard model. Nevertheless, LogStack can be instantiated with typical GC tricks based on non-standard assumptions, such as free XOR and half-gates, and hence can be implemented with high efficiency. We implemented LogStack (in the RO model, based on half-gates garbling) and report performance. In terms of wall-clock time and for fewer than 16 branches, our performance is comparable to [HK20a]'s; for larger branching factors, our approach clearly outperforms [HK20a]. For example, given 1024 branches, our approach is 31x faster.
2021
EUROCRYPT
Ciminion: Symmetric Encryption Based on Toffoli-Gates over Large Finite Fields 📺
Motivated by new applications such as secure Multi-Party Computation (MPC), Fully Homomorphic Encryption (FHE), and Zero-Knowledge proofs (ZK), the need for symmetric encryption schemes that minimize the number of field multiplications in their natural algorithmic description is apparent. This development has brought forward many dedicated symmetric encryption schemes that minimize the number of multiplications in GF(2^n) or GF(p), with p being prime. These novel schemes have led to new cryptanalytic insights that have broken many of them. Interestingly, to the best of our knowledge, all of the newly proposed schemes that minimize the number of multiplications use those multiplications exclusively in S-boxes based on a power mapping that is typically x^3 or x^{-1}. Furthermore, most of those schemes rely on complex and resource-intensive linear layers to achieve a low multiplication count. In this paper, we present Ciminion, an encryption scheme minimizing the number of field multiplications in large binary or prime fields, while using a very lightweight linear layer. In contrast to other schemes that aim to minimize field multiplications in GF(2^n) or GF(p), Ciminion relies on the Toffoli gate to improve the non-linear diffusion of the overall design. In addition, we have tailored the primitive for use in a Farfalle-like construction in order to minimize the number of rounds of the underlying primitive, and hence the number of field multiplications, as far as possible.
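The field analogue of the Toffoli gate is a one-liner, shown here as a hedged Python sketch (ours; how Ciminion composes it with its linear layer and round constants is of course more involved):

p = 2**64 - 59   # a prime field; Ciminion also supports binary fields GF(2^n)

def toffoli(a, b, c):
    # the only non-linear operation is the single multiplication a*b
    return a, b, (c + a * b) % p

def toffoli_inv(a, b, c):
    # invertible despite the multiplication: simply subtract a*b again
    return a, b, (c - a * b) % p

state = (3, 5, 7)
assert toffoli_inv(*toffoli(*state)) == state
print(toffoli(*state))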
2021
EUROCRYPT
Efficient Range Proofs with Transparent Setup from Bounded Integer Commitments 📺
We introduce a new approach for constructing range proofs. Our approach is modular, and leads to highly competitive range proofs under standard assumptions, using less communication and (much) less computation than state-of-the-art methods, and without relying on a trusted setup. Our range proofs can be used as a drop-in replacement in a variety of protocols such as distributed ledgers, anonymous transaction systems, and many more, leading to significant reductions in communication and computation for these applications. At the heart of our result is a new method to transform any commitment over a finite field into a commitment scheme which allows committing to, and efficiently proving relations about, bounded integers. Combining these new commitments with a classical approach for range proofs based on square decomposition, we obtain several new instantiations of a paradigm which was previously limited to RSA-based range proofs (with high communication and computation, and trusted setup). More specifically, we get: - Under the discrete logarithm assumption, we obtain the most compact and efficient range proof among all existing candidates (with or without trusted setup). Our proofs are 12% to 20% shorter than the state of the art Bulletproof (Bootle et al., CRYPTO'18) for standard choices of range size and security parameter, and are more efficient (both for the prover and the verifier) by more than an order of magnitude. - Under the LWE assumption, we obtain range proofs that improve over the state of the art in a batch setting when at least a few dozen range proofs are required. The amortized communication of our range proofs improves by up to two orders of magnitude over the state of the art when the number of required range proofs grows. - Finally, under standard class group assumptions, we obtain the first concretely efficient standard integer commitment scheme (without bounds on the size of the committed integer) which does not assume trusted setup.
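The classical square-decomposition paradigm reduces a range claim to exhibiting squares: $x \in [0, B]$ iff $x(B-x) \geq 0$, and by Lagrange's theorem every non-negative integer is a sum of four squares. A small Python sketch of that arithmetic ingredient (ours; brute force in place of fast randomized decomposition algorithms):

import math

def four_squares(n):
    # find a <= b <= c <= d with a^2 + b^2 + c^2 + d^2 == n
    for a in range(math.isqrt(n) + 1):
        for b in range(a, math.isqrt(n - a * a) + 1):
            rem = n - a * a - b * b
            for c in range(b, math.isqrt(rem) + 1):
                d2 = rem - c * c
                d = math.isqrt(d2)
                if d * d == d2 and d >= c:
                    return a, b, c, d
    raise AssertionError("unreachable by Lagrange's four-square theorem")

# the prover certifies x in [0, B] by exhibiting four squares summing to x*(B - x)
x, B = 57, 1000
sq = four_squares(x * (B - x))
print(sq, sum(s * s for s in sq) == x * (B - x))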
2021
EUROCRYPT
Compactness of Hashing Modes and Efficiency beyond Merkle Tree 📺
We revisit the classical problem of designing optimally efficient cryptographically secure hash functions. Hash functions are traditionally designed via applying modes of operation on primitives with smaller domains. The results of Shrimpton and Stam (ICALP 2008), Rogaway and Steinberger (CRYPTO 2008), and Mennink and Preneel (CRYPTO 2012) show how to achieve optimally efficient designs of $2n$-to-$n$-bit compression functions from non-compressing primitives with asymptotically optimal $2^{n/2-\epsilon}$-query collision resistance. Designing optimally efficient and secure hash functions for larger domains ($> 2n$ bits) is still an open problem. To enable efficiency analysis and comparison across hash functions built from primitives of different domain sizes, in this work we propose the new \textit{compactness} efficiency notion. It allows us to focus on asymptotically optimally collision-resistant hash functions and normalize their parameters based on Stam's bound from CRYPTO 2008 to obtain maximal efficiency. We then present two tree-based modes of operation as a design principle for compact, large-domain, fixed-input-length hash functions.
1. Our first construction is an Augmented Binary Tree (ABR) mode. The design is a $(2^{\ell}+2^{\ell-1}-1)n$-to-$n$-bit hash function making a total of $(2^{\ell}-1)$ calls to $2n$-to-$n$-bit compression functions for any $\ell\geq 2$. Our construction is optimally compact with asymptotically optimal $2^{n/2-\epsilon}$-query collision resistance in the ideal model. For a tree of height $\ell$, in comparison with the Merkle tree, the ABR mode processes additional $(2^{\ell-1}-1)$ data blocks while making the same number of internal compression function calls.
2. With our second design we focus our attention on the indifferentiability security notion. While the ABR mode achieves collision resistance, it fails to achieve indifferentiability from a random oracle within $2^{n/3}$ queries. ABR$^{+}$ compresses only $1$ less data block than ABR with the same number of compression calls and achieves in addition indifferentiability up to $2^{n/2-\epsilon}$ queries.
Both of our designs are closely related to the ubiquitous Merkle trees and have the potential for real-world applicability where the speed of hashing is of primary interest.
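As a baseline for these tree modes, here is the plain Merkle tree they are measured against, as a short Python sketch (ours; SHA-256 stands in for the ideal $2n$-to-$n$-bit compression function):

import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # each internal node costs one compression call; prefixes domain-separate
    # leaf hashing from internal hashing
    level = [h(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

leaves = [bytes([i]) * 32 for i in range(8)]   # 2^3 leaves
print(merkle_root(leaves).hex())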
2021
EUROCRYPT
Aggregatable Distributed Key Generation 📺
In this paper we introduce a distributed key generation (DKG) protocol with aggregatable and publicly verifiable transcripts. Compared with prior approaches, our DKG reduces the size of the final transcript and the time to verify it from O(n^2) to O(n), where n denotes the number of parties. We also revisit existing DKG security definitions, which are quite strong, and propose new and natural relaxations. As a result, we can prove the security of our aggregatable DKG as well as that of several existing DKGs, including the popular Pedersen variant. We show that, under these new definitions, these existing DKGs can be used to yield secure threshold variants of popular cryptosystems such as ElGamal encryption and BLS signatures. We also prove that our DKG can be securely combined with a new efficient verifiable unpredictable function (VUF), whose security we prove in the random oracle model. Finally, we experimentally evaluate our DKG and show that the per-party overheads scale linearly and are practical: for 64 parties it takes 71 ms to share and 359 ms to verify the overall transcript, while these respective costs for 8192 parties are 8 s and 42.2 s.
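The skeleton underneath such protocols is a sum of Shamir dealings, shown here as a toy Python sketch (ours; a plain Shamir-sum DKG without the commitments, complaint rounds, or the transcript aggregation that the paper is actually about):

import secrets

P = 2**127 - 1   # toy prime field

def deal(t, n):
    # one party's dealing: Shamir-share a fresh random value with threshold t
    coeffs = [secrets.randbelow(P) for _ in range(t)]
    shares = {i: sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
              for i in range(1, n + 1)}
    return coeffs[0], shares

def lagrange_at_zero(points):
    total = 0
    for i, yi in points.items():
        num = den = 1
        for j in points:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

t, n = 3, 5
dealings = [deal(t, n) for _ in range(n)]        # every party deals once
group_secret = sum(s for s, _ in dealings) % P   # never computed by anyone in a real DKG
final = {i: sum(sh[i] for _, sh in dealings) % P for i in range(1, n + 1)}
subset = {i: final[i] for i in (1, 3, 5)}        # any t parties can reconstruct
assert lagrange_at_zero(subset) == group_secret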
2021
EUROCRYPT
Generic Compiler for Publicly Verifiable Covert Multi-Party Computation 📺
Covert security has been introduced as a compromise between semi-honest and malicious security. In a nutshell, covert security guarantees that malicious behavior can be detected by the honest parties with some probability, but in case detection fails all bets are off. While the security guarantee offered by covert security is weaker than full-fledged malicious security, it comes with significantly improved efficiency. An important extension of covert security introduced by Asharov and Orlandi (ASIACRYPT'12) is \emph{public verifiability}, which allows the honest parties to create a publicly verifiable certificate of malicious behavior. Public verifiability significantly strengthens covert security, as the certificate allows punishment via an external party, e.g., a judge. Most previous work on publicly verifiable covert (PVC) security focuses on the two-party case, and the multi-party case has mostly been neglected. In this work, we introduce a novel compiler for multi-party PVC secure protocols with no private inputs. The class of supported protocols includes the preprocessing of common multi-party computation protocols that are designed in the offline-online model. Our compiler leverages time-lock encryption to offer a high probability of cheating detection (often also called the deterrence factor) independent of the number of involved parties. Moreover, in contrast to the only earlier work that studies PVC in the multi-party setting (CRYPTO'20), we provide the first full formal security analysis.
2021
EUROCRYPT
Abuse Resistant Law Enforcement Access Systems 📺
The increased deployment of end-to-end encryption has ignited a debate between technology firms and law enforcement agencies over the need for lawful access to encrypted communications. Unfortunately, existing solutions to this problem suffer from serious technical risks, such as the possibility of operator abuse and theft of escrow key material. In this work we investigate the problem of constructing law enforcement access systems that mitigate the possibility of unauthorized surveillance. We first define a set of desirable properties for an abuse-resistant law enforcement access system (ARLEAS), and motivate each of these properties. We then formalize these definitions in the Universal Composability framework, and present two main constructions that realize this definition. The first construction enables {\em prospective} access, allowing surveillance only if encryption occurs after a warrant has been issued and activated. The second, more powerful construction, allows {\em retrospective} access to communications that occurred prior to a warrant's issuance. To illustrate the technical challenge of constructing the latter type of protocol, we conclude by investigating the minimal assumptions required to realize these systems.
2021
EUROCRYPT
Analysing the HPKE Standard 📺
The Hybrid Public Key Encryption (HPKE) scheme is an emerging standard currently under consideration by the Crypto Forum Research Group (CFRG) of the IETF as a candidate for formal approval. Of the four modes of HPKE, we analyse the authenticated mode HPKE_Auth in its single-shot encryption form, as it contains what is, arguably, the most novel part of HPKE. HPKE_Auth's intended application domain is captured by a new primitive which we call Authenticated Public Key Encryption (APKE). We provide syntax and security definitions for APKE schemes, as well as for the related Authenticated Key Encapsulation Mechanisms (AKEMs). We prove security of the AKEM scheme DH-AKEM underlying HPKE_Auth based on the Gap Diffie-Hellman assumption and provide general AKEM/DEM composition theorems with which to argue about HPKE_Auth's security. To this end, we also formally analyse HPKE_Auth's key schedule and key derivation functions. To increase confidence in our results we use the automatic theorem proving tool CryptoVerif. All our bounds are quantitative and we discuss their practical implications for HPKE_Auth. As an independent contribution we propose the new framework of nominal groups that allows us to capture abstract syntactical and security properties of practical elliptic curves, including the Curve25519 and Curve448 based groups (which do not constitute cyclic groups).
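The DH-AKEM shape is compactly expressed in code. A Python sketch using the cryptography package (ours; plain HKDF in place of HPKE's labelled key schedule and serialization, so this is not RFC-interoperable): the shared secret mixes an ephemeral DH with a static-static DH, so the encapsulation is implicitly authenticated by the sender's long-term key.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def kdf(ikm):
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"toy DH-AKEM").derive(ikm)

def auth_encap(sk_sender, pk_receiver):
    eph = X25519PrivateKey.generate()
    shared = eph.exchange(pk_receiver) + sk_sender.exchange(pk_receiver)
    return kdf(shared), eph.public_key()          # (KEM key, encapsulation)

def auth_decap(sk_receiver, pk_sender, enc):
    shared = sk_receiver.exchange(enc) + sk_receiver.exchange(pk_sender)
    return kdf(shared)

sk_s, sk_r = X25519PrivateKey.generate(), X25519PrivateKey.generate()
key, enc = auth_encap(sk_s, sk_r.public_key())
assert key == auth_decap(sk_r, sk_s.public_key(), enc)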
2021
EUROCRYPT
Candidate Obfuscation via Oblivious LWE Sampling 📺
We present a new, simple candidate construction of indistinguishability obfuscation (iO). Our scheme is inspired by lattice and learning-with-errors (LWE) techniques, but we are unable to prove security under a standard assumption. Instead, we formulate a new falsifiable assumption under which the scheme is secure. Furthermore, the scheme plausibly achieves post-quantum security. Our construction is based on the recent ``split FHE'' framework of Brakerski, Döttling, Garg, and Malavolta (EUROCRYPT '20), and we provide a new instantiation of this framework. As a first step, we construct an iO scheme that is provably secure assuming that LWE holds and that it is possible to obliviously generate LWE samples without knowing the corresponding secrets. We define a precise notion of oblivious LWE sampling that suffices for the construction. It is known how to obliviously sample from any distribution (in a very strong sense) using iO, and our result provides a converse, showing that the ability to obliviously sample from the specific LWE distribution (in a much weaker sense) already implies iO. As a second step, we give a heuristic construction of oblivious LWE sampling. On a very high level, we do this by homomorphically generating pseudorandom LWE samples using an encrypted pseudorandom function.
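As a point of reference for what "oblivious" sampling must produce, here is a standard, non-oblivious LWE sampler in numpy: it knows the secret s and the noise e it uses, which is exactly the knowledge an oblivious sampler must avoid. Parameters are illustrative only, not from the paper.

```python
# Standard LWE sampler: outputs (a, b) with b = <a, s> + e mod q.
# An oblivious sampler must produce samples from this same distribution
# WITHOUT learning the corresponding s and e.
import numpy as np

rng = np.random.default_rng(0)
n, q, sigma = 64, 3329, 2.0            # toy dimension, modulus, noise width

s = rng.integers(0, q, size=n)          # the secret this sampler knows

def lwe_sample():
    a = rng.integers(0, q, size=n)                  # uniform vector
    e = int(np.rint(rng.normal(0, sigma))) % q      # small Gaussian noise
    b = (int(a @ s) + e) % q                        # b = <a, s> + e mod q
    return a, b

a, b = lwe_sample()
print(a[:5], b)
```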
2021
EUROCRYPT
Automatic Search of Meet-in-the-Middle Preimage Attacks on AES-like Hashing 📺
The Meet-in-the-Middle (MITM) preimage attack is highly effective in breaking the preimage resistance of many hash functions, including but not limited to the full MD5, HAVAL, and Tiger, and reduced SHA-0/1/2. It was also shown to be a threat to hash functions built on block ciphers like AES by Sasaki in 2011. Recently, such attacks on AES hashing modes evolved from merely using the freedom of choosing the internal state to also exploiting the freedom of choosing the message state. However, detecting such attacks, especially the evolved variants, is difficult. In previous works, the search space of the configurations of such attacks was limited so that manual analysis remained practical, which resulted in sub-optimal solutions. In this paper, we remove the artificial limitations of previous works, formulate the essential ideas of the construction of the attack in well-defined ways, and translate the problem of searching for the best attacks into optimization problems under constraints in Mixed-Integer-Linear-Programming (MILP) models. The MILP models capture a large solution space of valid attacks, and their objectives are attack configurations with minimized computational complexity. With such MILP models and off-the-shelf solvers, it is efficient to search for the best attacks exhaustively. As a result, we obtain the first attacks against the full (5-round) and an extended (5.5-round) version of Haraka-512 v2, and 8-round AES-128 hashing modes, as well as improved attacks covering more rounds of Haraka-256 v2 and other members of the AES and Rijndael hashing modes.
2021
EUROCRYPT
Classical vs Quantum Random Oracles 📺
In this paper, we study the relationship between the security of cryptographic schemes in the random oracle model (ROM) and in the quantum random oracle model (QROM). First, we introduce the notion of a proof of quantum access to a random oracle (PoQRO), which is a protocol for proving the capability to quantumly access a random oracle to a classical verifier. We observe that a proof of quantumness recently proposed by Brakerski et al. (TQC '20) can be seen as a PoQRO. We also give a construction of a publicly verifiable PoQRO relative to a classical oracle. Based on them, we construct digital signature and public key encryption schemes that are secure in the ROM but insecure in the QROM. In particular, we obtain the first examples of natural cryptographic schemes that separate the ROM and QROM under a standard cryptographic assumption. On the other hand, we give lifting theorems from security in the ROM to that in the QROM for certain types of cryptographic schemes and security notions. For example, our lifting theorems are applicable to Fiat-Shamir non-interactive arguments, Fiat-Shamir signatures, and Full-Domain-Hash signatures. We also discuss applications of our lifting theorems to quantum query complexity.
2021
EUROCRYPT
Threshold Garbled Circuits and Ad Hoc Secure Computation 📺
Garbled Circuits (GCs) represent fundamental and powerful tools in cryptography, and many variants of GCs have been considered since their introduction. An important property of garbled circuits is that they can be evaluated securely if and only if exactly one key for each input wire is obtained: no less and no more. In this work we study the case when: 1) some of the wire-keys are missing, but we are still interested in computing the output of the garbled circuit, and 2) the evaluator of the GC might have both keys for a constant number of wires. We begin by studying this question in terms of non-interactive multi-party computation (NIMPC), which is strongly connected with GCs. In this notion, there is a fixed number of parties (n) that can get correlated information from a trusted setup. These parties can then send an encoding of their input to an evaluator, which can compute the output of the function. Similarly to the notion of ad hoc secure computation proposed by Beimel et al. [ITCS 2016], we consider the case when fewer than n parties participate in the online phase, and in addition we let these parties collude with the evaluator. We refer to this notion as Threshold NIMPC. We show that when the number of parties participating in the online phase is a fixed threshold l <= n, it is possible to securely evaluate any l-input function. We build our result on top of a new secret-sharing scheme (which can be of independent interest) and on the results of Benhamouda, Krawczyk and Rabin [Crypto 2017]. Our protocol can be used to compute any function in NC1 in the information-theoretic setting and any function in P assuming one-way functions. As a second (and main) contribution, we consider a slightly different notion of security in which the number of parties that can participate in the online phase is not specified and can be any number c above the threshold l (in this case the evaluator cannot collude with the other parties). We resolve an open question left by Beimel, Ishai and Kushilevitz [Eurocrypt 2017], showing how to build a secure protocol for the case when c is constant, under the Learning with Errors assumption.
2021
EUROCRYPT
Post-Quantum Multi-Party Computation 📺
We initiate the study of multi-party computation for classical functionalities in the plain model, with security against malicious quantum adversaries. We observe that existing techniques readily give a polynomial-round protocol, but our main result is a construction of *constant-round* post-quantum multi-party computation. We assume mildly super-polynomial quantum hardness of learning with errors (LWE), and quantum polynomial hardness of an LWE-based circular security assumption. Along the way, we develop the following cryptographic primitives that may be of independent interest: 1.) A spooky encryption scheme for relations computable by quantum circuits, from the quantum hardness of (a circular variant of) the LWE problem. This immediately yields the first quantum multi-key fully-homomorphic encryption scheme with classical keys. 2.) A constant-round post-quantum non-malleable commitment scheme, from the mildly super-polynomial quantum hardness of LWE. To prove the security of our protocol, we develop a new straight-line non-black-box simulation technique against parallel sessions that does not clone the adversary's state. This technique may also be relevant to the classical setting.
2021
EUROCRYPT
The Nested Subset Differential Attack: A Practical Direct Attack Against LUOV which Forges a Signature within 210 Minutes 📺
In 2017, Ward Beullens et al. submitted Lifted Unbalanced Oil and Vinegar (LUOV) [4], a modification to the Unbalanced Oil and Vinegar scheme by Patarin. Previously, Ding et al. proposed the Subfield Differential Attack [20], which prompted a change of parameters by the authors of LUOV for the second round of the NIST post-quantum standardization competition [3]. In this paper we propose a modification to the Subfield Differential Attack called the Nested Subset Differential Attack, which fully breaks half of the parameter sets put forward. We also show by experimentation that this attack is practical, forging a signature in under 210 minutes for the level I security parameters, and not just a theoretical attack. The Nested Subset Differential Attack is a large improvement over the Subfield Differential Attack and can be used in real-world circumstances. Moreover, we only use what is called the "lifted" structure of LUOV, and our attack can be thought of as a development in solving "lifted" quadratic systems.
2021
EUROCRYPT
Black-Box Non-Interactive Non-Malleable Commitments 📺
There has been recent exciting progress in building non-interactive non-malleable commitments from judicious assumptions. All proposed approaches proceed in two steps. First, obtain simple “base” commitment schemes for very small tag/identity spaces based on various sub-exponential hardness assumptions. Next, assuming sub-exponential non-interactive witness indistinguishable proofs (NIWIs) and variants of keyless collision-resistant hash functions, construct non-interactive compilers that convert tag-based non-malleable commitments for a small tag space into tag-based non-malleable commitments for a larger tag space. We propose the first black-box construction of non-interactive non-malleable commitments. Our key technical contribution is a novel implementation of the non-interactive proof of consistency required for tag amplification. Prior to our work, the only known approach to tag amplification without setup and with black-box use of the base scheme (Goyal, Lee, Ostrovsky, and Visconti, FOCS 2012) added multiple rounds of interaction. Our construction satisfies the strongest known definition of non-malleability, i.e., CCA (chosen commitment attack) security. In addition to being black-box, our approach dispenses with the need for sub-exponential NIWIs, which was common to all prior work. Instead of NIWIs, we rely on sub-exponential hinting PRGs, which can be obtained based on a broad set of assumptions such as sub-exponential CDH or LWE.
2021
EUROCRYPT
"Bifurcated Cryptography" Folding Competing Cryptosystems into a Single Scheme: On Accountability vs. Anonymity in Private Signatures 📺
Over the development of modern cryptography, alternative cryptographic schemes have often been developed to achieve goals that in some important respect are orthogonal. Thus, we have to choose either a scheme which achieves the first goal and not the second, or vice versa. This results in two types of schemes that compete with each other. In the basic area of user privacy, specifically in anonymous (multi-use credentials) signing, such an orthogonality exists between anonymity and accountability. The conceptual contribution of this work is to reverse, by design, the above orthogonality, which has essentially typified the last 25 years or so, and to suggest an alternative methodology where the opposed properties are carefully folded into a single scheme. The schemes support both opposing properties simultaneously in a bifurcated fashion, where: - First, based on rich semantics expressed over the message's context and content, the user, etc., the relevant property is applied point-wise per message operation depending on a predicate; and - Secondly, at the same time, the schemes provide what we call ``branch-hiding;'' namely, the resulting calculated value hides from outsiders which property has actually been locally applied. Specifically, we precisely define and give the first construction and security proof of a ``Bifurcated Anonymous Signature'' (BiAS): a scheme which supports either absolute anonymity or anonymity with accountability, based on a specific contextual predicate, while being branch-hiding. This novel signing scheme has numerous applications not easily implementable or not considered before, especially because: (i) the conditional traceability does 'not' rely on a trusted authority as it is (non-interactively) encapsulated into signatures; and (ii) signers 'know' the predicate value and can make a conscious choice at each signing time. Technically, we realize BiAS from homomorphic commitments for a general family of predicates that can be represented by bounded-depth circuits. Our construction is generic and can be instantiated in the standard model from lattices and, more efficiently, from bilinear maps. In particular, the signature length is independent of the circuit size when we use commitments with suitable efficiency properties.
2021
EUROCRYPT
Oblivious Transfer is in MiniQCrypt 📺
MiniQCrypt is a world where quantum-secure one-way functions exist, and quantum communication is possible. We construct an oblivious transfer (OT) protocol in MiniQCrypt that achieves simulation-security against malicious quantum polynomial-time adversaries, building on the foundational work of Bennett, Brassard, Crepeau and Skubiszewska (CRYPTO 1991). Combining the OT protocol with prior works, we obtain secure two-party and multi-party computation protocols also in MiniQCrypt. This is in contrast to the classical world, where it is widely believed that OT does not exist in MiniCrypt.
2021
EUROCRYPT
Order-C Secure Multiparty Computation for Highly Repetitive Circuits 📺
Running secure multiparty computation (MPC) protocols with hundreds or thousands of players would allow leveraging large volunteer networks (such as blockchains and Tor) and help justify honest majority assumptions. However, most existing protocols have at least a linear (multiplicative) dependence on the number of players, making scaling difficult. Known protocols with asymptotic efficiency independent of the number of parties (excluding additive factors) require expensive circuit transformations that induce large overheads. We observe that the circuits used in many important applications of MPC such as training algorithms used to create machine learning models have a highly repetitive structure. We formalize this class of circuits and propose an MPC protocol that achieves O(|C|) total complexity for this class. We implement our protocol and show that it is practical and outperforms O(n|C|) protocols for modest numbers of players.
2021
EUROCRYPT
Towards Accountability in CRS Generation 📺
It is well known that several cryptographic primitives cannot be achieved without a common reference string (CRS). Those include, for instance, non-interactive zero-knowledge for NP, or malicious secure computation in fewer than four rounds. The security of those primitives relies heavily upon the assumption that the trusted authority, who generates the CRS, does not misuse the randomness used in the CRS generation. However, we argue that there is no such thing as an unconditionally trusted authority and every authority must be held accountable for any trust to be well-founded. Indeed, a malicious authority can, for instance, recover private inputs of honest parties given transcripts of the protocols executed with respect to the CRS it has generated. While eliminating trust in the trusted authority may not be entirely feasible, can we at least move towards achieving some notion of accountability? We propose a new notion in which, if the CRS authority releases the private inputs of protocol executions to others, we can then provide a publicly-verifiable proof that certifies that the authority misbehaved. We study the feasibility of this notion in the context of non-interactive zero knowledge and two-round secure two-party computation.
2021
EUROCRYPT
TARDIS: A Foundation of Time-Lock Puzzles in UC 📺
Time-based primitives like time-lock puzzles (TLP) are finding widespread use in practical protocols, partially due to the surge of interest in the blockchain space where TLPs and related primitives are perceived to solve many problems. Unfortunately, the security claims are often shaky or plainly wrong since these primitives are used under composition. One reason is that TLPs are inherently not UC secure, and time is tricky to model and use in the UC model. On the other hand, just specifying standalone notions of the intended task, let alone correctly using only standalone notions like non-malleable TLPs, might be hard or impossible for the given task. And even when possible, a standalone-secure primitive is harder to apply securely in practice, as its behavior under composition is unclear. The ideal solution would be a model of TLPs in the UC framework that allows simple modular proofs. In this paper we provide a foundation for proving composable security of practical protocols using time-lock puzzles and related timed primitives in the UC model. We construct UC-secure TLPs based on random oracles and show that using random oracles is necessary. In order to prove security, we provide a simple and abstract way to reason about time in UC protocols. Finally, we demonstrate the usefulness of this foundation by constructing applications that are interesting in their own right, such as UC-secure two-party computation with output-independent abort.
2021
EUROCRYPT
Multi-Party Reusable Non-Interactive Secure Computation from LWE 📺
Motivated by the goal of designing versatile and flexible secure computation protocols that at the same time require as little interaction as possible, we present new multiparty reusable Non-Interactive Secure Computation (mrNISC) protocols. This notion, recently introduced by Benhamouda and Lin (TCC 2020), is essentially two-round Multi-Party Computation (MPC) protocols where the first round of messages serves as a reusable commitment to the private inputs of participating parties. Using these commitments, any subset of parties can later compute any function of their choice on their respective inputs by just sending a single message to a stateless evaluator, conveying the result of the computation but nothing else. Importantly, the input commitments can be computed without knowing anything about other participating parties (neither their identities nor their number) and they are reusable across any number of desired computations. We give a construction of mrNISC that achieves standard simulation security, as classical multi-round MPC protocols achieve. Our construction relies on the Learning With Errors (LWE) assumption with polynomial modulus, and on the existence of a pseudorandom function (PRF) in $\mathsf{NC}^1$. We achieve semi-malicious security in the plain model and malicious security by further relying on trusted setup (which is unavoidable for mrNISC). In comparison, the only previously known constructions of mrNISC were either using bilinear maps or using strong primitives such as program obfuscation. We use our mrNISC to obtain new Multi-Key FHE (MKFHE) schemes with threshold decryption: - In the CRS model, we obtain threshold MKFHE for $\mathsf{NC}^1$ based on LWE with only {\em polynomial} modulus and PRFs in $\mathsf{NC}^1$, whereas all previous constructions rely on LWE with super-polynomial modulus-to-noise ratio. - In the plain model, we obtain threshold levelled MKFHE for $\mathsf{P}$ based on LWE with {\em polynomial} modulus, PRF in $\mathsf{NC}^1$, and NTRU, and another scheme for constant number of parties from LWE with sub-exponential modulus-to-noise ratio. The only known prior construction of threshold MKFHE (Ananth et al., TCC 2020) in the plain model restricts the set of parties who can compute together at the onset.
2021
EUROCRYPT
Constant-Overhead Unconditionally Secure Multiparty Computation over Binary Fields 📺
We study the communication complexity of unconditionally secure multiparty computation (MPC) protocols in the honest majority setting. Despite tremendous efforts in achieving efficient protocols for binary fields under computational assumptions, there are no efficient unconditional MPC protocols in this setting. In particular, there are no n-party protocols with constant overhead admitting communication complexity of O(n) bits per gate. Cascudo, Cramer, Xing and Yuan (CRYPTO 2018) were the first to achieve such an overhead in the amortized setting by evaluating O(log n) copies of the same circuit in the binary field in parallel. In this work, we construct the first unconditional MPC protocol secure against a malicious adversary in the honest majority setting evaluating just a single boolean circuit with amortized communication complexity of O(n) bits per gate.
2021
EUROCRYPT
On the power of multiple anonymous messages: Frequency Estimation and Selection in the Shuffle Model of Differential Privacy 📺
It is well-known that general secure multi-party computation can in principle be applied to implement differentially private mechanisms over distributed data with utility matching the curator (a.k.a. central) model. In this paper we study the power of protocols running on top of a much weaker primitive: A non-interactive anonymous channel, known as the shuffled model in the differential privacy literature. Such protocols are implementable in a scalable way using known cryptographic methods and are known to enable non-interactive, differentially private protocols with error much smaller than what is possible in the local model. We study fundamental counting problems in the shuffled model and obtain tight, up to poly-logarithmic factors, bounds on the error and communication in several settings. For the problem of frequency estimation for n users and a domain of size B, we obtain: - A nearly tight lower bound of Ω̃(min(n^(1/4), sqrt(B))) on the error in the single-message shuffled model. This implies that the protocols obtained from the amplification via shuffling work of Erlingsson et al. (SODA 2019) and Balle et al. (Crypto 2019) are essentially optimal for single-message protocols. - Protocols in the multi-message shuffled model with poly(log B, log n) bits of communication per user and poly(log B) error, which provide an exponential improvement on the error compared to what is possible with single-message algorithms. This implies protocols with similar error and communication guarantees for several well-studied problems such as heavy hitters, d-dimensional range counting, M-estimation of the median and quantiles, and more generally sparse non-adaptive statistical query algorithms. For the related selection problem on a domain of size B, we prove: - A nearly tight lower bound of Ω(B) on the number of users in the single-message shuffled model. This significantly improves on the Ω(B^(1/17)) lower bound obtained by Cheu et al. (Eurocrypt 2019). A key ingredient in the proof is a lower bound on the error of locally-private frequency estimation in the low-privacy (a.k.a. high-ε) regime. For this we develop new techniques to extend the results of Duchi et al. (FOCS 2013; JASA 2018) and Bassily & Smith (STOC 2015), whose techniques only gave tight bounds in the high-privacy setting.
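For intuition, a single-message local randomizer of the kind the lower bound applies to can be sketched as classical randomized response with a debiasing step; a shuffler would then strip identities from the reports before they reach the analyzer. This is our own baseline illustration, not the paper's multi-message protocols, and the mixing probability p is left abstract rather than tied to a particular ε.

```python
# Single-message frequency estimation: each user reports the truth with
# probability p and a uniform value otherwise; the analyzer debiases counts.
import numpy as np

rng = np.random.default_rng(1)
B, n, p = 16, 100_000, 0.25                  # domain size, users, truth prob.

true = rng.integers(0, B, size=n)
keep = rng.random(n) < p
reports = np.where(keep, true, rng.integers(0, B, size=n))

# E[count_v] = n_v * p + n * (1-p)/B, so invert for an unbiased estimate:
counts = np.bincount(reports, minlength=B)
est = (counts - n * (1 - p) / B) / p
print(np.abs(est - np.bincount(true, minlength=B)).max())
```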
2021
EUROCRYPT
Unbounded Multi-Party Computation from Learning with Errors 📺
We consider the problem of round-optimal *unbounded MPC*: in the first round, parties publish a message that depends only on their input. In the second round, any subset of parties can jointly and securely compute any function $f$ over their inputs in a single round of broadcast. We do not impose any a priori bound on the number of parties nor on the size of the functions that can be computed. Our main result is a semi-honest two-round protocol for unbounded MPC in the plain model from the hardness of the standard learning with errors (LWE) problem. Prior work in the same setting assumes the hardness of problems over bilinear maps. Thus, our protocol is the first example of unbounded MPC that is post-quantum secure. The central ingredient of our protocol is a new scheme of attribute-based secure function evaluation (AB-SFE) with *public decryption*. Our construction combines techniques from the realm of homomorphic commitments with delegation of lattice basis. We believe that such a scheme may find further applications in the future.
2021
EUROCRYPT
Password Hashing and Preprocessing 📺
How does the cryptanalytic effort needed to compromise t out of m instances of hashed passwords scale with the number of users when arbitrary preprocessing information on the hash function is available? We provide a formal treatment of this problem in the multi-instance setting with auxiliary information. A central contribution of our work is an (arguably simple) transcript-counting argument that allows us to resolve a fundamental question left open by Bellare, Ristenpart, and Tessaro (BRT; CRYPTO 2012) in multi-instance security. We leverage this proof technique to formally justify unrecoverability of hashed salted passwords in the presence of auxiliary information in the random-oracle model. To this end we utilize the recent pre-sampling techniques for dealing with auxiliary information developed by Coretti et al. (CRYPTO 2018). Our bounds closely match those commonly assumed in practice. Besides hashing of passwords through a monolithic random oracle, we consider the effect of iteration, a technique that is used in classical mechanisms, such as bcrypt and PBKDF2, to slow down the rate of guessing. Building on the work of BRT, we formulate a notion of KDF security, also in the presence of auxiliary information, and prove an appropriate composition theorem for it.
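The iteration mechanism captured by the KDF-security notion is exactly what standard libraries expose. For example, Python's built-in PBKDF2 makes every password guess cost `iterations` hash computations per salt; the iteration count below is purely illustrative.

```python
# Iterated, salted password hashing: the salt separates instances, and the
# iteration count multiplies the attacker's per-guess cost.
import hashlib, os

salt = os.urandom(16)
iterations = 600_000     # illustrative cost parameter
dk = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple",
                         salt, iterations, dklen=32)
print(dk.hex())
```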
2021
EUROCRYPT
Mind the Middle Layer: The HADES Design Strategy Revisited 📺
The HADES design strategy combines the classical SPN construction with the Partial SPN (PSPN) construction, in which at every encryption round, the non-linear layer is applied to only a part of the state. In a HADES design, a middle layer that consists of PSPN rounds is surrounded by outer layers of SPN rounds. The security arguments of HADES with respect to statistical attacks use only the SPN rounds, disregarding the PSPN rounds. This allows the designers to not pose any restriction on the MDS matrix used as the linear mixing operation. In this paper we show that the choice of the MDS matrix significantly affects the security level provided by HADES designs. If the MDS is chosen properly, then the security level of the scheme against differential and linear attacks is significantly higher than claimed by the designers. On the other hand, weaker choices of the MDS allow for extremely large invariant subspaces that pass the entire middle layer without activating any non-linear operation (a.k.a. S-box). We showcase our results on the Starkad and Poseidon instantiations of HADES. For Poseidon, we significantly improve the lower bounds on the number of active S-boxes with respect to both differential and linear cryptanalysis provided by the designers – for example, from 28 to 60 active S-boxes for the t = 6 variant. For Starkad, we show that for any variant with t (i.e., the number of S-boxes in each round) divisible by 4, the cipher admits a huge invariant subspace that passes any number of PSPN rounds without activating any S-box (e.g., a subspace of size 2^1134 for the t = 24 variant). Furthermore, for various choices of the parameters, this invariant subspace can be used to mount a preimage attack on the hash function that breaks its security claims. On the other hand, we show that the problem can be fixed easily by replacing t with any value that is not divisible by four. Following our paper, the designers of Starkad and Poseidon amended their design, by adding requirements which ensure that the MDS matrix is chosen properly.
2021
EUROCRYPT
Security Analysis of Quantum Lightning 📺
Zhandry recently defined a new cryptographic object called quantum lightning, which has a number of useful applications, including a strong form of quantum money. Further, they proposed a construction of quantum lightning based on superpositions of low-rank matrices. The scheme is unusual, so it is difficult to base the scheme's security on any widespread computational assumptions. So instead, they proposed a new hardness assumption that, if true, could be used to prove security. In this work, we show that the hardness assumption is in fact false, so the proof of security does not hold. However, we note that the proposal for quantum lightning has not been proven insecure. This work is the first step in analyzing the security of Zhandry's proposal and moving toward a scheme that we can prove to be secure.
2021
EUROCRYPT
Non-interactive Distributional Indistinguishability (NIDI) and Non-Malleable Commitments 📺
We introduce non-interactive distributionally indistinguishable arguments (NIDI) to remedy a significant weakness of NIWI proofs: namely, the lack of meaningful secrecy when proving statements about NP languages with unique witnesses. NIDI arguments allow a prover $\mathcal{P}$ to send a single message to a verifier $\mathcal{V}$, given which $\mathcal{V}$ can obtain a sample $d$ from a (secret) distribution $\mathcal{D}$ together with a proof of membership of $d$ in an NP language. The soundness guarantee is that if the sample $d$ obtained by the verifier $\mathcal{V}$ is not in the language, then $\mathcal{V}$ outputs $\bot$. The secrecy guarantee is that secrets about the distribution remain hidden: for every pair of (sufficiently) hard-to-distinguish distributions $\mathcal{D}_0$ and $\mathcal{D}_1$, a NIDI that outputs samples from $\mathcal{D}_0$ with proofs is indistinguishable from one that outputs samples from $\mathcal{D}_1$ with proofs. We build NIDI arguments that satisfy secrecy for sufficiently hard distributions, assuming sub-exponential indistinguishability obfuscation and sub-exponentially secure (variants of) one-way functions. We demonstrate preliminary applications of NIDI and of our techniques to obtaining the first (relaxed) non-interactive constructions in the plain model, from well-founded assumptions, of: -- Commit-and-prove that provably hides the committed message -- CCA-secure commitments against non-uniform adversaries. The commit phase of our commitment schemes consists of a single message from the committer to the receiver, followed by a randomized output by the receiver (that need not be sent to the committer).
2021
EUROCRYPT
Dynamic Ad Hoc Clock Synchronization 📺
Clock synchronization allows parties to establish a common notion of global time by leveraging a weaker synchrony assumption, i.e., local clocks with approximately the same speed. Despite intensive investigation of the problem in the fault-tolerant distributed computing literature, existing solutions do not apply to settings where participation is unknown, e.g., the ad hoc model of Beimel et al. [EUROCRYPT 17], or is dynamically shifting over time, e.g., the fluctuating/sleepy/dynamic-availability models of Garay et al. [CRYPTO 17], Pass and Shi [ASIACRYPT 17] and Badertscher et al. [CCS 18]. We show how to apply and extend ideas from the blockchain literature to devise synchronizers that work in such dynamic ad hoc settings and tolerate corrupted minorities under the standard assumption that local clocks advance at approximately the same speed. We discuss both the setting of honest-majority hashing power and that of a PKI with honest majority. Our main result is a synchronizer that is directly integrated with a new proof-of-stake (PoS) blockchain protocol, Ouroboros Chronos, which we construct and prove secure; to our knowledge, this is the first PoS blockchain protocol to rely only on local clocks, while tolerating worst-case corruption and dynamically fluctuating participation. We believe that this result might be of independent interest.
2021
EUROCRYPT
High-Precision Bootstrapping of RNS-CKKS Homomorphic Encryption Using Optimal Minimax Polynomial Approximation and Inverse Sine Function 📺
Approximate homomorphic encryption with the residue number system (RNS), called the RNS-variant Cheon-Kim-Kim-Song (RNS-CKKS) scheme, is a fully homomorphic encryption scheme that supports arithmetic operations on encrypted real or complex data. Although the RNS-CKKS scheme is a fully homomorphic encryption scheme, most applications use it only as a leveled homomorphic encryption scheme because its bootstrapping operation lacks practicality. One of the crucial problems of the bootstrapping operation is its poor precision. While the other basic homomorphic operations ensure sufficiently high precision for practical use, the bootstrapping operation supports only about 20-bit fixed-point precision at best, which has so far not been precise enough for reliable large-depth homomorphic computations. In this paper, we improve the message precision of the bootstrapping operation of the RNS-CKKS scheme. Since the homomorphic modular reduction process is one of the most important steps in determining the precision of the bootstrapping, we focus on this process. First, we propose a fast algorithm for obtaining the optimal minimax approximate polynomial of the modular reduction function and the scaled sine/cosine function over the union of the approximation regions, called the improved multi-interval Remez algorithm. In fact, this algorithm derives the optimal minimax approximate polynomial of any continuous function over any union of finitely many intervals. Next, we propose the composite function method using the inverse sine function to reduce the difference between the scaling factor used in the bootstrapping and the default scaling factor. With these methods, we reduce the approximation error in the bootstrapping of the RNS-CKKS scheme by a factor of 1/1176 to 1/42 (a 5.4- to 10.2-bit precision improvement) for each parameter setting. While the bootstrapping without the composite function method has 27.2- to 30.3-bit precision at maximum, the bootstrapping with the composite function method has 32.6- to 40.5-bit precision.
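For intuition on why the sine function enters at all, the following numpy check (our own illustration, not the paper's algorithm) compares exact centered modular reduction with the classical scaled-sine surrogate used as a starting point in CKKS bootstrapping; the cubic growth of the error in the fractional part is the gap that optimal minimax polynomials and the inverse-sine composition then attack.

```python
# Classical scaled-sine surrogate for modular reduction: for t whose
# distance f to the nearest integer is small,
#   t mod 1 (centered)  ~=  sin(2*pi*t) / (2*pi),
# with error on the order of f**3.
import numpy as np

t = np.linspace(-3, 3, 10_001)
centered_mod1 = t - np.rint(t)                  # exact reduction to [-1/2, 1/2)
sine_approx = np.sin(2 * np.pi * t) / (2 * np.pi)

f = np.abs(centered_mod1)
for bound in (0.05, 0.1, 0.2):
    mask = f < bound
    err = np.max(np.abs(sine_approx[mask] - centered_mod1[mask]))
    print(f"|frac| < {bound}: max error = {err:.2e}")   # grows ~ bound**3
```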
2021
EUROCRYPT
On the Compressed-Oracle Technique, and Post-Quantum Security of Proofs of Sequential Work 📺
We revisit the so-called compressed oracle technique, introduced by Zhandry for analyzing quantum algorithms in the quantum random oracle model (QROM). To start off with, we offer a concise exposition of the technique, which easily extends to the parallel-query QROM, where in each query-round the considered algorithm may make several queries to the QROM in parallel. This variant of the QROM allows for a more fine-grained query-complexity analysis. Our main technical contribution is a framework that simplifies the use of (the parallel-query generalization of) the compressed oracle technique for proving query complexity results. With our framework in place, whenever applicable, it is possible to prove quantum query complexity lower bounds by means of purely classical reasoning. More than that, for typical examples the crucial classical observations that give rise to the classical bounds are sufficient to conclude the corresponding quantum bounds. We demonstrate this on a few examples, recovering known results but also obtaining new results. Our main target is the hardness of finding a q-chain with fewer than q parallel queries, i.e., a sequence x_0, x_1, ..., x_q with x_i = H(x_{i-1}) for all 1 \leq i \leq q. The above problem of finding a hash chain is of fundamental importance in the context of proofs of sequential work. Indeed, as a concrete cryptographic application of our techniques, we prove quantum security of the ``Simple Proofs of Sequential Work'' by Cohen and Pietrzak.
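A q-chain as defined above is trivial to compute but inherently sequential, since each step needs the previous output; this is precisely the structure proofs of sequential work rely on. A minimal sketch:

```python
# Compute a q-chain x_0, x_1, ..., x_q with x_i = H(x_{i-1}), as in the
# abstract. Each step depends on the previous digest, so an honest
# computation cannot be parallelized across steps.
import hashlib

def hash_chain(x0: bytes, q: int) -> list[bytes]:
    chain = [x0]
    for _ in range(q):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

chain = hash_chain(b"seed", 1000)
assert hashlib.sha256(chain[-2]).digest() == chain[-1]
```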
2021
EUROCRYPT
One-way functions and malleability oracles: Hidden shift attacks on isogeny-based protocols 📺
Supersingular isogeny Diffie-Hellman key exchange (SIDH) is a post-quantum protocol based on the presumed hardness of computing an isogeny between two supersingular elliptic curves given some additional torsion point information. Unlike other isogeny-based protocols, SIDH has been widely believed to be immune to subexponential quantum attacks because of the non-commutative structure of the endomorphism rings of supersingular curves. We contradict this commonly believed misconception in this paper. More precisely, we highlight the existence of an abelian group action on the SIDH key space, and we show that for sufficiently \emph{unbalanced} and \emph{overstretched} SIDH parameters, this action can be efficiently computed (heuristically) using the torsion point information revealed in the protocol. This reduces the underlying hardness assumption to a hidden shift problem instance which can be solved in quantum subexponential time. We formulate our attack in a new framework allowing the inversion of one-way functions in quantum subexponential time provided a malleability oracle with respect to some commutative group action. This framework unifies our new attack with earlier subexponential quantum attacks on isogeny-based protocols, and it may be of further interest for cryptanalysis.
2021
EUROCRYPT
On the Power of Expansion: More Efficient Constructions in the Random Probing Model 📺
The random probing model is a leakage model in which each wire of a circuit leaks with a given probability $p$. This model enjoys practical relevance thanks to a reduction to the noisy leakage model, which is widely accepted as the right formalization for power and electromagnetic side-channel attacks. In addition, the random probing model is much more convenient than the noisy leakage model to prove the security of masking schemes. In a recent work, Ananth, Ishai and Sahai (CRYPTO 2018) introduced a nice expansion strategy to construct random probing secure circuits. Their construction tolerates a leakage probability of $2^{-26}$, which is the first quantified achievable leakage probability in the random probing model. In a follow-up work, Belaïd, Coron, Prouff, Rivain and Taleb (CRYPTO 2020) generalized their idea and put forward a complete and practical framework to generate random probing secure circuits. The so-called expanding compiler can bootstrap simple base gadgets as long as they satisfy a new security notion called \emph{random probing expandability} (RPE). They further provide an instantiation of the framework which tolerates a $2^{-8}$ leakage probability in complexity $\mathcal{O}(\kappa^{7.5})$ where $\kappa$ denotes the security parameter. In this paper, we provide an in-depth analysis of the RPE security notion. We exhibit the first upper bounds for the main parameter of a RPE gadget, which is known as the \emph{amplification order}. We further show that the RPE notion can be made tighter and we exhibit strong connections between RPE and the \emph{strong non-interference} (SNI) composition notion. We then introduce the first generic constructions of gadgets achieving RPE for any number of shares and with nearly optimal amplification orders and provide an asymptotic analysis of such constructions. Last but not least, we introduce new concrete constructions of small gadgets achieving maximal amplification orders. This allows us to obtain much more efficient instantiations of the expanding compiler: we obtain a complexity of $\mathcal{O}(\kappa^{3.9})$ for a slightly better leakage probability, as well as $\mathcal{O}(\kappa^{3.2})$ for a slightly lower leakage probability.
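The model itself is easy to simulate: each wire leaks independently with probability p, and an additively masked secret carried on n share-wires is exposed only if all n shares leak. The toy Monte Carlo below (our own illustration, not a gadget from the paper) shows the p^n decay that amplification-order arguments quantify.

```python
# Monte Carlo for the random probing model on a trivial "circuit" that
# carries n shares of one secret on n wires: the adversary learns the
# secret iff every share-wire leaks, which happens with probability p**n.
import numpy as np

rng = np.random.default_rng(2)

def break_probability(n_shares: int, p: float, trials: int = 200_000) -> float:
    leaks = rng.random((trials, n_shares)) < p   # which share-wires leak
    return float(np.mean(leaks.all(axis=1)))     # all shares leaked

for p in (2.0**-4, 2.0**-2):
    print(p, [break_probability(n, p) for n in (1, 2, 3)])  # decays ~ p**n
```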
2021
EUROCRYPT
Breaking the Circuit Size Barrier for Secure Computation under Quasi-Polynomial LPN 📺
In this work we introduce a new (circuit-dependent) homomorphic secret sharing (HSS) scheme for all log/log log-local circuits, with communication proportional only to the width of the circuit, and polynomial computation, assuming the super-polynomial hardness of learning parity with noise (LPN). At the heart of our new construction is a pseudorandom correlation generator (PCG), which allows two parties to locally stretch, from short seeds, pseudorandom instances of an arbitrary log/log log-local additive correlation. Our main application, and the main motivation behind this work, is a generic two-party secure computation protocol for every layered (boolean or arithmetic) circuit of size s with total communication O(s / log log s) and polynomial computation, assuming the super-polynomial hardness of the standard learning parity with noise assumption (a circuit is layered if its nodes can be partitioned into layers, such that any wire connects adjacent layers). This expands the set of assumptions under which the ‘circuit size barrier’ can be broken, for a large class of circuits. The strength of the underlying assumption is tied to the sublinearity factor: we achieve communication O(s/k(s)) under the s^{2^{k(s)}}-hardness of LPN, for any k(s) ≤ (log log s)/4. Previously, the set of assumptions known to imply a PCG for correlations of degree ω(1) or generic secure computation protocols with sublinear communication was restricted to LWE, DDH, and a circularly secure variant of DCR.
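To connect the stated tradeoff with the quasi-polynomial hardness in the title, here is the arithmetic at the extreme point of the curve (our own worked instance, assuming base-2 logarithms):

```latex
% At k(s) = (\log\log s)/4, the hardness requirement becomes
\[
  s^{2^{k(s)}} \;=\; s^{2^{(\log\log s)/4}} \;=\; s^{(\log s)^{1/4}}
  \;=\; 2^{(\log s)^{5/4}},
\]
% which is quasi-polynomial, while the communication reaches
\[
  O\!\left(\frac{s}{k(s)}\right) \;=\; O\!\left(\frac{s}{\log\log s}\right).
\]
```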
2021
EUROCRYPT
Indistinguishability Obfuscation from Simple-to-State Hard Problems: New Assumptions, New Techniques, and Simplification 📺
In this work, we study the question of what set of simple-to-state assumptions suffice for constructing functional encryption and indistinguishability obfuscation ($i\mathcal{O}$), supporting all functions describable by polynomial-size circuits. Our work improves over the state-of-the-art work of Jain, Lin, Matt, and Sahai (Eurocrypt 2019) in multiple dimensions. New Assumption: Prior to our work, all constructions of $i\mathcal{O}$ from simple assumptions required novel pseudorandomness generators involving LWE samples and constant-degree polynomials over the integers, evaluated on the error of the LWE samples. In contrast, Boolean pseudorandom generators (PRGs) computable by constant-degree polynomials have been extensively studied since the work of Goldreich (2000). We show how to replace the novel pseudorandom objects over the integers used in previous works, with appropriate Boolean pseudorandom generators with sufficient stretch, when combined with LWE with binary error over suitable parameters. Both binary error LWE and constant degree Goldreich PRGs have been a subject of extensive cryptanalysis since much before our work, and thus we back the plausibility of our assumption with security against algorithms studied in the context of cryptanalysis of these objects. New Techniques: We introduce a number of new techniques: - We show how to build partially-hiding public-key functional encryption, supporting degree-2 functions in the secret part of the message, and arithmetic $\mathsf{NC}^1$ functions over the public part of the message, assuming only standard assumptions over asymmetric pairing groups. - We construct single-ciphertext secret-key functional encryption for all circuits with {\em linear} key generation, assuming only the LWE assumption. Simplification: Unlike prior works, our new techniques furthermore let us construct public-key functional encryption for polynomial-sized circuits directly (without invoking any bootstrapping theorem, nor transformation from secret-key to public key FE), and based only on the polynomial hardness of underlying assumptions. The functional encryption scheme satisfies a strong notion of efficiency where the size of the ciphertext grows only sublinearly in the output size of the circuit and not its size. Finally, assuming that the underlying assumptions are subexponentially hard, we can bootstrap this construction to achieve $i\mathcal{O}$.
2021
EUROCRYPT
The More The Merrier: Reducing the Cost of Large Scale MPC 📺
Secure multi-party computation (MPC) allows multiple parties to perform secure joint computations on their private inputs. Today, applications for MPC are growing with thousands of parties wishing to build federated machine learning models or trusted setups for blockchains. To address such scenarios we propose a suite of novel MPC protocols that maximize throughput when run with large numbers of parties. In particular, our protocols have both communication and computation complexity that decrease with the number of parties. Our protocols build on prior protocols based on packed secret-sharing, introducing new techniques to build more efficient computation for general circuits. Specifically, we introduce a new approach for handling \emph{linear attacks} that arise in protocols using packed secret-sharing and we propose a method for unpacking shared multiplication triples without increasing the asymptotic costs. Compared with prior work, we avoid the $\log |C|$ overhead required when generically compiling circuits of size $|C|$ for use in a SIMD computation, and we improve over folklore ``committee-based'' solutions by a factor of $O(s)$, the statistical security parameter. In practice, our protocol is up to $10X$ faster than any known construction, under a reasonable set of parameters.
2021
EUROCRYPT
Three Third Generation Attacks on the Format Preserving Encryption Scheme FF3 📺
Format-Preserving Encryption (FPE) schemes accept plaintexts from any finite set of values (such as social security numbers or birth dates) and produce ciphertexts that belong to the same set. They are extremely useful in practice since they make it possible to encrypt existing databases or communication packets without changing their format. Due to industry demand, NIST standardized two such encryption schemes, called FF1 and FF3, in 2016. They immediately attracted considerable cryptanalytic attention with decreasing attack complexities. The best currently known attack on the Feistel construction FF3 has data and memory complexity of ${O}(N^{11/6})$ and time complexity of ${O}(N^{17/6})$, where the input belongs to a domain of size $N \times N$. In this paper, we present and experimentally verify three improved attacks on FF3. Our best attack achieves the tradeoff curve $D=M=\tilde{O}(N^{2-t})$, $T=\tilde{O}(N^{2+t})$ for all $t \leq 0.5$. In particular, we can reduce the data and memory complexities to the more practical $\tilde{O}(N^{1.5})$, and at the same time, reduce the time complexity to $\tilde{O}(N^{2.5})$. We also identify another attack vector against FPE schemes, the {\em related-domain} attack. We show how one can mount powerful attacks when the adversary is given access to the encryption under the same key in different domains, and show how to apply it to efficiently distinguish FF3 and FF3-1 instances.
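In one line (our own arithmetic from the stated exponents), the curve keeps the data-time product fixed, so moving along it purely trades data and memory for time:

```latex
\[
  D = M = \tilde{O}\!\left(N^{2-t}\right),\qquad
  T = \tilde{O}\!\left(N^{2+t}\right)
  \;\;\Longrightarrow\;\;
  D \cdot T = \tilde{O}\!\left(N^{4}\right)
  \quad\text{for all } 0 \le t \le 0.5 .
\]
```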
2021
EUROCRYPT
Structured Encryption and Dynamic Leakage Suppression 📺
Structured encryption (STE) schemes encrypt data structures in such a way that they can be privately queried. Special cases of STE include searchable symmetric encryption (SSE) and graph encryption. Like all sub-linear encrypted search solutions, STE leaks information about queries against persistent adversaries. To address this, a line of work on leakage suppression was recently initiated that focuses on techniques to mitigate or completely remove the leakage of STE schemes (Kamara et al., CRYPTO ’18, and Kamara and Moataz, Eurocrypt ’19). A notable example is the cache-based compiler which, when combined with the rebuild compiler, transforms any dynamic STE scheme that leaks the query equality into a new scheme that does not. Unfortunately, this compiler can only produce static schemes and it was left as an open problem to design a compiler that could yield dynamic constructions. In this work, we propose a dynamic variant of the cache-based compiler. Our compiler can transform any volume-hiding semi-dynamic or mutable STE scheme that leaks the query equality pattern into a new fully-dynamic construction that does not. Using this compiler, we design three new fully-dynamic STE schemes that are “almost” and fully zero-leakage which, under natural assumptions about the data and query distributions, are asymptotically more efficient than using black-box ORAM simulation. These are the first constructions of their kind.
2021
EUROCRYPT
A Deeper Look at Machine Learning-Based Cryptanalysis 📺
At CRYPTO’19, Gohr proposed a new cryptanalysis strategy based on the utilisation of machine learning algorithms. Using deep neural networks, he managed to build a neural based distinguisher that surprisingly surpassed state-of-the-art cryptanalysis efforts on one of the versions of the well studied NSA block cipher SPECK (this distinguisher could in turn be placed in a larger key recovery attack). While this work opens new possibilities for machine learning-aided cryptanalysis, it remains unclear how this distinguisher actually works and what information the machine learning algorithm is deducing. The attacker is left with a black-box that does not tell much about the nature of the possible weaknesses of the algorithm tested, and hopes are thin, as interpretability of deep neural networks is a notoriously difficult task. In this article, we propose a detailed analysis and thorough explanations of the inner workings of this new neural distinguisher. First, we study the classified sets and try to find patterns that could guide us to better understand Gohr’s results. We show with experiments that the neural distinguisher generally relies on the differential distribution of the ciphertext pairs, but also on the differential distribution in the penultimate and antepenultimate rounds. In order to validate our findings, we construct a distinguisher for the SPECK cipher based on pure cryptanalysis, without using any neural network, that achieves basically the same accuracy as Gohr’s neural distinguisher and with the same efficiency (therefore improving over previous non-neural based distinguishers). Moreover, as another approach, we provide a machine learning-based distinguisher that strips down Gohr’s deep neural network to a bare minimum. We are able to remain very close to Gohr’s distinguishers’ accuracy using simple standard machine learning tools. In particular, we show that Gohr’s neural distinguisher is in fact inherently building a very good approximation of the Differential Distribution Table (DDT) of the cipher during the learning phase, and using that information to directly classify ciphertext pairs. This result allows a full interpretability of the distinguisher and represents on its own an interesting contribution towards the interpretability of deep neural networks. Finally, we propose some methods to improve over Gohr’s work and possible new neural distinguisher settings. All our results are confirmed with experiments we have conducted on the SPECK block cipher.
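Since the key finding is that the neural distinguisher approximates the cipher's Differential Distribution Table, it helps to recall what a DDT is. The sketch below computes one for a familiar 4-bit S-box (PRESENT's, chosen purely as an example; SPECK itself is an ARX cipher without S-boxes, so the paper's DDT analogue is taken over ciphertext-pair differences).

```python
# DDT[din][dout] counts inputs x with S(x) ^ S(x ^ din) == dout; high
# entries are exactly the exploitable differential biases a distinguisher
# can learn to recognize.
def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for din in range(n):
        for x in range(n):
            dout = sbox[x] ^ sbox[x ^ din]
            table[din][dout] += 1
    return table

# PRESENT's 4-bit S-box, used here purely as a familiar example.
PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
for row in ddt(PRESENT_SBOX)[:4]:
    print(row)
```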
2021
EUROCRYPT
The Rise of Paillier: Homomorphic Secret Sharing and Public-Key Silent OT 📺
We describe a simple method for solving the distributed discrete logarithm problem in Paillier groups, allowing two parties to locally convert multiplicative shares of a secret (in the exponent) into additive shares. Our algorithm is perfectly correct, unlike previous methods with an inverse polynomial error probability. We obtain the following applications and further results. – Homomorphic secret sharing: We construct homomorphic secret sharing for branching programs with negligible correctness error and supporting exponentially large plaintexts, with security based on the decisional composite residuosity (DCR) assumption. – Correlated pseudorandomness: Pseudorandom correlation functions (PCFs), recently introduced by Boyle et al. (FOCS 2020), allow two parties to obtain a practically unbounded quantity of correlated randomness, given a pair of short, correlated keys. We construct PCFs for the oblivious transfer (OT) and vector oblivious linear evaluation (VOLE) correlations, based on the quadratic residuosity (QR) or DCR assumptions, respectively. We also construct a pseudorandom correlation generator (for producing a bounded number of samples, all at once) for OLE, based on a combination of the DCR and learning parity with noise assumptions. – Public-key silent OT/VOLE: We upgrade our PCF constructions to have a public-key setup, where after independently posting a public key, each party can locally derive its PCF key. This allows completely silent generation of an arbitrary amount of OTs or VOLEs, without any interaction beyond a PKI, based on QR and DCR. The public-key setup is based on a novel non-interactive vector OLE protocol which can be seen as a variant of the Bellare-Micali oblivious transfer protocol.
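To see why a distributed discrete logarithm in Paillier groups can be perfectly correct, recall that (1+N)^x = 1 + xN mod N^2, so discrete logs base (1+N) reduce to a single division. The toy below is our own illustration of this share-conversion idea, with tiny insecure parameters; it is not a rendition of the paper's full scheme.

```python
# Distributed discrete log in Z*_{N^2}: if two parties hold g_0 and
# g_1 = g_0 * (1+N)^x mod N^2, then writing g_i = a_i + b_i*N with
# a_i = g_i mod N gives a_1 = a_0 and b_1 = b_0 + a_0*x mod N, so each
# party can LOCALLY compute z_i = b_i * a_i^{-1} mod N, and
# z_1 - z_0 = x mod N -- additive shares, with no error probability.

p, q = 1009, 1013                  # toy primes (insecure, demo only)
N, N2 = p * q, (p * q) ** 2

def ddlog(g: int) -> int:
    a = g % N
    b = (g - a) // N
    return (b * pow(a, -1, N)) % N

x = 123456
g0 = pow(7, 99, N2)                 # arbitrary element as party 0's share
g1 = (g0 * pow(1 + N, x, N2)) % N2  # party 1's multiplicative share
assert (ddlog(g1) - ddlog(g0)) % N == x
```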
2021
EUROCRYPT
Advanced Lattice Sieving on GPUs, with Tensor Cores 📺
In this work, we study GPU implementations of various state-of-the-art sieving algorithms for lattices (Becker-Gama-Joux 2015, Becker-Ducas-Gama-Laarhoven 2016, Herold-Kirshanova 2017) inside the General Sieve Kernel (G6K, Albrecht et al. 2019). In particular, we extensively exploit the recently introduced Tensor Cores -- originally designed for raytracing and machine learning -- and demonstrate their fitness for the cryptanalytic task at hand. We also propose a new dual-hash technique for efficient detection of `lift-worthy' pairs to accelerate a key ingredient of G6K: finding short lifted vectors. We obtain new computational records, reaching dimension 180 for the SVP Darmstadt Challenge, improving upon the previous record of dimension 155. This computation ran for 51.6 days on a server with 4 NVIDIA Turing GPUs and 1.5TB of RAM. This corresponds to a gain of about two orders of magnitude over previous records, both in terms of wall-clock time and of energy efficiency.
2021
EUROCRYPT
Efficient Bootstrapping for Approximate Homomorphic Encryption with Non-Sparse Keys 📺
We present a bootstrapping procedure for the full-RNS variant of the approximate homomorphic-encryption scheme of Cheon et al., CKKS (Asiacrypt 17, SAC 18). Compared to the previously proposed procedures (Eurocrypt 18 & 19, CT-RSA 20), our bootstrapping procedure is more precise, more efficient (in terms of CPU cost and number of consumed levels), more reliable, and 128-bit-secure. Unlike the previous approaches, it does not require the use of sparse secret-keys. Therefore, to the best of our knowledge, this is the first procedure that enables a highly efficient and precise bootstrapping with a low probability of failure for parameters that are 128-bit-secure under the most recent attacks on sparse R-LWE secrets. We achieve this efficiency and precision by introducing three novel contributions: (i) We propose a generic algorithm for homomorphic polynomial-evaluation that takes into account the approximate rescaling and is optimal in level consumption. (ii) We optimize the key-switch procedure and propose a new technique for linear transformations (double hoisting). (iii) We propose a systematic approach to parameterize the bootstrapping, including a precise way to assess its failure probability. We implemented our improvements and bootstrapping procedure in the open-source Lattigo library. For example, bootstrapping a plaintext in C^32768 takes 18 seconds, has an output coefficient modulus of 505 bits, a mean precision of 19.1 bits, and a failure probability of 2^-15.58. Hence, we achieve a 14.1x improvement in bootstrapped throughput (plaintext-bits per second) with respect to the previous best results, with a failure probability 468x smaller, while ensuring 128-bit security.
2021
PKC
Two-round n-out-of-n and Multi-Signatures and Trapdoor Commitment from Lattices 📺
Although they have been studied for a long time, distributed signature protocols have garnered renewed interest in recent years in view of novel applications to topics like blockchains. Most recent works, however, have focused on distributed versions of ECDSA or variants of Schnorr signatures; in particular, little attention has been given to constructions based on post-quantum secure assumptions like the hardness of lattice problems. A few lattice-based threshold signature and multi-signature schemes have been proposed in the literature, but they either rely on hash-and-sign lattice signatures (which tend to be comparatively inefficient), use expensive generic transformations, or only come with incomplete security proofs. In this paper, we construct several lattice-based distributed signing protocols with low round complexity following the Fiat--Shamir with Aborts (FSwA) paradigm of Lyubashevsky (Asiacrypt 2009). Our protocols can be seen as distributed variants of the fast Dilithium-G signature scheme, and the full security proof can be made assuming the hardness of the module SIS and LWE problems. A key step to achieving security (unexplained in some earlier papers) is to prevent the leakage that can occur when parties abort after their first message---which can inevitably happen in the Fiat--Shamir with Aborts setting. We manage to do so using homomorphic commitments. Exploiting the similarities between FSwA and Schnorr-style signatures, our approach makes the most of observations from recent advancements in the discrete log setting, such as Drijvers et al.'s seminal work on two-round multi-signatures (S&P 2019). In particular, we observe that the use of commitments not only resolves the subtle issue with aborts, but also makes it possible to realize secure two-round n-out-of-n distributed signing and multi-signatures in the plain public key model, by equipping the commitment with a trapdoor feature. The construction of a suitable trapdoor commitment from lattices is a side contribution of this paper.
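The abort that causes the leakage issue is the standard rejection step of Fiat--Shamir with Aborts. Over plain integers it looks as follows; this is a toy illustration of the mechanism only, with made-up parameters, not Dilithium-G or the paper's distributed protocol.

```python
# Rejection sampling in FSwA: the response z = y + c*s is released only if
# it lands in a range independent of the secret s, so ACCEPTED responses
# leak nothing about s. The paper's subtlety is the information leaked by
# runs that abort AFTER a first message has already been sent.
import random

B, beta = 2**20, 2**10             # mask range and bound on |c*s|
s = random.randrange(-beta, beta)  # toy secret (stands in for c*s's range)

def sign_attempt():
    y = random.randrange(-B, B + 1)       # fresh mask
    c = random.choice([-1, 1])            # toy challenge
    z = y + c * s
    if abs(z) > B - beta:                 # abort: z would depend on s
        return None
    return z

attempts = [sign_attempt() for _ in range(10_000)]
print("abort rate:", sum(a is None for a in attempts) / len(attempts))
```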
2021
PKC
On the (In)Security of the Diffie-Hellman Oblivious PRF with Multiplicative Blinding 📺
Oblivious Pseudorandom Function (OPRF) is a protocol between a client holding input x and a server holding key k for a PRF F. At the end, the client learns F_k(x) and nothing else, while the server learns nothing. OPRFs have found diverse applications as components of larger protocols, and the currently most efficient instantiation, with security proven in the UC model, is F_k(x)=H2(x,(H1(x))^k) computed using so-called exponential blinding, i.e., the client sends a=(H1(x))^r for random r, the server responds b=a^k, which the client unblinds as v=b^{1/r} to compute F_k(x)=H2(x,v). However, this protocol requires two variable-base exponentiations on the client, while a more efficient multiplicative blinding scheme replaces one or both client exponentiations with fixed-base exponentiations, decreasing the client's computational cost by a factor of two to six, depending on pre-computation. We analyze the security of the above OPRF with multiplicative blinding, showing surprising weaknesses that offer attack avenues which are not present with exponential blinding. We characterize the security of this OPRF implementation as a "Revised OPRF" functionality, a relaxation of the UC OPRF functionality used in prior work. On the positive side, we show that the Revised OPRF suffices for the security of OPAQUE, the asymmetric PAKE protocol, hence allowing OPAQUE the computational advantages of multiplicative blinding. Unfortunately, we also show examples of other OPRF applications which become insecure when using such blinding. The conclusion is that the use of multiplicative blinding for F_k(x) defined as above, in settings where the correct value g^k (needed for multiplicative blinding) is not authenticated and OPRF inputs are of low entropy, must be carefully analyzed, or avoided altogether. We complete the picture by showing a simple and safe alternative definition of the function F_k(x) which offers (full) UC OPRF security using either form of blinding.
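To make the exponential-blinding flow above concrete, here is a minimal Python sketch over a toy subgroup of prime order (the group, the naive hash-to-group map H1, and all parameter sizes are illustrative assumptions for readability, not a secure instantiation):

    import hashlib, secrets

    p, q, g = 23, 11, 4                        # toy order-11 subgroup of Z_23^*

    def H1(x: bytes) -> int:
        # Naive hash-to-group: hash to an exponent, then exponentiate g.
        # (Real instantiations use a proper hash-to-group map.)
        e = 1 + int.from_bytes(hashlib.sha256(x).digest(), "big") % (q - 1)
        return pow(g, e, p)

    def H2(x: bytes, v: int) -> bytes:
        return hashlib.sha256(x + v.to_bytes(4, "big")).digest()

    k = secrets.randbelow(q - 1) + 1           # server's PRF key

    x = b"input"
    r = secrets.randbelow(q - 1) + 1           # client's blinding factor
    a = pow(H1(x), r, p)                       # client sends a = H1(x)^r

    b = pow(a, k, p)                           # server responds b = a^k

    r_inv = pow(r, -1, q)                      # 1/r modulo the group order
    v = pow(b, r_inv, p)                       # client unblinds: v = b^{1/r}
    assert v == pow(H1(x), k, p)               # matches direct PRF evaluation
    out = H2(x, v)                             # F_k(x) = H2(x, v)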
2021
PKC
Compact Zero-Knowledge Proofs for Threshold ECDSA with Trustless Setup 📺
Threshold ECDSA signatures provide a higher level of security for a crypto wallet, since more than t out of n parties are required to sign a transaction. The state-of-the-art bandwidth-efficient threshold ECDSA schemes use the additively homomorphic Castagnos-Laguillaumie (CL) encryption based on an unknown-order group G, together with a number of zero-knowledge proofs in G. In this paper, we propose compact zero-knowledge proofs for threshold ECDSA to lower the communication bandwidth as well as the computation cost. The proposed zero-knowledge proofs include the discrete-logarithm relation in G and the well-formedness of a CL ciphertext. When applied to two-party ECDSA, we can lower the bandwidth of the key generation algorithm by 47%, and the running times for the key generation and signing algorithms are improved by about 35% and 104%, respectively. When applied to threshold ECDSA, our first scheme is more optimized for the key generation algorithm (about 70% lower bandwidth and 70% faster computation in key generation, at the cost of 20% larger bandwidth in signing), while our second scheme offers an all-round performance improvement (about 60% lower bandwidth and 27% faster computation in key generation, without additional cost in signing).
2021
PKC
An Efficient and Generic Construction for Signal's Handshake (X3DH): Post-Quantum, State Leakage Secure, and Deniable 📺
The Signal protocol is a secure instant messaging protocol that underlies the security of numerous applications such as WhatsApp, Skype, and Facebook Messenger, among many others. The Signal protocol consists of two sub-protocols known as the X3DH protocol and the double ratchet protocol, where the latter has recently gained much attention. For instance, Alwen, Coretti, and Dodis (Eurocrypt'19) provided a concrete security model along with a generic construction based on simple building blocks that are instantiable from versatile assumptions, including post-quantum ones. In contrast, as far as we are aware, works focusing on the X3DH protocol are limited. In this work, we cast the X3DH protocol as a specific type of authenticated key exchange (AKE) protocol, which we call a Signal-conforming AKE protocol, and formally define its security model based on the vast prior work on AKE protocols. We then provide the first efficient generic construction of a Signal-conforming AKE protocol based on standard cryptographic primitives such as key encapsulation mechanisms (KEMs) and signature schemes. Specifically, this results in the first post-quantum secure replacement of the X3DH protocol based on well-established assumptions. Similar to the X3DH protocol, our Signal-conforming AKE protocol offers a strong (or stronger) flavor of security, where the exchanged key remains secure even when all the non-trivial combinations of the long-term secrets and session-specific secrets are compromised. Moreover, our protocol has a weak flavor of deniability, and we further show how to strengthen it using ring signatures. Finally, we provide a full-fledged, generic C implementation of our (weakly deniable) protocol. We instantiate it with several Round 3 candidates (finalists and alternates) of the NIST post-quantum standardization process and compare the resulting bandwidth and computation performances. Our implementation is publicly available.
2021
PKC
Isogeny-based key compression without pairings 📺
SIDH/SIKE-style protocols benefit from key compression to minimize their bandwidth requirements, but proposed key compression mechanisms rely on computing bilinear pairings. Pairing computation is a notoriously expensive operation, and, unsurprisingly, it is typically one of the main efficiency bottlenecks in SIDH key compression, incurring processing time penalties that are only mitigated at the cost of trade-offs with precomputed tables. We address this issue by describing how to compress isogeny-based keys without pairings. As a bonus, we also substantially reduce the storage requirements of other operations involved in key compression.
2021
PKC
Universal Proxy Re-Encryption 📺
We put forward the notion of universal proxy re-encryption (UPRE). A UPRE scheme enables a proxy to convert a ciphertext under a (delegator) public key of any existing public-key encryption (PKE) scheme into another ciphertext under a (delegatee) public key of any existing PKE scheme (possibly different from the delegator one). The proxy has a re-encryption key generated from the delegator's secret key and the delegatee public key. Thus UPRE generalizes proxy re-encryption by supporting arbitrary PKE schemes and allowing ciphertexts to be converted into ones of possibly different PKE schemes. In this work, we
- provide syntax and definitions for both UPRE and a variant we call relaxed UPRE. The relaxed variant means that decryption algorithms for re-encrypted ciphertexts are slightly modified but still only use the original delegatee secret keys for decryption;
- construct a UPRE based on probabilistic indistinguishability obfuscation (PIO), which allows us to re-encrypt ciphertexts polynomially many times;
- construct relaxed UPRE from garbled circuits (GCs). We provide two variants of this construction: one allows us to re-encrypt ciphertexts polynomially many times, and the other satisfies a stronger security requirement but only allows re-encrypting ciphertexts a constant number of times.
2021
PKC
Analysis of Multivariate Encryption Schemes: Application to Dob 📺
In this paper, we study the effect of two modifications to multivariate public key encryption schemes: internal perturbation (ip), and Q_+. Focusing on the Dob encryption scheme, a construction utilising these modifications, we accurately predict the number of degree fall polynomials produced in a Gröbner basis attack, up to and including degree five. The predictions remain accurate even when fixing variables. Based on this new theory we design a novel attack on the Dob encryption scheme, which breaks Dob using the parameters suggested by its designers. While our work primarily focuses on the Dob encryption scheme, we also believe that the presented techniques will be of particular interest to the analysis of other big-field schemes.
2021
PKC
On the Integer Polynomial Learning with Errors Problem 📺
Several recent proposals of efficient public-key encryption are based on variants of the polynomial learning with errors problem (\textsf{PLWE}$^f$) in which the underlying \emph{polynomial} ring $\mathbb{Z}_q[x]/f$ is replaced with the (related) modular \emph{integer} ring $\mathbb{Z}_{f(q)}$; the corresponding problem is known as \emph{Integer Polynomial Learning with Errors} (\textsf{I-PLWE}$^f$). Cryptosystems based on \textsf{I-PLWE}$^f$ and its variants can exploit optimised big-integer arithmetic to achieve good practical performance, as exhibited by the \textsf{ThreeBears} cryptosystem. Unfortunately, the average-case hardness of \textsf{I-PLWE}$^f$ and its relation to more established lattice problems have to date remained unclear. We describe the first polynomial-time average-case reductions for the search variant of \textsf{I-PLWE}$^f$, proving its computational equivalence with the search variant of its counterpart problem \textsf{PLWE}$^f$. Our reductions apply to a large class of defining polynomials $f$. To obtain our results, we employ a careful adaptation of Rényi divergence analysis techniques to bound the impact of the integer ring arithmetic carries on the error distributions. As an application, we present a deterministic public-key cryptosystem over integer rings. Our cryptosystem, which resembles \textsf{ThreeBears}, enjoys one-way (OW-CPA) security provably based on the search variant of \textsf{I-PLWE}$^f$.
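A toy Python sketch of the ring correspondence behind this problem family (not the paper's reduction; the polynomial f, the modulus q, and the sample coefficients below are illustrative assumptions): evaluating polynomials at x = q maps Z[x]/f to Z_{f(q)}, so polynomial arithmetic modulo f agrees with integer arithmetic modulo f(q), up to the carries that the Rényi-divergence analysis mentioned above has to control.

    q = 257
    f = [1, 0, 0, 0, 1]                 # f(x) = x^4 + 1, lowest degree first

    def ev(poly, t):                    # evaluate a polynomial at the point t
        return sum(c * t**i for i, c in enumerate(poly))

    def polymul(a, b):                  # schoolbook multiplication in Z[x]
        out = [0] * (len(a) + len(b) - 1)
        for i, ca in enumerate(a):
            for j, cb in enumerate(b):
                out[i + j] += ca * cb
        return out

    def mod_f(poly):                    # reduce mod x^4 + 1 using x^4 = -1
        out = [0, 0, 0, 0]
        for e, c in enumerate(poly):
            k, r = divmod(e, 4)
            out[r] += c * (-1) ** k
        return out

    a, b = [3, 1, 4, 1], [2, 7, 1, 8]   # small sample coefficients
    fq = ev(f, q)                       # f(q) = q^4 + 1

    lhs = (ev(a, q) * ev(b, q)) % fq            # multiply in Z_{f(q)}
    rhs = ev(mod_f(polymul(a, b)), q) % fq      # multiply in Z[x]/f, then map
    assert lhs == rhs                           # the two arithmetics agree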
2021
PKC
Master-Key KDM-Secure ABE via Predicate Encoding 📺
In this paper, we propose the first generic framework for attribute-based encryption (ABE) with master-secret-key-dependent-message security (mKDM security) for affine functions, via the predicate encodings of Chen, Gay and Wee [Eurocrypt 2015]. The construction is adaptively secure under the standard $k$-Lin assumption in prime-order bilinear groups. In this way, we obtain a set of new mKDM-secure ABE schemes with a level of expressiveness that had not been reached before: we get the first hierarchical IBE (HIBE) scheme and the first ABE scheme for arithmetic branching programs (ABP) with mKDM security for affine functions. Thanks to this expressiveness (more concretely, delegability as in HIBE), we can obtain mKDM-secure ABE against chosen-ciphertext attacks (i.e., CCA security) via a classical CPA-to-CCA transformation that works well in the context of mKDM.
2021
PKC
Exact Lattice Sampling from Non-Gaussian Distributions 📺
We propose a new framework for (trapdoor) sampling over lattices. Our framework can be instantiated in a number of ways. For example, it allows sampling from uniform, affine and “product affine” distributions. Another salient point of our framework is that the output distributions of our samplers are perfectly indistinguishable from ideal ones, in contrast with classical samplers, which are only statistically indistinguishable. One caveat of our framework is that all our current instantiations entail a rather large standard deviation.
2021
PKC
Efficient Adaptively-Secure IB-KEMs and VRFs via Near-Collision Resistance 📺
We construct more efficient cryptosystems with provable security against adaptive attacks, based on simple and natural hardness assumptions in the standard model. Concretely, we describe:
– An adaptively-secure variant of the efficient, selectively-secure LWE-based identity-based encryption (IBE) scheme of Agrawal, Boneh, and Boyen (EUROCRYPT 2010). In comparison to the previously most efficient such scheme by Yamada (CRYPTO 2017) we achieve smaller lattice parameters and shorter public keys of size O(log \lambda), where \lambda is the security parameter.
– Adaptively-secure variants of two efficient selectively-secure pairing-based IBEs of Boneh and Boyen (EUROCRYPT 2004). One is based on the DBDH assumption, has the same ciphertext size as the corresponding BB04 scheme, and achieves full adaptive security with public parameters of size only O(log \lambda). The other is based on a q-type assumption and has public key size O(\lambda), but a ciphertext is only a single group element and the security reduction is quadratically tighter than the corresponding scheme by Jager and Kurek (ASIACRYPT 2018).
– A very efficient adaptively-secure verifiable random function where proofs, public keys, and secret keys have size O(log \lambda).
As a technical contribution we introduce blockwise partitioning, which leverages the assumption that a cryptographic hash function is weak near-collision resistant to prove full adaptive security of cryptosystems.
2021
PKC
Subversion-Resilient Public Key Encryption with Practical Watchdogs 📺
Restoring the security of maliciously implemented cryptosystems has been widely considered challenging, due to the fact that the subverted implementation could arbitrarily deviate from the official specification. Achieving security against adversaries that can arbitrarily subvert implementations seems to inherently require trusted component assumptions and/or architectural properties. At ASIACRYPT 2016, Russell et al. proposed a very useful model where a watchdog is used to test and approve individual components of an implementation before or during deployment. Such a detection-based strategy has been shown to be very useful for designing a broad class of cryptographic schemes that are provably resilient to subversion. We consider Russell et al.'s watchdog model from a practical perspective. We find that the asymptotic definitional framework, while permitting strong positive theoretical results, does not yet provide practical solutions, due to the fact that the running time of a watchdog is only bounded by an abstract polynomial. Hence, in the worst case, the running time of the watchdog might exceed the running time of the adversary, which seems impractical. We adapt Russell et al.'s watchdog model to the concrete security setting. We design the first subversion-resilient public-key encryption scheme, which additionally allows for extremely efficient watchdogs with only linear running time. At the core of our construction is a new variant of a combiner for key encapsulation mechanisms (KEMs) by Giacon et al. (PKC'18). We combine this construction with a new subversion-resilient randomness generator that can also be checked by a very efficient watchdog, even in constant time, which could be of independent interest for the design of other subversion-resilient cryptographic schemes with practical watchdogs. Our work thus shows how to apply Russell et al.'s watchdog model to design subversion-resilient cryptography with efficient and very practical watchdogs.
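For readers unfamiliar with KEM combiners, the following toy Python sketch illustrates the generic idea (a hash-based combiner over two independent toy Diffie-Hellman KEMs; the group, the KEM interface, and the SHA-256 key derivation are illustrative assumptions, not the paper's variant of the Giacon et al. combiner):

    import hashlib, secrets

    p, q, g = 23, 11, 4                      # toy order-11 subgroup of Z_23^*

    def keygen():
        sk = secrets.randbelow(q - 1) + 1
        return pow(g, sk, p), sk

    def encaps(pk):                          # toy Diffie-Hellman KEM
        r = secrets.randbelow(q - 1) + 1
        return pow(g, r, p), pow(pk, r, p)   # (ciphertext, shared secret)

    def decaps(sk, ct):
        return pow(ct, sk, p)

    def combine(c1, c2, K1, K2):             # derive the final key from both KEMs
        return hashlib.sha256(repr((c1, c2, K1, K2)).encode()).digest()

    pk1, sk1 = keygen(); pk2, sk2 = keygen() # two independent KEM instances
    c1, K1 = encaps(pk1); c2, K2 = encaps(pk2)
    assert combine(c1, c2, K1, K2) == \
           combine(c1, c2, decaps(sk1, c1), decaps(sk2, c2))

The point of such a combiner is that the derived key stays secure as long as at least one of the two KEMs remains secure.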
2021
PKC
Cryptographic Pseudorandom Generators Can Make Cryptosystems Problematic 📺
Randomness is an essential resource for cryptography. For practical randomness generation, the security notion of pseudorandom generators (PRGs) is intended to automatically preserve the (computational) security of cryptosystems when used in implementations. Nevertheless, opposite cases are known, such as computational randomness extractors (Barak et al., CRYPTO 2011), where security can be lost even by applying secure PRGs; this phenomenon has not yet been systematically studied. The present paper aims to push forward the observation and understanding of this phenomenon; we reveal such situations at the level of primitives and protocols as well, not just of building blocks like randomness extractors. We present three typical types of such cases: (1) adversaries can legally see the seed of the PRGs (including the case of randomness extractors); (2) the set of "bad" randomness may not be efficiently recognizable; (3) the formulation of a desired property implicitly involves non-uniform distinguishers for PRGs. We point out that the semi-honest security of multiparty computation also belongs to Type 1, while correctness with negligible decryption error probability for public-key encryption belongs to Types 2 and 3. We construct examples for each type where a secure PRG (against uniform distinguishers only, for Type 3) does not preserve the security/correctness of the original scheme, and discuss some countermeasures to avoid such issues.
2021
PKC
Non-Interactive CCA2-Secure Threshold Cryptosystems: Achieving Adaptive Security in the Standard Model Without Pairings 📺
We consider threshold public-key encryption, where the decryption servers distributively hold the private key shares, and a threshold of these servers is needed to decrypt a message (while the system remains secure when fewer than the threshold are corrupted). We investigate the notion of chosen-ciphertext secure threshold systems, which has historically been hard to achieve. We further require the systems to be both adaptively secure (i.e., secure against a strong adversary making corruption decisions dynamically during the protocol) and non-interactive (i.e., decryption servers do not interact amongst themselves but rather each efficiently contributes a single message). To date, only pairing-based implementations were known to achieve security in the standard security model without relaxation (i.e., without assuming the random oracle idealization) under the above stringent requirements. Here, we investigate how to achieve the above using other assumptions (in order to understand what other algebraic building blocks and mathematical assumptions are needed to extend the domain of encryption methods achieving the above). Specifically, we show realizations under the Decision Composite Residuosity (DCR) and Learning-With-Errors (LWE) assumptions.
2021
PKC
Publicly Verifiable Zero Knowledge from (Collapsing) Blockchains 📺
Publicly verifiable zero-knowledge proofs are known to exist only from setup assumptions such as a trusted common reference string or a random oracle. Unfortunately, the former requires a trusted party while the latter does not exist. Blockchains are distributed systems that already exist and provide certain security properties (under some honest-majority assumption); hence, a natural recent research direction has been to use a blockchain as an alternative setup assumption. In TCC 2017, Goyal and Goyal proposed a construction of a publicly verifiable zero-knowledge (pvZK) proof system for some proof-of-stake blockchains. The zero-knowledge property of their construction, however, relies on some additional and not fully specified assumptions about the current and future behavior of honest blockchain players. In this paper, we provide several contributions. First, we show that when using a blockchain to design a provably secure protocol, it is dangerous to rely on demanding additional requirements on the behavior of blockchain players. We do so by showing an ``attack of the clones'' whereby a malicious verifier can use a smart contract to slyly (not through bribing) clone capabilities of honest stakeholders and use those to invalidate the zero-knowledge property of the proof system by Goyal and Goyal. Second, we propose a new publicly verifiable zero-knowledge proof system that relies on non-interactive commitments and on an assumption on the min-entropy of some blocks appearing on the blockchain. Third, motivated by the fact that blockchains are a recent innovation and their resilience, in the long run, is still controversial, we introduce the concept of a collapsing blockchain, and we prove that the zero-knowledge property of our scheme holds even if the blockchain eventually becomes insecure and all blockchain players eventually become dishonest.
2021
PKC
Two-server Distributed ORAM with Sublinear Computation and Constant Rounds 📺
Distributed ORAM (DORAM) is a multi-server variant of Oblivious RAM. Originally proposed to lower bandwidth, DORAM has recently been of great interest due to its applicability to secure computation in the RAM model, where the circuit complexity and rounds of communication are equally important metrics of efficiency. All prior DORAM constructions either involve linear work per server (e.g., Floram) or logarithmic rounds of communication between servers (e.g., square root ORAM). In this work, we construct the first DORAM schemes in the 2-server, semi-honest setting that simultaneously achieve sublinear server computation and constant rounds of communication. We provide two constant-round constructions, one based on square root ORAM that has O(sqrt{N} log N) local computation and another based on secure computation of a doubly efficient PIR that achieves local computation of O(N^e) for any e > 0 but that allows the servers to distinguish between reads and writes. As a building block in the latter construction, we provide secure computation protocols for evaluation and interpolation of multivariate polynomials based on the Fast Fourier Transform, which may be of independent interest.
2021
PKC
Updatable Signatures and Message Authentication Codes 📺
Cryptographic objects with updating capabilities have been proposed by Bellare, Goldreich and Goldwasser (CRYPTO'94) under the umbrella of incremental cryptography. They have recently seen increased interest, motivated by theoretical questions (Ananth et al., EC'17) as well as concrete practical motivations (Lehmann et al., EC'18; Groth et al., CRYPTO'18; Klooß et al., EC'19). The form of updatability we are particularly interested in is that primitives are key-updatable and allow updating ''old'' cryptographic objects, e.g., signatures or message authentication codes, from the ''old'' key to the updated key without requiring full access to the new key (i.e., only via a so-called update token). Inspired by the rigorous study of updatable encryption by Lehmann and Tackmann (EC'18) and Boyd et al. (CRYPTO'20), we introduce a definitional framework for updatable signatures (USs) and message authentication codes (UMACs). We discuss several applications demonstrating that such primitives can be useful in practice, especially around key rotation in various domains, and can serve as building blocks in other cryptographic schemes. We then turn to constructions, and our focus there is on ones that are secure and practically efficient. In particular, we provide generic constructions from key-homomorphic primitives (signatures and PRFs) as well as direct constructions. This allows us to instantiate these primitives from various assumptions such as DDH or CDH (the latter in bilinear groups), or the (R)LWE and SIS assumptions. As an example, we obtain highly practical US schemes from BLS signatures or UMAC schemes from the Naor-Pinkas-Reingold PRF.
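A minimal Python sketch of the key-homomorphic mechanism that makes BLS-style signatures updatable (a toy group stands in for the pairing group, only the token-based update equality is shown, and all names and parameters are illustrative assumptions, not the paper's construction):

    import hashlib, secrets

    p, q, g = 23, 11, 4                            # toy order-11 subgroup of Z_23^*

    def H(m: bytes) -> int:                        # naive hash-to-group (toy)
        e = 1 + int.from_bytes(hashlib.sha256(m).digest(), "big") % (q - 1)
        return pow(g, e, p)

    sk = secrets.randbelow(q - 1) + 1              # old signing key
    sk_new = secrets.randbelow(q - 1) + 1          # rotated signing key

    sigma = pow(H(b"msg"), sk, p)                  # BLS-style sigma = H(m)^sk
    delta = (sk_new * pow(sk, -1, q)) % q          # update token sk_new / sk
    sigma_new = pow(sigma, delta, p)               # update using the token only
    assert sigma_new == pow(H(b"msg"), sk_new, p)  # valid under the new key

Note that the token delta alone suffices to move existing signatures to the new key, which is exactly the ''update without full access to the new key'' property discussed above.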
2021
PKC
Flexible and Efficient Verifiable Computation on Encrypted Data 📺
We consider the problem of verifiable and private delegation of computation [Gennaro et al. CRYPTO'10] in which a client stores private data on an untrusted server and asks the server to compute functions over this data. In this scenario we aim to achieve three main properties: the server should not learn information on inputs and outputs of the computation (privacy), the server cannot return wrong results without being caught (integrity), and the client can verify the correctness of the outputs faster than running the computation (efficiency). A known paradigm to solve this problem is to use a (non-private) verifiable computation (VC) to prove correctness of a homomorphic encryption (HE) evaluation on the ciphertexts. Despite the research advances in obtaining efficient VC and HE, using these two primitives together in this paradigm is concretely expensive. Recent work [Fiore et al. CCS'14, PKC'20] addressed this problem by designing specialized VC solutions that however require the HE scheme to work with very specific parameters; notably HE ciphertexts must be over $\mathbb{Z}_q$ for a large prime $q$. In this work we propose a new solution that allows a flexible choice of HE parameters, while staying modular (based on the paradigm combining VC and HE) and efficient (the VC and the HE schemes are both executed at their best efficiency). At the core of our new protocol are new homomorphic hash functions for Galois rings. As an additional contribution we extend our results to support non-deterministic computations on encrypted data and an additional privacy property by which verifiers do not learn information on the inputs of the computation.
2021
PKC
Transferable E-cash: A Cleaner Model and the First Practical Instantiation 📺
Transferable e-cash is the most faithful digital analog of physical cash, as it allows users to transfer coins among themselves in isolation, that is, without interacting with a bank or a ``ledger''. Appropriate protection of user privacy and, at the same time, providing means to trace fraudulent behavior (double-spending of coins) have made instantiating the concept notoriously hard. Baldimtsi et al. (PKC'15) gave a first instantiation, but, as it relies on a powerful cryptographic primitive, the scheme is not practical. We also point out a flaw in their scheme. In this paper we revisit the model for transferable e-cash and propose simpler yet stronger security definitions. We then provide the first concrete construction, based on bilinear groups, give rigorous proofs that it satisfies our model, and analyze its efficiency in detail.
2021
PKC
Multi-Client Functional Encryption for Separable Functions 📺
In this work, we provide a compiler that transforms a single-input functional encryption scheme for the class of polynomially bounded circuits into a multi-client functional encryption (MCFE) scheme for the class of separable functions. An $n$-input function $f$ is called separable if it can be described as a list of polynomially bounded circuits $f^1,..., f^n$ s.t. $f(x_1,..., x_n)= f^1(x_1)+ ... + f^n(x_n)$ for all $x_1,..., x_n$. Our compiler extends the works of Brakerski et al. [Eurocrypt 2016] and of Komargodski et al. [Eurocrypt 2017], in which a generic compiler is proposed to obtain multi-input functional encryption (MIFE) from single-input functional encryption. Our construction achieves the stronger notion of MCFE, but for the less generic class of separable functions. Prior to our work, a long line of results has been proposed in the setting of MCFE for the inner-product functionality, which is a special case of a separable function. We also propose a modified version of the notion of decentralized MCFE introduced by Chotard et al. [Asiacrypt 2018], which we call outsourceable multi-client functional encryption (OMCFE). Intuitively, the notion of OMCFE makes it possible to distribute the load of the decryption procedure among at most $n$ different entities, which return decryption shares that can be combined (e.g., additively), thus obtaining the output of the computation. This notion is especially useful when the decryption procedure is very resource-consuming, while the combining algorithm is cheap. We also show how to extend the presented MCFE protocol to obtain an OMCFE scheme for the same functionality class.
2021
PKC
More Efficient Digital Signatures with Tight Multi-User Security 📺
We construct the currently most efficient signature schemes with tight multi-user security against adaptive corruptions. Ours is the first generic construction of such schemes, based on lossy identification schemes (Abdalla et al., JoC 2016), and the first to achieve strong existential unforgeability. It also has significantly more compact signatures than the previously most efficient construction by Gjøsteen and Jager (CRYPTO 2018). When instantiated based on the decisional Diffie-Hellman assumption, a signature consists of only three exponents. We propose a new variant of the generic construction of signatures from sequential OR-proofs by Abe, Ohkubo, and Suzuki (ASIACRYPT 2002) and Fischlin, Harasser, and Janson (EUROCRYPT 2020). In comparison to Fischlin et al., who focus on constructing signatures in the non-programmable random oracle model (NPROM), we aim to achieve tight security against adaptive corruptions, maximize efficiency, and directly achieve strong existential unforgeability (also in the NPROM). This yields a slightly different construction, and we use slightly different and additional properties of the lossy identification scheme. Signatures with tight multi-user security against adaptive corruptions are a commonly used standard building block for tightly-secure authenticated key exchange protocols. We also show how our construction improves the efficiency of all existing tightly-secure AKE protocols.
2021
PKC
Shorter Lattice-Based Zero-Knowledge Proofs via One-Time Commitments 📺
There has been a lot of recent progress in constructing efficient zero-knowledge proofs for showing knowledge of an $\vec{\bm{s}}$ with small coefficients satisfying $\bm{A}\vec{\bm{s}}=\vec{\bm{t}}$. For typical parameters, the proof sizes have gone down from several megabytes to a bit under $50$KB (Esgin et al., Asiacrypt 2020). These are now within an order of magnitude of the sizes of lattice-based signatures, which themselves constitute proof systems demonstrating knowledge of something weaker than the aforementioned equation. One can therefore see that this line of research is approaching optimality. In this paper, we modify a key component of these proofs, as well as apply several other tweaks, to achieve a further reduction of around $30\%$ in the proof output size. We also show that these savings propagate when the proofs are used in a general framework to construct more complex protocols.
2021
PKC
Multiparty Cardinality Testing for Threshold Private Set Intersection 📺
Threshold Private Set Intersection (PSI) allows multiple parties to compute the intersection of their input sets if and only if the intersection is larger than $n-t$, where $n$ is the size of the sets and $t$ is some threshold. The main appeal of this primitive is that, in contrast to standard PSI, known upper bounds on the communication complexity only depend on the threshold $t$ and not on the sizes of the input sets. Current threshold PSI protocols consist of two components: a cardinality testing phase, where parties decide if the intersection is larger than some threshold; and a PSI phase, where the intersection is computed. The main source of inefficiency of threshold PSI is the former. In this work, we present a new cardinality testing protocol that allows $N$ parties to check if the intersection of their input sets is larger than $n-t$. The protocol incurs $\tilde{\mathcal{O}}(Nt^2)$ communication complexity. We thus obtain a threshold PSI scheme for $N$ parties with communication complexity $\tilde{\mathcal{O}}(Nt^2)$.
2021
PKC
Private Set Operations from Oblivious Switching 📺
Private set intersection reveals the intersection of two private sets, but many real-world applications require the parties to learn $\textit{only}$ partial information about the intersection. In this paper, we introduce a new approach for computing arbitrary functions of the intersection, provided that it is safe to also reveal the cardinality of the intersection. In the most general case, our new protocol provides the participants with secret shares of the intersection, which can be fed into any generic 2PC protocol. Certain computations on the intersection can also be done even more directly and efficiently, avoiding this secret-sharing step. These cases include computing $\textit{only}$ the cardinality of the intersection, or the ``cardinality-sum'' application proposed in Ion $\textit{et al.}$ (ePrint 2017). Compared to the state-of-the-art protocol for computing on the intersection (Pinkas et al., Eurocrypt 2019), our protocol has about $2.5-3\times$ less communication and has faster running time on slower (50Mbps) networks. Our new techniques can also be used to privately compute the {\em union} of two sets as easily as computing the intersection. Our protocol concretely improves the leading private set union protocol (Kolesnikov et al., Asiacrypt 2020) by a factor of $2-2.5\times$, depending on the network speed. We then show how private set union can be used in a simple way to realize the ``Private-ID'' functionality suggested by Buddhavarapu et al. (ePrint 2020). Our protocol is significantly faster than the prior Private-ID protocol, especially on fast networks. All of our protocols are in the two-party setting and are secure against semi-honest adversaries.
2021
PKC
Verifiable Random Functions with Optimal Tightness 📺
Verifiable random functions (VRFs), introduced by Micali, Rabin and Vadhan (FOCS’99), are the public-key equivalent of pseudorandom functions. A public verification key and proofs accompanying the output enable all parties to verify the correctness of the output. However, all known standard model VRFs have a reduction loss that is much worse than what one would expect from known optimal constructions of closely related primitives like unique signatures. We show that:
1. Every security proof for a VRF has to lose a factor of Q, where Q is the number of adversarial queries. To that end, we extend the meta-reduction technique of Bader et al. (EUROCRYPT’16) to also cover VRFs.
2. This raises the question: Is this bound optimal? We answer this question in the affirmative by presenting the first VRF that achieves this tightness.
We thus paint a complete picture of the achievability of tight verifiable random functions: We show that a security loss of Q is unavoidable and present the first construction that achieves this bound.
2021
PKC
A Geometric Approach to Homomorphic Secret Sharing 📺
An (n,m,t)-homomorphic secret sharing (HSS) scheme allows n clients to share their inputs across m servers, such that the inputs are hidden from any t colluding servers, and moreover the servers can evaluate functions over the inputs locally by mapping their input shares to compact output shares. Such compactness makes HSS a useful building block for communication-efficient secure multi-party computation (MPC). In this work, we propose a simple compiler for HSS evaluating multivariate polynomials based on two building blocks: (1) homomorphic encryption for linear functions or low-degree polynomials, and (2) information-theoretic HSS for low-degree polynomials. Our compiler leverages the power of the first building block towards improving the parameters of the second. We use our compiler to generalize and improve on the HSS scheme of Lai, Malavolta, and Schröder [ASIACRYPT'18], which is only efficient when the number of servers is at most logarithmic in the security parameter. In contrast, we obtain efficient schemes for polynomials of higher degrees and an arbitrary number of servers. This application of our general compiler extends techniques that were developed in the context of information-theoretic private information retrieval (Woodruff and Yekhanin [CCC'05]), which use partial derivatives and Hermite interpolation to support the computation of polynomials of higher degrees. In addition to the above, we propose a new application of HSS to MPC with preprocessing. By pushing the computation of some HSS servers to a preprocessing phase, we obtain communication-efficient MPC protocols for low-degree polynomials that use fewer parties than previous protocols based on the same assumptions. The online communication of these protocols is linear in the input size, independently of the description size of the polynomial.
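As a minimal illustration of building block (2) in its simplest form (information-theoretic HSS for degree-1 functions via additive sharing; the modulus, the function and the 2-out-of-2 setting below are illustrative assumptions, not the paper's general scheme):

    import secrets

    p = 2**61 - 1                          # toy prime modulus (illustrative)

    def share(x):                          # 2-out-of-2 additive sharing over Z_p
        r = secrets.randbelow(p)
        return r, (x - r) % p

    a, b = 3, 5                            # a public linear function f(v) = a*v + b
    x = 42
    x0, x1 = share(x)                      # each server holds one share of x
    y0 = (a * x0 + b) % p                  # server 0 evaluates locally (adds the constant)
    y1 = (a * x1) % p                      # server 1 evaluates locally
    assert (y0 + y1) % p == (a * x + b) % p   # compact output shares reconstruct f(x)

The paper's compiler leverages building block (1) to improve the parameters of such low-degree schemes, supporting polynomials of higher degrees and an arbitrary number of servers.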
2021
PKC
Multivariate Public Key Cryptosystem from Sidon Spaces 📺
A Sidon space is a subspace of an extension field over a base field in which the product of any two elements can be factored uniquely, up to constants. This paper proposes a new public-key cryptosystem of the multivariate type, based on Sidon spaces, which has the potential to remain secure even if quantum supremacy is attained. This system, whose security relies on the hardness of the well-known MinRank problem, is shown to be resilient to several straightforward algebraic attacks. In particular, it is proved that the two popular attacks on the MinRank problem, the kernel attack and the minor attack, succeed only with exponentially small probability. The system is implemented in software, and its hardness is demonstrated experimentally.
2021
PKC
Generic Negation of Pair Encodings 📺
Attribute-based encryption (ABE) is a cryptographic primitive which supports fine-grained access control on encrypted data, making it an appealing building block for many applications. Pair encodings (Attrapadung, EUROCRYPT 2014) are simple primitives that can be used for constructing fully secure ABE schemes associated to a predicate relative to the encoding. We propose a generic transformation that takes any pair encoding scheme (PES) for a predicate P and produces a PES for its negated predicate $\bar{P}$. This construction finally solves a problem that had been open since 2015. Our techniques bring new insight into the expressivity and generality of PES and can be of independent interest. We also provide, to the best of our knowledge, the first pair encoding scheme for negated doubly spatial encryption (obtained with our transformation) and explore several other consequences of our results.
2021
PKC
Banquet: Short and Fast Signatures from AES 📺
In this work we introduce Banquet, a digital signature scheme with post-quantum security, constructed using only symmetric-key primitives. The design is based on the MPC-in-the-head paradigm also used by Picnic (CCS 2017) and BBQ (SAC 2019). Like BBQ, Banquet uses only standardized primitives, namely AES and SHA-3, but signatures are more than 50\% shorter, making them competitive with Picnic (which uses a non-standard block cipher to improve performance). The MPC protocol in Banquet uses a new technique to verify the correctness of the AES S-box computations, which is efficient because the cost is amortized with a batch verification strategy. Our implementation and benchmarks also show that both signing and verification can be done in under 10 ms on a current x64 CPU. We also explore the parameter space to show the range of trade-offs that are possible with the Banquet design, and show that Banquet can nearly match the signature sizes possible with Picnic (albeit with slower, but still practical, run times) or achieve speed within a factor of two of Picnic (at the cost of larger signatures).
2021
PKC
On Publicly-Accountable Zero-Knowledge and Small Shuffle Arguments 📺
Constructing interactive zero-knowledge arguments from simple assumptions with small communication complexity and good computational efficiency is an important, but difficult problem. In this work, we study interactive arguments with noticeable soundness error in their full generality and for the specific purpose of constructing concretely efficient shuffle arguments. To counterbalance the effects of a larger soundness error, we show how to transform such three-move arguments into publicly-accountable ones which allow the verifier to convince third parties of detected misbehavior by a cheating prover. This may be particularly interesting for applications where a malicious prover has to balance the profits it can make from cheating successfully and the losses it suffers from being caught. We construct interactive, public-coin, zero-knowledge arguments with noticeable soundness error for proving that a target vector of commitments is a pseudorandom permutation of a source vector. Our arguments do not rely on any trusted setup and only require the existence of collision-resistant hash functions. The communication complexity of our arguments is \emph{independent} of the length of the shuffled vector. For a soundness error of $2^{-5}=1/32$, the communication cost is $153$ bytes without and $992$ bytes with public accountability, meaning that our arguments are shorter than shuffle arguments realized using Bulletproofs (IEEE S\&P 2018) and even competitive in size with SNARKs, despite only relying on simple assumptions.
2021
PKC
Beyond Security and Efficiency: On-Demand Ratcheting with Security Awareness 📺
Secure asynchronous two-party communication applies ratcheting to strengthen privacy in the presence of internal state exposures. Security with ratcheting is provided in two forms: forward security and post-compromise security. Several such secure protocols have been proposed in the last few years. However, they come with a high cost. In this paper, we propose two generic constructions with favorable properties. Concretely, our first construction achieves security awareness. It allows users to detect non-persistent active attacks, to determine which messages are not safe given a potential leakage pattern, and to acknowledge deliveries. In our second construction, we define a hybrid system formed by combining two protocols: typically, a weakly secure "light" protocol and a strongly secure "heavy" protocol. The design goals of our hybrid construction are, first, to let the sender decide which one to use in order to obtain an efficient protocol with ratcheting on demand; and second, to restore the communication between honest participants in the case of a message loss or an active attack. We can apply our generic constructions to any existing protocol.
2021
PKC
Group Encryption: Full Dynamicity, Message Filtering and Code-Based Instantiation 📺
Group encryption (\textsf{GE}), introduced by Kiayias, Tsiounis and Yung (Asiacrypt'07), is the encryption analogue of group signatures. It allows sending verifiably encrypted messages satisfying certain requirements to certified members of a group, while keeping the anonymity of the receivers. Similar to the tracing mechanism in group signatures, the receiver of any ciphertext can be identified by an opening authority - should the need arise. The primitive of \textsf{GE} is motivated by a number of interesting privacy-preserving applications, including the filtering of encrypted emails sent to certified members of an organization. This paper aims to improve the state of affairs of \textsf{GE} systems. Our first contribution is the formalization of fully dynamic group encryption (\textsf{FDGE}) - a \textsf{GE} system simultaneously supporting dynamic user enrolments and user revocations. The latter functionality for \textsf{GE} has not been considered so far. As a second contribution, we realize the message filtering feature for \textsf{GE} based on a list of $t$-bit keywords and two commonly used policies: ``permissive'' - accept the message if it contains at least one of the keywords as a substring; ``prohibitive'' - accept the message if all of its $t$-bit substrings are at Hamming distance at least $d$ from all keywords, for $d \geq 1$. This feature has so far not been substantially addressed in existing instantiations of \textsf{GE} based on DCR, DDH, pairing-based and lattice-based assumptions. Our third contribution is the first instantiation of \textsf{GE} under code-based assumptions. The scheme is more efficient than the lattice-based construction of Libert et al. (Asiacrypt'16), which, prior to our work, was the only known instantiation of \textsf{GE} under post-quantum assumptions. Our scheme supports the two suggested policies for message filtering, and, in the random oracle model, it satisfies the stringent security notions for \textsf{FDGE} that we put forward.
2021
PKC
Steel: Composable Hardware-based Stateful and Randomised Functional Encryption 📺
Trusted execution environments (TEEs) enable the secure execution of programs on untrusted hosts and cryptographically attest the correctness of outputs. As these are complex systems, it is hard to capture the exact security achieved by protocols employing TEEs. Crucially, TEEs are typically employed in multiple protocols at the same time, thus composable security (with global subroutines) is a natural goal for such systems. We show that under an attested execution setup $\mathcal{G}_{\mathsf{att}}$ we can realise cryptographic functionalities that are unrealizable in the standard model. We propose a new primitive of Functional Encryption for Stateful and Randomised functionalities (FESR) and an associated protocol, Steel, that realizes it. We show that Steel UC-realises FESR in the universal composition with global subroutines model (TCC 2020). Our work is also a validation of the compositionality of earlier work (Iron, CCS 2017) capturing (non-stateful) hardware-based functional encryption. As the existing functionality for attested execution of Pass et al. (Eurocrypt 2017) is too strong for real-world use, we propose a weaker functionality that allows the adversary to conduct rollback and forking attacks. We show that the stateful variant of Steel, contrary to the stateless variant corresponding to Iron, is not secure in this setting, and we propose several mitigation techniques.
2021
PKC
How Provably Secure are (EC)DSA Signatures? 📺
Invited talk
Today, digital signatures are an omnipresent cryptographic primitive. They are extensively used for message and entity authentication and find widespread application in real-world protocols. Without much doubt, the specific schemes deployed most often are the RSA-based PKCS#1 v1.5, and the discrete logarithm-based DSA and ECDSA. For instance, current versions of TLS - the standard technology for securing internet connections - exclusively employ signatures of these types to authenticate servers. Furthermore, most cryptocurrencies like Bitcoin and Ethereum use ECDSA for signing transactions. The popularity of (EC)DSA signatures stands in stark contrast to the absence of rigorous security analyses. In this talk we will survey known provable security results about DSA and ECDSA. We will also discuss limitations of current provable security approaches.
2021
PKC
Single-to-Multi-Theorem Transformations for Non-Interactive Statistical Zero-Knowledge 📺
Non-interactive zero-knowledge proofs or arguments allow a prover to show validity of a statement without further interaction. For non-trivial statements such protocols require a setup assumption in the form of a common random or reference string (CRS). Generally, the CRS can only be used for one statement (single-theorem zero-knowledge), so that a fresh CRS would need to be generated for each proof. Fortunately, Feige, Lapidot and Shamir (FOCS 1990) presented a transformation for any non-interactive zero-knowledge proof system that allows the CRS to be reused any polynomial number of times (multi-theorem zero-knowledge). This FLS transformation, however, is only known to work for either computational zero-knowledge or requires a structured, non-uniform common reference string. In this paper we present FLS-like transformations that work for non-interactive statistical zero-knowledge arguments in the common random string model. They allow going from single-theorem to multi-theorem zero-knowledge and also preserve soundness, for both properties in the adaptive and non-adaptive case. Our first transformation is based on the general assumption that one-way permutations exist, while our second transformation uses lattice-based assumptions. Additionally, we define different possible soundness notions for non-interactive arguments and discuss their relationships.
2021
PKC
Adventures in Crypto Dark Matter: Attacks and Fixes for Weak Pseudorandom Functions 📺
A weak pseudorandom function (weak PRF) is one of the most important cryptographic primitives owing to its efficiency, although it provides lower security than a standard PRF. Recently, Boneh et al. (TCC'18) introduced two types of new weak PRF candidates, called the basic Mod-2/Mod-3 and alternative Mod-2/Mod-3 weak PRFs. Both use a mixture of linear computations defined over different small moduli to achieve conceptual simplicity, low complexity (depth-2 ${\sf ACC^0}$) and MPC friendliness. In fact, the new candidates are conjectured to be exponentially secure against any adversary that is allowed exponentially many samples, and the basic Mod-2/Mod-3 weak PRF is the only candidate that satisfies all of the features above. However, none of the direct attacks on the basic and alternative Mod-2/Mod-3 weak PRFs exploit their specific structure. In this paper, we investigate these weak PRFs from two perspectives: attacks and fixes. We first propose direct attacks on the alternative Mod-2/Mod-3 weak PRF and on the basic Mod-2/Mod-3 weak PRF when a circulant matrix is used as a secret key. For the alternative Mod-2/Mod-3 weak PRF, we prove that the adversary's advantage is at least $2^{-0.105n}$, where $n$ is the size of the input space of the weak PRF. Similarly, we show that the advantage of our heuristic attack on the weak PRF with a circulant matrix key is larger than $2^{-0.21n}$, which is contrary to the previous expectation that a `structured secret key' does not affect the security of a weak PRF. Thus, for an optimistic parameter choice $n = 2\lambda$ for the security parameter $\lambda$, parameters should be increased to preserve $\lambda$-bit security when an adversary obtains exponentially many samples. Next, we suggest a simple method for repairing the two weak PRFs affected by our attack while preserving the parameters.
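For concreteness, here is a minimal Python sketch of the basic Mod-2/Mod-3 candidate as commonly described (the key is a binary matrix A, and F_A(x) reduces A·x mod 2, then sums the resulting bits mod 3); the dimensions below are toy values, not recommended parameters:

    import secrets

    n = m = 16                                  # toy dimensions (illustrative)
    A = [[secrets.randbelow(2) for _ in range(n)] for _ in range(m)]

    def F(A, x):
        y = [sum(a * b for a, b in zip(row, x)) % 2 for row in A]   # A·x mod 2
        return sum(y) % 3                       # mix moduli: sum the bits mod 3

    x = [secrets.randbelow(2) for _ in range(n)]
    print(F(A, x))                              # output in {0, 1, 2}

The circulant-key variant attacked in the paper would instantiate A from cyclic shifts of a single row, which is exactly the structure the paper's attack exploits.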
2021
PKC
QCCA-Secure Generic Key Encapsulation Mechanism with Tighter Security in the Quantum Random Oracle Model 📺
Xagawa and Yamakawa (PQCrypto 2019) proved that the transformation SXY can tightly turn DS-secure PKEs into IND-qCCA-secure KEMs in the quantum random oracle model (QROM). However, transformations such as KC and TPunc, which turn PKEs with standard security (OW-CPA or IND-CPA) into DS-secure PKEs, still suffer from a quadratic security loss in the QROM. In this paper, we give a tighter security reduction for the transformation KC, which turns OW-CPA-secure deterministic PKEs into modified DS-secure PKEs in the QROM. We use the Measure-Rewind-Measure One-Way to Hiding Lemma recently introduced by Kuchta et al. (EUROCRYPT 2020) to avoid the square-root advantage loss. Moreover, we extend it to the case where the underlying PKEs are not perfectly correct. Combining with other transformations, we finally obtain a generic KEM from any IND-CPA-secure PKE. Our security reduction has roughly the same tightness as the result of Kuchta et al. without any other assumptions, and we achieve the stronger IND-qCCA security. We also give a similar result for another KEM transformation achieving the same security notion from any OW-CPA-secure deterministic PKE.
2021
PKC
Improving Revocation for Group Signature with Redactable Signature 📺
Group signatures are a major cryptographic tool allowing anonymous access to a service. However, in practice, access to a service is usually granted for some periods of time, which implies that the signing rights must be deactivated the rest of the time. This requirement thus calls for complex forms of revocation, reminiscent of the concept of time-bound keys. However, schemes implementing this concept are rare and only allow revocation with limited granularity. That is, signing keys are associated with an expiry time and become definitively useless once the latter has passed. In this paper, we revisit the notion of group signatures with time-bound keys with several contributions. Firstly, we extend this notion to allow high-granularity revocation: a member's signing key can in particular be deactivated at some moments and then be automatically reinstated. Secondly, we show that this complex property is actually simple to achieve using redactable signatures. In particular, we consider in this context a recent redactable signature scheme from PKC 20 that we improve by dramatically reducing the size of the public key. The resulting construction is of independent interest.
2021
PKC
Bootstrapping fully homomorphic encryption over the integers in less than one second 📺
One can bootstrap LWE-based fully homomorphic encryption (FHE) schemes in less than one second, but bootstrapping AGCD-based FHE schemes, also known as FHE over the integers, is still very slow. In this work we propose fast bootstrapping methods for FHE over the integers, thus closing the gap between these two types of schemes. We use a variant of the AGCD problem to construct a new GSW-like scheme that can natively encrypt polynomials; then we show how the single-gate bootstrapping method proposed by Ducas and Micciancio (EUROCRYPT 2015) can be adapted to FHE over the integers using our scheme, and we implement a bootstrapping that, using around 400 MB of key material, runs in less than one second on a common personal computer.
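For background, a toy Python sketch of the DGHV-style encryption underlying "FHE over the integers" (ciphertexts are near-multiples of a secret integer p; all sizes below are toy values and the scheme as written is insecure, purely for illustration):

    import secrets

    p = secrets.randbits(63) | (1 << 63) | 1    # odd secret key, ~64 bits (toy)

    def enc(m):                                 # encrypt a bit m as p*q + 2*r + m
        q = secrets.randbits(96)                # large random multiplier
        r = secrets.randbelow(2**8)             # small noise
        return p * q + 2 * r + m

    def dec(c):
        return (c % p) % 2                      # strip the multiple of p, then the noise

    c0, c1 = enc(1), enc(1)
    assert dec(c0 + c1) == 0                    # addition acts as XOR (noise permitting)
    assert dec(c0 * c1) == 1                    # multiplication acts as AND

Bootstrapping is what keeps the noise term from growing past p/2 under repeated operations, which is the expensive step this paper accelerates.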
2021
PKC
Group Signatures with User-Controlled and Sequential Linkability 📺
Group signatures allow users to create signatures on behalf of a group while remaining anonymous. Such signatures are a powerful tool to realize privacy-preserving data collections, where, e.g., sensors, wearables or vehicles can upload authenticated measurements into a data lake. The anonymity protects the user’s privacy, yet enables basic data processing of the uploaded unlinkable information. For many applications, though, full anonymity is neither desired nor useful, and selected parts of the data must eventually be correlated after being uploaded. Current group signature solutions do not provide such functionality in a satisfactory way: they either rely on a trusted party to perform opening or linking of signatures, which clearly conflicts with the core privacy goal of group signatures; or they require the user to decide upon the linkability of signatures before they are generated. In this paper we propose a new variant of group signatures that provides linkability in a flexible and user-centric manner. Users – and only they – can decide before and after signature creation whether their signatures should remain unlinkable or be correlated. To prevent attacks where a user omits certain signatures when a sequence of events in a certain section (e.g., a time frame) should be linked, we further extend this new primitive to allow for sequential link proofs. Such proofs guarantee that the provided sequence of data not only originates from the same signer, but also occurred in that exact order and contains all of the user’s signatures within the time frame. We formally define the desired security and privacy properties, propose a provably secure construction based on DL-related assumptions and report on a prototypical implementation of our scheme.
2021
PKC
Impossibility on Tamper-Resilient Cryptography with Uniqueness Properties 📺
In this work, we show negative results on the tamper-resilience of a wide class of cryptographic primitives with uniqueness properties, such as unique signatures, verifiable random functions, signatures with unique keys, injective one-way functions, and encryption schemes with a property we call unique-message property. Concretely, we prove that for these primitives, it is impossible to derive their (even extremely weak) tamper-resilience from any common assumption, via black-box reductions. Our proofs exploit the simulatable attack paradigm proposed by Wichs (ITCS ’13), and the tampering model we treat is the plain model, where there is no trusted setup.
2021
PKC
On the CCA Compatibility of Public-Key Infrastructure 📺
In this work, we put forth the notion of compatibility of any key generation or setup algorithm. We focus on the specific case of encryption, and say that a key generation algorithm KeyGen is X-compatible (for X \in {CPA, CCA1, CCA2}) if there exist encryption and decryption algorithms that, together with KeyGen, result in an X-secure public-key encryption scheme. We study the following question: Is every CPA-compatible key generation algorithm also CCA-compatible? We obtain the following answers:
- Every sub-exponentially CPA-compatible KeyGen algorithm is CCA1-compatible, assuming the existence of hinting PRGs and sub-exponentially secure keyless collision resistant hash functions.
- Every sub-exponentially CPA-compatible KeyGen algorithm is also CCA2-compatible, assuming the existence of non-interactive CCA2 secure commitments, in addition to sub-exponential security of the assumptions listed in the previous bullet.
Here, sub-exponentially CPA-compatible KeyGen refers to any key generation algorithm for which there exist encryption and decryption algorithms that result in a CPA-secure public-key encryption scheme {\em against sub-exponential adversaries}. This gives a way to perform CCA-secure encryption given any public-key infrastructure that has been established with only (sub-exponential) CPA security in mind. The resulting CCA encryption makes black-box use of the CPA scheme and all other underlying primitives.
2021
PKC
Rate-1 Key-Dependent Message Security via Reusable Homomorphic Extractor against Correlated-Source Attacks 📺
In this work, we first present general methods to construct information rate-1 PKE that is $\mathsf{KDM}^{(n)}$-secure with respect to block-affine functions for any unbounded polynomial $n$. To achieve this, we propose a new notion of extractor that satisfies reusability, homomorphism, and security against correlated-source attacks, and show how to use this extractor to improve the information rate of the KDM-secure PKE of Brakerski et al. (Eurocrypt 18). Then, we show how to amplify KDM security from the block-affine function class to general bounded-size circuits via a variant of the technique of Applebaum (Eurocrypt 11), achieving better efficiency. Furthermore, we show how to generalize these approaches to the IBE setting. Additionally, our PKE and IBE schemes are also leakage resilient, with leakage rates $1-o(1)$ against a slightly smaller yet still general class -- block leakage functions. We can instantiate the required building blocks from LWE or DDH.
2021
PKC
Round-optimal Verifiable Oblivious Pseudorandom Functions from Ideal Lattices 📺
Verifiable Oblivious Pseudorandom Functions (VOPRFs) are protocols that allow a client to learn verifiable pseudorandom function (PRF) evaluations on inputs of their choice. The PRF evaluations are computed by a server using their own secret key. The security of the protocol prevents both the server from learning anything about the client's input and the client from learning anything about the server's key. VOPRFs have many applications, including password-based authentication, secret sharing, anonymous authentication and efficient private set intersection. In this work, we construct the first round-optimal (online) VOPRF protocol that derives its security from well-known subexponential lattice hardness assumptions. Our protocol requires constructions of non-interactive zero-knowledge arguments of knowledge (NIZKAoK). Using recent developments in the area of post-quantum zero-knowledge arguments of knowledge, we show that our VOPRF may be securely instantiated in the quantum random oracle model. We construct such arguments as extensions of prior work in the area of lattice-based zero-knowledge proof systems.
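For orientation, the classic DH-based OPRF message flow (blind, evaluate, unblind) can be sketched in a few lines. This is not the paper's lattice-based protocol, and the verifiability component, where the server proves its key matches a public commitment, is omitted; group parameters and the hash-to-group map are illustrative toys.

    # Classic hashed-DH OPRF round, for intuition only (not the paper's
    # lattice-based construction; verifiability via NIZK is omitted).
    import hashlib, secrets

    q, p, g = 11, 23, 4                      # order-q subgroup mod safe prime p (insecure toy)

    def H1(x):                               # toy hash-to-group map
        e = int.from_bytes(hashlib.sha256(x).digest(), "big") % q
        return pow(g, e + 1, p)

    k = secrets.randbelow(q - 1) + 1         # server's secret PRF key

    x = b"client input"
    r = secrets.randbelow(q - 1) + 1         # client's blinding factor
    blinded = pow(H1(x), r, p)               # client -> server: H1(x)^r
    evaluated = pow(blinded, k, p)           # server -> client: H1(x)^(r*k)
    unblinded = pow(evaluated, pow(r, -1, q), p)  # client removes r
    assert unblinded == pow(H1(x), k, p)     # PRF value H1(x)^k; server never saw x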
2021
PKC
Two-Party Adaptor Signatures From Identification Schemes 📺
Adaptor signatures are a novel cryptographic primitive with important applications for cryptocurrencies. They have been used to construct second layer solutions such as payment channels or cross-currency swaps. The basic idea of an adaptor signature scheme is to tie the signing process to the revelation of a secret value in the sense that, much like a regular signature scheme, an adaptor signature scheme can authenticate messages, but simultaneously leaks a secret to certain parties. Recently, Aumayr et al. provided the first formalization of adaptor signature schemes, and presented provably secure constructions from ECDSA and Schnorr signatures. Unfortunately, the formalization and constructions given in that work have two limitations: (1) current schemes are limited to ECDSA and Schnorr signatures, and no generic transformation for constructing adaptor signatures is known; (2) they do not offer support for aggregated two-party signing, which can significantly reduce the blockchain footprint in applications of adaptor signatures. In this work, we address these two shortcomings. First, we show that signature schemes that are constructed from identification (ID) schemes, which additionally satisfy certain homomorphic properties, can generically be transformed into adaptor signature schemes. We further provide an impossibility result which proves that unique signature schemes (e.g., the BLS scheme) cannot be transformed into an adaptor signature scheme. In addition, we define two-party adaptor signature schemes with aggregatable public keys and show how to instantiate them via a generic transformation from ID-based signature schemes. Finally, we give instantiations of our generic transformations for the Schnorr, Katz-Wang and Guillou-Quisquater signature schemes.
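To make the idea of tying signing to the revelation of a secret concrete, here is a minimal Schnorr-style adaptor-signature sketch (toy group parameters and hashing chosen for illustration only; real instantiations use elliptic-curve groups, and the paper's transformation from ID schemes is more general). Completing the pre-signature with the witness y yields an ordinary Schnorr signature, and anyone holding both the pre-signature and the full signature extracts y, which is exactly the leakage described above.

    # Minimal Schnorr-style adaptor signature (illustrative sketch).
    # Y = g^y is the adaptor point; y is the secret to be revealed.
    import hashlib, secrets

    q, p, g = 11, 23, 4              # order-q subgroup mod safe prime p (insecure toy)

    def H(*vals):                    # challenge hash into Z_q
        data = "|".join(map(str, vals)).encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def pre_sign(x, X, Y, m):        # challenge is computed on the adapted nonce R*Y
        r = secrets.randbelow(q - 1) + 1
        R = pow(g, r, p)
        c = H(X, R * Y % p, m)
        return R, (r + c * x) % q    # pre-signature (R, s')

    def pre_verify(X, Y, m, R, s_pre):
        return pow(g, s_pre, p) == R * pow(X, H(X, R * Y % p, m), p) % p

    def adapt(s_pre, y):             # completing with witness y yields a
        return (s_pre + y) % q       # standard Schnorr signature (R*Y, s)

    def extract(s, s_pre):           # both signatures together leak y
        return (s - s_pre) % q

    x = secrets.randbelow(q - 1) + 1; X = pow(g, x, p)   # signing key pair
    y = secrets.randbelow(q - 1) + 1; Y = pow(g, y, p)   # adaptor secret/point
    R, s_pre = pre_sign(x, X, Y, "msg")
    assert pre_verify(X, Y, "msg", R, s_pre)
    s, Rf = adapt(s_pre, y), R * Y % p
    assert pow(g, s, p) == Rf * pow(X, H(X, Rf, "msg"), p) % p  # ordinary Schnorr check
    assert extract(s, s_pre) == y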
2021
PKC
The Convergence of Slide-type Reductions 📺
In this work, we apply the dynamical systems analysis of Hanrot et al. (CRYPTO'11) to a class of lattice block reduction algorithms that includes (natural variants of) slide reduction and block-Rankin reduction. This implies sharper bounds on the polynomial running times (in the query model) for these algorithms and opens the door to faster practical variants of slide reduction. We give heuristic arguments showing that such variants can indeed speed up slide reduction significantly in practice. This is confirmed by experimental evidence, which also shows that our variants are competitive with state-of-the-art reduction algorithms.
2021
PKC
Revisiting (R)CCA Security and Replay Protection 📺
This paper takes a fresh approach to systematically characterizing, comparing, and understanding CCA-type security definitions for public-key encryption (PKE), a topic with a long history. The justification for a concrete security definition X is relative to a benchmark application (e.g. confidential communication): Does the use of a PKE scheme satisfying X imply the security of the application? Because unnecessarily strong definitions may lead to unnecessarily inefficient schemes or unnecessarily strong computational assumptions, security definitions should be as weak as possible, i.e. as close as possible to (but above) the benchmark. Understanding the hierarchy of security definitions, partially ordered by the implication (i.e. at least as strong) relation, is hence important, as is placing the relevant applications as benchmark levels within the hierarchy. CCA-2 security is apparently the strongest notion, but because it is arguably too strong, Canetti, Krawczyk, and Nielsen (Crypto 2003) proposed the relaxed notion of Replayable CCA (RCCA) security as perhaps the weakest meaningful definition, and they investigated the space between CCA and RCCA security by proposing two versions of Detectable RCCA (d-RCCA) security, which are meant to ensure that replays of ciphertexts are either publicly or secretly detectable (and hence preventable). The contributions of this paper are three-fold. First, following the work of Coretti, Maurer, and Tackmann (Asiacrypt 2013), we formalize the three benchmark applications of PKE that serve as the natural motivation for security notions, namely the construction of certain types of (possibly replay-protected) confidential channels (from an insecure and an authenticated communication channel). Second, we prove that RCCA does not achieve the confidentiality benchmark and, contrary to previous belief, that the proposed d-RCCA notions are not even relaxations of CCA-2 security. Third, we propose the natural security notions corresponding to the three benchmarks: an appropriately strengthened version of RCCA to ensure confidentiality, as well as two notions for capturing public and secret replay detectability.
2021
PKC
On the Success Probability of Solving Unique SVP via BKZ 📺
As lattice-based key encapsulation, digital signature, and fully homomorphic encryption schemes near standardisation, ever more focus is being directed to the precise estimation of the security of these schemes. The primal attack reduces key recovery against such schemes to instances of the unique Shortest Vector Problem (uSVP). Dachman-Soled et al. (Crypto 2020) recently proposed a new approach for fine-grained estimation of the cost of the primal attack when using Progressive BKZ for lattice reduction. In this paper we review and extend their technique to BKZ 2.0 and provide extensive experimental evidence of its accuracy. Using this technique we also explain results from previous primal attack experiments by Albrecht et al. (Asiacrypt 2017) where attacks succeeded with smaller than expected block sizes. Finally, we use our simulators to reestimate the cost of attacking the three lattice KEM finalists of the NIST Post Quantum Standardisation Process.
2021
PKC
BETA: Biometric-Enabled Threshold Authentication 📺
In the past decades, user authentication has been dominated by server-side password-based solutions that rely on “what users know”. This approach is susceptible to breaches and phishing attacks, and poses usability challenges. As a result, the industry is gradually moving to biometric-based client-side solutions that do not store any secret information on servers. This shift necessitates the safe storage of biometric templates and private keys, which are used to generate tokens, on user devices. We propose a new generic framework called Biometric Enabled Threshold Authentication (BETA) to protect sensitive client-side information like biometric templates and cryptographic keys. Towards this, we formally introduce the notion of Fuzzy Threshold Tokenizer (FTT) where an initiator can use a “close” biometric measurement to generate an authentication token if at least t (the threshold) devices participate. We require that the devices only talk to the initiator, and not to each other, to capture the way user devices are connected in the real world. We use the universal composability (UC) framework to model the security properties of FTT, including the unforgeability of tokens and the privacy of the biometric values (template and measurement), under a malicious adversary. We construct three protocols that meet our definition. Our first two protocols are general feasibility results that work for any distance function, any threshold t and tolerate the maximal (i.e. t-1) amount of corruption. They are based on any two-round UC-secure multi-party computation protocol in the standard model (with a CRS) and threshold fully homomorphic encryption, respectively. We show how to effectively use these primitives to build protocols in a constrained communication model with just four rounds of communication. For the third protocol, we consider inner-product based distance metrics (cosine similarity, Euclidean distance, etc.) specifically, motivated by the recent interest in their use for face recognition. We use Paillier encryption, efficient NIZKs for specific languages, and a simple garbled circuit to build an efficient protocol for the common case of n=3 devices with one compromised device.
2021
PKC
Masked Triples: Amortizing Multiplication Triples across Conditionals 📺
A classic approach to MPC uses preprocessed multiplication triples to evaluate arbitrary Boolean circuits. If the target circuit features conditional branching, e.g., as the result of an IF program statement, then triples are wasted: one triple is consumed per AND gate, even if the output of the gate is entirely discarded by the circuit’s conditional behavior. In this work, we show that multiplication triples can be re-used across conditional branches. For a circuit with b branches, each having n AND gates, we need only a total of n triples, rather than the typically required b·n. Because preprocessing triples is often the most expensive step in protocols that use them, this significantly improves performance. Prior work similarly amortized oblivious transfers across branches in the classic GMW protocol (Heath et al., Asiacrypt 2020, [HKP20]). In addition to demonstrating that conditional improvements are possible for a different class of protocols, we also concretely improve over [HKP20]: their maximum improvement is bounded by the topology of the circuit. Our protocol yields improvement independent of topology: we need triples proportional to the size of the program’s longest execution path, regardless of the structure of the program branches. We implemented our approach in C++. Our experiments show that we significantly improve over a "naive" protocol and over prior work: for a circuit with 16 branches and in terms of total communication, we improved over naive by 12x and over [HKP20] by an average of 2.6x. Our protocol is secure against the semi-honest corruption of p-1 parties.
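To make the one-triple-per-AND accounting concrete, here is a minimal two-party, semi-honest evaluation of a single AND gate from a Beaver triple over GF(2) (a from-scratch sketch of the baseline technique, not the paper's branching protocol):

    # One AND gate via a preprocessed Beaver triple, XOR secret shares over GF(2).
    import secrets

    def share(bit):                          # additive (XOR) secret sharing
        s0 = secrets.randbelow(2)
        return s0, bit ^ s0

    def triple():                            # preprocessed: c = a AND b, all shared
        a, b = secrets.randbelow(2), secrets.randbelow(2)
        return share(a), share(b), share(a & b)

    def and_gate(x_sh, y_sh, trip):          # consumes exactly one triple
        (a0, a1), (b0, b1), (c0, c1) = trip
        d = x_sh[0] ^ a0 ^ x_sh[1] ^ a1      # opened value d = x ^ a
        e = y_sh[0] ^ b0 ^ y_sh[1] ^ b1      # opened value e = y ^ b
        z0 = c0 ^ (d & b0) ^ (e & a0) ^ (d & e)  # party 0 adds the public d&e term
        z1 = c1 ^ (d & b1) ^ (e & a1)
        return z0, z1                        # z0 ^ z1 == x AND y

    for x in (0, 1):
        for y in (0, 1):
            z0, z1 = and_gate(share(x), share(y), triple())
            assert z0 ^ z1 == (x & y)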
2021
PKC
Multi-Party Threshold Private Set Intersection with Sublinear Communication 📺
In multi-party threshold private set intersection (PSI), $n$ parties each with a private set wish to compute the intersection of their sets if the intersection is sufficiently large. Previously, Ghosh and Simkin (CRYPTO 2019) studied this problem for the two-party case and demonstrated interesting lower and upper bounds on the communication complexity. In this work, we investigate the communication complexity of the multi-party setting $(n\geq 2)$. We consider two functionalities for multi-party threshold PSI. In the first, parties learn the intersection if each of their sets and the intersection differ by at most $T$. In the second functionality, parties learn the intersection if the union of all their sets and the intersection differ by at most $T$. For both functionalities, we show that any protocol must have communication complexity $\Omega(nT)$. We build protocols with a matching upper bound of $O(nT)$ communication complexity for both functionalities assuming threshold FHE. We also construct a computationally more efficient protocol for the second functionality with communication complexity $\widetilde{O}(nT)$ under a weaker assumption of threshold additive homomorphic encryption. As a direct implication, we solve one of the open problems in the work of Ghosh and Simkin (CRYPTO 2019) by designing a two-party protocol with communication cost $\widetilde{O}(T)$ from assumptions weaker than FHE. As a consequence of our results, we achieve the first "regular" multi-party PSI protocol where the communication complexity only grows with the size of the set difference and does not depend on the size of the input sets.
2021
TCC
Somewhere Statistical Soundness, Post-Quantum Security, and SNARGs 📺
The main conceptual contribution of this paper is a unification of two leading paradigms for constructing succinct argument systems, namely Kilian's protocol and the BMW (Biehl-Meyer-Wetzel) heuristic. We define the notion of a multi-extractable somewhere statistically binding (meSSB) hash family, an extension of the notion of somewhere statistically binding hash functions (Hubacek and Wichs, ITCS 2015), and construct it from LWE. We show that when instantiating Kilian's protocol with a meSSB hash family, the first two messages are simply an instantiation of the BMW heuristic. Therefore, if we also instantiate it with a PCP for which the BMW heuristic is sound, e.g., a computational non-signaling PCP, then the first two messages of Kilian's protocol are a sound instantiation of the BMW heuristic. This leads us to two technical results. First, we show how to efficiently convert any succinct non-interactive argument (SNARG) for BatchNP into a SNARG for any language that has a computational non-signaling PCP. Put together with the recent and independent result of Choudhuri, Jain and Jin (ePrint 2021/808), which constructs a SNARG for BatchNP from LWE, we get a SNARG for any language that has a computational non-signaling PCP, including any language in P, but also any language in NTISP (non-deterministic bounded space), from LWE. Second, we introduce the notion of a somewhere statistically sound (SSS) interactive argument, which is a hybrid between a statistically sound proof and a computationally sound proof (a.k.a. an argument), and * prove that Kilian's protocol, instantiated as above, is an SSS argument; * show that the soundness of SSS arguments can be proved in a straight-line manner, implying that they are also post-quantum sound if the underlying assumption is post-quantum secure; and * conjecture that constant-round SSS arguments can be soundly converted into non-interactive arguments via the Fiat-Shamir transformation.
2021
TCC
On Derandomizing Yao’s Weak-to-Strong OWF Construction 📺
The celebrated result of Yao (FOCS'82) shows that concatenating n · p(n) copies of a weak one-way function f which can be inverted with probability 1 - 1/p(n) suffices to construct a strong one-way function g, showing that weak and strong one-way functions are black-box equivalent. This direct product theorem for hardness amplification of one-way functions has been very influential. However, the construction of Yao has severe efficiency limitations; in particular, it is not security-preserving (the input to g needs to be much larger than the input to f). Understanding whether this is inherent is an intriguing and long-standing open question. In this work, we explore necessary features of constructions which achieve short input length by proving the following: for any direct product construction of a strong OWF g from a weak OWF f, which can be inverted with probability 1-1/p(n), the input size of g must grow as $\Omega(p(n))$. By direct product construction, we refer to any construction with the following structure: the construction g executes some arbitrary pre-processing function (independent of f) on its input, obtaining a vector $(y_1, \dots, y_l)$, and outputs $f(y_1), \dots, f(y_l)$. Note that Yao's construction is obtained by setting the pre-processing to be the identity. Our result generalizes to functions g with post-processing, as long as the post-processing function is not too lossy. Thus, in essence, any weak-to-strong hardness amplification must either (1) be very far from security-preserving, (2) use adaptivity, or (3) be very far from a direct-product structure (in the sense of having a very lossy post-processing of the outputs of f). On a technical level, we use ideas from lower bounds for secret-sharing to prove the impossibility of derandomizing Yao in a black-box way. Our results are in line with Goldreich, Impagliazzo, Levin, Venkatesan, and Zuckerman (FOCS 1990) who derandomize Yao's construction for regular weak one-way functions by evaluating the OWF along a random walk on an expander graph; the construction is adaptive, since it alternates steps on the expander graph with evaluations of the weak one-way function.
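The direct-product shape is easy to state in code: g splits its input into independent blocks and applies f to each, so an inverter must invert every copy, and the success probability drops from $1 - 1/p(n)$ per copy to at most $(1 - 1/p(n))^{n \cdot p(n)} \le e^{-n}$ overall. A schematic sketch, where f is only a stand-in placeholder:

    # Yao's direct-product structure, schematically: g applies the weak OWF f
    # to many independent input blocks (f here is a placeholder, not a real OWF).
    import hashlib

    def f(block):
        return hashlib.sha256(block).digest()

    def g(x, copies, block_len):          # input length grows as copies * block_len
        assert len(x) == copies * block_len
        blocks = [x[i * block_len:(i + 1) * block_len] for i in range(copies)]
        return [f(b) for b in blocks]     # inverting g means inverting every copy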
2021
TCC
Information-Theoretically Secure MPC against Mixed Dynamic Adversaries 📺
In this work we consider information-theoretically secure MPC against a mixed adversary who can corrupt $t_p$ parties passively, $t_a$ parties actively, and can make $t_f$ parties fail-stop. With perfect security, it is known that every function can be computed securely if and only if $3t_a + 2t_p + t_f < n$; for statistical security the bound is $2t_a + 2t_p + t_f < n$. These results say that for each given set of parameters $(t_a, t_p, t_f)$ respecting the inequality, there exists a protocol secure against this particular choice of corruption thresholds. In this work we consider a dynamic adversary. Here, the goal is a single protocol that is secure, no matter which set of corruption thresholds $(t_a, t_p, t_f)$ from a certain class is chosen by the adversary. A dynamic adversary can choose a corruption strategy after seeing the protocol and so is much stronger than a standard adversary. Dynamically secure protocols have been considered before for computational security. The information-theoretic case has also been studied, but only for non-threshold adversaries, leading to inefficient protocols. We consider threshold dynamic adversaries and information-theoretic security. For statistical security we show that efficient dynamic secure function evaluation (SFE) is possible if and only if $2t_a + 2t_p + t_f < n$, but any dynamically secure protocol must use $\Omega(n)$ rounds, even if only fairness is required. Further, general reactive MPC is possible if we assume in addition that $2t_a + 2t_f \leq n$, but fair reactive MPC only requires $2t_a + 2t_p + t_f < n$. For perfect security we show that both dynamic SFE and verifiable secret sharing (VSS) are impossible if we only assume $3t_a + 2t_p + t_f < n$, and they remain impossible even if we also assume $t_f = 0$. In fact, even SFE with security with abort is impossible in this case. On the other hand, perfect dynamic SFE with guaranteed output delivery (G.O.D.) is possible when either $t_p = 0$ or $t_a = 0$, i.e., if instead we assume $3t_a + t_f < n$ or $2t_p + t_f < n$. Further, perfect dynamic VSS with G.O.D. is possible under the stronger conditions $3t_a + \tfrac{3}{2}t_f \leq n$ or $2t_p + 2t_f \leq n$. These conditions are also sufficient for perfect reactive MPC. On the other hand, because perfect fair VSS only requires $3t_a + 2t_p + t_f < n$, perfect reactive MPC is possible whenever perfect SFE is.
2021
TCC
Random-Index PIR and Applications 📺
Private information retrieval (PIR) lets a client retrieve an entry from a database without the server learning which entry was retrieved. Here we study a weaker variant that we call random-index PIR (RPIR), where the retrieved index is an output rather than an input of the protocol, and is chosen at random. RPIR is clearly weaker than PIR, but it suffices for some interesting applications and may be realized more efficiently than full-blown PIR. We report here on two lines of work, both tied to RPIR but otherwise largely unrelated. The first line of work studies RPIR as a primitive on its own. Perhaps surprisingly, we show that RPIR is in fact equivalent to PIR when there are no restrictions on the number of communication rounds. On the other hand, RPIR can be implemented in a “noninteractive” setting (with preprocessing), which is clearly impossible for PIR. For two-server RPIR we show a truly noninteractive solution, offering information-theoretic security without any pre-processing. The other line of work, which was the original motivation for our work, uses RPIR to improve on the recent work of Benhamouda et al. (TCC’20) for maintaining secret values on public blockchains. Their solution depends on a method for selecting many random public keys from a PKI while hiding most of the selected keys from an adversary. However, the method they proposed is vulnerable to a double-dipping attack, limiting its resilience. Here we observe that an RPIR protocol, where the client is implemented via secure MPC, can eliminate that vulnerability. We thus get a secrets-on-blockchain protocol (and more generally large-scale MPC), resilient to any fraction f < 1/2 of corrupted parties, resolving the main open problem left from the work of Benhamouda et al. As the client in this solution is implemented via secure MPC, it really brings home the need to make it as efficient as possible. We thus strive to explore whatever efficiency gains we can get by using RPIR rather than PIR. We achieve more gains by using batch RPIR where multiple indexes are retrieved at once. Lastly, we observe that this application can make do with a weaker security guarantee than full RPIR, and show that this weaker variant can be realized even more efficiently. We discuss one protocol in particular, that may be attractive for practical implementations.
2021
TCC
Round-Efficient Byzantine Agreement and Multi-Party Computation with Asynchronous Fallback 📺
Protocols for Byzantine agreement (BA) and secure multi-party computation (MPC) can be classified according to the underlying communication model. The two most commonly considered models are the synchronous one and the asynchronous one. Synchronous protocols typically lose their security guarantees as soon as the network violates the synchrony assumptions. Asynchronous protocols remain secure regardless of the network conditions, but achieve weaker security guarantees even when the network is synchronous. Recent works by Blum, Katz and Loss [TCC'19], and Blum, Liu-Zhang and Loss [CRYPTO'20] introduced BA and MPC protocols achieving security guarantees in both settings: security up to $t_s$ corruptions in a synchronous network, and up to $t_a$ corruptions in an asynchronous network, under the provably optimal threshold trade-offs $t_a \le t_s$ and $t_a + 2t_s < n$. However, current solutions incur a high synchronous round complexity when compared to state-of-the-art purely synchronous protocols. When the network is synchronous, the round complexity of BA protocols is linear in the number of parties, and the round complexity of MPC protocols also depends linearly on the depth of the circuit to evaluate. In this work, we provide round-efficient constructions for both primitives with optimal resilience: fixed-round and expected constant-round BA protocols, and an MPC protocol whose round complexity is independent of the circuit depth.
2021
TCC
Ring-based Identity Based Encryption – Asymptotically Shorter MPK and Tighter Security 📺
This work constructs an identity-based encryption scheme from the ring learning with errors assumption (RLWE), with shorter master public keys and tighter security analysis. To achieve this, we develop three new methods: (1) a new homomorphic equality test method using the nice algebraic structures of rings, (2) a new family of hash functions with natural homomorphic evaluation algorithms, and (3) a new insight for tighter reduction analyses. These methods can be used to improve other important cryptographic tasks, and thus are of general interest. In particular, our homomorphic equality test method yields a new method for packing/unpacking GSW-style encodings, showing a new non-trivial advantage of RLWE over the plain LWE. Moreover, our new insight for tighter analyses can improve the analyses of all the currently known partition-based IBE designs, achieving the best of both prior analytical frameworks, those of Waters (Eurocrypt ’05) and of Bellare and Ristenpart (Eurocrypt ’09).
2021
TCC
Forward Secret Encrypted RAM: Lower Bounds and Applications 📺
In this paper, we study forward secret encrypted RAMs (FS eRAMs) which enable clients to outsource the storage of an n-entry array to a server. In the case of a catastrophic attack where both client and server storage are compromised, FS eRAMs guarantee that the adversary may not recover any array entries that were deleted or overwritten prior to the attack. A simple folklore FS eRAM construction with $O(\log n)$ overhead has been known for at least two decades. Unfortunately, no progress has been made since then. We show the lack of progress is fundamental by presenting an $\Omega(\log n)$ lower bound for FS eRAMs, proving that the folklore solution is optimal. To do this, we introduce the symbolic model for proving cryptographic data-structure lower bounds, which may be of independent interest. Given this limitation, we investigate applications where forward secrecy may be obtained without the additional $O(\log n)$ overhead. We show this is possible for oblivious RAMs, memory checkers, and multicast encryption by incorporating the ideas of the folklore FS eRAM solution into carefully chosen constructions of the corresponding primitives.
2021
TCC
Cryptographic Shallots: A Formal Treatment of Repliable Onion Encryption 📺
Onion routing is a popular, efficient, and scalable method for enabling anonymous communications. To send a message m to Bob via onion routing, Alice picks several intermediaries, wraps m in multiple layers of encryption --- a layer per intermediary --- and sends the resulting “onion” to the first intermediary. Each intermediary “peels off” a layer of encryption and learns the identity of the next entity on the path and what to send along; finally Bob learns that he is the recipient and recovers the message m. Despite its wide use in the real world (e.g., Mixminion), the foundations of onion routing have not been thoroughly studied. In particular, although two-way communication is needed in most instances, such as anonymous Web browsing or anonymous access to a resource, until now no definitions or provably secure constructions have been given for two-way onion routing. Moreover, the security definitions that existed even for one-way onion routing were found to have significant flaws. In this paper, we (1) propose an ideal functionality for a repliable onion encryption scheme; (2) give a game-based definition for repliable onion encryption and show that it is sufficient to realize our ideal functionality; and finally (3), as our main result, construct a repliable onion encryption scheme that satisfies our definitions.
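The wrap-and-peel mechanics can be made concrete with a toy round trip (an illustrative sketch using an XOR stream cipher in place of real encryption; it provides none of the integrity, unlinkability, or reply functionality that the paper's definitions demand):

    # Toy layered onion: wrap under each hop's key, peel one layer per hop.
    import hashlib, pickle

    def stream(key, n):                       # SHA-256-based keystream
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

    def enc(key, data):                       # XOR stream cipher: enc == dec
        return bytes(a ^ b for a, b in zip(data, stream(key, len(data))))

    def wrap(message, hops):
        """hops: [(name, key), ...]; the last hop is the recipient."""
        onion = enc(hops[-1][1], pickle.dumps(("recipient", message)))
        for i in range(len(hops) - 2, -1, -1):
            onion = enc(hops[i][1], pickle.dumps((hops[i + 1][0], onion)))
        return hops[0][0], onion

    def peel(key, onion):                     # one layer off: (next hop, payload)
        return pickle.loads(enc(key, onion))

    hops = [("relay1", b"k1"), ("relay2", b"k2"), ("bob", b"kb")]
    first, onion = wrap(b"hello", hops)
    nxt, onion = peel(b"k1", onion)           # relay1 learns only the next hop
    nxt, onion = peel(b"k2", onion)
    tag, msg = peel(b"kb", onion)
    assert (first, nxt, tag, msg) == ("relay1", "bob", "recipient", b"hello")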
2021
TCC
Distributed Merkle's Puzzles 📺
Merkle's puzzles were proposed in 1974 by Ralph Merkle as a key agreement protocol between two players based on symmetric-key primitives. In order to agree on a secret key, each player makes $T$ queries to a random function (oracle), while any eavesdropping adversary has to make $\Omega(T^2)$ queries to the random oracle in order to recover the key with high probability. The quadratic gap between the query complexity of the honest players and the eavesdropper was shown to be optimal by Barak and Mahmoody [CRYPTO`09]. We consider Merkle's puzzles in a distributed setting, where the goal is to allow all pairs among $M$ honest players with access to a random oracle to agree on secret keys. We devise a protocol in this setting, where each player makes $T$ queries to the random oracle and communicates at most $T$ bits, while any adversary has to make $\Omega(M \cdot T^2)$ queries to the random oracle (up to logarithmic factors) in order to recover any one of the keys with high probability. Therefore, the amortized (per-player) complexity of achieving secure communication (for a fixed security level) decreases with the size of the network. Finally, we prove that the gap of $T \cdot M$ between the query complexity of each honest player and the eavesdropper is optimal.
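For intuition, the classic two-player protocol fits in a few lines (a toy sketch: SHA-256 plays the random oracle, the domain has size $T^2$, and a birthday collision between the players' query sets yields the shared secret, while an eavesdropper who sees only the exchanged tags must search essentially the whole domain):

    # Two-party Merkle-puzzle key agreement in the random-oracle model (toy).
    import hashlib, secrets

    T = 64
    DOMAIN = T * T           # eavesdropper must search ~T^2 points

    def oracle(x):
        return hashlib.sha256(f"ro|{x}".encode()).hexdigest()

    def sample_queries():    # T oracle queries on random domain points
        pts = set()
        while len(pts) < T:
            pts.add(secrets.randbelow(DOMAIN))
        return {oracle(x): x for x in pts}

    while True:              # a birthday collision appears with constant probability
        alice, bob = sample_queries(), sample_queries()
        common = set(alice) & set(bob)   # the players exchange only these tags
        if common:
            break

    tag = min(common)
    assert alice[tag] == bob[tag]        # the common point is the shared secret
    key = oracle(alice[tag] + DOMAIN)    # derive the key outside the tag range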
2021
TCC
Secure Quantum Computation with Classical Communication 📺
The study of secure multi-party computation (MPC) has thus far been limited to the following two settings: every party is fully classical, or every party has quantum capabilities. This paper studies a notion of MPC that allows some classical and some quantum parties to securely compute a quantum functionality over their joint private inputs. In particular, we construct constant-round \emph{composable} protocols for blind and verifiable classical delegation of quantum computation, and give applications to secure quantum computation with classical communication. Assuming QLWE (the quantum hardness of learning with errors), we obtain the following (maliciously-secure) protocols for computing any BQP (bounded-error quantum polynomial-time) functionality. - A six-round protocol between one quantum server and multiple classical clients in the CRS (common random string) model. - A three-round protocol between one quantum server and multiple classical clients in the PKI (public-key infrastructure) + QRO (quantum random oracle) model. - A two-message protocol between quantum sender and classical receiver (a quantum non-interactive secure computation protocol), in the QRO model. To enable composability of classical verification of quantum computation, we require the notion of \emph{malicious blindness}, which stipulates that the prover does not learn anything about the verifier's delegated computation, even if it is able to observe whether or not the verifier accepted the proof. To construct a protocol with malicious blindness, we use a classical verification protocol for sampBQP computation (Chung et al., Arxiv 2020), which in general has inverse polynomial soundness error, to prove honest evaluation of QFHE (quantum fully-homomorphic encryption) ciphertexts with negligible soundness error. Obtaining a constant-round protocol requires a strong parallel repetition theorem for classical verification of quantum computation, which we show following the "nearly orthogonal projector" proof strategy (Alagic et al., TCC 2020).
2021
TCC
Black-Box Impossibilities of Obtaining 2-Round Weak ZK and Strong WI from Polynomial Hardness 📺
We study the problem of obtaining 2-round interactive arguments for NP with weak zero-knowledge (weak ZK) [Dwork et al., 2003] or with strong witness indistinguishability (strong WI) [Goldreich, 2001] under polynomially hard falsifiable assumptions. We consider both the delayed-input setting [Jain et al., 2017] and the standard non-delayed-input setting, where in the delayed-input setting, (i) prover privacy is only required to hold against delayed-input verifiers (which learn statements in the last round of the protocol) and (ii) soundness is required to hold even against adaptive provers (which choose statements in the last round of the protocol). Concretely, we show the following black-box (BB) impossibility results by relying on standard cryptographic primitives. 1. It is impossible to obtain 2-round delayed-input weak ZK arguments under polynomially hard falsifiable assumptions if BB reductions are used to prove soundness. This result holds even when non-black-box techniques are used to prove weak ZK. 2. It is impossible to obtain 2-round non-delayed-input strong WI arguments and 2-round publicly verifiable delayed-input strong WI arguments under polynomially hard falsifiable assumptions if a natural type of BB reductions, called "oblivious" BB reductions, are used to prove strong WI. 3. It is impossible to obtain 2-round delayed-input strong WI arguments under polynomially hard falsifiable assumptions if BB reductions are used to prove both soundness and strong WI (the BB reductions for strong WI are required to be oblivious as above). Compared with the above result, this result no longer requires public verifiability in the delayed-input setting.
2021
TCC
Two-Round Maliciously Secure Computation with Super-Polynomial Simulation 📺
We propose the first maliciously secure multi-party computation (MPC) protocol for general functionalities in two rounds, without any trusted setup. Since polynomial-time simulation is impossible in two rounds, we achieve the relaxed notion of superpolynomial-time simulation security [Pass, EUROCRYPT 2003]. Prior to our work, no such maliciously secure protocols were known even in the two-party setting for functionalities where both parties receive outputs. Our protocol is based on the sub-exponential security of standard assumptions plus a special type of non-interactive non-malleable commitment. At the heart of our approach is a two-round multi-party conditional disclosure of secrets (MCDS) protocol in the plain model from bilinear maps, which is constructed from techniques introduced in [Benhamouda and Lin, TCC 2020].
2021
TCC
Continuously Non-Malleable Secret Sharing: Joint Tampering, Plain Model and Capacity 📺
We study non-malleable secret sharing against joint leakage and joint tampering attacks. Our main result is the first threshold secret sharing scheme in the plain model achieving resilience to noisy-leakage and continuous tampering. The above holds under (necessary) minimal computational assumptions (i.e., the existence of one-to-one one-way functions), and in a model where the adversary commits to a fixed partition of all the shares into non-overlapping subsets of at most t - 1 shares (where t is the reconstruction threshold), and subsequently jointly leaks from and tampers with the shares within each partition. We also study the capacity (i.e., the maximum achievable asymptotic information rate) of continuously non-malleable secret sharing against joint continuous tampering attacks. In particular, we prove that whenever the attacker can tamper jointly with k > t/2 shares, the capacity is at most t - k. The rate of our construction matches this upper bound. An important corollary of our results is the first non-malleable secret sharing scheme against independent tampering attacks breaking the rate-one barrier (under the same computational assumptions as above).
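For context, the baseline primitive being hardened here is plain t-out-of-n (Shamir) threshold secret sharing; a minimal sketch, with an illustrative field size and none of the paper's leakage resilience or non-malleability machinery:

    # t-out-of-n Shamir secret sharing over a prime field (illustrative only).
    import secrets

    P = 2**61 - 1  # Mersenne prime field

    def share(secret, t, n):
        # Random degree-(t-1) polynomial with constant term = secret.
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
        return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
                for i in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the secret.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = share(123456789, t=3, n=5)
    assert reconstruct(shares[:3]) == 123456789   # any t shares suffice
    assert reconstruct(shares[1:4]) == 123456789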
2021
TCC
Tight Security Bounds for Micali’s SNARGs 📺
Succinct non-interactive arguments (SNARGs) in the random oracle model (ROM) have several attractive features: they are plausibly post-quantum; they can be heuristically instantiated via lightweight cryptography; and they have a transparent (public-coin) parameter setup. The canonical construction of a SNARG in the ROM is due to Micali (FOCS 1994), who showed how to use a random oracle to compile any probabilistically checkable proof (PCP) with sufficiently-small soundness error into a corresponding SNARG. Yet, while Micali's construction is a seminal result, it has received little attention in terms of analysis in the past 25 years. In this paper, we observe that prior analyses of the Micali construction are not tight and then present a new analysis that achieves tight security bounds. Our result enables reducing the random oracle's output size and obtaining corresponding savings in concrete argument size. Departing from prior work, our approach relies on precisely quantifying the cost for an attacker to find several collisions and inversions in the random oracle, and proving that any PCP with small soundness error withstands attackers that succeed in finding a small number of collisions and inversions in a certain tree-based information-theoretic game.
2021
TCC
Secure Software Leasing from Standard Assumptions 📺
Secure software leasing (SSL) is a quantum cryptographic primitive that enables an authority to lease software to a user by encoding it into a quantum state. SSL prevents users from generating authenticated pirated copies of leased software, where authenticated copies indicate those run on legitimate platforms. Although SSL is a relaxed variant of quantum copy protection, which prevents users from generating any copy of leased software, it is still meaningful and attractive. Recently, Ananth and La Placa proposed the first SSL scheme. It satisfies a strong security notion called infinite-term security. On the other hand, it has the drawback of being based on public-key quantum money, which has so far not been instantiated from standard cryptographic assumptions. Moreover, their scheme only supports a subclass of evasive functions. In this work, we present SSL schemes that satisfy a security notion called finite-term security based on the learning with errors assumption (LWE). Finite-term security is weaker than infinite-term security, but it still provides a reasonable security guarantee. Specifically, our contributions consist of the following. - We construct a finite-term secure SSL scheme for pseudorandom functions from the LWE assumption against quantum adversaries. - We construct a finite-term secure SSL scheme for a subclass of evasive functions from the LWE assumption against sub-exponential quantum adversaries. - We construct finite-term secure SSL schemes for the functionalities above with classical communication from the LWE assumption against (sub-exponential) quantum adversaries. SSL with classical communication means that entities exchange only classical information though they run quantum computation locally. Our crucial tool is two-tier quantum lightning, which is introduced in this work and is a relaxed version of quantum lightning. In two-tier quantum lightning schemes, we have a public verification algorithm called semi-verification and a private verification algorithm called full-verification. An adversary cannot generate two (possibly entangled) quantum states with the same serial number such that one passes semi-verification and the other passes full-verification. We show that we can construct a two-tier quantum lightning scheme from the LWE assumption.
2021
TCC
Simple Constructions from (Almost) Regular One-Way Functions 📺
Two of the most useful cryptographic primitives that can be constructed from one-way functions are pseudorandom generators (PRGs) and universal one-way hash functions (UOWHFs). In order to implement them in practice, the efficiency of such constructions must be considered. The three major efficiency measures are: the seed length, the call complexity to the one-way function, and the adaptivity of these calls. Still, the optimal efficiency of these constructions is not yet fully understood: there exist gaps between the known upper bound and the known lower bound for black-box constructions. A special class of one-way functions called unknown-regular one-way functions is much better understood. Haitner, Harnik and Reingold (CRYPTO 2006) presented a PRG construction with semi-linear seed length and linear number of calls based on a method called randomized iterate. Ames, Gennaro and Venkitasubramaniam (TCC 2012) then gave a construction of UOWHF with similar parameters and using similar ideas. On the other hand, Holenstein and Sinha (FOCS 2012) and Barhum and Holenstein (TCC 2013) showed an almost linear call-complexity lower bound for black-box constructions of PRGs and UOWHFs from one-way functions. Hence Haitner et al. and Ames et al. reached tight constructions (in terms of seed length and the number of calls) of PRGs and UOWHFs from regular one-way functions. These constructions, however, are adaptive. In this work, we present non-adaptive constructions for both primitives which match the optimal call-complexity given by Holenstein and Sinha and Barhum and Holenstein. Our constructions, besides being simple and non-adaptive, are also robust for almost-regular one-way functions.
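For concreteness, the randomized-iterate pattern underlying these constructions can be sketched as follows (f and the h_i below are illustrative stand-ins; the actual constructions use a pairwise-independent hash family with explicit public seeds):

    # The randomized-iterate pattern: interleave the one-way function f with
    # public randomizing hash functions h_i (both are placeholders here).
    import hashlib

    def f(x):                                   # placeholder one-way function
        return hashlib.sha256(b"f|" + x).digest()

    def h(i, x):                                # placeholder randomizing hash
        return hashlib.sha256(b"h|%d|" % i + x).digest()

    def randomized_iterate(x, rounds):
        y = f(x)
        for i in range(rounds):                 # y_i = f(h_i(y_{i-1}))
            y = f(h(i, y))
        return y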
2021
TCC
On the Impossibility of Purely Algebraic Signatures 📺
The existence of one-way functions implies secure digital signatures, but not public-key encryption (at least in a black-box setting). Somewhat surprisingly, though, efficient public-key encryption schemes appear to be much easier to construct from concrete algebraic assumptions (such as factoring or Diffie-Hellman-like assumptions) than efficient digital signature schemes. In this work, we provide one reason for this apparent difficulty to construct efficient signature schemes. Specifically, we prove that a wide range of algebraic signature schemes (in which verification essentially checks a number of linear equations over a group) fall to surprisingly simple linear-algebra attacks. In fact, we prove that in an algebraic signature scheme, sufficiently many signatures can be linearly combined into a signature on a fresh message. We present attacks both in known-order and hidden-order groups (although in hidden-order settings, we have to restrict our definition of algebraic signatures a little). More explicitly, we show: – the insecurity of all algebraic signature schemes in Maurer’s generic group model, as long as the signature schemes do not rely on other cryptographic assumptions, such as hash functions. – the insecurity of a natural class of signatures in hidden-order groups, where verification consists of linear equations over group elements. We believe that this highlights the crucial role of public verifiability in digital signature schemes. Namely, while public-key encryption schemes do not require any publicly verifiable structure on ciphertexts, it is exactly this structure on signatures that invites attacks like ours and makes it hard to construct efficient signatures.
2021
TCC
Acyclicity Programming for Sigma-Protocols 📺
Cramer, Damgård, and Schoenmakers (CDS) built a proof system to demonstrate the possession of subsets of witnesses for a given collection of statements that belong to a prescribed access structure P by composing so-called sigma-protocols for each atomic statement. Their verifier complexity is linear in the size of the monotone span program representation of P. We propose an alternative method for combining sigma-protocols into a single non-interactive system for a compound statement in the random oracle model. In contrast to CDS, our verifier complexity is linear in the size of the acyclicity program representation of P, a complete model of monotone computation introduced in this work. We show that the acyclicity program size of a predicate is never larger than its de Morgan formula size and it is polynomially incomparable to its monotone span program size. We additionally present an extension of our proof system, with verifier complexity linear in the monotone circuit size of P, in the common reference string model. Finally, considering the types of statement that naturally reduce to acyclicity programming, we discuss several applications of our new methods to protecting privacy in cryptocurrency and social networks.
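For readers who want the atomic building block concrete, here is a minimal Schnorr sigma-protocol for knowledge of a discrete logarithm, compiled non-interactively with Fiat-Shamir (toy group parameters, illustration only; the paper's contribution is how such proofs are composed for compound statements):

    # Schnorr proof of knowledge of x with X = g^x, Fiat-Shamir compiled (toy).
    import hashlib, secrets

    q, p, g = 11, 23, 4      # order-q subgroup mod safe prime p (insecure toy)

    def H(*vals):
        data = "|".join(map(str, vals)).encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def prove(x, X):
        r = secrets.randbelow(q)             # commitment randomness
        A = pow(g, r, p)                     # commitment
        c = H(g, X, A)                       # Fiat-Shamir challenge from transcript
        return A, (r + c * x) % q            # response

    def verify(X, A, z):
        return pow(g, z, p) == A * pow(X, H(g, X, A), p) % p

    x = secrets.randbelow(q); X = pow(g, x, p)
    assert verify(X, *prove(x, X))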
2021
TCC
Statistical ZAPs from Group-Based Assumptions 📺
We put forth a template for constructing statistical ZAPs for NP. Our template compiles NIZKs for NP in the hidden bit model (which exist unconditionally) into statistical ZAPs using a new notion of interactive hidden-bit generator (IHBG), which adapts the notion of hidden-bit generator to the plain model by building upon the recent notion of statistically-hiding extractable commitments. We provide a construction of IHBG from the explicit hardness of the decision Diffie-Hellman assumption (where explicit refers to requiring an explicit upper bound on the advantage of any polynomial-time adversary against the assumption) and the existence of statistical ZAPs for a specific simple language, building upon the recent construction of a dual-mode hidden-bit generator of (Libert et al., EUROCRYPT 2020). We provide two instantiations of the underlying simple ZAP: 1. Using the recent statistical ZAP for the Diffie-Hellman language of (Couteau and Hartmann, CRYPTO 2020), we obtain statistical ZAPs for NP assuming (the explicit hardness of) DDH in $G_1$ and kernel-DH in $G_2$ (a search assumption which is weaker than DDH), where $(G_1,G_2)$ are groups equipped with an asymmetric pairing. This improves over the recent work of (Lombardi et al., EUROCRYPT 2020), which achieved a relaxed variant of statistical ZAP for NP under a stronger assumption. 2. Using the recent work of (Couteau et al., EUROCRYPT 2020), we obtain statistical ZAPs for NP assuming the explicit hardness of DDH, together with the assumption that no efficient adversary can break the key-dependent message one-wayness of ElGamal with respect to efficient functions over groups of size $2^\lambda$ with probability better than $\mathrm{poly}(\lambda)/2^{(c + o(1)) \cdot \lambda}$, denoted $2^{-c\lambda}$-OW-KDM, for the constant c = 1/2, in pairing-free groups. Note that the latter is a search discrete-log-style falsifiable assumption, incomparable to DDH (in particular, it is not known to imply public-key encryption).
2021
TCC
Adaptive Security of Multi-Party Protocols, Revisited 📺
The goal of secure multi-party computation (MPC) is to allow a set of parties to perform an arbitrary computation task, where the security guarantees depend on the set of parties that are corrupted. The more parties are corrupted, the less is guaranteed, and typically the guarantees are completely lost when the number of corrupted parties exceeds a certain corruption bound. Early and also many recent protocols are only statically secure in the sense that they provide no security guarantees if the adversary is allowed to choose adaptively which parties to corrupt. Security against an adversary with such a strong capability is often called adaptive security, and a significant body of literature is devoted to achieving adaptive security, which is known to be a difficult problem. In particular, a main technical obstacle in this context is the so-called “commitment problem”, where the simulator is unable to consistently explain the internal state of a party with respect to its pre-corruption outputs. As a result, protocols typically resort to the use of cryptographic primitives like non-committing encryption, incurring a substantial efficiency loss. This paper provides a new, clean-slate treatment of adaptive security in MPC, exploiting the specification concept of constructive cryptography (CC). A new natural security notion, called CC-adaptive security, is proposed, which is technically weaker than standard adaptive security but nevertheless captures security against a fully adaptive adversary. Known protocol examples separating between adaptive and static security are also insecure in our notion. Moreover, our notion avoids the commitment problem and thereby the need to use non-committing or equivocal tools. We exemplify this by showing that the protocols by Cramer, Damgård and Nielsen (EUROCRYPT'01) for the honest majority setting, and (the variant without non-committing encryption) by Canetti, Lindell, Ostrovsky and Sahai (STOC'02) for the dishonest majority setting, achieve CC-adaptive security. The latter example is of special interest since all UC-adaptive protocols in the dishonest majority setting require some form of non-committing encryption or equivocal tools.
2021
TCC
Generalized Proofs of Knowledge with Fully Dynamic Setup 📺
Proofs of knowledge (PoK) are one of the most fundamental notions in cryptography. The appeal of this notion is that it provides a general template that an application can suitably instantiate by choosing a specific relation. Nonetheless, several important applications have been brought to light, including proofs-of-ownership of files or two-factor authentication, which do not fit the PoK template but naturally appear to be special cases of a more general notion of proofs of knowledge or possession. One would thus expect that their security properties, in particular privacy and soundness, are simply derived as concrete instantiations of a common generalized PoK concept with well understood security semantics. Unfortunately, such a notion does not exist, resulting in a variety of tailor-made security definitions whose plausibility must be checked on a case-by-case basis. In this work, we close this gap by providing the theoretical foundations of a generalized notion of PoK that encompasses dynamic and setup-dependent relations as well as interactive statement derivations. This novel combination enables an application to directly specify relations that depend on an assumed setup, such as a random oracle, a database or a ledger, and to have statements be agreed upon interactively and dynamically between parties based on the state of the setup. Our new notion is called agree-and-prove and provides clear semantics of correctness, soundness, and zero-knowledge in the above generalized scenario. As an application, we first consider proofs-of-ownership of files for client-side file deduplication. We cast the problem and some of its prominent schemes in our agree-and-prove framework and formally analyze their security. Leveraging our generic zero-knowledge formalization, we then devise a novel scheme that is provably the privacy-preserving analogue of the well-known Merkle-Tree based protocol. As a second application, we consider two-factor entity authentication to showcase how the agree-and-prove notion encompasses proofs of ability, such as proving the correct usage of an abstract hardware token.
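Since the first application revolves around Merkle-tree based proofs of ownership, a minimal inclusion-proof sketch may help (a from-scratch illustration of the underlying data structure, without the privacy layer the paper adds):

    # Merkle tree with inclusion proofs (toy sketch; domain-separated hashing).
    import hashlib

    def H(data):
        return hashlib.sha256(data).digest()

    def build(leaves):
        level = [H(b"leaf|" + x) for x in leaves]
        tree = [level]
        while len(level) > 1:
            if len(level) % 2:                   # duplicate last node on odd levels
                level = level + [level[-1]]
            level = [H(b"node|" + level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
            tree.append(level)
        return tree

    def prove(tree, idx):                        # sibling path from leaf to root
        proof = []
        for level in tree[:-1]:
            if len(level) % 2:
                level = level + [level[-1]]
            proof.append((idx % 2, level[idx ^ 1]))
            idx //= 2
        return proof

    def verify(root, leaf, proof):
        h = H(b"leaf|" + leaf)
        for is_right, sib in proof:
            h = H(b"node|" + sib + h) if is_right else H(b"node|" + h + sib)
        return h == root

    leaves = [b"file-block-%d" % i for i in range(5)]
    tree = build(leaves)
    assert verify(tree[-1][0], leaves[3], prove(tree, 3))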
2021
TCC
Multi-Party Functional Encryption 📺
We initiate the study of multi-party functional encryption (MPFE) which unifies and abstracts out various notions of functional encryption which support distributed ciphertexts or secret keys, such as multi-input FE, multi-client FE, decentralized multi-client FE, multi-authority FE, dynamic decentralized FE, ad hoc multi-input FE, and others. Using our framework, we identify several gaps in the literature and provide some constructions to fill these: 1. Multi-Authority ABE with Inner Product Computation. The recent work of Abdalla et al. (ASIACRYPT’20) constructed a novel “composition” of Attribute Based Encryption (ABE) and Inner Product Functional Encryption (IPFE), namely functional encryption schemes that combine the access control functionality of attribute based encryption with the possibility of performing linear operations on the encrypted data. In this work, we extend the access control component to support the much more challenging multi-authority setting, i.e. we “lift” the primitive of ABE in their construction to multi-authority ABE for the same class of access control policies (LSSS structures). To the best of our knowledge, this yields the first construction of a nontrivial multi-authority FE beyond ABE from simple assumptions on pairings. Our techniques can also be used to generalize the decentralized attribute based encryption scheme of Michalevsky and Joye (ESORICS’18) to support inner product computation on the message. While this scheme only supports inner product predicates, which is less general than those supported by the Lewko-Waters (EUROCRYPT’11) construction, it supports policy hiding, which the latter does not. Our extension inherits these features and is secure based on the k-linear assumption, in the random oracle model. 2. Function Hiding DDFE. The novel primitive of dynamic decentralized functional encryption (DDFE) was recently introduced by Chotard et al. (CRYPTO’20), where they also provided the first construction for inner products. However, the primitive of DDFE does not support function hiding, which is a significant limitation for several applications. In this work, we provide a new construction for inner product DDFE which supports function hiding. To achieve our final result, we define and construct the first function hiding multi-client functional encryption (MCFE) scheme for inner products, which may be of independent interest. 3. Distributed Ciphertext-Policy ABE. We provide a distributed variant of the recent ciphertext-policy attribute based encryption scheme constructed by Agrawal and Yamada (EUROCRYPT’20). Our construction supports NC1 access policies, and is secure based on “Learning With Errors” and relies on the generic bilinear group model as well as the random oracle model. Our new MPFE abstraction predicts meaningful new variants of functional encryption as useful targets for future work.
2021
TCC
On Treewidth, Separators and Yao’s Garbling 📺
We show that Yao’s garbling scheme is adaptively indistinguishable for the class of Boolean circuits of size S and treewidth w with only an $S^{O(w)}$ loss in security. For instance, circuits of constant treewidth are, as a result, adaptively indistinguishable with only a polynomial loss. This (partially) complements a negative result of Applebaum et al. (Crypto 2013), which showed (assuming one-way functions) that Yao’s garbling scheme cannot be adaptively simulatable. As our main technical contributions, we introduce a new pebble game that abstracts out our security reduction and then present a pebbling strategy for this game where the number of pebbles used is roughly $O(\delta \cdot w \cdot \log S)$, $\delta$ being the fan-out of the circuit. The design of the strategy relies on separators, a graph-theoretic notion with connections to circuit complexity.
2021
TCC
Policy-Compliant Signatures 📺
We introduce policy-compliant signatures (PCS). A PCS scheme can be used in a setting where a central authority determines a global policy and distributes public and secret keys associated with sets of attributes to the users in the system. If two users, Alice and Bob, have attribute sets that jointly satisfy the global policy, Alice can use her secret key and Bob's public key to sign a message. Unforgeability ensures that a valid signature can only be produced if Alice's secret key is known and if the policy is satisfied. Privacy guarantees that the public keys and produced signatures reveal nothing about the users' attributes beyond whether they satisfy the policy or not. PCS extends the functionality provided by existing primitives such as attribute-based signatures and policy-based signatures, which do not consider a designated receiver and thus cannot include the receiver's attributes in the policies. We describe practical applications of PCS which include controlling transactions in financial systems with strong privacy guarantees (avoiding additional trusted entities that check compliance), as well as being a tool for trust negotiations. We introduce an indistinguishability-based privacy notion for PCS and present a generic and modular scheme based on standard building blocks such as signatures, non-interactive zero-knowledge proofs, and a (predicate-only) predicate encryption scheme. We show that it can be instantiated to obtain an efficient scheme that is provably secure under standard pairing-assumptions for a wide range of policies. We further model PCS in UC by describing the goal of PCS as an enhanced ideal signature functionality which gives rise to a simulation-based privacy notion for PCS. We show that our generic scheme achieves this composable security notion under the additional assumption that the underlying predicate encryption scheme satisfies a stronger, fully adaptive, simulation-based attribute-hiding notion.
2021
TCC
Simple and Efficient Batch Verification Techniques for Verifiable Delay Functions 📺
We study the problem of batch verification for verifiable delay functions (VDFs), focusing on proofs of correct exponentiation (PoCE), which underlie recent VDF constructions. We show how to compile any PoCE into a batch PoCE, offering significant savings in both communication and verification time. Concretely, given any PoCE with communication complexity $c$, verification time $t$ and soundness error $\delta$, and any pseudorandom function with key length ${\sf k}_{\sf prf}$ and evaluation time $ t_{\sf prf}$, we construct: -- A batch PoCE for verifying $n$ instances with communication complexity $m\cdot c +{\sf k}_{\sf prf}$, verification time $m\cdot t + n\cdot m\cdot O(t_{\sf op} + t_{\sf prf})$ and soundness error $\delta + 2^{-m}$, where $\lambda$ is the security parameter, $m$ is an adjustable parameter that can take any integer value, and $t_{\sf op}$ is the time required to evaluate the group operation in the underlying group. This should be contrasted with the naive approach, in which the communication complexity and verification time are $n \cdot c$ and $n \cdot t$, respectively. The soundness of this compiler relies only on the soundness of the underlying PoCE and the existence of one-way functions. -- An improved batch PoCE based on the low order assumption. For verifying $n$ instances, the batch PoCE requires communication complexity $c +{\sf k}_{\sf prf}$ and verification time $t + n\cdot (t_{\sf prf} + \log(s)\cdot O(t_{\sf op}))$, and has soundness error $\delta + 1/s$. The parameter $s$ can take any integer value, as long as it is hard to find group elements of order less than $s$ in the underlying group. We discuss instantiations in which $s$ can be exponentially large in the security parameter $\lambda$. If the underlying PoCE is constant round and public coin (as is the case for existing protocols), then so are all of our batch PoCEs, implying that they can be made non-interactive using the Fiat-Shamir transform. Additionally, for RSA groups with moduli which are the products of two safe primes, we show how to efficiently verify that certain elements are not of order $2$. This protocol, together with the second compiler above and any (single-instance) PoCE in these groups, yields an efficient batch PoCE in safe RSA groups. To complete the picture, we also show how to extend Pietrzak's protocol (which is statistically sound in the group $QR_N^+$ when $N$ is the product of two safe primes) to obtain a statistically-sound PoCE in safe RSA groups.
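As an illustrative reading of the first compiler's bounds (the numbers here are ours, not the paper's): verifying $n = 1000$ instances with $m = 40$ costs $40\,c + {\sf k}_{\sf prf}$ communication and $40\,t + 40000 \cdot O(t_{\sf op} + t_{\sf prf})$ verification time, with soundness error $\delta + 2^{-40}$, versus $1000\,c$ communication and $1000\,t$ verification time for the naive approach.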
2021
TCC
On Actively-Secure Elementary MPC Reductions 📺
We introduce the notion of \emph{elementary MPC} reductions that allow us to securely compute a functionality $f$ by making a single call to a constant-degree ``non-cryptographic'' functionality $g$ without requiring any additional interaction. Roughly speaking, ``non-cryptographic'' means that $g$ does not make use of cryptographic primitives, though the parties can locally call such primitives. Classical MPC results yield such elementary reductions in various cases including the setting of passive security with full corruption threshold $t<n$ (Yao, FOCS'86; Beaver, Micali, and Rogaway, STOC'90), the setting of full active security against a corrupted minority $t<n/2$ (Damgård and Ishai, Crypto'05), and, for NC1 functionalities, even for the setting of full active (information-theoretic) security with full corruption threshold of $t<n$ (Ishai and Kushilevitz, FOCS'00). This leaves open the existence of an elementary reduction that achieves full active security in the dishonest majority setting for all efficiently computable functions. Our main result shows that such a reduction is unlikely to exist. Specifically, the existence of a computationally secure elementary reduction that makes black-box use of a PRG and achieves a very weak form of partial fairness (e.g., that holds only when the first party is not corrupted) would allow us to realize any efficiently-computable function by a \emph{constant-round} protocol that achieves a non-trivial notion of information-theoretic passive security. The existence of the latter is a well-known three-decade-old open problem in information-theoretic cryptography (Beaver, Micali, and Rogaway, STOC'90). On the positive side, we observe that this barrier can be bypassed under any of the following relaxations: (1) non-black-box use of a pseudorandom generator; (2) weaker security guarantees such as security with identifiable abort; or (3) an additional round of communication with the functionality $g$.
2021
TCC
Succinct LWE Sampling, Random Polynomials, and Obfuscation 📺
We present a construction of indistinguishability obfuscation (iO) that relies on the learning with errors (LWE) assumption together with a new notion of succinctly sampling pseudo-random LWE samples. We then present a candidate LWE sampler whose security is related to the hardness of solving systems of polynomial equations. Our construction improves on the recent iO candidate of Wee and Wichs (Eurocrypt 2021) in two ways: first, we show that a much weaker and simpler notion of LWE sampling suffices for iO; and second, our candidate LWE sampler is secure based on a compactly specified and falsifiable assumption about random polynomials, with a simple error distribution that facilitates cryptanalysis.
2021
TCC
Laconic Private Set Intersection and Applications 📺
Consider a server with a \emph{large} set $S$ of strings $\{x_1,x_2\ldots,x_N\}$ that would like to publish a \emph{small} hash $h$ of its set $S$ such that any client with a string $y$ can send the server a \emph{short} message allowing it to learn $y$ if $y \in S$ and nothing otherwise. In this work, we study this problem of two-round private set intersection (PSI) with low (asymptotically optimal) communication cost, or what we call \emph{laconic} private set intersection ($\ell$PSI) and its extensions. This problem is inspired by the recent general frameworks for laconic cryptography [Cho et al. CRYPTO 2017, Quach et al. FOCS'18]. We start by showing the first feasibility result for realizing $\ell$PSI based on the CDH assumption, or LWE with polynomial noise-to-modulus ratio. However, these feasibility results use expensive non-black-box cryptographic techniques leading to significant inefficiency. Next, with the goal of avoiding these inefficient techniques, we give a construction of $\ell$PSI schemes making only black-box use of cryptographic functions. Our construction is secure against semi-honest receivers, malicious senders and reusable in the sense that the receiver's message can be reused across any number of executions of the protocol. The scheme is secure under the $\phi$-hiding, decisional composite residuosity and subgroup decision assumptions. Finally, we show natural applications of $\ell$PSI to realizing a semantically-secure encryption scheme that supports detection of encrypted messages belonging to a set of ``illegal'' messages (e.g., an illegal video) circulating online. Over the past few years, significant effort has gone into realizing laconic cryptographic protocols. Nonetheless, our work provides the first black-box constructions of such protocols for a natural application setting.
2021
TCC
Secure Software Leasing Without Assumptions 📺
Quantum cryptography is known for enabling functionalities that are unattainable using classical information alone. Recently, Secure Software Leasing (SSL) has emerged as one of these areas of interest. Given a target circuit C from a circuit class, SSL produces an encoding of C that enables a recipient to evaluate C, and also enables the originator of the software to verify that the software has been returned --- meaning that the recipient has relinquished the possibility of any further use of the software. Clearly, such a functionality is unachievable using classical information alone, since it is impossible to prevent a user from keeping a copy of the software. Recent results have shown the achievability of SSL using quantum information for a class of functions called compute-and-compare (these are a generalization of the well-known point functions). These prior works, however, all make use of setup or computational assumptions. Here, we show that SSL is achievable for compute-and-compare circuits without any assumptions. Our technique involves the study of quantum copy protection, which is a notion related to SSL, but where the encoding procedure inherently prevents a would-be quantum software pirate from splitting a single copy of an encoding for C into two parts, each of which enables a user to evaluate C. We show that point functions can be copy-protected without any assumptions, for a novel security definition involving one honest and one malicious evaluator; this is achieved by showing that from any quantum message authentication code, we can derive such an honest-malicious copy protection scheme. We then show that a generic honest-malicious copy protection scheme implies SSL; by prior work, this yields SSL for compute-and-compare functions.
2021
TCC
Oblivious Transfer from Trapdoor Permutations in Minimal Rounds 📺
Oblivious transfer (OT) is a foundational primitive within cryptography owing to its connection with secure computation. One of the oldest constructions of oblivious transfer was from certified trapdoor permutations (TDPs). However, several decades later, we do not know if a similar construction can be obtained from TDPs in general. In this work, we study the problem of constructing round optimal oblivious transfer from trapdoor permutations. In particular, we obtain the following new results (in the plain model) relying on TDPs in a black-box manner: – Three-round oblivious transfer protocol that guarantees indistinguishability-security against malicious senders (and semi-honest receivers). – Four-round oblivious transfer protocol secure against malicious adversaries with black-box simulation-based security. By combining our second result with an already known compiler we obtain the first round-optimal 2-party computation protocol that relies in a black-box way on TDPs. A key technical tool underlying our results is a new primitive we call dual witness encryption (DWE) that may be of independent interest.
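For orientation, the classical two-message OT from a TDP (the EGL-style construction this line of work descends from) fits in a few lines; `f`, `f_inv`, `hc`, and `rand_domain` are abstract placeholders, and the sketch shows only the semi-honest flow, ignoring exactly the certification issues for maliciously chosen TDPs that the paper tackles.

```python
# Schematic EGL-style 1-out-of-2 bit OT from a trapdoor permutation f over
# some domain. f / f_inv / hc (a hardcore predicate) / rand_domain are
# assumed abstractions, not the paper's API; semi-honest flow only.
def receiver_round1(f, rand_domain, choice: int):
    x = rand_domain()                 # receiver remembers the preimage x
    y = [None, None]
    y[choice] = f(x)                  # invertible by the receiver via x
    y[1 - choice] = rand_domain()     # random image with unknown preimage
    return x, y

def sender_round2(f_inv, hc, y, m0: int, m1: int):
    # mask each message bit with the hardcore bit of the matching preimage
    return [m0 ^ hc(f_inv(y[0])), m1 ^ hc(f_inv(y[1]))]

def receiver_output(hc, x, choice: int, masked):
    return masked[choice] ^ hc(x)     # the other bit stays hidden by hc
```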
2021
TCC
Environmentally Friendly Composable Multi-Party Computation in the Plain Model from Standard (Timed) Assumptions 📺
Starting with the work of Rivest et al. in 1996, timed assumptions have found many applications in cryptography, forming, e.g., the foundation of blockchain technology. They have also been used in the context of classical MPC, e.g., to enable fairness. We follow this line of research to obtain composable general MPC in the plain model. This approach comes with a major advantage regarding environmental friendliness, a property coined by Canetti et al. (FOCS 2013). Informally, this means that our constructions do not “hurt” game-based security properties of protocols that hold against polynomial-time adversaries when executed alone. As an additional property, our constructions can be plugged into any UC-secure protocol without loss of security. Towards proving the security of our constructions, we introduce a variant of the UC security notion that captures timed cryptographic assumptions. Combining standard timed commitment schemes and standard polynomial-time hardness assumptions, we construct a composable commitment scheme in the plain model. As this construction is constant-round and black-box, we obtain the first fully environmentally friendly composable constant-round black-box general MPC protocol in the plain model from standard (timed) assumptions.
2021
TCC
Dory: Efficient, Transparent arguments for Generalised Inner Products and Polynomial Commitments 📺
This paper presents Dory, a transparent setup, public-coin interactive argument for inner-pairing products between committed vectors of elements of two source groups. For a product of vectors of length $n$, proofs are $6 \log n$ target group elements and $O(1)$ additional elements. Verifier work is dominated by an $O(\log n)$ multi-exponentiation in the target group and $O(1)$ pairings. Security is reduced to the standard SXDH assumption in the standard model. We apply Dory to build a multivariate polynomial commitment scheme via the Fiat-Shamir transform. For a dense polynomial with $n$ coefficients, Prover work to compute a commitment is dominated by a multi-exponentiation in one source group of size $n$. Prover work to show that a commitment to an evaluation is correct is $O(n^{\log{8}/\log{25}})$ in general ($O(n^{1/2})$ for univariate or multilinear polynomials); communication complexity and Verifier work are both $O(\log n)$. These asymptotics previously required trusted setup or concretely inefficient groups of unknown order. Critically for applications, these arguments can be batched, saving large factors on the Prover and improving Verifier asymptotics: to validate $\ell$ polynomial evaluations for polynomials of size at most $n$ requires $O(\ell + \log n)$ exponentiations and $O(\ell \log n)$ field operations. Dory is also concretely efficient: Using one core and setting $n = 2^{20}$, commitments are 192 bytes. Evaluation proofs are ~18kb, requiring ~3s to generate and ~25ms to verify. For batches at $n=2^{20}$, the marginal cost per evaluation is <1kb communication, ~300ms for the prover and ~1ms for the verifier.
2021
TCC
On Communication-Efficient Asynchronous MPC with Adaptive Security 📺
Secure multi-party computation (MPC) allows a set of $n$ parties to jointly compute an arbitrary computation over their private inputs. Two main variants have been considered in the literature according to the underlying communication model. Synchronous MPC protocols proceed in rounds, and rely on the fact that the communication network provides strong delivery guarantees within each round. Asynchronous MPC protocols achieve security guarantees even when the network delay is arbitrary. While the problem of MPC has largely been studied in both variants with respect to both feasibility and efficiency results, there is still a substantial gap when it comes to communication complexity of adaptively secure protocols. Concretely, while adaptively secure synchronous MPC protocols with linear communication have been known for a long time, the best asynchronous protocol communicates $\mathcal{O}(n^4 \kappa)$ bits per multiplication. In this paper, we make progress towards closing this gap by providing two protocols. First, we present an adaptively secure asynchronous protocol with optimal resilience $t<n/3$ and $\mathcal{O}(n^2 \kappa)$ bits of communication per multiplication, improving over the state of the art protocols in this setting by a quadratic factor in the number of parties. The protocol has cryptographic security and follows the CDN approach [Eurocrypt'01], based on additive threshold homomorphic encryption. Second, we show an optimization of the above protocol that tolerates up to $t<(1-\epsilon)n/3$ corruptions and communicates $\mathcal{O}(n\cdot \mathrm{poly}(\kappa))$ bits per multiplication under stronger assumptions.
2021
TCC
Efficient Perfectly Secure Computation with Optimal Resilience 📺
Secure computation enables $n$ mutually distrustful parties to compute a function over their private inputs jointly. In 1988, Ben-Or, Goldwasser, and Wigderson (BGW) demonstrated that any function can be computed with perfect security in the presence of a malicious adversary corrupting at most $t< n/3$ parties. After more than 30 years, protocols with perfect malicious security, with round complexity proportional to the circuit's depth, still require sharing a total of $O(n^2)$ values per multiplication. In contrast, only $O(n)$ values need to be shared per multiplication to achieve semi-honest security. Indeed, sharing $\Omega(n)$ values for a single multiplication seems to be the natural barrier for polynomial secret sharing-based multiplication. In this paper, we close this gap by constructing a new secure computation protocol with perfect, optimal resilience and malicious security that incurs sharing of only $O(n)$ values per multiplication, thus matching the semi-honest setting for protocols with round complexity that is proportional to the circuit depth. Our protocol requires a constant number of rounds per multiplication. Like BGW, it has an overall round complexity that is proportional only to the multiplicative depth of the circuit. Our improvement is obtained by a novel construction for {\em weak VSS for polynomials of degree-$2t$}, which incurs the same communication and round complexities as the state-of-the-art constructions for {\em VSS for polynomials of degree-$t$}. Our second contribution is a method for reducing the communication complexity for any depth-1 sub-circuit to be proportional only to the size of the input and output (rather than the size of the circuit). This implies protocols with \emph{sublinear communication complexity} (in the size of the circuit) for perfectly secure computation for important functions like matrix multiplication.
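The $O(n)$ vs. $O(n^2)$ gap is easiest to appreciate at the level of plain Shamir sharing: multiplying two degree-$t$ sharings pointwise yields a degree-$2t$ sharing, and it is the verified handling of that doubled degree that classically costs extra. A toy, honest-parties-only sketch (not the paper's protocol):

```python
# Toy Shamir sharing over a small prime field, showing why multiplying
# shares doubles the polynomial degree (the step whose verified variant
# the paper makes cheaper). Honest arithmetic only; no VSS, no adversary.
import random

P = 2**31 - 1  # toy prime field

def share(secret: int, t: int, n: int):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reconstruct(points):
    # Lagrange interpolation at x = 0
    secret = 0
    for i, yi in points:
        num, den = 1, 1
        for j, _ in points:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

n, t = 7, 2
a = share(5, t, n)
b = share(7, t, n)
prod = [x * y % P for x, y in zip(a, b)]              # degree-2t sharing of 35
assert reconstruct(list(enumerate(prod, 1))[: 2 * t + 1]) == 35
# t+1 shares no longer suffice for the product (fails with overwhelming prob.)
assert reconstruct(list(enumerate(prod, 1))[: t + 1]) != 35
```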
2021
TCC
The Cost of Adaptivity in Security Games on Graphs 📺
The security of cryptographic primitives and protocols against adversaries that are allowed to make adaptive choices (e.g., which parties to corrupt or which queries to make) is notoriously difficult to establish. A broad theoretical framework was introduced by Jafargholi et al. [Crypto'17] for this purpose. In this paper we initiate the study of lower bounds on loss in adaptive security for certain cryptographic protocols considered in the framework. We prove lower bounds that almost match the upper bounds (proven using the framework) for proxy re-encryption, prefix-constrained PRFs and generalized selective decryption, a security game that captures the security of certain group messaging and broadcast encryption schemes. Those primitives have in common that their security game involves an underlying graph that can be adaptively built by the adversary. Some of our lower bounds only apply to a restricted class of black-box reductions which we term "oblivious" (the existing upper bounds are of this restricted type), some apply to the broader but still restricted class of non-rewinding reductions, while our lower bound for proxy re-encryption applies to all black-box reductions. The fact that some of our lower bounds seem to crucially rely on obliviousness or at least a non-rewinding reduction hints at the exciting possibility that the existing upper bounds can be improved by using more sophisticated reductions. Our main conceptual contribution is a two-player multi-stage game called the Builder-Pebbler Game. We can translate bounds on the winning probabilities for various instantiations of this game into cryptographic lower bounds for the above-mentioned primitives using oracle separation techniques.
2021
TCC
Non-Malleable Vector Commitments via Local Equivocability 📺
Vector commitments (VCs), enabling to commit to a vector and locally reveal any of its entries, play a key role in a variety of both classic and recently-evolving applications. However, security notions for VCs have so far focused on passive attacks, and non-malleability notions considering active attacks have not been explored. Moreover, existing frameworks that may enable to capture the non-malleability of VCs seem either too weak (non-malleable non-interactive commitments that do not account for the security implications of local openings) or too strong (non-malleable zero-knowledge sets that support both membership and non-membership proofs). We put forward a rigorous framework capturing the non-malleability of VCs, striking a careful balance between the existing weaker and stronger frameworks: We strengthen the framework of non-malleable non-interactive commitments by considering attackers that may be exposed to local openings, and we relax the framework of non-malleable zero-knowledge sets by focusing on membership proofs. In addition, we strengthen both frameworks by supporting (inherently-private) updates to entries of committed vectors, and discuss the benefits of non-malleable VCs in the context of both UTXO-based and account-based stateless blockchains, and in the context of simultaneous multi-round auctions (that have been adopted by the US Federal Communications Commission as the standard auction format for selling spectrum ranges). Within our framework we present a direct approach for constructing non-malleable VCs whose efficiency essentially matches that of the existing standard VCs. Specifically, we show that any VC can be transformed into a non-malleable one, relying on a new primitive that we put forth. Our new primitive, locally-equivocable commitments with all-but-one binding, is evidently both conceptually and technically simpler compared to multi-trapdoor mercurial trapdoor commitments (the main building block underlying existing non-malleable zero-knowledge sets), and admits more efficient instantiations based on the same number-theoretic assumptions.
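For readers new to the primitive, the simplest standard VC is a Merkle tree: commit to the root and open position $i$ with its authentication path. A minimal hash-based sketch of this passive baseline (it provides none of the non-malleability guarantees studied in the paper):

```python
# Minimal Merkle-tree vector commitment: commit to a vector, locally open
# one position with a logarithmic-size proof. Passive baseline only; no
# hiding and no non-malleability of the kind the paper constructs.
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def commit(values):
    level = [H(v) for v in values]      # assumes len(values) is a power of two
    tree = [level]
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree[-1][0], tree            # (root = commitment, tree as state)

def open_at(tree, i):
    proof = []
    for level in tree[:-1]:
        proof.append(level[i ^ 1])      # sibling at each level
        i //= 2
    return proof

def verify(root, i, value, proof):
    h = H(value)
    for sib in proof:
        h = H(h, sib) if i % 2 == 0 else H(sib, h)
        i //= 2
    return h == root

root, tree = commit([b"a", b"b", b"c", b"d"])
assert verify(root, 2, b"c", open_at(tree, 2))
```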
2021
TCC
Non-Malleable Time-Lock Puzzles and Applications 📺
Time-lock puzzles are a mechanism for sending messages "to the future", by allowing a sender to quickly generate a puzzle with an underlying message that remains hidden until a receiver spends a moderately large amount of time solving it. We introduce and construct a variant of a time-lock puzzle which is non-malleable, which roughly guarantees that it is impossible to "maul" a puzzle into one for a related message without solving it. Using non-malleable time-lock puzzles, we achieve the following applications: - The first fair non-interactive multi-party protocols for coin flipping and auctions in the plain model without setup. - Practically efficient fair multi-party protocols for coin flipping and auctions proven secure in the (auxiliary-input) random oracle model. As a key step towards proving the security of our protocols, we introduce the notion of functional non-malleability, which protects against tampering attacks that affect a specific function of the related messages. To support an unbounded number of participants in our protocols, our time-lock puzzles satisfy functional non-malleability in the fully concurrent setting. We additionally show that standard (non-functional) non-malleability is impossible to achieve in the concurrent setting (even in the random oracle model).
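The classical RSW puzzle that underlies most TLP constructions is instructive here, because it is visibly malleable: shifting the ciphertext component shifts the hidden message, exactly the kind of mauling non-malleability rules out. A toy sketch (toy primes, schematic only):

```python
# RSW-style time-lock puzzle with toy parameters. The generator shortcuts
# the T squarings using the factorization of N; the solver cannot.
# Malleability note: replacing c by c+1 yields a puzzle for m+1 without
# solving -- precisely the mauling that non-malleable TLPs rule out.
import random

def gen(p: int, q: int, T: int, m: int):
    N, phi = p * q, (p - 1) * (q - 1)
    x = random.randrange(2, N)          # coprime to N with overwhelming prob.
    y = pow(x, pow(2, T, phi), N)       # fast: exponent reduced mod phi(N)
    return N, T, x, (m + y) % N         # puzzle (N, T, x, c)

def solve(N: int, T: int, x: int, c: int) -> int:
    y = x
    for _ in range(T):                  # T sequential squarings, no shortcut
        y = y * y % N
    return (c - y) % N

N, T, x, c = gen(1000003, 1000033, 1000, m=42)   # toy primes, toy T
assert solve(N, T, x, c) == 42
```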
2021
TCC
On Communication Models and Best-Achievable Security in Two-Round MPC 📺
Recently, a sequence of works has made strong advances in two-round (i.e., round-optimal) secure multi-party computation (MPC). In the {\em honest-majority} setting -- the focus of this work -- Ananth et al. [CRYPTO'18, EC'19], Applebaum et al. [TCC'18, EC'19] and Garg et al. [TCC'18] have established the feasibility of general two-round MPC in standard communication models involving broadcast (BC) and private point-to-point (PTP) channels. In this work, we set out to understand what features of the communication model are necessary for these results, and more broadly the design of two-round MPC. Focusing our study on the plain model -- the most natural model for honest-majority MPC -- we obtain the following results: 1. {\bf Dishonest majority from Honest majority:} In the two-round setting, honest-majority MPC and dishonest-majority MPC are surprisingly close, and often {\em equivalent}. This follows from our results that the former implies 2-message oblivious transfer, in many settings. (i) We show that without private point-to-point (PTP) channels, i.e., when we use only broadcast (BC) channels, {\em honest-majority MPC implies 2-message oblivious transfer}. (ii) Furthermore, this implication holds even when we use both PTP and BC, provided that the MPC protocol is robust against ``fail-stop'' adversaries. 2. {\bf Best-Achievable Security:} While security with guaranteed output delivery (and even fairness) against malicious adversaries is impossible in two rounds, nothing is known with regard to the ``next best'' security notion, namely, security with identifiable abort (IA). We show that IA is also {\em impossible} to achieve with an honest majority even if we use both PTP and BC channels. However, if we replace PTP channels with a ``bare'' (i.e., untrusted) public-key infrastructure (PKI), then even security with guaranteed output delivery (and hence IA) is possible to achieve. These results ``explain'' that the reliance on PTP channels (together with BC) in the recent two-round protocols in the plain model was in fact {\em necessary}, and that these protocols {\em couldn't} have achieved a stronger security guarantee, namely, IA. Overall, our results (put together with prior works) fully determine the best-achievable security for honest-majority MPC in different communication models in two rounds. As a consequence, they yield the following hierarchy of communication models: BC < PTP < BC+PTP < BC+PKI. This shows that BC is the {\em weakest} communication model, and that the BC+PKI model is strictly stronger than the BC+PTP model.
2021
TCC
Concurrent Composition of Differential Privacy 📺
We initiate a study of the composition properties of interactive differentially private mechanisms. An interactive differentially private mechanism is an algorithm that allows an analyst to adaptively ask queries about a sensitive dataset, with the property that an adversarial analyst's view of the interaction is approximately the same regardless of whether or not any individual's data is in the dataset. Previous studies of composition of differential privacy have focused on non-interactive algorithms, but interactive mechanisms are needed to capture many of the intended applications of differential privacy and a number of the important differentially private primitives. We focus on concurrent composition, where an adversary can arbitrarily interleave its queries to several differentially private mechanisms, which may be feasible when differentially private query systems are deployed in practice. We prove that when the interactive mechanisms being composed satisfy pure differential privacy, their concurrent composition achieves privacy parameters (with respect to pure or approximate differential privacy) that match the (optimal) composition theorem for non-interactive differential privacy. We also prove a composition theorem for interactive mechanisms that satisfy approximate differential privacy. That bound is weaker than even the basic (suboptimal) composition theorem for non-interactive differential privacy, and we leave closing the gap as a direction for future research, along with understanding concurrent composition for other variants of differential privacy.
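For orientation, the non-interactive baselines the abstract measures against can be stated compactly; these are the standard composition theorems (textbook statements, not results of the paper):

```latex
% Standard composition for k non-interactive mechanisms, each
% (\epsilon, \delta)-differentially private (textbook statements).
\text{Basic composition:}\quad (k\epsilon,\; k\delta)\text{-DP}.
\qquad
\text{Advanced composition: for any } \delta' > 0,\quad
\Bigl(\sqrt{2k\ln(1/\delta')}\,\epsilon + k\epsilon\,(e^{\epsilon}-1),\;\; k\delta + \delta'\Bigr)\text{-DP}.
```

The paper's positive result shows that pure-DP interactive mechanisms concurrently compose as well as the optimal non-interactive bound, while its approximate-DP bound is weaker than even the basic statement above.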
2021
TCC
Grafting Key Trees: Efficient Key Management for Overlapping Groups 📺
Key trees are often the best solution in terms of transmission cost and storage requirements for managing keys in a setting where a group needs to share a secret key, while being able to efficiently rotate the key material of users (in order to recover from a potential compromise, or to add or remove users). Applications include multicast encryption protocols like LKH (Logical Key Hierarchies) or group messaging like the current IETF proposal TreeKEM. A key tree is a (typically balanced) binary tree, where each node is identified with a key: leaf nodes hold users’ secret keys while the root is the shared group key. For a group of size N, each user just holds log(N) keys (the keys on the path from its leaf to the root) and its entire key material can be rotated by broadcasting 2log(N) ciphertexts (encrypting each fresh key on the path under the keys of its children). In this work we consider the natural setting where we have many groups with partially overlapping sets of users, and ask if we can find solutions where the cost of rotating a key is better than in the trivial one where we have a separate key tree for each group. We show that in an asymptotic setting (where the number m of groups is fixed while the number N of users grows) there exist more general key graphs whose cost converges to the cost of a single group, thus saving a factor linear in the number of groups over the trivial solution. As our asymptotic “solution” converges very slowly and performs poorly on concrete examples, we propose an algorithm that uses a natural heuristic to compute a key graph for any given group structure. Our algorithm combines two greedy algorithms, and is thus very efficient: it first converts the group structure into a “lattice graph”, which is then turned into a key graph by repeatedly applying the algorithm for constructing a Huffman code. To better understand how far our proposal is from an optimal solution, we prove lower bounds on the update cost of continuous group-key agreement and multicast encryption in a symbolic model admitting (asymmetric) encryption, pseudorandom generators, and secret sharing as building blocks.
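A back-of-the-envelope sketch of the costs involved, together with the Huffman-style greedy merge step named in the abstract (illustrative only: the weights below stand in for update frequencies, whereas the paper's algorithm operates on a lattice graph derived from the group structure):

```python
# Key-tree cost arithmetic plus the greedy Huffman combine that the paper's
# second phase applies. Weights are illustrative stand-ins, not the paper's
# exact objective.
import heapq
import math

def rotate_cost(n_users: int) -> int:
    # ciphertexts broadcast to rotate one user's keys in a balanced key tree
    return 2 * math.ceil(math.log2(n_users))

def huffman_cost(weights) -> int:
    """Total cost of greedily merging the two smallest weights, as in
    Huffman-code construction."""
    heap = list(weights)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total

# Trivial baseline the paper improves on: a user in all m groups pays into
# m separate key trees on every rotation.
m, N = 4, 1 << 20
print(m * rotate_cost(N))   # 4 * 40 = 160 ciphertexts per rotation
```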
2021
TCC
The Round Complexity of Quantum Zero-Knowledge 📺
We study the round complexity of zero-knowledge for QMA (the quantum analogue of NP). Assuming the quantum quasi-polynomial hardness of the learning with errors (LWE) problem, we obtain the following results: - 2-Round statistical witness indistinguishable (WI) arguments for QMA. - 4-Round statistical zero-knowledge arguments for QMA in the plain model, additionally assuming the existence of quantum fully homomorphic encryption. This is the first protocol for constant-round statistical zero-knowledge arguments for QMA. - 2-Round computational (statistical, resp.) zero-knowledge for QMA in the timing model, additionally assuming the existence of post-quantum non-parallelizing functions (time-lock puzzles, resp.). All of these protocols match the best round complexity known for the corresponding protocols for NP with post-quantum security. Along the way, we introduce and construct the notions of sometimes-extractable oblivious transfer and sometimes-simulatable zero-knowledge, which might be of independent interest.
2021
TCC
Rate-1 Quantum Fully Homomorphic Encryption 📺
Secure function evaluation (SFE) allows Alice to publish an encrypted version of her input m such that Bob (holding a circuit C) can send a single message that reveals C(m) to Alice, and nothing more. Security is required to hold against malicious parties that may behave arbitrarily. In this work we study the notion of SFE in the quantum setting, where Alice outputs an encrypted quantum state $|\psi\rangle$ and learns $C(|\psi\rangle)$ after receiving Bob's message. We show that, assuming the quantum hardness of the learning with errors problem (LWE), there exists an SFE protocol for quantum computation with communication complexity $(||\psi\rangle| + |C(|\psi\rangle)|)(1+o(1))$, which is nearly optimal. This result is obtained by two main technical steps, which might be of independent interest. Specifically, we show (i) a construction of a rate-1 quantum fully-homomorphic encryption and (ii) a generic transformation to achieve malicious circuit privacy in the quantum setting.
2021
TCC
Direct Product Hardness Amplification 📺
We revisit one of the most fundamental hardness amplification constructions, originally proposed by Yao (FOCS 1982). We present a hardness amplification theorem for the direct product of certain games that is simpler, more general, and stronger than previously known hardness amplification theorems of the same kind. Our focus is two-fold. First, we aim to provide close-to-optimal concrete bounds, as opposed to asymptotic ones. Second, in the spirit of abstraction and reusability, our goal is to capture the essence of direct product hardness amplification as generally as possible. Furthermore, we demonstrate how our amplification theorem can be applied to obtain hardness amplification results for non-trivial interactive cryptographic games such as MAC forgery or signature forgery games.
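Schematically, the kind of statement being amplified has the following shape (a generic template, not the paper's exact theorem or bounds):

```latex
% Template of a direct-product hardness amplification statement: if every
% admissible adversary wins the game G with probability at most \delta,
% then for the k-fold direct product G^{\otimes k} (winning = winning all
% k independent copies),
\Pr\bigl[\mathcal{A} \text{ wins } G^{\otimes k}\bigr] \;\le\; \delta^{k} + \varepsilon,
% where the slack \varepsilon and the admissible adversary class are what
% concrete, close-to-optimal bounds such as the paper's pin down.
```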
2021
TCC
Amortizing Rate-1 OT and Applications to PIR and PSI 📺
Recent new constructions of rate-1 OT [Döttling, Garg, Ishai, Malavolta, Mour, and Ostrovsky, CRYPTO 2019] have brought this primitive under the spotlight and the techniques have led to new feasibility results for private-information retrieval, and homomorphic encryption for branching programs. The receiver communication of this construction consists of a quadratic (in the sender's input size) number of group elements for a single instance of rate-1 OT. Recently [Garg, Hajiabadi, Ostrovsky, TCC 2020] improved the receiver communication to a linear number of group elements for a single string-OT. However, most applications of rate-1 OT require executing it multiple times, resulting in large communication costs for the receiver. In this work, we introduce a new technique for amortizing the cost of multiple rate-1 OTs. Specifically, based on standard pairing assumptions, we obtain a two-message rate-1 OT protocol for which the amortized cost per string-OT is asymptotically reduced to only four group elements. Our results lead to significant communication improvements in PSI and PIR, special cases of SFE for branching programs. 1. PIR: We obtain a rate-1 PIR scheme with client communication cost of $O(\lambda\cdot\log N)$ group elements for security parameter $\lambda$ and database size $N$. Notably, after a one-time setup (or one PIR instance), any following PIR instance only requires communication of $O(\log N)$ group elements. 2. PSI with unbalanced inputs: We apply our techniques to private set intersection with unbalanced set sizes (where the receiver has a smaller set) and achieve receiver communication of $O((m+\lambda) \log N)$ group elements where $m, N$ are the sizes of the receiver and sender sets, respectively. Similarly, after a one-time setup (or one PSI instance), any following PSI instance only requires communication of $O(m \cdot \log N)$ group elements. All previous sublinear-communication non-FHE based PSI protocols for the above unbalanced setting were also based on rate-1 OT, but incurred at least $O(\lambda^2 m \log N)$ group elements.
2021
TCC
Unifying Presampling via Concentration Bounds 📺
Auxiliary-input (AI) idealized models, such as auxiliary-input random oracle model (AI-ROM) and auxiliary-input random permutation model (AI-RPM), play a critical role in assessing non-uniform security of symmetric key and hash function constructions. However, obtaining security bounds in these models is often much more challenging. The presampling technique, introduced by Unruh (CRYPTO '07), generically reduces security proofs in the auxiliary-input models to much simpler bit-fixing models. This technique has been further optimized by Coretti, Dodis, Guo, Steinberger (EUROCRYPT '18), and generalized by Coretti, Dodis, Guo (CRYPTO '18), resulting in powerful tools for proving non-uniform security bounds in various idealized models. We study the possibility of leveraging the presampling technique to the quantum world. To this end, (*) We show that such leveraging will resolve a major open problem in quantum computing, which is closely related to the famous Aaronson-Ambainis conjecture (ITCS '11). (*) Faced with this barrier, we give a new but equivalent bit-fixing model and a simple proof of presampling techniques for arbitrary oracle distribution in the classical setting, including AI-ROM and AI-RPM. Our theorem matches the best-known security loss and unifies previous presampling techniques. (*) Finally, we leverage our new classical presampling techniques to a novel ``quantum bit-fixing'' version of presampling. It matches the optimal security loss of the classical presampling. Using our techniques, we give the first post-quantum non-uniform security for salted Merkle-Damgård hash functions and reprove the tight non-uniform security for function inversion by Chung et al. (FOCS '20).
2021
TCC
On the (Ir)Replaceability of Global Setups, or How (Not) to Use a Global Ledger 📺
In universally composable (UC) security, a global setup is intended to capture the ideal behavior of a primitive which is accessible by multiple protocols, allowing them to share state. A representative example is the Bitcoin ledger. Indeed, since Bitcoin---and more generally blockchain ledgers---are known to be useful in various scenarios, it has become increasingly popular to capture such ledgers as global setup. Intuitively, one would expect UC to allow us to make security statements about protocols that use such a global setup, e.g., a global ledger, which can then be automatically translated into the setting where the setup is replaced by a protocol implementing it, such as Bitcoin. We show that the above reasoning is flawed and such a generic security-preserving replacement can only work under very (often unrealistic) strong conditions on the global setup and the security statement. For example, the UC security of Bitcoin for realizing a ledger proved by Badertscher {\em et al.} [CRYPTO'17] is {\em not} sufficient per se to allow us to replace the ledger by Bitcoin when used as a global setup. In particular, we cannot expect that all security statements in the global ledger-hybrid world would be preserved when using Bitcoin as a ledger. On the positive side, we provide characterizations of security statements for protocols that make use of global setups, for which the replacement is sound. Our results can be seen as a first guide on how to navigate the very tricky question of what constitutes a ``good'' global setup and how to use it in order to keep the modular protocol-design approach intact.
2021
TCC
Disappearing Cryptography in the Bounded Storage Model 📺
In this work, we study disappearing cryptography in the bounded storage model. Here, a component of the transmission, say a ciphertext, a digital signature, or even a program, is streamed bit by bit. The stream is too large for anyone to store in its entirety, meaning the transmission effectively disappears once the stream stops. We first propose the notion of online obfuscation, capturing the goal of disappearing programs in the bounded storage model. We give a negative result for VBB security in this model, but propose candidate constructions for a weaker security goal, namely VGB security. We then demonstrate the utility of VGB online obfuscation, showing that it can be used to generate disappearing ciphertexts and signatures. None of our applications are possible in the standard model of cryptography, regardless of the computational assumptions used.
2021
TCC
Trojan-Resilience without Cryptography 📺
Digital hardware Trojans are integrated circuits whose implementation differs from the specification in an arbitrary and malicious way. For example, the circuit can differ from its specified input/output behavior after some fixed number of queries (known as ``time bombs'') or on some particular input (known as ``cheat codes''). To detect such Trojans, countermeasures using multiparty computation (MPC) or verifiable computation (VC) have been proposed. On a high level, to realize a circuit with specification $\mathcal{F}$ one has more sophisticated circuits $\mathcal{F}^\diamond$ manufactured (where $\mathcal{F}^\diamond$ specifies an MPC or VC of $\mathcal{F}$), and then embeds these $\mathcal{F}^\diamond$'s into a \emph{master circuit} which must be trusted but is relatively simple compared to $\mathcal{F}$. Those solutions have a significant overhead as $\mathcal{F}^\diamond$ is significantly more complex than $\mathcal{F}$ and also the master circuits are not exactly trivial either. In this work, we show that in restricted settings, where $\mathcal{F}$ has no evolving state and is queried on independent inputs, we can achieve a relaxed security notion using very simple constructions. In particular, we do not change the specification of the circuit at all (i.e., $\mathcal{F}=\mathcal{F}^\diamond$). Moreover the master circuit basically just queries a subset of its manufactured circuits and checks if they're all the same. The security we achieve guarantees that, if the manufactured circuits are initially tested on up to $T$ inputs, the master circuit will catch Trojans that try to deviate on significantly more than a $1/T$ fraction of the inputs. This bound is optimal for the type of construction considered, and we provably achieve it using a construction where $12$ instantiations of $\mathcal{F}$ need to be embedded into the master. We also discuss an extremely simple construction with just $2$ instantiations for which we conjecture that it already achieves the optimal bound.
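The master circuit described above is deliberately trivial; a simplified sketch of the redundancy test (the paper's construction embeds $12$ instantiations and achieves the optimal $\sim 1/T$ bound, while the subset size here is an arbitrary placeholder):

```python
# Simplified redundancy-testing master: query several manufactured copies
# of the same circuit and output only if they all agree. The paper's actual
# master uses 12 instantiations with a precise sampling strategy; the
# subset size of 3 below is an arbitrary placeholder.
import random

def master(copies, x, subset_size: int = 3):
    outs = [c(x) for c in random.sample(copies, subset_size)]
    if any(o != outs[0] for o in outs):
        raise RuntimeError("Trojan detected: manufactured circuits disagree")
    return outs[0]
```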
2021
TCC
Vector and Functional Commitments from Lattices 📺
Vector commitment (VC) schemes allow one to commit concisely to an ordered sequence of values, so that the values at desired positions can later be proved concisely. In addition, a VC can be statelessly updatable, meaning that commitments and proofs can be updated to reflect changes to individual entries, using knowledge of just those changes (and not the entire vector). VCs have found important applications in verifiable outsourced databases, cryptographic accumulators, and cryptocurrencies. However, to date there have been relatively few post-quantum constructions, i.e., ones that are plausibly secure against quantum attacks. More generally, functional commitment (FC) schemes allow one to concisely and verifiably reveal various functions of committed data, such as linear functions (i.e., inner products, including evaluations of a committed polynomial). Under falsifiable assumptions, all known functional commitments schemes have been limited to ``linearizable'' functions, and there are no known post-quantum FC schemes beyond ordinary VCs. In this work we give post-quantum constructions of vector and functional commitments based on the standard Short Integer Solution lattice problem (appropriately parameterized): - First, we present new statelessly updatable VCs with significantly shorter proofs than (and efficiency otherwise similar to) the only prior post-quantum, statelessly updatable construction (Papamanthou et al., EUROCRYPT '13). Our constructions use private-key setup, in which an authority generates public parameters and then goes offline. - Second, we construct functional commitments for \emph{arbitrary (bounded) Boolean circuits} and branching programs. Under falsifiable assumptions, this is the first post-quantum FC scheme beyond ordinary VCs, and the first FC scheme of any kind that goes beyond linearizable functions. Our construction works in a new model involving an authority that generates the public parameters and remains online to provide public, reusable ``opening keys'' for desired functions of committed messages.
2021
TCC
Quantum Key-length Extension 📺
Should quantum computers become available, they will reduce the effective key length of basic secret-key primitives, such as blockciphers. To address this we will either need to use blockciphers with inherently longer keys or develop key-length extension techniques to amplify the security of a blockcipher to use longer keys. We consider the latter approach and revisit the FX and double encryption constructions. Classically, FX was proven to be a secure key-length extension technique, while double encryption fails to be more secure than single encryption due to a meet-in-the-middle attack. In this work we provide positive results, with concrete and tight bounds, for the security of both of these constructions against quantum attackers in ideal models. For FX, we consider a partially-quantum model, where the attacker has quantum access to the ideal primitive, but only classical access to FX. This is a natural model and also the strongest possible, since effective quantum attacks against FX exist in the fully-quantum model when quantum access is granted to both oracles. We provide two results for FX in this model. The first establishes the security of FX against non-adaptive attackers. The second establishes security against general adaptive attackers for a variant of FX using a random oracle in place of an ideal cipher. This result relies on the techniques of Zhandry (CRYPTO '19) for lazily sampling a quantum random oracle. An extension to perfectly lazily sampling a quantum random permutation, which would help resolve the adaptive security of standard FX, is an important but challenging open question. We introduce techniques for partially-quantum proofs without relying on analyzing the classical and quantum oracles separately, which is common in existing work. This may be of broader interest. For double encryption, we show that it amplifies strong pseudorandom permutation security in the fully-quantum model, strengthening a known result in the weaker sense of key-recovery security. This is done by adapting a technique of Tessaro and Thiruvengadam (TCC '18) to reduce the security to the difficulty of solving the list disjointness problem and then showing its hardness via a chain of reductions to the known quantum difficulty of the element distinctness problem.
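The two constructions under study are short enough to state in code; `E` below is a hash-based placeholder standing in for an ideal cipher, for illustration only (it is not even invertible):

```python
# The FX and double-encryption key-length extension constructions analyzed
# in the paper, over an abstract blockcipher E(key, x). The placeholder E
# is hash-based and illustration-only: it is not an invertible permutation
# or a real cipher. Keys are 128-bit integers, blocks 64-bit integers.
import hashlib

def E(key: int, x: int) -> int:
    d = hashlib.sha256(key.to_bytes(16, "big") + x.to_bytes(8, "big")).digest()
    return int.from_bytes(d[:8], "big")

def fx(k: int, k1: int, k2: int, x: int) -> int:
    # FX: whitening keys k1, k2 around a single call to E
    return E(k, x ^ k1) ^ k2

def double_enc(k1: int, k2: int, x: int) -> int:
    # double encryption: classically no stronger than single encryption
    # against key recovery, due to the meet-in-the-middle attack
    return E(k2, E(k1, x))
```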
2021
TCC
Fully-succinct Publicly Verifiable Delegation from Constant-Size Assumptions 📺
We construct a publicly verifiable, non-interactive delegation scheme for any polynomial size arithmetic circuit with proof-size and verification complexity comparable to those of pairing-based zk-SNARKs. Concretely, the proof consists of $O(1)$ group elements and verification requires $O(1)$ pairings and $n$ group exponentiations, where $n$ is the size of the input. While known SNARK-based constructions rely on non-falsifiable assumptions, our construction can be proven sound under any constant size ($k\geq 2$) $k$-Matrix Diffie-Hellman ($k$-MDDH) assumption. However, the size of the reference string as well as the prover's complexity are quadratic in the size of the circuit. This result demonstrates that we can construct delegation from very simple and well-understood assumptions. We consider this work a first step towards achieving practical delegation from standard, falsifiable assumptions. Our main technical contributions are, first, the introduction and construction of what we call "no-signaling, somewhere statistically binding commitment schemes". These commitments are extractable for any small part $x_S$ of an opening $x$, where $S\subseteq [n]$ is of size at most $K$. Here $n$ is the dimension of $x$ and $x_S=(x_i)_{i\in S}$. Importantly, for any $S'\subseteq S$, extracting $x_{S'}$ can be done independently of $S\setminus S'$. Second, we use these commitments to construct more efficient "quasi-arguments" with no-signaling extraction, introduced by Paneth and Rothblum (TCC '17). These arguments allow extracting parts of the witness of a statement and checking it against some local constraints without revealing which part is checked. We construct pairing-based quasi-arguments for linear and quadratic constraints and combine them with the low-depth delegation result of González et al. (Asiacrypt '19) to construct the final delegation scheme.
2021
TCC
BKW Meets Fourier: New Algorithms for LPN with Sparse Parities 📺
We consider the Learning Parity with Noise (LPN) problem with a sparse secret, where the secret vector $\mathbf{s}$ of dimension $n$ has Hamming weight at most $k$. We are interested in algorithms with asymptotic improvement in the \emph{exponent} beyond the state of the art. Prior work in this setting presented algorithms with runtime $n^{c \cdot k}$ for constant $c < 1$, obtaining a constant factor improvement over brute force search, which runs in time ${n \choose k}$. We obtain the following results: - We first consider the \emph{constant} error rate setting, and in this case present a new algorithm that leverages a subroutine from the acclaimed BKW algorithm [Blum, Kalai, Wasserman, J.~ACM '03] as well as techniques from Fourier analysis for $p$-biased distributions. Our algorithm achieves asymptotic improvement in the exponent compared to prior work, when the sparsity $k = k(n) = \frac{n}{\log^{1+ 1/c}(n)}$, where $c \in o(\log \log(n))$ and $c \in \omega(1)$. The runtime and sample complexity of this algorithm are approximately the same. - We next consider the \emph{low noise} setting, where the error is subconstant. We present a new algorithm in this setting that requires only a \emph{polynomial} number of samples and achieves asymptotic improvement in the exponent compared to prior work, when the sparsity $k = \frac{1}{\eta} \cdot \frac{\log(n)}{\log(f(n))}$ and noise rate of $\eta \neq 1/2$ and $\eta^2 = \left(\frac{\log(n)}{n} \cdot f(n)\right)$, for $f(n) \in \omega(1) \cap n^{o(1)}$. To obtain the improvement in sample complexity, we create subsets of samples using the \emph{design} of Nisan and Wigderson [J.~Comput.~Syst.~Sci. '94], so that any two subsets have a small intersection, while the number of subsets is large. Each of these subsets is used to generate a single $p$-biased sample for the Fourier analysis step. We then show that this allows us to bound the covariance of pairs of samples, which is sufficient for the Fourier analysis. - Finally, we show that our first algorithm extends to the setting where the noise rate is very high $1/2 - o(1)$, and in this case can be used as a subroutine to obtain new algorithms for learning DNFs and Juntas. Our algorithms achieve asymptotic improvement in the exponent for certain regimes. For DNFs of size $s$ with approximation factor $\epsilon$ this regime is when $\log \frac{s}{\epsilon} \in \omega \left( \frac{c}{\log n \log \log c}\right)$, and $\log \frac{s}{\epsilon} \in n^{1 - o(1)}$, for $c \in n^{1 - o(1)}$. For Juntas on $k$ variables, the regime is when $k \in \omega \left( \frac{c}{\log n \log \log c}\right)$, and $k \in n^{1 - o(1)}$, for $c \in n^{1 - o(1)}$.
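For concreteness, here is what a sparse-secret LPN instance looks like (sample generation only; the BKW/Fourier solvers discussed above are the paper's actual subject):

```python
# Generator for LPN samples with a k-sparse secret: each sample is
# (a, <a, s> + e) over GF(2), with Bernoulli(eta) noise e. Instance
# generation only -- no solver is implemented here.
import random

def sparse_secret(n: int, k: int):
    s = [0] * n
    for i in random.sample(range(n), k):
        s[i] = 1
    return s

def lpn_samples(s, eta: float, num: int):
    n = len(s)
    samples = []
    for _ in range(num):
        a = [random.getrandbits(1) for _ in range(n)]
        label = sum(ai * si for ai, si in zip(a, s)) % 2
        if random.random() < eta:
            label ^= 1                      # flip with probability eta
        samples.append((a, label))
    return samples

s = sparse_secret(n=64, k=4)
data = lpn_samples(s, eta=0.125, num=10)
```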
2021
TCC
Computational Robust (Fuzzy) Extractors for CRS-dependent Sources with Minimal Min-entropy 📺
Robust (fuzzy) extractors are very useful for, e.g., authenticated key exchange from a shared weak secret and remote biometric authentication against active adversaries. They enable two parties to extract the same uniform randomness with a ``helper'' string. More importantly, they have an authentication mechanism built in, ensuring that tampering with the ``helper'' string will be detected. Unfortunately, as shown by Dodis and Wichs, in the information-theoretic setting, a robust extractor for an $(n,k)$-source requires $k>n/2$, which is in sharp contrast with randomness extractors which only require $k=\omega(\log n)$. Existing works either rely on random oracles or introduce a CRS and work only for CRS-independent sources (even in the computational setting). In this work, we give a systematic study of robust (fuzzy) extractors for general CRS-{\em dependent} sources. We show that, in the information-theoretic setting, the same entropy lower bound holds even in the CRS model; we then show that we {\em can} have robust extractors in the computational setting for general CRS-dependent sources with only minimal min-entropy. We further extend our construction to robust fuzzy extractors. Along the way, we propose a new primitive called $\kappa$-MAC, which is unforgeable with a weak key and hides all partial information about the key (both properties hold even against auxiliary input); it may be of independent interest.
2021
TCC
Updatable Public Key Encryption in the Standard Model 📺
Forward security (FS) ensures that corrupting the current secret key in the system preserves the privacy or integrity of the prior usages of the system. Achieving forward security is especially hard in the setting of public-key encryption (PKE), where time is divided into periods, and in each period the receiver derives the next-period secret key from their current secret key, while the public key stays constant. Indeed, all current constructions of FS-PKE are built from hierarchical identity-based encryption (HIBE) and are rather complicated. Motivated by applications to secure messaging, recent works of Jost et al. (Eurocrypt’19) and Alwen et al. (CRYPTO’20) consider a natural relaxation of FS-PKE, which they term *updatable* PKE (UPKE). In this setting, the transition to the next period can be initiated by any sender, who can compute a special update ciphertext. This ciphertext directly produces the next-period public key and can be processed by the receiver to compute the next-period secret key. If done honestly, future (regular) ciphertexts produced with the new public key can be decrypted with the new secret key, but past such ciphertexts cannot be decrypted with the new secret key. Moreover, this is true even if all other previous-period updates were initiated by untrusted senders. Both papers also constructed a very simple UPKE scheme based on the CDH assumption in the random oracle model. However, they left open the question of building such schemes in the standard model, or based on other (e.g., post-quantum) assumptions, without using the heavy HIBE techniques. In this work, we construct two efficient UPKE schemes in the standard model, based on the DDH and LWE assumptions, respectively. Somewhat interestingly, our constructions gain their efficiency (compared to prior FS-PKE schemes from the same assumptions) by using tools from the area of circular-secure and leakage resilient public-key encryption schemes (rather than HIBE).
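The update mechanics of the simple discrete-log-based UPKE recalled above (the CDH/ROM-style scheme of Jost et al. and Alwen et al.) can be sketched in a few lines: an update hashed-ElGamal-encrypts a random shift $\delta$ under the current key, and both keys shift by $\delta$. Toy parameters; this mirrors the cited prior scheme, not the paper's standard-model constructions.

```python
# Toy sketch of UPKE update mechanics in a discrete-log group: the update
# ciphertext encrypts a random shift delta; pk' = pk * g^delta and
# sk' = sk + delta. Toy group parameters; schematic only.
import hashlib
import random

p = 2**127 - 1          # toy prime modulus (Mersenne); NOT a secure choice
g = 3

def H(x: int) -> int:   # hash to an exponent, ROM-style
    return int.from_bytes(hashlib.sha256(x.to_bytes(16, "big")).digest(), "big") % (p - 1)

def keygen():
    sk = random.randrange(1, p - 1)
    return sk, pow(g, sk, p)

def update(pk):
    delta = random.randrange(1, p - 1)
    r = random.randrange(1, p - 1)
    up_ct = (pow(g, r, p), (delta + H(pow(pk, r, p))) % (p - 1))  # hashed ElGamal of delta
    return up_ct, pk * pow(g, delta, p) % p                       # next-period public key

def process(sk, up_ct):
    c1, masked = up_ct
    delta = (masked - H(pow(c1, sk, p))) % (p - 1)
    return (sk + delta) % (p - 1)                                 # next-period secret key

sk, pk = keygen()
up_ct, pk2 = update(pk)
sk2 = process(sk, up_ct)
assert pow(g, sk2, p) == pk2
```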
2021
TCC
Generalized Pseudorandom Secret Sharing and Efficient Straggler-Resilient Secure Computation 📺
Secure multiparty computation (MPC) enables $n$ parties, of which up to $t$ may be corrupted, to perform joint computations on their private inputs while revealing only the outputs. Optimizing the asymptotic and concrete costs of MPC protocols has become an important line of research. Much of this research focuses on the setting of an honest majority, where $n \ge 2t+1$, which gives rise to concretely efficient protocols that are either information-theoretic or make a black-box use of symmetric cryptography. Efficiency can be further improved in the case of a {\em strong} honest majority, where $n>2t+1$. Motivated by the goal of minimizing the communication and latency costs of MPC with a strong honest majority, we make two related contributions. - {\bf Generalized pseudorandom secret sharing (PRSS).} Linear correlations serve as an important resource for MPC protocols and beyond. PRSS enables secure generation of many pseudorandom instances of such correlations without interaction, given replicated seeds of a pseudorandom function. We extend the PRSS technique of Cramer et al. (TCC 2015) for sharing degree-$d$ polynomials to new constructions leveraging a particular class of combinatorial designs. Our constructions yield a dramatic efficiency improvement when the degree $d$ is higher than the security threshold $t$, not only for standard degree-$d$ correlations but also for several useful generalizations. In particular, correlations for locally converting between slot configurations in ``share packing'' enable us to avoid the concrete overhead of prior works. - {\bf Cheap straggler resilience.} In reality, communication is not fully synchronous: protocol executions suffer from variance in communication delays and occasional node or message-delivery failures. We explore the benefits of PRSS-based MPC with a strong honest majority toward robustness against such failures, in turn yielding improved latency delays. In doing so we develop a novel technique for defending against a subtle ``double-dipping'' attack, which applies to the best existing protocols, with almost no extra cost in communication or rounds. Combining the above tools requires further work, including new methods for batch verification via distributed zero-knowledge proofs (Boneh et al., CRYPTO 2019) that apply to packed secret sharing. Overall, our work demonstrates new advantages of the strong honest majority setting, and introduces new tools---in particular, generalized PRSS---that we believe will be of independent use within other cryptographic applications.
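The classical PRSS of Cramer et al. that the paper generalizes is compact enough to sketch: one seed per maximal unqualified set, held by all parties outside it, and each party's Shamir share of a fresh pseudorandom value is a local PRF sum, with no interaction (toy field and parameters; the paper's contribution is the regime $d > t$ and the packed generalizations).

```python
# Classical replicated PRSS: one seed r_A per t-subset A of parties, given
# to all parties outside A. Party i locally derives its Shamir share of a
# fresh pseudorandom value as a PRF sum -- no interaction. Toy field only.
import hashlib
import secrets
from itertools import combinations

P = 2**61 - 1  # toy prime field

def prf(seed: bytes, ctx: int) -> int:
    return int.from_bytes(hashlib.sha256(seed + ctx.to_bytes(8, "big")).digest(), "big") % P

def f_A(A, x: int) -> int:
    # the degree-t polynomial with f_A(0) = 1 and f_A(j) = 0 for j in A
    val = 1
    for j in A:
        val = val * (x - j) % P * pow(-j % P, P - 2, P) % P
    return val

def prss_share(n: int, t: int, seeds: dict, i: int, ctx: int) -> int:
    # party i only uses seeds r_A with i not in A, which are exactly the
    # seeds it holds; the shared secret is sum_A prf(r_A, ctx)
    return sum(prf(seeds[A], ctx) * f_A(A, i)
               for A in combinations(range(1, n + 1), t) if i not in A) % P

# demo: n = 4, t = 1 -> one seed per singleton, given to the other parties
seeds = {A: secrets.token_bytes(16) for A in combinations(range(1, 5), 1)}
shares = [(i, prss_share(4, 1, seeds, i, ctx=7)) for i in range(1, 5)]
```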
2021
TCC
Towards Tight Adaptive Security of Non-Interactive Key Exchange 📺
We investigate the quality of security reductions for non-interactive key exchange (NIKE) schemes. Unlike for many other cryptographic building blocks (like public-key encryption, signatures, or zero-knowledge proofs), all known NIKE security reductions to date are non-tight, i.e., lose a factor of at least the number of users in the system. In that sense, NIKE forms a particularly elusive target for tight security reductions. The main technical obstacle in achieving tightly secure NIKE schemes is adaptive corruptions. Hence, in this work, we explore security notions and schemes that lie between selective security and fully adaptive security. Concretely: - We exhibit a tradeoff between key size and reduction loss. We show that a tighter reduction can be bought by larger public and secret NIKE keys. Concretely, we present a simple NIKE scheme with a reduction loss of O(N^2 log(\nu)/\nu^2), and public and secret keys of O(\nu) group elements, where N denotes the overall number of users in the system, and \nu is a freely adjustable scheme parameter. Our scheme achieves full adaptive security even against multiple "test queries" (i.e., adversarial challenges), but requires keys of size O(N) to achieve (almost) tight security under the matrix Diffie-Hellman assumption. Still, already this simple scheme circumvents existing lower bounds. - We show that this tradeoff is inherent. We contrast the security of our simple scheme with a lower bound for all NIKE schemes in which shared keys can be expressed as an ``inner product in the exponent''. This result covers the original Diffie-Hellman NIKE scheme, as well as a large class of its variants, and in particular our simple scheme. Our lower bound gives a tradeoff between the ``dimension'' of any such scheme (which directly corresponds to key sizes in existing schemes), and the reduction quality. For \nu = O(N), this shows that our simple scheme and reduction are optimal (up to a logarithmic factor). - We exhibit a tradeoff between security and key size for tight reductions. We show that it is possible to circumvent the inherent tradeoff above by relaxing the desired security notion. Concretely, we consider the natural notion of semi-adaptive security, where the adversary has to commit to a single test query after seeing all public keys. As a feasibility result, we bring forward the first scheme that enjoys compact public keys and tight semi-adaptive security under the conjunction of the matrix Diffie-Hellman and learning with errors assumptions. We believe that our results shed a new light on the role of adaptivity in NIKE security, and also illustrate the special role of NIKE when it comes to tight security reductions.
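The baseline being discussed, Diffie-Hellman NIKE, is the archetype of the ``inner product in the exponent'' class covered by the lower bound; a toy sketch:

```python
# Diffie-Hellman NIKE: both parties derive KDF(g^(ab)) from their own secret
# key and the other's public key, with no interaction. Toy parameters only.
import hashlib
import random

p = 2**127 - 1   # toy prime modulus; not a secure choice
g = 3

def keygen():
    sk = random.randrange(1, p - 1)
    return sk, pow(g, sk, p)

def shared_key(my_sk: int, their_pk: int) -> bytes:
    return hashlib.sha256(pow(their_pk, my_sk, p).to_bytes(16, "big")).digest()

a_sk, a_pk = keygen()
b_sk, b_pk = keygen()
assert shared_key(a_sk, b_pk) == shared_key(b_sk, a_pk)
```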
2021
TCC
Polynomial-time targeted attacks on coin-tossing for any number of corruptions 📺
Consider a coin tossing protocol in which n processors P_1,...,P_n agree on a random bit b in n rounds, where in round i P_i sends a single message w_i. Imagine a full-information adversary who prefers the output 1, and in every round i it knows all the finalized messages w_1,...,w_{i-1} so far as well as the prepared message w_i. A k-replacing attack will have a chance to replace the prepared w_i with its own choice w'_i \neq w_i in up to k rounds. The majority protocol over uniformly random bits w_i = b_i is robust in the following strong sense. Any k-replacing adversary can only increase the probability of outputting 1 by at most O(k/\sqrt{n}). In this work, we ask if the above simple protocol is tight. For the same setting, but restricted to uniformly random bit messages, Lichtenstein, Linial, and Saks [Combinatorica'89] showed how to achieve bias \Omega(k/\sqrt{n}) for any k \in [n]. Kalai, Komargodski, and Raz [DISC'18, Combinatorica'21] gave an alternative polynomial-time attack when k \geq \Theta(\sqrt{n}). Etesami, Mahloujifar, and Mahmoody [ALT'19, SODA'20] extended the result of KKR18 to arbitrarily long messages. In this work, we resolve both of these problems. - For arbitrary-length messages, we show that k-replacing polynomial-time attacks can indeed increase the probability of outputting 1 by \Omega(k/\sqrt{n}) for any k, which is optimal up to a constant factor. By plugging our attack into the framework of Mahloujifar and Mahmoody [TCC'17] we obtain similar data poisoning attacks against deterministic learners when the adversary is limited to changing k=o(\sqrt{n}) of the n training examples. - For uniformly random bits b_1,...,b_n, we show that whenever Pr[b=1]=Pr[\sum b_i \geq t]=\beta[t]_n for t \in [n] is the probability of a Hamming ball, then online polynomial-time k-replacing attacks can increase Pr[b=1] from \beta[t]_n to \beta[t-k]_n, which is optimal due to the majority protocol. In comparison, the (information-theoretic) attack of LLS89 increased Pr[b=1] to \beta[t-k]_{n-k}, which is optimal for adaptive adversaries who cannot see the message before changing it. Thus, we obtain a computational variant of Harper's celebrated vertex isoperimetric inequality.
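The robustness claim for the majority protocol is easy to probe empirically; a Monte-Carlo sketch with a greedy k-replacing attacker (an illustrative experiment, not the paper's attack, which handles arbitrary protocols and messages):

```python
# Monte-Carlo probe of the majority coin-tossing protocol: a greedy
# k-replacing attacker flips up to k prepared 0-bits to 1. The measured
# gain in Pr[b = 1] grows roughly like k/sqrt(n), consistent with the
# O(k/sqrt(n)) robustness bound stated in the abstract.
import random

def run(n: int, k: int) -> int:
    ones, budget = 0, k
    for _ in range(n):
        bit = random.getrandbits(1)
        if bit == 0 and budget > 0:
            bit, budget = 1, budget - 1   # replace the prepared message
        ones += bit
    return 1 if ones > n // 2 else 0

def estimate(n: int, k: int, trials: int = 4000) -> float:
    return sum(run(n, k) for _ in range(trials)) / trials

n = 401  # sqrt(n) ~ 20
for k in (0, 5, 10, 20):
    print(k, round(estimate(n, k), 3))
```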
2021
TCC
On expected polynomial runtime in cryptography 📺
A common definition of black-box zero-knowledge considers strict polynomial time (PPT) adversaries but expected polynomial time (EPT) simulation. This is necessary for constant-round black-box zero-knowledge in the plain model, and the asymmetry between simulator and adversary is an accepted consequence. Consideration of EPT adversaries naturally leads to designated adversaries, i.e. adversaries which are only required to be efficient in the protocol they are designed to attack. They were first examined in Feige's thesis [Fei90], where obstructions to proving security are shown. Prior work on (designated) EPT adversaries by Katz and Lindell (TCC'05) requires superpolynomial hardness assumptions, whereas the work of Goldreich (TCC'07) postulates “nice” behaviour under rewinding. In this work, we start from scratch and revisit the definition of efficient algorithms. We argue that the standard runtime classes, PPT and EPT, behave “unnaturally” from a cryptographic perspective. Namely, algorithms can have indistinguishable runtime distributions, yet one is considered efficient while the other is not. Hence, classical runtime classes are not “closed under indistinguishability”, which causes problems. Relaxations of PPT which are “closed” are (well-)known and used. We propose computationally expected polynomial time (CEPT), the class of runtimes which are (computationally) indistinguishable from EPT, which is “closed”. We analyze CEPT in the setting of uniform complexity (following Goldreich (JC'93)) with designated adversaries, and provide easy-to-check criteria for zero-knowledge protocols with black-box simulation in the plain model, which show that many (all known?) such protocols handle designated CEPT adversaries in CEPT.
2021
TCC
Relationships between quantum IND-CPA notions 📺
An encryption scheme is called indistinguishable under chosen plaintext attack (IND-CPA for short) if an attacker cannot distinguish the encryptions of two messages of their choice. There are other variants of this definition, but they all turn out to be equivalent in the classical case. In this paper, we give a comprehensive overview of these different variants of IND-CPA for symmetric encryption schemes in the quantum setting. We investigate the relationships between these notions and prove various equivalences, implications, non-equivalences, and non-implications between these variants.
2021
TCC
Blockchains Enable Non-Interactive MPC 📺
We propose to use blockchains to achieve MPC which does not require the participating parties to be online simultaneously or interact with each other. Parties who contribute inputs but do not wish to receive outputs can go offline after submitting a single message. In addition to our main result, we study combined communication- and state-complexity in MPC, as it has implications for the communication complexity of our main construction. Finally, we provide a variation of our main protocol which additionally provides guaranteed output delivery.
2021
TCC
Classical Binding for Quantum Commitments 📺
In classical commitments, statistical binding means that for almost any commitment transcript there is at most one possible opening. While quantum commitments (for classical messages) sometimes have benefits over their classical counterparts (e.g. in terms of assumptions), they provide a weaker notion of binding: essentially, that the sender cannot open a given commitment to a random value with probability noticeably greater than 1/2. We introduce a notion of classical binding for quantum commitments which provides guarantees analogous to the classical case. In our notion, the receiver performs a (partial) measurement of the quantum commitment string, and the outcome of this measurement determines a single value that the sender may open. We expect that our notion can replace classical commitments in various settings, leaving the security proof essentially unchanged. As an example, we show a soundness proof for the GMW zero-knowledge proof system. We construct a non-interactive quantum commitment scheme which is classically statistically binding and has a classical opening, based on the existence of any post-quantum one-way function. Prior candidates had inherently quantum openings and were not classically binding. In contrast, we show that it is impossible to achieve classical binding for statistically hiding commitments, regardless of assumption or round complexity. Our scheme is simply Naor's commitment scheme (which classically requires a common random string, CRS), but executed in superposition over all possible values of the CRS, and repeated several times. We hope that this technique for using quantum communication to remove a CRS may find other uses.
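For readers unfamiliar with the classical building block that the quantum scheme runs in superposition over, here is a minimal sketch of Naor's commitment. SHAKE-256 serves only as a stand-in for the length-tripling pseudorandom generator the scheme actually requires; no security is claimed for this toy instantiation.

```python
# Minimal sketch of Naor's bit commitment: receiver sends a random CRS r
# of 3n bits; sender commits to bit b as G(s) if b = 0, G(s) XOR r if b = 1.
# SHAKE-256 is a stand-in PRG purely for illustration.
import hashlib, secrets

N = 16  # seed length in bytes (illustrative)

def prg(seed: bytes) -> bytes:
    return hashlib.shake_256(seed).digest(3 * N)   # stretch n -> 3n

def commit(bit: int, crs: bytes):
    seed = secrets.token_bytes(N)
    out = prg(seed)
    if bit:
        out = bytes(a ^ b for a, b in zip(out, crs))  # G(s) XOR r
    return out, seed                                   # (commitment, opening)

def verify(com: bytes, crs: bytes, bit: int, seed: bytes) -> bool:
    expected = prg(seed)
    if bit:
        expected = bytes(a ^ b for a, b in zip(expected, crs))
    return com == expected

crs = secrets.token_bytes(3 * N)   # receiver's common random string r
com, opening = commit(1, crs)
assert verify(com, crs, 1, opening)
```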
2021
TCC
Covert Learning: How to Learn with an Untrusted Intermediary 📺
We consider the task of learning a function via oracle queries, where the queries and responses are monitored (and perhaps also modified) by an untrusted intermediary. Our goal is twofold: First, we would like to prevent the intermediary from gaining any information about either the function or the learner's intentions (e.g. the particular hypothesis class the learner is considering). Second, we would like to curb the intermediary's ability to meaningfully interfere with the learning process, even when it can modify the oracles' responses. Inspired by the works of Ishai et al. (Crypto 2019) and Goldwasser et al. (ITCS 2021), we formalize two new learning models, called Covert Learning and Covert Verifiable Learning, that capture these goals. Then, assuming hardness of the Learning Parity with Noise (LPN) problem, we show: 1. Covert Learning algorithms in the agnostic setting for parity functions and decision trees, where a polynomial-time eavesdropping adversary that observes all queries and responses learns nothing about either the function or the learned hypothesis. 2. Covert Verifiable Learning algorithms that provide similar learning and privacy guarantees, even in the presence of a polynomial-time adversarial intermediary that can modify all oracle responses. Here the learner is granted additional random examples and is allowed to abort whenever the oracles' responses are modified. Aside from its theoretical interest, our study is motivated by applications to the secure outsourcing of automated scientific discovery in drug design and molecular biology. It also uncovers limitations of current techniques for defending against model extraction attacks.
2021
TCC
Multi-party PSM, Revisited: Improved Communication and Unbalanced Communication 📺
We improve the communication complexity in the Private Simultaneous Messages (PSM) model, which is a minimal model of non-interactive information-theoretic multi-party computation. The state-of-the-art PSM protocols were recently constructed by Beimel, Kushilevitz and Nissim (EUROCRYPT 2018). We present new constructions of $k$-party PSM protocols. The new protocols match the previous upper bounds when $k=2$ or $3$ and improve the upper bounds for larger $k$. We also construct $2$-party PSM protocols with unbalanced communication complexity. More concretely, - For infinitely many $k$ (including all $k \leq 20$), we construct $k$-party PSM protocols for an arbitrary functionality $f:[N]^k\to\{0,1\}$, whose communication complexity is $O_k(N^{\frac{k-1}{2}})$. This improves the previously best known upper bounds of $O_k(N^{\frac{k}{2}})$ for $k\geq 6$, $O(N^{7/3})$ for $k=5$, and $O(N^{5/3})$ for $k=4$. - For all rational $0<\eta<1$ whose denominator is $\leq 20$, we construct 2-party PSM protocols for an arbitrary functionality $f:[N]\times[N]\to\{0,1\}$, whose communication complexity is $O(N^\eta)$ for one party and $O(N^{1-\eta})$ for the other. Previously, the only known unbalanced 2-party PSM protocols had communication complexity $O(\log(N))$ for one party and $O(N)$ for the other.
2021
TCC
Unclonable Encryption, Revisited 📺
Unclonable encryption, introduced by Broadbent and Lord (TQC'20), is an encryption scheme with the following attractive feature: given a ciphertext, an adversary cannot create two ciphertexts both of which decrypt to the same message as the original ciphertext. We revisit this notion and show the following: -Reusability: The constructions proposed by Broadbent and Lord have the disadvantage that they either guarantee one-time security (that is, the encryption key can only be used once to encrypt the message) in the plain model or they guarantee security in the random oracle model. We construct unclonable encryption schemes with semantic security. We present two constructions from minimal cryptographic assumptions: (i) a private-key unclonable encryption scheme assuming post-quantum one-way functions and (ii) a public-key unclonable encryption scheme assuming a post-quantum public-key encryption scheme. -Lower Bound and Generalized Construction: We revisit the information-theoretic one-time secure construction of Broadbent and Lord. The success probability of the adversary in their construction was guaranteed to be at most $0.85^n$, where $n$ is the length of the message. It was an interesting open question whether the ideal success probability of (negligibly close to) $0.5^n$ could be attained. We generalize their construction to be based on a broader class of monogamy-of-entanglement games (their construction was based on the BB84 game). We demonstrate a simple cloning attack that succeeds with probability $0.71^n$ against a class of schemes including that of Broadbent and Lord. We also present a $0.75^n$ cloning attack exclusively against their scheme. -Implication to Copy-Protection: We show that unclonable encryption satisfying a stronger property, called unclonable-indistinguishability (defined by Broadbent and Lord), implies copy-protection for a simple class of unlearnable functions. While we currently do not have encryption schemes satisfying this stronger property, this implication demonstrates a new path to construct copy-protection.
2021
TCHES
Speed Reading in the Dark: Accelerating Functional Encryption for Quadratic Functions with Reprogrammable Hardware 📺
Functional encryption is a new paradigm for encryption where decryption does not give the entire plaintext but only some function of it. Functional encryption has great potential in privacy-enhancing technologies but suffers from excessive computational overheads. We introduce the first hardware accelerator that supports functional encryption for quadratic functions. Our accelerator is implemented on a reprogrammable system-on-chip following the hardware/software codesign methodology. We benchmark our implementation for two privacy-preserving machine learning applications: (1) classification of handwritten digits from the MNIST database and (2) classification of clothes images from the Fashion MNIST database. In both cases, classification is performed with encrypted images. We show that our implementation offers speedups of over 200 times compared to a published software implementation and permits applications which are infeasible with software-only solutions.
2021
TCHES
Online Template Attacks: Revisited 📺
An online template attack (OTA) is a powerful technique previously used to attack elliptic curve scalar multiplication algorithms. This attack has only been analyzed in the realm of power consumption and EM side channels, where the leaked signals are related to the value being processed. However, microarchitecture signals have no such feature, invalidating some assumptions from previous OTA works. In this paper, we revisit previous OTA descriptions, proposing a generic framework and evaluation metrics for any side-channel signal. Our analysis reveals OTA features not previously considered, increasing its application scenarios and requiring a fresh countermeasure analysis to prevent it. In this regard, we demonstrate that OTAs can work in the backward direction, allowing an attacker to mount an augmented projective coordinates attack with respect to the proposal by Naccache, Smart and Stern (Eurocrypt 2004). This demonstrates that randomizing the initial targeted algorithm state does not prevent the attack, as believed in previous works. We analyze three libraries, libgcrypt, mbedTLS, and wolfSSL, using two microarchitecture side channels. For the libgcrypt case, we target its EdDSA implementation using the Curve25519 twist curve. We obtain similar results for mbedTLS and wolfSSL with curve secp256r1. For each library, we execute extensive attack instances that are able to recover the complete scalar in all cases using a single trace. This work demonstrates that microarchitecture online template attacks are also very powerful in this scenario, recovering secret information without knowing a leakage model. This highlights the importance of developing secure-by-default implementations, instead of fix-on-demand ones.
2021
TCHES
Efficiency through Diversity in Ensemble Models applied to Side-Channel Attacks: – A Case Study on Public-Key Algorithms – 📺
Deep Learning based Side-Channel Attacks (DL-SCA) are considered fundamental threats against secure cryptographic implementations. Side-channel attacks aim to recover a secret key using the least number of leakage traces. In DL-SCA, this often translates into having a model with the highest possible accuracy. Increasing an attack's accuracy is particularly important when an attacker targets public-key cryptographic implementations, where the recovery of each secret key bit is directly related to the model's accuracy. Commonly used in the deep learning field, ensemble models are a well-suited method that combines the predictions of multiple models to increase the ensemble accuracy by reducing the correlation between their errors. Linked to this correlation, diversity is considered an indicator of ensemble model performance. In this paper, we propose a new loss, namely Ensembling Loss (EL), that generates an ensemble model which increases the diversity between the members. Based on the mutual information between the ensemble model and its related label, we theoretically demonstrate how the ensemble members interact during the training process. We also study how an attack's accuracy gain translates into a drastic reduction of the remaining time complexity of a side-channel attack through multiple scenarios on public-key implementations. Finally, we experimentally evaluate the benefits of our new learning metric on RSA and ECC secure implementations. The Ensembling Loss increases the performance of the ensemble model by up to 6.8%, while the remaining brute-force effort is reduced by up to 2^22 operations, depending on the attack scenario.
2021
TCHES
Optimizing BIKE for the Intel Haswell and ARM Cortex-M4 📺
BIKE is a key encapsulation mechanism that entered the third round of the NIST post-quantum cryptography standardization process. This paper presents two constant-time implementations for BIKE, one tailored for the Intel Haswell and one tailored for the ARM Cortex-M4. Our Haswell implementation is much faster than the avx2 implementation written by the BIKE team: for bikel1, the level-1 parameter set, we achieve a 1.39x speedup for decapsulation (which is the slowest operation) and a 1.33x speedup for the sum of all operations. For bikel3, the level-3 parameter set, we achieve a 1.5x speedup for decapsulation and a 1.46x speedup for the sum of all operations. Our M4 implementation is more than two times faster than the non-constant-time portable implementation written by the BIKE team. The speedups are achieved by both algorithm-level and instruction-level optimizations.
2021
TCHES
Classic McEliece on the ARM Cortex-M4 📺
This paper presents a constant-time implementation of Classic McEliece for ARM Cortex-M4. Specifically, our target platform is stm32f4-Discovery, a development board on which the amount of SRAM is not even large enough to hold the public key of the smallest parameter sets of Classic McEliece. Fortunately, the flash memory is large enough, so we use it to store the public key. For the level-1 parameter sets mceliece348864 and mceliece348864f, our implementation takes 582 199 cycles for encapsulation and 2 706 681 cycles for decapsulation. Compared to the level-1 parameter set of FrodoKEM, our encapsulation time is more than 80 times faster, and our decapsulation time is more than 17 times faster. For the level-3 parameter sets mceliece460896 and mceliece460896f, our implementation takes 1 081 335 cycles for encapsulation and 6 535 186 cycles for decapsulation. In addition, our implementation is also able to carry out key generation for the level-1 parameter sets and decapsulation for level-5 parameter sets on the board.
2021
TCHES
Timing Black-Box Attacks: Crafting Adversarial Examples through Timing Leaks against DNNs on Embedded Devices 📺
Deep neural networks (DNNs) have been applied to various industries. In particular, DNNs on embedded devices have attracted considerable interest because they allow real-time and distributed processing on site. However, adversarial examples (AEs), which add small perturbations to the input data of DNNs to cause misclassification, are serious threats to DNNs. In this paper, a novel black-box attack is proposed to craft AEs based only on processing time, i.e., the side-channel leaks from DNNs on embedded devices. Unlike several existing black-box attacks that utilize output probability, the proposed attack exploits the relationship between the number of activated nodes and processing time without using training data, model architecture, parameters, substitute models, or output probability. The perturbations for AEs are determined by the differential processing time based on the input data of the DNNs in the proposed attack. The experimental results show that the AEs of the proposed attack effectively cause an increase in the number of activated nodes and the misclassification of one of the incorrect labels against the DNNs on a microcontroller unit. Moreover, these results indicate that the attack can evade gradient-masking and confidence-reduction countermeasures, which conceal the output probability in order to prevent several black-box attacks from crafting AEs. Finally, the countermeasures against the attack are implemented and evaluated to clarify that the implementation of an activation function with data-dependent timing leaks is the cause of the proposed attack.
2021
TCHES
Security and Trust in Open Source Security Tokens 📺
Using passwords for authentication has been proven vulnerable in countless security incidents. Hardware security tokens effectively prevent most password-related security issues and improve security indisputably. However, we would like to highlight that there are new threats from attackers with physical access which need to be discussed. Supply chain adversaries may manipulate devices on a large scale and install backdoors before they even reach end users. In evil maid scenarios, specific devices may even be attacked while already in use. Hence, we thoroughly investigate the security and trustworthiness of seven commercially available open source security tokens, including devices from the two market leaders: SoloKeys and Nitrokey. Unfortunately, we identify and practically verify significant vulnerabilities in all seven examined tokens. Some of them are based on severe, previously undiscovered, vulnerabilities of two major microcontrollers which are used at a large scale in various products. Our findings clearly emphasize the significant threat from supply chain and evil maid scenarios since the attacks are practical and only require moderate attacker efforts. Fortunately, we are able to describe software-based countermeasures as effective improvements to retrofit the examined devices. To improve the security and trustworthiness of future security tokens, we also derive important general design recommendations.
2021
TCHES
Breaking Masked Implementations with Many Shares on 32-bit Software Platforms: or When the Security Order Does Not Matter 📺
We explore the concrete side-channel security provided by state-of-the-art higher-order masked software implementations of the AES and of Clyde (a candidate in the NIST Lightweight Cryptography competition), in ARM Cortex-M0 and M3 devices. Rather than looking for possibly reduced security orders (as frequently considered in the literature), we directly target these implementations by assuming their maximum security order and aim at reducing their noise level thanks to multivariate, horizontal and analytical attacks. Our investigations point out that the Cortex-M0 device has so limited physical noise that masking is close to ineffective. The Cortex-M3 shows a better trend but still requires a large number of shares to provide strong security guarantees. Practically, we first exhibit a full 128-bit key recovery in less than 10 traces for a 6-share masked AES implementation running on the Cortex-M0, requiring 2^32 enumeration power. A similar attack performed against the Cortex-M3 with 5 shares requires 1,000 measurements with 2^44 enumeration power. We then show the positive impact of lightweight block ciphers with a limited number of AND gates for side-channel security, and compare our attacks against a masked Clyde with the best reported attacks of the CHES 2020 CTF. We complement these experiments with a careful information-theoretic analysis, which allows interpreting our results. We also discuss our conclusions under the umbrella of “backwards security evaluations” recently put forward by Azouaoui et al. We finally extrapolate the evolution of the proposed attack complexities in the presence of additional countermeasures using the local random probing model proposed at CHES 2020.
2021
TCHES
Pay Attention to Raw Traces: A Deep Learning Architecture for End-to-End Profiling Attacks 📺
With the renaissance of deep learning, the side-channel community has also noticed the potential of this technology, which is highly related to profiling attacks in the side-channel context. Many papers have recently investigated the abilities of deep learning in profiling traces. Some of them also target the countermeasures (e.g., masking) simultaneously. Nevertheless, so far, all of these papers work with an (implicit) assumption that the number of time samples in raw traces can be reduced before the profiling, i.e., that the position of points of interest (PoIs) can be manually located. This is arguably the most challenging part of a practical black-box analysis targeting an implementation protected by masking. Therefore, we argue that to fully utilize the potential of deep learning and get rid of any manual intervention, end-to-end profiling directly mapping raw traces to target intermediate values is demanded. In this paper, we propose a neural network architecture that consists of encoders, attention mechanisms and a classifier to conduct the end-to-end profiling. The networks built by our architecture can directly classify traces that contain a large number of time samples (i.e., raw traces without manual feature extraction) whose underlying implementation is protected by masking. We validate our networks on several public datasets, i.e., DPA contest v4 and ASCAD, where over 100,000 time samples are directly used in profiling. To the best of our knowledge, we are the first to successfully carry out end-to-end profiling attacks. The results on the datasets indicate that our networks can get rid of the tricky manual feature extraction. Moreover, our networks perform even systematically better (w.r.t. the number of traces in attacks) than those trained on the reduced traces. These validations imply our approach is not only a first but also a concrete step towards end-to-end profiling attacks in the side-channel context.
2021
TCHES
Analysis and Comparison of Table-based Arithmetic to Boolean Masking 📺
Masking is a popular technique to protect cryptographic implementations against side-channel attacks and comes in several variants, including Boolean and arithmetic masking. Some masked implementations require conversion between these two variants, which is increasingly the case for masking of post-quantum encryption and signature schemes. One way to perform Arithmetic to Boolean (A2B) mask conversion is a table-based approach first introduced by Coron and Tchulkine, and later corrected and adapted by Debraize at CHES 2012. In this work, we show both analytically and experimentally that the table-based A2B conversion algorithm proposed by Debraize does not achieve the claimed resistance against differential power analysis due to a non-uniform masking of an intermediate variable. This non-uniformity is hard to find analytically but leads to clear leakage in experimental validation. To address the non-uniform masking issue, we propose two new A2B conversions: one that maintains efficiency at the cost of additional memory and one that trades efficiency for a reduced memory footprint. We give analytical and experimental evidence for their security, and will make their implementations, which are shown to be free from side-channel leakage in 100,000 power traces collected on the ARM Cortex-M4, available online. We conclude that when designing side-channel protection mechanisms, it is of paramount importance to perform both a theoretical analysis and an experimental validation of the method.
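To fix intuition on what an A2B conversion must compute, the sketch below shows first-order arithmetic and Boolean maskings of a k-bit secret and a direct conversion between them. The direct conversion deliberately recombines the shares and is therefore not a countermeasure; secure table-based conversions in the style of Coron-Tchulkine and Debraize realize the same input-output relation without ever unmasking the secret.

```python
# First-order arithmetic vs. Boolean masking of a k-bit secret x, and the
# relation an A2B conversion must realize. The direct conversion below
# unmasks x internally, so it is deliberately NOT side-channel secure;
# it only pins down the map that secure table-based A2B computes.
import secrets

K = 16
MASK = (1 << K) - 1

def arith_mask(x):
    r = secrets.randbelow(1 << K)
    return (x - r) & MASK, r          # x = A + r  (mod 2^k)

def insecure_a2b(A, r):
    x = (A + r) & MASK                # recombines shares -> leaks in practice
    s = secrets.randbelow(1 << K)
    return x ^ s, s                   # x = B XOR s

x = 0xBEEF & MASK
A, r = arith_mask(x)
B, s = insecure_a2b(A, r)
assert ((A + r) & MASK) == (B ^ s) == x
```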
2021
TCHES
AES-LBBB: AES Mode for Lightweight and BBB-Secure Authenticated Encryption 📺
In this paper, a new lightweight authenticated encryption scheme AES-LBBB is proposed, which is designed to provide backward compatibility with the Advanced Encryption Standard (AES) as well as high security and low memory. The primary design goal, backward compatibility, is motivated by the fact that AES accelerators are now very common for devices in the field; we are interested in designing an efficient and highly secure mode of operation that exploits the best of those AES accelerators. Backward compatibility has received little attention in the NIST lightweight cryptography standardization process, in which only 3 out of 32 round-2 candidates are based on AES. Our mode, LBBB, is inspired by the design of ALE in the sense that the internal state size is a minimum of 2n bits when using a block cipher of length n bits for the key and data. Unfortunately, there is no security proof of ALE, and forgery attacks have been found on ALE. In LBBB, we introduce an additional feed from the block cipher's output to the key state via a certain permutation λ, which enables us to prove beyond-birthday-bound (BBB) security. We then specify its AES instance, AES-LBBB, and evaluate its performance for (i) software implementation on a microcontroller with an AES coprocessor and (ii) hardware implementation for an application-specific integrated circuit (ASIC) to show that AES-LBBB performs better than the current state-of-the-art Remus-N2 with AES-128.
2021
TCHES
Attacking and Defending Masked Polynomial Comparison for Lattice-Based Cryptography 📺
In this work, we are concerned with the hardening of post-quantum key encapsulation mechanisms (KEMs) against side-channel attacks, with a focus on the comparison operation required for the Fujisaki-Okamoto (FO) transform. We identify critical vulnerabilities in two proposals for masked comparison and successfully attack the masked comparison algorithms from TCHES 2018 and TCHES 2020. To do so, we use first-order side-channel attacks and show that the advertised security properties do not hold. Additionally, we break the higher-order secured masked comparison from TCHES 2020 using a collision attack, which does not require side-channel information. To enable implementers to spot such flaws in the implementation or underlying algorithms, we propose a framework that is designed to test the re-encryption step of the FO transform for information leakage. Our framework relies on a specifically parametrized t-test and would have identified the previously mentioned flaws in the masked comparison. Our framework can be used to test both the comparison itself and the full decapsulation implementation.
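The control flow under attack is the re-encryption check of the FO transform. The following sketch shows that flow with a toy hash-based stand-in for the underlying deterministic encryption (no security is claimed for the toy scheme; it only makes the comparison step, the target of the masking flaws and of the proposed t-test framework, concrete and runnable).

```python
# Control-flow sketch of FO decapsulation: decrypt, re-encrypt with coins
# derived from the message, and compare ciphertexts. ToyPKE below is a
# hash-based stand-in with no security; it does not model any NIST scheme.
import hashlib, secrets

def H(*parts): return hashlib.sha3_256(b"|".join(parts)).digest()

def toy_enc(pk, m, coins):          # "pk" doubles as "sk" in this toy
    pad = H(pk, coins)
    return coins + bytes(a ^ b for a, b in zip(m, pad))

def toy_dec(sk, ct):
    coins, body = ct[:32], ct[32:]
    pad = H(sk, coins)
    return bytes(a ^ b for a, b in zip(body, pad))

def decapsulate(pk, sk, ct):
    m = toy_dec(sk, ct)
    # FO check: the (masked) comparison attacked above lives exactly here;
    # the paper's framework t-tests this re-encryption step for leakage.
    if toy_enc(pk, m, H(b"G", m)) != ct:
        return H(b"reject", sk, ct)      # implicit rejection
    return H(b"K", m, ct)

sk = pk = secrets.token_bytes(32)
m = secrets.token_bytes(32)
ct = toy_enc(pk, m, H(b"G", m))          # honest encapsulation
assert decapsulate(pk, sk, ct) == H(b"K", m, ct)
```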
2021
TCHES
Cutting Through the Complexity of Reverse Engineering Embedded Devices 📺
Performing security analysis of embedded devices is a challenging task. They present many difficulties not usually found when analyzing commodity systems: undocumented peripherals, esoteric instruction sets, and limited tool support. Thus, a significant amount of reverse engineering is almost always required to analyze such devices. In this paper, we present Incision, an architecture and operating-system agnostic reverse engineering framework. Incision tackles the problem of reducing the upfront effort to analyze complex end-user devices. It combines static and dynamic analyses in a feedback loop, enabling information from each to be used in tandem to improve our overall understanding of the firmware analyzed. We use Incision to analyze a variety of devices and firmware. Our evaluation spans firmware based on three RTOSes, an automotive ECU, and a 4G/LTE baseband. We demonstrate that Incision does not introduce significant complexity to the standard reverse engineering process and requires little manual effort to use. Moreover, its analyses produce correct results with high confidence and are robust across different OSes and ISAs.
2021
TCHES
Learning Parity with Physical Noise: Imperfections, Reductions and FPGA Prototype 📺
Hard learning problems are important building blocks for the design of various cryptographic functionalities such as authentication protocols and post-quantum public key encryption. The standard implementations of such schemes add some controlled errors to simple (e.g., inner product) computations involving a public challenge and a secret key. Hard physical learning problems formalize the potential gains that could be obtained by leveraging inexact computing to directly generate erroneous samples. While they have good potential for improving the performances and physical security of more conventional samplers when implemented in specialized integrated circuits, it remains unknown whether physical defaults that inevitably occur in their instantiation can lead to security losses, nor whether their implementation can be viable on standard platforms such as FPGAs. We contribute to these questions in the context of the Learning Parity with Physical Noise (LPPN) problem by: (1) exhibiting new (output) data dependencies of the error probabilities that LPPN samples may suffer from; (2) formally showing that LPPN instances with such dependencies are as hard as the standard LPN problem; (3) analyzing an FPGA prototype of LPPN processor that satisfies basic security and performance requirements.
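For reference, the standard LPN samples that an LPPN sampler emulates look as follows; in LPPN the Bernoulli error is not drawn explicitly but introduced physically by inexact computation of the inner product. The parameters n and tau below are illustrative.

```python
# Minimal sketch of standard LPN sample generation: samples (a, <a,s> + e)
# over GF(2) with Bernoulli(tau) noise. An LPPN sampler would instead let
# inexact (physical) computation of the inner product introduce the error.
import random

n, tau = 128, 0.125                        # secret length, noise rate (toy)
s = [random.getrandbits(1) for _ in range(n)]

def lpn_sample():
    a = [random.getrandbits(1) for _ in range(n)]
    e = 1 if random.random() < tau else 0              # Bernoulli(tau) error
    b = (sum(ai & si for ai, si in zip(a, s)) + e) & 1 # <a,s> + e mod 2
    return a, b

samples = [lpn_sample() for _ in range(10)]
```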
2021
TCHES
Breaking CAS-Lock and Its Variants by Exploiting Structural Traces 📺
Logic locking is a prominent solution to protect against design intellectual property theft. However, there has been a decade-long cat-and-mouse game between defenses and attacks. A turning point in logic locking was the development of the miter-based Boolean satisfiability (SAT) attack that steered the research in the direction of developing SAT-resilient schemes. These schemes, however, achieved SAT resilience at the cost of low output corruption. Recently, cascaded locking (CAS-Lock) [SXTF20a] was proposed, which provides non-trivial output corruption all the while maintaining resilience to the SAT attack. Regardless of the theoretical properties, we revisit some of the assumptions made about its implementation, especially about security-unaware synthesis tools, and subsequently expose a set of structural vulnerabilities that can be exploited to break these schemes. We propose our attacks on baseline CAS-Lock as well as mirrored CAS (M-CAS), an improved version of CAS-Lock. We furnish extensive simulation results of our attacks on ISCAS'85 and ITC'99 benchmarks, where we show that CAS-Lock/M-CAS can be broken with ∼94% success rate. Further, we open-source all implementation scripts, locked circuits, and attack scripts for the community. Finally, we discuss the pitfalls of point-function-based locking techniques, including Anti-SAT [XS18] and Stripped Functionality Logic Locking (SFLL-HD) [YSN+17], which suffer from similar implementation issues.
2021
TCHES
Denial-of-Service on FPGA-based Cloud Infrastructures — Attack and Defense 📺
This paper presents attacks targeting the FPGAs of AWS F1 instances at the electrical level through power-hammering, where excessive dynamic power is used to crash FPGA instances. We demonstrate different power-hammering attacks that pass all AWS security fences implemented on F1 instances, including the FPGA vendor design rule checks. In addition, we fingerprint the FPGA instances to observe the responsiveness of the instances, which indicates a successful denial-of-service attack. Most importantly, we provide an FPGA virus scanner framework, which was improved to support large datacenter FPGAs, for preventing such attacks, including virtually all currently demonstrated side-channel attacks. Our experiments showed that an AWS F1 instance crashes immediately when starting an FPGA design demanding 369 W. Using FPGA fingerprinting, we found that crashed instances are unavailable for about one to over 200 hours.
2021
TCHES
Information Leakages in Code-based Masking: A Unified Quantification Approach 📺
This paper presents a unified approach to quantifying the information leakages in the most general code-based masking schemes. Specifically, by utilizing a uniform representation, we first highlight that the side-channel resistance of all code-based masking schemes can be quantified by an all-in-one framework consisting of two easy-to-compute parameters (the dual distance and the number of conditioned codewords) from a coding-theoretic perspective. In particular, we use the signal-to-noise ratio (SNR) and mutual information (MI) as two complementary metrics, where a closed-form expression of the SNR and an approximation of the MI are proposed by connecting both metrics to the two coding-theoretic parameters. Secondly, considering the connection between Reed-Solomon codes and the SSS (Shamir's Secret Sharing) scheme, SSS-based masking is viewed as a particular case of generalized code-based masking. Hence, as a straightforward application, we evaluate the impact of public points on the side-channel security of SSS-based masking schemes, namely polynomial masking, and enhance SSS-based masking by choosing optimal public points for it. Interestingly, we show that given a specific security order, more shares in SSS-based masking leak more information on secrets in an information-theoretic sense. Finally, our approach provides a systematic method for optimizing the side-channel resistance of every code-based masking. More precisely, this approach enables us to select optimal linear codes (parameters) for generalized code-based masking by choosing appropriate codes according to the two coding-theoretic parameters. Summing up, we provide a best-practice guideline for the application of code-based masking to protect cryptographic implementations.
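One of the two metrics is the classical side-channel SNR, Var_x(E[L|x]) / E_x(Var[L|x]) for leakage L grouped by the sensitive value x. Below is a minimal estimator on synthetic Hamming-weight leakage; all data is simulated and the leakage model is an illustrative assumption, not taken from the paper.

```python
# Estimate the side-channel SNR = Var_x(E[L|x]) / E_x(Var[L|x]) on
# synthetic traces: a 4-bit sensitive value leaking its Hamming weight
# plus Gaussian noise (all values simulated for illustration).
import numpy as np

rng = np.random.default_rng(1)
x = rng.integers(0, 16, size=20000)             # sensitive intermediate values
hw = np.array([bin(v).count("1") for v in x])   # Hamming-weight leakage model
L = 0.5 * hw + rng.normal(0.0, 1.0, size=20000) # leakage samples

means = np.array([L[x == v].mean() for v in range(16)])  # E[L|x]
vars_ = np.array([L[x == v].var() for v in range(16)])   # Var[L|x]
snr = means.var() / vars_.mean()
print(f"estimated SNR: {snr:.3f}")
```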
2021
TCHES
Let’s Take it Offline: Boosting Brute-Force Attacks on iPhone’s User Authentication through SCA 📺
In recent years, smartphones have become an increasingly important storage facility for personal sensitive data, ranging from photos and credentials to financial and medical records such as credit cards and a person's diseases. Trivially, it is critical to secure this information and only provide access to the genuine and authenticated user. Smartphone vendors have already taken exceptional care to protect user data by means of various software and hardware security features like code signing, an authenticated boot chain, a dedicated co-processor, and integrated cryptographic engines with hardware-fused keys. Despite these obstacles, adversaries have successfully broken through various software protections in the past, leaving only the hardware as the last standing barrier between the attacker and user data. In this work, we build upon existing software vulnerabilities and break through the final barrier by performing the first publicly reported physical Side-Channel Analysis (SCA) attack on an iPhone in order to extract the hardware-fused device-specific User Identifier (UID) key. This key – once at hand – allows the adversary to perform an offline brute-force attack on the user passcode employing an optimized and scalable implementation of the Key Derivation Function (KDF) on a Graphics Processing Unit (GPU) cluster. Once the passcode is revealed, the adversary has full access to all user data stored on the device and possibly in the cloud. As the software exploit enables acquisition and processing of hundreds of millions of traces, this work further shows that an attacker being able to query arbitrarily many chosen-data encryption/decryption requests is a realistic model, even for compact systems with advanced software protections, and emphasizes the need for assessing resilience against SCA for a very high number of traces.
2021
TCHES
Inconsistency of Simulation and Practice in Delay-based Strong PUFs 📺
The developments in the area of strong Physical Unclonable Functions (PUFs) reflect an ongoing struggle between designers and attackers. Such a combat motivated an atmosphere of open research, hence enhancing PUF designs in the presence of Machine Learning (ML) attacks. As an example of this controversy, at CHES 2019 a novel delay-based PUF (iPUF) was introduced and claimed to be resistant against various ML and reliability attacks. At CHES 2020, a new divide-and-conquer modeling attack (splitting iPUF) was presented, showing the vulnerability of even large iPUF variants. Such attacks and analyses are naturally examined purely in the simulation domain, where some metrics like uniformity are assumed to be ideal. This assumption is motivated by a common belief that implementation defects (such as bias) may ease the attacks. In this paper, we highlight the critical role of uniformity in the success of ML attacks, and for the first time present a case where the bias originating from implementation defects hardens certain learning problems in complex PUF architectures. We present the results of our investigations conducted on a cluster of 100 Xilinx Artix-7 FPGAs, showing the incapability of the splitting iPUF attack to model even small iPUF instances when facing a slight non-uniformity. In fact, our findings imply that non-ideal conditions due to implementation defects should also be considered when developing an attack vector on complex PUF architectures like the iPUF. On the other hand, we observe a relatively low uniqueness even when following the suggestions made by the iPUF's original authors with respect to the FPGA implementations, which indeed questions the promised physical unclonability.
2021
TCHES
DL-LA: Deep Learning Leakage Assessment: A modern roadmap for SCA evaluations 📺
In recent years, deep learning has become an attractive ingredient of side-channel analysis (SCA) due to its potential to improve the success probability or enhance the performance of certain frequently executed tasks. One task that is commonly assisted by machine learning techniques is the profiling of a device's leakage behavior in order to carry out a template attack. At CHES 2019, deep learning was also applied to non-profiled scenarios for the first time, extending its reach within SCA beyond template attacks. The proposed method, called DDLA, has some tempting advantages over traditional SCA due to merits inherited from (convolutional) neural networks. Most notably, it greatly reduces the need for pre-processing steps when the SCA traces are misaligned or when the leakage is of a multivariate nature. However, similar to traditional attack scenarios, the success of this approach highly depends on the correct choice of a leakage model and the intermediate value to target. In this work we explore, for the first time in the literature, whether deep learning can similarly be used as an instrument to advance another crucial (non-profiled) discipline of SCA which is inherently independent of leakage models and targeted intermediates, namely leakage assessment. In fact, given the simple classification-based nature of common leakage assessment techniques, in particular distinguishing two groups (fixed-vs-random or fixed-vs-fixed), it comes as a surprise that machine learning has not been brought into this context yet. Our contribution is the development of the first full leakage assessment methodology based on deep learning. It gives the evaluator the freedom to not worry about the location, alignment and statistical order of the leakages, and easily covers multivariate and horizontal patterns as well. We test our approach against a number of case studies based on FPGA, ASIC and μC implementations of the PRESENT block cipher, equipped with state-of-the-art SCA countermeasures. Our results clearly show that the proposed methodology and network structures are robust across all case studies and outperform the classical detection approaches (t-test and χ2-test) in all considered scenarios.
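For comparison, the classical fixed-vs-random detection approach that DL-LA is benchmarked against is Welch's t-test with the customary |t| > 4.5 threshold. Here is a minimal sketch on synthetic traces; the injected leakage location and all parameters are illustrative.

```python
# Classical fixed-vs-random leakage assessment with Welch's t-test:
# |t| > 4.5 at any time sample flags first-order leakage. Synthetic
# traces with leakage injected at sample 500 (all data simulated).
import numpy as np

rng = np.random.default_rng(0)
fixed = rng.normal(0.0, 1.0, size=(5000, 1000))   # fixed-input trace set
rand  = rng.normal(0.0, 1.0, size=(5000, 1000))   # random-input trace set
fixed[:, 500] += 0.2                              # inject leakage at sample 500

def welch_t(a, b):
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(va / len(a) + vb / len(b))

t = welch_t(fixed, rand)
print("samples over threshold:", np.flatnonzero(np.abs(t) > 4.5))
```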
2021
TCHES
Probing Security through Input-Output Separation and Revisited Quasilinear Masking 📺
The probing security model is widely used to formally prove the security of masking schemes. Whenever a masked implementation can be proven secure in this model with a reasonable leakage rate, it is also provably secure in a realistic leakage model known as the noisy leakage model. This paper introduces a new framework for the composition of probing-secure circuits. We introduce the security notion of input-output separation (IOS) for a refresh gadget. From this notion, one can easily compose gadgets satisfying the classical probing security notion – which does not ensure composability on its own – to obtain a region probing secure circuit. Such a circuit is secure against an adversary placing up to t probes in each gadget composing the circuit, which ensures a tight reduction to the more realistic noisy leakage model. After introducing the notion and proving our composition theorem, we compare our approach to the composition approaches obtained with the (Strong) Non-Interference (S/NI) notions as well as the Probe-Isolating Non-Interference (PINI) notion. We further show that any uniform SNI gadget achieves the IOS security notion, while the converse is not true. We further describe a refresh gadget achieving the IOS property for any linear sharing with a quasilinear complexity Θ(n log n) and an O(1/log n) leakage rate (for an n-size sharing). This refresh gadget is a simplified version of the quasilinear SNI refresh gadget proposed by Battistello, Coron, Prouff, and Zeitoun (ePrint 2016). As an application of our composition framework, we revisit the quasilinear-complexity masking scheme of Goudarzi, Joux and Rivain (Asiacrypt 2018). We improve this scheme by generalizing it to any base field (whereas the original proposal only applies to fields with n-th roots of unity) and by taking advantage of our composition approach. We further patch a flaw in the original security proof and extend it from the random probing model to the stronger region probing model. Finally, we present some applications of this extended quasilinear masking scheme to AES and MiMC and compare the obtained performances.
2021
TCHES
Improved Leakage-Resistant Authenticated Encryption based on Hardware AES Coprocessors 📺
We revisit Unterstein et al.'s leakage-resilient authenticated encryption scheme from CHES 2020. Its main goal is to enable secure software updates by leveraging unprotected (e.g., AES, SHA256) coprocessors available on low-end microcontrollers. We show that the design of this scheme ignores an important attack vector that can significantly reduce its security claims, and that the evaluation of its leakage-resilient PRF is quite sensitive to minor variations of its measurements, which can easily lead to security overstatements. We then describe and analyze a new mode of operation for which we propose more conservative security parameters, and show that it competes with the CHES 2020 one in terms of performance. As an additional bonus, our solution relies only on AES-128 coprocessors.
2021
TCHES
Reinforcement Learning for Hyperparameter Tuning in Deep Learning-based Side-channel Analysis 📺
Deep learning represents a powerful set of techniques for profiling side-channel analysis. The results of the last few years show that neural network architectures like multilayer perceptrons and convolutional neural networks give strong attack performance, making it possible to break targets protected with various countermeasures. Considering that deep learning techniques commonly have a plethora of hyperparameters to tune, it is clear that such top attack results can come with a high price in preparing the attack. This is especially problematic as the side-channel community commonly uses random search or grid search techniques to look for the best hyperparameters. In this paper, we propose to use reinforcement learning to tune the convolutional neural network hyperparameters. In our framework, we investigate the Q-Learning paradigm and develop two reward functions that use side-channel metrics. We mount an investigation on three commonly used datasets and two leakage models, where the results show that reinforcement learning can find convolutional neural networks exhibiting top performance while having small numbers of trainable parameters. We note that our approach is automated and can be easily adapted to different datasets. Several of our newly developed architectures outperform the current state-of-the-art results. Finally, we make our source code publicly available at https://github.com/AISyLab/Reinforcement-Learning-for-SCA
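Below is a drastically simplified sketch of the idea, reduced to a one-step Q-update (bandit form) over a single hyperparameter axis. The hyperparameter grid and the stubbed reward are assumptions for illustration only; the paper's framework trains a CNN per choice and scores it with side-channel metrics such as guessing entropy.

```python
# Highly simplified Q-learning-style hyperparameter search (bandit form).
# The grid of filter counts and the reward stub are illustrative
# assumptions, not the paper's reward functions.
import random

filters = [4, 8, 16, 32]             # one hyperparameter axis (illustrative)
Q = {f: 0.0 for f in filters}        # Q-value per choice
alpha, eps = 0.3, 0.2                # learning rate, exploration rate

def reward(f):
    # Stub: the real framework would train a CNN with 'f' filters and
    # reward attacks that need fewer traces to reach key rank 0.
    return -abs(f - 16) / 16.0 + random.gauss(0, 0.05)

for episode in range(200):
    # epsilon-greedy action selection over the hyperparameter grid
    f = random.choice(filters) if random.random() < eps else max(Q, key=Q.get)
    Q[f] += alpha * (reward(f) - Q[f])   # one-step Q-update

print("selected filter count:", max(Q, key=Q.get))
```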
2021
TCHES
Second-Order SCA Security with almost no Fresh Randomness 📺
Masking schemes are among the most popular countermeasures against Side-Channel Analysis (SCA) attacks. Realization of masked implementations on hardware faces several difficulties, including dealing with glitches. Threshold Implementation (TI) is known as the first strategy with provable security in the presence of glitches. In addition to the desired security order d, TI defines the minimum number of shares to also depend on the algebraic degree of the target function. This may lead to unaffordable implementation costs for higher orders. For example, at least five shares are required to protect the smallest nonlinear function against second-order attacks. By cutting such a dependency, the successor schemes are able to achieve the same security level with just d + 1 shares, at the cost of a high demand for fresh randomness, particularly at higher orders. In this work, we provide a methodology to realize the second-order glitch-extended probing-secure implementation of a group of quadratic functions with three shares and no fresh randomness. This allows us to construct second-order secure implementations of several cryptographic primitives with a very limited number of fresh masks, including Keccak, SKINNY, Midori, PRESENT, and PRINCE.
2021
TCHES
SEAL-Embedded: A Homomorphic Encryption Library for the Internet of Things 📺
The growth of the Internet of Things (IoT) has led to concerns over the lack of security and privacy guarantees afforded by IoT systems. Homomorphic encryption (HE) is a promising privacy-preserving solution to allow devices to securely share data with a cloud backend; however, its high memory consumption and computational overhead have limited its use on resource-constrained embedded devices. To address this problem, we present SEAL-Embedded, the first HE library targeted for embedded devices, featuring the CKKS approximate homomorphic encryption scheme. SEAL-Embedded employs several computational and algorithmic optimizations along with a detailed memory re-use scheme to achieve memory efficient, high performance CKKS encoding and encryption on embedded devices without any sacrifice of security. We additionally provide an “adapter” server module to convert data encrypted by SEAL-Embedded to be compatible with the Microsoft SEAL library for homomorphic encryption, enabling an end-to-end solution for building privacy-preserving applications. For a polynomial ring degree of 4096, using RNS primes of 30 or fewer bits, our library can be configured to use between 64–137 KB of RAM and 1–264 KB of flash data, depending on developer-selected configurations and tradeoffs. Using these parameters, we evaluate SEAL-Embedded on two different IoT platforms with high performance, memory efficient, and balanced configurations of the library for asymmetric and symmetric encryption. With 136 KB of RAM, SEAL-Embedded can perform asymmetric encryption of 2048 single-precision numbers in 77 ms on the Azure Sphere Cortex-A7 and 737 ms on the Nordic nRF52840 Cortex-M4.
2021
TCHES
Countermeasures against Static Power Attacks: – Comparing Exhaustive Logic Balancing and Other Protection Schemes in 28 nm CMOS – 📺
In recent years it has been demonstrated convincingly that the standby power of a CMOS chip reveals information about the internally stored and processed data. Thus, for adversaries who seek to extract secrets from cryptographic devices via side-channel analysis, the static power has become an attractive quantity to obtain. Most works have focused on the destructive side of this subject by demonstrating attacks. In this work, we examine potential solutions to protect circuits from silently leaking sensitive information during idle times. We focus on countermeasures that can be implemented using any common digital standard cell library and do not consider solutions that require full-custom or analog design flow. In particular, we evaluate and compare a set of five distinct standard-cell-based hiding countermeasures, including both, randomization and equalization techniques. We then combine the hiding countermeasures with state-of-the-art hardware masking in order to amplify the noise level and achieve a high resistance against attacks. An important part of our contribution is the proposal and evaluation of the first ever standard-cell-based balancing scheme which achieves perfect data-independence on paper, i.e., in absence of intra-die process variations and aging effects. We call our new countermeasure Exhaustive Logic Balancing (ELB). While this scheme, applied to a threshold implementation, provides the highest level of resistance in our experiments, it may not be the most cost effective option due to the significant resource overhead associated. All evaluated countermeasures and combinations thereof are applied to a serialized hardware implementation of the PRESENT block cipher and realized as cryptographic co-processors on a 28nm CMOS ASIC prototype. Our experimental results are obtained through real-silicon measurements of a fabricated die of the ASIC in a temperature-controlled environment using a source measure unit (SMU). We believe that our elaborate comparison serves as a useful guideline for hardware designers to find a proper tradeoff between security and cost for almost any application.
2021
TCHES
Machine Learning of Physical Unclonable Functions using Helper Data: Revealing a Pitfall in the Fuzzy Commitment Scheme 📺
Physical Unclonable Functions (PUFs) are used in various key-generation schemes and protocols. Such schemes are deemed to be secure even for PUFs with challenge-response behavior, as long as no responses and no reliability information about the PUF are exposed. This work, however, reveals a pitfall in these constructions: When using state-of-the-art helper data algorithms to correct noisy PUF responses, an attacker can exploit the publicly accessible helper data and challenges. We show that with this public information and the knowledge of the underlying error correcting code, an attacker can break the security of the system: The redundancy in the error correcting code reveals machine learnable features and labels. Learning these features and labels results in a predictive model for the dependencies between different challenge-response pairs (CRPs) without direct access to the actual PUF response. We provide results based on simulated data of a k-SUM PUF model and an Arbiter PUF model. We also demonstrate the attack for a k-SUM PUF model generated from real data and discuss the impact on more recent PUF constructions such as the Multiplexer PUF and the Interpose PUF. The analysis reveals that especially the frequently used repetition code is vulnerable: For a SUM-PUF in combination with a repetition code, e.g., already the observation of 800 challenges and helper data bits suffices to reduce the entropy of the key down to one bit. The analysis also shows that even other linear block codes like the BCH, the Reed-Muller, or the Single Parity Check code are affected by the problem. The code-dependent insights we gain from the analysis allow us to suggest mitigation strategies for the identified attack. While the shown vulnerability advances Machine Learning (ML) towards realistic attacks on key-storage systems with PUFs, our analysis also facilitates a better understanding and evaluation of existing approaches and protocols with PUFs. Therefore, it brings the community one step closer to a more complete leakage assessment of PUFs.
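The repetition-code pitfall is easy to see in code: in a fuzzy commitment, the helper data is the XOR of the PUF response block with a codeword that repeats one key bit, so pairwise XORs of public helper data bits equal pairwise XORs of the secret response bits. A minimal sketch with illustrative sizes follows.

```python
# Sketch of the identified pitfall for the repetition code: in a fuzzy
# commitment, helper data h = r XOR c with codeword c = (k, k, ..., k).
# Then h_i XOR h_j = r_i XOR r_j, so the public helper data hands an
# attacker parity relations between PUF response bits -- exactly the
# machine-learnable features/labels described above. Sizes are toy values.
import secrets

REP = 8                                          # repetition-code length
k = secrets.randbelow(2)                         # one secret key bit
r = [secrets.randbelow(2) for _ in range(REP)]   # PUF response block (secret)
h = [ri ^ k for ri in r]                         # helper data (public)

# Public h leaks every pairwise XOR of the secret response bits:
for i in range(1, REP):
    assert h[0] ^ h[i] == r[0] ^ r[i]
```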
2021
TCHES
Fault Attacks on CCA-secure Lattice KEMs 📺
NIST's post-quantum standardization effort very recently entered its final round. This makes studying the implementation-security aspect of the remaining candidates an increasingly important task, as such analyses can aid in the final selection process and enable appropriately secure wider deployment after standardization. However, lattice-based key-encapsulation mechanisms (KEMs), which are prominently represented among the finalists, have thus far received little attention when it comes to fault attacks. Interestingly, many of these KEMs exhibit structural similarities. They can be seen as variants of the encryption scheme of Lyubashevsky, Peikert, and Rosen, and employ the Fujisaki-Okamoto transform (FO) to achieve CCA2 security. The latter involves re-encrypting a decrypted plaintext and testing the ciphertexts for equivalence. This corresponds to the classic countermeasure of computing the inverse operation and hence prevents many fault attacks. In this work, we show that despite this inherent protection, practical fault attacks are still possible. We present an attack that requires a single instruction-skipping fault in the decoding process, which is run as part of the decapsulation. After observing whether this fault actually changed the outcome (effective fault) or the correct result is still returned (ineffective fault), we can set up a linear inequality involving the key coefficients. After gathering enough of these inequalities by faulting many decapsulations, we can solve for the key using a bespoke statistical solving approach. As our attack only requires distinguishing effective from ineffective faults, various detection-based countermeasures, including many forms of double execution, can be bypassed. We apply this attack to Kyber and NewHope, both of which belong to the aforementioned class of schemes. Using fault simulations, we show that, e.g., 6,500 faulty decapsulations are required for full key recovery on Kyber512. To demonstrate practicality, we use clock glitches to attack Kyber running on a Cortex-M4. As we argue that other schemes of this class, such as Saber, might also be susceptible, the presented attack clearly shows that one cannot rely on the FO transform's fault deterrence and that proper countermeasures are still needed.
2021
TCHES
A White-Box Masking Scheme Resisting Computational and Algebraic Attacks 📺
White-box cryptography attempts to protect cryptographic secrets in pure software implementations. Due to their high utility, white-box cryptosystems (WBC) are deployed by the industry even though the security of these constructions is not well defined. A major breakthrough in the generic cryptanalysis of WBC was Differential Computation Analysis (DCA), which requires minimal knowledge of the underlying white-box protection and also thwarts many obfuscation methods. To avert DCA, classic masking countermeasures, originally intended to protect against the highly related side-channel attacks, have been proposed for use in WBC. However, due to the controlled environment of WBCs, new algebraic attacks against classic masking schemes have quickly been found. These algebraic DCA attacks break all classic masking countermeasures efficiently, as they are independent of the masking order. In this work, we propose a novel generic masking scheme that can resist both DCA and algebraic DCA attacks. The proposed scheme extends the seminal work by Ishai et al., which is probing secure and thus resists DCA, to also resist algebraic attacks. To prove the security of our scheme, we demonstrate the connection between two main security notions in white-box cryptography: probing security and prediction security. Resistance of our masking scheme to DCA is proven for an arbitrary order of protection, using the well-known strong non-interference notion by Barthe et al. Our masking scheme also resists algebraic attacks, which we show concretely for first- and second-order algebraic protection. Moreover, we present an extensive performance analysis and quantify the overhead of our scheme, for a proof-of-concept protection of an AES implementation.
2021
TCHES
Secure, Accurate, and Practical Narrow-Band Ranging System 📺
Relay attacks pose a serious security threat to wireless systems, such as contactless payment systems, keyless entry systems, or smart access control systems. Distance bounding protocols, which allow an entity to not only authenticate another entity but also determine whether it is physically close by, effectively mitigate relay attacks. However, secure implementation of distance bounding protocols, especially of the time-critical challenge-response phase, has been a challenging task. In this paper, we design and implement a secure and accurate distance bounding protocol based on Narrow-Band signals, such as Bluetooth Low Energy (BLE), specifically to mitigate relay attacks. Narrow-Band ranging, specifically phase-based ranging, enables accurate distance measurement, but it is vulnerable to phase rollover attacks. In our solution, we mitigate phase rollover attacks by also measuring the time-of-flight (ToF) to detect the delay introduced by such attacks. Therefore, our protocol effectively combines the best of both worlds: phase-based ranging for accuracy and time-of-flight (ToF) measurement for security. To demonstrate the feasibility and practicality of our solution, we prototype it on NXP KW36 BLE chips and evaluate its performance and relay attack resistance. The obtained precision and accuracy of the presented ranging solution are 2.5 cm and 30 cm, respectively, in wireless measurements.
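The phase-based ranging principle admits a short worked example: measuring the round-trip phase at two tones Δf apart gives the distance as d = c·Δφ/(4π·Δf), unambiguous only up to c/(2Δf), which is the rollover window the paper defends. The sketch below uses toy values and is not the NXP KW36 implementation:

```python
import math

C = 299_792_458.0            # speed of light, m/s

def phase_distance(phi1, phi2, f1, f2):
    dphi = (phi2 - phi1) % (2 * math.pi)   # measured phase difference
    df = f2 - f1
    return C * dphi / (4 * math.pi * df)   # round trip: hence 4*pi, not 2*pi

d_true = 12.0                              # metres
f1, f2 = 2.402e9, 2.403e9                  # two BLE tones, 1 MHz apart
phi = lambda f: (4 * math.pi * f * d_true / C) % (2 * math.pi)
print(phase_distance(phi(f1), phi(f2), f1, f2))      # ~12.0 m

# A relay inserting delay t shifts every phase by 4*pi*f*t, but it also
# shifts the time of flight by t, which the ToF cross-check detects.
print("unambiguous range:", C / (2 * (f2 - f1)), "m")  # ~150 m for 1 MHz
```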
2021
TCHES
Provably Secure Hardware Masking in the Transition- and Glitch-Robust Probing Model: Better Safe than Sorry 📺
There exist many masking schemes to protect implementations of cryptographic operations against side-channel attacks. It is common practice to analyze the security of these schemes in the probing model, or its variants, which take physical effects such as glitches and transitions into account. Although both effects exist in practice and cause leakage, masking schemes implemented in hardware are often only analyzed for security against glitches. In this work, we fill this gap by proving sufficient conditions for the security of hardware masking schemes against transitions, leading to the design of new masking schemes and a proof of security for an existing masking scheme in the presence of transitions. Furthermore, we give similar results in the stronger model where the effects of glitches and transitions are combined.
2021
TCHES
NTT Multiplication for NTT-unfriendly Rings: New Speed Records for Saber and NTRU on Cortex-M4 and AVX2 📺
In this paper, we show how multiplication for polynomial rings used in the NIST PQC finalists Saber and NTRU can be efficiently implemented using the Number-Theoretic Transform (NTT). We obtain superior performance compared to the previous state-of-the-art implementations using Toom–Cook multiplication on both of NIST’s primary software optimization targets, AVX2 and Cortex-M4. Interestingly, these two platforms require different approaches: on the Cortex-M4, we use 32-bit NTT-based polynomial multiplication, while on Intel we use two 16-bit NTT-based polynomial multiplications and combine the products using the Chinese Remainder Theorem (CRT). For Saber, the performance gain is particularly pronounced. On the Cortex-M4, the Saber NTT-based matrix-vector multiplication is 61% faster than Toom–Cook multiplication, resulting in 22% fewer cycles for Saber encapsulation. For NTRU, the speed-up is less impressive, but NTT-based multiplication still performs better than Toom–Cook for all parameter sets on the Cortex-M4. The NTT-based polynomial multiplication for NTRU-HRSS is 10% faster than Toom–Cook, which results in a 6% cost reduction for encapsulation. On AVX2, we obtain speed-ups for three out of four NTRU parameter sets. As a further illustration, we also include code for AVX2 and Cortex-M4 for the Chinese Association for Cryptologic Research competition award winner LAC (also a NIST round 2 candidate), which outperforms existing code.
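For readers unfamiliar with the core primitive, the following Python sketch shows negacyclic NTT-based multiplication in Z_q[x]/(x^n + 1) with toy parameters (n = 8, q = 257, chosen NTT-friendly by construction; Saber’s and NTRU’s actual moduli are not, which is precisely the obstacle the paper works around):

```python
# psi = 3^16 mod 257 is a primitive 2n-th root of unity (3 is a primitive
# root modulo the Fermat prime 257), so psi^n = -1 gives the negacyclic twist.
def poly_mul_negacyclic(a, b, n=8, q=257, psi=pow(3, 16, 257)):
    assert pow(psi, n, q) == q - 1
    w = pow(psi, 2, q)                                          # n-th root
    def fwd(v):
        t = [v[j] * pow(psi, j, q) % q for j in range(n)]       # twist by psi^j
        return [sum(t[j] * pow(w, i * j, q) for j in range(n)) % q
                for i in range(n)]                              # size-n DFT
    A, B = fwd(a), fwd(b)
    C = [x * y % q for x, y in zip(A, B)]                       # pointwise mul
    ninv = pow(n, -1, q)
    c = [sum(C[i] * pow(w, -i * j, q) for i in range(n)) * ninv % q
         for j in range(n)]                                     # inverse DFT
    return [c[j] * pow(psi, -j, q) % q for j in range(n)]       # untwist

# Sanity check: x * x^(n-1) == -1 in Z_q[x]/(x^n + 1):
print(poly_mul_negacyclic([0, 1] + [0] * 6, [0] * 7 + [1]))     # [256, 0, ...]
```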
2021
TCHES
Novel Key Recovery Attack on Secure ECDSA Implementation by Exploiting Collisions between Unknown Entries 📺
In this paper, we propose a novel key recovery attack against secure ECDSA signature generation employing regular table-based scalar multiplication. Our attack exploits novel leakage, denoted by collision information, which can be constructed by iteratively determining whether two entries loaded from the table are the same or not through side-channel collision analysis. Without knowing the actual value of the table entries, an adversary can recover the private key of ECDSA by finding the condition for which several nonces are linearly dependent by exploiting only the collision information. We show that this condition can be satisfied practically with a reasonable number of digital signatures and corresponding traces. Furthermore, we also show that all entries in the pre-computation table can be recovered using the recovered private key and a sufficient number of digital signatures based on the collision information. As case studies, we find that fixed-base comb and T_SM scalar multiplication are vulnerable to our attack. Finally, we verify that our attack is a real threat by conducting an experiment with power consumption traces acquired during T_SM scalar multiplication operations on an ARM Cortex-M based microcontroller. We also provide the details of the validation process.
2021
TCHES
Revisiting the functional bootstrap in TFHE 📺
The FHEW cryptosystem introduced the idea that an arbitrary function can be evaluated within the bootstrap procedure as a table lookup. The faster bootstraps of TFHE strengthened this approach, which was later named Functional Bootstrap (Boura et al., CSCML’19). From then on, little effort has been made towards defining efficient ways of using it to implement functions with high precision. In this paper, we introduce two methods to combine multiple functional bootstraps to accelerate the evaluation of reasonably large look-up tables and highly precise functions. We thoroughly analyze and experimentally validate the error propagation in both methods, as well as in the functional bootstrap itself. We leverage the multi-value bootstrap of Carpov et al. (CT-RSA’19) to accelerate (single) lookup table evaluation, and we improve it by lowering the complexity of its error variance growth from quadratic to linear in the value of the output base. Compared to previous literature using TFHE’s functional bootstrap, our methods are up to 2.49 times faster than the lookup table evaluation of Carpov et al. (CT-RSA’19) and up to 3.19 times faster than the 32-bit integer comparison of Bourse et al. (CT-RSA’20). Compared to works using logic gates, we achieved speedups of up to 6.98, 8.74, and 3.55 times over 8-bit implementations of the functions ReLU, Addition, and Maximum, respectively.
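The “bootstrap as table lookup” idea rests on a simple algebraic fact: encoding a lookup table in the coefficients of a test polynomial over Z[X]/(X^N + 1) and multiplying by X^(-m) moves entry T[m] into the constant coefficient. The plaintext-only Python sketch below illustrates just this rotation step; in TFHE the rotation is performed homomorphically on an encrypted m (blind rotation), which this toy omits:

```python
N = 16
table = [(3 * i + 1) % 7 for i in range(N)]     # an arbitrary lookup table

def negacyclic_rotate(t, m):
    """Coefficients of X^(-m) * t(X) in Z[X]/(X^N + 1), for 0 <= m < N."""
    out = list(t)
    for _ in range(m):
        first = out.pop(0)
        out.append(-first)   # X^(-1) wraps t0 to -t0 * X^(N-1), since X^N = -1
    return out

m = 5
assert negacyclic_rotate(table, m)[0] == table[m]
print("constant coefficient after rotation by", m, "is T[m] =", table[m])
```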
2021
TCHES
Cross-Device Profiled Side-Channel Attack with Unsupervised Domain Adaptation 📺
Deep learning (DL)-based techniques have recently proven to be very successful when applied to profiled side-channel attacks (SCA). In a real-world profiled SCA scenario, attackers gain knowledge about the target device by getting access to a similar device prior to the attack. However, most state-of-the-art literature performs only proof-of-concept attacks, where the traces intended for profiling and attacking are acquired consecutively on the same fully-controlled device. This paper demonstrates that even a small discrepancy between the profiling and attack traces (referred to as domain discrepancy) can cause a successful single-device attack to completely fail. To address the issue of domain discrepancy, we propose a Cross-Device Profiled Attack (CDPA), which introduces an additional fine-tuning phase after establishing a pre-trained model. The fine-tuning phase is designed to adjust the pre-trained network such that it can learn a hidden representation that is not only discriminative but also domain-invariant. In order to obtain domain invariance, we adopt a maximum mean discrepancy (MMD) loss as a constraint term of the classic cross-entropy loss function. We show that the MMD loss can be easily calculated and embedded in a standard convolutional neural network. We evaluate our strategy on both publicly available datasets and multiple devices (eight Atmel XMEGA 8-bit microcontrollers and three SAKURA-G evaluation boards). The results demonstrate that CDPA can improve the performance of classic DL-based SCA by orders of magnitude, significantly reducing the impact of domain discrepancy caused by different devices.
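As an illustration of the domain-adaptation term, here is a plain-NumPy sketch of a (biased) squared-MMD estimator with a Gaussian kernel, of the kind the paper adds to the cross-entropy loss; batch sizes, feature widths, and the kernel bandwidth are arbitrary assumptions:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(xs, xt, sigma=1.0):
    """Biased estimate of squared MMD between two feature batches."""
    return (gaussian_kernel(xs, xs, sigma).mean()
            - 2 * gaussian_kernel(xs, xt, sigma).mean()
            + gaussian_kernel(xt, xt, sigma).mean())

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (64, 16))        # profiling-device features
tgt = rng.normal(0.5, 1.0, (64, 16))        # shifted: domain discrepancy
print(mmd2(src, tgt), mmd2(src, rng.normal(0.0, 1.0, (64, 16))))
# total_loss = cross_entropy + lambda_ * mmd2(hidden_src, hidden_tgt)
```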
2021
TCHES
Time-Memory Analysis of Parallel Collision Search Algorithms 📺
Parallel versions of collision search algorithms require a significant amount of memory to store a proportion of the points computed by the pseudo-random walks. Implementations available in the literature use a hash table to store these points and allow fast memory access. We provide theoretical evidence that memory is an important factor in determining the runtime of this method. We propose to replace the traditional hash table by a simple structure, inspired by radix trees, which saves space and provides fast look-up and insertion. In the case of many-collision search algorithms, our variant has a constant-factor improved runtime. We give benchmarks that show the linear parallel performance of the attack on elliptic curves discrete logarithms and improved running times for meet-in-the-middle applications.
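For orientation, a minimal distinguished-point collision search in the van Oorschot–Wiener style is sketched below over a toy 24-bit space; the paper’s contribution replaces the dictionary used here with a radix-tree-like structure, which this sketch omits:

```python
import hashlib

def f(x):                                   # pseudo-random walk over 24 bits
    return int.from_bytes(hashlib.sha256(x.to_bytes(3, "big")).digest()[:3], "big")

def distinguished(x, bits=8):               # ~1/2^bits of points get stored
    return x & ((1 << bits) - 1) == 0

store = {}                                  # distinguished point -> (start, steps)

def walk(start):
    x, steps = start, 0
    while not distinguished(x):             # toy: ignores rare cycling trails
        x, steps = f(x), steps + 1
    if x in store and store[x][0] != start:
        return store[x], (start, steps)     # two trails merged somewhere
    store[x] = (start, steps)

seed = 1
while (hit := walk(seed)) is None:
    seed += 1
# Re-walking both trails from their starts locates the actual collision in f.
print("trails meeting at a distinguished point:", hit)
```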
2021
TCHES
Structural Attack (and Repair) of Diffused-Input-Blocked-Output White-Box Cryptography 📺
In some practical enciphering frameworks, operational constraints may require that a secret key be embedded into the cryptographic algorithm. Such implementations are referred to as White-Box Cryptography (WBC). One technique consists of tabulating the algorithm specialized for its key, followed by obfuscating the resulting tables. The obfuscation consists of the application of invertible diffusion and confusion layers at the interface between tables, so that the analysis of inputs/outputs does not provide exploitable information about the concealed key material. Several such protections have been proposed in the past and already cryptanalyzed thanks to complete WBC scheme analyses. In this article, we study a particular pattern for local protection (which can be leveraged for robust WBC); we formalize it as DIBO (for Diffused-Input-Blocked-Output). This notion has been explored (albeit without having been nicknamed DIBO) in previous works. However, we notice that guidelines to adequately select the invertible diffusion φ and the blocked bijections B were missing. Therefore, all choices for φ and B were assumed suitable. Actually, we show that most configurations can be attacked, and we even give a mathematical proof of the attack. The cryptanalysis tool is the number of zeros in a Walsh-Hadamard spectrum. This “spectral distinguisher” improves on the previously known one (Sasdrich, Moradi, Güneysu, at FSE 2016). However, we show that such an attack does not always work (even if it works most of the time). Therefore, on the defense side, we give a straightforward rationale for WBC implementations to be secure against such spectral attacks: the random diffusion part φ shall be selected such that the rank of each restriction to bytes is full. In AES’s case, this seldom happens if φ is selected at random as a linear bijection of F_2^32. Thus, specific care shall be taken. Notice that the entropy of the resulting φ (suitable for WBC against spectral attacks) is still sufficient to design acceptable WBC schemes.
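To make the distinguisher concrete, the sketch below computes a fast Walsh-Hadamard transform of a Boolean function’s truth table and counts spectral zeros (toy example: a linear function, whose spectrum vanishes everywhere except at its mask); the paper applies this count to the DIBO structure rather than to a bare function:

```python
def walsh_hadamard(f_table):
    """WHT of f: {0,1}^n -> {0,1}, given as a truth table of length 2^n."""
    w = [1 - 2 * b for b in f_table]           # (-1)^f(x)
    h = 1
    while h < len(w):                          # in-place fast WHT butterflies
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

n, mask = 8, 0b1011
f = [bin(x & mask).count("1") & 1 for x in range(1 << n)]  # linear function
spec = walsh_hadamard(f)
print(sum(1 for c in spec if c == 0))          # 255 zeros: degenerate spectrum
```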
2021
TCHES
RASSLE: Return Address Stack based Side-channel LEakage 📺
Microarchitectural attacks on computing systems often stem from simple artefacts in the underlying architecture. In this paper, we focus on the Return Address Stack (RAS), a small hardware stack present in modern processors to reduce the branch miss penalty by storing the return addresses of each function call. The RAS is useful for handling specifically the branch predictions for RET instructions, which are not accurately predicted by typical branch prediction units. In particular, we envisage a spy process which crafts an overflow condition in the RAS by filling it with arbitrary return addresses, and wrestles with a concurrent process to establish a timing side channel between them. We call this attack principle RASSLE¹ (Return Address Stack based Side-channel LEakage), which an adversary can launch on modern processors by first reverse engineering the RAS using a generic methodology that exploits the established timing channel. Subsequently, we show three concrete attack scenarios: i) how a spy can establish a covert channel with another co-residing process; ii) how RASSLE can be utilized to determine the secret key of the P-384 curve in OpenSSL (v1.1.1 library); and iii) how an Elliptic Curve Digital Signature Algorithm (ECDSA) secret key on the P-256 curve of OpenSSL can be revealed using a lattice attack on partially leaked nonces with the aid of RASSLE. In this attack, we show that the OpenSSL implementation of scalar multiplication on this curve has a varying number of add-and-sub function calls, which depends on the secret scalar bits. We demonstrate through several experiments that the number of add-and-sub function calls can be used to template the secret bit, which can be picked up by the spy using the principles of RASSLE. Finally, we demonstrate a full end-to-end attack on OpenSSL ECDSA using the curve parameters of P-256. In this part of our experiments with RASSLE, we abuse the deadline scheduler policy to attain perfect synchronization between the spy and victim, without any aid of induced synchronization from the victim code. This synchronization and timing leakage through RASSLE is sufficient to retrieve the Most Significant Bits (MSB) of the ephemeral nonces used during signature generation, from which we subsequently retrieve the secret signing key of the sender by solving the Hidden Number Problem. (¹RASSLE is a non-standard spelling of wrestle.)
2021
TCHES
Chosen Ciphertext k-Trace Attacks on Masked CCA2 Secure Kyber 📺
Single-trace attacks are a considerable threat to implementations of classic public-key schemes, and their implications on newer lattice-based schemes are still not well understood. Two recent works have presented successful single-trace attacks targeting the Number Theoretic Transform (NTT), which is at the heart of many lattice-based schemes. However, these attacks either require a quite powerful side-channel adversary or are restricted to specific scenarios such as the encryption of ephemeral secrets. It is still an open question whether such attacks can be performed by simpler adversaries while targeting more common public-key scenarios. In this paper, we answer this question positively. First, we present a method for crafting ring/module-LWE ciphertexts that result in sparse polynomials at the input of inverse NTT computations, independent of the used private key. We then demonstrate how this sparseness can be incorporated into a side-channel attack, thereby significantly improving the noise resistance of the attack compared to previous works. The effectiveness of our attack is shown on the use-case of CCA2 secure Kyber k-module-LWE, where k ∈ {2, 3, 4}. Our k-trace attack on the long-term secret can handle noise up to σ ≤ 1.2 in the noisy Hamming weight leakage model, even for masked implementations. A 2k-trace variant for Kyber1024 even tolerates noise up to σ ≤ 2.2 in the masked case, with more traces allowing us to recover keys up to σ ≤ 2.7. Single-trace attack variants have a noise tolerance depending on the Kyber parameter set, ranging from σ ≤ 0.5 to σ ≤ 0.7. As a comparison, similar previous attacks in the masked setting were only successful with σ ≤ 0.5.
2021
TCHES
New First-Order Secure AES Performance Records 📺
Thanks to their sound theoretical basis, masking schemes are commonly applied to protect cryptographic implementations against Side-Channel Analysis (SCA) attacks. Constructing SCA-protected AES, as the most widely deployed block cipher, has naturally been the focus of several research projects, with a direct application in industry. The majority of SCA-secure AES implementations introduced to the community opted for low area and latency overheads, considering Application-Specific Integrated Circuit (ASIC) platforms. The few that particularly targeted Field Programmable Gate Arrays (FPGAs) as the implementation platform yield either a low throughput or a design that is not highly secure. In this work, we fill this gap by introducing first-order glitch-extended probing-secure masked AES implementations highly optimized for FPGAs, which support both encryption and decryption. Compared to the state of the art, our designs efficiently map the critical non-linear parts of the masked S-box into the built-in Block RAMs (BRAMs). The most performant variant of our constructions accomplishes five first-order secure AES encryptions/decryptions simultaneously in 50 clock cycles. Compared to the equivalent state-of-the-art designs, this leads to at least a 70% reduction in utilization of FPGA resources (slices) at the cost of occupying BRAMs. Last but not least, we provide a wide range of such secure and efficient implementations supporting a large set of applications, ranging from low-area to high-throughput.
2021
TCHES
Over 100x Faster Bootstrapping in Fully Homomorphic Encryption through Memory-centric Optimization with GPUs 📺
Fully Homomorphic Encryption (FHE) has been gaining popularity as an emerging means of enabling an unlimited number of operations on an encrypted message without decryption. A major drawback of FHE is its high computational cost. Specifically, a bootstrapping step that refreshes the noise accumulated through consecutive FHE operations on the ciphertext can take minutes. This significantly limits the practical use of FHE in numerous real applications. By exploiting the massive parallelism available in FHE, we demonstrate the first GPU implementation of bootstrapping for CKKS, one of the most promising FHE schemes supporting the arithmetic of approximate numbers. Through analyzing CKKS operations, we discover that the major performance bottleneck is their high main-memory bandwidth requirement, which is exacerbated by leveraging existing optimizations targeted at reducing the required computation. These observations motivate us to utilize memory-centric optimizations such as kernel fusion and reordering of primary functions extensively. Our GPU implementation shows a 7.02× speedup for a single CKKS multiplication compared to the state-of-the-art GPU implementation, and an amortized bootstrapping time of 0.423 µs per bit, which corresponds to a speedup of 257× over a single-threaded CPU implementation. By applying this to logistic regression model training, we achieved a 40.0× speedup compared to the previous 8-thread CPU implementation with the same data.
2021
TCHES
A Compact Hardware Implementation of CCA-Secure Key Exchange Mechanism CRYSTALS-KYBER on FPGA 📺
Post-quantum cryptosystems should be prepared before the advent of powerful quantum computers to keep information secure in our daily life. In 2016 a post-quantum standardization contest was launched by the National Institute of Standards and Technology (NIST), and there have been many works concentrating on the evaluation of the candidate protocols, mainly in pure software or through hardware-software co-design methodology on different platforms. As the contest progressed to its third round in July 2020, with only 7 finalists and 8 alternate candidates remaining, more dedicated and specific hardware designs should be considered to illustrate the intrinsic properties of a given protocol and achieve better performance. To this end, we present a standalone hardware design of CRYSTALS-KYBER, a module learning-with-errors (MLWE) based key exchange mechanism (KEM) protocol among the 7 finalists, on an FPGA platform. Through elaborate scheduling of sampling and number theoretic transform (NTT) related calculations, decent performance is achieved with limited hardware resources. The way that Encode/Decode and the tweaked Fujisaki-Okamoto transform are implemented is demonstrated in detail. An analysis of minimizing the memory footprint is also given. In summary, we realize the adaptive chosen ciphertext attack (CCA) secure Kyber for all selectable module dimensions k on the smallest Xilinx Artix-7 device. Our design computes the key-generation, encapsulation (encryption) and decapsulation (decryption and re-encryption) phases in 3768/5079/6668 cycles when k = 2, 6316/7925/10049 cycles when k = 3, and 9380/11321/13908 cycles when k = 4, consuming 7412/6785 LUTs, 4644/3981 FFs, 2126/1899 slices, 2/2 DSPs and 3/3 BRAMs in server/client with 6.2/6.0 ns critical path delay, outperforming corresponding high-level synthesis (HLS) based designs or hardware-software co-designs to a large extent.
2021
TCHES
My other car is your car: compromising the Tesla Model X keyless entry system 📺
CHES 2021 Best Paper Award
This paper documents a practical security evaluation of the Tesla Model X keyless entry system. In contrast to other works, the keyless entry system analysed in this paper employs secure symmetric-key and public-key cryptographic primitives implemented by a Common Criteria certified Secure Element. We document the internal workings of this system, covering the key fob, the body control module and the pairing protocol. Additionally, we detail our reverse engineering techniques and document several security issues. The identified issues in the key fob firmware update mechanism and the key fob pairing protocol allow us to bypass all of the cryptographic security measures put in place. To demonstrate the practical impact of our research, we develop a fully remote Proof-of-Concept attack that allows us to gain access to the vehicle’s interior in a matter of minutes and to pair a modified key fob, allowing us to drive off with the car. Our attack is not a relay attack, as our new key fob allows us to start the car anytime, anywhere. Finally, we provide an analysis of the update performed by Tesla to mitigate our findings. Our work highlights how the increased complexity and connectivity of vehicular systems can result in a larger and easier-to-exploit attack surface.
2021
TCHES
Combining Optimization Objectives: New Modeling Attacks on Strong PUFs 📺
Strong Physical Unclonable Functions (PUFs), as a promising security primitive, are supposed to be a lightweight alternative to classical cryptography for purposes such as device authentication. Most of the proposed candidates, however, have been plagued by modeling attacks breaking their security claims. The Interpose PUF (iPUF), which was introduced at CHES 2019, was explicitly designed with state-of-the-art modeling attacks in mind and is supposed to be impossible to break by classical and reliability attacks. In this paper, we analyze its vulnerability to reliability attacks. Despite the increased difficulty, these attacks are still feasible, contrary to the original authors’ claim. We explain how adding constraints to the modeling objective streamlines reliability attacks and allows us to model all individual components of an iPUF successfully. In order to build a practical attack, we make several novel contributions. First, we demonstrate that reliability attacks can be performed not only with the covariance matrix adaptation evolution strategy (CMA-ES) but also with gradient-based optimization. Second, we show that the switch to gradient-based reliability attacks makes it possible to combine reliability attacks, weight constraints, and Logistic Regression (LR) into a single optimization objective. This framework makes modeling attacks more efficient, as it exploits knowledge of responses and reliability information at the same time. Third, we show that a differentiable model of the iPUF exists and how it can be utilized in a combined reliability attack. We confirm that iPUFs are harder to break than regular XOR Arbiter PUFs. However, we are still able to break (1,10)-iPUF instances, which were originally assumed to be secure, with fewer than 10^7 PUF response queries.
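The classical core the paper builds on — logistic-regression modeling of an additive-delay arbiter PUF via the parity feature map — fits in a few lines of NumPy. The sketch below covers only that core; the paper’s attack additionally folds reliability information and weight constraints into one gradient-based objective, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 32, 4000
w_true = rng.normal(size=n + 1)                   # hidden delay model

def features(ch):
    """Parity feature map Phi_i = prod_{j>=i}(1 - 2c_j), plus a bias term."""
    phi = np.cumprod((1 - 2 * ch)[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(ch), 1))])

C = rng.integers(0, 2, size=(N, n))               # random challenges
X = features(C)
y = (X @ w_true > 0).astype(float)                # challenge-response pairs

w = np.zeros(n + 1)                               # logistic regression via GD
for _ in range(300):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.05 * X.T @ (y - p) / N

acc = ((X @ w > 0) == (y > 0.5)).mean()
print(f"model accuracy on training CRPs: {acc:.3f}")
```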
2021
TCHES
Masking Kyber: First- and Higher-Order Implementations 📺
In the final phase of the post-quantum cryptography standardization effort, the focus has been extended to include the side-channel resistance of the candidates. While some schemes have been already extensively analyzed in this regard, there is no such study yet of the finalist Kyber. In this work, we demonstrate the first completely masked implementation of Kyber which is protected against first- and higher-order attacks. To the best of our knowledge, this results in the first higher-order masked implementation of any post-quantum secure key encapsulation mechanism algorithm. This is realized by introducing two new techniques. First, we propose a higher-order algorithm for the one-bit compression operation. This is based on a masked bit-sliced binary-search that can be applied to prime moduli. Second, we propose a technique which enables one to compare uncompressed masked polynomials with compressed public polynomials. This avoids the costly masking of the ciphertext compression while being able to be instantiated at arbitrary orders. We show performance results for first-, second- and third-order protected implementations on the Arm Cortex-M0+ and Cortex-M4F. Notably, our implementation of first-order masked Kyber decapsulation requires 3.1 million cycles on the Cortex-M4F. This is a factor 3.5 overhead compared to the unprotected optimized implementation in pqm4. We experimentally show that the first-order implementation of our new modules on the Cortex-M0+ is hardened against attacks using 100 000 traces and mechanically verify the security in a fine-grained leakage model using the verification tool scVerif.
2021
TCHES
ROTed: Random Oblivious Transfer for embedded devices 📺
Oblivious Transfer (OT) is a fundamental primitive in cryptography, supporting protocols such as Multi-Party Computation and Private Set Intersection (PSI), which are used in applications like contact discovery, remote diagnosis and contact tracing. Due to its fundamental nature, it is critically important that its execution is secure even if arbitrarily composed with other instances of the same, or other, protocols. This property can be guaranteed by proving its security under the Universal Composability model. Herein, a 3-round Random Oblivious Transfer (ROT) protocol is proposed, which achieves high computational efficiency in the Random Oracle Model. The security of the protocol is based on the Ring Learning With Errors assumption (for which no quantum solver is known). ROT is the basis for OT extensions and thus achieves wide applicability, without the overhead of compiling ROTs from OTs. Finally, the protocol is implemented in a server-class Intel processor and four application-class ARM processors, all with different architectures. The usage of vector instructions provides on average a 40% speedup. The implementation shows that our proposal is at least one order of magnitude faster than the state of the art, and is suitable for a wide range of applications in embedded systems, IoT, desktop, and servers. From a memory footprint perspective, there is a small increase (16%) when compared to the state of the art. This increase is marginal and should not prevent the usage of the proposed protocol in a multitude of devices. In sum, the proposal achieves up to 37k ROTs/s on an Intel server-class processor and up to 5k ROTs/s on an ARM application-class processor. A PSI application using the proposed ROT is up to 6.6 times faster than related art.
2021
TCHES
Side-Channel Protections for Picnic Signatures 📺
We study masking countermeasures for side-channel attacks against signature schemes constructed from the MPC-in-the-head paradigm, specifically when the MPC protocol uses preprocessing. This class of signature schemes includes Picnic, an alternate candidate in the third round of the NIST post-quantum standardization project. The only previously known approach to masking MPC-in-the-head signatures suffers from interoperability issues and increased signature sizes. Further, we present a new attack to demonstrate that known countermeasures are not sufficient when the MPC protocol uses a preprocessing phase, as in Picnic3. We overcome these challenges by showing how to mask the underlying zero-knowledge proof system due to Katz–Kolesnikov–Wang (CCS 2018) for any masking order, and by formally proving that our approach meets the standard security notions of non-interference for masking countermeasures. As a case study, we apply our masking technique to Picnic. We then implement different masked versions of Picnic signing providing first-order protection for the ARM Cortex-M4 platform, and quantify the overhead of these different masking approaches. We carefully analyze the side-channel risk of hashing operations, and give optimizations that reduce the CPU cost of protecting hashing in Picnic by a factor of five. The performance penalties of the masking countermeasures ranged from 1.8× to 5.5×, depending on the degree of masking applied to hash function invocations.
2021
TCHES
An Instruction Set Extension to Support Software-Based Masking 📺
In both hardware and software, masking can represent an effective means of hardening an implementation against side-channel attack vectors such as Differential Power Analysis (DPA). Focusing on software, however, the use of masking can present various challenges: specifically, it often 1) requires significant effort to translate any theoretical security properties into practice, and, even then, 2) imposes a significant overhead in terms of efficiency. To address both challenges, this paper explores the use of an Instruction Set Extension (ISE) to support masking in software-based implementations of a range of (symmetric) cryptographic kernels including AES: we design, implement, and evaluate such an ISE, using RISC-V as the base ISA. Our ISE-supported first-order masked implementation of AES, for example, is an order of magnitude more efficient than a software-only alternative with respect to both execution latency and memory footprint; this renders it comparable to an unmasked implementation using the same metrics, but also first-order secure.
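A representative software gadget of the kind such an ISE accelerates is the first-order ISW multiplication (AND) over Boolean shares, sketched below in Python for clarity (a real implementation would be in assembly, and the bracketing of the refresh XORs matters for security):

```python
import secrets

def mask(bit):
    r = secrets.randbits(1)
    return (bit ^ r, r)                  # Boolean shares: x = x0 ^ x1

def isw_and(x, y):
    """(x0,x1) AND (y0,y1) -> shares of x&y, first-order probing secure."""
    r = secrets.randbits(1)
    z0 = (x[0] & y[0]) ^ r
    # The parenthesization below fixes the evaluation order; computing
    # x0y1 ^ x1y0 before adding r would leak at first order.
    z1 = (x[1] & y[1]) ^ ((r ^ (x[0] & y[1])) ^ (x[1] & y[0]))
    return (z0, z1)

for a in (0, 1):
    for b in (0, 1):
        z = isw_and(mask(a), mask(b))
        assert z[0] ^ z[1] == a & b      # functional correctness
print("ISW AND gadget: all cases correct")
```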
2021
TCHES
Revealing the Weakness of Addition Chain Based Masked SBox Implementations 📺
Addition chains are a well-known approach for implementing higher-order masked SBoxes. However, this approach induces more computations of intermediate monomials over F_2^n, which in turn leak more information related to the sensitive variables and may consequently decrease side-channel resistance. In this paper, we introduce a new notion, named polygon degree, to measure the resistance of monomial computations. With the help of this notion, we select several typical addition chain implementations with the strongest or the weakest resistance. In practical experiments based on an ARM Cortex-M4 architecture, we collect power and electromagnetic traces under different noise levels. The results show that the resistance of the weakest masked SBox implementation is close to that of an unprotected implementation, while the strongest one can also be broken with fewer than 1,500 traces due to extra leakages. Moreover, we study the resistance of addition chain implementations against profiled attacks. We find that some monomials with smaller output size leak more information than the SBox output. The work by Duc et al. at JOC 2019 showed that for a balanced function, the smaller the output size is, the less information is leaked. Thus, our attacks demonstrate that this property of balanced functions does not apply to unbalanced functions.
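The object of study is easy to exhibit: below is a common addition chain for inversion in the AES field, x^254 over GF(2^8), using 4 multiplications and 7 squarings; its intermediate monomials (x^3, x^12, x^15, ...) are exactly the values whose leakage such analyses measure. The chain shown is one standard choice, not necessarily the paper’s selected variants:

```python
def gf_mul(a, b, poly=0x11B):
    """Multiplication in GF(2^8) with the AES reduction polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_inv_chain(x):
    sq = lambda a: gf_mul(a, a)
    x2   = sq(x)                     # x^2
    x3   = gf_mul(x2, x)             # x^3    (mult 1)
    x12  = sq(sq(x3))                # x^12
    x15  = gf_mul(x12, x3)           # x^15   (mult 2)
    x240 = sq(sq(sq(sq(x15))))       # x^240
    x252 = gf_mul(x240, x12)         # x^252  (mult 3)
    return gf_mul(x252, x2)          # x^254 = x^(-1) for x != 0 (mult 4)

assert all(gf_mul(x, gf_inv_chain(x)) == 1 for x in range(1, 256))
print("x^254 addition chain inverts all of GF(2^8)*")
```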
2021
TCHES
CTIDH: faster constant-time CSIDH 📺
This paper introduces a new key space for CSIDH and a new algorithm for constant-time evaluation of the CSIDH group action. The key space is not useful with previous algorithms, and the algorithm is not useful with previous key spaces, but combining the new key space with the new algorithm produces speed records for constant-time CSIDH. For example, for CSIDH-512 with a 256-bit key space, the best previous constant-time results used 789000 multiplications and more than 200 million Skylake cycles; this paper uses 438006 multiplications and 125.53 million cycles.
2021
TCHES
Low-Latency Keccak at any Arbitrary Order 📺
Correct application of masking to hardware implementations of cryptographic primitives necessitates the instantiation of registers in order to achieve non-completeness (commonly said to stop the propagation of glitches). This sometimes leads to a high latency overhead, making the implementation unsuitable for the underlying application. As a concrete example, this holds for Keccak. Applying d + 1 Domain Oriented Masking (DOM) to a round-based implementation of Keccak introduces two register stages per round, i.e., twice the latency. On the other hand, Rhythmic-Keccak, introduced at CHES 2018, unrolls two rounds to halve the latency compared to an unprotected ordinary round-based implementation. To that end, td + 1 masking is used, which requires notable area, and – apart from the difficulty of construction – its extension to higher orders seems beyond the bounds of feasibility. In this paper, we focus on d + 1 masking and introduce a methodology which enables us to stay with the latency of an unprotected round-based implementation, i.e., one register stage per round. While being secure under the glitch-extended probing model, we provide a general design where the desired security order can be easily adjusted without any effect on the above-given latency. Compared to Rhythmic-Keccak, the synthesis results show that our first-order design is able to accomplish the entire operations of Keccak-f[200] in the same period of time while decreasing the area by 74.5%. Notably, our implementations achieve around 30% less delay compared to the corresponding original DOM-Keccak designs.
2021
TCHES
LifeLine for FPGA Protection: Obfuscated Cryptography for Real-World Security 📺
Over the last decade, attacks have repeatedly demonstrated that bitstream protection for SRAM-based FPGAs is a persistent problem without a satisfying solution in practice. Hence, real-world hardware designs are prone to intellectual property infringement and malicious manipulation, as they are not adequately protected against reverse-engineering. In this work, we first review state-of-the-art solutions from industry and academia and demonstrate their ineffectiveness with respect to reverse-engineering and design manipulation. We then describe the design and implementation of novel hardware obfuscation primitives based on the intrinsic structure of FPGAs. Based on our primitives, we design and implement LifeLine, a hardware design protection mechanism for FPGAs using hardware/software co-obfuscated cryptography. We show that LifeLine offers effective protection for a real-world adversary model, requires minimal integration effort for hardware designers, and retrofits to already deployed (and so far vulnerable) systems.
2021
TCHES
FIVER – Robust Verification of Countermeasures against Fault Injections 📺
Fault Injection Analysis is seen as a powerful attack against implementations of cryptographic algorithms. Over the last two decades, researchers have proposed a plethora of countermeasures to secure such implementations. However, the design process and implementation are still error-prone, complex, and manual tasks which require long-standing experience in hardware design and physical security. Moreover, the validation of the claimed security is often only done by empirical testing in a very late stage of the design process. To avoid such empirical testing strategies, approaches based on formal verification are applied instead, providing the designer with early feedback. In this work, we present a fault verification framework to validate the security of countermeasures against fault-injection attacks designed for ICs. The verification framework works on netlist level, parses the given digital circuit into a model based on Binary Decision Diagrams, and performs symbolic fault injections. This verification approach constitutes a novel strategy to evaluate protected hardware designs against fault injections, offering new opportunities such as performing full analyses under a given fault model. Eventually, we apply the proposed verification framework to real-world implementations of well-established countermeasures against fault-injection attacks. Here, we consider protected designs of the lightweight ciphers CRAFT and LED-64 as well as AES. Thanks to several optimization strategies, our tool is able to perform more than 90 million fault injections in a single-round CRAFT design and evaluate its security in under 50 min, while the symbolic simulation approach considers all 2^128 primary inputs.
2021
TCHES
Scabbard: a suite of efficient learning with rounding key-encapsulation mechanisms 📺
In this paper, we introduce Scabbard, a suite of post-quantum key-encapsulation mechanisms. Our suite contains three different schemes, Florete, Espada, and Sable, based on the hardness of the module- or ring-learning with rounding problem. In this work, we first show how the latest advancements in lattice-based cryptography can be utilized to create new, better schemes and even improve the state of the art in post-quantum cryptography. We put particular focus on designing schemes that can optimally exploit the parallelism offered by certain hardware platforms and are also suitable for resource-constrained devices. We show that this can be achieved without compromising the security of the schemes or penalizing their performance on other platforms. To substantiate our claims, we provide optimized implementations of our three new schemes on a wide range of platforms, including general-purpose Intel processors using both portable C and vectorized instructions, embedded platforms such as Cortex-M4 microcontrollers, and hardware platforms such as FPGAs. We show that on each platform, our schemes can outperform the state of the art in speed, memory footprint, or area requirements.
2021
TCHES
The SPEEDY Family of Block Ciphers: Engineering an Ultra Low-Latency Cipher from Gate Level for Secure Processor Architectures 📺
We introduce SPEEDY, a family of ultra low-latency block ciphers. We mix engineering expertise into each step of the cipher’s design process in order to create a secure encryption primitive with an extremely low latency in CMOS hardware. The centerpiece of our constructions is a high-speed 6-bit substitution box whose coordinate functions are realized as two-level NAND trees. In contrast to other low-latency block ciphers such as PRINCE, PRINCEv2, MANTIS and QARMA, we neither constrain ourselves by demanding decryption at low overhead, nor by requiring a super low area or energy. This freedom together with our gate- and transistor-level considerations allows us to create an ultra low-latency cipher which outperforms all known solutions in single-cycle encryption speed. Our main result, SPEEDY-6-192, is a 6-round 192-bit block and 192-bit key cipher which can be executed faster in hardware than any other known encryption primitive (including Gimli in an Even-Mansour scheme and the Orthros pseudorandom function) and offers 128-bit security. One round more, i.e., SPEEDY-7-192, provides full 192-bit security. SPEEDY primarily targets hardware security solutions embedded in high-end CPUs, where area and energy restrictions are secondary while high performance is the number one priority.
2021
TCHES
Higher-Order Lookup Table Masking in Essentially Constant Memory 📺
Masking using randomised lookup tables is a popular countermeasure for side-channel attacks, particularly at small masking orders. An advantage of this class of countermeasures for masking S-boxes, compared to ISW-based masking, is that it supports pre-processing and thus significantly reduces the amount of computation to be done after the unmasked inputs are available. Indeed, the “online” computation can be as fast as just a table lookup. But the size of the randomised lookup table increases linearly with the masking order, and hence the RAM memory required to store pre-processed tables becomes infeasible for higher masking orders. Hence, demonstrating the feasibility of full pre-processing of higher-order lookup-table-based masking schemes on resource-constrained devices has remained an open problem. In this work, we solve the above problem by implementing a higher-order lookup-table-based scheme using an amount of RAM memory that is essentially independent of the masking order. More concretely, we reduce the amount of RAM memory needed for the table-based scheme of Coron et al. (TCHES 2018) approximately by a factor equal to the number of shares. Our technique is based upon the use of a pseudorandom number generator (PRG) to minimise the randomness complexity of ISW-based masking schemes, as proposed by Ishai et al. (ICALP 2013) and Coron et al. (Eurocrypt 2020). Hence, we show that for lookup-table-based masking schemes, the use of a PRG not only reduces the randomness complexity (now logarithmic in the size of the S-box) but also the memory complexity, and without any significant increase in the overall running time. We have implemented in software the higher-order table-based masking scheme of Coron et al. (TCHES 2018) at tenth order with full pre-processing of a single execution of all the AES S-boxes on an ARM Cortex-M4 device that has 256 KB of RAM memory. Our technique requires only 41.2 KB of RAM memory, whereas the original scheme would have needed 440 KB. Moreover, our 8-bit implementation results demonstrate that the online execution time of our variant is about 1.5 times faster compared to the 8-bit bitsliced masked implementation of AES-128.
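For context, the first-order version of table-recomputation masking — the scheme whose higher-order memory blow-up the paper removes — looks as follows (toy 4-bit S-box; the masked table is built in a pre-processing phase so that the online phase is a single lookup):

```python
import secrets

SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT

def preprocess(r_in, r_out):
    """Randomised table T*[u] = S[u ^ r_in] ^ r_out, built before data arrives."""
    return [SBOX[u ^ r_in] ^ r_out for u in range(16)]

x = 0x9                                   # sensitive input
r_in, r_out = secrets.randbelow(16), secrets.randbelow(16)
T = preprocess(r_in, r_out)               # offline phase
x_masked = x ^ r_in                       # the device only sees x ^ r_in
y_masked = T[x_masked]                    # online phase: one table lookup
assert y_masked ^ r_out == SBOX[x]
print("masked lookup correct")
```

At order d, the table and the mask material grow with the number of shares; this is the RAM blow-up the paper eliminates by regenerating randomness from a PRG.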
2021
TCHES
Yoroi: Updatable Whitebox Cryptography 📺
Whitebox cryptography aims to provide security in the whitebox setting, where the adversary has unlimited access to the implementation and its environment. In order to ensure security in the whitebox setting, a scheme should prevent key extraction attacks and code-lifting attacks, in which the adversary steals the original cryptographic implementation instead of the key and utilizes it as a big key. Although recently published ciphers such as SPACE, SPNbox, and Whiteblock successfully achieve security against key extraction attacks, they only provide mitigation of code-lifting attacks through the so-called space hardness and incompressibility properties of the underlying tables, as the space-hard/incompressible table might eventually be stolen through continuous leakage. Completely preventing such attacks may require periodically updating the secret key. However, that entails high costs and might introduce an additional vulnerability into the system due to the necessity of re-encrypting all data with the updated key. In this paper, we introduce a new property for whitebox cryptography, denominated longevity. This property enhances security against code-lifting attacks with continuous leakage by updating incompressible tables instead of the secret key. We propose a family of new whitebox-secure block ciphers, Yoroi, that has the longevity property in addition to space hardness. By updating its implementation periodically, Yoroi provides constant security against code-lifting attacks without key updating. Moreover, the performance of Yoroi is competitive with existing cipher implementations in both the blackbox and whitebox contexts.
2021
TCHES
Batching CSIDH Group Actions using AVX-512 📺
Commutative Supersingular Isogeny Diffie-Hellman (or CSIDH for short) is a recently-proposed post-quantum key establishment scheme that belongs to the family of isogeny-based cryptosystems. The CSIDH protocol is based on the action of an ideal class group on a set of supersingular elliptic curves and comes with some very attractive features, e.g. the ability to serve as a “drop-in” replacement for the standard elliptic curve Diffie-Hellman protocol. Unfortunately, the execution time of CSIDH is prohibitively high for many real-world applications, mainly due to the enormous computational cost of the underlying group action. Consequently, there is a strong demand for optimizations that increase the efficiency of the class group action evaluation, which is not only important for CSIDH, but also for related cryptosystems like the signature schemes CSI-FiSh and SeaSign. In this paper, we explore how the AVX-512 vector extensions (incl. AVX-512F and AVX-512IFMA) can be utilized to optimize constant-time evaluation of the CSIDH-512 class group action with the goal of, respectively, maximizing throughput and minimizing latency. We introduce different approaches for batching group actions and computing them in SIMD fashion on modern Intel processors. In particular, we present a hybrid batching technique that, when combined with optimized (8 × 1)-way prime-field arithmetic, increases the throughput by a factor of 3.64 compared to a state-of-the-art (non-vectorized) x64 implementation. On the other hand, vectorization in a 2-way fashion aimed to reduce latency makes our AVX-512 implementation of the group action evaluation about 1.54 times faster than the state-of-the-art. To the best of our knowledge, this paper is the first to demonstrate the high potential of using vector instructions to increase the throughput (resp. decrease the latency) of constant-time CSIDH.
2021
TCHES
Rainbow on Cortex-M4 📺
We present the first Cortex-M4 implementation of the NIST PQC signature finalist Rainbow. We target the Giant Gecko EFM32GG11B, which comes with 512 kB of RAM and can easily accommodate the keys of Rainbow-I. We present fast constant-time bitsliced F_16 multiplication allowing multiplication of 32 field elements in 32 clock cycles. Additionally, we introduce a new way of computing the public map P in the verification procedure, allowing vastly faster signature verification. Both the signing and verification procedures of our implementation are by far the fastest among the NIST PQC signature finalists. Signing of RainbowI-classic requires roughly 957 000 clock cycles, which is 4× faster than the state-of-the-art Dilithium2 implementation and 45× faster than Falcon-512. Verification needs about 239 000 cycles, which is 5× and 2× faster, respectively. The cost of signing can be further decreased by 20% when storing the secret key in a bitsliced representation.
2021
TCHES
A Side-Channel Attack on a Masked IND-CCA Secure Saber KEM Implementation 📺
In this paper, we present a side-channel attack on a first-order masked implementation of IND-CCA secure Saber KEM. We show how to recover both the session key and the long-term secret key from 24 traces using a deep neural network created at the profiling stage. The proposed message recovery approach learns a higher-order model directly, without explicitly extracting random masks at each execution. This eliminates the need for a fully controllable profiling device which is required in previous attacks on masked implementations of LWE/LWR-based PKEs/KEMs. We also present a new secret key recovery approach based on maps from error-correcting codes that can compensate for some errors in the recovered message. In addition, we discovered a previously unknown leakage point in the primitive for masked logical shifting on arithmetic shares.
2021
TCHES
Masking in Fine-Grained Leakage Models: Construction, Implementation and Verification 📺
We propose a new approach for building efficient, provably secure, and practically hardened implementations of masked algorithms. Our approach is based on a Domain Specific Language in which users can write efficient assembly implementations and fine-grained leakage models. The latter are then used as a basis for formal verification, allowing for the first time formal guarantees for a broad range of device-specific leakage effects not addressed by prior work. The practical benefits of our approach are demonstrated through a case study of the PRESENT S-Box: we develop a highly optimized and provably secure masked implementation, and show through practical evaluation based on TVLA that our implementation is practically resilient. Our approach significantly narrows the gap between formal verification of masking and practical security.
2021
TOSC
1, 2, 3, Fork: Counter Mode Variants based on a Generalized Forkcipher 📺
A multi-forkcipher (MFC) is a generalization of the forkcipher (FC) primitive introduced by Andreeva et al. at ASIACRYPT’19. An MFC is a tweakable cipher that computes s output blocks for a single input block, with s arbitrary but fixed. We define MFC security in the ind-prtmfp notion as indistinguishability from s tweaked permutations. Generalizing tweakable block ciphers (TBCs, s = 1) as well as forkciphers (s = 2), MFC lends itself well to building simple-to-analyze modes of operation that support any number of cipher output blocks. Our main contribution is the generic CTR encryption mode GCTR that makes parallel calls to an MFC to encrypt a message M. We analyze the set of all 36 “simple and natural” GCTR variants under the nivE security notion by Peyrin and Seurin from CRYPTO’16. Our proof method makes use of an intermediate abstraction called tweakable CTR (TCTR) that captures the core security properties of GCTR common to all variants, making their analyses easier. Our results show that many of the schemes achieve from well beyond birthday bound (BBB) to full n-bit security under nonce-respecting adversaries, and some even achieve BBB and close to full n-bit security in the face of realistic nonce misuse conditions. We finally present an efficiency comparison of GCTR using ForkSkinny (an MFC with s = 2) with the traditional CTR and the more recent CTRT modes, both instantiated with the SKINNY TBC. Our estimations show that any GCTR variant with ForkSkinny can achieve an efficiency advantage of over 20% for moderately long messages, illustrating that the use of an efficient MFC with s ≥ 2 brings a clear speed-up.
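Structurally, GCTR is counter mode where each primitive call returns s keystream blocks. The Python toy below fakes the MFC with SHA-256 purely to show the dataflow — it is not ForkSkinny, and the actual GCTR variants differ in how the tweak and counter are arranged across the 36 options analyzed:

```python
import hashlib

S, BLOCK = 2, 16                                 # s = 2 outputs of 16 bytes

def mfc(key, tweak, block):
    """Stand-in MFC: one 'call' -> s pseudo-random output blocks."""
    digest = hashlib.sha256(key + tweak + block).digest()
    return [digest[i * BLOCK:(i + 1) * BLOCK] for i in range(S)]

def gctr_encrypt(key, nonce, msg):
    out, ctr = b"", 0
    while len(out) < len(msg):
        blocks = mfc(key, nonce, ctr.to_bytes(BLOCK, "big"))
        out += b"".join(blocks)                  # s blocks per primitive call
        ctr += 1
    return bytes(m ^ k for m, k in zip(msg, out))

key, nonce = b"k" * 16, b"n" * 16
ct = gctr_encrypt(key, nonce, b"attack at dawn, attack at dusk!!")
print(gctr_encrypt(key, nonce, ct))              # CTR decryption = encryption
```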
2021
TOSC
Cryptanalysis of the SoDark Cipher for HF Radio Automatic Link Establishment 📺
The SoDark cipher is used to protect transmitted frames in the second and third generation automatic link establishment (ALE) standards for high frequency (HF) radios. The cipher is primarily meant to prevent unauthorized linking and attacks on the availability of HF radio networks. This paper represents the first known security analysis of the cipher used by the second generation ALE protocol—the de facto world standard—and presents a related-tweak attack on the full eight-round version of the algorithm. Under certain conditions, collisions of intermediate states several rounds into the cipher can be detected from the ciphertext with high probability. This enables testing against the intermediate states using only parts of the key. The best attack is a chosen-ciphertext attack which can recover the secret key in about an hour with 100% probability, using 2^9 chosen ciphertexts.
2021
TOSC
Improved guess-and-determine and distinguishing attacks on SNOW-V 📺
In this paper, we investigate the security of SNOW-V, demonstrating two guess-and-determine (GnD) attacks against the full version with complexities 2^384 and 2^378, respectively, and one distinguishing attack against a reduced variant with complexity 2^303. Our GnD attacks use enumeration with recursion to explore valid guessing paths, and try to truncate as many invalid guessing paths as possible at early stages of the recursion by carefully designing the order of guessing. In our first GnD attack, we guess three 128-bit state variables and determine the remaining four according to four consecutive keystream words. We finally use the next three keystream words to verify the correct guess. The second GnD attack is similar but exploits one more keystream word as side information, helping to truncate more guessing paths. Our distinguishing attack targets a reduced variant where 32-bit adders are replaced with exclusive-OR operations. The samples can be collected from short keystream sequences under different (key, IV) pairs. These attacks do not threaten SNOW-V, but provide more in-depth details for understanding its security and give new ideas for cryptanalysis of other ciphers.
2021
TOSC
Improved Preimage Attacks on 3-Round Keccak-224/256 📺
In this paper, we provide an improved method for preimage attacks on standard 3-round Keccak-224/256. Our method is based on the work by Li and Sun. Their strategy is to find a 2-block preimage instead of a 1-block one by constructing the first and second message blocks in two stages. Under this strategy, they design a new linear structure for 2-round Keccak-224/256 with 194 degrees of freedom left, which is able to construct the second message block with a complexity of 2^31/2^62. However, the bottleneck of this strategy is that the first stage needs much more expense than the second one. Therefore, we improve the first stage using two techniques. The first technique is constructing multi-block messages rather than a one-block message in the first stage, which can reach a better inner state. The second technique is setting restricting equations more efficiently, which also works for 3-round Keccak-256. As a result, the complexity of finding a preimage for 3-round Keccak-224/256 can be decreased from 2^38/2^81 to 2^32/2^65.
2021
TOSC
Exploring Differential-Based Distinguishers and Forgeries for ASCON 📺
Automated methods have become crucial components when searching for distinguishers against symmetric-key cryptographic primitives. While MILP and SAT solvers are among the most popular tools to model ciphers and perform cryptanalysis, other methods with different performance profiles are appearing. In this article, we explore the use of Constraint Programming (CP) for differential cryptanalysis on the Ascon authenticated encryption family (first choice of the CAESAR lightweight applications portfolio and current finalist of the NIST LWC competition) and its internal permutation. We first present a search methodology for finding differential characteristics for Ascon with CP, which can easily find the best differential characteristics already reported by the Ascon designers. This shows the capability of CP to easily generate good differential results compared to dedicated search heuristics. Based on our tool, we also parametrize the search strategies in CP to generate other differential characteristics with the goal of forming limited-birthday distinguishers for 4, 5, 6 and 7 rounds and rectangle attacks for 4 and 5 rounds of the Ascon internal permutation. We propose a categorization of the distinguishers into black-box and non-black-box to better differentiate them, as they are often useful in different contexts. We also obtained limited-birthday distinguishers which represent the currently best-known distinguishers for 4, 5 and 6 rounds in the category of non-black-box distinguishers. Leveraging our tool again, we have generated forgery attacks against both reduced-round Ascon-128 and Ascon-128a, improving over the best reported results at the time of writing. Finally, using the best differential characteristic we have found for 2 rounds, we could also improve a recent attack on round-reduced Ascon-Hash.
2021
TOSC
Boomeyong: Embedding Yoyo within Boomerang and its Applications to Key Recovery Attacks on AES and Pholkos 📺
This work investigates a generic way of combining two very effective and well-studied cryptanalytic tools, proposed almost 18 years apart, namely the boomerang attack introduced by Wagner at FSE 1999 and the yoyo attack by Rønjom et al. at Asiacrypt 2017. In doing so, the S-box switch and ladder switch techniques are leveraged to embed a yoyo trail inside a boomerang trail. As an immediate application, a 6-round key recovery attack on AES-128 is mounted with a time complexity of 2^78. A 10-round key recovery attack on the recently introduced AES-based tweakable block cipher Pholkos is also furnished to demonstrate the applicability of the new technique to AES-like constructions. The results on AES are experimentally verified by applying and implementing them on a small-scale variant of AES. We provide arguments that draw a relation between the proposed strategy and the retracing boomerang attack devised at Eurocrypt 2020. To the best of our knowledge, this is the first attempt to merge the yoyo and boomerang techniques to analyze SPN ciphers, and it warrants further attention as it has the potential of becoming an important cryptanalysis tool.
2021
TOSC
PLCrypto: A Symmetric Cryptographic Library for Programmable Logic Controllers 📺
Programmable Logic Controllers (PLCs) are control devices widely used in industrial automation. They can be found in critical infrastructures like power grids, water systems, nuclear plants, and manufacturing systems. This paper introduces PLCrypto, a software cryptographic library that implements lightweight symmetric cryptographic algorithms for PLCs using a standard PLC programming language called structured text (ST). To the best of our knowledge, PLCrypto is the first ST-based cryptographic library that is executable on commercial off-the-shelf PLCs. PLCrypto includes a wide range of commonly used algorithms, ten in total, including one-way functions, message authentication codes, hash functions, block ciphers, and pseudo-random functions/generators. PLCrypto can be used to protect the confidentiality and integrity of data on PLCs without additional hardware or firmware modification. This paper also presents general optimization methodologies and techniques used in PLCrypto for implementing primitive operations like bit-shifting/rotation, substitution, and permutation. The optimization tricks we distilled from our practice can also guide future implementations of other computation-heavy programs on PLCs. To demonstrate a use case of PLCrypto in practice, we further realize a cryptographic protocol called proof of aliveness as a case study. We benchmarked the algorithms and protocols in PLCrypto on a commercial PLC, the Allen-Bradley ControlLogix 5571, which is widely used in the real world. We also make our source code publicly available, so plant operators can freely deploy our library in practice.
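As an aside on the primitive operations mentioned above: word rotations are a frequent source of bugs on word-oriented targets. The following is a minimal Python sketch of 32-bit rotation via shifts and masking (our illustration only; PLCrypto itself is written in structured text, and none of these identifiers come from the library):

```python
MASK32 = 0xFFFFFFFF  # keep results within a 32-bit word

def rotl32(x: int, r: int) -> int:
    """Rotate the 32-bit word x left by r bit positions."""
    r %= 32
    return ((x << r) | (x >> (32 - r))) & MASK32

def rotr32(x: int, r: int) -> int:
    """Rotate the 32-bit word x right by r bit positions."""
    return rotl32(x, (32 - r) % 32)

assert rotl32(0x80000001, 1) == 0x00000003
assert rotr32(rotl32(0xDEADBEEF, 13), 13) == 0xDEADBEEF
```

On platforms without native rotate instructions, exactly this shift-or-mask decomposition is what an implementation has to spell out by hand.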
2021
TOSC
Rocca: An Efficient AES-based Encryption Scheme for Beyond 5G 📺
In this paper, we present an AES-based authenticated encryption with associated data scheme called Rocca, designed to meet the speed and security requirements of 6G systems. To achieve ultra-fast software implementations, the basic design strategy is to take full advantage of AES-NI and SIMD instructions, as in the AEGIS family and Tiaoxin-346. Although Jean and Nikolić have generalized the construction of efficient round functions using only one round of AES (aesenc) and 128-bit XOR operations and have found several efficient candidates, there still seems to be potential to further improve speed and state size. In order to minimize the critical path of one round, we exclude the case of applying both aesenc and XOR in cascade within one round. By introducing a cost-free block permutation into the round function, we are able to search for candidates in a larger space without sacrificing performance. Consequently, we obtain more efficient constructions with a smaller state size than the candidates of Jean and Nikolić. Based on the newly discovered round function, we carefully design the corresponding AEAD scheme with 256-bit security, taking several reported attacks on the AEGIS family and Tiaoxin-346 into account. Our AEAD scheme can reach 138 Gbps, which is 4 times faster than the AEAD scheme of SNOW-V. Rocca is also much faster than other efficient schemes with a 256-bit key length, e.g., AEGIS-256 and AES-256-GCM. As far as we know, Rocca is the first dedicated cryptographic algorithm targeting 6G systems, i.e., offering a 256-bit key length and a speed of more than 100 Gbps.
2021
TOSC
Permutation Based EDM: An Inverse Free BBB Secure PRF 📺
In CRYPTO 2019, Chen et al. initiated an interesting research direction in designing PRFs based on public permutations. They proposed two beyond-birthday-bound secure n-bit-to-n-bit PRF constructions, SoEM22 and SoKAC21, which are built on public permutations, where n is the size of the permutation. However, both of their constructions require two independent instances of public permutations. In FSE 2020, Chakraborti et al. proposed a single-public-permutation-based n-bit-to-n-bit beyond-birthday-bound secure PRF, which they refer to as PDMMAC. Although the construction is minimal in the number of permutations, its design requires an inverse call to the underlying permutation. Coming up with a beyond-birthday-bound secure public-permutation-based n-bit-to-n-bit PRF with a single permutation and two forward calls was left as an open problem in their paper. In this work, we propose pEDM, a single-permutation-based n-bit-to-n-bit PRF with two calls that does not require invertibility of the permutation. We show that our construction is secure against all adaptive information-theoretic distinguishers that make roughly up to 2^{2n/3} construction and primitive queries. Moreover, we also show a matching attack with similar query complexity that establishes the tightness of our security bound.
2021
TOSC
Comparing Large-unit and Bitwise Linear Approximations of SNOW 2.0 and SNOW 3G and Related Attacks 📺
In this paper, we study and compare the byte-wise and bitwise linear approximations of SNOW 2.0 and SNOW 3G, and present a fast correlation attack on SNOW 3G using our newly found bitwise linear approximations. On the one hand, we reconsider the relation between large-unit linear approximations and the smaller-unit/bitwise ones derived from them, showing that approximations on large-unit alphabets have advantages over all the smaller-unit/bitwise ones in linear attacks. On the other hand, by comparing the byte-wise and bitwise linear approximations of SNOW 2.0 and SNOW 3G respectively, we have found many concrete examples of 8-bit linear approximations for which certain 1-dimensional/bitwise linear approximations have almost the same SEI (Squared Euclidean Imbalance) as the original 8-bit ones. That is, each of these byte-wise linear approximations is dominated by a single bitwise approximation, and thus the whole SEI is not essentially larger than the SEI of the dominating single bitwise approximation. Since correlation attacks can be implemented more efficiently using bitwise approximations rather than large-unit approximations, improvements over the large-unit linear approximation attacks are possible for SNOW 2.0 and SNOW 3G. For SNOW 3G, we make a careful search of the bitwise masks for the linear approximations of the FSM and obtain many mask tuples which yield high correlations. Using these bitwise linear approximations, we mount a fast correlation attack to recover the initial state of the LFSR with the time/memory/data/pre-computation complexities all upper bounded by 2^174.16, slightly improving on the previous best attack, which used an 8-bit (vectorized) linear approximation in a correlation attack with all complexities upper bounded by 2^176.56. Though not a significant improvement, our results illustrate that there is an opportunity to improve on large-unit attacks by using bitwise linear approximations in a linear approximation attack, and they provide a new insight into the relation between large-unit and bitwise linear approximations.
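For readers unfamiliar with the SEI measure used above, the following minimal Python sketch (our illustration, not taken from the paper) computes the Squared Euclidean Imbalance of a distribution over m outcomes, SEI(D) = m * sum_x (p_x - 1/m)^2; for a single bitwise approximation with correlation c, the SEI is simply c^2:

```python
def sei(probs):
    """Squared Euclidean Imbalance of a distribution over m outcomes:
    SEI = m * sum_x (p_x - 1/m)^2; it is zero iff the distribution is uniform."""
    m = len(probs)
    u = 1.0 / m  # uniform probability
    return m * sum((p - u) ** 2 for p in probs)

# A perfectly uniform 8-bit distribution has SEI 0 ...
assert sei([1.0 / 256] * 256) == 0.0
# ... while a single bit with correlation c has SEI = c^2.
c = 2.0 ** -5
assert abs(sei([(1 + c) / 2, (1 - c) / 2]) - c ** 2) < 1e-15
```

This is why a byte-wise approximation "dominated" by one bitwise approximation buys almost nothing: the byte-wise SEI barely exceeds the squared correlation of the dominating bit.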
2021
TOSC
Weak Keys in Reduced AEGIS and Tiaoxin 📺
AEGIS-128 and Tiaoxin-346 (Tiaoxin for short) are two AES-based primitives submitted to the CAESAR competition. Among them, AEGIS-128 has been selected in the final portfolio for high-performance applications, while Tiaoxin is a third-round candidate. Although both primitives adopt a stream-cipher-based design, they are quite different from well-known bit-oriented stream ciphers like Trivium and the Grain family. Their common feature lies in the round update function, where the state is divided into several 128-bit words and each word has the option to pass through an AES round or not. During the 6-year CAESAR competition, it is surprising that for both primitives there was no third-party cryptanalysis of the initialization phase. Due to the similarities between the two primitives, we are motivated to investigate whether there is a common way to evaluate the security of their initialization phases. Our technical contribution is to write the expressions of the internal states in terms of the nonce and the key by treating a 128-bit word as a unit, and then to carefully study how to simplify these expressions by adding proper conditions. As a result, we find that there are several groups of weak keys with 2^96 keys each in 5-round AEGIS-128 and 8-round Tiaoxin, which allows us to construct integral distinguishers with time complexity 2^32 and data complexity 2^32. Based on the distinguisher, the time complexity to recover the weak key is 2^72 for 5-round AEGIS-128. However, the weak-key recovery attack on 8-round Tiaoxin requires the use of a weak constant occurring with probability 2^-32. All the attacks reach half of the total number of initialization rounds. We expect that this work can advance the understanding of designs similar to AEGIS and Tiaoxin.
2021
TOSC
Improved Rectangle Attacks on SKINNY and CRAFT 📺
The boomerang and rectangle attacks are adaptations of differential cryptanalysis that regard the target cipher E as a composition of two sub-ciphers, i.e., E = E1 ∘ E0, to construct a distinguisher for E with probability p^2q^2 by concatenating two short differential trails for E0 and E1 with probabilities p and q respectively. According to previous research, the dependency between these two differential characteristics has a great impact on the probability of boomerang and rectangle distinguishers. Dunkelman et al. proposed the sandwich attack to formalise this dependency, regarding E as three parts, i.e., E = E1 ∘ Em ∘ E0, where Em contains the dependency between the two differential trails and satisfies some differential propagation with probability r. Accordingly, the entire probability is p^2q^2r. Recently, Song et al. proposed a general framework to identify the actual boundaries of Em and systematically evaluate the probability of Em with any number of rounds, and applied their method to accurately evaluate the probabilities of the best boomerang distinguishers for SKINNY. In this paper, using a more advanced method to search for boomerang distinguishers, we show that the best previous boomerang distinguishers for SKINNY can be significantly improved in terms of probability and number of rounds. More precisely, we propose related-tweakey boomerang distinguishers for up to 19, 21, 23, and 25 rounds of SKINNY-64-128, SKINNY-128-256, SKINNY-64-192 and SKINNY-128-384 respectively, which improve the previous boomerang distinguishers of these variants of SKINNY by 1, 2, 1, and 1 round respectively. Based on the improved boomerang distinguishers for SKINNY, we provide related-tweakey rectangle attacks on 23 rounds of SKINNY-64-128, 24 rounds of SKINNY-128-256, 29 rounds of SKINNY-64-192, and 30 rounds of SKINNY-128-384. It is worth noting that our improved related-tweakey rectangle attacks on SKINNY-64-192, SKINNY-128-256 and SKINNY-128-384 can be directly applied to the same number of rounds of ForkSkinny-64-192, ForkSkinny-128-256 and ForkSkinny-128-384 respectively. CRAFT is another SKINNY-like tweakable block cipher, for which we provide the first security analysis against the rectangle attack. As a result, we provide a 14-round boomerang distinguisher for CRAFT in the single-tweak model, based on which we propose a single-tweak rectangle attack on 18 rounds of this cipher. Moreover, following previous research on the evaluation of switching over multiple rounds of boomerang distinguishers, we also introduce new tools called the Double Boomerang Connectivity Table (DBCT), LBCT⫤, and UBCT⊨ to evaluate the boomerang switch through multiple rounds more accurately.
2021
TOSC
Linear Cryptanalyses of Three AEADs with GIFT-128 as Underlying Primitives 📺
This paper considers the linear cryptanalyses of Authenticated Encryptions with Associated Data (AEADs) GIFT-COFB, SUNDAE-GIFT, and HyENA. All of these proposals take GIFT-128 as underlying primitives. The automatic search with the Boolean satisfiability problem (SAT) method is implemented to search for linear approximations that match the attack settings concerning these primitives. With the newly identified approximations, we launch key-recovery attacks on GIFT-COFB, SUNDAE-GIFT, and HyENA when the underlying primitives are replaced with 16-round, 17-round, and 16-round versions of GIFT-128. The resistance of GIFT-128 against linear cryptanalysis is also evaluated. We present a 24-round key-recovery attack on GIFT-128 with a newly obtained 19-round linear approximation. We note that the attack results in this paper are far from threatening the security of GIFT-COFB, SUNDAE-GIFT, HyENA, and GIFT-128.
2021
TOSC
(Quantum) Collision Attacks on Reduced Simpira v2 📺
Simpira v2 is an AES-based permutation proposed by Gueron and Mouha at ASIACRYPT 2016. In this paper, we build an improved MILP model to count the differential and linear active S-boxes for Simpira v2, which achieves tighter bounds on the minimum number of active S-boxes for a few versions of Simpira v2. Then, based on the new model, we find some new truncated differentials for Simpira v2 and give a series of (quantum) collision attacks on two versions of reduced Simpira v2.
2021
TOSC
Automated Search Oriented to Key Recovery on Ciphers with Linear Key Schedule: Applications to Boomerangs in SKINNY and ForkSkinny 📺
Automatic modelling to search for distinguishers with high probability covering as many rounds as possible, e.g., with MILP, SAT/SMT, or CP models, has become a very popular cryptanalysis topic. In those models, the optimization objective is usually the probability or the number of rounds of the distinguisher. If we want to recover the secret key of a round-reduced block cipher, there are usually two phases: finding an efficient distinguisher, and performing a key-recovery attack by extending several rounds before and after the distinguisher. The total number of attacked rounds depends not only on the chosen distinguisher but also on the rounds extended before and after it. In this paper, we combine the two phases into a uniform automatic model. Concretely, we apply this idea to automate the related-key rectangle attacks on SKINNY and ForkSkinny. We propose new distinguishers that are advantageous for key-recovery attacks. Our key-recovery attacks on several versions of round-reduced SKINNY and ForkSkinny cover 1 to 2 more rounds than the best previous attacks.
2021
TOSC
Maximums of the Additive Differential Probability of Exclusive-Or 📺
At FSE 2004, Lipmaa et al. studied the additive differential probability adp⊕(α,β → γ) of exclusive-or, where the differences α,β,γ ∈ F_2^n are expressed using addition modulo 2^n. This probability is used in the analysis of symmetric-key primitives that combine XOR and modular addition, such as the increasingly popular Addition-Rotation-XOR (ARX) constructions. The focus of this paper is on maximal differentials, which are helpful when constructing differential trails. We provide the missing proof for Theorem 3 of the FSE 2004 paper, which states that max_{α,β} adp⊕(α,β → γ) = adp⊕(0,γ → γ) for all γ. Furthermore, we prove that there always exist either two or eight distinct pairs (α,β) such that adp⊕(α,β → γ) = adp⊕(0,γ → γ), and we obtain recurrence formulas for calculating adp⊕. To gain insight into the range of possible differential probabilities, we also study other properties such as the minimum value of adp⊕(0,γ → γ), and we find all γ that attain this minimum.
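To make the object of study concrete, here is a brute-force Python sketch of ours (not the paper's recurrence formulas) that computes adp⊕ directly from its definition for small word sizes and exhaustively checks the maximality statement of Theorem 3:

```python
def adp_xor(alpha, beta, gamma, n):
    """Additive differential probability of XOR on n-bit words:
    Pr over (x, y) that ((x+alpha) ^ (y+beta)) - (x^y) == gamma (mod 2^n)."""
    mod = 1 << n
    hits = sum(1 for x in range(mod) for y in range(mod)
               if ((x + alpha) % mod) ^ ((y + beta) % mod)
               == ((x ^ y) + gamma) % mod)
    return hits / mod ** 2

n = 4
for gamma in range(1 << n):
    best = max(adp_xor(a, b, gamma, n)
               for a in range(1 << n) for b in range(1 << n))
    # Theorem 3: the maximum over (alpha, beta) is attained at (0, gamma).
    assert abs(best - adp_xor(0, gamma, gamma, n)) < 1e-12
```

The same loop, extended to count how many pairs (α,β) attain the maximum, reproduces the "two or eight" phenomenon proved in the paper.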
2021
TOSC
Proving Resistance Against Infinitely Long Subspace Trails: How to Choose the Linear Layer 📺
Designing cryptographic permutations and block ciphers using a substitution-permutation network (SPN) approach where the nonlinear part does not cover the entire state has recently gained attention due to favorable implementation characteristics in various scenarios. For word-oriented partial SPN (P-SPN) schemes with a fixed linear layer, our goal is to better understand how the details of the linear layer affect the security of the construction. In this paper, we derive conditions that allow us to either set up or prevent attacks based on infinitely long truncated differentials with probability 1. Our analysis is rather broad compared to earlier independent work on this problem, since we consider (1) both invariant and non-invariant/iterative trails, and (2) trails with and without active S-boxes. For these cases, we provide rigorous sufficient and necessary conditions for the matrix that defines the linear layer to prevent the analyzed attacks. On the practical side, we present a tool that can determine whether a given linear layer is vulnerable based on these results. Furthermore, we propose a sufficient condition for the linear layer that, if satisfied, ensures that no infinitely long truncated differential exists. This condition is related to the degree and the irreducibility of the minimal polynomial of the matrix that defines the linear layer. Besides P-SPN schemes, our observations may also have a crucial impact on the Hades design strategy, which mixes rounds with full S-box layers and rounds with partial S-box layers.
2021
TOSC
Provable Security of SP Networks with Partial Non-Linear Layers 📺
Motivated by the recent trend towards block ciphers with low multiplicative complexity (e.g., Zorro, CHES 2013; LowMC, EUROCRYPT 2015; HADES, EUROCRYPT 2020; MALICIOUS, CRYPTO 2020), we study their underlying structure, partial SPNs, i.e., Substitution-Permutation Networks (SPNs) with parts of the substitution layer replaced by an identity mapping, and put forward the first provable security analysis of such partial SPNs built upon dedicated linear layers. For different instances of partial SPNs using MDS linear layers, we establish strong pseudorandom security as well as practical provable security against impossible differential attacks. By extending the well-established MDS-code-based idea, we also propose the first principled design of linear layers that ensures optimal differential propagation. Our results formally confirm the conjecture that partial SPNs achieve the same security as normal SPNs while consuming less non-linearity, in a well-established framework.
2021
TOSC
Algorithm Substitution Attacks: State Reset Detection and Asymmetric Modifications 📺
In this paper, we study algorithm substitution attacks (ASAs), where an algorithm in a cryptographic scheme is replaced by a subverted version. First, we formalize and study the use of state resets to detect ASAs, and show that many published stateful ASAs are detectable with simple practical methods relying on state resets. Second, we introduce two asymmetric ASAs on symmetric encryption, which are undetectable or unexploitable even by an adversary who knows the embedded subversion key. We also generalize this result, allowing any symmetric ASA (on any cryptographic scheme) satisfying certain properties to be transformed into an asymmetric ASA. Our work demonstrates the broad applicability of the techniques first introduced by Bellare, Paterson, and Rogaway (Crypto 2014) and Bellare, Jaeger, and Kane (CCS 2015) and reinforces the need for precise definitions surrounding the detectability of stateful ASAs.
2021
TOSC
On Length Independent Security Bounds for the PMAC Family 📺
At FSE 2017, Gaži et al. demonstrated a pseudorandom function (PRF) distinguisher (Gaži et al., ToSC 2016(2)) on PMAC with Ω(lq^2/2^n) advantage, where q, l, and n denote the number of queries, the maximum permissible query length (in n-bit blocks), and the block size of the underlying block cipher. This, in combination with the upper bounds of O(lq^2/2^n) (Minematsu and Matsushima, FSE 2007) and O(qσ/2^n) (Nandi and Mandal, J. Mathematical Cryptology 2008(2)), resolved the long-standing problem of the exact security of PMAC. Gaži et al. also showed that the dependency on l can be dropped (i.e., an O(q^2/2^n) bound up to l ≤ 2^{n/2}) for a simplified version of PMAC, called sPMAC, by replacing the Gray-code-based masking in PMAC with any 4-wise independent universal hash-based masking. Recently, Naito proposed another variant of PMAC with two powering-up maskings (Naito, ToSC 2019(2)) that achieves an l-free bound of O(q^2/2^n), provided l ≤ 2^{n/2}. In this work, we first identify a flaw in the analysis of Naito's PMAC variant that invalidates the security proof. Apparently, the flaw is not easy to fix under the existing proof setup. We then formulate an equivalent problem which must be solved in order to achieve l-free security bounds for this variant. Second, we show that sPMAC achieves an O(q^2/2^n) bound for a weaker notion of universality than the earlier condition of 4-wise independence. Third, we analyze the security of PMAC1 (a popular variant of PMAC) with a simple modification in the linear combination of block cipher outputs. We show that this simple modification of PMAC1 has tight security O(q^2/2^n) provided l ≤ 2^{n/4}. Even if l > 2^{n/4}, we still achieve the same tight bound as long as the total number of blocks across all queries is less than 2^{2n/3}.
2021
TOSC
Power Yoga: Variable-Stretch Security of CCM for Energy-Efficient Lightweight IoT 📺
The currently ongoing NIST LWC project aims at identifying new standardization targets for lightweight authenticated encryption with associated data (AEAD) and (optionally) lightweight cryptographic hashing. NIST has deemed it important for performance and cost to be optimized on relevant platforms, especially for short messages. Reyhanitabar, Vaudenay and Vizár (Asiacrypt 2016) gave a formal treatment of the security of nonce-based AEAD with variable stretch, i.e., when the length of the authentication tag is changed between encryptions without changing the key. They argued that AEAD supporting variable stretch is of practical interest for constrained applications, especially low-power battery-operated devices, due to the ability to flexibly trade communication overhead against level of integrity. In this work, we investigate this hypothesis with affirmative results. We present vCCM, a variable-stretch variant of the standard CCM, and prove it secure when used with variable stretch. We then experimentally measure the energy consumption of a real-world wireless sensor node when encrypting and sending messages with vCCM and CCM, respectively. Our projections show that flexibly trading integrity level against ciphertext expansion can lead to up to a 21% reduction in overall energy consumption in certain scenarios. As vCCM is obtained from the widely used CCM by a black-box transformation, allowing any existing CCM implementation to be reused as-is, our results can be immediately put to use in practice. vCCM is all the more relevant because neither the NIST LWC project nor any of the candidates gives consideration to the support of variable stretch and the related integrity-overhead trade-off.
2021
TOSC
CTET+: A Beyond-Birthday-Bound Secure Tweakable Enciphering Scheme Using a Single Pseudorandom Permutation 📺
In this work, we propose a construction of 2-round tweakable substitution-permutation networks using a single secret S-box. This construction is based on non-linear permutation layers using independent round keys, and achieves security beyond the birthday bound in the random permutation model. When instantiated with an n-bit block cipher with κ-bit keys, the resulting tweakable block cipher, dubbed CTET+, can be viewed as a tweakable enciphering scheme that encrypts ωn-bit messages for any integer ω ≥ 2 using (5n + κ)-bit keys and n-bit tweaks, providing 2n/3-bit security. Compared to the 2-round non-linear SPN analyzed in [CDK+18], we both minimize it by requiring a single permutation, and weaken the requirements on the middle linear layer, allowing better performance. As a result, CTET+ becomes the first tweakable enciphering scheme that provides beyond-birthday-bound security using a single permutation, while its efficiency remains comparable to existing schemes including AES-XTS, EME, XCB and TET. Furthermore, we propose a new tweakable enciphering scheme, dubbed AES6-CTET+, which is an actual instantiation of CTET+ using a reduced-round AES block cipher as the underlying secret S-box. Extensive cryptanalysis of this algorithm allows us to claim 127 bits of security. Such tweakable enciphering schemes with huge block sizes become desirable in the context of disk encryption, since processing a whole sector as a single block significantly worsens the granularity for attackers when compared to, for example, AES-XTS, which treats every 16-byte block on the disk independently. Besides, as huge amounts of data are being stored and encrypted at rest under many different keys in the cloud, beyond-birthday-bound security will most likely become necessary in the short term.
2021
TOSC
Perfect Trees: Designing Energy-Optimal Symmetric Encryption Primitives 📺
Energy efficiency is critical in battery-driven devices, and designing energy-optimal symmetric-key ciphers is one of the goals for the use of ciphers in such environments. In the paper by Banik et al. (IACR ToSC 2018), stream ciphers were identified as ideal candidates for low-energy solutions. One of the main conclusions of that paper was that Trivium, when implemented in an unrolled fashion, was by far the most energy-efficient way of encrypting larger quantities of data. In fact, it was shown that as soon as the number of data bits to be encrypted exceeded 320, Trivium consumed the least amount of energy on STM 90 nm ASIC circuits and outperformed the Midori family of block ciphers even in the least energy-hungry ECB mode (Midori was designed specifically for energy efficiency). In this work, we devise the first heuristic energy model in the realm of stream ciphers that links the underlying algebraic topology of the state update function to the consumptive behaviour. The model is then used to derive a metric that exhibits a heavy negative correlation with the energy consumption of a broad range of stream cipher architectures, i.e., the families of Trivium-like, Grain-like and Subterranean-like constructions. We demonstrate that this correlation is especially pronounced for Trivium-like ciphers, which leads us to establish a link between energy consumption and security guarantees that makes it possible to find several alternative energy-optimal versions of Trivium that meet the requirements but consume less energy. We present two such designs, Trivium-LE(F) and Trivium-LE(S), which consume around 15% and 25% less energy respectively, making them the most energy-efficient encryption primitives to date. They inherit the same security level as Trivium, i.e., 80-bit security. We further present Triad-LE as an energy-efficient variant satisfying a higher security level. The simplicity and wide applicability of our model have direct consequences for the conception of future hardware-targeted stream ciphers, as for the first time it is possible to optimize for energy during the design phase. Moreover, we extend the reach of our model beyond plain encryption primitives and propose a novel energy-efficient message authentication code, Trivium-LE-MAC.
2021
TOSC
Diving Deep into the Weak Keys of Round Reduced Ascon 📺
At ToSC 2021, Rohit et al. presented the first distinguishing and key recovery attacks on 7-round Ascon without violating the designers' security claims of the nonce-respecting setting and the data limit of 2^64 blocks per key. So far, these are the best attacks on 7-round Ascon. However, the distinguishers require an (impractical) 2^60 data, while the data complexity of the key recovery attacks exactly equals 2^64. Whether there are any practical distinguishers and key recovery attacks (with data less than 2^64) on 7-round Ascon is still an open problem. In this work, we give positive answers to these questions by providing a comprehensive security analysis of Ascon in the weak-key setting. Our first major result is 7-round cube distinguishers with complexities 2^46 and 2^33, which work for 2^82 and 2^63 keys, respectively. Notably, we show that such weak keys exist for any choice (out of 64) of 46 and 33 specifically chosen nonce variables. In addition, we improve the data complexities of existing distinguishers for 5, 6 and 7 rounds by factors of 2^8, 2^16 and 2^27, respectively. Our second contribution is a new theoretical framework for weak keys of Ascon which is solely based on the algebraic degree. Based on our construction, we identify 2^127.99, 2^127.97 and 2^116.34 weak keys (out of 2^128) for 5, 6 and 7 rounds, respectively. Next, we present two key recovery attacks on 7 rounds with different attack complexities. The best attack can recover the secret key with 2^63 data, 2^69 bits of memory and 2^115.2 time. Our attacks are far from threatening the security of the full 12-round Ascon, but we expect that they provide new insights into Ascon's security.
2021
TOSC
Automatic Search of Cubes for Attacking Stream Ciphers 📺
The cube attack was proposed by Dinur and Shamir, and it has become an important tool for analyzing stream ciphers. Now that the problem of how to recover superpolys accurately has been resolved by Hao et al. at EUROCRYPT 2020, another important problem is how to find “good” superpolys, which is equivalent to finding “good” cubes. However, there are two difficulties in finding “good” cubes. Firstly, the number of candidate cubes is enormous and most cubes are not “good”. Secondly, it is costly to evaluate whether a cube is “good”. In this paper, we present a new algorithm to search for a kind of “good” cube, called valuable cubes. A cube is called valuable if its superpoly has (at least) a balanced secret variable. A valuable cube is “good” because its superpoly brings in 1 bit of information about the key. More importantly, the superpolys of valuable cubes can be used in both theoretical and practical analyses. To search for valuable cubes, instead of testing a set of cubes one by one, the new algorithm deals with the set of cubes together, such that the common computations can be done only once for all candidate cubes and duplicated computations are avoided. Besides, the new algorithm uses a heuristic method to reject useless cubes efficiently. This heuristic method is based on the divide-and-conquer strategy as well as an observation. To verify this new algorithm, we applied it to Trivium and Kreyvium, and obtained three improvements. Firstly, we found two valuable cubes for 843-round Trivium, leading to what is, as far as we know, the first theoretical key-recovery attack against 843-round Trivium, while the previous highest number of rounds of Trivium that could be attacked was 842, given by Hao et al. at EUROCRYPT 2020. Secondly, by finding many small valuable cubes, we present the first practical attacks against 806- and 808-round Trivium, while the previous highest round of Trivium that could be attacked practically was 805. Thirdly, based on the cube used to attack 892-round Kreyvium at EUROCRYPT 2020, we found more valuable cubes and extended the key-recovery attacks on Kreyvium to 893 rounds.
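To make the notion of a valuable cube concrete, here is a toy Python sketch of ours (illustrating the definition only, not the paper's search algorithm): summing a Boolean output over all assignments of the cube variables yields the superpoly, and the cube is valuable if that superpoly contains (at least) a balanced secret variable:

```python
from itertools import product

def cube_sum(f, cube, n_pub, key):
    """XOR f(iv, key) over all 2^|cube| assignments of the cube variables,
    with the remaining public variables fixed to 0. The result is the
    value of the cube's superpoly at this key."""
    acc = 0
    for bits in product((0, 1), repeat=len(cube)):
        iv = [0] * n_pub
        for i, b in zip(cube, bits):
            iv[i] = b
        acc ^= f(iv, key)
    return acc

# Toy output bit: f = v0*v1*k0 ^ v0*v2 ^ k1.  The superpoly of the cube
# {v0, v1} is k0, a balanced secret variable, so {v0, v1} is valuable.
f = lambda v, k: (v[0] & v[1] & k[0]) ^ (v[0] & v[2]) ^ k[1]
for k0, k1 in product((0, 1), repeat=2):
    assert cube_sum(f, [0, 1], 3, [k0, k1]) == k0
```

In a real attack, f is the cipher's output bit after the targeted number of initialization rounds, and evaluating the recovered superpoly on keystream data yields one bit of key information per valuable cube.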
2021
TOSC
Statistical Model of Correlation Difference and Related-Key Linear Cryptanalysis 📺
The goal of this work is to propose a related-key model for linear cryptanalysis. We start by giving the mean and variance of the difference of sampled correlations of two Boolean functions when the same sample of inputs is used to compute both correlations. This result is further extended to determine the mean and variance of the difference of correlations of a pair of Boolean functions taken over a random data sample of fixed size and over a random pair of Boolean functions. We use the properties of the multinomial distribution to achieve these results without independence assumptions. Using a multivariate normal approximation of the multinomial distribution, we obtain that the distribution of the difference of related-key correlations is approximately normal. This result is then applied to existing related-key cryptanalyses. We obtain more accurate right-key and wrong-key distributions and remove artificial assumptions about the independence of sampled correlations. We extend this study to multiple linear approximations and propose a χ^2-type statistic, which is proven to be χ^2-distributed if the linear approximations are independent. We further examine this statistic for multidimensional linear approximations and discuss why removing the assumption about the independence of linear approximations does not work in the related-key setting the same way as in the single-key setting.
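As a small numerical illustration of the basic object being modelled (ours, not the paper's derivation), the statistic of interest is the difference of two sampled correlations computed from one shared sample of inputs; the toy masks below are arbitrary choices of ours:

```python
import random

def sampled_correlation(f, sample):
    """Empirical correlation of a Boolean function f over a sample:
    (#{f(x) = 0} - #{f(x) = 1}) / N."""
    return sum(1 - 2 * f(x) for x in sample) / len(sample)

# Two instances of one linear approximation under related keys, evaluated
# on the SAME sample, as in the related-key model discussed above.
f0 = lambda x: bin(x & 0b1011).count("1") & 1               # under the key
f1 = lambda x: (bin(x & 0b1011).count("1") & 1) ^ (x & 1)   # related key

sample = [random.randrange(1 << 16) for _ in range(1 << 12)]
diff = sampled_correlation(f0, sample) - sampled_correlation(f1, sample)
print(f"difference of sampled correlations: {diff:+.4f}")
```

Because both correlations are computed from the same inputs, they are not independent, which is exactly why the paper derives the mean and variance of the difference from the multinomial distribution instead of assuming independence.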
2021
TOSC
Improved Security Bound of (E/D)WCDM 📺
In CRYPTO’16, Cogliati and Seurin proposed a block cipher based nonce-based MAC, called Encrypted Wegman-Carter with Davies-Meyer (EWCDM), that gives 2n/3-bit MAC security in the nonce-respecting setting and n/2-bit security in the nonce-misuse setting, where n is the block size of the underlying block cipher. However, this construction requires two independent block cipher keys. In CRYPTO’18, Datta et al. came up with a single-keyed block cipher based nonce-based MAC, called Decrypted Wegman-Carter with Davies-Meyer (DWCDM), that also provides 2n/3-bit MAC security in the nonce-respecting setting and n/2-bit security in the nonce-misuse setting. However, the drawback of DWCDM is that it takes only a 2n/3-bit nonce. In fact, the authors showed that DWCDM cannot achieve beyond-birthday-bound security with n-bit nonces. In this paper, we prove that DWCDM with 3n/4-bit nonces provides MAC security up to O(2^{3n/4}) MAC queries against all nonce-respecting adversaries. We also improve the MAC bound of EWCDM from 2n/3 bits to 3n/4 bits. The backbone of these two results is a refined treatment of extended mirror theory that systematically estimates the number of solutions to a system of bivariate affine equations and non-equations, which we apply in the security proofs of the constructions to achieve 3n/4-bit security.
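For orientation, here is a minimal Python sketch of the two constructions, assuming their standard descriptions from the cited CRYPTO'16 and CRYPTO'18 papers: EWCDM computes E_k2(E_k1(N) ⊕ N ⊕ H_kh(M)), and DWCDM replaces the outer encryption with a decryption under the same key. The toy block cipher (a seeded shuffle) and message hash (Python's hash) are hypothetical stand-ins, not secure components, and the sketch ignores DWCDM's restriction to shorter nonces:

```python
import random

N_BITS = 16
MASK = (1 << N_BITS) - 1

def make_cipher(key):
    """Hypothetical stand-in for an n-bit block cipher: a seeded bijection.
    Returns (encrypt, decrypt) as table lookups."""
    rng = random.Random(key)
    table = list(range(1 << N_BITS))
    rng.shuffle(table)
    inverse = [0] * (1 << N_BITS)
    for x, y in enumerate(table):
        inverse[y] = x
    return table.__getitem__, inverse.__getitem__

def hash_msg(kh, msg):
    """Placeholder keyed hash onto n bits (NOT a real universal hash)."""
    return hash((kh, msg)) & MASK

def ewcdm(k1, k2, kh, nonce, msg):
    """EWCDM (Cogliati-Seurin): E_k2(E_k1(N) ^ N ^ H_kh(M))."""
    e1, _ = make_cipher(k1)
    e2, _ = make_cipher(k2)
    return e2(e1(nonce) ^ nonce ^ hash_msg(kh, msg))

def dwcdm(k, kh, nonce, msg):
    """DWCDM (Datta et al.): E_k^{-1}(E_k(N) ^ N ^ H_kh(M)),
    i.e. the outer call uses the inverse of the same block cipher."""
    e, d = make_cipher(k)
    return d(e(nonce) ^ nonce ^ hash_msg(kh, msg))

print(hex(ewcdm(1, 2, 3, nonce=0x1234, msg=b"hello")))
print(hex(dwcdm(1, 3, nonce=0x1234, msg=b"hello")))
```

The single-key, inverse-call structure of DWCDM is what makes its analysis (and the mirror-theory machinery above) delicate.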
2021
TOSC
Atom: A Stream Cipher with Double Key Filter 📺
It has been common knowledge that, for a stream cipher to be secure against generic TMD tradeoff attacks, the size of its internal state in bits needs to be at least twice the length of its secret key. At FSE 2015, however, Armknecht and Mikhalev proposed the stream cipher Sprout, with a Grain-like architecture, whose internal state was equal in size to its secret key and yet resistant against TMD attacks. Although Sprout had other weaknesses, it germinated a sequence of stream cipher designs with short internal states, like Lizard and Plantlet. Both of these designs have had cryptanalytic results reported against them. In this paper, we propose the stream cipher Atom, which has an internal state of 159 bits and offers a security level of 128 bits. Atom uses two key filters simultaneously to thwart certain cryptanalytic attacks that have recently been reported against keystream generators. In addition, we found that our design is one of the smallest stream ciphers that offers this security level, and we prove in this paper that Atom resists all the attacks that have been proposed against stream ciphers so far in the literature. On the face of it, Atom also builds on the basic structure of the Grain family of stream ciphers. However, we argue that by including the additional key filter in the architecture of Atom, we can make it immune to the cryptanalytic advances proposed against stream ciphers in the recent cryptographic literature.
2021
TOSC
Orthros: A Low-Latency PRF 📺
We present Orthros, a 128-bit block pseudorandom function. It is designed with a primary focus on the latency of fully unrolled circuits. For this purpose, we adopt a parallel structure comprising two keyed permutations. The round function of each permutation is similar to that of Midori, a low-energy block cipher; however, we thoroughly revise it to reduce latency, and introduce different rounds to significantly improve cryptographic strength within a small number of rounds. We provide a comprehensive, dedicated security analysis. In hardware, Orthros achieves the lowest latency among the state-of-the-art low-latency primitives. For example, using the STM 90nm library, Orthros achieves a minimum latency of around 2.4 ns, while other constructions like PRINCE, Midori-128 and QARMA9-128-σ0 achieve 2.56 ns, 4.10 ns, and 4.38 ns respectively.
2021
TOSC
MOE: Multiplication Operated Encryption with Trojan Resilience 📺
In order to lower costs, the fabrication of Integrated Circuits (ICs) is increasingly delegated to offshore contract foundries, exposing them to malicious modifications known as hardware Trojans. Recent works have demonstrated that a strong form of Trojan resilience can be obtained from untrusted chips by exploiting secret sharing and Multi-Party Computation (MPC), yet with significant cost overheads. In this paper, we study the possibility of building a symmetric cipher enabling similar guarantees in a more efficient manner. To reach this goal, we exploit a simple round structure mixing a modular multiplication and a multiplication with a binary matrix. Besides being motivated as a new block cipher design for Trojan resilience, our research also exposes the cryptographic properties of modular multiplication, which is of independent interest.
2021
TOSC
Misuse-Free Key-Recovery and Distinguishing Attacks on 7-Round Ascon 📺
Being one of the winning algorithms of the CAESAR competition and currently a second-round candidate of the NIST lightweight cryptography standardization project, the authenticated encryption scheme Ascon (designed by Dobraunig, Eichlseder, Mendel, and Schläffer) has withstood extensive self- and third-party cryptanalysis. The best known attack on Ascon could only penetrate up to 7 (out of 12) rounds, due to Li et al. (ToSC Vol I, 2017). However, it violates the data limit of 2^64 blocks per key specified by the designers. Moreover, the best known distinguishers of Ascon in the AEAD context reach only 6 rounds. To fill these gaps, we revisit the security of 7-round Ascon in the nonce-respecting setting without violating the data limit as specified in the design. First, we introduce a new superpoly-recovery technique named partial polynomial multiplication, in which computations take place between the so-called degree-d homogeneous parts of the involved Boolean functions for a 2d-dimensional cube. We apply this method to 7-round Ascon and present several key recovery attacks. Our best attack can recover the 128-bit secret key with a time complexity of about 2^123 7-round Ascon permutations and requires 2^64 data and 2^101 bits of memory. Also, based on division properties, we identify several 60-dimensional cubes whose superpolies are constant zero after 7 rounds. We further improve the cube distinguishers for 4, 5 and 6 rounds. Although our results are far from threatening the security of the full 12-round Ascon, they provide new insights into the security analysis of Ascon.
2021
TOSC
Towards Key-recovery-attack Friendly Distinguishers: Application to GIFT-128 📺
When analyzing a block cipher, the first step is to search for valid distinguishers, for example differential trails in differential cryptanalysis and linear trails in linear cryptanalysis. A distinguisher is advantageous if it can be utilized to attack more rounds and the number of key bits involved in the key-recovery process is small, as this leads to a long attack with low complexity. In this article, we propose a two-step strategy to search for such advantageous distinguishers. This strategy is inspired by the intuition that if a differential is advantageous only when some properties are satisfied, then we can predefine constraints describing these properties and search for differentials within the resulting smaller set. As an application, we use our strategy to analyze GIFT-128, which was proposed at CHES 2017. Based on some 20-round differentials, we give the first 27-round differential attack on GIFT-128, which covers one more round than the best previous result. Also, based on two 17-round linear trails, we give the first linear hull attack on GIFT-128, which covers 22 rounds. In addition, we give some results on the two GIFT-128-based AEADs GIFT-COFB and SUNDAE-GIFT.
2021
TOSC
Exploiting Weak Diffusion of Gimli: Improved Distinguishers and Preimage Attacks 📺
The Gimli permutation proposed at CHES 2017 was designed for cross-platform performance. One main strategy to achieve this goal is to utilize a sparse linear layer (Small-Swap and Big-Swap), which occurs only every two rounds. In addition, the round constant addition occurs only every four rounds and affects only one 32-bit word. These two facts have recently been exploited to construct a distinguisher for the full Gimli permutation with time complexity 2^64. By utilizing a new property of the SP-box, we demonstrate that the time complexity of the full-round distinguisher can be further reduced to 2^52 while a significant bias still remains. Moreover, for the 18-round Gimli permutation, we can construct a distinguisher with only 2 queries. Apart from the permutation itself, the weak diffusion can also be utilized to accelerate preimage attacks on reduced Gimli-Hash and Gimli-XOF-128 with a divide-and-conquer method. As a consequence, the preimage attacks on reduced Gimli-Hash and Gimli-XOF-128 can reach up to 5 rounds and 9 rounds, respectively. Since Gimli is among the second-round candidates in NIST's Lightweight Cryptography Standardization process, we expect that our analysis can further advance the understanding of Gimli. To the best of our knowledge, these distinguishing attacks and preimage attacks are the best so far.
2021
TOSC
Improved Preimage Attacks on 4-Round Keccak-224/256 📺
This paper provides an improved preimage attack on standard 4-round Keccak-224/256. The method is based on the work pioneered by Li and Sun, who designed a linear structure of 2-round Keccak-224/256 with 194 degrees of freedom left. By partially linearizing 17 output bits through the last 2 rounds, they reach a complexity of 2^207/2^239 for finding a 4-round preimage. Under their strategy, however, those 17 bits are regarded as independent bits, and the linearization consumes a great number of degrees of freedom. Inspired by their approach, we improve the partial linearization method so that multiple output bits can reuse some common degrees of freedom. As a result, the complexity of the preimage attack on 4-round Keccak-224/256 can be decreased to 2^192/2^218, both of which are the best known theoretical preimage attacks so far. To support the theoretical analysis, we apply our strategy to a 64-bit partial preimage attack within practical complexity. It is remarkable that this partial linearization method can be directly applied if a better linear structure with more freedom left is proposed.
2021
TOSC
Algebraic Collision Attacks on Keccak 📺
In this paper, we analyze the collision resistance of the two smallest versions of Keccak, which have widths of 200 and 400 bits, respectively. We show that algebraic and linearization techniques can serve collision cryptanalysis by exploiting some interesting properties of the linear part of the round function of Keccak. We present an attack on the Keccak versions that could be used in lightweight cryptography, reduced to two rounds. For Keccak[40, 160] (resp. Keccak[72, 128] and Keccak[144, 256]), our attack has a computational complexity of 2^73 (resp. 2^52.5 and 2^101.5) Keccak calls.
2021
TOSC
Accelerating the Search of Differential and Linear Characteristics with the SAT Method 📺
The introduction of the automatic search has boosted the cryptanalysis of symmetric-key primitives to some degree. However, the performance of the automatic search is not always satisfactory when searching for long trails or for ciphers with large state sizes. Compared with the extensive attention paid to enhancing the search with the mixed integer linear programming (MILP) method, few works address the acceleration of the automatic search with the Boolean satisfiability problem (SAT) or satisfiability modulo theories (SMT) method. This paper intends to fill this gap. Firstly, using the additional encoding variables of the sequential counter circuit for the original objective function in the standard SAT method, we put forward a new encoding method to convert Matsui's bounding conditions into Boolean formulas. This approach does not rely on new auxiliary variables and significantly reduces the number of clauses consumed when integrating multiple bounding conditions into one SAT problem. Then, we evaluate the accelerating effect of the novel encoding method under different sets of bounding conditions. Based on the observations and experience gained in these tests, we propose a strategy for creating sets of bounding conditions that are likely to achieve significant speed-ups. The new idea is applied to search for optimal differential and linear characteristics for multiple ciphers. For PRESENT, GIFT-64, RECTANGLE, LBlock, TWINE, and some versions of the SIMON and SPECK families of block ciphers, we obtain the complete bounds (full rounds) on the number of active S-boxes, the differential probability, and the linear bias. The acceleration method is also employed to speed up the search for related-key differential trails of GIFT-64. Based on the newly identified 18-round distinguisher with probability 2^-58, we launch a 26-round key-recovery attack with 2^60.96 chosen plaintexts. To our knowledge, this is the longest attack on GIFT-64. Lastly, we note that this attack result is far from threatening the security of GIFT-64, since the designers recommended doubling the number of rounds in the related-key attack setting.
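To make the encoding building block concrete: the paper starts from the auxiliary variables of a sequential counter circuit for the objective function. The sketch below is our own illustration of the classic Sinz-style sequential-counter CNF for a cardinality bound, the baseline on which such objective encodings rest, not the paper's new Matsui-condition encoding; all identifiers are ours:

```python
from itertools import product

def at_most_k_seq(xs, k, top):
    """Sinz-style sequential-counter CNF for sum(xs) <= k.
    xs are DIMACS variable ids, top is the largest id already in use;
    negative integers denote negated literals. Returns (clauses, new_top).
    Auxiliary s[i][j] encodes 'at least j+1 of xs[0..i] are true'."""
    n = len(xs)
    s = [[top + i * k + j + 1 for j in range(k)] for i in range(n - 1)]
    cls = [[-xs[0], s[0][0]]] + [[-s[0][j]] for j in range(1, k)]
    for i in range(1, n - 1):
        cls += [[-xs[i], s[i][0]], [-s[i - 1][0], s[i][0]]]
        for j in range(1, k):
            cls += [[-xs[i], -s[i - 1][j - 1], s[i][j]],
                    [-s[i - 1][j], s[i][j]]]
        cls += [[-xs[i], -s[i - 1][k - 1]]]
    cls += [[-xs[n - 1], -s[n - 2][k - 1]]]
    return cls, top + (n - 1) * k

# Exhaustive sanity check: with auxiliaries projected out, the CNF accepts
# exactly the assignments having at most k true x-variables.
n, k = 4, 2
cls, top = at_most_k_seq(list(range(1, n + 1)), k, n)
for xbits in product((0, 1), repeat=n):
    def satisfied(aux):
        val = xbits + aux  # truth value of variable i is val[i - 1]
        return all(any((lit > 0) == bool(val[abs(lit) - 1]) for lit in c)
                   for c in cls)
    sat = any(satisfied(aux) for aux in product((0, 1), repeat=top - n))
    assert sat == (sum(xbits) <= k)
```

The point exploited by the paper is that the counter's s-variables already measure partial weights, so Matsui-style bounding conditions can be phrased over them without introducing fresh auxiliaries.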
2021
TOSC
Quantum Free-Start Collision Attacks on Double Block Length Hashing with Round-Reduced AES-256 📺
Recently, Hosoyamada and Sasaki (EUROCRYPT 2020) and Dong et al. (ASIACRYPT 2020) proposed quantum collision attacks against the AES-like hashing modes AES-MMO and AES-MP. Their collision attacks are based on the quantum version of the rebound attack technique, exploiting differential trails whose probabilities are too low to be useful in the classical setting but large enough in the quantum setting. In this work, we present dedicated quantum free-start collision attacks on Hirose's double block length compression function instantiated with AES-256, namely HCF-AES-256. The best publicly known classical attack against HCF-AES-256 covers up to 9 out of 14 rounds. We present a new 10-round differential trail for HCF-AES-256 with probability 2^-160, and use it to find collisions with a quantum version of the rebound attack. Our attack succeeds with a time complexity of 2^85.11 and requires 2^16 qRAM in the quantum-attack setting where an attacker can make only classical queries to the oracle and perform offline computations. We also present a quantum free-start collision attack on HCF-AES-256 with a time complexity of 2^86.07, which outperforms the generic quantum collision attack of Chailloux, Naya-Plasencia, and Schrottenloher (ASIACRYPT 2017) in a model where large qRAM is not available.
2021
TOSC
Provably Quantum-Secure Tweakable Block Ciphers 📺
Recent results on quantum cryptanalysis show that some symmetric key schemes can be broken in polynomial time even if they are proven to be secure in the classical setting. Liskov, Rivest, and Wagner showed that secure tweakable block ciphers can be constructed from secure block ciphers in the classical setting. However, Kaplan et al. showed that their scheme can be broken by polynomial-time quantum superposition attacks, even if the underlying block ciphers are quantum-secure. Since then, it has remained open whether there exists a mode of block ciphers that builds quantum-secure tweakable block ciphers. This paper settles the problem in the reduction-based provable security paradigm. We show the first design of quantum-secure tweakable block ciphers based on quantum-secure block ciphers, and present a provable security bound. Our construction is simple, and when instantiated with a quantum-secure n-bit block cipher, it is secure against attacks that query arbitrary quantum superpositions of plaintexts and tweaks up to O(2^{n/6}) quantum queries. Our security proofs use the compressed oracle technique introduced by Zhandry. More precisely, we use an alternative formalization of the technique introduced by Hosoyamada and Iwata.
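For context, the classical Liskov-Rivest-Wagner construction referenced above is commonly described as E_K(M ⊕ h(T)) ⊕ h(T), with h an XOR-universal hash of the tweak; this is the classically secure mode that Kaplan et al. broke with superposition (Simon's algorithm) queries. The Python sketch below is our toy rendering of that classical baseline, not the quantum-secure construction of this paper; the seeded-shuffle "cipher" and the multiplicative tweak hash are hypothetical stand-ins:

```python
import random

N_BITS = 16
MASK = (1 << N_BITS) - 1

def toy_block_cipher(key):
    """Hypothetical stand-in for a block cipher: a seeded random bijection."""
    rng = random.Random(key)
    table = list(range(1 << N_BITS))
    rng.shuffle(table)
    return table.__getitem__

def h(h_key, tweak):
    """Toy tweak hash; a real instantiation must be XOR-universal."""
    return (h_key * tweak) & MASK

def lrw_encrypt(ek, h_key, tweak, msg):
    """LRW-style tweakable encryption: E_K(M ^ h(T)) ^ h(T)."""
    mask = h(h_key, tweak)
    return ek(msg ^ mask) ^ mask

ek = toy_block_cipher(42)
print(hex(lrw_encrypt(ek, h_key=0x1337, tweak=5, msg=0xBEEF)))
```

The superposition attack exploits the fact that the tweak-dependent mask is XORed both before and after a single call to E_K, which yields a hidden-period structure; avoiding such structure is what the O(2^{n/6})-secure construction above must accomplish.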
2021
TOSC
Resistance of SNOW-V against Fast Correlation Attacks 📺
SNOW-V is a new member of the SNOW family of stream ciphers, hoping to be competitive in the 5G mobile communication system. In this paper, we study the resistance of SNOW-V against bitwise fast correlation attacks by constructing bitwise linear approximations. First, we propose and summarize some efficient algorithms using slice-like techniques to compute the bitwise linear approximations of certain types of composition functions composed of basic operations like ⊞, ⊕, permutations, and S-boxes, which have been widely used in word-oriented stream ciphers such as SNOW-like ciphers. Then, using these algorithms, we find a number of stronger linear approximations for the FSM of the two variants of SNOW-V given in the design document, i.e., SNOW-V σ0 and SNOW-V⊞8, ⊞8. For SNOW-V σ0, where there is no byte-wise permutation, we find some bitwise linear approximations of the FSM with SEI (Squared Euclidean Imbalance) around 2^-37.34, and mount a bitwise fast correlation attack with time complexity 2^251.93 and memory complexity 2^244, given 2^103.83 keystream outputs, which greatly improves the results in the design document. For SNOW-V⊞8, ⊞8, where both of the two 32-bit adders in the FSM are replaced by 8-bit adders, we find our best bitwise linear approximations of the FSM with SEI 2^-174.14, while the best byte-wise linear approximation in the design document of SNOW-V has SEI 2^-214.80. Finally, we study the security of a closer variant of SNOW-V, denoted by SNOW-V⊞32, ⊞8, where only the 32-bit adder used for updating the first register is replaced by an 8-bit adder, while everything else remains identical. For SNOW-V⊞32, ⊞8, we derive many mask tuples yielding bitwise linear approximations of the FSM with SEI larger than 2^-184. Using these linear approximations, we mount a fast correlation attack with time complexity 2^377.01 and memory complexity 2^363, given 2^253.73 keystream outputs. Note that neither of our attacks threatens the security of SNOW-V. We hope our research can further help in understanding bitwise linear approximation attacks as well as the structure of SNOW-like stream ciphers.
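To illustrate the basic object these algorithms compute, here is a brute-force Python baseline of ours (not the paper's slice-like algorithms, which exist precisely because brute force does not scale to 32-bit words): the correlation of a bitwise linear approximation of modular addition, whose square gives the SEI of that single approximation:

```python
def parity(x):
    return bin(x).count("1") & 1

def correlation_add(u, v, w, n):
    """Correlation of the bitwise linear approximation
    u.x ^ v.y = w.(x + y mod 2^n) of n-bit modular addition."""
    mod = 1 << n
    total = sum((-1) ** (parity(u & x) ^ parity(v & y)
                         ^ parity(w & ((x + y) % mod)))
                for x in range(mod) for y in range(mod))
    return total / mod ** 2

# Bit 0 of an addition is just XOR, so the LSB masks give correlation 1 ...
assert correlation_add(1, 1, 1, 8) == 1.0
# ... while higher bits involve carries: here one carry bit halves it.
assert correlation_add(2, 2, 2, 8) == 0.5
```

Replacing 32-bit adders by 8-bit ones, as in the SNOW-V⊞8, ⊞8 variant, shortens the carry chains and is exactly what changes the strength of such approximations.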
2021
TOSC
On the Relationships between Different Methods for Degree Evaluation 📺
In this paper, we compare several non-tight degree evaluation methods, i.e., Boura and Canteaut's formula, Carlet's formula, as well as Liu's numeric mapping and the division property proposed by Todo, and hope to find the best of these methods for practical applications. Specifically, for substitution-permutation-network (SPN) ciphers, we first deeply explore the relationships between the division property of an S-box and its algebraic properties (e.g., the algebraic degree of its inverse). Based on these findings, we prove theoretically that the division property is never worse than Boura and Canteaut's and Carlet's formulas, and we also verify experimentally that the division property can indeed give a better bound than the latter two methods. In addition, for ciphers based on nonlinear feedback shift registers (NFSRs), based on the propagation of the division property and the core idea of numeric mapping, we give a strict proof that the degree estimated using the division property is never greater than that given by numeric mapping. Moreover, our experimental results on Trivium and Kreyvium indicate that the division property actually derives a much better bound than numeric mapping. To the best of our knowledge, this is the first formal discussion of the relationships between the division property and other degree evaluation methods; we present the first theoretical proof, with experimental verification, that the division property is the optimal method among these in terms of the accuracy of the upper bounds on the algebraic degree.
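As a ground-truth baseline against which any such upper bound can be checked on small examples (our sketch, not one of the methods compared in the paper), the exact algebraic degree of a small Boolean function can be computed from its ANF via the Möbius transform:

```python
def algebraic_degree(truth_table, n):
    """Exact algebraic degree of an n-variable Boolean function.
    The in-place Moebius transform turns the truth table into ANF
    coefficients; the degree is the maximum Hamming weight of a
    monomial index u with coefficient 1."""
    anf = list(truth_table)
    for i in range(n):
        for x in range(1 << n):
            if x & (1 << i):
                anf[x] ^= anf[x ^ (1 << i)]
    return max((bin(u).count("1") for u in range(1 << n) if anf[u]),
               default=0)

# f(x0, x1, x2) = x0*x1 ^ x2 has algebraic degree 2.
f = [((x & 1) & ((x >> 1) & 1)) ^ ((x >> 2) & 1) for x in range(8)]
assert algebraic_degree(f, 3) == 2
```

Any of the bounding methods discussed above, applied to such a toy function, must return a value no smaller than this exact degree.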
2020
ASIACRYPT
Twisted-PHS: Using the Product Formula to Solve Approx-SVP in Ideal Lattices 📺
Approx-SVP is a well-known hard problem on lattices, which asks to find short vectors on a given lattice, but its variant restricted to ideal lattices (which correspond to ideals of the ring of integers $\mathcal{O}_{K}$ of a number field $K$) is still not fully understood. For a long time, the best known algorithm to solve this problem on ideal lattices was the same as for arbitrary lattices. But recently, a series of works has tended to show that solving this problem could be easier in ideal lattices than in arbitrary ones, in particular in the quantum setting. Our main contribution is to propose a new ``twisted'' version of the PHS (by Pellet-Mary, Hanrot and Stehlé, 2019) algorithm, which we call Twisted-PHS. As a minor contribution, we also propose several improvements of the PHS algorithm. On the theoretical side, we prove that our Twisted-PHS algorithm performs at least as well as the original PHS algorithm. On the practical side, though, we provide a full implementation of our algorithm which suggests that much better approximation factors are achieved, and that the given lattice bases are a lot more orthogonal than the ones used in PHS. To our knowledge, this is the first time that this type of algorithm has been completely implemented and tested for fields of degree up to 60.
2020
ASIACRYPT
Succinct Diophantine-Satisfiability Arguments 📺
A Diophantine equation is a multivariate polynomial equation with integer coefficients, and it is satisfiable if it has a solution with all unknowns taking integer values. Davis, Putnam, Robinson and Matiyasevich showed that the general Diophantine satisfiability problem is undecidable (giving a negative answer to Hilbert's tenth problem), but it is nevertheless possible to argue in zero-knowledge the knowledge of a solution, if a solution is known to the prover. We provide the first succinct honest-verifier zero-knowledge argument for the satisfiability of Diophantine equations with a communication complexity and a round complexity that grow logarithmically in the size of the polynomial equation. The security of our argument relies on standard assumptions about hidden-order groups. As the argument requires committing to integers, we introduce a new integer commitment scheme that has much smaller parameters than Damgård and Fujisaki's scheme. We finally show how to succinctly argue knowledge of solutions to several NP-complete problems and cryptographic problems by encoding them as Diophantine equations.
2020
ASIACRYPT
On the Adaptive Security of MACs and PRFs 📺
We consider the security of two of the most commonly used cryptographic primitives--message authentication codes (MACs) and pseudorandom functions (PRFs)--in a multi-user setting with adaptive corruption. Whereas it is well known that any secure MAC or PRF is also multi-user secure under adaptive corruption, the trivial reduction induces a security loss that is linear in the number of users. Our main result shows that black-box reductions from "standard" assumptions cannot be used to provide a tight, or even a linear-preserving, security reduction for adaptively multi-user secure deterministic stateless MACs, and thus also PRFs. In other words, a security loss that grows with the number of users is necessary for any such black-box reduction.
2020
ASIACRYPT
Individual Simulations 📺
We develop an individual simulation technique that explicitly makes use of particular properties/structures of a given adversary's functionality. Using this simulation technique, we obtain the following results. 1. We construct the first protocols that break previous black-box barriers under the standard hardness of factoring, both of which are polynomial-time simulatable against all a-priori bounded polynomial-size distinguishers: a) a two-round selective-opening-secure commitment scheme; b) a three-round concurrent zero-knowledge and concurrent witness-hiding argument for NP in the bare public-key model. 2. We present a simpler two-round weak zero-knowledge and witness-hiding argument for NP in the plain model under the sub-exponential hardness of factoring. Our technique also yields a significantly simpler proof that existing distinguisher-dependent simulatable zero-knowledge protocols are also polynomial-time simulatable against all distinguishers of a-priori bounded polynomial size. The core conceptual idea underlying our individual simulation technique is the observation that nearly optimal extractors exist for all hard distributions: for any NP-instance(s) sampling algorithm, there exists a polynomial-size witness extractor (depending on the sampler's functionality) that almost outperforms any circuit of a-priori bounded polynomial size in terms of success probability.
2020
ASIACRYPT
Circuit Amortization Friendly Encodings and their Application to Statistically Secure Multiparty Computation 📺
At CRYPTO 2018, Cascudo et al. introduced Reverse Multiplication Friendly Embeddings (RMFEs). These are a mechanism to compute $\delta$ parallel evaluations of the same arithmetic circuit over a field $\mathbb{F}_q$ at the cost of a single evaluation of that circuit in $\mathbb{F}_{q^d}$, where $\delta < d$. Due to this inequality, RMFEs are a useful tool when protocols need to work over $\mathbb{F}_{q^d}$ but one is only interested in computing over $\mathbb{F}_q$. In this work we introduce Circuit Amortization Friendly Encodings (CAFEs), which generalize RMFEs while having concrete efficiency in mind. For a Galois ring $R = GR(2^k,d)$, CAFEs allow computing certain circuits over $\mathbb{Z}_{2^k}$ at the cost of a single secure multiplication in $R$. We present three CAFE instantiations, which we apply to the protocol for MPC over $\mathbb{Z}_{2^k}$ via Galois rings by Abspoel et al. (TCC 2019). Our protocols allow for efficient switching between the different CAFEs, as well as between computation over $GR(2^k,d)$ and $\mathbb{F}_{2^{d}}$ in a way that preserves the CAFE in both rings. This adaptability leads to efficiency gains for e.g. machine learning applications, which can be represented as highly parallel circuits over $\mathbb{Z}_{2^k}$ followed by bit-wise operations. From an implementation of our techniques, we estimate that an SVM can be evaluated on 250 images in parallel up to $7\times$ more efficiently than with the protocols from Abspoel et al. (TCC 2019).
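For context, the RMFE notion that CAFEs generalize can be stated as follows (this is the standard definition of Cascudo et al.; the CAFE notion adapts the idea to Galois rings): a $(\delta,d)$-RMFE over $\mathbb{F}_q$ is a pair of $\mathbb{F}_q$-linear maps
$$\phi:\mathbb{F}_q^{\delta}\to\mathbb{F}_{q^d},\qquad \psi:\mathbb{F}_{q^d}\to\mathbb{F}_q^{\delta} \quad\text{such that}\quad \psi\big(\phi(\vec{x})\cdot\phi(\vec{y})\big)=\vec{x}\ast\vec{y}$$
for all $\vec{x},\vec{y}\in\mathbb{F}_q^{\delta}$, where $\ast$ denotes the component-wise product. A single multiplication in $\mathbb{F}_{q^d}$ thus yields $\delta$ parallel multiplications in $\mathbb{F}_q$.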
2020
ASIACRYPT
Unbounded Dynamic Predicate Compositions in ABE from Standard Assumptions 📺
At Eurocrypt'19, Attrapadung presented several transformations that dynamically compose a set of attribute-based encryption (ABE) schemes for simpler predicates into a new ABE scheme for more expressive predicates. Due to the powerful unbounded and modular nature of his compositions, many new ABE schemes can be obtained in a systematic manner (including those that resolved some open problems at the time). However, his approach heavily relies on so-called $q$-type assumptions, which are not standard. Devising such powerful compositions from standard assumptions was left as an important open problem. In this paper, we present a new framework for constructing ABE schemes that allow unbounded and dynamic predicate compositions among them, and show that the adaptive security of these composed ABE schemes is preserved by relying only on the standard matrix Diffie-Hellman (MDDH) assumption. This resolves the open problem posed by Attrapadung. As for applications, we obtain various ABE schemes that are the first such instantiations of their kind from standard assumptions. These include the following adaptively secure \emph{large-universe} ABEs for Boolean formulae under MDDH: - The first completely unbounded monotone key-policy (KP)/ciphertext-policy (CP) ABE. Such an ABE was only recently proposed, and only in the KP and \emph{small-universe} flavor (Kowalczyk and Wee, Eurocrypt'19). - The first completely unbounded non-monotone KP/CP-ABE. In particular, our ABEs support a new type of non-monotonicity that subsumes the two previous types of non-monotonicity, namely, that of Ostrovsky et al. (CCS'07) and that of Okamoto and Takashima (CRYPTO'10). - The first non-monotone KP and CP-ABE with constant-size ciphertexts and secret keys, respectively. - The first monotone KP and CP-ABE with constant-size secret keys and ciphertexts, respectively. At the core of our framework lies a new \emph{partially symmetric} design of the core 1-key 1-ciphertext oracle component called Key Encoding Indistinguishability, which exploits the symmetry so as to obtain compositions.
2020
ASIACRYPT
Simpler Statistically Sender Private Oblivious Transfer from Ideals of Cyclotomic Integers 📺
We present a two-message oblivious transfer protocol achieving statistical sender privacy and computational receiver privacy based on the RLWE assumption for cyclotomic number fields. This work improves upon prior lattice-based statistically sender-private oblivious transfer protocols by reducing the total communication between parties by a factor of $O(n \log q)$ for the transfer of length-$O(n)$ messages. Prior work of Brakerski and Döttling uses transference theorems to show that either a lattice or its dual must have short vectors, the existence of which guarantees lossy encryption for encodings with respect to that lattice, and therefore statistical sender privacy. In the case of ideal lattices from embeddings of cyclotomic integers, the existence of one short vector implies the existence of many, and therefore encryption with respect to either a lattice or its dual is guaranteed to ``lose'' more information about the message than can be ensured in the case of general lattices. This additional structure of ideals of cyclotomic integers allows for efficiency improvements beyond those that are typical when moving from the generic to the ideal lattice setting, resulting in smaller message sizes for sender and receiver, as well as a protocol that is simpler to describe and analyze.
2020
ASIACRYPT
Efficient and Round-Optimal Oblivious Transfer and Commitment with Adaptive Security 📺
We construct the most efficient two-round adaptively secure bit-OT in the Common Random String (CRS) model. The scheme is UC secure under the Decisional Diffie-Hellman (DDH) assumption. It incurs O(1) exponentiations and sends O(1) group elements, whereas the state of the art requires O(k^2) exponentiations and communicates poly(k) bits, where k is the computational security parameter. Along the way, we obtain several other efficient UC-secure OT protocols under DDH: - The most efficient two-round adaptive string-OT protocol to date, assuming a global programmable random oracle. Furthermore, the protocol can be made non-interactive in the simultaneous message setting, assuming random inputs for the sender. - The first two-round string-OT with amortized constant exponentiation and communication overhead, secure in the global observable random oracle model. - The first two-round receiver-equivocal string-OT in the CRS model that incurs constant computation and communication overhead. We also obtain the first non-interactive adaptive string UC-commitment in the CRS model which incurs sublinear communication overhead in the security parameter. Specifically, we commit to polylog(k) bits while communicating O(k) bits. Moreover, it is additively homomorphic. We can also extend our results to the single CRS model where multiple sessions share the same CRS. As a corollary, we obtain a two-round adaptively secure MPC protocol in this model.
2020
ASIACRYPT
Two-Pass Authenticated Key Exchange with Explicit Authentication and Tight Security 📺
We propose a generic construction of a 2-pass authenticated key exchange (AKE) scheme with explicit authentication from key encapsulation mechanism (KEM) and signature (SIG) schemes. We improve the security model due to Gjosteen and Jager [Crypto 2018] to a stronger one. In the strong model, if a replayed message is accepted by some user, the authentication of AKE is broken. We define a new security notion named ``IND-mCPA with adaptive reveals'' for KEM. When the underlying KEM has such security and the SIG has unforgeability with adaptive corruptions, our construction of AKE equipped with counters as states is secure in the strong model, and stateless AKE without counters is secure in the traditional model. We also present a KEM possessing tight ``IND-mCPA security with adaptive reveals'' from the Computational Diffie-Hellman assumption in the random oracle model. When the generic construction of AKE is instantiated with this KEM and the available SIG by Gjosteen and Jager [Crypto 2018], we obtain the first practical 2-pass AKE with tight security and explicit authentication. In addition, the integration of the tightly IND-mCCA secure KEM (derived from PKE by Han et al. [Crypto 2019]) and the tightly secure SIG by Bader et al. [TCC 2015] results in the first tightly secure 2-pass AKE with explicit authentication in the standard model.
2020
ASIACRYPT
Succinct and Adaptively Secure ABE for Arithmetic Branching Programs from k-Lin 📺
We present succinct and adaptively secure attribute-based encryption (ABE) schemes for arithmetic branching programs, based on k-Lin in pairing groups. Our key-policy ABE scheme has ciphertexts of constant size, independent of the length of the attributes, and our ciphertext-policy ABE scheme has secret keys of constant size. Our schemes improve upon the recent succinct ABE schemes of [Tomida and Attrapadung, ePrint '20], which only handle Boolean formulae. All other prior succinct ABE schemes either achieve only selective security or rely on q-type assumptions. Our schemes are obtained through a general and modular approach that combines a public-key inner-product functional encryption satisfying a new security notion called gradual simulation security and an information-theoretic randomized encoding scheme called an arithmetic key garbling scheme.
2020
ASIACRYPT
CCA Updatable Encryption Against Malicious Re-Encryption Attacks 📺
Updatable encryption (UE) is an attractive primitive, which allows the secret key of outsourced encrypted data to be updated to a fresh one periodically. Several elegant works exist studying various security properties. We notice several major issues in existing security models of (ciphertext-dependent) updatable encryption, in particular regarding integrity and CCA security. The adversary in these models is only allowed to request the server to re-encrypt {\em honestly} generated ciphertexts, while in practice an attacker could try to inject arbitrary ciphertexts into the server as she wishes. Such malformed ciphertexts could be updated and leveraged by the adversary and cause serious security issues. In this paper, we fill the gap and strengthen the security definitions in multiple aspects: most importantly, our integrity and CCA security models remove the restriction in previous models and achieve standard notions of integrity and CCA security in the setting of updatable encryption. Along the way, we refine the security model to capture post-compromise security and enhance the re-encryption indistinguishability to the CCA style. Guided by the new models, we provide a novel construction, ReCrypt, which satisfies our strengthened security definitions. Its technical building block, a homomorphic hash from a group, may be of independent interest. We also study the relations among security notions; a bit surprisingly, the folklore result in authenticated encryption that IND-CPA plus ciphertext integrity implies IND-CCA security does {\em not} hold for ciphertext-dependent updatable encryption.
2020
ASIACRYPT
Efficient Fully Secure Computation via Distributed Zero-Knowledge Proofs 📺
Secure computation protocols enable mutually distrusting parties to compute a function of their private inputs while revealing nothing but the output. Protocols with {\em full security} (also known as {\em guaranteed output delivery}) in particular protect against denial-of-service attacks, guaranteeing that honest parties receive a correct output. This feature can be realized in the presence of an honest majority, and significant research effort has gone toward attaining full security with good asymptotic and concrete efficiency. We present an efficient protocol for {\em any constant} number of parties $n$, with full security against $t<n/2$ corrupted parties, that makes black-box use of a pseudorandom generator. Our protocol evaluates an arithmetic circuit $C$ over a finite ring $R$ (either a finite field $R=\mathbb{F}$ or the ring $R=\mathbb{Z}_{2^k}$) with communication complexity of $\frac{3t}{2t+1}S + o(S)$ $R$-elements per party, where $S$ is the number of multiplication gates in $C$ (namely, $<1.5$ elements per party per gate). This matches the best known protocols for the semi-honest model up to the sublinear additive term; in other words, full security comes at the {\em same amortized communication} cost as semi-honest security. For a small number of parties $n$, this improves over a recent protocol of Goyal {\em et al.} (Crypto 2020) by a constant factor for circuits over large fields, and by at least an $\Omega(\log n)$ factor for Boolean circuits or circuits over rings. Our techniques include new methods for applying the sublinear-communication distributed zero-knowledge proofs of Boneh {\em et al.} (Crypto 2019), for both distributed verifiers {\em and} provers, to compile semi-honest protocols into fully secure ones in the more challenging case of $t>1$ corrupted parties. Our protocol relies on {\em replicated secret sharing} to minimize communication and simplify the mechanism for achieving full security; this results in computational cost that scales exponentially with $n$. Our main protocol builds on a new honest-majority protocol for verifying the correctness of multiplication triples by making {\em general} use of distributed zero-knowledge proofs. As a secondary contribution, we show that similar techniques can be used to compile the best known honest-majority protocols for an arbitrary (super-constant) number of semi-honest parties into ones that achieve {\em security with abort} against malicious parties, with sublinear additive cost. While this compiler only achieves the weaker notion of security with abort, it applies to any linear secret-sharing scheme and provides a conceptually simpler, more general, and more efficient alternative to previous protocols from the literature. In particular, it can be combined with the Fiat-Shamir heuristic to simultaneously achieve logarithmic communication complexity and constant round complexity.
2020
ASIACRYPT
Practical Exact Proofs from Lattices: New Techniques to Exploit Fully-Splitting Rings 📺
We propose a lattice-based zero-knowledge proof system for exactly proving knowledge of a ternary solution $\vec{s} \in \{-1,0,1\}^n$ to a linear equation $A\vec{s}=\vec{u}$ over $\mathbb{Z}_q$, which improves upon the protocol by Bootle, Lyubashevsky and Seiler (CRYPTO 2019) by producing proofs that are shorter by a factor of $7.5$. At the core lies a technique that utilizes the module-homomorphic BDLOP commitment scheme (SCN 2018) over the fully splitting cyclotomic ring $\mathbb{Z}_q[X]/(X^d + 1)$ to prove scalar products with the NTT vector of a secret polynomial.
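The algebraic fact underlying "exact" ternarity proofs is that $s_i\in\{-1,0,1\}$ exactly when $s_i(s_i-1)(s_i+1)=0$. The following toy sanity check of the proved relation is only illustrative (it is not the protocol, whose BDLOP commitments and NTT machinery do not fit in a few lines; all parameters are hypothetical):

    import numpy as np

    q, n, m = 8380417, 8, 6                 # toy parameters, illustrative only
    rng = np.random.default_rng(1)
    s = rng.integers(-1, 2, size=n)         # ternary witness in {-1, 0, 1}^n
    A = rng.integers(0, q, size=(m, n))
    u = A @ s % q                           # public instance (A, u)

    # The prover must show *exactly* that A s = u over Z_q ...
    assert np.array_equal(A @ s % q, u)
    # ... and that every coordinate is ternary, via s_i(s_i-1)(s_i+1) = 0.
    assert all(int(si) * (int(si) - 1) * (int(si) + 1) == 0 for si in s)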
2020
ASIACRYPT
MOTIF: (Almost) Free Branching in GMW via Vector-Scalar Multiplication 📺
MPC functionalities are increasingly specified in high-level languages, where control-flow constructions such as conditional statements are extensively used. Today, concretely efficient MPC protocols are circuit-based and must evaluate all conditional branches at high cost to hide the taken branch. The Goldreich-Micali-Wigderson, or GMW, protocol is a foundational circuit-based technique that realizes MPC for p players and is secure against up to p-1 semi-honest corruptions. While GMW requires communication rounds proportional to the computed circuit's depth, it is effective in many natural settings. Our main contribution is MOTIF (Minimizing OTs for IFs), a novel GMW extension that evaluates conditional branches almost for free by amortizing Oblivious Transfers (OTs) across branches. That is, we simultaneously evaluate multiple independent AND gates, one gate from each mutually exclusive branch, by representing them as a single cheap vector-scalar multiplication (VS) gate. For 2PC with b branches, we simultaneously evaluate up to b AND gates using only two 1-out-of-2 OTs of b-bit secrets. This is a factor ~b improvement over the state-of-the-art 2b 1-out-of-2 OTs of 1-bit secrets. Our factor b improvement generalizes to the multiparty setting as well: b AND gates consume only p(p-1) 1-out-of-2 OTs of b-bit secrets. We implemented our approach and report its performance. For 2PC and a circuit with 16 branches, each comparing two length-65000 bitstrings, MOTIF outperforms standard GMW in terms of communication by ~9.4x. Total wall-clock time is improved by 4.1-9.2x depending on network settings. Our work is in the semi-honest model, tolerating all-but-one corruptions.
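The VS gate at the heart of this amortization can be sketched in a few lines: shares of a scalar bit s times shares of a b-bit vector v cost two 1-out-of-2 OTs of b-bit secrets, yielding shares of all b products at once. The 2PC sketch below is illustrative only, with an idealized OT oracle standing in for a real OT protocol:

    import secrets

    def ot(m0, m1, choice):
        # Idealized 1-out-of-2 oblivious transfer (a stand-in, not a protocol).
        return m1 if choice else m0

    b = 8                                    # branches; one AND gate per branch
    mask = (1 << b) - 1
    s0, s1 = secrets.randbits(1), secrets.randbits(1)   # shares of scalar bit s
    v0, v1 = secrets.randbits(b), secrets.randbits(b)   # shares of b-bit vector v

    r1 = secrets.randbits(b)                 # OT 1: P0 learns r1 ^ (s0 * v1)
    t0 = ot(r1, r1 ^ v1, s0)
    r0 = secrets.randbits(b)                 # OT 2: P1 learns r0 ^ (s1 * v0)
    t1 = ot(r0, r0 ^ v0, s1)

    z0 = ((s0 * v0) ^ t0 ^ r0) & mask        # P0's share of s * v
    z1 = ((s1 * v1) ^ t1 ^ r1) & mask        # P1's share of s * v
    assert (z0 ^ z1) == (((s0 ^ s1) * (v0 ^ v1)) & mask)  # b ANDs, two OTs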
2020
ASIACRYPT
Towards Classical Hardness of Module-LWE: The Linear Rank Case 📺
We prove that the module learning with errors (M-LWE) problem with arbitrary polynomial-sized modulus $p$ is \emph{classically} at least as hard as standard worst-case lattice problems, as long as the module rank $d$ is not smaller than the ring dimension $n$. Previous publications only showed the hardness under quantum reductions. We achieve this result in an analogous manner as in the case of the learning with errors (LWE) problem. First, we show the classical hardness of M-LWE with an exponential-sized modulus. In a second step, we prove the hardness of M-LWE using a binary secret. And finally, we provide a modulus reduction technique. The complete result applies to the class of power-of-two cyclotomic fields. However, several tools hold for more general classes of number fields and may be of independent interest.
2020
ASIACRYPT
Maliciously Secure Matrix Multiplication with Applications to Private Deep Learning 📺
Computing on data in a manner that preserves privacy is of growing importance. Multi-Party Computation (MPC) and Homomorphic Encryption (HE) are two cryptographic techniques for privacy-preserving computations. In this work, we develop efficient UC-secure multiparty protocols for matrix multiplications and two-dimensional convolutions. We build upon the SPDZ framework and integrate the state-of-the-art HE algorithms for matrix multiplication. Our protocol achieves communication cost linear in the input and output dimensions only, and not in the number of multiplication operations. We eliminate the ``triple sacrifice'' step of SPDZ to improve efficiency and simplify the zero-knowledge proofs. We implemented our protocols and benchmarked them against the SPDZ LowGear variant (Keller et al., Eurocrypt'18). For multiplying two square matrices of size 128, we reduce the communication cost from 1.54 GB to 12.46 MB, an improvement of over two orders of magnitude that only grows with larger matrix sizes. For evaluating all convolution layers of the ResNet-50 neural network, the communication cost reduces from 5 TB to 41 GB.
2020
ASIACRYPT
Finding Collisions in a Quantum World: Quantum Black-Box Separation of Collision-Resistance and One-Wayness 📺
Best Paper Award
Since the celebrated work of Impagliazzo and Rudich (STOC 1989), a number of black-box impossibility results have been established. However, these works only ruled out classical black-box reductions among cryptographic primitives. Therefore it may be possible to overcome these impossibility results by using quantum reductions. To exclude such a possibility, we have to extend these impossibility results to the quantum setting. In this paper, we study black-box impossibility in the quantum setting. We first formalize a quantum counterpart of fully-black-box reduction following the formalization by Reingold, Trevisan and Vadhan (TCC 2004). Then we prove that there is no quantum fully-black-box reduction from collision-resistant hash functions to one-way permutations (or even trapdoor permutations). We take both classical and quantum implementations of primitives into account. This is an extension to the quantum setting of the work of Simon (Eurocrypt 1998), who showed a similar result in the classical setting.
2020
ASIACRYPT
Estimating quantum speedups for lattice sieves 📺
Quantum variants of lattice sieve algorithms are routinely used to assess the security of lattice-based cryptographic constructions. In this work we provide a heuristic, non-asymptotic, analysis of the cost of several algorithms for near neighbour search on high dimensional spheres. These algorithms are key components of lattice sieves. We design quantum circuits for near neighbour search algorithms and provide software that numerically optimises algorithm parameters according to various cost metrics. Using this software we estimate the cost of classical and quantum near neighbour search on spheres. For the most performant near neighbour search algorithm that we analyse we find a small quantum speedup in dimensions of cryptanalytic interest. Achieving this speedup requires several optimistic physical and algorithmic assumptions.
2020
ASIACRYPT
B-SIDH: supersingular isogeny Diffie-Hellman using twisted torsion 📺
This paper explores a new way of instantiating isogeny-based cryptography in which parties can work in both the (p+1)-torsion of a set of supersingular curves and in the (p-1)-torsion corresponding to the set of their quadratic twists. Although the isomorphism between a given supersingular curve and its quadratic twist is not defined over GF(p^2) in general, restricting operations to the x-lines of both sets of twists allows all arithmetic to be carried out over GF(p^2) as usual. Furthermore, since supersingular twists always have the same GF(p^2)-rational j-invariant, the SIDH protocol remains unchanged when Alice and Bob are free to work in both sets of twists. This framework lifts the restrictions on the shapes of the underlying prime fields originally imposed by Jao and De Feo, and allows a range of new options for instantiating isogeny-based public key cryptography. These include alternatives that exploit Mersenne and Montgomery-friendly primes, as well as the possibility of halving the size of the primes in the Jao-De Feo construction at no known loss of asymptotic security. For a given target security level, the resulting public keys are smaller than the public keys of all of the key encapsulation schemes currently under consideration in the NIST post-quantum standardisation effort. The best known attacks against the instantiations proposed in this paper are the classical path-finding algorithm due to Delfs and Galbraith and its quantum adaptation due to Biasse, Jao and Sankar; these run in respective time O(p^(1/2)) and O(p^(1/4)), and are essentially memory-free. The upshot is that removing the big-O's and obtaining concrete security estimates is a matter of costing the circuits needed to implement the corresponding isogeny. In contrast to other post-quantum proposals, this makes the security analysis of B-SIDH rather straightforward. Searches for friendly parameters are used to find several primes that range from 237 to 256 bits, the conjectured security of which is comparable to the 434-bit prime used to target NIST level 1 security in the SIKE proposal. One noteworthy example is a 247-bit prime for which Alice's secret isogeny is 7901-smooth and Bob's secret isogeny is 7621-smooth.
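The parameter search mentioned above amounts to hunting for primes p where both torsions are usable, i.e., both p-1 and p+1 are smooth. A toy, miniature-scale version of such a search (using sympy; the real B-SIDH parameters are 237- to 256-bit primes found with far more sophisticated methods):

    from sympy import isprime, factorint

    def largest_prime_factor(n):
        return max(factorint(n))

    B = 50                                   # toy smoothness bound
    for p in range(10_000, 12_000):
        if isprime(p) and largest_prime_factor(p - 1) <= B \
                      and largest_prime_factor(p + 1) <= B:
            print(p, factorint(p - 1), factorint(p + 1))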
2020
ASIACRYPT
Mind the Propagation of States: New Automatic Search Tool for Impossible Differentials and Impossible Polytopic Transitions 📺
Impossible differential cryptanalysis and impossible polytopic cryptanalysis are among the most effective approaches for estimating the security of block ciphers. However, the previous automatic search methods for their distinguishers, impossible differentials and impossible polytopic transitions, neither consider the impact of the key schedule in the single-key setting and the differential properties of large S-boxes, nor apply to block ciphers with variable rotations. Thus, unlike previous methods which focus on the propagation of the difference or s-difference, we redefine impossible differentials and impossible (s + 1)-polytopic transitions according to the propagation of state, which allows us to break through those limitations of the previous methods. Theoretically, we prove that traditional impossible differentials and impossible (s + 1)-polytopic transitions are equivalent to part of our redefinitions, which have the advantage of a broader view. Technically, we renew the automatic search model and design an SAT-based tool to evaluate our redefined impossible differentials and impossible (s + 1)-polytopic transitions efficiently. As a result, for GIFT64, we get 6-round impossible differentials which cannot be detected by any previous tool. For PRINTcipher, we propose the first modeling method for the key-dependent permutation and key-dependent S-box. For MISTY1, we derive 902 4-round impossible differentials by exploiting the differential properties of S-boxes. For RC5, we present the first modeling method for the variable rotation and get 2.5-round impossible differentials for each version of it. More remarkably, our tool can be used to evaluate the security of a given cipher against impossible differentials, and we prove that there exist no 5-round impossible differentials of AES-128 with 1 input active word and 1 output active word, even considering the relations of the 3-round keys. Besides, we also get impossible (s + 1)-polytopic transitions for PRINTcipher, GIFT64, PRESENT, and RC5, all of which cover more rounds than their corresponding impossible differentials, as far as we know.
2020
ASIACRYPT
Minimizing the Two-Round Tweakable Even-Mansour Cipher 📺
In CRYPTO 2015, Cogliati et al. proposed the one-round tweakable Even-Mansour (\textsf{1-TEM}) cipher, constructed out of a single $n$-bit public permutation $\pi$ and a uniform and almost XOR-universal hash function \textsf{H} as $(k, t, x) \mapsto \textsf{H}_k(t) \oplus \pi(\textsf{H}_k(t) \oplus x)$, where $t$ is the tweak and $x$ is the $n$-bit message. The authors showed that its two-round extension, which we refer to as \textsf{2-TEM}, obtained by cascading $2$ independent instances of the construction, gives $2n/3$-bit security, and that $r$-round cascading gives $rn/(r+2)$-bit security. In ASIACRYPT 2015, Cogliati and Seurin showed that the four-round tweakable Even-Mansour cipher, which we refer to as \textsf{4-TEM}, constructed out of four independent $n$-bit permutations $\pi_1, \pi_2, \pi_3, \pi_4$ and two independent $n$-bit keys $k_1, k_2$, defined as $$k_1 \oplus t \oplus \pi_4(k_2 \oplus t \oplus \pi_3(k_1 \oplus t \oplus \pi_2(k_2 \oplus t \oplus \pi_1(k_1 \oplus t \oplus x)))), $$ is secure up to $2^{2n/3}$ adversarial queries. In this paper, we show that if we replace the two independent permutations of \textsf{2-TEM} (Cogliati et al., CRYPTO 2015) with a single $n$-bit public permutation, then the resulting construction still guarantees security up to $2^{2n/3}$ adversarial queries. Using the results derived therein, we also show that replacing the permutations $(\pi_4, \pi_3)$ with $(\pi_1, \pi_2)$ in the above equation preserves security up to $2^{2n/3}$ adversarial queries.
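A toy rendering of these constructions (tiny 4-bit blocks, with GF(2^4) multiplication as the almost-XOR-universal hash; parameters illustrative only):

    import random

    N = 4                                         # toy block size in bits
    random.seed(0)
    PERM = list(range(1 << N))
    random.shuffle(PERM)                          # the single public permutation pi

    def h(k, t):
        # H_k(t) = k * t in GF(2^4) mod x^4 + x + 1: an XOR-universal hash.
        r = 0
        for _ in range(N):
            if t & 1:
                r ^= k
            t >>= 1
            k <<= 1
            if k & (1 << N):
                k ^= (1 << N) | 0b0011            # reduce modulo x^4 + x + 1
        return r

    def tem1(k, t, x):
        # One-round TEM: (k, t, x) -> H_k(t) ^ pi(H_k(t) ^ x).
        return h(k, t) ^ PERM[h(k, t) ^ x]

    def tem2_single_perm(k1, k2, t, x):
        # Two-round TEM with two keys but ONE permutation: the variant this
        # paper proves secure up to 2^{2n/3} queries.
        return tem1(k2, t, tem1(k1, t, x))

    outs = {tem2_single_perm(5, 9, 3, x) for x in range(1 << N)}
    assert len(outs) == 1 << N                    # a permutation per (keys, tweak)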
2020
ASIACRYPT
Determining the Core Primitive for Optimally Secure Ratcheting 📺
After ratcheting attracted attention mostly due to practical real-world protocols, a recent line of work has studied ratcheting as a primitive from a theoretical point of view. Literature in this line, pursuing the strongest security of ratcheting one can hope for, has utilized for its constructions strong, yet inefficient, key-updatable primitives based on hierarchical identity-based encryption (HIBE). As none of these works formally justified utilizing these building blocks, we answer the hitherto open question of under which conditions their use is actually necessary. We revisit these strong notions of ratcheted key exchange (RKE), and propose a more realistic (and slightly stronger) security definition. In this security definition, both exposure of participants' local secrets and attacks against executions' randomness are considered. While these two attacks were partially considered in previous work, we are the first to unify them cleanly in a natural game-based notion. Our definitions are based on the systematic RKE notion by Poettering and Rösler (CRYPTO 2018). Due to slight (but meaningful) changes to regard attacks against randomness, we are ultimately able to show that, in order to fulfill strong security for RKE, public-key cryptography with (independently) updatable key pairs is a necessary building block. Surprisingly, this implication already holds for the simplest RKE variant. Hence, (1) we model optimally secure RKE under randomness manipulation to cover realistic attacks, (2) we (provably) extract the core primitive that is necessary to realize strongly secure RKE, and (3) our results indicate which relaxations in security allow for constructions that rely only on standard public-key cryptography.
2020
ASIACRYPT
Beyond Birthday Bound Secure Fresh Rekeying: Application to Authenticated Encryption 📺
Fresh rekeying is a well-established method to protect a primitive or mode against side-channel attacks: an easy-to-protect but cryptographically not-so-involved function generates a subkey from the master key, and this subkey is then used for the block encryption of a single message or a few messages. It is an efficient way to achieve side-channel protection, but current solutions only achieve birthday-bound security in the block size of the cipher and thus halve its security (unless more involved primitives are employed). We present generalized solutions to parallel block cipher rekeying that, for the first time, achieve security beyond the birthday bound in the block size $n$. The first solution involves, next to the subkey generation, one multiplication and the core block cipher call, and achieves $2^{2n/3}$ security. The second solution makes two block cipher calls and achieves optimal $2^n$ security. Our third solution uses a slightly larger subkey generation function but requires no adaptations to the core encryption, and also achieves optimal security. The construction seamlessly generalizes to permutation-based fresh rekeying. Central to our schemes is the observation that fresh rekeying and generic tweakable block cipher design are two closely related topics, and we can take lessons from the advanced results in the latter to improve our understanding and development of the former. We subsequently use these rekeying schemes in a constructive manner to deliver three authenticated encryption modes that achieve beyond-birthday-bound security and are easy to protect against side-channel attacks.
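The generic shape of fresh rekeying looks as follows. Everything below is a stand-in sketch: hashes replace both the subkey-generation function and the core cipher, whereas the paper's concrete beyond-birthday schemes use e.g. a multiplication plus the core cipher call, or two cipher calls:

    import hashlib, os

    def subkey(master_key, r):
        # Subkey generation g(k, r): the part that is easy to protect
        # against side channels (stand-in; real schemes use e.g.
        # polynomial multiplication rather than a hash).
        return hashlib.sha256(b"g" + master_key + r).digest()[:16]

    def core_cipher(key, block):
        # Stand-in for the core 128-bit block cipher call (a real
        # instantiation would use an actual block cipher such as AES).
        return hashlib.sha256(b"E" + key + block).digest()[:16]

    def rekeyed_encrypt(master_key, block):
        r = os.urandom(16)            # fresh randomness per (few) message(s)
        return r, core_cipher(subkey(master_key, r), block)

    r, ct = rekeyed_encrypt(os.urandom(16), b"sixteen byte msg")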
2020
ASIACRYPT
The Direction of Updatable Encryption does not Matter Much 📺
Updatable encryption schemes allow for key rotation on ciphertexts: a client outsourcing storage of encrypted data to a cloud server can change its encryption key, and the cloud server can update the stored ciphertexts to the new key using only a token provided by the client. This paper solves two open problems in updatable encryption: that of uni-directional vs. bi-directional updates, and that of post-quantum security. Our main result is an analysis of the security notions based on uni- and bi-directional updates. Surprisingly, we prove that the uni- and bi-directional variants of each security notion are equivalent. Our second result is a new and efficient updatable encryption scheme based on the Decisional Learning with Errors assumption, which gives us post-quantum security. Our scheme is bi-directional, but by our main result, this is sufficient.
2020
ASIACRYPT
Incrementally Aggregatable Vector Commitments and Applications to Verifiable Decentralized Storage 📺
Vector commitments with subvector openings (SVC) [Lai-Malavolta, Boneh-Bunz-Fisch; CRYPTO'19] allow one to open a committed vector at a set of positions with an opening of size independent of both the vector's length and the number of opened positions. We continue the study of SVC with two goals in mind: improving their efficiency and making them more suitable for decentralized settings. We address both problems by proposing a new notion for VC that we call \emph{incremental aggregation}, which allows one to merge openings in a succinct way an \emph{unbounded} number of times. We show two applications of this property. The first one is immediate: a method to generate openings in a distributed way. The second application is an algorithm for faster generation of openings via preprocessing. We then proceed to realize SVC with incremental aggregation. We provide two constructions in groups of unknown order that, similarly to that of Boneh et al. (which supports aggregating only once), have constant-size public parameters, commitments and openings. As an additional feature, for the first construction we propose efficient arguments of knowledge of subvector openings, which immediately yields a keyless proof of storage with compact proofs. Finally, we address a problem closely related to that of SVC: storing a file efficiently in completely decentralized networks. We introduce and construct \emph{verifiable decentralized storage} (VDS), a cryptographic primitive that allows one to check the integrity of a file stored by a network of nodes in a distributed and decentralized way. Our VDS constructions rely on our new vector commitment techniques.
2020
ASIACRYPT
Lattice-Based E-Cash, Revisited 📺
Electronic cash (e-cash) was introduced 40 years ago as the digital analogue of traditional cash. It allows users to withdraw electronic coins that can be spent anonymously with merchants. As advocated by Camenisch et al. (Eurocrypt 2005), it should be possible to store the withdrawn coins compactly (i.e., with logarithmic cost in the total number of coins), which has led to the notion of compact e-cash. Many solutions were proposed for this problem, but the security proofs of most of them were invalidated by a very recent paper by Bourse et al. (Asiacrypt 2019). The same paper describes a generic way of fixing existing constructions/proofs, but concrete instantiations of this patch are currently unknown in some settings. In particular, compact e-cash is no longer known to exist under quantum-safe assumptions. In this work, we resolve this problem by proposing the first secure compact e-cash system based on lattices following the result from Bourse et al. Contrary to the latter work, our construction is not only generic: we describe two concrete instantiations. We depart from previous frameworks of e-cash systems by leveraging lossy trapdoor functions to construct our coins. The indistinguishability of lossy and injective keys allows us to avoid the very strong requirements on the involved pseudorandom functions that were necessary to instantiate the generic patch proposed by Bourse et al.
2020
ASIACRYPT
Privacy-Preserving Pattern Matching on Encrypted Data 📺
Pattern matching is one of the most fundamental and important paradigms in several application domains such as digital forensics, cyber threat intelligence, or genomic and medical data analysis. While it is a straightforward operation when performed on plaintext data, it becomes a challenging task when the privacy of both the analyzed data and the analysis patterns must be preserved. In this paper, we propose new provably correct, secure, and relatively efficient (compared to similar existing schemes) public- and private-key based constructions that allow arbitrary pattern matching over encrypted data while protecting both the data to be analyzed and the patterns to be matched. That is, except for the pattern provider (resp. the data owner), all other involved parties in the proposed constructions learn nothing about the patterns to be searched (resp. the data to be inspected). Compared to existing solutions, the constructions we propose have some interesting properties: (1) the size of the ciphertext is linear in the size of the plaintext and independent of the sizes and the number of the analysis patterns; (2) the sizes of the issued trapdoors are constant in the size of the data to be analyzed; and (3) the search complexity is linear in the size of the data to be inspected and constant in the sizes of the analysis patterns. The conducted evaluations show that our constructions drastically improve the performance of the most efficient state-of-the-art solution.
2020
ASIACRYPT
SiGamal: A supersingular isogeny-based PKE and its application to a PRF 📺
We propose two new supersingular isogeny-based public key encryptions: SiGamal and C-SiGamal. They are obtained by providing an additional point of order $2^r$ to CSIDH. SiGamal is similar to ElGamal encryption, while C-SiGamal is a compressed version of SiGamal. We prove that SiGamal and C-SiGamal are IND-CPA secure without using hash functions, under a new assumption: the P-CSSDDH assumption. This assumption comes from the expectation that no efficient algorithm can distinguish between a random point and a point that is the image of a public point under a hidden isogeny. Next, we propose a Naor-Reingold-type pseudorandom function (PRF) based on SiGamal. If the P-CSSDDH assumption and the CSSDDH$^*$ assumption, which guarantees the security of CSIDH that uses a prime $p$ in the setting of SiGamal, hold, then our proposed function is a pseudorandom function. Moreover, we estimate that the computational costs of the group actions needed to compute our proposed PRF are about $\sqrt{\frac{8T}{3\pi}}$ times that of the group actions in CSIDH, where $T$ is the Hamming weight of the input of the PRF. Finally, we experimented with group actions in SiGamal and C-SiGamal. The computational costs of group actions in SiGamal-512 with a $256$-bit plaintext message space were about $2.62$ times that of a group action in CSIDH-512.
2020
ASIACRYPT
Improving Speed and Security in Updatable Encryption Schemes 📺
Periodic key rotation is a common practice designed to limit the long-term power of cryptographic keys. Key rotation refers to the process of re-encrypting encrypted content under a fresh key, and overwriting the old ciphertext with the new one. When encrypted data is stored in the cloud, key rotation can be very costly: it may require downloading the entire encrypted content from the cloud, re-encrypting it on the client's machine, and uploading the new ciphertext back to the cloud. An updatable encryption scheme is a symmetric-key encryption scheme designed to support efficient key rotation in the cloud. The data owner sends a short update token to the cloud. This update token lets the cloud rotate the ciphertext from the old key to the new key, without learning any information about the plaintext. Recent work on updatable encryption has led to several security definitions and proposed constructions. However, existing constructions are not yet efficient enough for practical adoption, and the existing security definitions can be strengthened. In this work we make three contributions. First, we introduce stronger security definitions for updatable encryption (in the ciphertext-dependent setting) that capture desirable security properties not covered in prior work. Second, we construct two new updatable encryption schemes. The first construction relies only on symmetric cryptographic primitives, but only supports a bounded number of key rotations. The second construction supports a (nearly) unbounded number of updates, and is built from the Ring Learning with Errors (RLWE) assumption. Due to complexities of using RLWE, this scheme achieves a slightly weaker notion of integrity compared to the first. Finally, we implement both constructions and compare their performance to prior work. Our RLWE-based construction is 200x faster than a prior proposal for an updatable encryption scheme based on the hardness of elliptic curve DDH. Our first construction, based entirely on symmetric primitives, has the highest encryption throughput, approaching the performance of AES, and the highest decryption throughput on ciphertexts that were re-encrypted fewer than fifty times. For ciphertexts re-encrypted over fifty times, the RLWE construction dominates it in decryption speed.
2020
ASIACRYPT
Non-Committing Encryption with Constant Ciphertext Expansion from Standard Assumptions 📺
Non-committing encryption (NCE) introduced by Canetti et al. (STOC '96) is a central tool to achieve multi-party computation protocols secure in the adaptive setting. Recently, Yoshida et al. (ASIACRYPT '19) proposed an NCE scheme based on the hardness of the DDH problem, which has ciphertext expansion $\mathcal{O}(\log\lambda)$ and public-key expansion $\mathcal{O}(\lambda^2)$. In this work, we improve their result and propose a methodology to construct an NCE scheme that achieves \emph{constant} ciphertext expansion. Our methodology can be instantiated from the DDH assumption and the LWE assumption. When instantiated from the LWE assumption, the public-key expansion is $\lambda\cdot\mathsf{poly}(\log\lambda)$. They are the first NCE schemes satisfying constant ciphertext expansion without using iO or common reference strings. Along the way, we define a weak notion of NCE, which satisfies only weak forms of correctness and security. We show how to amplify such a weak NCE scheme into a full-fledged one using wiretap codes with a new security property.
2020
ASIACRYPT
Efficient Homomorphic Comparison Methods with Optimal Complexity 📺
Comparison of two numbers is one of the most frequently used operations, but it has been a challenging task to efficiently compute the comparison function in homomorphic encryption (HE), which basically supports only addition and multiplication. Recently, Cheon et al. (Asiacrypt 2019) introduced a new approximate representation of the comparison function with a rational function, and showed that this rational function can be evaluated by an iterative algorithm. Due to this iterative feature, their method achieves a logarithmic computational complexity compared to previous polynomial approximation methods; however, the computational complexity is still not optimal, and the algorithm is quite slow for large-bit inputs in HE implementation. In this work, we propose new comparison methods with \emph{optimal} asymptotic complexity based on \emph{composite polynomial} approximation. The main idea is to systematically design a constant-degree polynomial $f$ by identifying the \emph{core properties} that make a composite polynomial $f\circ f \circ \cdots \circ f$ get close to the sign function (equivalent to the comparison function) as the number of compositions increases. We additionally introduce an acceleration method applying a mixed polynomial composition $f\circ \cdots \circ f\circ g \circ \cdots \circ g$ for some other polynomial $g$ with different properties instead of $f\circ f \circ \cdots \circ f$. Utilizing the devised polynomials $f$ and $g$, our new comparison algorithms only require $\Theta(\log(1/\epsilon)) + \Theta(\log\alpha)$ computational complexity to obtain an approximate comparison result of $a,b\in[0,1]$ satisfying $|a-b|\ge \epsilon$ within $2^{-\alpha}$ error. The asymptotic optimality results in substantial performance enhancement: our comparison algorithm on $16$-bit encrypted integers for $\alpha = 16$ takes $1.22$ milliseconds in amortized running time based on the approximate HE scheme HEAAN, which is $18$ times faster than the previous work.
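A plaintext toy of the composite-polynomial idea (the concrete $f$ below is one degree-3 candidate with the stated core properties: odd, $f(1)=1$, and vanishing derivative at $\pm 1$; the paper evaluates such composites under HE and adds an accelerating $g$):

    def f(x):
        # Odd, f(1) = 1, f'(+-1) = 0: iterates are pushed toward sign(x).
        return (3 * x - x ** 3) / 2

    def approx_sign(x, depth):
        for _ in range(depth):          # evaluate f o f o ... o f
            x = f(x)
        return x

    a, b = 0.53, 0.47
    comp = (approx_sign(a - b, depth=12) + 1) / 2   # ~1 if a > b, ~0 if a < b
    print(round(comp, 6))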
2020
ASIACRYPT
Fuzzy Asymmetric Password-Authenticated Key Exchange 📺
Password-Authenticated Key Exchange (PAKE) lets users with passwords exchange a cryptographic key. There have been two variants of PAKE which make it more applicable to real-world scenarios: * Asymmetric PAKE (aPAKE), which aims at protecting a client's password even if the authentication server is untrusted, and * Fuzzy PAKE (fPAKE), which enables key agreement even if passwords of users are noisy, but "close enough". Supporting fuzzy password matches eases the use of higher-entropy passwords and enables using biometrics and environmental readings (both of which are naturally noisy). Until now, both variants of PAKE have been considered only in separation. In this paper, we consider both of them simultaneously. We introduce the notion of Fuzzy Asymmetric PAKE (fuzzy aPAKE), which protects against untrusted servers and supports noisy passwords. We formulate our new notion in the Universal Composability framework of Canetti (FOCS'01), which is the preferred model for password-based primitives. We then show that fuzzy aPAKE can be obtained from oblivious transfer and some variant of robust secret sharing (Cramer et al., EC'15). We achieve security against malicious parties while avoiding expensive tools such as non-interactive zero-knowledge proofs. Our construction is round-optimal, with message and password file sizes that are independent of the scheme's error tolerance.
2020
ASIACRYPT
Public-Key Generation with Verifiable Randomness 📺
We revisit the problem of proving that a user algorithm selected and correctly used a truly random seed in the generation of her cryptographic key. A first approach was proposed in 2002 by Juels and Guajardo for the validation of RSA secret keys. We present a new security model and general tools to efficiently prove that a private key was generated at random according to a prescribed process, without revealing any further information about the private key. We give a generic protocol for all key-generation algorithms based on probabilistic circuits and prove its security. We also propose a new protocol for factoring-based cryptography that we prove secure in the aforementioned model. The latter relies on a new efficient zero-knowledge argument for the double discrete logarithm problem that achieves an exponential improvement in communication complexity compared to the state of the art, and is of independent interest.
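For reference, a double-discrete-logarithm instance, the statement type addressed by the new argument, looks as follows (toy numbers, chosen only so the group orders work out):

    p, q = 467, 233        # toy primes with q | p - 1
    g, h, x = 4, 5, 57     # g has order q mod p; x is the secret witness
    y = pow(g, pow(h, x, q), p)
    # Proving knowledge of x with y = g^(h^x), without revealing x, is the
    # double discrete logarithm problem; the paper's argument does this with
    # exponentially lower communication than prior work.
    print(y)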
2020
ASIACRYPT
Side Channel Information Set Decoding using Iterative Chunking 📺
This paper presents an attack based on side-channel information and information set decoding (ISD) on the code-based Niederreiter cryptosystem and an evaluation of the practicality of the attack using an electromagnetic side channel. We start by directly adapting the timing side-channel plaintext-recovery attack by Shoufan et al. from 2010 to the constant-time implementation of the Niederreiter cryptosystem as used in the official FPGA-implementation of the NIST finalist “Classic McEliece”. We then enhance our attack using ISD and a new technique that we call iterative chunking to further significantly reduce the number of required side-channel measurements. We theoretically show that our attack improvements have a significant impact on reducing the number of required side-channel measurements. For example, for the 256-bit security parameter set kem/mceliece6960119 of “Classic McEliece”, we improve the basic attack that requires 5415 measurements to less than 562 measurements on average to mount a successful plaintext-recovery attack. Further reductions can be achieved at the price of increasing the cost of the ISD computations. We confirm our findings by practically mounting the attack on the official FPGA-implementation of “Classic McEliece” for all proposed parameter sets.
2020
ASIACRYPT
On the Exact Round Complexity of Best-of-both-Worlds Multi-party Computation 📺
The two traditional streams of multiparty computation (MPC) protocols consist of (a) protocols achieving guaranteed output delivery (GOD) or fairness (FAIR) in the honest-majority setting and (b) protocols achieving unanimous or selective abort in the dishonest-majority setting. The favorable presence of an honest majority amongst the participants is necessary to achieve the stronger notions of GOD or FAIR. While constructions of each type abound in the literature, one class of protocols does not seem to withstand the threat model of the other. For instance, the honest-majority protocols do not guarantee privacy of the inputs of the honest parties in the face of a dishonest majority, and likewise the dishonest-majority protocols cannot achieve GOD and FAIR, tolerating even a single corruption, let alone a dishonest minority. The promise of the unconventional yet much sought-after species of MPC, termed `Best-of-Both-Worlds' (BoBW), is to offer the best possible security depending on the actual corruption scenario. This work nearly settles the exact round complexity of two classes of BoBW protocols differing on the security achieved in the honest-majority setting, namely GOD and FAIR respectively, under the assumption of no setup (plain model), public setup (CRS) and private setup (CRS + PKI or simply PKI). The former class necessarily requires the number of parties to be strictly more than the sum of the bounds of corruptions in the honest-majority and dishonest-majority settings for a feasible solution to exist. Demoting the goal to the second-best attainable security in the honest-majority setting, the latter class needs no such restriction. Assuming a network with pair-wise private channels and a broadcast channel, we show that 5 and 3 rounds are necessary and sufficient for the class of BoBW MPC with FAIR under the assumption of `no setup' and `public and private setup' respectively. For the class of BoBW MPC with GOD, we show necessity and sufficiency of 3 rounds for the public setup case and 2 rounds for the private setup case. In the no-setup setting, we show the sufficiency of 5 rounds, while the known lower bound is 4. All our upper bounds are based on polynomial-time assumptions and assume black-box simulation. With distinct feasibility conditions, the classes differ in terms of the round requirement. The bounds are in some cases different and, on a positive note, at most one more compared to the maximum of the needs of the honest-majority and dishonest-majority settings. Our results remain unaffected when security with abort and fairness are upgraded to their identifiable counterparts.
2020
ASIACRYPT
Improved Security Analysis for Nonce-based Enhanced Hash-then-Mask MACs 📺
In this paper, we prove that the nonce-based enhanced hash-then-mask MAC (nEHtM) is secure up to 2^{3n/4} MAC queries and 2^n verification queries (ignoring logarithmic factors) as long as the number of faulty queries \mu is below 2^{3n/8}, significantly improving the previous bound by Dutta et al. Even when \mu goes beyond 2^{3n/8}, nEHtM enjoys graceful degradation of security. The second result is to prove the security of PRF-based nEHtM; when nEHtM is based on an n-to-s bit random function for a fixed size s such that 1 <= s <= n, it is proved to be secure up to any number of MAC queries and 2^s verification queries, if (1) s = n and \mu < 2^{n/2} or (2) n/2 < s < 2^{n-s} and \mu < max{2^{s/2}, 2^{n-s}}, or (3) s <= n/2 and \mu < 2^{n/2}. This result leads to the security proof of truncated nEHtM that returns only s bits of the original tag since a truncated permutation can be seen as a pseudorandom function. In particular, when s <= 2n/3, the truncated nEHtM is secure up to 2^{n - s/2} MAC queries and 2^s verification queries as long as \mu < min{2^{n/2}, 2^{n-s}}. For example, when s = n/2 (resp. s = n/4), the truncated nEHtM is secure up to 2^{3n/4} (resp. 2^{7n/8}) MAC queries. So truncation might provide better provable security than the original nEHtM with respect to the number of MAC queries.
2020
ASIACRYPT
Simulation-Sound Arguments for LWE and Applications to KDM-CCA2 Security 📺
The Naor-Yung paradigm is a well-known technique that constructs IND-CCA2-secure encryption schemes by means of non-interactive zero-knowledge proofs satisfying a notion of simulation-soundness. Until recently, it was an open problem to instantiate it under the sole Learning-With-Errors (LWE) assumption without relying on random oracles. While the recent results of Canetti et al. (STOC'19) and Peikert-Shiehian (Crypto'19) provide a solution to this problem by applying the Fiat-Shamir transform in the standard model, the resulting constructions are extremely inefficient as they proceed via a reduction to an NP-complete problem. In this paper, we give a direct, non-generic method for instantiating Naor-Yung under the LWE assumption outside the random oracle model. Specifically, we give a direct construction of an unbounded simulation-sound NIZK argument system which, for carefully chosen parameters, makes it possible to express the equality of plaintexts encrypted under different keys in Regev's cryptosystem. We also give a variant of our argument that provides tight security. As an application, we obtain an LWE-based public-key encryption scheme for which we can prove (tight) key-dependent message security under chosen-ciphertext attacks in the standard model.
2020
ASIACRYPT
MPC with Synchronous Security and Asynchronous Responsiveness 📺
Two paradigms for secure MPC are synchronous and asynchronous protocols. While synchronous protocols tolerate more corruptions and allow every party to give its input, they are very slow because the speed depends on the conservatively assumed worst-case delay $\Delta$ of the network. In contrast, asynchronous protocols allow parties to obtain output as fast as the actual network allows, a property called \emph{responsiveness}, but unavoidably have lower resilience and parties with slow network connections cannot give input. It is natural to wonder whether it is possible to leverage synchronous MPC protocols to achieve responsiveness, hence obtaining the advantages of both paradigms: full security with responsiveness up to t corruptions, and 'extended' security (full security or security with unanimous abort) with no responsiveness up to a larger threshold T of corruptions. We settle the question by providing matching feasibility and impossibility results: -For the case of unanimous abort as extended security, there is an MPC protocol if and only if T + 2t < n. -For the case of full security as extended security, there is an MPC protocol if and only if T < n/2 and T + 2t < n. In particular, setting t = n/4 allows to achieve a fully secure MPC for honest majority, which in addition benefits from having substantial responsiveness.
2020
ASIACRYPT
A New Decryption Failure Attack against HQC 📺
HQC is an IND-CCA2 KEM running for standardization in NIST's post-quantum cryptography project and has advanced to the second round. It is a code-based scheme in the class of public key encryptions, with given sets of parameters spanning NIST security strengths 1, 3 and 5, corresponding to 128, 192 and 256 bits of classical security. In this paper we present an attack recovering the secret key of an HQC instance named hqc-256-1. The attack requires a single precomputation performed once and then never again. The online attack on an HQC instance then submits about $2^{64}$ special ciphertexts for decryption (obtained from the precomputation), and a phase of analysis studies the subset of ciphertexts that are not correctly decrypted. In this phase, the secret key of the HQC instance is determined. The overall complexity is estimated to be \(2^{246}\) if the attacker balances the costs of precomputation and post-processing, thereby claiming a successful attack on hqc-256-1 in the NIST setting. If we allow the precomputation cost to be $2^{254}$, which is below exhaustive key search on a 256-bit secret key, the computational complexity of the later parts can be no more than $2^{64}$. This is a setting relevant to practical security, since the large precomputation needs to be done only once. Also, we note that the complexity of the precomputation can be lower if the online attack is allowed to submit more than $2^{64}$ ciphertexts for decryption.
2020
ASIACRYPT
Possibility and Impossibility Results for Receiver Selective Opening Secure PKE in the Multi-Challenge Setting 📺
Public key encryption (PKE) schemes are usually deployed in an open system with numerous users. In practice, it is common that some users are corrupted. A PKE scheme is said to be receiver selective opening (RSO) secure if it can still protect messages transmitted to uncorrupted receivers after the adversary corrupts some receivers and learns their secret keys. This is usually defined by requiring the existence of a simulator that can simulate the view of the adversary given only the opened messages. Existing works construct RSO secure PKE schemes in a single-challenge setting, where the adversary can only obtain one challenge ciphertext for each public key. However, in practice, it is preferable to have a PKE scheme with RSO security in the multi-challenge setting, where public keys can be used to encrypt multiple messages. In this work, we explore the possibility for achieving PKE schemes with receiver selective opening security in the multi-challenge setting. Our contributions are threefold. First, we demonstrate that PKE schemes with RSO security in the single-challenge setting are not necessarily RSO secure in the multi-challenge setting. Then, we show that it is impossible to achieve RSO security for PKE schemes if the number of challenge ciphertexts under each public key is a priori unbounded. In particular, we prove that no PKE scheme can be RSO secure in the $k$-challenge setting (i.e., the adversary can obtain $k$ challenge ciphertexts for each public key) if its secret key contains less than $k$ bits. On the positive side, we give a concrete construction of PKE scheme with RSO security in the $k$-challenge setting, where the ratio of the secret key length to $k$ approaches the lower bound 1.
2020
ASIACRYPT
Cryptography from One-Way Communication: On Completeness of Finite Channels 📺
Garg et al. (Crypto 2015) initiated the study of cryptographic protocols over noisy channels in the non-interactive setting, namely when only one party speaks. A major question left open by this work is the completeness of {\em finite} channels, whose input and output alphabets do not grow with the desired level of security. In this work, we address this question by obtaining the following results:
- Completeness of Bit-ROT with Inverse Polynomial Error: We show that bit-ROT (i.e., Randomized Oblivious Transfer channel, where each of the two messages is a single bit) can be used to realize general randomized functionalities with inverse polynomial error. Towards this, we provide a construction of string-ROT from bit-ROT with inverse polynomial error.
- No Finite Channel is Complete with Negligible Error: To complement the above, we show that {\it no} finite channel can be used to realize string-ROT with negligible error, implying that the inverse polynomial error in the completeness of bit-ROT is inherent. This holds even with semi-honest parties and for computational security, and is contrasted with the (negligible-error) completeness of string-ROT shown by Garg et al.
- Characterization of Finite Channels Enabling Zero-Knowledge Proofs: An important instance of secure computation is zero-knowledge proofs. Noisy channels can potentially be used to realize truly non-interactive zero-knowledge proofs, without trusted common randomness, and with non-transferability and deniability features that cannot be realized in the plain model. Garg et al. obtain such zero-knowledge proofs from the binary erasure channel (BEC) and the binary symmetric channel (BSC). We complete the picture by showing that in fact any non-trivial channel suffices.
2020
ASIACRYPT
Tight Security Analysis of 3-Round Key-Alternating Cipher with A Single Permutation 📺
The tight security bound of the KAC (Key-Alternating Cipher) construction whose round permutations are independent from each other has been well studied. A natural question is how the security bound changes when fewer permutations are used in a KAC construction. In CRYPTO 2014, Chen et al. proved that 2-round KAC with a single permutation (2KACSP) has the same security level as the classic one (i.e., 2-round KAC). But we still know little about the security bound of incompletely-independent KAC constructions with more than 2 rounds. In this paper, we show that a similar result also holds for the 3-round case. More concretely, we prove that 3-round KAC with a single permutation (3KACSP) is secure up to $\varTheta(2^{\frac{3n}{4}})$ queries, matching the tight security bound of the classic 3-round KAC. To avoid the cumbersome graphical illustration used in Chen et al.'s work, a new representation is introduced to characterize the underlying combinatorial problem. Benefiting from it, we can handle the knotty dependencies in a modular way, and also show a plausible way to study the security of $r$KACSP. Technically, we abstract a class of problems capturing the intrinsic randomness of the $r$KACSP construction, and then propose a high-level framework to handle such problems. Furthermore, our proof techniques show some evidence that, for any $r$, $r$KACSP has the same security level as the classic $r$-round KAC in the random permutation model.
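A minimal toy model (ours) of the 3KACSP structure, a key-alternating cipher in which all three rounds reuse one public permutation, with an 8-bit state standing in for the n-bit block:

```python
import random

# Toy 3KACSP: three key-alternating rounds that all call the same public
# random permutation P (8-bit toy parameters, ours; real analyses work in
# the random permutation model with an n-bit P).
random.seed(1)
N = 256
P = list(range(N))
random.shuffle(P)                                  # one public random permutation
keys = [random.randrange(N) for _ in range(4)]     # round keys k0..k3

def kacsp3(x: int) -> int:
    for k in keys[:3]:
        x = P[x ^ k]                               # XOR a round key, then apply the same P
    return x ^ keys[3]

print(hex(kacsp3(0x42)))
```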
2020
ASIACRYPT
Quantum Collision Attacks on AES-like Hashing with Low Quantum Random Access Memories 📺
At EUROCRYPT 2020, Hosoyamada and Sasaki proposed the first dedicated quantum attack on hash functions -- a quantum version of the rebound attack exploiting differentials whose probabilities are too low to be useful in the classical setting. This work opens up a new perspective toward the security of hash functions against quantum attacks. In particular, it tells us that the search for differentials should not stop at the classical birthday bound. Despite these interesting and promising implications, the concrete attacks described by Hosoyamada and Sasaki make use of large quantum random access memories (qRAMs), a resource whose availability in the foreseeable future is controversial even in the quantum computation community. Without large qRAMs, these attacks incur significant increases in time complexities. In this work, we reduce or even avoid the use of qRAMs by performing a quantum rebound attack based on differentials with non-full-active super S-boxes. Along the way, an MILP-based method is proposed to systematically explore the search space of useful truncated differentials with respect to rebound attacks. As a result, we obtain improved attacks on AES-MMO, AES-MP, and the first classical collision attacks on 4- and 5-round Grøstl-512. Interestingly, the use of non-full-active super S-box differentials in the analysis of AES-MMO gives rise to new difficulties in collecting enough starting points. To overcome this issue, we consider attacks involving two message blocks to gain more degrees of freedom, and we successfully compress the qRAM demand of the collision attacks on AES-MMO and AES-MP (EUROCRYPT 2020) from $2^{48}$ to a range from $2^{16}$ to $0$, while still maintaining a comparable time complexity. To the best of our knowledge, these are the first dedicated quantum attacks on hash functions that slightly outperform Chailloux, Naya-Plasencia, and Schrottenloher's generic quantum collision attack (ASIACRYPT 2017) in a model where large qRAMs are not available. This work demonstrates again how a clever combination of classical cryptanalytic techniques and quantum computation leads to improved attacks, and shows that the direction pointed out by Hosoyamada and Sasaki deserves further investigation.
2020
ASIACRYPT
An Algebraic Formulation of the Division Property: Revisiting Degree Evaluations, Cube Attacks, and Key-Independent Sums 📺
Since it was proposed in 2015 as a generalization of integral properties, the division property has evolved into a powerful tool for probing the structures of Boolean functions whose algebraic normal forms are not available. We capture the most essential elements for the detection of division properties from a pure algebraic perspective, proposing a technique named {\it monomial prediction}, which can be employed to determine the presence or absence of a monomial in the product of the coordinate functions of a vectorial Boolean function $\boldsymbol{f}$ by counting the number of so-called {\it monomial trails} across a sequence of simpler functions whose composition is $\boldsymbol{f}$. Under the framework of monomial prediction, we formally prove that most algorithms for detecting division properties in the previous literature raise no false alarms but may miss some properties. We also establish the equivalence between monomial prediction and the three-subset bit-based division property without unknown subset presented at EUROCRYPT 2020, and show that these two techniques are perfectly accurate. This algebraic formulation gives more insight into division properties and inspires new search strategies. With monomial prediction, we obtain the {\it exact} algebraic degrees of Trivium up to 834 rounds for the first time. In the context of cube attacks, we are able to explore a larger search space in limited time and recover the exact algebraic normal forms of complex superpolies with the help of a divide-and-conquer strategy. As a result, we identify more cubes with smaller dimensions, leading to improvements of some near-optimal attacks against 840-, 841- and 842-round Trivium.
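For toy sizes, the presence or absence of a monomial can be verified exactly by computing the algebraic normal form with the binary Möbius transform, which is the ground truth that monomial prediction scales up to real ciphers; a small self-contained check (ours):

```python
# Brute-force monomial detection for a toy Boolean function via the binary
# Moebius transform (illustration only; real targets are far too large).

def anf(truth_table, n):
    """In-place binary Moebius transform: truth table -> ANF coefficients."""
    coeffs = list(truth_table)
    for i in range(n):
        for u in range(1 << n):
            if u >> i & 1:
                coeffs[u] ^= coeffs[u ^ (1 << i)]
    return coeffs

n = 3
f = lambda x: (x & 1) & (x >> 1 & 1) ^ (x >> 2 & 1)   # f(x) = x0*x1 + x2
coeffs = anf([f(x) for x in range(1 << n)], n)
print(coeffs[0b011], coeffs[0b100])   # 1 1: monomials x0*x1 and x2 are present
print(coeffs[0b111])                  # 0: monomial x0*x1*x2 is absent
```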
2020
ASIACRYPT
CCA-Secure (Puncturable) KEMs from Encryption With Non-Negligible Decryption Errors 📺
Public-key encryption (PKE) schemes or key-encapsulation mechanisms (KEMs) are fundamental cryptographic building blocks to realize secure communication protocols. There are several known transformations that generically turn weakly secure schemes into strongly (i.e., IND-CCA) secure ones. While most of these transformations require the weakly secure scheme to provide perfect correctness, Hofheinz, Hövelmanns, and Kiltz (HHK) (TCC 2017) have recently shown that variants of the Fujisaki-Okamoto (FO) transform can work with schemes that have negligible correctness error in the (quantum) random oracle model (QROM). Many recent schemes in the NIST post-quantum competition (PQC) use variants of these transformations. Some of their CPA-secure versions even have a non-negligible correctness error and so the techniques of HHK cannot be applied. In this work, we study the setting of generically transforming PKE schemes with potentially large, i.e., non-negligible, correctness error to ones having negligible correctness error. While there have been previous treatments in an asymptotic setting by Dwork et al. (EUROCRYPT 2004), our goal is to come up with practically efficient compilers in a concrete setting and apply them in two different contexts: firstly, we show how to generically transform weakly secure deterministic or randomized PKEs into CCA-secure KEMs in the (Q)ROM using variants of HHK. This applies to essentially all candidates to the NIST PQC based on lattices and codes with non-negligible error, for which we provide an extensive analysis. We thereby show that it improves some of the code-based candidates. Secondly, we study puncturable KEMs in terms of the Bloom Filter KEM (BFKEM) proposed by Derler et al. (EUROCRYPT 2018) which inherently have a non-negligible correctness error. BFKEMs are a building block to construct fully forward-secret zero round-trip time (0-RTT) key-exchange protocols. In particular, we show how to achieve the first post-quantum secure BFKEM generically from lattices and codes by applying our techniques to identity-based encryption (IBE) schemes with (non-)negligible correctness error.
2020
ASIACRYPT
A Bit-Vector Differential Model for the Modular Addition by a Constant 📺
ARX algorithms are a class of symmetric-key algorithms constructed by Addition, Rotation, and XOR, which achieve the best software performance in low-end microcontrollers. To evaluate the resistance of an ARX cipher against differential cryptanalysis and its variants, recent automated methods employ constraint satisfaction solvers, such as SMT solvers, to search for optimal characteristics. The main difficulty in formulating this search as a constraint satisfaction problem is obtaining the differential models of the non-linear operations, that is, the constraints describing the differential probability of each non-linear operation of the cipher. While an efficient bit-vector differential model was obtained for the modular addition with two variable inputs, no differential model for the modular addition by a constant has been proposed so far, preventing ARX ciphers that include this operation from being evaluated with automated methods. In this paper, we present the first bit-vector differential model for the n-bit modular addition by a constant input. Our model contains O(log2(n)) basic bit-vector constraints and describes the binary logarithm of the differential probability. We also present an SMT-based automated method to look for differential characteristics of ARX ciphers, including constant additions, and we provide an open-source tool, ArxPy, to find ARX differential characteristics in a fully automated way. To provide some examples, we have searched for related-key differential characteristics of TEA, XTEA, HIGHT, and LEA, obtaining better results than previous works. Our differential model and our automated tool allow cipher designers to select the best constant inputs for modular additions and cryptanalysts to evaluate the resistance of ARX ciphers against differential attacks.
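For the two-variable-input case mentioned above, the differential probability is given by the classic Lipmaa-Moriai formula (FSE 2001); the snippet below (ours) checks it against brute force on a small word size, to make concrete what a differential model of modular addition computes. The paper's contribution, a model for addition by a constant, is not reproduced here:

```python
# Lipmaa-Moriai formula for xdp+ (two variable inputs), checked by brute force.
N = 8
MASK = (1 << N) - 1

def eq(x, y, z):
    return ~(x ^ y) & ~(x ^ z) & MASK            # bit i set iff x_i = y_i = z_i

def xdp_add(a, b, c):
    """Probability that input diffs (a, b) produce output diff c through +."""
    if eq(a << 1, b << 1, c << 1) & (a ^ b ^ c ^ (b << 1)) & MASK:
        return 0.0
    return 2.0 ** -bin(~eq(a, b, c) & (MASK >> 1)).count('1')

def xdp_add_bruteforce(a, b, c):
    hits = sum(((x + y) ^ ((x ^ a) + (y ^ b))) & MASK == c
               for x in range(1 << N) for y in range(1 << N))
    return hits / 2 ** (2 * N)

a, b, c = 0x12, 0x34, 0x26
assert abs(xdp_add(a, b, c) - xdp_add_bruteforce(a, b, c)) < 1e-12
print(xdp_add(a, b, c))                          # 0.0625 = 2^-4
```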
2020
ASIACRYPT
MoniPoly---An Expressive $q$-SDH-Based Anonymous Attribute-Based Credential System 📺
Modern attribute-based anonymous credential (ABC) systems benefit from special encodings that yield expressive and highly efficient show proofs on logical statements. The technique was first proposed by Camenisch and Groß, who constructed an SRSA-based ABC system with prime-encoded attributes that offers efficient \textsf{AND}, \textsf{OR} and \textsf{NOT} proofs. While other ABC frameworks have adopted constructions in the same vein, the Camenisch-Groß ABC has been the most expressive and asymptotically most efficient proof system to date, even if it was constrained by the requirement of a trusted message-space setup and an inherent restriction to finite-set attributes encoded as primes. In this paper, combining a new set commitment scheme and an SDH-based signature scheme, we present a provably secure ABC system that supports show proofs for complex statements. This construction is not only more expressive than existing approaches, but also highly efficient over an unrestricted attribute space, since its ECC protocols require only a constant number of bilinear pairings by the verifier, and none by the prover. Furthermore, we introduce strong security models for impersonation and unlinkability under adaptive active and concurrent attacks to allow for the expressiveness of our ABC as well as for a systematic comparison to existing schemes. Given this foundation, we are the first to comprehensively formally prove the security of an ABC with expressive show proofs. Specifically, we prove the security against impersonation under the $q$-(co-)SDH assumption with a tight reduction. Besides the set commitment scheme, which may be of independent interest, our security models can serve as a foundation for the design of future ABC systems.
2020
ASIACRYPT
Collusion Resistant Trace-and-Revoke for Arbitrary Identities from Standard Assumptions 📺
A traitor tracing scheme is a multi-user public-key encryption scheme where each user in the system holds a decryption key that is associated with the user's identity. Using the public key, a content distributor can encrypt a message to all of the users in the system. At the same time, if a malicious group of users combine their respective decryption keys to build a "pirate decoder," there is an efficient tracing algorithm that the content distributor can use to identify at least one of the keys used to construct the decoder. A trace-and-revoke scheme is an extension of a standard traitor tracing scheme where there is an additional key-revocation mechanism that the content distributor can use to disable the decryption capabilities of compromised keys. Namely, during encryption, the content distributor can encrypt a message with respect to a list of revoked users such that only non-revoked users can decrypt the resulting ciphertext. Trace-and-revoke schemes are challenging to construct. Existing constructions from standard assumptions can only tolerate bounded collusions (i.e., there is an a priori bound on the number of keys an adversary obtains), have system parameters that scale exponentially in the bit-length of the identities, or satisfy weaker notions of traceability that are vulnerable to certain types of "pirate evolution" attacks. In this work, we provide the first construction of a trace-and-revoke scheme that is fully collusion resistant and capable of supporting arbitrary identities (i.e., the identities can be drawn from an exponential-size space). Our scheme supports public encryption and secret tracing, and can be based on the sub-exponential hardness of the LWE problem (with a super-polynomial modulus-to-noise ratio). The ciphertext size in our construction scales logarithmically in the size of the identity space and linearly in the size of the revocation list. Our scheme leverages techniques from both combinatorial and algebraic constructions for traitor tracing.
2020
ASIACRYPT
An Algebraic Attack on Ciphers with Low-Degree Round Functions: Application to Full MiMC 📺
Algebraically simple PRFs, ciphers, or cryptographic hash functions are becoming increasingly popular, for example due to their attractive properties for MPC and new proof systems (SNARKs, STARKs, among many others). In this paper, we focus on the algebraically simple construction MiMC, which became an attractive cryptanalytic target due to its simplicity, but also due to its use as a baseline in a competition for more recent algorithms exploring this design space. For the first time, we are able to describe key-recovery attacks on all full-round versions of MiMC over GF(2^n), requiring half the codebook. In the chosen-ciphertext scenario, recovering the key from this data for the n-bit full version of MiMC takes the equivalent of less than $2^{n - \log_2(n) + 1}$ calls to MiMC and negligible amounts of memory. The attack procedure is a generalization of higher-order differential cryptanalysis, and it is based on two main ingredients. First, we present a higher-order distinguisher which exploits the fact that the algebraic degree of MiMC grows significantly more slowly than originally believed. Secondly, we describe an approach to turn this distinguisher into a key-recovery attack without guessing the full subkey. Finally, we show that approximately $\lceil \log_3(2R) \rceil$ more rounds (where $R = \lceil n \cdot \log_3(2) \rceil$ is the current number of rounds of MiMC-n/n) can be necessary and sufficient to restore the security against the key-recovery attack presented here. The attack has been practically verified on toy versions of MiMC. Note that our attack does not affect the security of MiMC over prime fields.
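The round-count and cost formulas above are easy to evaluate; for instance (our arithmetic, with n = 129 as an example block size):

```python
import math

# Evaluating the abstract's formulas: number of rounds R of MiMC-n/n and
# the log2 cost of the key-recovery attack (illustration only).

def mimc_rounds(n: int) -> int:
    return math.ceil(n * math.log(2, 3))         # R = ceil(n * log_3(2))

def attack_log2_cost(n: int) -> float:
    return n - math.log2(n) + 1                  # attack < 2^(n - log_2(n) + 1) calls

n = 129
print(mimc_rounds(n))                            # 82 rounds
print(round(attack_log2_cost(n), 1))             # 123.0, well below 2^129 key search
```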
2020
ASIACRYPT
Adaptively Secure Inner Product Encryption from LWE 📺
Attribute-based encryption (ABE) is an advanced form of encryption scheme allowing for access policies to be embedded within the secret keys and ciphertexts. By now, we have ABEs supporting numerous types of policies based on hardness assumptions over bilinear maps and lattices. However, one of the distinguishing differences between ABEs based on these two families of assumptions is that the former can achieve adaptive security for quite expressive policies (e.g., inner products, Boolean formulas) while the latter cannot. Recently, two adaptively secure lattice-based ABEs have appeared and changed the state of affairs: a non-zero inner-product (NIPE) encryption by Katsumata and Yamada (PKC'19) and an ABE for t-CNF policies by Tsabary (CRYPTO'19). However, the policies supported by these ABEs are still quite limited and do not embrace the more interesting policies that have been studied in the literature. Notably, constructing an adaptively secure inner-product encryption (IPE) based on lattices still remains open. In this work, we propose the first adaptively secure IPE based on the learning with errors (LWE) assumption with sub-exponential modulus size (without resorting to complexity leveraging). Concretely, our IPE supports inner-products over the integers Z with polynomial-sized entries and satisfies adaptively weakly-attribute-hiding security. We also show how to convert such an IPE to an IPE supporting inner-products over Z_p for a polynomial-sized p and a fuzzy identity-based encryption (FIBE) for small and large universes. Our result builds on the ideas presented in Tsabary (CRYPTO'19), which uses constrained pseudorandom functions (CPRFs) in a semi-generic way to achieve adaptively secure ABEs, and the recent lattice-based adaptively secure CPRF for inner products by Davidson et al. (CRYPTO'20). Our main observation is realizing how to weaken the conforming CPRF property introduced in Tsabary (CRYPTO'19) by taking advantage of the specific linearity property enjoyed by the lattice evaluation algorithms of Boneh et al. (EUROCRYPT'14).
2020
ASIACRYPT
Scalable Ciphertext Compression Techniques for Post-Quantum KEMs and their Applications 📺
A multi-recipient key encapsulation mechanism, or mKEM, provides a scalable solution to securely communicating with a large group, and offers savings in both bandwidth and computational cost compared to the trivial solution of communicating with each member individually. All prior works on mKEM are limited to classical assumptions and, although some generic constructions are known, they all require specific properties that are not shared by most post-quantum schemes. In this work, we first provide a simple and efficient generic construction of mKEM that can be instantiated from versatile assumptions, including post-quantum ones. We then study these mKEM instantiations at a practical level using 8 post-quantum KEMs (lattice- and isogeny-based NIST candidates) and CSIDH, and show that compared to the trivial solution, our mKEM offers savings of at least one order of magnitude in bandwidth, and makes encryption time shorter by a factor ranging from 1.92 to 35. Additionally, we show that by combining mKEM with the TreeKEM protocol used by MLS (an IETF draft for secure group messaging) we obtain significant bandwidth savings.
2020
ASIACRYPT
SILVER - Statistical Independence and Leakage Verification 📺
Implementing cryptographic functions securely in the presence of physical adversaries is still a challenge, although the lion's share of research in the physical security domain has gone into the development of countermeasures. Among several protection schemes, masking has absorbed the most research attention in both the academic and industrial communities, due to its theoretical foundation, which allows one to provide proofs of, or to model, the achieved security level. In return, masking schemes are difficult to implement, as the implementation process is often manual, complex, and error-prone. This motivates the need for formal verification tools that allow designers and engineers to analyze and verify designs before manufacturing. In this work, we present a new framework to analyze and verify masked implementations against various security notions using different security models as reference. In particular, our framework, which directly processes the gate-level netlist resulting from hardware synthesis, relies on Reduced Ordered Binary Decision Diagrams (ROBDDs) and the concept of statistical independence of probability distributions. Compared to existing tools, our framework stands out for its simplicity, accuracy, and functionality, while still retaining reasonable efficiency for many applications and common use-cases.
2020
ASIACRYPT
Lower Bounds on the Degree of Block Ciphers 📺
So far, only methods for estimating upper bounds on the algebraic degree of block ciphers have been known, and such upper bounds are of limited use to a designer who needs to guarantee security. In this paper we provide meaningful lower bounds on the algebraic degree of modern block ciphers.
2020
ASIACRYPT
Improvements of Algebraic Attacks for solving the Rank Decoding and MinRank problems 📺
In this paper, we show how to significantly improve algebraic techniques for solving the MinRank problem, which is ubiquitous in multivariate and rank-metric code-based cryptography. In the case of the structured MinRank instances arising in the latter, we build upon a recent breakthrough in Bardet et al. (EUROCRYPT 2020) showing that algebraic attacks outperform the combinatorial ones that were considered state of the art up until now. Through a slight modification of this approach, we completely avoid Gröbner basis computations for certain parameters and are left only with solving linear systems. This not only substantially improves the complexity, but also gives a convincing argument as to why algebraic techniques work in this case. When used against the second round NIST-PQC candidates ROLLO-I-128/192/256, our new attack has bit complexities of 71, 87, and 151, respectively, compared to 117, 144, and 197 as obtained in Bardet et al. (EUROCRYPT 2020). The linear systems arise from the nullity of the maximal minors of a certain matrix associated to the algebraic modeling. We also use a similar approach to improve the algebraic MinRank solvers for the usual MinRank problem. When applied against the second round NIST-PQC candidates GeMSS and Rainbow, our attack has a complexity that is very close to or even slightly better than those of the best known attacks so far. Note that these latter attacks did not rely on MinRank techniques, since the MinRank approach previously gave complexities that were far from classical security levels.
2020
ASIACRYPT
Succinct Functional Commitment for a Large Class of Arithmetic Circuits 📺
A succinct functional commitment (SFC) scheme for a circuit class $\mathbf{CC}$ enables, for any circuit $\mathcal{C}\in \mathbf{CC}$, the committer to first succinctly commit to a vector $\vec{\alpha}$, and later succinctly open the commitment to $\mathcal{C} (\vec{\alpha}, \vec{\beta})$, where the verifier chooses $\vec{\beta}$ at the time of opening. Unfortunately, SFC schemes are known only for severely limited function classes, such as the class of inner products. By making non-black-box use of SNARK-construction techniques, we propose an SFC scheme for the large class of semi-sparse polynomials. The new SFC scheme can be used to, say, efficiently (1) implement sparse polynomials, and (2) aggregate various interesting SFC (e.g., vector commitment and polynomial commitment) schemes. The new scheme is evaluation-binding under a new instantiation of the computational uber-assumption. We provide a thorough analysis of the new assumption.
2020
ASIACRYPT
Secure MPC: Laziness Leads to GOD 📺
Motivated by what we call "honest but lazy" parties in the context of secure multi-party computation, we revisit the notion of multi-key FHE schemes (MFHE). In MFHE, any message encrypted using a public key pk_i can be "expanded" so that the resulting ciphertext is encrypted with respect to a set of public keys (pk_1,..,pk_n). Such expanded ciphertexts can be homomorphically evaluated with respect to any circuit to generate a ciphertext ct. Then, this ciphertext ct can be partially decrypted using a secret key sk_i (corresponding to the public key pk_i) to produce a partial decryption p_i. Finally, these partial decryptions {p_{i}}_{i in [n]} can be combined to recover the output. However, this definition of MFHE works only for n-out-of-n access structures and, thus, each node in the system is a point of failure. In the context of "honest but lazy" parties, it is necessary to be able to decrypt even when only given a subset of partial decryptions (say t out of n). In order to solve this problem, we introduce a new notion of multi-key FHE designed to handle arbitrary access patterns that can reconstruct the output. We call it a threshold multi-key FHE scheme (TMFHE). Our main contributions are the following:
* We formally define and construct TMFHE for any access structure given by a monotone Boolean formula, assuming LWE.
* We construct the first simulation-extractable multi-string NIZK from polynomially hard LWE.
* We use TMFHE and our multi-string NIZK to obtain the first round-optimal (three round) MPC protocol in the plain model with guaranteed output delivery secure against malicious adversaries or, more generally, mixed adversaries (which supports "honest but lazy" parties), assuming LWE.
* Our MPC protocols simultaneously achieve security against the maximum number of corruptions under which guaranteed output delivery is achievable, depth-proportional communication complexity, and reusability.
2020
ASIACRYPT
Cryptographic Group Actions and Applications 📺
Isogeny-based assumptions have emerged as a viable option for quantum-secure cryptography. Recent works have shown how to build efficient (public-key) primitives from isogeny-based assumptions such as CSIDH and CSI-FiSh. However, in its present form, the landscape of isogenies does not seem very amenable to realizing new cryptographic applications. Isogeny-based assumptions often have unique efficiency and security properties, which makes building new cryptographic applications from them a potentially tedious and time-consuming task. In this work, we propose a new framework based on group actions that enables the easy usage of a variety of isogeny-based assumptions. Our framework generalizes the works of Brassard and Yung (Crypto'90) and Couveignes (Eprint'06). We provide new definitions for group actions endowed with natural hardness assumptions that model isogeny-based constructions amenable to group actions, such as CSIDH and CSI-FiSh. We demonstrate the utility of our new framework by leveraging it to construct several primitives that were not previously known from isogeny-based assumptions. These include smooth projective hashing, dual-mode PKE, two-message statistically sender-private OT, and Naor-Reingold-style PRFs. These primitives are useful building blocks for a wide range of cryptographic applications. We introduce a new assumption over group actions called the Linear Hidden Shift (LHS) assumption. We then present some discussion of the security of the LHS assumption and show that it implies symmetric KDM-secure encryption, which in turn enables many other primitives that were not previously known from isogeny-based assumptions.
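The following sketch (ours) shows the Diffie-Hellman-style template that such group-action frameworks axiomatize, with classical modular exponentiation standing in for the action; an isogeny-based instantiation such as CSIDH would replace `act` while the protocol shape stays the same:

```python
import random

# Key exchange from an abstract commutative group action g * x.
# Toy stand-in: exponents acting on Z_P^* by act(g, x) = x^g mod P,
# so act(g, act(h, x)) == act(g * h, x) == act(h, act(g, x)).

P = 2**127 - 1                                   # a Mersenne prime (toy parameter)

def act(g: int, x: int) -> int:
    return pow(x, g, P)

x0 = 3                                           # public base element of the set
a = random.randrange(2, P - 1)                   # Alice's secret group element
b = random.randrange(2, P - 1)                   # Bob's secret group element
pa, pb = act(a, x0), act(b, x0)                  # exchanged public elements
assert act(a, pb) == act(b, pa)                  # commutativity yields the shared key
```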
2020
ASIACRYPT
Security Reductions for White-Box Key-Storage in Mobile Payments 📺
The goal of white-box cryptography is to provide security even when the cryptographic implementation is executed in adversarially controlled environments. White-box implementations nowadays appear in commercial products such as mobile payment applications, e.g., those certified by Mastercard. Interestingly, there, white-box cryptography is championed as a tool for secure storage of payment tokens, and importantly, the white-boxed storage functionality is bound to a hardware functionality to prevent code-lifting attacks. In this paper, we show that the approach of using hardware-binding and obfuscation for secure storage is conceptually sound. Following security specifications by Mastercard and also EMVCo, we first define security for white-box key derivation functions (WKDFs) that are bound to a hardware functionality. WKDFs with hardware-binding model a secure storage functionality, as the WKDFs in turn can be used to derive encryption keys for secure storage. We then provide a proof-of-concept construction of WKDFs based on pseudorandom functions (PRFs) and obfuscation. To show that our use of cryptographic primitives is sound, we perform a cryptographic analysis and reduce the security of our WKDF to the cryptographic assumptions of indistinguishability obfuscation and PRF-security. The hardware functionality that our WKDF is bound to is a PRF-like functionality. Obfuscation helps us hide the secret key used for verification, essentially emulating a signature functionality as provided by the Android key store. We rigorously define the required security properties of a hardware-bound white-box payment application (WPAY) for generating and encrypting valid payment requests. We construct a WPAY, which uses a WKDF as a secure building block. We thereby show that a WKDF can be securely combined with any secure symmetric encryption scheme, including those based on standard ciphers such as AES.
2020
ASIACRYPT
Cryptanalysis of Masked Ciphers: A not so Random Idea 📺
A new approach to the security analysis of hardware-oriented masked ciphers against second-order side-channel attacks is developed. By relying on techniques from symmetric-key cryptanalysis, concrete security bounds are obtained in a variant of the probing model that allows the adversary to make only a bounded, but possibly very large, number of measurements. Specifically, it is formally shown how a bounded-query variant of robust probing security can be reduced to the linear cryptanalysis of masked ciphers. As a result, the compositional issues of higher-order threshold implementations can be overcome without relying on fresh randomness. From a practical point of view, the aforementioned approach makes it possible to transfer many of the desirable properties of first-order threshold implementations, such as their low randomness usage, to the second-order setting. For example, a straightforward application to the block cipher LED results in a masking using less than 700 random bits including the initial sharing. In addition, the cryptanalytic approach introduced in this paper provides additional insight into the design of masked ciphers and allows for a quantifiable trade-off between security and performance.
2020
ASIACRYPT
A Combinatorial Approach to Quantum Random Functions 📺
Quantum pseudorandom functions (QPRFs) extend the classical security of a PRF by allowing the adversary to issue queries on input superpositions. Zhandry [Zhandry, FOCS 2012] showed a separation between the two notions and proved that common construction paradigms are also quantum secure, albeit with a new ad-hoc analysis. In this work, we revisit the question of constructing QPRFs and propose a new method starting from small-domain (classical) PRFs: At the heart of our approach is a new domain-extension technique based on bipartite expanders. Interestingly, our analysis is almost entirely classical. As a corollary of our main theorem, we obtain the first (approximate) key-homomorphic quantum PRF based on the quantum intractability of the learning with errors problem.
2020
ASIACRYPT
Calamari and Falafl: Logarithmic (Linkable) Ring Signatures from Isogenies and Lattices 📺
We construct efficient ring signatures (RS) from isogeny and lattice assumptions. Our ring signatures are based on a logarithmic OR proof for group actions. We instantiate this group action by either the CSIDH group action or an MLWE-based group action to obtain our isogeny-based or lattice-based RS scheme, respectively. Even though the OR proof has a binary challenge space and therefore requires a number of repetitions which is linear in the security parameter, the sizes of our ring signatures are small and scale better with the ring size $N$ than previously known post-quantum ring signatures. We also construct linkable ring signatures (LRS) that are almost as efficient as the non-linkable variants. The isogeny-based scheme produces signatures whose size is an order of magnitude smaller than all previously known logarithmic post-quantum ring signatures, but it is relatively slow (e.g. 5.5 KB signatures and 79 s signing time for rings with 8 members). In comparison, the lattice-based construction is much faster, but has larger signatures (e.g. 30 KB signatures and 90 ms signing time for the same ring size). For small ring sizes our lattice-based ring signatures are slightly larger than state-of-the-art schemes, but they are smaller for ring sizes larger than $N \approx 1024$.
2020
ASIACRYPT
Crowd Verifiable Zero-Knowledge and End-to-end Verifiable Multiparty Computation 📺
Auditing a secure multiparty computation (MPC) protocol entails the validation of the protocol transcript by a third party that is otherwise untrusted. In this work we introduce the concept of end-to-end verifiable MPC (VMPC), which requires the validation to provide a correctness guarantee even in the setting where all servers, trusted setup primitives and all the client systems utilized by the input-providing users of the MPC protocol are subverted by an adversary. To instantiate VMPC, we introduce a new concept in the setting of zero-knowledge protocols that we term crowd verifiable zero-knowledge (CVZK). A CVZK protocol enables a prover to convince a set of verifiers about a certain statement, even though each one individually contributes a small amount of entropy for verification and some of them are adversarially controlled. Given CVZK, we present a VMPC protocol that is based on discrete-logarithm related assumptions. At the high level of adversity that VMPC is meant to withstand, it is infeasible to ensure perfect correctness, thus we investigate the classes of functions and verifiability relations that are feasible in our framework, and present a number of possible applications the underlying functions of which can be implemented via VMPC.
2020
ASIACRYPT
Towards Closing The Security Gap of Tweak-aNd-Tweak (TNT) 📺
Tweakable block ciphers (TBCs) have been established as a valuable replacement for many applications of classical block ciphers. While several dedicated TBCs have been proposed in previous years, generic constructions that build a TBC from a classical block cipher are still highly useful, for example, to reuse an existing implementation. However, most generic constructions need an additional call to either the block cipher or a universal hash function to process the tweak, which limits their efficiency. To address this deficit, Bao et al. proposed Tweak-aNd-Tweak (TNT) at EUROCRYPT'20. Their construction chains three calls to independent keyed permutations and adds the unmodified tweak to the state in between the calls. They further suggested an efficient instantiation TNT-AES that was based on round-reduced AES for each of the permutations. Their work proved 2n/3-bit security for the construction, where n is the block size in bits. However, in the absence of an upper bound, their analysis had to consider all possible attack vectors with up to $2^n$ time, data, and memory. Still, closing the gap between both bounds remained a highly interesting research question. In this work, we show that a variant of Mennink's distinguisher on CLRW2 with $O(\sqrt{n}\,2^{3n/4})$ data and $O(2^{3n/2})$ time from TCC'18 also applies to TNT. We reduce its time complexity to $O(\sqrt{n}\,2^{3n/4})$, show the existence of a second similar distinguisher, and demonstrate how to transform the distinguisher into a key-recovery attack on TNT-AES[5,*,*] from an impossible differential. From a constructive point of view, we adapt the rigorous STPRP analysis of CLRW2 by Jha and Nandi to show $O(2^{3n/4})$ TPRP security for TNT. Thus, we move towards closing the gap between the previous proof and attacks for TNT as well as its proposed instance.
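The TNT construction itself is short enough to sketch directly; a toy model (ours) with 8-bit random permutations standing in for the keyed permutations:

```python
import random

# Toy TNT: three independent keyed permutations with the unmodified tweak
# XORed into the state between calls (8-bit toy parameters, ours).
random.seed(0)
N = 256

def rand_perm():
    p = list(range(N))
    random.shuffle(p)
    return p

pi1, pi2, pi3 = rand_perm(), rand_perm(), rand_perm()

def tnt(tweak: int, m: int) -> int:
    return pi3[pi2[pi1[m] ^ tweak] ^ tweak]

print(tnt(0x1F, 0x42), tnt(0x2A, 0x42))          # same message, different tweaks
```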
2020
ASIACRYPT
Improved Classical and Quantum Algorithms for Subset-Sum 📺
We present new classical and quantum algorithms for solving random subset-sum instances. First, we improve over the Becker-Coron-Joux algorithm (EUROCRYPT 2011) from $\widetilde{O}(2^{0.291 n})$ down to $\widetilde{O}(2^{0.283 n})$, using more general representations with values in $\{0,1,-1,2\}$. Next, we improve the state of the art of quantum algorithms for this problem in several directions. By combining the Howgrave-Graham-Joux algorithm (EUROCRYPT 2010) and quantum search, we devise an algorithm with asymptotic cost $\widetilde{O}(2^{0.236 n})$, lower than the cost of the quantum walk based on the same classical algorithm proposed by Bernstein, Jeffery, Lange and Meurer (PQCRYPTO 2013). This algorithm has the advantage of using \emph{classical} memory with quantum random access, while the previously known algorithms used the quantum walk framework, and required \emph{quantum} memory with quantum random access. We also propose new quantum walks for subset-sum, performing better than the previous best time complexity of $\widetilde{O}(2^{0.226 n})$ given by Helm and May (TQC 2018). We combine our new techniques to reach a time $\widetilde{O}(2^{0.216 n})$. This time is dependent on a heuristic on quantum walk updates, formalized by Helm and May, that is also required by the previous algorithms. We show how to partially overcome this heuristic, and we obtain an algorithm with quantum time $\widetilde{O}(2^{0.218 n})$ requiring only the standard classical subset-sum heuristics.
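For orientation (ours, not from the paper), the classical meet-in-the-middle baseline for subset-sum runs in time and memory about $2^{n/2}$; the representation technique the paper refines improves the exponent by allowing coefficients beyond $\{0,1\}$:

```python
from itertools import combinations
import random

# Horowitz-Sahni meet-in-the-middle for subset-sum: ~2^(n/2) time/memory.
def subset_sum_mitm(a, t):
    half = len(a) // 2
    left, right = a[:half], a[half:]
    sums = {}                                     # all 2^(n/2) left-half subset sums
    for r in range(len(left) + 1):
        for c in combinations(range(len(left)), r):
            sums[sum(left[i] for i in c)] = c
    for r in range(len(right) + 1):               # match right-half sums against the table
        for c in combinations(range(len(right)), r):
            need = t - sum(right[i] for i in c)
            if need in sums:
                return sorted(sums[need]) + [half + i for i in c]
    return None

random.seed(7)
a = [random.getrandbits(24) for _ in range(24)]
t = sum(a[i] for i in range(24) if i % 3 == 0)    # plant a solution
found = subset_sum_mitm(a, t)
assert found is not None and sum(a[i] for i in found) == t
```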
2020
ASIACRYPT
KVaC: Key-Value Commitments for Blockchains and Beyond 📺
As blockchains grow in size, validating new transactions becomes more and more resource intensive. To deal with this, there is a need to discover compact encodings of the (effective) state of a blockchain --- an encoding that allows for efficient proofs of membership and updates. In the case of account-based cryptocurrencies, the state can be represented by a key-value map, where keys are the account addresses and values consist of account balance, nonce, etc. We propose a new commitment scheme for key-value maps whose size does not grow with the number of keys, yet proofs of membership are of constant size. In fact, the encoding and the proofs consist of just two and three group elements, respectively (in groups of unknown order like class groups). Verifying and updating proofs involves just a few group exponentiations. Additive updates to key values enjoy the same level of efficiency. Key-value commitments can be used to build dynamic accumulators and vector commitments, which find applications in group signatures, anonymous credentials, verifiable databases, interactive oracle proofs, etc. Using our new key-value commitment, we provide the most efficient constructions of (sub)vector commitments to date.
2020
ASIACRYPT
Radical Isogenies 📺
This paper introduces a new approach to computing isogenies called ``radical isogenies'' and a corresponding method to compute chains of $N$-isogenies that is very efficient for small $N$. The method is fully deterministic and completely avoids generating $N$-torsion points. It is based on explicit formulae for the coordinates of an $N$-torsion point $P'$ on the codomain of a cyclic $N$-isogeny $\varphi : E \to E'$, such that composing $\varphi$ with $E' \to E' / \langle P' \rangle$ yields a cyclic $N^2$-isogeny. These formulae are simple algebraic expressions in the coefficients of $E$, the coordinates of a generator $P$ of $\ker \varphi$, and an $N$th root $\sqrt[N]{\rho}$, where the radicand $\rho$ itself is given by an easily computable algebraic expression in the coefficients of $E$ and the coordinates of $P$. The formulae can be iterated and are particularly useful when computing chains of $N$-isogenies over a finite field $\mathbb{F}_q$ with $\gcd(q-1, N) = 1$, where taking an $N$th root is a simple exponentiation. Compared to the state-of-the-art, our method results in an order of magnitude speed-up for $N \leq 13$; for larger $N$, the advantage disappears due to the increasing complexity of the formulae. When applied to CSIDH, we obtain a speed-up of about $19\%$ over the implementation by Bernstein, De Feo, Leroux and Smith for the CSURF-512 parameters.
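The arithmetic fact that makes these chains cheap is worth isolating: when $\gcd(q-1, N) = 1$, the $N$th-power map on $\mathbb{F}_q^*$ is a bijection, so the $N$th root is one exponentiation. A small sketch (ours, with toy parameters):

```python
# Unique N-th roots in F_q when gcd(q - 1, N) = 1: invert N modulo q - 1
# and exponentiate once (toy field; not a parameter set from the paper).

q = 2**255 - 19
N = 7
assert q % N != 1                                # for prime N this gives gcd(q - 1, N) = 1

def nth_root(rho: int) -> int:
    e = pow(N, -1, q - 1)                        # e = N^{-1} mod (q - 1)
    return pow(rho, e, q)

rho = 12345
r = nth_root(rho)
assert pow(r, N, q) == rho                       # r is indeed an N-th root of rho
```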
2020
ASIACRYPT
Subvert KEM to Break DEM: Practical Algorithm-Substitution Attacks on Public-Key Encryption 📺
Motivated by the widespread concern about mass surveillance of encrypted communications, Bellare et al. introduced at CRYPTO 2014 the notion of an Algorithm-Substitution Attack (ASA), where the legitimate encryption algorithm is replaced by a subverted one that aims to undetectably exfiltrate the secret key via ciphertexts. Practically implementable ASAs on various cryptographic primitives (Bellare et al., CRYPTO'14 & CCS'15; Ateniese et al., CCS'15; Berndt and Liśkiewicz, CCS'17) have been constructed and analyzed, leaking the secret key successfully. Nevertheless, in spite of much current attention, the practical impact of ASAs (formulated originally for symmetric key cryptography) on public-key encryption (PKE) operations remains unclear, primarily since the encryption operation of PKE does not involve the secret key, and previously known ASAs become relatively inefficient for leaking the plaintext due to the logarithmic upper bound on the exfiltration rate (Berndt and Liśkiewicz, CCS'17). In this work, we formulate a practical ASA on the PKE encryption algorithm which, perhaps surprisingly, turns out to be much more efficient and robust than existing ones, showing that ASAs on PKE schemes are far more dangerous than previously believed. We mainly target PKE of hybrid encryption, which is the most prevalent way to employ PKE in the literature and in practical systems. The main strategy of our ASA is to subvert the underlying key encapsulation mechanism (KEM) so that the session key encapsulated can be efficiently extracted, which, in turn, breaks the data encapsulation mechanism (DEM), enabling us to learn the plaintext itself. Concretely, our non-black-box attack enables recovering the plaintext from only two successive ciphertexts and minimally depends on a short state of previous internal randomness. A widely used class of KEMs is shown to be subvertible by our powerful attack. Our attack relies on a novel identification and formalization of specific non-black-box yet general enough properties that yield practical ASAs on KEMs. More broadly, this may shed some light on exploring the structural weaknesses of other composed cryptographic primitives, which may make them susceptible to more dangerous ASAs that surpass the logarithmic upper bound of exfiltration rate on universal ASAs.
2020
ASIACRYPT
Inner-Product Functional Encryption with Fine-Grained Access Control 📺
We construct new functional encryption schemes that combine the access control functionality of attribute-based encryption with the possibility of performing linear operations on the encrypted data. While such a primitive could be easily realized from fully fledged functional encryption schemes, what makes our result interesting is the fact that our schemes simultaneously achieve all the following properties. They are public-key, efficient, and can be proved secure under standard and well-established assumptions (such as LWE or pairings). Furthermore, security is guaranteed in the setting where adversaries are allowed to get functional keys that decrypt the challenge ciphertext. Our first results are two functional encryption schemes for the family of functions that allow users to embed policies (expressed by monotone span programs) in the encrypted data, so that one can generate functional keys to compute weighted sums on the latter. Both schemes are pairing-based and quite generic: they combine the ALS functional encryption scheme for inner products from Crypto 2016 with any attribute-based encryption scheme relying on the dual-system encryption methodology. As an additional bonus, they yield simple and elegant multi-input extensions essentially for free, thereby broadening the set of applications for such schemes. Multi-input is a particularly desirable feature in our setting, since it gives a finer access control over the encrypted data, by allowing users to associate different access policies to different parts of the encrypted data. Our second result builds identity-based functional encryption for inner products from lattices. This is achieved by carefully combining existing IBE schemes from lattices with adapted, LWE-based, variants of ALS. We point out intrinsic technical bottlenecks to obtaining richer forms of access control from lattices. From a conceptual point of view, all our results can be seen as further evidence that more expressive forms of functional encryption can be realized under standard assumptions and with little computational overhead.
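To fix ideas, here is a toy sketch (ours) of a DDH-style inner-product functional encryption scheme in the spirit of the ALS line of work cited above: a key for vector y reveals exactly the inner product <x, y> from an encryption of x. The group size and parameters are illustrative only, and no access control is modeled:

```python
import random

# Toy inner-product FE: sk_y = <s, y> decrypts g^{<x, y>} from Enc(x).
p = 2**61 - 1                                     # toy prime modulus
g = 3
L = 4                                             # vector length

s = [random.randrange(p - 1) for _ in range(L)]   # master secret key
h = [pow(g, si, p) for si in s]                   # master public key

def enc(x):
    r = random.randrange(p - 1)
    return pow(g, r, p), [pow(h[i], r, p) * pow(g, x[i], p) % p for i in range(L)]

def keygen(y):
    return sum(s[i] * y[i] for i in range(L))     # functional key for y

def dec(ct, y, sk_y, bound=10**4):
    c0, c = ct
    num = 1
    for i in range(L):
        num = num * pow(c[i], y[i], p) % p        # g^{r<s,y> + <x,y>}
    val = num * pow(c0, -sk_y, p) % p             # cancel the mask: g^{<x,y>}
    for m in range(bound):                        # small discrete log by search
        if pow(g, m, p) == val:
            return m

x, y = [1, 2, 3, 4], [5, 6, 7, 8]
assert dec(enc(x), y, keygen(y)) == 70            # <x, y> = 70
```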
2020
ASIACRYPT
Unbounded HIBE with Tight Security 📺
We construct the first unbounded hierarchical identity-based encryption (HIBE) scheme with tight security reductions based on standard assumptions. Our main technical contribution is a novel proof strategy that allows us to tightly randomize user secret keys for identities with arbitrary hierarchy depths using low entropy hidden in a small and hierarchy-independent master public key. The notion of unbounded HIBE is proposed by Lewko and Waters (Eurocrypt 2011). In contrast to most HIBE schemes, an unbounded scheme does not require any maximum depth to be specified in the setup phase, and user secret keys or ciphertexts can be generated for identities of arbitrary depths with hierarchy-independent system parameters. While all the previous unbounded HIBE schemes have security loss that grows at least linearly in the number of user secret key queries, the security loss of our scheme is only dependent on the security parameter, even in the multi-challenge setting, where an adversary can ask for multiple challenge ciphertexts. We prove the adaptive security of our scheme based on the Matrix Decisional Diffie-Hellman assumption in prime-order pairing groups, which generalizes a family of standard Diffie-Hellman assumptions such as k-Linear.
2020
ASIACRYPT
Asymptotically Good Multiplicative LSSS over Galois Rings and Applications to MPC over Z/p^k Z 📺
We study information-theoretic multiparty computation (MPC) protocols over rings Z/p^k Z that have good asymptotic communication complexity for a large number of players. An important ingredient for such protocols is arithmetic secret sharing, i.e., linear secret-sharing schemes with multiplicative properties. The standard way to obtain these over fields is with a family of linear codes $C$, such that $C$, $C^\perp$ and $C^2$ are asymptotically good (strongly multiplicative). For our purposes here it suffices if the square code $C^2$ is not the whole space, i.e., has codimension at least 1 (multiplicative). Our approach is to lift such a family of codes defined over a finite field $F$ to a Galois ring, which is a local ring that has $F$ as its residue field and that contains Z/p^k Z as a subring, and thus enables arithmetic that is compatible with both structures. Although arbitrary lifts preserve the distance and dual distance of a code, as we demonstrate with a counterexample, the multiplicative property is not preserved. We work around this issue by showing a dedicated lift that preserves \emph{self-orthogonality} (as well as distance and dual distance), for $p > 2$. Self-orthogonal codes are multiplicative, therefore we can use existing results on asymptotically good self-dual codes over fields to obtain arithmetic secret sharing over Galois rings. For $p = 2$ we obtain multiplicativity by using existing techniques of secret-sharing using both $C$ and $C^\perp$, incurring a constant overhead. As a result, we obtain asymptotically good arithmetic secret-sharing schemes over Galois rings. With these schemes in hand, we extend existing field-based MPC protocols to obtain MPC over Z/p^k Z, in the setting of a submaximal adversary corrupting less than a fraction $1/2 - \varepsilon$ of the players, where $\varepsilon > 0$ is arbitrarily small. We consider 3 different corruption models, and obtain $O(n)$ bits communicated per multiplication for both passive security and active security with abort. For full security with guaranteed output delivery we use a preprocessing model and get $O(n)$ bits per multiplication in the online phase and $O(n \log n)$ bits per multiplication in the offline phase. Thus, we obtain true linear bit complexities, without the common assumption that the ring size depends on the number of players.
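The multiplicative property at the core of the construction is easy to see over a prime field (our illustration; the paper's point is lifting it to Galois rings): products of degree-$t$ Shamir shares lie on a degree-$2t$ polynomial, so $2t+1$ parties jointly hold shares of the product:

```python
import random

# Shamir sharing over a prime field and its multiplicative property.
P = 2**31 - 1                                     # toy prime field
t, n = 2, 7                                       # degree t, with n >= 2t + 1 parties

def share(secret):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reconstruct(points):                          # Lagrange interpolation at 0
    acc = 0
    for i, v in points:
        num = den = 1
        for j, _ in points:
            if j != i:
                num = num * -j % P
                den = den * (i - j) % P
        acc = (acc + v * num * pow(den, -1, P)) % P
    return acc

x, y = 1234, 5678
sx, sy = share(x), share(y)
prod_shares = [(i + 1, sx[i] * sy[i] % P) for i in range(2 * t + 1)]
assert reconstruct(prod_shares) == x * y % P      # 2t+1 share products suffice
```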
2020
ASIACRYPT
New results on Gimli: full-permutation distinguishers and improved collisions 📺
Best Paper Award
Gimli is a family of cryptographic primitives (both a hash function and an AEAD scheme) that has been selected for the second round of the NIST competition for standardizing new lightweight designs. The candidate Gimli is built on top of the 384-bit Gimli permutation, which was presented at CHES 2017. In this paper, we study the security of both the permutation and the constructions that are based on it. We exploit the slow diffusion in Gimli and its internal symmetries to build, for the first time, a distinguisher on the full permutation with complexity $2^{64}$. We also provide a practical distinguisher, which we have implemented, on 23 out of the full 24 rounds of Gimli. Next, we give (full state) collision and semi-free-start collision attacks on Gimli-Hash, reaching respectively up to 12 and 18 rounds. On the practical side, we compute a collision on 8-round Gimli-Hash. In the quantum setting, these attacks reach 2 more rounds. Finally, we perform the first study of linear trails in the permutation, and we propose differential-linear attacks that reach up to 17 rounds of Gimli.
2020
ASIACRYPT
How to Build Optimally Secure PRFs Using Block Ciphers 📺
In EUROCRYPT '96, Aiello and Venkatesan proposed two candidates for $2n$-bit to $2n$-bit pseudorandom functions (PRFs), called Benes and modified Benes (or mBenes), based on $n$-bit to $n$-bit PRFs. While Benes is known to be secure up to $2^n$ queries (Patarin, AFRICACRYPT '08), the security of mBenes has only been proved up to $2^{n(1-\epsilon)}$ queries for all $\epsilon > 0$ by Patarin and Montreuil in ICISC '05. In this work, we show that the composition of a $2n$-bit hash function with mBenes is a secure variable-input-length (VIL) PRF up to $2^{n-2}$ queries (given appropriate hash function bounds). We extend our analysis with block ciphers as the underlying primitive and obtain two optimally secure VIL PRFs using block ciphers. The first of these candidates requires $6$ calls to the block cipher. The second candidate requires just $4$ calls to the block cipher, but here the proof is based on Patarin's mirror theory. Further, we instantiate the hash function with a PMAC+/LightMAC+-like hash, to get six candidates for deterministic message authentication codes with optimal security.
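A sketch (ours) of the Benes butterfly from n-bit PRFs, instantiated with keyed SHA-256 as a stand-in PRF; mBenes differs by simplifying the first layer, and the paper's VIL PRF prepends a 2n-bit hash, neither of which is shown here:

```python
import hashlib
import os

# Benes: eight n-bit-to-n-bit PRFs composed into a 2n-bit-to-2n-bit PRF.
NBYTES = 16                                       # n = 128 bits

def prf(key: bytes, x: bytes) -> bytes:
    return hashlib.sha256(key + x).digest()[:NBYTES]

keys = [os.urandom(16) for _ in range(8)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def benes(x1: bytes, x2: bytes):
    y1 = xor(prf(keys[0], x1), prf(keys[1], x2))  # first butterfly layer
    y2 = xor(prf(keys[2], x1), prf(keys[3], x2))
    z1 = xor(prf(keys[4], y1), prf(keys[5], y2))  # second butterfly layer
    z2 = xor(prf(keys[6], y1), prf(keys[7], y2))
    return z1, z2

print(benes(b'A' * NBYTES, b'B' * NBYTES)[0].hex())
```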
2020
ASIACRYPT
Security Limitations of Classical-Client Delegated Quantum Computing 📺
Secure delegated quantum computing allows a computationally weak client to outsource an arbitrary quantum computation to an untrusted quantum server in a privacy-preserving manner. One of the promising candidates to achieve classical delegation of quantum computation is classical-client remote state preparation ($\sf{RSP}_{CC}$), where a client remotely prepares a quantum state using a classical channel. However, the privacy loss incurred by employing $\sf{RSP}_{CC}$ as a sub-module is unclear. In this work, we investigate this question using the Constructive Cryptography framework by Maurer and Renner (ICS'11). We first identify the goal of $\sf{RSP}_{CC}$ as the construction of ideal $\sf{RSP}$ resources from classical channels and then reveal the security limitations of using $\sf{RSP}_{CC}$. First, we uncover a fundamental relationship between constructing ideal $\sf{RSP}$ resources (from classical channels) and the task of cloning quantum states. Any classically constructed ideal $\sf{RSP}$ resource must leak to the server the full classical description (possibly in an encoded form) of the generated quantum state, even if we target computational security only. As a consequence, we find that the realization of common $\sf{RSP}$ resources, without weakening their guarantees drastically, is impossible due to the no-cloning theorem. Second, the above result does not rule out that a specific $\sf{RSP}_{CC}$ protocol can replace the quantum channel at least in some contexts, such as the Universal Blind Quantum Computing ($\sf{UBQC}$) protocol of Broadbent et al. (FOCS '09). However, we show that the resulting UBQC protocol cannot maintain its proven composable security as soon as $\sf{RSP}_{CC}$ is used as a subroutine. Third, we show that replacing the quantum channel of the above $\sf{UBQC}$ protocol by the $\sf{RSP}_{CC}$ protocol QFactory of Cojocaru et al. (Asiacrypt '19) preserves the weaker, game-based security of $\sf{UBQC}$.
2020
ASIACRYPT
Packed Multiplication: How to Amortize the Cost of Side-channel Masking? 📺
Higher-order masking countermeasures provide strong provable security against side-channel attacks at the cost of incurring significant overheads, which largely hinders their applicability. Previous works towards remedying the cost mostly concentrated on ``local'' calculations, i.e., optimizing the cost of computation units such as a single AND gate or a field multiplication. This paper explores a complementary ``global'' approach, i.e., considering multiple operations in the masked domain as a batch and reducing randomness and computational cost via amortization. In particular, we focus on the amortization of $\ell$ parallel field multiplications for appropriate integer $\ell > 1$, and design a kit named {\it packed multiplication} for implementing such a batch. For $\ell+d\leq2^m$, when $\ell$ parallel multiplications over $\mathbb{F}_{2^{m}}$ with $d$-th order probing security are implemented, packed multiplication consumes $d^2+2\ell d + \ell$ bilinear multiplications and $2d^2 + d(d+1)/2$ random field variables, outperforming the state-of-the-art results with $O(\ell d^2)$ multiplications and $\ell \left \lfloor d^2/4\right \rfloor + \ell d$ randomness. To prove $d$-probing security for packed multiplications, we introduce some weaker security notions for multiple-inputs-multiple-outputs gadgets and use them as intermediate steps, which may be of independent interest. As parallel field multiplications exist almost everywhere in symmetric cryptography, lifting optimizations from ``local'' to ``global'' substantially enlarges the space of improvements. To demonstrate, we showcase the method on the AES Subbytes step, GCM, and TET (a popular disk encryption). Notably, when $d=8$, our implementation of AES Subbytes on the ARM Cortex-M architecture achieves a gain of up to $33\%$ in total speed and uses up to $68\%$ fewer random bits than the state-of-the-art bitsliced implementation reported at ASIACRYPT 2018.
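Plugging concrete numbers into the counts above makes the randomness saving visible (our arithmetic, straight from the abstract's formulas):

```python
# Cost of packed multiplication vs. the prior state of the art, for l
# parallel multiplications at masking order d (formulas from the abstract).

def packed(l, d):
    mults = d * d + 2 * l * d + l                 # bilinear multiplications
    rand = 2 * d * d + d * (d + 1) // 2           # random field variables
    return mults, rand

def prior_randomness(l, d):
    return l * (d * d // 4) + l * d               # l * floor(d^2 / 4) + l * d

l, d = 16, 8
print(packed(l, d))                               # (336, 164)
print(prior_randomness(l, d))                     # 384 random field variables
```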
2020
ASIACRYPT
Catalic: Delegated PSI Cardinality with Applications to Contact Tracing 📺
Private Set Intersection Cardinality (PSI-CA) allows two parties, each holding a set of items, to learn the size of the intersection of those sets without revealing any additional information. To the best of our knowledge, this work presents the first protocol that allows one of the parties to delegate PSI-CA computation to untrusted servers. At the heart of our delegated PSI-CA protocol is a new oblivious distributed key PRF (Odk-PRF) abstraction, which may be of independent interest. We explore in detail how to use our delegated PSI-CA protocol to perform privacy-preserving contact tracing. It has been estimated that a significant percentage of a given population would need to use a contact tracing app to stop a disease’s spread. Prior privacy-preserving contact tracing systems, however, impose heavy bandwidth or computational demands on client devices. These demands present an economic disincentive to participate for end users who may be billed per MB by their mobile data plan or for users who want to save battery life. We propose Catalic (ContAct TrAcing for LIghtweight Clients), a new contact tracing system that minimizes bandwidth cost and computation workload on client devices. By applying our new delegated PSI-CA protocol, Catalic shifts most of the client-side computation of contact tracing to untrusted servers, and potentially saves each user hundreds of megabytes of mobile data per day while preserving privacy.
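For readers unfamiliar with the underlying functionality, here is a toy version of the classical (non-delegated) DDH-style PSI-CA blueprint that delegated protocols such as the one above start from. The group is tiny and insecure, hash-to-group subtleties are ignored, and nothing here reflects the paper's Odk-PRF construction; it only illustrates why the client learns the cardinality and nothing more.

```python
# Toy DDH-style PSI-CA: the client learns only |X ∩ Y| because the server
# shuffles before answering. Tiny insecure group; collisions are possible.
import hashlib, random, secrets

p, q = 1019, 509                     # p = 2q + 1, both prime (toy sizes)

def hg(item: str) -> int:
    """Hash an item into the order-q subgroup of squares mod p."""
    h = int(hashlib.sha256(item.encode()).hexdigest(), 16) % p
    return pow(h, 2, p)

client_set = ["alice", "bob", "carol"]
server_set = ["bob", "carol", "dave", "erin"]

a = secrets.randbelow(q - 1) + 1                     # client's exponent
b = secrets.randbelow(q - 1) + 1                     # server's exponent

blinded = [pow(hg(x), a, p) for x in client_set]     # client -> server
random.shuffle(blinded)                              # server hides the mapping
double_blinded = [pow(v, b, p) for v in blinded]     # server -> client
server_keyed = [pow(hg(y), b, p) for y in server_set]

unblinded = {pow(v, pow(a, -1, q), p) for v in double_blinded}  # {hg(x)^b}
print("intersection size:", len(unblinded & set(server_keyed)))
```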
2020
ASIACRYPT
Secret-Shared Shuffle 📺
Generating additive secret shares of a shuffled dataset - such that neither party knows the order in which it is permuted - is a fundamental building block in many protocols, such as secure collaborative filtering, oblivious sorting, and secure function evaluation on set intersection. Traditional approaches to this problem either involve expensive public-key-based cryptography or symmetric-key cryptography on permutation networks. While public-key-based solutions are bandwidth efficient, they are computation-heavy. On the other hand, constructions based on permutation networks are communication-bound, especially when the dataset contains large elements, e.g., feature vectors in an ML context. We design a new 2-party protocol for this task of computing secret shares of shuffled data, which we refer to as secret-shared shuffle. Our protocol is secure against a static semi-honest adversary. At the heart of our approach is a new primitive we define (which we call ``Share Translation'') that generates two sets of pseudorandom values ``correlated via the permutation''. This allows us to reduce the problem of shuffling the dataset to the problem of shuffling pseudorandom values, which enables optimizations in both computation and communication. We then design a Share Translation protocol based on oblivious transfer and puncturable PRFs. Our final protocol for secret-shared shuffle uses lightweight operations like XOR and PRGs, and in particular does not use public-key operations besides the base OTs. As a result, our protocol is concretely more efficient than the existing solutions. In particular, we are two to three orders of magnitude faster than the public-key-based approach and one order of magnitude faster than the best known symmetric-key approach when the elements are moderately large.
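The following is a minimal, trusted-dealer rendering of the Share Translation idea over XOR shares; in the actual protocol the correlation is produced from base OTs and puncturable PRFs rather than by a dealer, and the function names below are illustrative.

```python
# Toy secret-shared shuffle from a dealer-generated "share translation"
# correlation: P0 holds share x0; P1 holds share x1 and the permutation.
import secrets

def share_translation_dealer(perm, n, width=8):
    """Dealer gives P0 random (a, b) and P1 delta[i] = a[perm[i]] ^ b[i]."""
    a = [secrets.randbits(width) for _ in range(n)]
    b = [secrets.randbits(width) for _ in range(n)]
    delta = [a[perm[i]] ^ b[i] for i in range(n)]
    return (a, b), delta

def shuffle_shares(x0, x1, perm):
    n = len(x0)
    (a, b), delta = share_translation_dealer(perm, n)
    m = [x0[i] ^ a[i] for i in range(n)]      # P0 -> P1: x0 masked by a
    # P1 permutes the masked message together with its own share, then
    # unmasks with delta; P0's new share is simply b.
    y1 = [m[perm[i]] ^ x1[perm[i]] ^ delta[i] for i in range(n)]
    return b, y1                              # fresh shares of perm(data)

data = [1, 2, 3, 4, 5]
x0 = [secrets.randbits(8) for _ in data]
x1 = [d ^ s for d, s in zip(data, x0)]
perm = [2, 0, 4, 1, 3]                        # known only to P1
y0, y1 = shuffle_shares(x0, x1, perm)
assert [u ^ v for u, v in zip(y0, y1)] == [data[i] for i in perm]
```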
2020
ASIACRYPT
Multi-Client Oblivious RAM with Poly-Logarithmic Communication 📺
Oblivious RAM enables oblivious access to memory in the single-client setting, which may not be the best fit in the network setting. Multi-client oblivious RAM (MCORAM) considers a collaborative but untrusted environment, where a database owner selectively grants read access and write access to different entries of a confidential database to multiple clients. Their access pattern must remain oblivious not only to the server but also to fellow clients. This upgrade rules out many techniques for constructing ORAM, forcing us to pursue new techniques. MCORAM not only provides an alternative solution to private anonymous data access (Eurocrypt 2019) but also serves as a promising building block for equipping oblivious file systems with access control and extending other advanced cryptosystems to the multi-client setting. Despite being a powerful object, the current state-of-the-art is unsatisfactory: The only existing scheme requires $O(\sqrt n)$ communication and client computation for a database of size $n$. Whether it is possible to reduce these complexities to $\mathsf{polylog}(n)$, thereby matching the upper bounds for ORAM, is an open problem, i.e., can we enjoy access control and client-obliviousness under the same bounds? Our first result answers the above question affirmatively by giving a construction from fully homomorphic encryption (FHE). Our main technical innovation is a new technique for cross-key trial evaluation of ciphertexts. We also consider the same question in the setting with $N$ non-colluding servers, out of which at most $t$ of them can be corrupt. We build multi-server MCORAM from distributed point functions (DPF), and propose new constructions of DPF via a virtualization technique with bootstrapping, assuming the existence of homomorphic secret sharing and pseudorandom generators in NC0, which are not known to imply FHE.
2020
ASIACRYPT
SQISign: Compact Post-Quantum signatures from Quaternions and Isogenies 📺
Best Paper Award
We introduce a new signature scheme, \emph{SQISign} (for \emph{Short Quaternion and Isogeny Signature}), from isogeny graphs of supersingular elliptic curves. The signature scheme is derived from a new one-round, high-soundness, interactive identification protocol. Targeting the post-quantum NIST-1 level of security, our implementation results in signatures of $204$ bytes, secret keys of $16$ bytes and public keys of $64$ bytes. In particular, the signature and public key sizes combined are an order of magnitude smaller than those of all other post-quantum signature schemes. On a modern workstation, our implementation in C takes 0.6s for key generation, 2.5s for signing, and 50ms for verification. While the soundness of the identification protocol follows from classical assumptions, the zero-knowledge property relies on the second main contribution of this paper. We introduce a new algorithm to find an isogeny path connecting two given supersingular elliptic curves of known endomorphism rings. A previous algorithm to solve this problem, due to Kohel, Lauter, Petit and Tignol, systematically reveals paths from the input curves to a `special' curve. This leakage would break the zero-knowledge property of the protocol. Our algorithm does not directly reveal such a path, and subject to a new computational assumption, we prove that the resulting identification protocol is zero-knowledge.
2020
ASIACRYPT
Post-Quantum Verification of Fujisaki-Okamoto 📺
We present a computer-verified formalization of the post-quantum security proof of the Fujisaki-Okamoto transform (as analyzed by Hövelmanns, Kiltz, Schäge, and Unruh, PKC 2020). The formalization is done in quantum relational Hoare logic and checked in the qrhl-tool (Unruh, POPL 2019).
2020
ASIACRYPT
Circular Security Is Complete for KDM Security 📺
Circular security is the most elementary form of key-dependent message (KDM) security, which allows us to securely encrypt only a copy of secret key bits. In this work, we show that circular security is complete for KDM security in the sense that an encryption scheme satisfying this security notion can be transformed into one satisfying KDM security with respect to all functions computable by a-priori bounded-size circuits (bounded-KDM security). This result holds in the presence of any number of keys and in any of the secret-key/public-key and CPA/CCA settings. Such a completeness result was previously shown by Applebaum (EUROCRYPT 2011) for KDM security with respect to projection functions (projection-KDM security), which allows us to securely encrypt both a copy and a negation of secret key bits. Besides amplifying the strength of KDM security, our transformation can in fact start from an encryption scheme satisfying circular security against CPA attacks and produce one satisfying bounded-KDM security against CCA attacks. This result improves the recent result by Kitagawa and Matsuda (TCC 2019) showing a CPA-to-CCA transformation for KDM-secure public-key encryption schemes.
2020
ASIACRYPT
Towards Efficiency-Preserving Round Compression in MPC: Do fewer rounds mean more computation? 📺
Reducing the rounds of interaction in secure multiparty computation (MPC) protocols has been the topic of study of many works. One popular approach to reduce rounds is to construct {\em round compression compilers}. A round compression compiler is one that takes a highly interactive protocol and transforms it into a protocol with far fewer rounds. The design of round compression compilers has traditionally focused on preserving the security properties of the underlying protocol; in particular, not much attention has been given to preserving their computational and communication efficiency. Indeed, the recent round compression compilers that yield round-optimal MPC protocols incur large computational and communication overhead. In this work, we initiate the study of {\em efficiency-preserving} round compression compilers, i.e., compilers that translate the efficiency benefits of the underlying highly interactive protocols to the fewer-round setting. Focusing on the honest majority setting (with near-optimal corruption threshold $\frac{1}{2} - \varepsilon$, for any $\varepsilon > 0$), we devise a new compiler that yields two-round (i.e., round-optimal) semi-honest MPC with communication efficiency similar to that of the underlying (arbitrary round) protocol. By applying our compiler on the most efficient known MPC protocols, we obtain a two-round semi-honest protocol based on one-way functions, with total communication (and per-party computation) cost $\widetilde{O}(s+n^4)$ -- a significant improvement over prior two-round protocols with cost $\widetilde{O}(n^\tau s+n^{\tau+1}d)$, where $\tau\geq 2$, $s$ is the size of the circuit computing the function and $d$ the corresponding depth. Our result can also be extended to handle malicious adversaries, either using stronger assumptions in the public key infrastructure (PKI) model, or in the plain model using an extra round. An artifact of our approach is that the resultant protocol is ``unbalanced'' in the amount of computation performed by different parties. We give evidence that this is {\em necessary} in our setting. Our impossibility result makes novel use of the ``MPC-in-the-head'' paradigm which has typically been used to demonstrate feasibility results.
2020
ASIACRYPT
Quantum Circuit Implementations of AES with Fewer Qubits 📺
We propose some quantum circuit implementations of AES with the following improvements. Firstly, we propose some quantum circuits of the AES S-box and S-box$^{-1}$, which require fewer qubits than prior work. Secondly, we reduce the number of qubits in the zig-zag method by introducing the S-box$^{-1}$ operation in our quantum circuits of AES. Thirdly, we present a method to reduce the number of qubits in the key schedule of AES. While the previous quantum circuits of AES-128, AES-192, and AES-256 need at least 864, 896, and 1232 qubits respectively, our quantum circuit implementations of AES-128, AES-192, and AES-256 only require 512, 640, and 768 qubits respectively, reducing the number of qubits by more than 30\%.
2020
ASIACRYPT
ALBATROSS: publicly AttestabLe BATched Randomness based On Secret Sharing 📺
In this paper we present ALBATROSS, a family of multiparty randomness generation protocols with guaranteed output delivery and public verification that allows one to trade off corruption tolerance for a much improved amortized computational complexity. Our basic stand-alone protocol is based on publicly verifiable secret sharing (PVSS) and is secure in the random oracle model under the decisional Diffie-Hellman (DDH) hardness assumption. We also address the important issue of constructing Universally Composable randomness beacons, showing two UC versions of ALBATROSS: one based on simple UC NIZKs and another one based on novel efficient ``designated verifier'' homomorphic commitments. Interestingly, this latter version can be instantiated from a global random oracle under the weaker computational Diffie-Hellman (CDH) assumption. An execution of ALBATROSS with $n$ parties, out of which up to $t=(1/2-\epsilon)\cdot n$ are corrupt for a constant $\epsilon>0$, generates $\Theta(n^2)$ uniformly random values, requiring in the worst case an amortized cost per party of $\Theta(\log n)$ exponentiations per random value. We significantly improve on the SCRAPE protocol (Cascudo and David, ACNS 17), which required $\Theta(n^2)$ exponentiations per party to generate one uniformly random value. This is mainly achieved via two techniques: first, the use of packed Shamir secret sharing for the PVSS; second, the use of linear $t$-resilient functions (computed via a Fast Fourier Transform-based algorithm) to improve the randomness extraction.
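To make the first technique concrete, here is a minimal packed Shamir sharing over a small prime field: $k$ secrets sit at fixed evaluation points of a single polynomial, so any $t+k$ shares recover all of them in one interpolation. The field and parameters are toy choices, not those of ALBATROSS.

```python
# k secrets packed into one polynomial of degree t+k-1 over a prime field.
import random

P = 2**31 - 1                         # prime field (toy choice)

def interpolate(points, x_eval):
    """Evaluate at x_eval the unique polynomial through `points` (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x_eval - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def packed_share(secrets_vec, t, n):
    k = len(secrets_vec)
    # Secrets pinned at x = P-1, ..., P-k; t random anchors fix the rest.
    anchors = [(P - 1 - i, s) for i, s in enumerate(secrets_vec)]
    anchors += [(P - k - 1 - j, random.randrange(P)) for j in range(t)]
    return [(x, interpolate(anchors, x)) for x in range(1, n + 1)]

secrets_vec = [11, 22, 33]
shares = packed_share(secrets_vec, t=2, n=8)
recovered = [interpolate(shares[:5], P - 1 - i) for i in range(3)]  # t+k = 5
assert recovered == secrets_vec
```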
2020
ASIACRYPT
Oblivious Pseudorandom Functions from Isogenies 📺
An oblivious PRF, or OPRF, is a protocol between a client and a server, where the server has a key $k$ for a secure pseudorandom function $F$, and the client has an input $x$ for the function. At the end of the protocol the client learns $F(k,x)$, and nothing else, and the server learns nothing. An OPRF is verifiable if the client is convinced that the server has evaluated the PRF correctly with respect to a prior commitment to $k$. OPRFs and verifiable OPRFs have numerous applications, such as private-set-intersection protocols, password-based key-exchange protocols, and defense against denial-of-service attacks. Existing OPRF constructions use RSA-, Diffie-Hellman-, and lattice-type assumptions. The first two are not post-quantum secure. In this paper we construct OPRFs and verifiable OPRFs from isogenies. Our main construction uses isogenies of supersingular elliptic curves over $\mathbb{F}_{p^2}$ and tries to adapt the Diffie-Hellman OPRF to that setting. However, a recent attack on supersingular-isogeny systems due to Galbraith~et~al.~[ASIACRYPT 2016] makes this approach difficult to secure. To overcome this attack, and to validate the server's response, we develop two new zero-knowledge protocols that convince each party that its peer has sent valid messages. With these protocols in place, we obtain an OPRF in the SIDH setting and prove its security in the UC framework. Our second construction is an adaptation of the Naor-Reingold PRF to commutative group actions. Combining it with recent constructions of oblivious transfer from isogenies, we obtain an OPRF in the CSIDH setting.
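For orientation, this is the classical blinded Diffie-Hellman OPRF that the paper's main construction adapts to the isogeny setting; the 10-bit group below is wildly insecure and purely illustrative.

```python
# Blinded-DH OPRF: client learns F(k, x) = H2(x, H1(x)^k); the server sees
# only a blinded group element. Requires Python 3.8+ for pow(r, -1, q).
import hashlib, secrets

p, q = 1019, 509                 # safe prime p = 2q + 1 (toy, insecure)

def hash_to_group(x: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)          # square into the order-q subgroup

k = secrets.randbelow(q - 1) + 1                 # server's PRF key

x = b"client input"
r = secrets.randbelow(q - 1) + 1                 # client's blinding factor
a = pow(hash_to_group(x), r, p)                  # client -> server: H1(x)^r
b = pow(a, k, p)                                 # server -> client: H1(x)^(rk)

y = pow(b, pow(r, -1, q), p)                     # client unblinds: H1(x)^k
prf_value = hashlib.sha256(x + y.to_bytes(2, "big")).hexdigest()
assert y == pow(hash_to_group(x), k, p)          # matches direct evaluation
```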
2020
ASIACRYPT
Non-Interactive Composition of Sigma-Protocols via Share-then-Hash 📺
Proofs of partial knowledge demonstrate the possession of certain subsets of witnesses for a given collection of statements $x_1,\dots,x_n$. Cramer, Damg{\aa}rd, and Schoenmakers (CDS) built proofs of partial knowledge, given "atomic" protocols for individual statements $x_i$, by having the prover randomly secret-share the verifier's challenge and using the shares as challenges for the atomic protocols. This simple and highly influential transformation has been used in numerous applications, ranging from anonymous credentials to ring signatures. We consider what happens if, instead of using the shares directly as challenges, the prover first hashes them. We show that this elementary enhancement can result in significant benefits: - the proof contains a {\em single} atomic transcript per statement $x_i$, - it suffices that the atomic protocols are $k$-special sound for $k \geq 2$, - when compiled using the Fiat-Shamir heuristic, the protocol retains its soundness in the {\em non-programmable} random oracle model. None of the above features is satisfied by the CDS transformation.
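A minimal sketch of the share-then-hash idea for a 1-out-of-2 Schnorr OR-proof follows: the verifier's challenge $c$ is XOR-shared into $(c_1, c_2)$, and each atomic run is verified against the hash of its share. Group parameters are toy-sized, and the concrete coding choices are illustrative assumptions, not the paper's.

```python
# Share-then-hash OR-composition of two Schnorr proofs (toy parameters).
import hashlib, secrets

p, q, g = 1019, 509, 4          # g generates the order-q squares mod p

def H(share: int) -> int:        # hash a share into a Schnorr challenge
    d = hashlib.sha256(share.to_bytes(8, "big")).digest()
    return int.from_bytes(d, "big") % q

# Statements: y1 = g^w1 (prover knows w1), y2 = g^w2 (prover does not).
w1 = secrets.randbelow(q)
y1, y2 = pow(g, w1, p), pow(g, secrets.randbelow(q), p)

# Prover's first message: real commitment for statement 1, simulated
# transcript for statement 2 under a self-chosen share c2.
r1 = secrets.randbelow(q)
t1 = pow(g, r1, p)
c2, z2 = secrets.randbits(32), secrets.randbelow(q)
t2 = pow(g, z2, p) * pow(y2, -H(c2), p) % p   # makes check 2 pass

# Verifier's challenge; the remaining share c1 is forced by c = c1 ^ c2.
c = secrets.randbits(32)
c1 = c ^ c2
z1 = (r1 + H(c1) * w1) % q

# Verifier: shares recombine to c; both runs verify under hashed shares.
assert c1 ^ c2 == c
assert pow(g, z1, p) == t1 * pow(y1, H(c1), p) % p
assert pow(g, z2, p) == t2 * pow(y2, H(c2), p) % p
```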
2020
CRYPTO
Compressed Sigma-Protocol Theory and Practical Application to Plug & Play Secure Algorithmics 📺
Sigma-Protocols provide a well-understood basis for secure algorithmics. Recently, Bulletproofs (Bootle et al., EUROCRYPT 2016, and Bünz et al., S&P 2018) have been proposed as a drop-in replacement in case of zero-knowledge (ZK) for arithmetic circuits, achieving logarithmic communication instead of linear. Their pivot is an ingenious, logarithmic-size proof of knowledge BP for certain quadratic relations. However, reducing ZK for general relations to it forces a somewhat cumbersome ``reinvention'' of cryptographic protocol theory. We take a rather different viewpoint and reconcile Bulletproofs with Sigma-Protocol Theory such that (a) simpler circuit ZK is developed within established theory, while (b) exactly the same logarithmic communication is achieved. The natural key here is linearization. First, we repurpose BPs as a blackbox compression mechanism for standard Sigma-Protocols handling ZK proofs of general linear relations (on compactly committed secret vectors); our pivot. Second, we reduce the case of general nonlinear relations to blackbox applications of our pivot via a novel variation on arithmetic secret sharing based techniques for Sigma-Protocols (Cramer et al., ICITS 2012). Orthogonally, we enhance versatility by enabling scenarios not previously addressed, e.g., when a secret input is dispersed across several commitments. Standard implementation platforms leading to logarithmic communication follow from a Discrete-Log assumption or a generalized Strong-RSA assumption. Also, under a Knowledge-of-Exponent Assumption (KEA) communication drops to constant, as in zk-SNARKs. All in all, our theory should more generally be useful for modular (``plug & play'') design of practical cryptographic protocols; this is further evidenced by our separate work (2020) on proofs of partial knowledge.
2020
CRYPTO
New Techniques for Traitor Tracing: Size N^{1/3} and More from Pairings 📺
The best existing traitor tracing scheme from pairings achieves $O(\sqrt{N})$-sized parameters, which has stood since 2006. This intuitively seems to be consistent with the fact that pairings allow for degree-2 computations, yielding a quadratic compression. In this work, we show that this intuition is false by building a traitor tracing scheme from pairings with $O(\sqrt[3]{N})$-sized parameters. We obtain our scheme by developing a number of new traitor tracing techniques offering various trade-offs that were not possible before, giving the first significant parameter improvements in pairings-based traitor tracing in over a decade.
2020
CRYPTO
New Techniques for Zero-Knowledge: Leveraging Inefficient Provers to Reduce Assumptions, Interaction, and Trust 📺
We present a transformation from NIZK with inefficient provers in the uniform random string (URS) model to ZAPs (two message witness indistinguishable proofs) with inefficient provers. While such a transformation was known for the case where the prover is efficient, the security proof breaks down if the prover is inefficient. Our transformation is obtained via new applications of Nisan-Wigderson designs, a combinatorial object originally introduced in the derandomization literature. We observe that our transformation is applicable both in the setting of super-polynomial provers/poly-time adversaries, as well as a new fine-grained setting, where the prover is polynomial time and the verifier/simulator/zero knowledge distinguisher are in a lower complexity class, such as $\mathsf{NC}^1$. We also present $\mathsf{NC}^1$-fine-grained NIZK in the URS model for all of NP from the worst-case assumption $\oplus L/\mathsf{poly} \not\subseteq \mathsf{NC}^1$. Our techniques yield the following applications: --ZAPs for $\mathsf{AM}$ from Minicrypt assumptions (with super-polynomial time provers), --$\mathsf{NC}^1$-fine-grained ZAPs for $\mathsf{NP}$ from worst-case assumptions, --Protocols achieving an ``offline'' notion of NIZK (oNIZK) in the standard (no-CRS) model with uniform soundness in both the super-polynomial setting (from Minicrypt assumptions) and the $\mathsf{NC}^1$-fine-grained setting (from worst-case assumptions). The oNIZK notion is sufficient for use in indistinguishability-based proofs.
2020
CRYPTO
On Succinct Arguments and Witness Encryption from Groups 📺
Succinct non-interactive arguments (SNARGs) enable proofs of NP statements with very low communication. Recently, there has been significant work in both theory and practice on constructing SNARGs with very short proofs. Currently, the state-of-the-art in succinctness is due to Groth (Eurocrypt 2016) who constructed a SNARG from bilinear maps where the proof consists of just 3 group elements. In this work, we first construct a concretely-efficient designated-verifier (preprocessing) SNARG with inverse polynomial soundness, where the proof consists of just 2 group elements in a standard (generic) group. This leads to a 50% reduction in concrete proof size compared to Groth's construction. We follow the approach of Bitansky et al. (TCC 2013) who describe a compiler from linear PCPs to SNARGs in the preprocessing model. Our improvement is based on a new linear PCP packing technique that allows us to construct 1-query linear PCPs which can then be compiled into a SNARG (using ElGamal encryption over a generic group). An appealing feature of our new SNARG is that the verifier can precompute a statement-independent lookup table in an offline phase; verifying proofs then only requires 2 exponentiations and a single table lookup. This makes our new designated-verifier SNARG appealing in settings that demand fast verification and minimal communication. We then turn to the question of constructing arguments where the proof consists of a single group element. Here, we first show that any (possibly interactive) argument for a language L where the verification algorithm is "generic" (i.e., only performs generic group operations) and the proof consists of a single group element, implies a witness encryption scheme for L. We then show that under a yet-unproven, but highly plausible, hypothesis on the hardness of approximating the minimal distance of linear codes, we can construct a 2-message laconic argument for NP where the proof consists of a single group element. Under the same hypothesis, we obtain a witness encryption scheme for NP in the generic group model. Along the way, we show that under a conceptually-similar but proven hardness of approximation result, there is a 2-message laconic argument for NP with negligible soundness error where the prover's message consists of just 2 group elements. In both settings, we obtain laconic arguments (and linear PCPs) with linear decision procedures. Our constructions circumvent a previous lower bound by Groth on such argument systems with linear decision procedures by relying on imperfect completeness. Namely, our constructions have vanishing but not negligible completeness error, while the lower bound of Groth implicitly assumes negligible completeness error of the underlying argument. Our techniques thus highlight new avenues for designing linear PCPs, succinct arguments, and witness encryption schemes.
2020
CRYPTO
Cryptanalytic Extraction of Neural Network Models 📺
We argue that the machine learning problem of model extraction is actually a cryptanalytic problem in disguise, and should be studied as such. Given oracle access to a neural network, we introduce a differential attack that can efficiently steal the parameters of the remote model up to floating point precision. Our attack relies on the fact that ReLU neural networks are piecewise linear functions, and thus queries at the critical points reveal information about the model parameters. We evaluate our attack on multiple neural network models and extract models that are 2^20 times more precise and require 100x fewer queries than prior work. For example, we extract a 100,000 parameter neural network trained on the MNIST digit recognition task with 2^21.5 queries in under an hour, such that the extracted model agrees with the oracle on all inputs up to a worst-case error of 2^-25, or a model with 4,000 parameters in 2^18.5 queries with worst-case error of 2^-40.4. Code is available at https://github.com/google-research/cryptanalytic-model-extraction.
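The attack's starting point is easy to reproduce: because a ReLU network is piecewise linear, a change in the directional derivative betrays a critical point, which bisection can localize. Below is a minimal sketch on a random toy network (assuming, as is generically true, that slopes on distinct linear pieces differ); the real attack then infers weights from many such points.

```python
# Locate a critical point of a one-hidden-layer ReLU network by bisecting
# on a change in the finite-difference directional derivative.
import numpy as np

rng = np.random.default_rng(1)
A, b = rng.normal(size=(8, 4)), rng.normal(size=8)   # hidden layer (secret)
w = rng.normal(size=8)                               # output layer (secret)

def f(x):                        # the only access the attacker has
    return w @ np.maximum(A @ x + b, 0.0)

def slope(x, d, eps=1e-6):       # directional derivative along d
    return (f(x + eps * d) - f(x)) / eps

x0, d = rng.normal(size=4), rng.normal(size=4)
lo, hi = -50.0, 50.0             # wide range: some kink lies inside w.h.p.
assert not np.isclose(slope(x0 + lo * d, d), slope(x0 + hi * d, d))
for _ in range(60):              # bisection on the slope change
    mid = (lo + hi) / 2
    if np.isclose(slope(x0 + lo * d, d), slope(x0 + mid * d, d)):
        lo = mid                 # kink is in (mid, hi]
    else:
        hi = mid                 # kink is in (lo, mid]
xc = x0 + hi * d                 # some hidden unit's input is ~0 here
print("min |pre-activation| at xc:", np.abs(A @ xc + b).min())
```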
2020
CRYPTO
Dynamic Decentralized Functional Encryption 📺
We introduce Dynamic Decentralized Functional Encryption (DDFE), a generalization of Functional Encryption which allows multiple users to join the system dynamically, without relying on a trusted third party or on expensive and interactive Multi-Party Computation protocols. This notion subsumes existing multi-user extensions of Functional Encryption, such as Multi-Input, Multi-Client, and Ad Hoc Multi-Input Functional Encryption. We define and construct schemes for various functionalities which serve as building blocks for the latter primitives and may be useful in their own right, such as a scheme for dynamically computing sums in any Abelian group. These constructions build upon simple primitives in a modular way, and have instantiations from well-studied assumptions, such as DDH or LWE. Our constructions culminate in an Inner-Product scheme for computing weighted sums on aggregated encrypted data, from standard assumptions in prime-order groups in the Random Oracle Model.
2020
CRYPTO
Slide Reduction, Revisited—Filling the Gaps in SVP Approximation 📺
We show how to generalize Gama and Nguyen's slide reduction algorithm [STOC '08] for solving the approximate Shortest Vector Problem over lattices (SVP) to allow for arbitrary block sizes, rather than just block sizes that divide the rank $n$ of the lattice. This leads to significantly better running times for most approximation factors. We accomplish this by combining slide reduction with the DBKZ algorithm of Micciancio and Walter [Eurocrypt '16]. We also show a different algorithm that works when the block size is quite large---at least half the total rank. This yields the first non-trivial algorithm for sublinear approximation factors. Together with some additional optimizations, these results yield significantly faster provably correct algorithms for $\delta$-approximate SVP for all approximation factors $n^{1/2+\epsilon} \leq \delta \leq n^{O(1)}$, which is the regime most relevant for cryptography. For the specific values of $\delta = n^{1-\epsilon}$ and $\delta = n^{2-\epsilon}$, we improve the exponent in the running time by a factor of $2$ and a factor of $1.5$ respectively.
2020
CRYPTO
Practical Product Proofs for Lattice Commitments 📺
We construct a practical lattice-based zero-knowledge argument for proving multiplicative relations between committed values. The underlying commitment scheme that we use is the currently most efficient one of Baum et al. (SCN 2018), and the size of our multiplicative proof is only slightly larger than that of the proof for just proving knowledge of the committed values. We additionally improve on the results of Lyubashevsky and Seiler (Eurocrypt 2018) to show that the above-mentioned techniques can work over rings $\mathbb{Z}_q[X]/(X^d+1)$ where $X^d+1$ splits into low-degree factors, which is a property necessary for many applications. In particular, we use Fourier analysis to show that the NTT coefficients of random small-norm challenges are not concentrated on any particular value.
2020
CRYPTO
Anonymous Tokens with Private Metadata Bit 📺
We present a cryptographic construction for anonymous tokens with a private metadata bit, called PMBTokens. This primitive enables an issuer to provide a user with a lightweight, single-use anonymous trust token that can embed a single private bit, which is accessible only to the party who holds the secret authority key and is private with respect to anyone else. Our construction generalizes and extends the functionality of Privacy Pass (PETS '18) with this private metadata bit capability. It provides unforgeability, unlinkability, and privacy of the metadata bit, based on the DDH and CTDH assumptions in the random oracle model. Both Privacy Pass and PMBTokens rely on non-interactive zero-knowledge proofs (NIZKs). We present new techniques to remove the need for NIZKs, while still achieving unlinkability. We implement our constructions and we report their efficiency costs.
2020
CRYPTO
Round-optimal Black-box Commit-and-prove with Succinct Communication 📺
We give a four-round black-box construction of a commit-and-prove protocol with succinct communication. Our construction is WI and has constant soundness error, and it can be upgraded into one that is ZK and has negligible soundness error by relying on a round-preserving transformation of Khurana et al. (TCC 2018). Our construction is obtained by combining the MPC-in-the-head technique of Ishai et al. (SICOMP 2009) with the two-round succinct argument of Kalai et al. (STOC 2014), and the main technical novelty lies in the analysis of the soundness---we show that, although the succinct argument of Kalai et al. does not necessarily provide soundness for NP statements, it can be used in the MPC-in-the-head technique for proving the consistency of committed MPC views. Our construction is based on sub-exponentially hard collision-resistant hash functions, two-round PIRs, and two-round OTs.
2020
CRYPTO
Black-Box Transformations from Passive to Covert Security with Public Verifiability 📺
In the context of secure computation, protocols with security against covert adversaries ensure that any misbehavior by malicious parties will be detected by the honest parties with some constant probability. As such, these protocols provide better security guarantees than passively secure protocols and, moreover, are easier to construct than protocols with full security against active adversaries. Protocols that, upon detecting a cheating attempt, allow the honest parties to compute a certificate that enables third parties to verify whether an accused party misbehaved or not are called publicly verifiable. In this work, we present the first generic compilers for constructing two-party protocols with covert security and public verifiability from protocols with passive security. We present two separate compilers, which are both fully black-box in the underlying protocols they use. Both of them only incur a constant multiplicative factor in terms of bandwidth overhead and a constant additive factor in terms of round complexity on top of the passively secure protocols they use. The first compiler applies to all two-party protocols that have no private inputs. This class of protocols covers the important class of preprocessing protocols that are used to set up correlated randomness among parties. We use our compiler to obtain the first secret-sharing based two-party protocol with covert security and public verifiability. Notably, the produced protocol achieves public verifiability essentially for free when compared with the best known previous solutions based on secret-sharing that did not provide public verifiability. Our second compiler constructs protocols with covert security and public verifiability for arbitrary functionalities from passively secure protocols. It uses our first compiler to perform a setup phase, which is independent of the parties' inputs as well as the protocol they would like to execute. Finally, we show how to extend our techniques to obtain multiparty computation protocols with covert security and public verifiability against arbitrary constant fractions of corruptions.
2020
CRYPTO
Black-box use of One-way Functions is Useless for Optimal Fair Coin-Tossing 📺
A two-party fair coin-tossing protocol guarantees output delivery to the honest party even when the other party aborts during the protocol execution. Cleve (STOC--1986) demonstrated that a computationally bounded fail-stop adversary could alter the output distribution of the honest party by (roughly) $1/r$ (in the statistical distance) in an $r$-message coin-tossing protocol. An optimal fair coin-tossing protocol ensures that no adversary can alter the output distribution beyond $1/r$. In a seminal result, Moran, Naor, and Segev (TCC--2009) constructed the first optimal fair coin-tossing protocol using (unfair) oblivious transfer protocols. Whether the existence of oblivious transfer protocols is a necessary hardness of computation assumption for optimal fair coin-tossing remains among the most fundamental open problems in theoretical cryptography. The results of Impagliazzo and Luby (FOCS--1989) and Cleve and Impagliazzo (1993) prove that optimal fair coin-tossing implies the necessity of one-way functions' existence; a significantly weaker hardness of computation assumption compared to the existence of secure oblivious transfer protocols. However, the sufficiency of the existence of one-way functions is not known. Towards this research endeavor, our work proves a black-box separation of optimal fair coin-tossing from the existence of one-way functions. That is, the black-box use of one-way functions cannot enable optimal fair coin-tossing. Following the standard Impagliazzo and Rudich (STOC--1989) approach of proving black-box separations, our work considers any $r$-message fair coin-tossing protocol in the random oracle model where the parties have unbounded computational power. We demonstrate a fail-stop attack strategy for one of the parties to alter the honest party's output distribution by $1/\sqrt{r}$ by making polynomially-many additional queries to the random oracle. As a consequence, our result proves that the $r$-message coin-tossing protocol of Blum (COMPCON--1982) and Cleve (STOC--1986), which uses one-way functions in a black-box manner, is the best possible protocol because an adversary cannot change the honest party's output distribution by more than $1/\sqrt{r}$. Several previous works, for example, Dachman-Soled, Lindell, Mahmoody, and Malkin (TCC--2011), Haitner, Omri, and Zarosim (TCC--2013), and Dachman-Soled, Mahmoody, and Malkin (TCC--2014), made partial progress on proving this black-box separation assuming some restrictions on the coin-tossing protocol. Our work diverges significantly from these previous approaches to prove this black-box separation in its full generality. The starting point is the recently introduced potential-based inductive proof techniques for demonstrating large gaps in martingales in the information-theoretic plain model. Our technical contribution lies in identifying a global invariant of communication protocols in the random oracle model that enables the extension of this technique to the random oracle model.
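For reference, the Blum protocol that this result shows to be optimal is only a few lines; the hash-based commitment below stands in for a generic one-way-function-style commitment.

```python
# Blum coin flipping from a commitment: commit, reveal, XOR.
import hashlib, secrets

def commit(bit: int):
    r = secrets.token_bytes(16)
    return hashlib.sha256(bytes([bit]) + r).hexdigest(), r

a_bit = secrets.randbits(1)
com, opening = commit(a_bit)          # round 1: Alice commits to her bit
b_bit = secrets.randbits(1)           # round 2: Bob sends his bit in clear
# round 3: Alice opens; Bob checks the commitment; both output the XOR.
assert hashlib.sha256(bytes([a_bit]) + opening).hexdigest() == com
print("coin:", a_bit ^ b_bit)
# A party that dislikes the outcome can abort before the final message;
# bounding the resulting bias is exactly what the paper's result addresses.
```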
2020
CRYPTO
Order-Fairness for Byzantine Consensus 📺
Decades of research in both cryptography and distributed systems have extensively studied the problem of state machine replication, also known as Byzantine consensus. A consensus protocol must usually satisfy two properties: {\em consistency} and {\em liveness}. These properties ensure that honest participating nodes agree on the same log and dictate when fresh transactions get added. They fail, however, to protect against adversarial manipulation of the actual {\em ordering} of transactions in the log. Indeed, in leader-based protocols (almost all protocols used today), malicious leaders can directly choose the final transaction ordering. To rectify this problem, we propose a third consensus property: {\em transaction order-fairness}. We initiate the first formal investigation of order-fairness and explain its fundamental importance. We provide several natural definitions for order-fairness and analyze the assumptions necessary to realize them. We also propose a new class of consensus protocols called Aequitas. Aequitas protocols are the first to achieve order-fairness in addition to consistency and liveness. They can be realized in a black-box way using existing broadcast and agreement primitives (or indeed using any consensus protocol), and work in both synchronous and asynchronous network models.
2020
CRYPTO
Universally Composable Relaxed Password Authenticated Key Exchange 📺
Protocols for password authenticated key exchange (PAKE) allow two parties who share only a weak password to agree on a cryptographic key. We revisit the notion of PAKE in the universal composability (UC) framework, and propose a relaxation of the PAKE functionality of Canetti et al. that we call lazy-extraction PAKE (lePAKE). Our relaxation allows the ideal-world adversary to postpone its password guess until after a session is complete. We argue that this relaxed notion still provides meaningful security in the password-only setting. As our main result, we show that several PAKE protocols that were previously only proven secure with respect to a ``game-based'' definition of security can be shown to UC-realize the lePAKE functionality in the random-oracle model. These include SPEKE, SPAKE2, and TBPEKE, the most efficient PAKE schemes currently known.
2020
CRYPTO
Incompressible Encodings 📺
An incompressible encoding can probabilistically encode some data $m$ into a codeword $c$, which is not much larger. Anyone can decode $c$ to recover the original data $m$. However, $c$ cannot be efficiently compressed, even if the original data $m$ is given to the decompression procedure for free. In other words, $c$ is a representation of $m$, yet is computationally incompressible even given $m$. An incompressible encoding is composable if many encodings cannot be simultaneously compressed into anything sufficiently smaller than their concatenation. A recent work of Damg{\aa}rd, Ganesh and Orlandi (CRYPTO '19) defined a variant of incompressible encodings and gave an application to ``proofs of replicated storage''. They constructed incompressible encodings in an ideal permutation model over a structured domain, but it was left open whether they can be constructed under standard assumptions, or even in the more basic random-oracle model. In this work, we give new constructions, negative results and applications of incompressible encodings: * We construct incompressible encodings in the common random string (CRS) model under the Decisional Composite Residuosity (DCR) or Learning with Errors (LWE) assumptions. However, the construction has several drawbacks: (1) it is not composable, (2) it only achieves selective security, and (3) the CRS is as long as the data $m$. * We leverage the above construction to also get a scheme in the random-oracle model, under the same assumptions, that avoids all of the above drawbacks. Furthermore, it is significantly more efficient than the prior ideal-model construction. * We give black-box separations, showing that incompressible encodings in the plain model cannot be proven secure under any standard hardness assumption, and incompressible encodings in the CRS model must inherently suffer from all of the drawbacks above. * We give a new application to ``big-key cryptography in the bounded-retrieval model'', where secret keys are made intentionally huge to make them hard to exfiltrate. Using incompressible encodings, we can get all the security benefits of a big key without wasting storage space, by having the key encode useful data.
2020
CRYPTO
Guaranteed Output Delivery Comes Free in Honest Majority MPC 📺
We study the communication complexity of unconditionally secure MPC with guaranteed output delivery over point-to-point channels for corruption threshold $t < n/2$, assuming the existence of a public broadcast channel. We ask the question: "is it possible to construct MPC in this setting s.t. the communication complexity per multiplication gate is linear in the number of parties?" While a number of works have focused on reducing the communication complexity in this setting, the answer to the above question has remained elusive until now. We also focus on the concrete communication complexity of evaluating each multiplication gate. We resolve the above question in the affirmative by providing an MPC with communication complexity $O(Cn\phi)$ bits (ignoring fixed terms which are independent of the circuit) where $\phi$ is the length of an element in the field, $C$ is the size of the (arithmetic) circuit, and $n$ is the number of parties. This is the first construction where the asymptotic communication complexity matches the best-known semi-honest protocol. This represents a strict improvement over the previously best-known communication complexity of $O(C(n\phi + \kappa) + D_M n^2\kappa)$ bits, where $\kappa$ is the security parameter and $D_M$ is the multiplicative depth of the circuit. Furthermore, the concrete communication complexity per multiplication gate is 5.5 field elements per party in the best case and 7.5 field elements in the worst case when one or more corrupted parties have been identified. This also roughly matches the best-known semi-honest protocol, which requires 5.5 field elements per gate. The above also yields the first secure-with-abort MPC protocol with the same cost per multiplication gate as the best-known semi-honest protocol. Our main result is obtained by compiling the secure-with-abort MPC protocol into a fully secure one.
2020
CRYPTO
Cryptanalysis of The Lifted Unbalanced Oil Vinegar Signature Scheme 📺
In 2017, Ward Beullens et al. submitted Lifted Unbalanced Oil and Vinegar (LUOV), a signature scheme based on the famous multivariate public-key cryptosystem (MPKC) called Unbalanced Oil and Vinegar (UOV), to NIST for the competition for post-quantum public-key scheme standardization. The defining feature of LUOV is that, though the public key $P$ works over the degree-$r$ extension field of $\mathbb{F}_2$, the coefficients of $P$ come from $\mathbb{F}_2$. This is done to significantly reduce the size of $P$. The LUOV scheme is now in the second round of the NIST PQC standardization process. In this paper, we introduce a new attack on LUOV. It exploits the "lifted" structure of LUOV to reduce direct attacks on it to those over a subfield. We show that this reduces the complexity below the targeted security for the NIST post-quantum standardization competition.
2020
CRYPTO
Handling Adaptive Compromise for Practical Encryption Schemes 📺
Early Career Researcher Award
We provide a new definitional framework capturing the multi-user security of encryption schemes and pseudorandom functions in the face of adversaries that can adaptively compromise users' keys. We provide a sequence of results establishing the security of practical symmetric encryption schemes under adaptive compromise in the random oracle or ideal cipher model. The bulk of analysis complexity for adaptive compromise security is relegated to the analysis of lower-level primitives such as pseudorandom functions. We apply our framework to give proofs of security for the BurnBox system for privacy in the face of border searches and the in-use searchable symmetric encryption scheme due to Cash et al. In both cases, prior analyses had bugs that our framework helps avoid.
2020
CRYPTO
Delegation with Updatable Unambiguous Proofs and PPAD-Hardness 📺
In this work, we show the hardness of finding a Nash equilibrium, a $\mathsf{PPAD}$-complete problem, based on the quasi-polynomial hardness of the decisional assumption on groups with bilinear maps introduced by Kalai, Paneth and Yang [STOC 2019]. Towards this goal, we construct an {\em unambiguous} and {\em updatable} delegation scheme under this assumption for deterministic computations running in super-polynomial time and polynomial space. This delegation scheme, which is of independent interest, is publicly verifiable and non-interactive in the common reference string (CRS) model. It is {\em unambiguous}, meaning that it is hard to compute two different proofs for the same statement. It is {\em updatable}, meaning that given a proof for the statement that a Turing machine $M$ reaches configuration $\mathsf{conf}_T$ in $T$ steps, one can {\em efficiently} generate a proof for the statement that $M$ reaches configuration $\mathsf{conf}_{T+1}$ in $T+1$ steps.
2020
CRYPTO
The Memory-Tightness of Authenticated Encryption 📺
This paper initiates the study of the provable security of authenticated encryption (AE) in the memory-bounded setting. Recent works -- Tessaro and Thiruvengadam (TCC '18), Jaeger and Tessaro (EUROCRYPT '19), and Dinur (EUROCRYPT '20) -- focus on confidentiality, and look at schemes for which trade-offs between the attacker's memory and its data complexity are inherent. Here, we ask whether these results and techniques can be lifted to the full AE setting, which additionally asks for integrity. We show both positive and negative results. On the positive side, we provide tight memory-sensitive bounds for the security of GCM and its generalization, CAU (Bellare and Tackmann, CRYPTO '16). Our bounds apply to a restricted case of AE security which abstracts the deployment within protocols like TLS, and rely on a new memory-tight reduction to corresponding restricted notions of confidentiality and integrity. In particular, our reduction uses an amount of memory which linearly depends on that of the given adversary, as opposed to only imposing a constant memory overhead as in earlier works (Auerbach et al, CRYPTO '17). On the negative side, we show that a large class of black-box reductions cannot generically lift confidentiality and integrity security to a joint definition of AE security in a memory-tight way.
2020
CRYPTO
Efficient Pseudorandom Correlation Generators from Ring-LPN 📺
Secure multiparty computation can often utilize a trusted source of correlated randomness to achieve better efficiency. A recent line of work, initiated by Boyle et al. (CCS 2018, Crypto 2019), showed how useful forms of correlated randomness can be generated using a cheap, one-time interaction, followed by only ``silent'' local computation. This is achieved via a \emph{pseudorandom correlation generator} (PCG), a deterministic function that stretches short correlated seeds into long instances of a target correlation. Previous works constructed concretely efficient PCGs for simple but useful correlations, including random oblivious transfer and vector-OLE, together with efficient protocols to distribute the PCG seed generation. Most of these constructions were based on variants of the Learning Parity with Noise (LPN) assumption. PCGs for other useful correlations had poor asymptotic and concrete efficiency. In this work, we design a new class of efficient PCGs based on different flavors of the {\em ring-LPN} assumption. Our new PCGs can generate OLE correlations, authenticated multiplication triples, matrix product correlations, and other types of useful correlations over large fields. These PCGs are more efficient by orders of magnitude than the previous constructions and can be used to improve the preprocessing phase of many existing MPC protocols.
2020
CRYPTO
Lattice-Based Blind Signatures, Revisited 📺
We observe that all previously known lattice-based blind signature schemes contain subtle flaws in their security proofs (e.g.,~Rückert, ASIACRYPT '08) or can be attacked (e.g., BLAZE by Alkadri et al., FC~'20). Motivated by this, we revisit the problem of constructing blind signatures from standard lattice assumptions. We propose a new three-round lattice-based blind signature scheme whose security can be proved, in the random oracle model, from the standard SIS assumption. Our starting point is a modified version of the insecure three-round BLAZE scheme, which itself is based on Lyubashevsky's three-round identification scheme combined with a new aborting technique to reduce the correctness error. Our proof builds upon and extends the recent modular framework for blind signatures of Hauck, Kiltz, and Loss (EUROCRYPT~'19). It also introduces several new techniques to overcome the additional challenges posed by the correctness error, which is inherent to all lattice-based constructions. While our construction is mostly of theoretical interest, we believe it to be an important stepping stone for future works in this area.
2020
CRYPTO
Fully Deniable Interactive Encryption 📺
Deniable encryption (Canetti \emph{et al.}, Crypto 1996) enhances secret communication over public channels, providing the additional guarantee that the secrecy of communication is protected even if the parties are later coerced (or willingly bribed) to expose their entire internal states: plaintexts, keys and randomness. To date, constructions of deniable encryption --- and more generally, interactive deniable communication --- only address restricted cases where only {\em one} party is compromised (Sahai and Waters, STOC 2014). The main question --- whether deniable communication is at all possible if {\em both} parties are coerced at once --- has remained open. We resolve this question in the affirmative, presenting a communication protocol that is {\em fully deniable} under coercion of both parties. Our scheme has three rounds, assumes subexponentially secure indistinguishability obfuscation and one-way functions, and uses a short global reference string that is generated once at system set-up and suffices for an unbounded number of encryptions and decryptions. Of independent interest, we introduce a new notion called \emph{off-the-record deniability}, which protects parties even when their claimed internal states are inconsistent (a case not covered by prior definitions). Our scheme satisfies both standard deniability and off-the-record deniability.
2020
CRYPTO
Time-Space Tradeoffs and Short Collisions in Merkle-Damgård Hash Functions 📺
We study collision-finding against Merkle-Damgård hashing in the random-oracle model by adversaries with an arbitrary $S$-bit auxiliary advice input about the random oracle and $T$ queries. Recent work showed that such adversaries can find collisions (with respect to a random IV) with advantage $\Omega(ST^2/2^n)$, where $n$ is the output length, beating the birthday bound by a factor of $S$. These attacks were shown to be optimal. We observe that the collisions produced are very long, on the order of $T$ blocks, which would limit their practical relevance. We prove several results related to improving these attacks to find short collisions. We first exhibit a simple attack for finding $B$-block-long collisions achieving advantage $\tilde{\Omega}(STB/2^n)$. We then study whether this attack is optimal. We show that the prior technique based on the bit-fixing model (used for the $ST^2/2^n$ bound) provably cannot reach this bound, and towards a general result we prove there are qualitative jumps in the optimal attacks for finding length $1$, length $2$, and unbounded-length collisions. Namely, the optimal attacks achieve (up to logarithmic factors) order of $(S+T)/2^n$, $ST/2^n$ and $ST^2/2^n$ advantage. We also give an upper bound on the advantage of a restricted class of short-collision finding attacks via a new analysis on the growth of trees in random functional graphs that may be of independent interest.
2020
CRYPTO
Lattice Reduction for Modules, or How to Reduce ModuleSVP to ModuleSVP 📺
We show how to generalize lattice reduction algorithms to module lattices. Specifically, we reduce $\gamma$-approximate ModuleSVP over module lattices with rank $k \geq 2$ to $\gamma'$-approximate ModuleSVP over module lattices with rank $2 \leq \beta \leq k$. To do so, we modify the celebrated slide-reduction algorithm of Gama and Nguyen to work with module filtrations, a high-dimensional generalization of the ($\mathbb{Z}$-)basis of a lattice. The particular value of $\gamma$ that we achieve depends on the underlying number field $K$, the order $R \subseteq \mathcal{O}_K$, and the embedding (as well as, of course, $k$ and $\beta$). However, for reasonable choices of these parameters, the approximation factor that we achieve is surprisingly close to the one achieved by ``plain'' lattice reduction algorithms, which require an arbitrary SVP oracle in the same dimension. In other words, we show that ModuleSVP oracles are nearly as useful as SVP oracles for solving approximate ModuleSVP in higher dimensions. Our result generalizes the recent independent result of Lee, Pellet-Mary, Stehl\'e, and Wallet, which works in the important special case when $\beta = 2$ and $R = \mathcal{O}_K$ is the ring of integers of $K$ under the canonical embedding. Indeed, at a high level our reduction can be thought of as a generalization of theirs in roughly the same way that block reduction generalizes LLL reduction.
2020
CRYPTO
Random Probing Security: Verification, Composition, Expansion and New Constructions 📺
Masking is among the most powerful countermeasures to counteract side-channel attacks. Leakage models have been exhibited to reason theoretically about the security of such masked implementations. So far, the most widely used leakage model is the probing model defined by Ishai, Sahai, and Wagner (CRYPTO 2003). While it is advantageously convenient for security proofs, it does not capture an adversary exploiting full leakage traces as, e.g., in horizontal attacks. Those attacks target the multiple manipulations of the same share to average out a constant noise and recover the corresponding value. To capture a wider class of attacks, another model was introduced, referred to as the random probing model. Given a leakage parameter $p$, each wire of the circuit leaks its value with probability $p$. While this model much better reflects the physical reality of side channels, it requires more complex security proofs and does not yet come with practical constructions. In this paper, we define the first framework dedicated to the random probing model. We provide an automatic tool, called VRAPS, to quantify the random probing security of a circuit from its leakage probability. We also formalize a composition property for secure random probing gadgets and exhibit its relation to the strong non-interference (SNI) notion used in the context of probing security. We then revisit the expansion idea proposed by Ananth, Ishai, and Sahai (CRYPTO 2018) and introduce a compiler that builds a random probing secure circuit from small base gadgets achieving a random probing expandability property. We instantiate this compiler with small gadgets for which we verify the expected properties directly with our automatic tool. Our construction can tolerate a leakage probability of up to $2^{-8}$, against $2^{-25}$ for the previous construction, with a better asymptotic complexity.
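The model is easy to simulate: the Monte Carlo sketch below (a toy model with illustrative parameters) estimates the probability that all shares of a $(d+1)$-share encoding leak when each wire leaks with probability $p$, and shows how re-using a share on many wires, the effect horizontal attacks exploit, inflates that probability.

```python
# Estimate Pr[every share of a (d+1)-share encoding leaks] when each wire
# leaks independently with probability p.
import random

def leak_probability(p, n_shares, uses_per_share, trials=100_000):
    hits = 0
    for _ in range(trials):
        leaked = [any(random.random() < p for _ in range(uses_per_share))
                  for _ in range(n_shares)]
        hits += all(leaked)
    return hits / trials

p, d = 0.05, 2                        # leakage rate, masking order (toy)
print("each share on 1 wire  :", leak_probability(p, d + 1, 1))   # ~ p^3
print("each share on 20 wires:", leak_probability(p, d + 1, 20))  # ~ 0.26
```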
2020
CRYPTO
Alzette: a 64-bit ARX-box (feat. CRAX and TRAX) 📺
S-boxes are the only source of non-linearity in many symmetric cryptographic primitives. While they are often defined as being functions operating on a small space, some recent designs propose the use of much larger ones (e.g., 32 bits). In this context, an S-box is then defined as a subfunction whose cryptographic properties can be estimated precisely. In this paper, we present a 64-bit ARX-based S-box called Alzette which can be evaluated in constant time using only 12 instructions on modern CPUs. Its parallel application can also leverage vector (SIMD) instructions. One iteration of Alzette has differential and linear properties comparable to those of the AES S-box, while two iterations are at least as secure as the AES super S-box. Since the state size is much larger than the typical 4 or 8 bits, the study of the relevant cryptographic properties of Alzette is not trivial. We further discuss how such wide S-boxes could be used to construct round functions of 64-, 128- and 256-bit (tweakable) block ciphers with good cryptographic properties that are guaranteed even in the related-tweak setting. We use these structures to design a very lightweight 64-bit block cipher (CRAX) which outperforms SPECK-64/128 for short messages on micro-controllers, and a 256-bit tweakable block cipher (TRAX) which can be used to obtain strong security guarantees against powerful adversaries (nonce misuse, quantum attacks).
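A direct transcription of the four Alzette rounds, using the rotation amounts from the SPARKLE/Alzette specification, fits in a few lines of Python; treat this sketch as unofficial and consult the paper for the authoritative definition and test vectors.

```python
# Alzette ARX-box: four add-rotate-xor rounds on two 32-bit words, with the
# constant c injected after every round (rotation amounts per the spec).
MASK = 0xFFFFFFFF

def rotr(v, n):
    return ((v >> n) | (v << (32 - n))) & MASK

def alzette(x, y, c):
    for rx, ry in [(31, 24), (17, 17), (0, 31), (24, 16)]:
        x = (x + rotr(y, rx)) & MASK   # modular addition
        y ^= rotr(x, ry)               # xor with rotated word
        x ^= c                         # round-constant injection
    return x, y

# Example call with one of the SPARKLE round constants.
print([hex(v) for v in alzette(0x00000000, 0x00000000, 0xB7E15162)])
```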
2020
CRYPTO
The MALICIOUS Framework: Embedding Backdoors into Tweakable Block Ciphers 📺
Inserting backdoors in encryption algorithms has long seemed like a very interesting, yet difficult, problem. Most attempts have been unsuccessful for symmetric-key primitives so far, and it remains an open problem how to build such ciphers. In this work, we propose the MALICIOUS framework, a new method to build tweakable block ciphers that contain a hidden backdoor allowing the designer to retrieve the secret key. Our backdoor is differential in nature: a specific related-tweak differential path with high probability is hidden during the design phase of the cipher. We explain how the backdoor can be used to practically recover the secret key of a user by any entity knowing the backdoor, and we also argue why, even knowing the presence of the backdoor and the workings of the cipher, an external user will not be able to retrieve the backdoor. We analyze the security of our construction in the classical black-box model and we show that retrieving the backdoor (the hidden high-probability differential path) is very difficult. We instantiate our framework by proposing the LowMC-M construction, a new family of tweakable block ciphers based on instances of the LowMC cipher, which allow such backdoor embedding. Generating LowMC-M instances is trivial and the LowMC-M family has basically the same efficiency as the LowMC instances it is based on.
2020
CRYPTO
Reverse Firewalls for Actively Secure MPCs 📺
Reverse firewalls were introduced at Eurocrypt 2015 by Mironov and Stephens-Davidowitz as a method for protecting cryptographic protocols against attacks on the devices of the honest parties. In a nutshell: a reverse firewall is placed outside of a device, and its goal is to "sanitize" the messages sent by it, in such a way that a malicious device cannot leak its secrets to the outside world. It is typically assumed that the cryptographic devices are attacked in a "functionality-preserving way" (i.e., informally speaking, the functionality of the protocol remains unchanged under these attacks). In their paper, Mironov and Stephens-Davidowitz construct a protocol for passively-secure two-party computation with firewalls, leaving the extension of this result to stronger models as an open question. In this paper, we address this problem by constructing a protocol for secure computation with firewalls that has two main advantages over the original protocol from Eurocrypt 2015. Firstly, it is a multiparty computation protocol (i.e., it works for an arbitrary number $n$ of parties, not just for $2$). Secondly, it is secure in much stronger corruption settings, namely in the active corruption model. More precisely: we consider an adversary that can fully corrupt up to $n-1$ parties, while the remaining parties are corrupted in a functionality-preserving way. Our core techniques are malleable commitments and malleable non-interactive zero-knowledge, which in particular allow us to create a novel protocol for multiparty augmented coin-tossing into the well with reverse firewalls (based on a protocol of Lindell from Crypto 2001).
2020
CRYPTO
Fast reduction of algebraic lattices over cyclotomic fields 📺
We introduce a framework generalizing lattice reduction algorithms to module lattices in order to practically and efficiently solve the $\gamma$-Hermite Module-SVP problem over arbitrary cyclotomic fields. The core idea is to exploit the structure of the subfields to design a recursive reduction strategy in the tower of fields we are working in. Besides, we demonstrate how to leverage the symplectic geometry inherent in such fields to provide a significant speed-up of the reduction for rank-two modules. As a byproduct, we also generalize to all cyclotomic fields and provide speedups for many previous number-theoretic algorithms, in particular for the rounding in the so-called Log-unit lattice. Quantitatively, we show that a module of rank 2 over a cyclotomic field of degree $n$ can be heuristically reduced within approximation factor $2^{\tilde{O}(n)}$ in time $\tilde{O}(n^2B)$, where $B$ is the bitlength of the entries. For $B$ large enough, this complexity shrinks to $\tilde{O}(n^{\log_2 3}B)$. This last result is particularly striking, as it goes below the estimate of $n^2B$ swaps given by the classical analysis of the LLL algorithm using the decrease of the potential of the basis. Finally, the whole framework is fully parallelizable, and we provide a full implementation. We apply it to break multilinear cryptographic candidates on concrete proposed parameters. We were able to reduce matrices of dimension 4096 with 6675-bit integers in 4 days, which is more than a million times faster than previous state-of-the-art implementations. Lastly, we demonstrate a quasicubic time for the Gentry-Szydlo algorithm, which finds a generator given the relative norm and a basis of an ideal. This algorithm is important in cryptanalysis and requires efficient ideal multiplications and lattice reductions; as such, we can practically use it in dimension 1024.
2020
CRYPTO
Fiat-Shamir for Repeated Squaring with Applications to PPAD-Hardness and VDFs 📺
The Fiat-Shamir transform is a methodology for compiling a (public-coin) interactive proof system for a language $L$ into a non-interactive argument system for $L$. Proving security of the Fiat-Shamir transform in the standard model, especially in the context of succinct arguments, is largely an unsolved problem. The work of Canetti et al. (STOC 2019) proved the security of the Fiat-Shamir transform applied to the Goldwasser-Kalai-Rothblum (STOC 2008) succinct interactive proof system under a very strong "optimal learning with errors" assumption. Achieving a similar result under standard assumptions remains an important open question. In this work, we consider the problem of compiling a different succinct interactive proof system: Pietrzak's proof system (ITCS 2019) for the iterated squaring problem. We construct a hash function family (with evaluation time roughly $2^{\lambda^{\epsilon}}$) that guarantees the soundness of Fiat-Shamir for this protocol assuming the sub-exponential ($2^{-n^{1-\epsilon}}$)-hardness of the $n$-dimensional learning with errors problem. (The latter follows from the worst-case $2^{n^{1-\epsilon}}$ hardness of lattice problems.) More generally, we extend the "bad-challenge function" methodology of Canetti et al. for proving the soundness of Fiat-Shamir to a class of protocols whose bad-challenge functions are not efficiently computable. As a corollary (following Choudhuri et al., ePrint 2019 and Ephraim et al., EUROCRYPT 2020), we construct hard-on-average problems in the complexity class $\mathbf{CLS}\subset \mathbf{PPAD}$ under the $2^{\lambda^\epsilon}$-hardness of the repeated squaring problem and the $2^{-n^{1-\epsilon}}$-hardness of the learning with errors problem. Under the additional assumption that the repeated squaring problem is "inherently sequential", we also obtain a Verifiable Delay Function (Boneh et al., EUROCRYPT 2018) in the standard model. Finally, we give additional PPAD-hardness and VDF instantiations demonstrating a broader tradeoff between the strength of the repeated squaring assumption and the strength of the lattice assumption.
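To illustrate the object being compiled, here is a toy Python sketch (illustrative parameters and hash choice, not the paper's LWE-based hash family) of Pietrzak's halving protocol for y = x^(2^t) mod N, made non-interactive by deriving each challenge from a hash of the transcript, i.e., Fiat-Shamir:

```python
import hashlib

def challenge(n, x, y, t, mu):
    # Fiat-Shamir: the verifier's random challenge is replaced by a hash
    # of the current statement and the prover's message mu
    data = "|".join(map(str, (n, x, y, t, mu))).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % 2**128

def prove(n, x, t):
    # statement: y = x^(2^t) mod n, with t a power of two for simplicity
    y = x
    for _ in range(t):
        y = y * y % n
    proof, cx, cy, ct = [], x, y, t
    while ct > 1:
        mu = cx
        for _ in range(ct // 2):          # mu = cx^(2^(ct/2)) mod n
            mu = mu * mu % n
        r = challenge(n, cx, cy, ct, mu)
        proof.append(mu)
        cx, cy, ct = pow(cx, r, n) * mu % n, pow(mu, r, n) * cy % n, ct // 2
    return y, proof

def verify(n, x, t, y, proof):
    cx, cy, ct = x, y, t
    for mu in proof:
        r = challenge(n, cx, cy, ct, mu)
        cx, cy, ct = pow(cx, r, n) * mu % n, pow(mu, r, n) * cy % n, ct // 2
    return ct == 1 and cy == cx * cx % n  # base case: y' = x'^2 mod n
```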
2020
CRYPTO
Mode-Level vs. Implementation-Level Physical Security in Symmetric Cryptography: A Practical Guide Through the Leakage-Resistance Jungle 📺
Triggered by the increasing deployment of embedded cryptographic devices (e.g., for the IoT), the design of authentication, encryption and authenticated encryption schemes enabling improved security against side-channel attacks has become an important research direction. Over the last decade, a number of modes of operation have been proposed and analyzed under different abstractions. In this paper, we investigate the practical consequences of these findings. For this purpose, we first translate the physical assumptions of leakage-resistance proofs into minimum security requirements for implementers. Thanks to this (heuristic) translation, we observe that (i) security against physical attacks can be viewed as a tradeoff between mode-level and implementation-level protection mechanisms, and (ii) security requirements to guarantee confidentiality and integrity in front of leakage can be concretely different for the different parts of an implementation. We illustrate the first point by analyzing several modes of operation with gradually increased leakage-resistance. We illustrate the second point by exhibiting leveled implementations, where different parts of the investigated schemes have different security requirements against leakage, leading to performance improvements when high physical security is needed. We finally initiate a comparative discussion of the different solutions to instantiate the components of a leakage-resistant authenticated encryption scheme.
2020
CRYPTO
Non-Interactive Zero-Knowledge Arguments for QMA, with preprocessing 📺
We initiate the study of non-interactive zero-knowledge (NIZK) arguments for languages in QMA. Our first main result is the following: if Learning With Errors (LWE) is hard for quantum computers, then any language in QMA has an NIZK argument with preprocessing. The preprocessing in our argument system consists of (i) the generation of a CRS and (ii) a single (instance-independent) quantum message from verifier to prover. The instance-dependent phase of our argument system involves only a single classical message from prover to verifier. Importantly, verification in our protocol is entirely classical, and the verifier need not have quantum memory; its only quantum actions are in the preprocessing phase. Our second contribution is to extend the notion of a classical proof of knowledge to the quantum setting. We introduce the notions of arguments and proofs of quantum knowledge (AoQK/PoQK), and we show that our non-interactive argument system satisfies the definition of an AoQK. In particular, we explicitly construct an extractor which can recover a quantum witness from any prover which is successful in our protocol. Finally, we show that any language in QMA has an (interactive) proof of quantum knowledge.
2020
CRYPTO
Cryptanalysis of LEDAcrypt 📺
We report on the concrete cryptanalysis of LEDAcrypt, a 2nd Round candidate in NIST's Post-Quantum Cryptography standardization process and one of 17 encryption schemes that remain as candidates for near-term standardization. LEDAcrypt consists of a public-key encryption scheme built from the McEliece paradigm and a key-encapsulation mechanism (KEM) built from the Niederreiter paradigm, both using a quasi-cyclic low-density parity-check (QC-LDPC) code. In this work, we identify a large class of extremely weak keys and provide an algorithm to recover them. For example, we demonstrate how to recover $1$ in $2^{47.79}$ of LEDAcrypt's keys using only $2^{18.72}$ guesses at the 256-bit security level. This is a major, practical break of LEDAcrypt. Further, we demonstrate a continuum of progressively less weak keys (from extremely weak keys up to all keys) that can be recovered in substantially less work than previously known. This demonstrates that the imperfection of LEDAcrypt is fundamental to the system's design.
2020
CRYPTO
Crypto for the People 📺
Invited talk
2020
CRYPTO
Our Models and Us 📺
IACR Distinguished Lecture
2020
CRYPTO
Security Analysis of NIST CTR-DRBG 📺
We study the security of CTR-DRBG, one of NIST's recommended Pseudorandom Number Generator (PRNG) designs. Recently, Woodage and Shumow (Eurocrypt '19), and then Cohney et al. (S&P '20), pointed out some potential vulnerabilities in both the NIST specification and common implementations of CTR-DRBG. While these researchers do suggest countermeasures, the security of the patched CTR-DRBG is still questionable. Our work fills this gap, proving that CTR-DRBG satisfies the robustness notion of Dodis et al. (CCS '13), the standard security goal for PRNGs.
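For orientation, the sketch below gives a heavily simplified version of CTR-DRBG's generate-then-update loop (AES-128 only; no derivation function, personalization, or additional input); it is meant to show the state handling being analyzed, not to reproduce NIST SP 800-90A verbatim. It uses the pyca/cryptography package.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_block(key: bytes, block: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def inc128(v: bytes) -> bytes:
    return ((int.from_bytes(v, "big") + 1) % 2**128).to_bytes(16, "big")

def generate(key: bytes, v: bytes, nbytes: int):
    # output blocks are AES encryptions of an incrementing counter V
    out = b""
    while len(out) < nbytes:
        v = inc128(v)
        out += aes_block(key, v)
    # update step: derive a fresh (key, V) so that a later state
    # compromise does not expose previously generated output
    temp = b""
    while len(temp) < 32:
        v = inc128(v)
        temp += aes_block(key, v)
    return out[:nbytes], temp[:16], temp[16:32]
```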
2020
CRYPTO
A Polynomial-Time Algorithm for Solving the Hidden Subset Sum Problem 📺
At Crypto '99, Nguyen and Stern described a lattice-based algorithm for solving the hidden subset sum problem, a variant of the classical subset sum problem where the n weights are also hidden. While the Nguyen-Stern algorithm works quite well in practice for moderate values of n, we argue that its complexity is actually exponential in n: namely, in the final step one must recover a very short basis of an n-dimensional lattice, which takes exponential time in n, as one must apply BKZ reduction with increasingly large block sizes. In this paper, we describe a variant of the Nguyen-Stern algorithm that works in polynomial time. The first step is the same orthogonal lattice attack with LLL as in the original algorithm. In the second step, instead of applying BKZ, we use a multivariate technique that recovers the short lattice vectors and finally the hidden secrets in polynomial time. Our algorithm works quite well in practice, as we can reach n=250 in a few hours on a single PC.
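As a reference point, a hidden subset sum instance is easy to generate, which makes the asymmetry with solving it striking; the sketch below (illustrative parameters) produces samples b_j = sum_i alpha_i x_{i,j} mod M in which both the weights alpha_i and the binary coefficients x_{i,j} are hidden from the solver.

```python
import random

def hidden_subset_sum_instance(n, m, bits=256):
    # modulus M, hidden random weights alpha, hidden binary matrix X
    M = random.getrandbits(bits) | (1 << (bits - 1)) | 1
    alpha = [random.randrange(M) for _ in range(n)]
    X = [[random.getrandbits(1) for _ in range(m)] for _ in range(n)]
    b = [sum(alpha[i] * X[i][j] for i in range(n)) % M for j in range(m)]
    return M, b, (alpha, X)  # (alpha, X) is the planted solution

M, b, secret = hidden_subset_sum_instance(n=250, m=500)
```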
2020
CRYPTO
A Tight Parallel Repetition Theorem for Partially Simulatable Interactive Arguments via Smooth KL-Divergence 📺
Hardness amplification is a central problem in the study of interactive protocols. While the "natural" parallel repetition transformation is known to reduce the soundness error of some special cases of interactive arguments: three-message protocols (Bellare, Impagliazzo, and Naor [FOCS '97]) and public-coin protocols (Hastad, Pass, Wikstrom, and Pietrzak [TCC '10], Chung and Lu [TCC '10] and Chung and Pass [TCC '15]), it fails to do so in the general case (the above Bellare et al.; also Pietrzak and Wikstrom [TCC '07]). The only known round-preserving approach that applies to all interactive arguments is Haitner's random-terminating transformation [SICOMP '13]. Haitner showed that the parallel repetition of the transformed protocol reduces the soundness error at a weak exponential rate: if the original m-round protocol has soundness error (1 − ε), then the n-fold parallel repetition of its random-terminating variant has soundness error (1 − ε)^{ε n/m^4} (omitting constant factors). Hastad et al. generalized this result to the so-called partially simulatable interactive arguments. In this work we prove that parallel repetition of random-terminating arguments reduces the soundness error at a much stronger exponential rate: the soundness error of the n-fold parallel repetition is (1 − ε)^{n/m}, only an m factor from the optimal rate of (1 − ε)^n achievable in public-coin and three-message arguments. The result generalizes to partially simulatable arguments. This is achieved by presenting a tight bound on a relaxed variant of the KL-divergence between the distribution induced by our reduction and its ideal variant, a result whose scope extends beyond parallel repetition proofs. We prove the tightness of the above bound for random-terminating arguments by presenting a matching protocol.
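The gap between the two rates is easy to appreciate numerically; with illustrative values ε = 0.1 and m = 10 rounds, the snippet below compares the earlier (1 − ε)^{εn/m^4} bound with the (1 − ε)^{n/m} rate proven here.

```python
eps, m = 0.1, 10  # single-copy soundness gap and number of rounds
for n in (10**3, 10**4, 10**5):
    old = (1 - eps) ** (eps * n / m**4)  # previously known rate
    new = (1 - eps) ** (n / m)           # rate proven in this work
    print(f"n={n}: old bound {old:.3e}, new bound {new:.3e}")
```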
2020
CRYPTO
Quantifying the Security Cost of Migrating Protocols to Practice 📺
We give a framework for relating the quantitative, concrete security of a "reference" protocol (say, one appearing in an academic paper) to that of some derived, "real" protocol (say, one appearing in a cryptographic standard). It is based on the indifferentiability framework of Maurer, Renner, and Holenstein (MRH), whose application has so far been exclusively focused on non-interactive cryptographic primitives, e.g., hash functions and Feistel networks. Our extension of MRH is supported by a clearly defined execution model and two composition lemmata, all formalized in a modern pseudocode language. Together, these allow for precise statements about game-based security properties of cryptographic objects (interactive or not) at various levels of abstraction. As a real-world application, we design and prove tight security bounds for a potential TLS 1.3 extension that integrates the SPAKE2 password-authenticated key exchange into the existing handshake. (This is a problem of current interest to the IETF.)
2020
CRYPTO
Cryptanalysis Results on Spook: Bringing Full-round Shadow-512 to the Light 📺
Spook is one of the 32 candidates that made it to the second round of the NIST Lightweight Cryptography Standardization process, and is particularly interesting since it proposes differential side-channel resistance. In this paper, we present practical distinguishers for the full 6-step version of the underlying permutations of Spook, namely Shadow-512 and Shadow-384, solving challenges proposed by the designers on the permutation. We also propose practical forgeries with 4-step Shadow for the S1P mode of operation in the nonce-misuse scenario, which is allowed by the CIML2 security game considered by the authors. All the results presented in this paper have been implemented.
2020
CRYPTO
Collusion Resistant Watermarkable PRFs from Standard Assumptions 📺
A software watermarking scheme can embed a message into a program without significantly changing its functionality. Moreover, any attempt to remove the embedded message in a marked program will substantially change the functionality of the program. Prior constructions of watermarking schemes focus on watermarking cryptographic functions, such as pseudorandom functions (PRFs), public-key encryption, etc. A natural security requirement for watermarking schemes is collusion resistance, where the adversary's goal is to remove the embedded messages given multiple marked versions of the same program. Currently, this strong security guarantee has been achieved by watermarking schemes for public-key cryptographic primitives from standard assumptions (Goyal et al., CRYPTO 2019) and by watermarking schemes for PRFs from indistinguishability obfuscation (Yang et al., ASIACRYPT 2019). However, no collusion-resistant watermarking scheme for PRFs from standard assumptions was known. In this work, we solve this problem by presenting a generic construction that upgrades a watermarkable PRF without collusion resistance into a collusion-resistant one. One appealing feature of our construction is that it can preserve the security properties of the original scheme. For example, if the original scheme has security with extraction queries, the new scheme is also secure with extraction queries. Moreover, the new scheme can achieve unforgeability even if the original scheme does not provide this security property. Instantiating our construction with existing watermarking schemes for PRFs, we obtain collusion-resistant watermarkable PRFs from standard assumptions, offering various security properties.
2020
CRYPTO
Comparing the difficulty of factorization and discrete logarithm: a 240-digit experiment 📺
We report on two new records: the factorization of RSA-240, a 795-bit number, and a discrete logarithm computation over a 795-bit prime field. Previous records were the factorization of RSA-768 in 2009 and a 768-bit discrete logarithm computation in 2016. Our two computations at the 795-bit level were done using the same hardware and software, and show that computing a discrete logarithm is not much harder than a factorization of the same size. Moreover, thanks to algorithmic variants and well-chosen parameters, our computations were significantly less expensive than anticipated based on previous records. The last page of this paper also reports on the factorization of RSA-250.
2020
CRYPTO
Automatic Verification of Differential Characteristics: Application to Reduced Gimli 📺
Since Keccak was selected as the SHA-3 standard, more and more permutation-based primitives have been proposed. Different from block ciphers, there is no round key in the underlying permutation for permutation-based primitives. Therefore, there is a higher risk for a differential characteristic of the underlying permutation to become incompatible when considering the dependency of difference transitions over different rounds. However, in most of the MILP- or SAT-based models to search for differential characteristics, only the difference transitions are involved and are treated as independent in different rounds, which may cause an invalid characteristic to be found for the underlying permutation. To overcome this obstacle, we are motivated to design a model that automatically avoids such inconsistencies in the search for differential characteristics. Our technique is to involve both the difference transitions and the value transitions in the constructed model. Such an idea is inspired by the algorithm to find SHA-2 characteristics proposed by Mendel et al. at ASIACRYPT 2011, where the differential characteristic and the conforming message pair are searched simultaneously. As a first attempt, we apply our new technique to the Gimli permutation, which was proposed at CHES 2017. As a result, we reveal that some existing differential characteristics of reduced Gimli are indeed incompatible, one of which is found in the Gimli document. In addition, since only the permutation is analyzed in the Gimli document, we are led to carry out a comprehensive study, covering the proposed hash scheme and the authenticated encryption (AE) scheme specified for Gimli, which has become a second-round candidate of the NIST lightweight cryptography standardization process. For the hash scheme, a semi-free-start (SFS) collision attack can reach up to 8 rounds starting from an intermediate round. For the AE scheme, a state-recovery attack is demonstrated to achieve up to 9 rounds. It should be emphasized that our analysis does not threaten the security of Gimli.
2020
CRYPTO
Faster Enumeration-based Lattice Reduction: Root Hermite Factor k^(1/(2k)) in Time k^(k/8 + o(k)) 📺
We give a lattice reduction algorithm that achieves root Hermite factor k^(1/(2k)) in time k^(k/8 + o(k)) and polynomial memory. This improves on the previously best known enumeration-based algorithms which achieve the same quality, but in time k^(k/(2e) + o(k)). A cost of k^(k/8 + o(k)) was previously mentioned as potentially achievable (Hanrot-Stehlé’10) or as a heuristic lower bound (Nguyen’10) for enumeration algorithms. We prove the complexity and quality of our algorithm under a heuristic assumption and provide empirical evidence from simulation and implementation experiments attesting to its performance for practical and cryptographic parameter sizes. Our work also suggests potential avenues for achieving costs below k^(k/8 + o(k)) for the same root Hermite factor, based on the geometry of SDBKZ-reduced bases.
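The improvement is easiest to see in the exponent; the snippet below compares log2 of the previous enumeration cost k^(k/(2e)) with the new k^(k/8) for a few block sizes k (both achieving root Hermite factor k^(1/(2k))).

```python
import math

for k in (100, 200, 400, 800):
    prev = (k / (2 * math.e)) * math.log2(k)  # log2 of k^(k/(2e))
    new = (k / 8) * math.log2(k)              # log2 of k^(k/8)
    print(f"k={k}: previous 2^{prev:.0f}, this work 2^{new:.0f}")
```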
2020
CRYPTO
Rounding in the Rings 📺
In this work, we conduct a comprehensive study on establishing hardness reductions for (Module) Learning with Rounding over rings (RLWR). Towards this, we present an algebraic framework of LWR, inspired by a recent work of Peikert and Pepin (TCC '19). Then we show a search-to-decision reduction for Ring-LWR, generalizing a result in the plain LWR setting by Bogdanov et al. (TCC '15). Finally, we show a reduction from Ring-LWE to Module Ring-LWR (even for leaky secrets), generalizing the plain LWE-to-LWR reduction by Alwen et al. (Crypto '13). One of our central techniques is a new ring leftover hash lemma, which might be of independent interest.
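For readers new to LWR, the following toy sketch shows the deterministic rounding that replaces LWE's added noise, in the plain (non-ring) setting for simplicity; the paper's results concern the ring and module variants of this map.

```python
import random

def lwr_sample(s, q, p):
    # LWR sample: a uniform vector a together with the rounded inner
    # product round((p/q) * <a, s> mod q) mod p; the deterministic
    # rounding plays the role of LWE's explicit noise term
    a = [random.randrange(q) for _ in range(len(s))]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    b = ((inner * p + q // 2) // q) % p
    return a, b

s = [random.randrange(2) for _ in range(16)]  # small secret, toy sizes
print(lwr_sample(s, q=2**13, p=2**10))
```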
2020
CRYPTO
Verifiable Registration-Based Encryption 📺
In recent work, Garg, Hajiabadi, Mahmoody, and Rahimi (TCC 2018) introduced a new encryption framework, which they referred to as Registration-Based Encryption (RBE). The central motivation behind RBE was to provide a novel methodology for solving the well-known key-escrow problem in Identity-Based Encryption (IBE) systems. Informally, in an RBE system there is no private-key generator, unlike IBE systems; instead, it is replaced with a public key accumulator. Every user in an RBE system samples its own public-secret key pair and sends the public key to the accumulator for registration. The key accumulator has no secret state and is only responsible for compressing all the registered user identity-key pairs into a short public commitment. Here the encryptor only requires the compressed parameters along with the target identity, whereas a decryptor requires supplementary key material along with the secret key associated with the registered public key. The initial construction by Garg et al. based on standard assumptions only provided weak efficiency properties. In a follow-up work, Garg, Hajiabadi, Mahmoody, Rahimi, and Sekar (PKC 2019) gave an efficient RBE construction from standard assumptions. However, both these works considered the key accumulator to be honest, which might be too strong an assumption in real-world scenarios. In this work, we initiate a formal study of RBE systems with malicious key accumulators. To that end, we introduce a strengthening of the RBE framework which we call Verifiable RBE (VRBE). A VRBE system additionally gives the users an extra capability to obtain short proofs from the key accumulator proving correct (and unique) registration for every registered user, as well as proving non-registration for any yet unregistered identity. We construct VRBE systems that provide succinct proofs of registration and non-registration from standard assumptions (such as CDH, Factoring, LWE). Our proof systems also naturally allow a much more efficient audit process, which can be performed by any non-participating third party as well. A by-product of our approach is that we provide a more efficient RBE construction than that of the prior work of Garg et al. (PKC 2019). Lastly, we initiate a study of the extension of VRBE to a wider range of access and trust structures.
2020
CRYPTO
Indifferentiability for Public Key Cryptosystems 📺
We initiate the study of indifferentiability for public key encryption and other public key primitives. Our main results are definitions and constructions of public key cryptosystems that are indifferentiable from ideal cryptosystems, in the random oracle model. Cryptosystems include: 1) Public key encryption; 2) Digital signatures; 3) Non-interactive key agreement. Our schemes are based on relatively standard public key assumptions. By being indifferentiable from an ideal object, our schemes automatically satisfy a wide range of security properties, including any property representable as a single-stage game, and can be composed to operate in higher-level protocols.
2020
CRYPTO
Asymptotic complexities of discrete logarithm algorithms in pairing-relevant finite fields 📺
We study the discrete logarithm problem at the boundary case between small- and medium-characteristic finite fields, which is precisely the area where the finite fields used in pairing-based cryptosystems live. In order to evaluate the security of pairing-based protocols, we thoroughly analyze the complexity of all the algorithms that coexist at this boundary case: the Quasi-Polynomial algorithms, the Number Field Sieve and its many variants, and the Function Field Sieve. We adapt the latter to the particular case where the extension degree is composite, and show how to lower the complexity by working in a shifted function field. This study finally allows us to give precise values for the characteristic asymptotically achieving the highest security level for pairings. Surprisingly enough, there exist special characteristics that are as secure as general ones.
2020
CRYPTO
The Measure-and-Reprogram Technique 2.0: Multi-Round Fiat-Shamir and More 📺
We revisit recent works by Don, Fehr, Majenz and Schaffner and by Liu and Zhandry on the security of the Fiat-Shamir transformation of sigma-protocols in the quantum random oracle model (QROM). Two natural questions that arise in this context are: (1) whether the results extend to the Fiat-Shamir transformation of multi-round interactive proofs, and (2) whether Don et al.'s O(q^2) loss in security is optimal. Firstly, we answer question (1) in the affirmative. As a byproduct of solving a technical difficulty in proving this result, we slightly improve the result of Don et al., equipping it with a cleaner bound and an even simpler proof. We apply our result to digital signature schemes, showing that it can be used to prove strong security for schemes like MQDSS in the QROM. As another application, we prove QROM security of a non-interactive OR proof by Liu, Wei and Wong. As for question (2), we show via a Grover-search-based attack that Don et al.'s quadratic security loss for the Fiat-Shamir transformation of sigma-protocols is optimal up to a small constant factor. This extends to our new multi-round result, proving it tight up to a factor that depends only on the number of rounds, i.e., is constant for any constant-round interactive proof.
2020
CRYPTO
MPC with Friends and Foes 📺
Classical definitions of secure multiparty computation assume the existence of a single adversarial entity controlling the set of corrupted parties. Intuitively, the definition requires that the view of the adversary, corrupting t parties, in a real-world execution can be simulated by an adversary in an ideal model, where parties interact only via a trusted party. No restrictions, however, are imposed on the view of honest parties in the protocol; thus, if honest parties obtain information about the private inputs of other honest parties, it is not counted as a violation of privacy. This is arguably undesirable in many situations that fall into the MPC framework. Nevertheless, there are secure protocols (e.g., the 2-round multiparty protocol of Ishai et al. [CRYPTO 2010] tolerating a single corrupted party) that instruct the honest parties to reveal their private inputs to all other honest parties (once the malicious party is somehow identified). In this paper, we put forth a new security notion, which we call FaF-security, extending the classical notion. In essence, $(t,h^*)$-FaF-security requires the view of a subset of up to $h^*$ honest parties to also be simulatable in the ideal model (in addition to the view of the malicious adversary, corrupting up to $t$ parties). This property should still hold even if the adversary leaks information to honest parties by sending them non-prescribed messages. We provide a thorough exploration of the new notion, investigating it in relation to a variety of existing security notions. We further investigate the feasibility of achieving FaF-security and show that every functionality can be computed with (computational) $(t,h^*)$-FaF full security if and only if $2t + h^* < m$. Interestingly, the lower-bound result actually shows that even fair FaF-security is impossible in general when $2t + h^* \ge m$ (surprisingly, the view of the malicious attacker is not used as the trigger for the attack). We also investigate the optimal round complexity for $(t,h^*)$-FaF-secure protocols and give evidence that the leakage of private inputs of honest parties in the protocol of Ishai et al. [CRYPTO 2010] is inherent.
2020
CRYPTO
Better Concrete Security for Half-Gates Garbling (in the Multi-Instance Setting) 📺
We study the concrete security of high-performance implementations of half-gates garbling, which all rely on (hardware-accelerated) AES. We find that current instantiations using $k$-bit wire labels can be completely broken, in the sense that the circuit evaluator learns all the inputs of the circuit garbler, in time $O(2^k/C)$, where $C$ is the total number of (non-free) gates that are garbled, possibly across multiple independent executions. The attack can be applied to existing circuit-garbling libraries using $k = 80$ when $C \approx 10^9$, and would require 267 machine-months and cost about 3500 USD to implement on the Google Cloud Platform. Since the attack can be entirely parallelized, it could be carried out in about a month using roughly 250 machines. With this as our motivation, we seek a way to instantiate the hash function in the half-gates scheme so as to achieve better concrete security. We present a construction based on AES that achieves optimal security in the single-instance setting (when only a single circuit is garbled). We also show how to modify the half-gates scheme so that its concrete security does not degrade in the multi-instance setting. Our modified scheme is as efficient as prior work in networks with up to 2 Gbps bandwidth.
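The headline attack cost is simple arithmetic: with k = 80 and C ≈ 10^9 garbled gates, the expected work is 2^k / C, which the snippet below evaluates to roughly 2^50 AES calls.

```python
import math

k, C = 80, 10**9        # wire-label length and number of garbled gates
work = 2**k / C         # expected AES evaluations to break one garbler
print(f"about 2^{math.log2(work):.1f} AES calls")  # about 2^50.1
```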
2020
CRYPTO
Always Have a Backup Plan: Fully Secure Synchronous MPC with Asynchronous Fallback 📺
Protocols for secure Multi-Party Computation (MPC) can be classified according to the underlying communication model. Two prominent communication models considered in the literature are the synchronous and asynchronous models, which considerably differ in terms of the achievable security guarantees. Synchronous MPC protocols can achieve the optimal corruption threshold $n/2$ and allow every party to give input, but become completely insecure when synchrony assumptions are violated. On the other hand, asynchronous MPC protocols remain secure under arbitrary network conditions, but can tolerate only $n/3$ corruptions and parties with slow connections unavoidably cannot give input. A natural question is whether there exists a protocol for MPC that can tolerate up to $t_s < n/2$ corruptions under a synchronous network and $t_a < n/3$ corruptions even when the network is asynchronous. We answer this question by showing tight feasibility and impossibility results. More specifically, we show that such a protocol exists if and only if $t_a + 2t_s < n$ and the number of inputs taken into account under an asynchronous network is at most $n-t_s$.
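A few lines of code make the feasibility region concrete: for a given number of parties n, the script below enumerates the corruption thresholds (t_s, t_a) admitted by the tight bound t_a + 2t_s < n, subject to t_s < n/2 and t_a < n/3 as in the abstract.

```python
def feasible_thresholds(n):
    # all (t_s, t_a) pairs allowed by the tight bound t_a + 2*t_s < n
    pairs = []
    for ts in range(n):
        for ta in range(n):
            if 2 * ts < n and 3 * ta < n and ta + 2 * ts < n:
                pairs.append((ts, ta))
    return pairs

print(feasible_thresholds(10))
```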
2020
CRYPTO
New Constructions of Hinting PRGs, OWFs with Encryption, and more 📺
Over the last few years, there has been a surge of new cryptographic results, including laconic oblivious transfer, (anonymous/hierarchical) identity-based encryption, trapdoor functions, chosen-ciphertext security transformations, and designated-verifier zero-knowledge proofs, due to a beautiful framework recently introduced in the works of Cho et al. (Crypto 2017) and Dottling and Garg (Crypto 2017). The primitive of one-way function with encryption (OWFE) and its relatives (chameleon encryption, one-time signatures with encryption, hinting PRGs, trapdoor hash encryption, batch encryption) has been a centerpiece in all these results. While there exist multiple realizations of OWFE (and its relatives) from a variety of assumptions such as CDH, Factoring, and LWE, all such constructions fall under the same general "missing block" framework. Although this framework has been instrumental in opening up a new pathway towards various cryptographic functionalities via the abstraction of OWFE (and its relatives), it has been accompanied by undesirable inefficiencies that might inhibit a much wider adoption in many practical scenarios. Motivated by the surging importance of the OWFE abstraction (and its relatives), a natural question to ask is whether the existing approaches can be diversified, not only to obtain more constructions from different assumptions, but also to develop newer frameworks. We believe answering this question will eventually lead to important and previously unexplored performance trade-offs in the overarching applications of this novel cryptographic paradigm. In this work, we propose a new accumulation-style framework for building a new class of OWFE as well as hinting PRG constructions, with a special focus on achieving shorter ciphertext size and shorter public parameter size (respectively). Such performance improvements parlay into shorter parameters in their corresponding applications. Briefly, we explore the following performance trade-offs: (1) for OWFE, our constructions outperform in terms of ciphertext size as well as encryption time, but this comes at the cost of larger evaluation and setup times; (2) for hinting PRGs, our constructions provide a rather dramatic trade-off between evaluation time and parameter size, with our construction leading to a significantly shorter public parameter size. The trade-off enabled by our hinting PRG construction also has interesting implications for the CPA-to-CCA transformations mentioned above. We also provide concrete performance measurements for our constructions and compare them with existing approaches. We believe highlighting such trade-offs will lead to wider adoption of these abstractions in a practical sense.
2020
CRYPTO
Two-Sided Malicious Security for Private Intersection-Sum with Cardinality 📺
Private intersection-sum with cardinality allows two parties, where each party holds a private set and one of the parties additionally holds a private integer value associated with each element in her set, to jointly compute the cardinality of the intersection of the two sets as well as the sum of the associated integer values for all the elements in the intersection, and nothing beyond that. We present a new construction for private intersection sum with cardinality that provides malicious security with abort and guarantees that both parties receive the output upon successful completion of the protocol. A central building block for our constructions is a primitive called shuffled distributed oblivious PRF (DOPRF), which is a PRF that offers oblivious evaluation using a secret key shared between two parties, and in addition to this allows obliviously permuting the PRF outputs of several parallel oblivious evaluations. We present the first construction for shuffled DOPRF with malicious security. We further present several new sigma proof protocols for relations across Pedersen commitments, ElGamal encryptions, and Camenisch-Shoup encryptions that we use in our main construction, for which we develop new batching techniques to reduce communication. We implement and evaluate the efficiency of our protocol and show that we can achieve communication cost that is only 4-5x greater than the most efficient semi-honest protocol. When measuring monetary cost of executing the protocol in the cloud, our protocol is 25x more expensive than the semi-honest protocol. Our construction also allows for different parameter regimes that enable trade-offs between communication and computation.
2020
CRYPTO
Private Set Intersection in the Internet Setting From Lightweight Oblivious PRF 📺
We present a new protocol for two-party private set intersection (PSI) with semi-honest security in the plain model and one-sided malicious security in the random oracle model. Our protocol achieves a better balance between computation and communication than existing PSI protocols. Specifically, our protocol is the fastest in networks with moderate bandwidth (e.g., 30–100 Mbps). Considering the monetary cost (proposed by Pinkas et al. in CRYPTO 2019) of running the protocol on a cloud computing service, our protocol also compares favorably. Underlying our PSI protocol is a new lightweight multi-point oblivious pseudorandom function (OPRF) protocol based on oblivious transfer (OT) extension. We believe this new protocol may be of independent interest.
2020
CRYPTO
Spartan: Efficient and general-purpose zkSNARKs without trusted setup 📺
This paper introduces Spartan, a new family of zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs) for rank-1 constraint satisfiability (R1CS), an NP-complete language that generalizes arithmetic circuit satisfiability. A distinctive feature of Spartan is that it offers the first zkSNARKs without trusted setup (i.e., transparent zkSNARKs) for NP where verifying a proof incurs sub-linear costs, without requiring uniformity in the NP statement's structure. Furthermore, Spartan offers zkSNARKs with a time-optimal prover, a property that has remained elusive for nearly all zkSNARKs in the literature. To achieve these results, we introduce new techniques that we compose with the sum-check protocol, a seminal interactive proof protocol: (1) computation commitments, a primitive to create a succinct commitment to a description of a computation; this technique is crucial for a verifier to achieve sub-linear costs after investing a one-time, public computation to preprocess a given NP statement; (2) SPARK, a cryptographic compiler to transform any existing extractable polynomial commitment scheme for multilinear polynomials into one that efficiently handles sparse multilinear polynomials; this technique is critical for achieving a time-optimal prover; and (3) a compact encoding of an R1CS instance as a low-degree polynomial. The end result is a public-coin succinct interactive argument of knowledge for NP (which can be viewed as a succinct variant of the sum-check protocol); we transform it into a zkSNARK using prior techniques. By applying SPARK to different commitment schemes, we obtain several zkSNARKs where the verifier's costs and the proof size range from $O(\log^2{n})$ to $O(\sqrt{n})$ depending on the underlying commitment scheme ($n$ denotes the size of the NP statement). These schemes do not require a trusted setup except for one that requires a universal trusted setup. We implement Spartan as a library in about 8,000 lines of Rust. We use the library to build a transparent zkSNARK in the random oracle model where security holds under the discrete logarithm assumption. We experimentally evaluate it and compare with recent zkSNARKs for R1CS instance sizes up to $2^{20}$ constraints. Among schemes without trusted setup, Spartan offers the fastest prover, with speedups of 36--$152\times$ depending on the baseline, produces proofs that are shorter by 1.2--$416\times$, and incurs the lowest verification times, with speedups of 3.6--$1326\times$. When compared to the state-of-the-art zkSNARK with trusted setup, Spartan's prover is $2\times$ faster for arbitrary R1CS instances and $16\times$ faster for data-parallel workloads.
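Since the sum-check protocol is the engine Spartan builds on, a toy version may help; the Python below runs the textbook protocol for a small multilinear polynomial (a stand-in for Spartan's R1CS-derived polynomial, with brute-force sums that a real prover would avoid).

```python
import random

P = 2**61 - 1  # prime modulus for a toy field

def g(x):
    # toy multilinear polynomial in 3 variables
    x1, x2, x3 = x
    return (x1 * x2 + 2 * x3 + x1 * x2 * x3) % P

def boolean_sum(prefix, nvars):
    # sum of g over all Boolean completions of the fixed prefix
    free = nvars - len(prefix)
    total = 0
    for bits in range(2 ** free):
        total += g(prefix + [(bits >> i) & 1 for i in range(free)])
    return total % P

def sumcheck(nvars=3):
    claim = boolean_sum([], nvars)              # prover's claimed sum
    prefix = []
    for _ in range(nvars):
        e0 = boolean_sum(prefix + [0], nvars)   # univariate restriction
        e1 = boolean_sum(prefix + [1], nvars)   # (linear, so 2 points)
        assert (e0 + e1) % P == claim           # verifier's round check
        r = random.randrange(P)                 # random challenge
        claim = (e0 + r * (e1 - e0)) % P        # evaluate the line at r
        prefix.append(r)
    assert claim == g(prefix)   # final query to the polynomial oracle
    return True

print(sumcheck())
```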
2020
CRYPTO
A non-PCP Approach to Succinct Quantum-Safe Zero-Knowledge 📺
Today's most compact zero-knowledge arguments are based on the hardness of the discrete logarithm problem and related classical assumptions. If one is interested in quantum-safe solutions, then all of the known techniques stem from the PCP-based framework of Kilian (STOC 92), which can be instantiated based on the hardness of any collision-resistant hash function. Both approaches produce asymptotically logarithmic-sized arguments but, by exploiting extra algebraic structure, the discrete logarithm arguments are a few orders of magnitude more compact in practice than the generic constructions. In this work, we present the first (poly)-logarithmic post-quantum zero-knowledge arguments that deviate from the PCP approach. At the core of succinct zero-knowledge proofs are succinct commitment schemes (in which the commitment and the opening proof are sub-linear in the message size), and we propose two such constructions based on the hardness of the (Ring)-Short Integer Solution (Ring-SIS) problem, each having certain trade-offs. For commitments to $N$ secret values, the communication complexity of our first scheme is $\tilde{O}(N^{1/c})$ for any positive integer $c$, and $O(\log^2 N)$ for the second. Both of our protocols have somewhat large slack, which in lattice constructions is the ratio of the norm of the extracted secrets to the norm of the secrets that the honest prover uses in the proof; the lower this factor, the smaller we can choose the practical parameters. For a fixed value of this factor, our $\tilde{O}(N^{1/c})$-argument actually achieves lower communication complexity. Both of these are a significant theoretical improvement over the previously best lattice construction by Bootle et al. (CRYPTO 2018), which gave $O(\sqrt{N})$-sized proofs.
2020
CRYPTO
Lower Bounds for Encrypted Multi-Maps and Searchable Encryption in the Leakage Cell Probe Model 📺
Encrypted multi-maps (EMMs) enable clients to outsource the storage of a multi-map to a potentially untrusted server while maintaining the ability to perform operations in a privacy-preserving manner. EMMs are an important primitive, as they are an integral building block for many practical applications such as searchable encryption and encrypted databases. In this work, we formally examine the tradeoffs between privacy and efficiency for EMMs. Currently, all known dynamic EMMs with constant overhead reveal whether two operations are performed on the same key or not, which we denote as the global key-equality pattern. In our main result, we present strong evidence that the leakage of the global key-equality pattern is inherent for any dynamic EMM construction with $O(1)$ efficiency. In particular, we consider the slightly smaller leakage of the decoupled key-equality pattern, where leakage of key-equality between update and query operations is decoupled and the adversary only learns whether two operations of the same type are performed on the same key or not. We show that any EMM with at most decoupled key-equality pattern leakage incurs $\Omega(\log n)$ overhead in the leakage cell probe model. This is tight, as there exist ORAM-based constructions of EMMs with logarithmic slowdown that leak no more than the decoupled key-equality pattern (and, actually, much less). Furthermore, we present stronger lower bounds showing that encrypted multi-maps that leak at most the decoupled key-equality pattern, but are able to perform one of either the update or query operations in the plaintext, still require $\Omega(\log n)$ overhead. Finally, we extend our lower bounds to show that dynamic, response-hiding searchable encryption schemes must also incur $\Omega(\log n)$ overhead, even when one of either the document updates or searches may be performed in the plaintext.
2020
CRYPTO
Generically Speeding-Up Repeated Squaring is Equivalent to Factoring: Sharp Thresholds for All Generic-Ring Delay Functions 📺
Despite the fundamental importance of delay functions, repeated squaring in RSA groups (Rivest, Shamir and Wagner '96) is the only candidate offering both a useful structure and a realistic level of practicality. Somewhat unsatisfyingly, its sequentiality is provided directly by assumption (i.e., the function is simply assumed to be a delay function). We prove sharp thresholds on the sequentiality of all generic-ring delay functions relative to an RSA modulus, based on the hardness of factoring in the standard model. In particular, we show that generically speeding up repeated squaring (even with a preprocessing stage and any polynomial number of parallel processors) is equivalent to factoring. More generally, based on the (essential) hardness of factoring, we prove that any generic-ring function is in fact a delay function, admitting a sharp sequentiality threshold that is determined by our notion of sequentiality depth. Moreover, we show that generic-ring functions admit not only sharp sequentiality thresholds but also sharp pseudorandomness thresholds.
2020
CRYPTO
Scalable Pseudorandom Quantum States 📺
Efficiently sampling a quantum state that is hard to distinguish from a truly random quantum state is an elementary task in quantum information theory that has both computational and physical uses. This is often referred to as a pseudorandom (quantum) state generator, or PRS generator for short. In existing constructions of PRS generators, security scales with the number of qubits in the states, i.e., the (statistical) security parameter for an $n$-qubit PRS is roughly $n$. Perhaps counter-intuitively, $n$-qubit PRS are not known to imply $k$-qubit PRS even for $k<n$. Therefore the question of scalability for PRS was thus far open: is it possible to construct $n$-qubit PRS generators with security parameter $\lambda$ for all $n$ and $\lambda$? Indeed, we believe that PRS with tiny (even constant) $n$ and large $\lambda$ can be quite useful. We resolve the problem in this work, showing that any quantum-secure one-way function implies scalable PRS. We follow the paradigm of first showing a statistically secure construction when given oracle access to a random function, and then replacing the random function with a quantum-secure (classical) pseudorandom function to achieve computational security. However, our methods deviate significantly from prior works, since scalable pseudorandom states require randomizing the amplitudes of the quantum state, and not just the phase as in all prior works. We show how to achieve this using Gaussian sampling.
2020
CRYPTO
Overcoming Impossibility Results in Composable Security using Interval-Wise Guarantees 📺
Composable security definitions, at times called simulation-based definitions, provide strong security guarantees that hold in any context. However, they are also met with some skepticism due to many impossibility results; goals such as commitments and zero-knowledge that are achievable in a stand-alone sense were shown to be unachievable composably (without a setup) since provably no efficient simulator exists. In particular, in the context of adaptive security, the so-called "simulator commitment problem" arises: once a party gets corrupted, an efficient simulator is unable to be consistent with its pre-corruption outputs. A natural question is whether such impossibility results are unavoidable or only artifacts of frameworks being too restrictive. In this work, we propose a novel type of composable security statement that evades the commitment problem. Our new type is able to express the composable guarantees of schemes that previously did not have a clear composable understanding. To this end, we leverage the concept of system specifications in the Constructive Cryptography framework, capturing the conjunction of several interval-wise guarantees, each specifying the guarantees between two events. We develop the required theory and present the corresponding new composition theorem. We present three applications of our theory. First, we show in the context of symmetric encryption with adaptive corruption how our notion naturally captures the expected confidentiality guarantee---the messages remain confidential until either party gets corrupted---and that it can be achieved by any standard semantically secure scheme (negating the need for non-committing encryption). Second, we present a composable formalization of (so far only known to be standalone secure) commitment protocols, which is instantiable without a trusted setup like a CRS. We show it to be sufficient for being used in coin tossing over the telephone, one of the early intuitive applications of commitments. Third, we reexamine a result by Hofheinz, Matt, and Maurer [Asiacrypt'15] implying that IND-ID-CPA security is not the right notion for identity-based encryption, unmasking this claim as an unnecessary framework artifact.
2020
CRYPTO
Adaptively Secure Constrained Pseudorandom Functions in the Standard Model 📺
Constrained pseudorandom functions (CPRFs) allow learning "constrained" PRF keys that can evaluate the PRF on a subset of the input space, or based on some predicate. First introduced by Boneh and Waters [AC'13], Kiayias et al. [CCS'13] and Boyle et al. [PKC'14], they have been shown to be a useful cryptographic primitive with many applications. These applications often require CPRFs to be adaptively secure, which allows the adversary to learn PRF values and constrained keys in an arbitrary order. However, no construction of adaptively secure CPRFs based on a standard assumption in the standard model was known for any non-trivial class of predicates. Moreover, even relying on strong tools such as indistinguishability obfuscation (IO), the state-of-the-art construction of adaptively secure CPRFs in the standard model only supports the limited class of NC1 predicates. In this work, we develop new adaptively secure CPRFs for various predicates from different types of assumptions in the standard model. Our results are summarized below. - We construct adaptively secure and O(1)-collusion-resistant CPRFs for t-conjunctive normal form (t-CNF) predicates from one-way functions (OWFs), where t is a constant. Here, O(1)-collusion-resistance means that we can allow the adversary to obtain a constant number of constrained keys. Note that t-CNF includes bit-fixing predicates as a special case. - We construct adaptively secure and single-key CPRFs for inner-product predicates from the learning with errors (LWE) assumption. Here, single-key means that we only allow the adversary to learn one constrained key. Note that inner-product predicates include t-CNF predicates for a constant t as a special case. Thus, this construction supports a more expressive class of predicates than the first construction, though it loses collusion-resistance and relies on a stronger assumption. - We construct adaptively secure and O(1)-collusion-resistant CPRFs for all circuits from the LWE assumption and indistinguishability obfuscation (IO). The first and second constructions are the first CPRFs for any non-trivial predicates to achieve adaptive security outside of the random oracle model without relying on strong cryptographic assumptions. Moreover, the first construction is also the first to achieve any notion of collusion-resistance in this setting. In addition, we prove that the first and second constructions satisfy weak 1-key privacy, which roughly means that a constrained key does not reveal the corresponding constraint. The third construction is an improvement over previous adaptively secure CPRFs for less expressive predicates based on IO in the standard model.
2020
CRYPTO
Functional Encryption for Attribute-Weighted Sums from k-Lin 📺
We present functional encryption schemes for attribute-weighted sums, where encryption takes as input N attribute-value pairs (x_i, z_i), where x_i is public and z_i is private; secret keys are associated with arithmetic branching programs f; and decryption returns the weighted sum \sum_{i=1}^N f(x_i) z_i while leaking no additional information about the z_i's. Our main construction achieves (1) compact public parameters and key sizes that are independent of N, where the secret key can decrypt a ciphertext for any a priori unbounded N; (2) short ciphertexts that grow with N and the size of z_i but not x_i; (3) simulation-based security against unbounded collusions; and (4) security relying on the standard k-linear assumption in prime-order bilinear groups.
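The functionality is easy to state in code: for public attributes x_i and private values z_i, decryption with a key for f reveals exactly the weighted sum below and nothing else about the z_i's (toy f and values, purely illustrative).

```python
def f(x):
    # stand-in for an arithmetic branching program over the attribute
    return (3 * x + 1) % 101

xs = [2, 5, 7]      # public attribute parts x_i
zs = [10, 20, 30]   # private parts z_i

# what a key for f decrypts: sum_i f(x_i) * z_i
print(sum(f(x) * z for x, z in zip(xs, zs)))
```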
2020
CRYPTO
Multiparty Generation of an RSA Modulus 📺
We present a new multiparty protocol for the distributed generation of biprime RSA moduli, with security against any subset of maliciously colluding parties assuming oblivious transfer and the hardness of factoring. Our protocol is highly modular, and its uppermost layer can be viewed as a template that generalizes the structure of prior works and leads to a simpler security proof. We introduce a combined sampling-and-sieving technique that eliminates both the inherent leakage in the approach of Frederiksen et al. (Crypto'18), and the dependence upon additively homomorphic encryption in the approach of Hazay et al. (JCrypt'19). We combine this technique with an efficient, privacy-free check to detect malicious behavior retroactively when a sampled candidate is not a biprime, and thereby overcome covert rejection-sampling attacks and achieve both asymptotic and concrete efficiency improvements over the previous state of the art.
2020
CRYPTO
A key-recovery timing attack on post-quantum primitives using the Fujisaki-Okamoto transformation and its application on FrodoKEM 📺
In the implementation of post-quantum primitives, it is well known that all computations handling secret information must be implemented to run in constant time. Using the Fujisaki-Okamoto transformation or any of its different variants, a CPA-secure primitive can be converted into an IND-CCA-secure KEM. In this paper we show that, although the transformation does not handle secret information apart from calls to the CPA-secure primitive, it has to be implemented in constant time: if the ciphertext comparison step in the transformation leaks side-channel information, we can launch a key-recovery attack. Several proposed schemes in round 2 of the NIST post-quantum standardization project are susceptible to the proposed attack, and we develop and show the details of the attack on one of them, namely FrodoKEM. The attack is implemented on the reference implementation of FrodoKEM, which is claimed to be secure against all timing attacks. In the experiments, the attack code is able to extract the secret key for all security levels using about $2^{30}$ decapsulation calls.
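The leak at issue is the classic early-exit comparison; the contrast below is a generic sketch (not FrodoKEM's actual code) of the leaky pattern the attack exploits versus the constant-time check that the ciphertext comparison in FO should use.

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    # early-exit comparison: running time depends on the position of
    # the first differing byte, which is the side channel exploited
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_equal(a: bytes, b: bytes) -> bool:
    # constant-time comparison: always scans the entire input
    if len(a) != len(b):
        return False
    acc = 0
    for x, y in zip(a, b):
        acc |= x ^ y
    return acc == 0

assert leaky_equal(b"ct", b"ct") and ct_equal(b"ct", b"ct")
assert hmac.compare_digest(b"ct", b"ct")  # preferred library helper
```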
2020
CRYPTO
Interactive Proofs for Social Graphs 📺
We consider interactive proofs for social graphs, where the verifier has only oracle access to the graph and can query for the $i^{th}$ neighbor of a vertex $v$, given $i$ and $v$. In this model, we construct a doubly-efficient public-coin two-message interactive protocol for estimating the size of the graph to within a multiplicative factor $\epsilon>0$. The verifier performs $\widetilde{O}(1/\epsilon^2 \cdot \tau \cdot \Delta)$ queries to the graph, where $\tau$ is the mixing time of the graph and $\Delta$ is the average degree of the graph. The prover runs in quasi-linear time in the number of nodes in the graph. Furthermore, we develop a framework for computing the average of essentially any (reasonable) function $f$ of vertices of the graph. Using this framework, we can estimate many health measures of social graphs such as the clustering coefficients and the average degree, where the verifier performs only a small number of queries to the graph. Using the Fiat-Shamir paradigm, we are able to transform the above protocols to a non-interactive argument in the random oracle model. The result is that any social media company (Facebook, Twitter, etc.) can publish, once and for all, a short proof for the size or health of their social network. This proof can be publicly verified by any single user using a small number of queries to the graph.
2020
CRYPTO
NIZK from LPN and Trapdoor Hash via Approximate-Correlation Intractability 📺
We present new Non-Interactive Zero-Knowledge argument systems (NIZK) based on standard assumptions that were previously not known to imply it. In particular, we rely on the hardness of both the learning parity with noise (LPN) assumption and the existence of trapdoor hash functions (TDH, defined by Döttling et al., Crypto 2019). TDH can be based on a number of standard assumptions, including DDH, QR, DCR, and LWE. We rely on the Correlation Intractability (CI) framework for converting $\Sigma$-protocols into NIZK, but deviate from prior works in considering CI for searchable relations where the search function has a probabilistic representation by a simple function class (linear or constant-degree in our instantiations). Namely, there is a distribution over simple functions that computes each output bit of the search function with all but small (constant) probability. We present a new tool for proving CI for such function classes via a notion that we call Approximate-Correlation Intractability. This notion requires that CI holds even against approximations of a given function class. We show that approximate-correlation intractability for just constant-degree functions suffices if the underlying $\Sigma$-protocol is implemented using an extractable commitment scheme with approximately low-degree extraction, and that such a commitment scheme can be constructed based on LPN. We then show how to construct approximate CI hash functions for this class from any suitable rate-1 TDH (with an enhanced correctness property that is satisfied by all existing constructions).
2020
CRYPTO
Random Self-reducibility of Ideal-SVP via Arakelov Random Walks 📺
Fixing a number field, the space of all ideal lattices, up to isometry, is naturally an Abelian group, called the *Arakelov class group*. This fact, well known to number theorists, has so far not been explicitly used in the literature on lattice-based cryptography. Remarkably, the Arakelov class group is a combination of two groups that have already led to significant cryptanalytic advances: the class group and the unit torus. In the present article, we show that the Arakelov class group has more to offer. We start with the development of a new versatile tool: we prove that, subject to the Riemann Hypothesis for Hecke L-functions, certain random walks on the Arakelov class group have a rapid mixing property. We then exploit this result to relate the average-case and the worst-case of the Shortest Vector Problem in ideal lattices. Our reduction appears particularly sharp: for Hermite-SVP in ideal lattices of certain cyclotomic number fields, it loses no more than a $\tilde O(\sqrt n)$ factor on the Hermite approximation factor. Furthermore, we suggest that this rapid-mixing theorem should find other applications in cryptography and in algorithmic number theory.
2020
CRYPTO
Nearly Optimal Robust Secret Sharing against Rushing Adversaries 📺
Robust secret sharing is a strengthening of standard secret sharing that allows the shared secret to be recovered even if some of the shares being used in the reconstruction have been adversarially modified. In this work, we study the setting where, out of all the $n$ shares, the adversary is allowed to adaptively corrupt and modify up to $t$ shares, where $n = 2t+1$. (Note that if the adversary is allowed to modify any more shares, then correct reconstruction would be impossible.) Further, we deal with \emph{rushing} adversaries, meaning that the adversary is allowed to see the honest parties' shares before modifying its own shares. It is known that when $n = 2t+1$, to share a secret of length $m$ bits and recover it with error less than $2^{-\lambda}$, shares of size at least $m+\lambda$ bits are needed. Recently, Bishop, Pastro, Rajaraman, and Wichs (EUROCRYPT 2016) constructed a robust secret sharing scheme with shares of size $m + O(\lambda\cdot\mathrm{polylog}(n,m,\lambda))$ bits that is secure in this setting against non-rushing adversaries. Later, Fehr and Yuan (EUROCRYPT 2019) constructed a scheme that is secure against rushing adversaries, but has shares of size $m + O(\lambda\cdot n^{\epsilon}\cdot \mathrm{polylog}(n,m,\lambda))$ bits for an arbitrary constant $\epsilon > 0$. They also showed a variant of their construction with share size $m + O(\lambda\cdot\mathrm{polylog}(n,m,\lambda))$ bits, but with super-polynomial reconstruction time. We present a robust secret sharing scheme that is simultaneously close-to-optimal in all of these respects -- it is secure against rushing adversaries, has shares of size $m+O(\lambda \log{n} (\log{n}+\log{m}))$ bits, and has polynomial-time sharing and reconstruction. Central to our construction is a polynomial-time algorithm for a problem on semi-random graphs that arises naturally in the paradigm of local authentication of shares used by us and in the aforementioned work.
2020
CRYPTO
Efficient Constant-Round MPC with Identifiable Abort and Public Verifiability 📺
Recent years have seen a tremendous growth in the interest in secure multiparty computation (MPC) and its applications. While much progress has been made concerning its efficiency, many current, state-of-the-art protocols are vulnerable to Denial of Service attacks, where a cheating party may prevent the honest parties from learning the output of the computation, whilst remaining anonymous. The security model of identifiable abort aims to prevent these attacks by allowing honest parties to agree upon the identity of a cheating party, who can then be excluded in the future. Several existing MPC protocols offer security with identifiable abort against a dishonest majority of corrupted parties. However, all of these protocols either have a round complexity that scales linearly with the depth of the circuit (and are therefore unsuitable for use in high-latency networks) or use cryptographic primitives or techniques that have a high computational overhead. In this work, we present the first efficient MPC protocols with identifiable abort in the dishonest majority setting, which run in a constant number of rounds and make only black-box use of cryptographic primitives. Our main construction is built from highly efficient primitives in a careful way to achieve identifiability at a low cost. In particular, we avoid the use of public-key operations outside of a setup phase, incurring a relatively low overhead on top of the fastest currently known constant-round MPC protocols based on garbled circuits. Our construction also avoids the use of adaptively secure primitives and heavy zero-knowledge machinery, which was inherent in previous works. In addition, we show how to upgrade our protocol to achieve public verifiability using a public bulletin board, allowing any external party to verify correctness of the computation or identify a cheating party.
2020
CRYPTO
Shorter Non-Interactive Zero-Knowledge Arguments and ZAPs for Algebraic Languages 📺
We put forth a new framework for building pairing-based non-interactive zero-knowledge (NIZK) arguments for a wide class of algebraic languages, which are an extension of linear languages, containing disjunctions of linear languages and more. Our approach differs from the Groth-Sahai methodology in that we rely on pairings to compile a $\Sigma$-protocol into a NIZK. Our framework enjoys a number of interesting features: - conceptual simplicity: parameters derive from the $\Sigma$-protocol; - proofs as short as those resulting from the Fiat-Shamir heuristic applied to the underlying $\Sigma$-protocol; - fully adaptive soundness and perfect zero-knowledge in the common random string model with a single random group element as CRS; - yields simple and efficient two-round, public-coin, publicly-verifiable perfect witness-indistinguishable (WI) arguments (ZAPs) in the plain model. To our knowledge, this is the first construction of two-round statistically witness-indistinguishable arguments from pairing assumptions. Our proof system relies on a new (static, falsifiable) assumption over pairing groups which generalizes the standard kernel Diffie-Hellman assumption in a natural way and holds in the generic group model (GGM) and in the algebraic group model (AGM). Replacing Groth-Sahai NIZKs with our new proof system allows us to improve several important cryptographic primitives. In particular, we obtain the shortest tightly-secure structure-preserving signature scheme (a core component of anonymous credentials), the shortest tightly-secure quasi-adaptive NIZK with unbounded simulation soundness (which in turn implies the shortest tightly-mCCA-secure cryptosystem), and shorter ring signatures.
2020
CRYPTO
Non-Malleability against Polynomial Tampering 📺
We present the first explicit construction of a non-malleable code that can handle tampering functions that are bounded-degree polynomials. Prior to our work, this was only known for degree-1 polynomials (affine tampering functions), due to Chattopadhyay and Li (STOC 2017). As a direct corollary, we obtain an explicit non-malleable code that is secure against tampering by bounded-size arithmetic circuits. We show applications of our non-malleable code in constructing non-malleable secret sharing schemes that are robust against bounded-degree polynomial tampering. In fact our result is stronger: we can handle adversaries that can adaptively choose the polynomial tampering function based on initial leakage of a bounded number of shares. Our results are derived from explicit constructions of seedless non-malleable extractors that can handle bounded-degree polynomial tampering functions. Prior to our work, no such result was known even for degree-2 (quadratic) polynomials.
2020
CRYPTO
The Summation-Truncation Hybrid: Reusing Discarded Bits for Free 📺
A well-established PRP-to-PRF conversion design is truncation: one evaluates an $n$-bit pseudorandom permutation on a certain input, and truncates the result to $a$ bits. The construction is known to achieve tight $2^{n-a/2}$ security. Truncation has gained popularity due to its appearance in the GCM-SIV key derivation function (ACM CCS 2015). This key derivation function makes four evaluations of AES, truncates the outputs to $n/2$ bits, and concatenates these to get a $2n$-bit subkey. In this work, we demonstrate that truncation is wasteful. In more detail, we present the Summation-Truncation Hybrid (STH). At a high level, the construction consists of two parallel evaluations of truncation, where the truncated $(n-a)$-bit chunks are not discarded but rather summed together and appended to the output. We prove that STH achieves a similar security level as truncation, and thus that the $n-a$ bits of extra output come for free. In the application of GCM-SIV, the current key derivation can be used to output $3n$ bits of random material, or it can be reduced to three primitive evaluations. Both changes come with no security loss.
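A toy, byte-granular sketch of the construction as just described, using AES from pycryptodome as the $n$-bit PRP (the choice of AES and the byte granularity are ours for illustration; as with plain truncation, the two inputs must be distinct):

```python
import os
from Crypto.Cipher import AES  # pycryptodome

def sth(key: bytes, x1: bytes, x2: bytes, a_bytes: int) -> bytes:
    """Summation-Truncation Hybrid over AES (n = 128 bits), byte-granular.

    Two parallel PRP calls; each output is truncated to a_bytes, and the two
    discarded (16 - a_bytes)-byte tails are XOR-summed and appended."""
    assert x1 != x2 and len(x1) == len(x2) == 16
    prp = AES.new(key, AES.MODE_ECB)
    y1, y2 = prp.encrypt(x1), prp.encrypt(x2)
    tail = bytes(u ^ v for u, v in zip(y1[a_bytes:], y2[a_bytes:]))
    return y1[:a_bytes] + y2[:a_bytes] + tail

key = os.urandom(16)
out = sth(key, b"\x00" * 16, b"\x01" * 16, a_bytes=8)  # 8 + 8 + 8 = 24 bytes out
```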
2020
CRYPTO
Security Analysis and Improvements for the IETF MLS Standard for Group Messaging 📺
Secure messaging (SM) protocols allow users to communicate securely over untrusted infrastructure. In contrast to most other secure communication protocols (such as TLS, SSH, or WireGuard), SM sessions may be long-lived (e.g., years) and highly asynchronous. In order to deal with likely state compromises of users during the lifetime of a session, SM protocols not only protect authenticity and privacy, but also guarantee forward secrecy (FS) and post-compromise security (PCS). The former ensures that messages sent and received before a state compromise remain secure, while the latter ensures that users can recover from state compromise as a consequence of normal protocol usage. SM has received considerable attention in the two-party case, where prior work has studied the well-known double-ratchet paradigm, in particular, and SM as a cryptographic primitive, in general. Unfortunately, this paradigm does not scale well to the problem of secure group messaging (SGM). In order to address the lack of satisfactory SGM protocols, the IETF has launched the message-layer security (MLS) working group, which aims to standardize an eponymous SGM protocol. In this work we analyze the TreeKEM protocol, which is at the core of the SGM protocol proposed by the MLS working group. On a positive note, we show that TreeKEM achieves PCS in isolation (and slightly more). However, we observe that the current version of TreeKEM does not provide an adequate form of FS. More precisely, our work proceeds by formally capturing the exact security of TreeKEM as a so-called continuous group key agreement (CGKA) protocol, which we believe to be a primitive of independent interest. To address the insecurity of TreeKEM, we propose a simple modification to TreeKEM inspired by recent work of Jost et al. (EUROCRYPT '19) and an idea due to Kohbrok (MLS Mailing List). We then show that the modified version of TreeKEM comes with almost no efficiency degradation but achieves optimal (according to the MLS specification) CGKA security, including FS and PCS. Our work also lays out how a CGKA protocol can be used to design a full SGM protocol.
2020
CRYPTO
LWE with Side Information: Attacks and Concrete Security Estimation 📺
We propose a framework for cryptanalysis of lattice-based schemes when side information -- in the form of ``hints'' -- about the secret and/or error is available. Our framework generalizes the so-called primal lattice reduction attack and allows the progressive integration of hints before running a final lattice reduction step. Our techniques for integrating hints include sparsifying the lattice, projecting onto and intersecting with hyperplanes, and/or altering the distribution of the secret vector. Our main contribution is to propose a toolbox and a methodology to integrate such hints into lattice reduction attacks and to predict the performance of those lattice attacks with side information. While initially designed for side-channel information, our framework can also be used in other cases: exploiting decryption failures, or simply exploiting constraints imposed by certain schemes (LAC, Round5, NTRU) that were previously not known to (slightly) benefit from lattice attacks. We implement a Sage 9.0 toolkit to actually mount such attacks with hints when computationally feasible, and to predict their performance on larger instances. We provide several end-to-end application examples, such as an improvement of a single-trace attack on Frodo by Bos et al. (SAC 2018). Contrary to ad-hoc practical attacks exploiting side-channel leakage, our work is a generic way to estimate security loss even given very little side-channel information.
2020
CRYPTO
Non-Malleable Secret Sharing against Bounded Joint-Tampering Attacks in the Plain Model 📺
Secret sharing enables a dealer to split a secret into a set of shares, in such a way that certain authorized subsets of shareholders can reconstruct the secret, whereas all unauthorized subsets cannot. Non-malleable secret sharing (Goyal and Kumar, STOC 2018) additionally requires that, even if the shares have been tampered with, the reconstructed secret is either the original or a completely unrelated one. In this work, we construct non-malleable secret sharing tolerating $p$-time {\em joint-tampering} attacks in the plain model (in the computational setting), where the latter means that, for any $p>0$ fixed {\em a priori}, the attacker can tamper with the same target secret sharing up to $p$ times. In particular, assuming one-to-one one-way functions, we obtain: - A secret sharing scheme for threshold access structures which tolerates joint $p$-time tampering with subsets of the shares of maximal size ({\em i.e.}, matching the privacy threshold of the scheme). This holds in a model where the attacker commits to a partition of the shares into non-overlapping subsets, and keeps tampering jointly with the shares within such a partition (so-called {\em selective partitioning}). - A secret sharing scheme for general access structures which tolerates joint $p$-time tampering with subsets of the shares of size $O(\sqrt{\log n})$, where $n$ is the number of parties. This holds in a stronger model where the attacker is allowed to adaptively change the partition within each tampering query, under the restriction that once a subset of the shares has been tampered with jointly, that subset is always either tampered jointly or not modified by other tampering queries (so-called {\em semi-adaptive partitioning}). At the heart of our result for selective partitioning lies a new technique showing that every one-time {\em statistically} non-malleable secret sharing against joint tampering is in fact {\em leakage-resilient} non-malleable ({\em i.e.}, the attacker can leak jointly from the shares prior to tampering). We believe this may be of independent interest, and in fact we show it implies lower bounds on the share size and randomness complexity of statistically non-malleable secret sharing against {\em independent} tampering.
2020
CRYPTO
Stacked Garbling: Garbled Circuit Proportional to Longest Execution Path 📺
Secure two party computation (2PC) of arbitrary programs can be efficiently achieved using garbled circuits (GC). The bottleneck of GC efficiency is communication. It is widely believed that it is necessary to transmit the entire GC during 2PC, even for conditional branches that are not taken. This folklore belief is false. We propose a novel GC technique, stacked garbling, that eliminates the communication cost of inactive conditional branches. We extend the ideas of conditional GC evaluation explored in (Kolesnikov, Asiacrypt 18) and (Heath and Kolesnikov, Eurocrypt 20). Unlike these works, ours is for general 2PC where no player knows which conditional branch is taken. Our garbling scheme, Stack, requires communication proportional to the longest execution path rather than to the entire circuit. Stack is compatible with state-of-the-art techniques, such as free XOR and half-gates. Stack is a garbling scheme. As such, it can be plugged into a variety of existing protocols, and the resulting round complexity is the same as that of standard GC. The approach does incur computation cost quadratic in the conditional branching factor vs linear in standard schemes, but the tradeoff is beneficial for most programs: GC computation even on weak hardware is faster than GC transmission on fast channels. We implemented Stack in C++. Stack reduces communication cost by approximately the branching factor: for 16 branches, communication is reduced by 10.5x. In terms of wall-clock time for circuits with branching factor 16 over a 50 Mbps WAN on a laptop, Stack outperforms state-of-the-art half-gates-based 2PC by more than 4x.
2020
CRYPTO
Out of Oddity -- New Cryptanalytic Techniques against Symmetric Primitives Optimized for Integrity Proof Systems 📺
The security and performance of many integrity proof systems like SNARKs, STARKs and Bulletproofs highly depend on the underlying hash function. For this reason several new proposals have recently been developed. These primitives obviously require an in-depth security evaluation, especially since their implementation constraints have led to less standard design approaches. This work compares the security levels offered by two recent families of such primitives, namely GMiMC and HadesMiMC. We exhibit low-complexity distinguishers against the GMiMC and HadesMiMC permutations for most parameters proposed in recently launched public challenges for STARK-friendly hash functions. In the more concrete setting of the sponge construction corresponding to the practical use in the ZK-STARK protocol, we present a practical collision attack on a round-reduced version of GMiMC and a preimage attack on some instances of HadesMiMC. To achieve those results, we adapt and generalize several cryptographic techniques to fields of odd characteristic.
2020
CRYPTO
Fast and Secure Updatable Encryption 📺
Updatable encryption allows a client to outsource ciphertexts to some untrusted server and periodically rotate the encryption key. The server can update ciphertexts from an old key to a new key with the help of an update token, received from the client, which should not reveal anything about plaintexts to an adversary. We provide a new and highly efficient suite of updatable encryption schemes that we collectively call SHINE. In the variant designed for short messages, ciphertext generation consists of applying one permutation and one exponentiation (per message block), while updating ciphertexts requires just one exponentiation. Variants for longer messages provide much stronger security guarantees than prior work that has comparable efficiency. We present a new confidentiality notion for updatable encryption schemes that implies prior notions. We prove that SHINE is secure under our new confidentiality definition while also providing ciphertext integrity.
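The update mechanism can be illustrated with exponentiation in a toy group (all parameters here are ours for illustration; the actual scheme works in a prime-order elliptic-curve group, first applies a random-looking permutation to the nonce and message, and also provides the integrity guarantees this sketch omits). The key point is that the server rotates a ciphertext with a single exponentiation by the token, without ever seeing keys or plaintext.

```python
import secrets
from math import gcd

P = 2**127 - 1        # Mersenne prime; toy multiplicative group Z_P^*
ORDER = P - 1         # exponents live modulo the group order

def keygen() -> int:
    while True:
        k = secrets.randbelow(ORDER)
        if k > 1 and gcd(k, ORDER) == 1:  # k must be invertible mod the order
            return k

def encrypt(k: int, m: int) -> int:       # m: message encoded as a group element
    return pow(m, k, P)

def decrypt(k: int, c: int) -> int:
    return pow(c, pow(k, -1, ORDER), P)

def update_token(old_k: int, new_k: int) -> int:
    return new_k * pow(old_k, -1, ORDER) % ORDER

def update(delta: int, c: int) -> int:    # server side: one exponentiation,
    return pow(c, delta, P)               # no access to keys or plaintext

k1, k2 = keygen(), keygen()
c1 = encrypt(k1, 1234567)
c2 = update(update_token(k1, k2), c1)
assert decrypt(k2, c2) == 1234567         # (m^k1)^(k2/k1) = m^k2
```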
2020
CRYPTO
Amplifying the Security of Functional Encryption, Unconditionally 📺
Security amplification is a fundamental problem in cryptography. In this work, we study security amplification for functional encryption. We show two main results: - For any constant $\epsilon \in (0,1)$, we can amplify an $\epsilon$-secure FE scheme for P/poly which is secure against all polynomial sized adversaries to a fully secure FE scheme for P/poly, unconditionally. - For any constant $\epsilon \in (0,1)$, we can amplify an $\epsilon$-secure FE scheme for P/poly which is secure against subexponential sized adversaries to a subexponentially secure FE scheme for P/poly, unconditionally. Furthermore, both of our amplification results preserve compactness of the underlying FE scheme. Previously, amplification results for FE were only known assuming subexponentially secure LWE. Along the way, we introduce a new form of homomorphic secret sharing called set homomorphic secret sharing that may be of independent interest. Additionally, we introduce a new technique, which allows one to argue security amplification of nested primitives, and prove a general theorem that can be used to analyze the security amplification of parallel repetitions.
2020
CRYPTO
A Classification of Computational Assumptions in the Algebraic Group Model 📺
We give a taxonomy of computational assumptions in the algebraic group model (AGM). We first analyze the Uber assumption family for bilinear groups defined by Boyen and then extend it in multiple ways to cover assumptions such as Gap Diffie-Hellman and the LRSW assumption. We show that in the AGM every member of these families reduces to the q-discrete logarithm (DL) problem, for some q that depends on the degrees of the polynomials defining the assumption. Using the meta-reduction technique, we then separate (q+1)-DL from q-DL, which thus yields a classification of all members of the extended Uber-assumption families. We finally show that there are strong assumptions, such as one-more DL, that provably fall outside our classification, as we prove that they cannot be reduced to q-DL even in the AGM.
2020
CRYPTO
Improved Primitives for MPC over Mixed Arithmetic-Binary Circuits 📺
This work introduces novel techniques to improve the translation between arithmetic and binary data types in multi-party computation. To this end, we introduce a new approach to performing these conversions, using what we call \emph{extended doubly-authenticated bits} (edaBits), which correspond to shared integers in the arithmetic domain whose bit decomposition is shared in the binary domain. These can be used to considerably increase the efficiency of non-linear operations such as truncation, secure comparison and bit-decomposition. Our edaBits are similar to the \emph{daBits} technique introduced by Rotaru et al.~(Indocrypt 2019). However, our main observations are that (1) applications that benefit from daBits can also benefit from edaBits in the same way, and (2) we can generate edaBits directly in a much more efficient way than by computing them from a set of daBits. Technically, the second contribution is much more challenging, and involves a novel cut-and-choose technique that may be of independent interest, and requires taking advantage of natural tamper-resilient properties of binary circuits that occur in our construction to obtain the best level of efficiency. Finally, we show how our edaBits can be applied to efficiently implement various non-linear protocols of interest, and we thoroughly analyze their correctness for both signed and unsigned integers. The results of this work can be applied to any corruption threshold, although they seem best suited to dishonest majority protocols such as SPDZ. We implement and benchmark our constructions, and experimentally verify that our techniques yield a substantial increase in efficiency. Our edaBits save in communication by a factor that lies between $2$ and $170$ for secure comparisons with respect to a purely arithmetic approach, and between $2$ and $60$ with respect to using daBits. Improvements in throughput per second are more subdued but still as high as a factor of $47$. We also apply our novel machinery to the tasks of biometric matching and convolutional neural networks, obtaining a noticeable improvement as well.
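For concreteness, here is what a single edaBit looks like when produced by a trusted dealer (the paper's actual contribution is generating these without a dealer, via the cut-and-choose procedure mentioned above): a random $r$ additively shared modulo $2^k$, together with boolean sharings of each bit of $r$.

```python
import secrets

def share_additive(value: int, modulus: int, parties: int) -> list[int]:
    """Additive secret sharing of value modulo modulus."""
    shares = [secrets.randbelow(modulus) for _ in range(parties - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def dealer_edabit(k: int, parties: int):
    """One edaBit: arithmetic shares of r mod 2^k plus binary shares of its bits."""
    r = secrets.randbelow(2**k)
    arithmetic = share_additive(r, 2**k, parties)
    binary = [share_additive((r >> i) & 1, 2, parties) for i in range(k)]
    return arithmetic, binary

arith, bits = dealer_edabit(k=32, parties=3)
r = sum(arith) % 2**32
assert all(((r >> i) & 1) == sum(b) % 2 for i, b in enumerate(bits))
```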
2020
CRYPTO
Leakage-Resilient Key Exchange and Two-Seed Extractors 📺
Can Alice and Bob agree on a uniformly random secret key without having any truly secret randomness to begin with? Here we consider a setting where Eve can get partial leakage on the internal state of both Alice and Bob individually before the protocol starts. They then run a protocol using their states without any additional randomness and need to agree on a shared key that looks uniform to Eve, even after observing the leakage and the protocol transcript. We focus on non-interactive (one round) key exchange (NIKE), where Alice and Bob send one message each without waiting for one another. We first consider this problem in the symmetric-key setting, where the states of Alice and Bob include a shared secret as well as individual uniform randomness. However, since Eve gets leakage on these states, Alice and Bob need to perform privacy amplification to derive a fresh secret key from them. Prior solutions require Alice and Bob to sample fresh uniform randomness during the protocol, while in our setting all of their randomness was already part of their individual states a priori and was therefore subject to leakage. We show an information-theoretic solution to this problem using a novel primitive that we call a two-seed extractor, which we in turn construct by drawing a connection to communication-complexity lower-bounds in the number-on-forehead (NOF) model. We then turn to studying this problem in the public-key setting, where the states of Alice and Bob consist of independent uniform randomness. Unfortunately, we give a black-box separation showing that leakage-resilient NIKE in this setting cannot be proven secure via a black-box reduction under any game-based assumption when the leakage is super-logarithmic. This includes virtually all assumptions used in cryptography, and even very strong assumptions such as indistinguishability obfuscation (iO). Nevertheless, we also provide positive results that get around the above separation: - We show that every key exchange protocol (e.g., Diffie-Hellman) is secure when the leakage amount is logarithmic, or potentially even greater if we assume sub-exponential security without leakage. - We notice that the black-box separation does not extend to schemes in the common reference string (CRS) model, or to schemes with preprocessing, where Alice and Bob can individually pre-process their random coins to derive their secret state prior to leakage. We give a solution in the CRS model with preprocessing using bilinear maps. We also give solutions in just the CRS model alone (without preprocessing) or just with preprocessing (without a CRS), using iO and lossy functions.
2020
EUROCRYPT
Key-Homomorphic Pseudorandom Functions from LWE with Small Modulus 📺
Pseudorandom functions (PRFs) are fundamental objects in cryptography that play a central role in symmetric-key cryptography. Although PRFs can be constructed from one-way functions generically, these black-box constructions are usually inefficient and require deep circuits to evaluate compared to direct PRF constructions that rely on specific algebraic assumptions. From lattices, one can directly construct PRFs from the Learning with Errors (LWE) assumption (or its ring variant) using the result of Banerjee, Peikert, and Rosen (Eurocrypt 2012) and its subsequent works. However, all existing PRFs in this line of work rely on the hardness of the LWE problem where the associated modulus is super-polynomial in the security parameter. In this work, we provide two new PRF constructions from the LWE problem that each focuses on either minimizing the depth of its evaluation circuit or providing key-homomorphism while relying on the hardness of the LWE problem with only a polynomial modulus. Along the way, we introduce a new variant of the LWE problem called the Learning with Rounding and Errors (LWRE) problem. We show that for certain settings of parameters, the LWRE problem is as hard as the LWE problem. We then show that the hardness of the LWRE problem naturally induces a pseudorandom synthesizer that can be used to construct a low-depth PRF. The techniques that we introduce to study the LWRE problem can then be used to derive variants of existing key-homomorphic PRFs whose security can be reduced from the hardness of the LWE problem with a much smaller modulus.
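A toy numpy illustration of the rounding operation and of the approximate key-homomorphism that these PRFs rely on (the fixed matrix A stands in for the input-dependent public matrix of the actual constructions, and all parameters here are arbitrary): rounding from $\mathbb{Z}_q$ to $\mathbb{Z}_p$ commutes with key addition up to a per-coordinate error of at most one.

```python
import numpy as np

q, p, n = 2**16, 2**8, 256
rng = np.random.default_rng(0)

def round_to_p(x):
    """Rounding from Z_q to Z_p: round (p/q) * x to the nearest integer, mod p."""
    return np.rint(x.astype(np.float64) * p / q).astype(np.int64) % p

A = rng.integers(0, q, size=(64, n))   # stand-in for the input-dependent matrix A_x
f = lambda s: round_to_p(A @ s % q)    # "PRF" output for key s (illustration only)

s1 = rng.integers(0, q, size=n)
s2 = rng.integers(0, q, size=n)

# Approximate key-homomorphism: f(s1) + f(s2) = f(s1 + s2) up to +/-1 per entry,
# because each rounding deviates from the exact value by at most 1/2.
err = (f(s1) + f(s2) - f((s1 + s2) % q)) % p
assert np.all((err <= 1) | (err == p - 1))
```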
2020
EUROCRYPT
Stacked Garbling for Disjunctive Zero-Knowledge Proofs 📺
Zero-knowledge (ZK) proofs (ZKP) have received wide attention, focusing on non-interactivity, short proof size, and fast verification time. We focus on the fastest total proof time, in particular for large Boolean circuits. Under this metric, Garbled Circuit (GC)-based ZKP (Jawurek et al., [JKO], CCS 2013) remained the state-of-the-art technique due to the low-constant linear scaling of computing the garbling. We improve GC-ZKP for proof statements with conditional clauses. Our communication is proportional to the longest branch rather than to the entire proof statement. This is most useful when the number m of branches is large, resulting in up to factor m× improvement over JKO. In our proof-of-concept illustrative application, prover P demonstrates knowledge of a bug in a codebase consisting of any number of snippets of actual C code. Our computation cost is linear in the size of the code- base and communication is constant in the number of snippets. That is, we require only enough communication for a single largest snippet! Our conceptual contribution is stacked garbling for ZK, a privacy-free circuit garbling scheme that can be used with the JKO GC-ZKP protocol to construct more efficient ZKP. Given a Boolean circuit C and computational security parameter k, our garbling is L · k bits long, where L is the length of the longest execution path in C. All prior concretely efficient garbling schemes produce garblings of size |C| · k. The computational cost of our scheme is not increased over prior state-of-the-art. We implement our GC-ZKP and demonstrate significantly improved (m× over JKO) ZK performance for functions with branching factor m. Compared with recent ZKP (STARK, Libra, KKW, Ligero, Aurora, Bulletproofs), our scheme offers much better proof times for larger circuits (35-1000× or more, depending on circuit size and compared scheme). For our illustrative application, we consider four C code snippets, each of about 30-50 LOC; one snippet allows an invalid memory dereference. The entire proof takes 0.15 seconds and communication is 1.5 MB.
2020
EUROCRYPT
Lightweight Authenticated Encryption Mode Suitable for Threshold Implementation 📺
This paper proposes tweakable block cipher (TBC) based modes \textsf{PFB}\_\textsf{Plus} and \textsf{PFB}$\omega$ that are efficient in threshold implementations (TI). Let $t$ be an algebraic degree of a target function, e.g. $t=1$ (resp. $t>1$) for linear (resp. non-linear) function. The $d$-th order TI encodes the internal state into $d t + 1$ shares. Hence, the area size increases proportionally to the number of shares. This implies that TBC based modes can be smaller than block cipher (BC) based modes in TI because TBC requires $s$-bit block to ensure $s$-bit security, e.g. \textsf{PFB} and \textsf{Romulus}, while BC requires $2s$-bit block. However, even with those TBC based modes, the minimum we can reach is 3 shares of $s$-bit state with $t=2$ and the first-order TI ($d=1$). Our first design \textsf{PFB}\_\textsf{Plus} aims to break the barrier of the $3s$-bit state in TI. The block size of an underlying TBC is $s/2$ bits and the output of TBC is linearly expanded to $s$ bits. This expanded state requires only 2 shares in the first-order TI, which makes the total state size $2.5s$ bits. We also provide rigorous security proof of \textsf{PFB}\_\textsf{Plus}. Our second design \textsf{PFB}$\omega$ further increases a parameter $\omega$: a ratio of the security level $s$ to the block size of an underlying TBC. We prove security of \textsf{PFB}$\omega$ for any $\omega$ under some assumptions for an underlying TBC and for parameters used to update a state. Next, we show a concrete instantiation of \textsf{PFB}\_\textsf{Plus} for 128-bit security. It requires a TBC with 64-bit block, 128-bit key and 128-bit tweak, while no existing TBC can support it. We design a new TBC by extending \textsf{SKINNY} and provide basic security evaluation. Finally, we give hardware benchmarks of \textsf{PFB}\_\textsf{Plus} in the first-order TI to show that TI of \textsf{PFB}\_\textsf{Plus} is smaller than that of \textsf{PFB} by more than one thousand gates and is the smallest within the schemes having 128-bit security.
2020
EUROCRYPT
On Instantiating the Algebraic Group Model from Falsifiable Assumptions 📺
We provide a standard-model implementation (of a relaxation) of the algebraic group model (AGM, [Fuchsbauer, Kiltz, Loss, CRYPTO 2018]). Specifically, we show that every algorithm that uses our group is algebraic, and hence "must know" a representation of its output group elements in terms of its input group elements. Here, "must know" means that a suitable extractor can extract such a representation efficiently. We stress that our implementation relies only on falsifiable assumptions in the standard model, and in particular does not use any knowledge assumptions. As a consequence, our group allows to transport a number of results obtained in the AGM into the standard model, under falsifiable assumptions. For instance, we show that in our group, several Diffie-Hellman-like assumptions (including computational Diffie-Hellman) are equivalent to the discrete logarithm assumption. Furthermore, we show that our group allows to prove the Schnorr signature scheme tightly secure in the random oracle model. Our construction relies on indistinguishability obfuscation, and hence should not be considered as a practical group itself. However, our results show that the AGM is a realistic computational model (since it can be instantiated in the standard model), and that results obtained in the AGM are also possible with standard-model groups.
2020
EUROCRYPT
How to Extract Useful Randomness from Unreliable Sources 📺
For more than 30 years, cryptographers have been looking for public sources of uniform randomness in order to use them as a set-up to run appealing cryptographic protocols without relying on trusted third parties. Unfortunately, nowadays it is fair to assess that assuming the existence of physical phenomena producing public uniform randomness is far from reality.  It is known that uniform randomness cannot be extracted from a single weak source. A well-studied way to overcome this is to consider several independent weak sources. However, this means we must trust the various sampling processes of weak randomness from physical processes. Motivated by the above state of affairs, this work considers a set-up where players can access multiple {\em potential} sources of weak randomness, several of which may be jointly corrupted by a computationally unbounded adversary. We introduce {\em SHELA} (Somewhere Honest Entropic Look Ahead) sources to model this situation. We show that there is no hope of extracting uniform randomness from a {\em SHELA} source. Instead, we focus on the task of {\em Somewhere-Extraction} (i.e., outputting several candidate strings, some of which are uniformly distributed -- yet we do not know which). We give explicit constructions of {\em Somewhere-Extractors} for {\em SHELA} sources with good parameters. Then, we present applications of the above somewhere-extractor where the public uniform randomness can be replaced by the output of such extraction from corruptible sources, greatly outperforming trivial solutions. The output of somewhere-extraction is also useful in other settings, such as a suitable source of random coins for many randomized algorithms. In another front, we comprehensively study the problem of {\em Somewhere-Extraction} from a {\em weak} source, resulting in a series of bounds. Our bounds highlight the fact that, in most regimes of parameters (including those relevant for applications), {\em SHELA} sources significantly outperform {\em weak} sources of comparable parameters both when it comes to the process of {\em Somewhere-Extraction}, or in the task of amplification of success probability in randomized algorithms. Moreover, the low quality of somewhere-extraction from weak sources excludes its use in various efficient applications.
2020
EUROCRYPT
Everybody’s a Target: Scalability in Public-Key Encryption 📺
For $1 \le m \le n$, we consider a natural $m$-out-of-$n$ multi-instance scenario for a public-key encryption (PKE) scheme. An adversary, given $n$ independent instances of PKE, wins if he breaks at least $m$ out of the $n$ instances. In this work, we are interested in the scaling factor of PKE schemes, $\mathsf{SF}$, which measures how well the difficulty of breaking $m$ out of the $n$ instances scales in $m$. That is, a scaling factor $\mathsf{SF}=\ell$ indicates that breaking $m$ out of $n$ instances is at least $\ell$ times more difficult than breaking one single instance. A PKE scheme with small scaling factor hence provides an ideal target for mass surveillance. In fact, the Logjam attack (CCS 2015) implicitly exploited, among other things, an almost constant scaling factor of ElGamal over finite fields (with shared group parameters). For Hashed ElGamal over elliptic curves, we use the generic group model to describe how the scaling factor depends on the scheme's granularity. In low granularity, meaning each public key contains its independent group parameter, the scheme has optimal scaling factor $\mathsf{SF}=m$; in medium and high granularity, meaning all public keys share the same group parameter, the scheme still has a reasonable scaling factor $\mathsf{SF}=\sqrt{m}$. Our findings underline that instantiating ElGamal over elliptic curves should be preferred to finite fields in a multi-instance scenario. As our main technical contribution, we derive new generic-group lower bounds of $\Omega(\sqrt{mp})$ on the complexity of solving both the $m$-out-of-$n$ Gap Discrete Logarithm and the $m$-out-of-$n$ Gap Computational Diffie-Hellman problem over groups of prime order $p$, extending a recent result by Yun (EUROCRYPT 2015). We establish the lower bound by studying the hardness of a related computational problem which we call the search-by-hypersurface problem.
2020
EUROCRYPT
OptORAMa: Optimal Oblivious RAM 📺
Oblivious RAM (ORAM), first introduced in the ground-breaking work of Goldreich and Ostrovsky (STOC '87 and J. ACM '96), is a technique for provably obfuscating programs' access patterns, such that the access patterns leak no information about the programs' secret inputs. To compile a general program to an oblivious counterpart, it is well-known that $\Omega(\log N)$ amortized blowup is necessary, where $N$ is the size of the logical memory. This was shown in Goldreich and Ostrovsky's original ORAM work for statistical security and in a somewhat restricted model (the so-called \emph{balls-and-bins} model), and recently by Larsen and Nielsen (CRYPTO '18) for computational security. A long-standing open question is whether there exists an optimal ORAM construction that matches the aforementioned logarithmic lower bounds (without making large memory word assumptions, and assuming a constant number of CPU registers). In this paper, we resolve this problem and present the first secure ORAM with $O(\log N)$ amortized blowup, assuming one-way functions. Our result is inspired by and non-trivially improves on the recent beautiful work of Patel et al. (FOCS '18) who gave a construction with $O(\log N\cdot \log\log N)$ amortized blowup, assuming one-way functions. One of our building blocks of independent interest is a linear-time deterministic oblivious algorithm for tight compaction: Given an array of $n$ elements where some elements are marked, we permute the elements in the array so that all marked elements end up in the front of the array. Our $O(n)$ algorithm improves the previously best known deterministic or randomized algorithms whose running time is $O(n \cdot\log n)$ or $O(n \cdot\log \log n)$, respectively.
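To make the tight compaction subproblem concrete, here is a simple data-oblivious baseline (our own illustration, far from the paper's linear-time algorithm): a bitonic sorting network keyed on the marked flag moves marked items to the front using $O(n \log^2 n)$ compare-exchanges whose positions are fixed independently of the data, so the memory access pattern reveals nothing about which items were marked.

```python
def oblivious_tight_compaction(items, marked):
    """Move marked items to the front, obliviously: every compare-exchange
    touches fixed positions, independent of the data. Bitonic-network
    baseline with O(n log^2 n) work; the paper achieves O(n)."""
    n = len(items)
    assert n & (n - 1) == 0, "pad the input to a power-of-two length"
    a = [(0 if m else 1, x) for m, x in zip(marked, items)]  # key 0 sorts first
    k = 2
    while k <= n:                         # standard iterative bitonic sorter
        j = k // 2
        while j > 0:
            for i in range(n):
                l = i ^ j
                if l > i:
                    if ((i & k) == 0 and a[i][0] > a[l][0]) or \
                       ((i & k) != 0 and a[i][0] < a[l][0]):
                        a[i], a[l] = a[l], a[i]
            j //= 2
        k *= 2
    return [x for _, x in a]

out = oblivious_tight_compaction(list("abcdefgh"), [0, 1, 0, 1, 1, 0, 0, 1])
assert set(out[:4]) == {"b", "d", "e", "h"}  # all marked items are in front
```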
2020
EUROCRYPT
Improving Key-Recovery in Linear Attacks: Application to 28-round PRESENT 📺
Linear cryptanalysis is one of the most important tools in use for the security evaluation of symmetric primitives. Many improvements and refinements have been published since its introduction, and many applications to different ciphers have been found. Among these upgrades, Collard et al. proposed in 2007 an acceleration of the key-recovery part of Algorithm 2 for last-round attacks based on the FFT. In this paper we present a generalized, matrix-based version of the previous algorithm which easily allows us to take into consideration an arbitrary number of key-recovery rounds. We also provide efficient variants that exploit the key-schedule relations and that can be combined with multiple linear attacks. Using our algorithms we provide some new cryptanalysis of PRESENT, including, to the best of our knowledge, the first attack on 28 rounds.
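A sketch of the FFT trick being generalized here, with hypothetical names (the counter layout and the masked_bit predicate depend on the cipher under attack): the experimental correlation of every key guess is a XOR-convolution of a data-dependent counter with a fixed sign table, so all $2^b$ correlations cost $O(b\,2^b)$ operations via the fast Walsh-Hadamard transform instead of $O(2^{2b})$ naively.

```python
import numpy as np

def fwht(v):
    """Fast Walsh-Hadamard transform (self-inverse up to a factor len(v))."""
    v = np.asarray(v, dtype=np.int64).copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            x, y = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
            v[i:i + h], v[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return v

def all_correlations(counter, masked_bit):
    """corr[k] = sum_x counter[x] * (-1)^masked_bit(x ^ k) for every key guess k.

    counter[x] counts ciphertexts whose relevant b bits equal x; masked_bit(x)
    is the linear approximation's bit after partially decrypting x. The sum is
    a XOR-convolution, which the Walsh-Hadamard transform diagonalizes."""
    N = len(counter)
    signs = np.array([(-1) ** masked_bit(x) for x in range(N)], dtype=np.int64)
    return fwht(fwht(counter) * fwht(signs)) // N

counter = np.zeros(16, dtype=np.int64); counter[3] = 7       # toy data
corr = all_correlations(counter, lambda x: bin(x).count("1") & 1)
```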
2020
EUROCRYPT
(One) failure is not an option: Bootstrapping the search for failures in lattice-based encryption schemes 📺
Lattice-based encryption schemes are often subject to the possibility of decryption failures, in which valid encryptions are decrypted incorrectly. Such failures, in large numbers, leak information about the secret key, enabling an attack strategy alternative to pure lattice reduction. Extending the "failure boosting" technique of D'Anvers et al. in PKC 2019, we propose an approach that we call "directional failure boosting" that uses previously found "failing ciphertexts" to accelerate the search for new ones. We analyse in detail the case where the lattice is defined over polynomial ring modules quotiented by $\langle X^N + 1 \rangle$ and demonstrate it on a simple Mod-LWE-based scheme parametrized à la Kyber768/Saber. We show that, using our technique, for a given secret key (single-target setting), the cost of searching for additional failing ciphertexts after one or more have already been found can be sped up dramatically. We thus demonstrate that, in this single-target model, these schemes should be designed so that it is hard to even obtain one decryption failure. Besides, in a wider security model where there are many target secret keys (multi-target setting), our attack greatly improves over the state of the art.
2020
EUROCRYPT
Which Languages Have 4-Round Fully Black-Box Zero-Knowledge Arguments from One-Way Functions? 📺
We prove that if a language $\mathcal{L}$ has a 4-round fully black-box zero-knowledge argument with negligible soundness based on one-way functions, then $\overline{\mathcal{L}} \in \mathsf{MA}$. Since $\mathsf{coNP} \subseteq \mathsf{MA}$ implies that the polynomial hierarchy collapses, our result implies that $\mathsf{NP}$-complete languages are unlikely to have 4-round fully black-box zero-knowledge arguments based on one-way functions. In TCC 2018, Hazay and Venkitasubramaniam, and Khurana, Ostrovsky, and Srinivasan demonstrated 4-round fully black-box zero-knowledge arguments for all languages in $\mathsf{NP}$ based on injective one-way functions. Their results also imply a 5-round protocol based on one-way functions. In essence, our result resolves the round complexity of fully black-box zero-knowledge arguments based on one-way functions.
2020
EUROCRYPT
Modeling for Three-Subset Division Property without Unknown Subset -- Improved Cube Attacks against Trivium and Grain-128AEAD 📺
A division property is a generic tool to search for integral distinguishers, and automatic tools such as MILP or SAT/SMT allow us to evaluate the propagation efficiently. In the application to stream ciphers, it enables us to estimate the security of cube attacks theoretically, and it leads to the best key-recovery attacks against well-known stream ciphers. However, it was reported that some of the key-recovery attacks based on the division property degenerate to distinguishing attacks due to the inaccuracy of the division property. Three-subset division property (without unknown subset) is a promising method to solve this inaccuracy problem, and a new algorithm using automatic tools for the three-subset division property was recently proposed at ASIACRYPT 2019. In this paper, we first show that this state-of-the-art algorithm is not always efficient and we cannot improve the existing key-recovery attacks. Then, we focus on the feature of the three-subset division property without unknown subset and propose another new efficient algorithm using automatic tools. Our algorithm is more efficient than existing algorithms, and it can improve existing key-recovery attacks. In the application to Trivium, we show an 841-round key-recovery attack. We also show that an 855-round key-recovery attack, which was proposed at CRYPTO 2018, has a critical flaw and does not work. As a result, our 841-round attack becomes the best key-recovery attack. In the application to Grain-128AEAD, we show that the known 184-round key-recovery attack degenerates to distinguishing attacks. Then, the distinguishing attacks are improved up to 189 rounds, and we also show the best key-recovery attack against 190 rounds.
2020
EUROCRYPT
Indistinguishability Obfuscation Without Maps: Attacks and Fixes for Noisy Linear FE 📺
Candidates of Indistinguishability Obfuscation (iO) can be categorized as ``direct'' or ``bootstrapping based''. Direct constructions rely on high degree multilinear maps [GGH13,GGHRSW13] and provide heuristic guarantees, while bootstrapping based constructions [LV16,Lin17,LT17,AJLMS19,Agr19,JLMS19] rely, in the best case, on bilinear maps as well as new variants of the Learning With Errors (LWE) assumption and pseudorandom generators. Recent times have seen exciting progress in the construction of indistinguishability obfuscation (iO) from bilinear maps (along with other assumptions) [LT17,AJLMS19,JLMS19,Agr19]. As a notable exception, a recent work by Agrawal [Agr19] provided a construction for iO without using any maps. This work identified a new primitive, called Noisy Linear Functional Encryption (NLinFE) that provably suffices for iO and gave a direct construction of NLinFE from new assumptions on lattices. While a preliminary cryptanalysis for the new assumptions was provided in the original work, the author admitted the necessity of performing significantly more cryptanalysis before faith could be placed in the security of the scheme. Moreover, the author did not suggest concrete parameters for the construction. In this work, we fill this gap by undertaking the task of thorough cryptanalytic study of NLinFE. We design two attacks that let the adversary completely break the security of the scheme. Our attacks are completely new and unrelated to attacks that were hitherto used to break other candidates of iO. To achieve this, we develop new cryptanalytic techniques which (we hope) will inform future designs of the primitive of NLinFE. From the knowledge gained by our cryptanalytic study, we suggest modifications to the scheme. We provide a new scheme which overcomes the vulnerabilities identified before. We also provide a thorough analysis of all the security aspects of this scheme and argue why plausible attacks do not work. We additionally provide concrete parameters with which the scheme may be instantiated. We believe the security of NLinFE stands on significantly firmer footing as a result of this work.
2020
EUROCRYPT
Low Error Efficient Computational Extractors in the CRS Model 📺
In recent years, there has been exciting progress on building two-source extractors for sources with low min-entropy. Unfortunately, all known explicit constructions of two-source extractors in the low entropy regime suffer from non-negligible error, and building such extractors with negligible error remains an open problem. We investigate this problem in the computational setting, and obtain the following results. We construct an explicit 2-source extractor, and even an explicit non-malleable extractor, with negligible error, for sources with low min-entropy, under computational assumptions in the Common Random String (CRS) model. More specifically, we assume that a CRS is generated once and for all, and allow the min-entropy sources to depend on the CRS. We obtain our constructions by using the following transformations. - Building on the technique of [BHK11], we show a general transformation for converting any computational 2-source extractor (in the CRS model) into a computational non-malleable extractor (in the CRS model), for sources with similar min-entropy. We emphasize that the resulting computational non-malleable extractor is resilient to arbitrarily many tampering attacks (a property that is impossible to achieve information theoretically). This may be of independent interest. This transformation uses cryptography, and in particular relies on the sub-exponential hardness of the Decisional Diffie-Hellman (DDH) assumption. - Next, using the blueprint of [BACD+17], we give a transformation converting our computational non-malleable extractor (in the CRS model) into a computational 2-source extractor for sources with low min-entropy (in the CRS model). Our 2-source extractor works for unbalanced sources: specifically, we require one of the sources to be larger than a specific polynomial in the other. This transformation does not incur any additional assumptions. Our analysis makes a novel use of the leakage lemma of Gentry and Wichs [GW11].
2020
EUROCRYPT
The Price of Active Security in Cryptographic Protocols 📺
We construct the first actively-secure Multi-Party Computation (MPC) protocols with an \emph{arbitrary} number of parties in the dishonest majority setting, for an \emph{arbitrary} field $\mathbb{F}$ with \emph{constant communication overhead} over the ``passive-GMW'' protocol (Goldreich, Micali and Wigderson, STOC `87). Our protocols rely on passive implementations of Oblivious Transfer (OT) in the boolean setting and Oblivious Linear function Evaluation (OLE) in the arithmetic setting. Previously, such protocols were only known over sufficiently large fields (Genkin et al. STOC `14) or for a constant number of parties (Ishai et al. CRYPTO `08). Conceptually, our protocols are obtained via a new compiler from a passively-secure protocol for a distributed multiplication functionality $\mathcal{F}_{\mathsf{mult}}$ to an actively-secure protocol for general functionalities. Roughly, $\mathcal{F}_{\mathsf{mult}}$ is parameterized by a linear secret sharing scheme $\mathcal{S}$, where it takes $\mathcal{S}$-shares of two secrets and returns $\mathcal{S}$-shares of their product. We show that our compilation is concretely efficient for sufficiently large fields, resulting in an overhead of 2 when securely computing natural circuits. Our compiler has two additional benefits: (1) it can rely on \emph{any} passive implementation of $\mathcal{F}_{\mathsf{mult}}$, which, besides the standard implementation based on OT (for boolean) and OLE (for arithmetic), allows us to rely on implementations based on threshold cryptosystems (Cramer et al. Eurocrypt `01); and (2) it can rely on weaker-than-passive (i.e., imperfect/leaky) implementations, which in some parameter regimes yield actively-secure protocols with overhead less than 2. Instantiating this compiler with an ``honest-majority'' implementation of $\mathcal{F}_{\mathsf{mult}}$, we obtain the first honest-majority protocol with optimal corruption threshold for boolean circuits with constant communication overhead over the best passive protocol (Damg{\aa}rd and Nielsen, CRYPTO `07).
2020
EUROCRYPT
Transparent SNARKs from DARK Compilers 📺
We construct a new polynomial commitment scheme for univariate and multivariate polynomials over finite fields, with public-coin evaluation proofs that have logarithmic communication and verification cost in the number of coefficients of the polynomial. The underlying technique is a Diophantine Argument of Knowledge (DARK), leveraging integer representations of polynomials and groups of unknown order. Security is shown from the strong RSA and the adaptive root assumption. Moreover, the scheme does not require a trusted setup if instantiated with class groups. We apply this new cryptographic compiler to a restricted class of algebraic linear IOPs in order to obtain doubly-efficient public-coin IPs with succinct communication and witness-extended emulation for any NP relation. Allowing for linear preprocessing, the online verifier's work is logarithmic in the circuit complexity of the relation. Concretely, we obtain quasi-linear prover time when compiling the IOP employed in Sonic (MBKM, CCS 19). Applying the Fiat-Shamir transform in the random oracle model results in a SNARK system with quasi-linear preprocessing, quasi-linear (online) prover time, logarithmic proof size, and logarithmic (online) verification time for arbitrary circuits. The SNARK is also concretely efficient with 8.4KB proofs and 75ms verification time for circuits with 1 million gates. Most importantly, this SNARK is transparent: it does not require a trusted setup. We also obtain zk-SNARKs by applying a variant of our polynomial commitment scheme that is hiding and offers zero-knowledge evaluation proofs. This construction is the first transparent zk-SNARK that has both a practical prover time as well as strictly logarithmic proof size and verification time. We call our system Supersonic.
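The integer-representation idea at the core of DARK can be shown in a few lines (toy parameters of our choosing; a real instantiation uses a class group or an RSA group whose factorization nobody knows, plus the blinding and evaluation protocols this sketch omits): a polynomial with small coefficients is packed into one integer via base-$q$ digits and committed with a single exponentiation.

```python
p = 2**61 - 1                    # coefficient field F_p (illustration only)
q = 1 << 256                     # integer evaluation point, q >> p

# Toy "group of unknown order": an RSA-style modulus built from two Mersenne
# primes. In practice the factors must be unknown to everyone (or one uses a
# class group, which needs no trusted setup at all).
N = (2**89 - 1) * (2**107 - 1)
g = 3

def encode(coeffs):
    """f(X) = sum_i c_i X^i  ->  the integer f(q); since every c_i < p << q,
    the coefficients are exactly the base-q digits of the result."""
    return sum(c * q**i for i, c in enumerate(coeffs))

def commit(coeffs):
    return pow(g, encode(coeffs), N)   # one exponentiation in the group

def decode(z, num_coeffs):
    return [(z >> (256 * i)) & (q - 1) for i in range(num_coeffs)]

f = [3, 1, 4, 1, 5]
assert decode(encode(f), len(f)) == f  # the encoding is injective on small coeffs
C = commit(f)
```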
2020
EUROCRYPT
TNT: How to Tweak a Block Cipher 📺
In this paper, we propose the Tweak-aNd-Tweak (TNT for short) mode, which builds a tweakable block cipher from three independent block ciphers. TNT handles the tweak input by simply XOR-ing the unmodified tweak into the internal state of the block ciphers twice. Due to its simplicity, TNT can also be viewed as a way of turning a block cipher into a tweakable block cipher by dividing the block cipher into three chunks and adding the tweak at the two cutting points only. TNT is proven to achieve beyond-birthday-bound security of $2^{2n/3}$, under the assumption that the three chunks are independent secure $n$-bit SPRPs. It brings the minimum possible overhead to both software and hardware implementations. To demonstrate this, an instantiation named TNT-AES, with 6, 6, 6 rounds of AES as the underlying block ciphers, is proposed. Besides the inherent proven security bound and the tweak-independent rekeying feature of the TNT mode, the performance of TNT-AES is comparable with all existing TBCs designed through modular methods.
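A direct transcription of the mode as described above, instantiated for illustration with full AES-128 as each of the three chunks (TNT-AES itself uses 6-round AES per chunk, which off-the-shelf AES libraries do not expose):

```python
import os
from Crypto.Cipher import AES  # pycryptodome

def xor16(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def tnt_encrypt(k1: bytes, k2: bytes, k3: bytes, tweak: bytes, block: bytes) -> bytes:
    """TNT: pi3( pi2( pi1(x) xor t ) xor t ) -- the unmodified tweak is XORed
    into the state at the two cutting points between the three ciphers."""
    pi1 = AES.new(k1, AES.MODE_ECB)
    pi2 = AES.new(k2, AES.MODE_ECB)
    pi3 = AES.new(k3, AES.MODE_ECB)
    return pi3.encrypt(xor16(pi2.encrypt(xor16(pi1.encrypt(block), tweak)), tweak))

k1, k2, k3 = os.urandom(16), os.urandom(16), os.urandom(16)
ct = tnt_encrypt(k1, k2, k3, tweak=os.urandom(16), block=b"\x00" * 16)
```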
2020
EUROCRYPT
Quantum Security Analysis of CSIDH 📺
CSIDH is a recent proposal for post-quantum non-interactive key-exchange, based on supersingular elliptic curve isogenies. It is similar in design to a previous scheme by Couveignes, Rostovtsev and Stolbunov, but aims at an improved balance between efficiency and security. In the proposal, the authors suggest concrete parameters in order to meet some desired levels of quantum security. These parameters are based on the hardness of recovering a hidden isogeny between two elliptic curves, using a quantum subexponential algorithm of Childs, Jao and Soukharev. This algorithm combines two building blocks: First, a quantum algorithm for recovering a hidden shift in a commutative group. Second, a computation in superposition of all isogenies originating from a given curve, which the algorithm calls as a black box. In this paper, we give a comprehensive security analysis of CSIDH. Our first step is to revisit three quantum algorithms for the abelian hidden shift problem from the perspective of non-asymptotic cost, with trade-offs between their quantum and classical complexities. Second, we complete the non-asymptotic study of the black box in the hidden shift algorithm. We give a quantum procedure that evaluates CSIDH-512 using less than 40,000 logical qubits. This allows us to show that the parameters proposed by the authors of CSIDH do not meet their expected quantum security.
2020
EUROCRYPT
Continuous Verifiable Delay Functions 📺
We introduce the notion of a continuous verifiable delay function (cVDF): a function $g$ which is (a) iteratively sequential---meaning that evaluating the iteration $g^{(t)}$ of $g$ (on a random input) takes time roughly $t$ times the time to evaluate $g$, even with many parallel processors, and (b) (iteratively) verifiable---the output of $g^{(t)}$ can be efficiently verified (in time that is essentially independent of $t$). In other words, the iterated function $g^{(t)}$ is a verifiable delay function (VDF) (Boneh et al., CRYPTO '18), having the property that intermediate steps of the computation (i.e., $g^{(t')}$ for $t'<t$) are publicly and continuously verifiable. We demonstrate that cVDFs have intriguing applications: (a) they can be used to construct public randomness beacons that only require an initial random seed (and no further unpredictable sources of randomness), (b) they enable outsourceable VDFs where any part of the VDF computation can be verifiably outsourced, and (c) they have deep complexity-theoretic consequences: in particular, they imply the existence of depth-robust moderately-hard Nash equilibrium problem instances, i.e., instances that can be solved in polynomial time yet require a high sequential running time. Our main result is the construction of a cVDF based on the repeated squaring assumption and the soundness of the Fiat-Shamir (FS) heuristic for constant-round proofs. We highlight that when viewed as a (plain) VDF, our construction requires a weaker FS assumption than previous ones (earlier constructions require the FS heuristic for either super-logarithmic round proofs, or for arguments).
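Since the base function in the construction is modular squaring, a plain (non-succinct) picture of iterative sequentiality is just the trace of repeated squarings; the actual cVDF additionally attaches Fiat-Shamir-compressed proofs so that any prefix $g^{(t')}$ is verifiable without recomputation. The modulus below is a toy of our choosing.

```python
def squaring_trace(x: int, t: int, N: int) -> list[int]:
    """trace[i] = x^(2^i) mod N: each step costs one squaring, and no known
    parallel algorithm shortcuts the chain without the factorization of N."""
    trace = [x % N]
    for _ in range(t):
        trace.append(trace[-1] * trace[-1] % N)
    return trace

N = (2**89 - 1) * (2**107 - 1)   # toy modulus; factors must be unknown in practice
trace = squaring_trace(5, 1000, N)
# Every intermediate state trace[t'] is a checkpoint a cVDF makes verifiable.
assert trace[1000] == pow(5, 2**1000, N)
```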
2020
EUROCRYPT
Generic-Group Delay Functions Require Hidden-Order Groups 📺
Despite the fundamental importance of delay functions, underlying both the classic notion of a time-lock puzzle and the more recent notion of a verifiable delay function, the only known delay function that offers both sufficient structure for realizing these two notions and a realistic level of practicality is the ``iterated squaring'' construction of Rivest, Shamir and Wagner. This construction, however, is based on rather strong assumptions in groups of hidden orders, such as the RSA group (which requires a trusted setup) or the class group of an imaginary quadratic number field (which is still somewhat insufficiently explored from the cryptographic perspective). For more than two decades, the challenge of constructing delay functions in groups of known orders, admitting a variety of well-studied instantiations, has eluded the cryptography community. In this work we prove that there are no constructions of generic-group delay functions in cyclic groups of known orders: We show that for any delay function that does not exploit any particular property of the representation of the underlying group, there exists an attacker that completely breaks the function's sequentiality when given the group's order. As any time-lock puzzle and verifiable delay function give rise to a delay function, our result holds for these two notions as well, and explains the lack of success in resolving the above-mentioned long-standing challenge. Moreover, our result holds even if the underlying group is equipped with a d-linear map, for any constant d>=2 (and even for super-constant values of d under certain conditions).
2020
EUROCRYPT
Fractal: Post-Quantum and Transparent Recursive Proofs from Holography 📺
We present a new methodology to efficiently realize recursive composition of succinct non-interactive arguments of knowledge (SNARKs). Prior to this work, the only known methodology relied on pairing-based SNARKs instantiated on cycles of pairing-friendly elliptic curves, an expensive algebraic object. Our methodology does not rely on any special algebraic objects and, moreover, achieves new desirable properties: it is post-quantum and it is transparent (the setup is public coin). We exploit the fact that recursive composition is simpler for SNARKs with preprocessing, and the core of our work is obtaining a preprocessing zkSNARK for rank-1 constraint satisfiability (R1CS) that is post-quantum and transparent. We obtain the latter by establishing a connection between holography and preprocessing in the random oracle model, and then constructing a holographic proof for R1CS. We experimentally validate our methodology, demonstrating feasibility in practice.
2020
EUROCRYPT
Finding Hash Collisions with Quantum Computers by Using Differential Trails with Smaller Probability than Birthday Bound 📺
In this paper we spotlight dedicated quantum collision attacks on concrete hash functions, which have not received much attention so far. In the classical setting, the generic complexity to find collisions of an $n$-bit hash function is $O(2^{n/2})$, thus classical collision attacks based on differential cryptanalysis, such as rebound attacks, build differential trails with probability higher than $2^{-n/2}$. By the same analogy, generic quantum algorithms such as the BHT algorithm find collisions with complexity $O(2^{n/3})$. With quantum algorithms, a pair of messages satisfying a differential trail with probability $p$ can be generated with complexity $p^{-1/2}$. Hence, in the quantum setting, some differential trails with probability up to $2^{-2n/3}$ that cannot be exploited in the classical setting may be exploited to mount a collision attack in the quantum setting. In particular, the number of attacked rounds may increase. In this paper, we attack two international hash function standards: AES-MMO and Whirlpool. For AES-MMO, we present a $7$-round differential trail with probability $2^{-80}$ and use it to find collisions with a quantum version of the rebound attack, while only $6$ rounds can be attacked in the classical setting. For Whirlpool, we mount a collision attack based on a $6$-round differential trail from a classical rebound distinguisher with a complexity higher than the birthday bound. This improves the best classical attack, which reaches only $5$ rounds, by one round. We also show that those trails are optimal in our approach. Our results have two important implications. First, there seems to exist a common belief that classically secure hash functions will remain secure against quantum adversaries. Indeed, several second-round candidates in the NIST post-quantum competition use existing hash functions, such as SHA-3, as quantum-secure ones. Our results disprove this common belief. Second, our observation suggests that differential trail search should not stop at probability $2^{-n/2}$ but should consider trails with probability up to $2^{-2n/3}$. Hence, previous differential trail search efforts deserve to be revisited.
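As a worked instance of this threshold, using the AES-MMO numbers above with $n=128$ (a direct consequence of the figures in the abstract):

\[
p = 2^{-80} > 2^{-2n/3} \approx 2^{-85.3}
\quad\Longrightarrow\quad
p^{-1/2} = 2^{40} < 2^{n/3} \approx 2^{42.7},
\]

so the $7$-round trail is useful quantumly (it beats the generic $O(2^{n/3})$ bound), while classically it is unusable, since $2^{-80} < 2^{-n/2} = 2^{-64}$.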
2020
EUROCRYPT
Signatures from Sequential-OR Proofs 📺
OR-proofs enable a prover to show that it knows the witness for one of many statements, or that one out of many statements is true. OR-proofs are a remarkably versatile tool, used to strengthen security properties, design group and ring signature schemes, and achieve tight security. The common technique to build OR-proofs is based on an approach introduced by Cramer, Damgaard, and Schoenmakers (CRYPTO'94), where the prover splits the verifier's challenge into random shares and computes proofs for each statement in parallel. In this work we study a different, less investigated OR-proof technique, highlighted by Abe, Ohkubo, and Suzuki (ASIACRYPT'02). The difference is that the prover now computes the individual proofs sequentially. We show that such sequential OR-proofs yield signature schemes which can be proved secure in the non-programmable random oracle model. We complement this positive result with a black-box impossibility proof, showing that the same is unlikely to be the case for signatures derived from traditional OR-proofs. We finally argue that sequential-OR signature schemes can be proved secure in the quantum random oracle model, albeit with very loose bounds and by programming the random oracle.
2020
EUROCRYPT
Extracting Randomness from Extractor-Dependent Sources 📺
We revisit the well-studied problem of extracting nearly uniform randomness from an arbitrary source of sufficient min-entropy. Strong seeded extractors solve this problem by relying on a public random seed, which is unknown to the source. Here, we consider a setting where the seed is reused over time and the source may depend on prior calls to the extractor with the same seed. Can we still extract nearly uniform randomness? In more detail, we assume the seed is chosen randomly, but the source can make arbitrary oracle queries to the extractor with the given seed before outputting a sample. We require that the sample has entropy and differs from any of the previously queried values. The extracted output should look uniform even to a distinguisher that gets the seed. We consider two variants of the problem, depending on whether the source only outputs the sample, or whether it can also output some correlated public auxiliary information that preserves the sample's entropy. Our results are: * Without Auxiliary Information: We show that every pseudo-random function (PRF) with a sufficiently high security level is a good extractor in this setting, even if the distinguisher is computationally unbounded. We further show that the source necessarily needs to be computationally bounded and that such extractors imply one-way functions. * With Auxiliary Information: We construct secure extractors in this setting, as long as both the source and the distinguisher are computationally bounded. We give several constructions based on different intermediate primitives, yielding instantiations based on the DDH, DLIN, LWE or DCR assumptions. On the negative side, we show that one cannot prove security against computationally unbounded distinguishers in this setting under any standard assumption via a black-box reduction. Furthermore, even when restricting to computationally bounded distinguishers, we show that there exist PRFs that are insecure as extractors in this setting and that a large class of constructions cannot be proven secure via a black-box reduction from standard assumptions.
2020
EUROCRYPT
Measure-Rewind-Measure: Tighter Quantum Random Oracle Model Proofs for One-Way to Hiding and CCA Security 📺
We introduce a new technique called `Measure-Rewind-Measure' (MRM) to achieve tighter security proofs in the quantum random oracle model (QROM). We first apply our MRM technique to derive a new security proof for a variant of the `double-sided' quantum One-Way to Hiding Lemma (O2H) of Bindel et al. [TCC 2019] which, for the first time, avoids the square-root advantage loss in the security proof. In particular, it bypasses a previous `impossibility result' of Jiang, Zhang and Ma [IACR eprint 2019]. We then apply our new O2H Lemma to give a new tighter security proof for the Fujisaki-Okamoto transform for constructing a strong (IND-CCA) Key Encapsulation Mechanism (KEM) from a weak (IND-CPA) public-key encryption scheme satisfying a mild injectivity assumption.
2020
EUROCRYPT
Friet: an Authenticated Encryption Scheme with Built-in Fault Detection 📺
In this work we present a duplex-based authenticated encryption scheme, Friet, based on a new permutation called Friet-P. We designed Friet-P with a novel approach for cryptographic permutations and block ciphers that takes fault-attack resistance into account and that we introduce in this paper. In this method, we build a permutation f_C to be embedded in a larger one f. First, we define f as a sequence of steps that all abide by a chosen error-correcting code C, i.e., that map C-codewords to C-codewords. Then, we embed f_C in f by first encoding its input to an element of C, applying f and then decoding back from C. This last step detects a fault when the output of f is not in C. We motivate the design of the permutation we use in Friet and report on its performance in software and hardware. We evaluate the fault-detection capabilities of the software and simulated hardware implementations with attacks. Finally, we perform a leakage evaluation. Our code is available at https://github.com/thisimon/Friet.git.
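A toy illustration of the code-abiding pattern, assuming a trivial parity code C = {(a, b, a XOR b)} over 16-bit limbs; the actual Friet-P permutation and its code are different, so this only shows the encode/apply/decode-and-check mechanism:

# Each step maps C-codewords to C-codewords by updating the parity limb
# in lockstep; a fault flips the state out of C and is caught at decode.
MASK = 0xFFFF

def rot16(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def encode(a, b):
    return (a, b, a ^ b)                # append the parity limb

def step(state):
    a, b, p = state
    t = rot16(b, 3)
    return (a ^ t, b, p ^ t)            # invariant p == a ^ b is preserved

def decode_and_check(state):
    a, b, p = state
    if p != a ^ b:                      # output left the code: fault detected
        raise RuntimeError("fault detected")
    return (a, b)

state = encode(0x1234, 0xBEEF)
for _ in range(8):
    state = step(state)
print(decode_and_check(state))          # a bit-flip during the loop would raise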
2020
EUROCRYPT
New Slide Attacks on Almost Self-Similar Ciphers 📺
The slide attack is a powerful cryptanalytic tool which has the unusual property that it can break iterated block ciphers with a complexity that does not depend on their number of rounds. However, it requires complete self-similarity in the sense that all the rounds must be identical. While this can be the case in Feistel structures, this rarely happens in SP networks, since the last round must end with an additional post-whitening subkey. In addition, in many SP networks the final round has additional asymmetries - for example, in AES the last round omits the MixColumns operation. Such asymmetry in the last round can make it difficult to utilize most of the advanced tools which were developed for slide attacks, such as deriving from one slid pair additional slid pairs by repeatedly re-encrypting their ciphertexts. Consequently, almost all the successful applications of slide attacks against real cryptosystems (e.g., FF3, GOST, SHACAL-1, etc.) have targeted Feistel structures rather than SP networks. In this paper we overcome this last-round problem by developing four new types of slide attacks. We demonstrate their power by applying them to many types of AES-like structures (with and without linear mixing in the last round, with known or secret S-boxes, with periodicity of 1, 2 and 3 in their subkeys, etc.). In most of these cases, the time complexity of our attack is close to $2^{n/2}$, the smallest possible complexity for most slide attacks. Our new slide attacks have several unique properties: The first uses slid sets in which each plaintext from the first set forms a slid pair with some plaintext from the second set, but without knowing the exact correspondence. The second makes it possible to create from several slid pairs an exponential number of new slid pairs which form a hypercube spanned by the given pairs. The third has the unusual property that it is always successful, and the fourth can use known messages instead of chosen messages, with only slightly higher time complexity.
2020
EUROCRYPT
Formalizing Data Deletion in the Context of the Right to be Forgotten 📺
The right of an individual to request the deletion of their personal data by an entity that might be storing it -- referred to as \emph{the right to be forgotten} -- has been explicitly recognized, legislated, and exercised in several jurisdictions across the world, including the European Union, Argentina, and California. However, much of the discussion surrounding this right offers only an intuitive notion of what it means for it to be fulfilled -- of what it means for such personal data to be deleted. In this work, we provide a formal definitional framework for the right to be forgotten using tools and paradigms from cryptography. In particular, we provide a precise definition of what could be (or should be) expected from an entity that collects individuals' data when a request is made of it to delete some of this data. Our framework captures most, though not all, relevant aspects of typical systems involved in data processing. While it cannot be viewed as expressing the statements of current laws (especially since these are rather vague in this respect), our work offers technically precise definitions that represent possibilities for what the law could reasonably expect, and alternatives for what future versions of the law could explicitly require. Finally, with the goal of demonstrating the applicability of our framework and definitions, we consider various natural and simple scenarios where the right to be forgotten comes up. For each of these scenarios, we highlight the pitfalls that arise even in genuine attempts at implementing systems offering deletion guarantees, and also describe technological solutions that provably satisfy our definitions. These solutions bring together techniques built by various communities.
2020
EUROCRYPT
PSI from PaXoS: Fast, Malicious Private Set Intersection 📺
We present a 2-party private set intersection (PSI) protocol which provides security against malicious participants, yet is almost as fast as the fastest known semi-honest PSI protocol of Kolesnikov et al. (CCS 2016). Our protocol is based on a new approach for two-party PSI, which can be instantiated to provide security against either malicious or semi-honest adversaries. The protocol is unique in that the only difference between the semi-honest and malicious versions is an instantiation with different parameters for a linear error-correction code. It is also the first PSI protocol which is concretely efficient while having linear communication and security against malicious adversaries, while running in the OT-hybrid model (assuming a non-programmable random oracle). State-of-the-art semi-honest PSI protocols take advantage of cuckoo hashing, but it has proven a challenge to use cuckoo hashing for malicious security. Our protocol is the first to use cuckoo hashing for malicious-secure PSI. We do so via a new data structure, called a probe-and-XOR of strings (PaXoS), which may be of independent interest. This abstraction captures important properties of previous data structures, most notably garbled Bloom filters. While an encoding by a garbled Bloom filter is larger by a factor of $\Omega(\lambda)$ than the original data, we describe a significantly improved PaXoS based on cuckoo hashing that achieves constant rate while being no worse in other relevant efficiency measures.
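A hedged sketch of the probe-and-XOR abstraction: decoding a key probes a few positions of a table D and XORs them. The encoder below follows the garbled-Bloom-filter strategy that PaXoS generalizes; the table size, hash choice, and 32-bit values are illustrative assumptions, and the simple first-fit insertion can fail for unlucky hash outputs (real constructions size the table so this is negligible).

import hashlib, secrets

M, K = 64, 3  # table size and probe count: illustrative, not the paper's

def probes(key: bytes):
    # K pseudorandom positions derived from the key
    return [int.from_bytes(hashlib.sha256(bytes([i]) + key).digest()[:4],
                           "big") % M for i in range(K)]

def encode(pairs):
    D = [None] * M
    for key, value in pairs:
        pos = probes(key)
        free = next(p for p in pos if D[p] is None)  # first-fit; may fail
        acc = value
        for p in pos:
            if p == free:
                continue
            if D[p] is None:
                D[p] = secrets.randbits(32)          # fix unused slots randomly
            acc ^= D[p]
        D[free] = acc                                # force the probe-XOR to value
    return [d if d is not None else secrets.randbits(32) for d in D]

def decode(D, key: bytes):
    acc = 0
    for p in probes(key):
        acc ^= D[p]
    return acc

D = encode([(b"alice", 0xDEADBEEF), (b"bob", 0x12345678)])
assert decode(D, b"alice") == 0xDEADBEEF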
2020
EUROCRYPT
Efficient simulation of random states and random unitaries 📺
We consider the problem of efficiently simulating random quantum states and random unitary operators, in a manner which is convincing to unbounded adversaries with black-box oracle access. This problem has previously only been considered for restricted adversaries. Against adversaries with an a priori bound on the number of queries, it is well-known that t-designs suffice. Against polynomial-time adversaries, one can use pseudorandom states (PRS) and pseudorandom unitaries (PRU), as defined in a recent work of Ji, Liu, and Song; unfortunately, no provably secure construction is known for PRUs. In our setting, we are concerned with unbounded adversaries. Nonetheless, we are able to give stateful quantum algorithms which simulate the ideal object in both settings of interest. In the case of Haar-random states, our simulator is polynomial-time, has negligible error, and can also simulate verification and reflection through the simulated state. This yields an immediate application to quantum money: a money scheme which is information-theoretically unforgeable and untraceable. In the case of Haar-random unitaries, our simulator takes polynomial space, but simulates both forward and inverse access with zero error. These results can be seen as the first significant steps in developing a theory of lazy sampling for random quantum objects.
2020
EUROCRYPT
Compact Adaptively Secure ABE from k-Lin: Beyond NC1 and towards NL 📺
We present a new general framework for constructing compact and adaptively secure attribute-based encryption (ABE) schemes from k-Lin in asymmetric bilinear pairing groups. Previously, the only construction [Kowalczyk and Wee, Eurocrypt '19] that simultaneously achieves compactness and adaptive security from static assumptions supports policies represented by Boolean formulae. Our framework enables supporting more expressive policies represented by arithmetic branching programs. Our framework extends to ABE for policies represented by uniform models of computation such as Turing machines. Such policies enjoy the feature of being applicable to attributes of arbitrary lengths. We obtain the first compact adaptively secure ABE for deterministic and non-deterministic finite automata (DFA and NFA) from k-Lin, previously unknown from any static assumptions. Beyond finite automata, we obtain the first ABE for large classes of uniform computation, captured by deterministic and non-deterministic logspace Turing machines (the complexity classes L and NL) based on k-Lin. Our ABE scheme has compact secret keys of size linear in the description size of the Turing machine M. The ciphertext size grows linearly in the input length, but also linearly in the time complexity, and exponentially in the space complexity. Irrespective of compactness, we stress that our scheme is the first that supports large classes of Turing machines based solely on standard assumptions. In comparison, previous ABE for general Turing machines all rely on strong primitives related to indistinguishability obfuscation.
2020
EUROCRYPT
Evolving Ramp Secret Sharing with a Small Gap 📺
Evolving secret-sharing schemes, introduced by Komargodski, Naor, and Yogev (TCC 2016b), are secret-sharing schemes in which there is no a priori upper bound on the number of parties that will participate. The parties arrive one by one and when a party arrives the dealer gives it a share; the dealer cannot update this share when other parties arrive. Motivated by the fact that when the number of parties is known, ramp secret-sharing schemes are more efficient than threshold secret-sharing schemes, we study evolving ramp secret-sharing schemes. Specifically, we study evolving $(b(j),g(j))$-ramp secret-sharing schemes, where $g,b: \mathbb{N}\to \mathbb{N}$ are non-decreasing functions. In such schemes, any set of parties that for some $j$ contains $g(j)$ of the first $j$ parties that arrive can reconstruct the secret, and any set that for every $j$ contains fewer than $b(j)$ of the first $j$ parties that arrive cannot learn any information about the secret. We focus on the case that the gap is small, namely $g(j)-b(j)=j^{\beta}$ for $0<\beta<1$. We show that there is an evolving ramp secret-sharing scheme with gap $j^{\beta}$, in which the share size of the $j$-th party is $\tilde{O}(j^{4-\frac{1}{\log^2 {1/\beta}}})$. Furthermore, we show that our construction results in a much better share size for fixed values of $\beta$, i.e., there is an evolving ramp secret-sharing scheme with gap $\sqrt{j}$, in which the share size of the $j$-th party is $\tilde{O}(j)$. Our construction should be compared to the best known evolving $g(j)$-threshold secret-sharing schemes (i.e., when $b(j)=g(j)-1$) in which the share size of the $j$-th party is $\tilde{O}(j^4)$. Thus, our construction offers a significant improvement for every constant $\beta$, showing that allowing a gap between the sizes of the authorized and unauthorized sets can reduce the share size. In addition, we present an evolving $(k/2,k)$-ramp secret-sharing scheme for a constant $k$ (which can be very large), where any set of parties of size at least $k$ can reconstruct the secret and any set of parties of size at most $k/2$ cannot learn any information about the secret. The share size of the $j$-th party in our construction is $O(\log k\log j)$. This is an improvement over the best known evolving $k$-threshold secret-sharing schemes in which the share size of the $j$-th party is $O(k\log j)$.
2020
EUROCRYPT
Quantum-access-secure message authentication via blind-unforgeability 📺
Formulating and designing authentication of classical messages in the presence of adversaries with quantum query access has been a challenge, as the familiar classical notions of unforgeability do not directly translate into meaningful notions in the quantum setting. A particular difficulty is how to fairly capture the notion of ``predicting an unqueried value'' when the adversary can query in quantum superposition. We propose a natural definition of unforgeability against quantum adversaries called blind unforgeability. This notion defines a function to be predictable if there exists an adversary who can use "partially blinded" oracle access to predict values in the blinded region. We support the proposal with a number of technical results. We begin by establishing that the notion coincides with EUF-CMA in the classical setting and go on to demonstrate that the notion is satisfied by a number of simple guiding examples, such as random functions and quantum-query-secure pseudorandom functions. We then show the suitability of blind unforgeability for supporting canonical constructions and reductions. We prove that the "hash-and-MAC" paradigm and the Lamport one-time digital signature scheme are indeed unforgeable according to the definition. In this setting, we additionally define and study a new variety of quantum-secure hash functions called Bernoulli-preserving. Finally, we demonstrate that blind unforgeability is strictly stronger than a previous definition of Boneh and Zhandry [EUROCRYPT '13, CRYPTO '13] and resolve an open problem concerning this previous definition by constructing an explicit function family which is forgeable yet satisfies the definition.
2020
EUROCRYPT
He Gives C-Sieves on the CSIDH 📺
Recently, Castryck, Lange, Martindale, Panny, and Renes proposed \emph{CSIDH} (pronounced ``sea-side'') as a candidate post-quantum ``commutative group action.'' It has attracted much attention and interest, in part because it enables noninteractive Diffie--Hellman-like key exchange with quite small communication. Subsequently, CSIDH has also been used as a foundation for digital signatures. In 2003--04, Kuperberg and then Regev gave asymptotically subexponential quantum algorithms for ``hidden shift'' problems, which can be used to recover the CSIDH secret key from a public key. In late 2011, Kuperberg gave a follow-up quantum algorithm called the \emph{collimation sieve} (``c-sieve'' for short), which improves the prior ones, in particular by using exponentially less quantum memory and offering more parameter tradeoffs. While recent works have analyzed the concrete cost of the original algorithms (and variants) against CSIDH, nothing of this nature was previously available for the c-sieve. This work fills that gap. Specifically, we generalize Kuperberg's collimation sieve to work for arbitrary finite cyclic groups, provide some practical efficiency improvements, give a classical (i.e., non-quantum) simulator, run experiments for a wide range of parameters up to the actual CSIDH-512 group order, and concretely quantify the complexity of the c-sieve against CSIDH. Our main conclusion is that the proposed CSIDH parameters provide relatively little quantum security beyond what is given by the cost of quantumly evaluating the CSIDH group action itself (on a uniform superposition). For example, the cost of CSIDH-512 key recovery is only about $2^{16}$ quantum evaluations using $2^{40}$ bits of quantumly accessible \emph{classical} memory (plus relatively small other resources). This improves upon a prior estimate of $2^{32.5}$ evaluations and $2^{31}$ qubits of \emph{quantum} memory, for a variant of Kuperberg's original sieve. Under the plausible assumption that quantum evaluation does not cost much more than what is given by a recent ``best case'' analysis, CSIDH-512 can therefore be broken using significantly less than $2^{64}$ quantum T-gates. This strongly invalidates its claimed NIST level 1 quantum security, especially when accounting for the MAXDEPTH restriction. Moreover, under analogous assumptions, even CSIDH-1024 and -1792, which target higher NIST security levels, fall short of level 1 except near the high end of the MAXDEPTH range.
2020
EUROCRYPT
Hardness of LWE on General Entropic Distributions 📺
The hardness of the Learning with Errors (LWE) problem is by now a cornerstone of the cryptographic landscape, allowing one to construct cryptographic schemes with properties unknown under other assumptions, and being conjectured to be resilient to quantum attacks. LWE is essentially the task of solving a noisy system of random linear equations over uniformly random secret variables (“the LWE secret”), evaluated modulo some integer. In applications the secret variables usually correspond to the secret key of the cryptographic scheme. It is therefore of great importance to understand what happens when the secret variables are not sampled uniformly (but still have some entropy). This is relevant for settings where an adversary manages to obtain partial information on the secret (a.k.a. key leakage), for various theoretical applications, and also for practical use where for efficiency or convenience it is easier to sample the secret from some non-uniform distribution. This so-called “Entropic LWE” problem has been studied in a number of works, starting with Goldwasser et al. (ICS 2010). However, so far it was only known how to prove the hardness of Entropic LWE for secret distributions supported inside a ball of small radius. In this work we resolve the hardness of Entropic LWE with arbitrarily long secrets, in the following sense. We show an entropy bound that guarantees the security of arbitrary Entropic LWE. This bound is higher than what is required in the ball-bounded setting, but we show that this is essentially tight. Tightness is shown unconditionally for highly composite moduli, and using black-box impossibility for arbitrary moduli. Technically, we show that the entropic hardness of LWE relies on a simple-to-describe lossiness property of the distribution of secrets itself. This is simply the probability of recovering a random sample s from this distribution, given s + e, where e is Gaussian noise (i.e., the quality of the distribution of secrets as an error-correcting code for Gaussian noise). We hope that this characterization will make it easier to derive Entropic LWE results in the future. We also use our techniques to show new results for the ball-bounded setting, essentially showing that under a strong enough assumption even polylogarithmic entropy suffices.
2020
EUROCRYPT
Candidate iO From Homomorphic Encryption Schemes 📺
We propose a new approach to construct general-purpose indistinguishability obfuscation (iO). Our construction is obtained via a new intermediate primitive that we call split fully-homomorphic encryption (split FHE), which we show to be sufficient for constructing iO. Specifically, split FHE is FHE where decryption takes the following two-step syntactic form: (i) A secret decryption step uses the secret key and produces a hint which is (asymptotically) shorter than the length of the encrypted message, and (ii) a public decryption step that only requires the ciphertext and the previously generated hint (and not the entire secret key), and recovers the encrypted message. In terms of security, the hints for a set of ciphertexts should not allow one to violate semantic security for any other ciphertexts. Next, we show a generic candidate construction of split FHE based on three building blocks: (i) A standard FHE scheme with linear decrypt-and-multiply (which can be instantiated with essentially all LWE-based constructions), (ii) a linearly homomorphic encryption scheme with short decryption hints (such as the Damgard-Jurik encryption scheme, based on the DCR problem), and (iii) a cryptographic hash function (which can be based on a variety of standard assumptions). Our approach is heuristic in the sense that our construction is not provably secure and makes implicit assumptions about the interplay between these underlying primitives. We show evidence that this construction is secure by providing an argument in an appropriately defined oracle model. We view our construction as a big departure from the state-of-the-art constructions, and it is in fact quite simple.
2020
EUROCRYPT
Rational isogenies from irrational endomorphisms 📺
In this paper, we introduce a polynomial-time algorithm to compute a connecting $\mathcal{O}$-ideal between two supersingular elliptic curves over $\mathbb{F}_p$ with common $\mathbb{F}_p$-endomorphism ring $\mathcal{O}$, given a description of their full endomorphism rings. This algorithm provides a reduction of the security of the CSIDH cryptosystem to the problem of computing endomorphism rings of supersingular elliptic curves. A similar reduction for SIDH appeared at Asiacrypt 2016, but relies on totally different techniques. Furthermore, we also show that any supersingular elliptic curve constructed using the complex-multiplication method can be located precisely in the supersingular isogeny graph by explicitly deriving a path to a known base curve. This result prohibits the use of such curves as a building block for a hash function into the supersingular isogeny graph.
2020
EUROCRYPT
Resource-Restricted Cryptography: Revisiting MPC Bounds in the Proof-of-Work Era 📺
Traditional bounds on synchronous Byzantine agreement (BA) and secure multi-party computation (MPC) establish that in absence of a private correlated-randomness setup, such as a PKI, protocols can tolerate up to $t<n/3$ of the parties being malicious. The introduction of ``Nakamoto style'' consensus, based on Proof-of-Work (PoW) blockchains, put forth a somewhat different flavor of BA, showing that even a majority of corrupted parties can be tolerated as long as the majority of the computation resources remain in honest hands. This assumption on honest majority of some resource was also extended to other resources such as stake, space, etc., upon which blockchains achieving Nakamoto-style consensus were built, violating the $t<n/3$ bound in terms of the number of corrupted parties. The above state of affairs begs the question of whether the seeming mismatch is due to different goals and models, or whether the resource-restricting paradigm can be generically used to circumvent the $n/3$ lower bound. In this work we study this question and formally demonstrate how the above paradigm changes the rules of the game in cryptographic definitions. First, we abstract the core properties that the resource-restricting paradigm offers by means of a functionality {\em wrapper}, in the UC framework, which when applied to a standard point-to-point network restricts the ability (of the adversary) to send new messages. We show that such a wrapped network can be implemented using the resource-restricting paradigm---concretely, using PoWs and honest majority of computing power---and that the traditional $t<n/3$ impossibility results fail when the parties have access to such a network. Our construction is in the {\em fresh} Common Reference String (CRS) model---i.e., it assumes a CRS which becomes available to the parties at the same time as to the adversary. We then present constructions for BA and MPC, which given access to such a network tolerate $t<n/2$ corruptions without assuming a private correlated randomness setup. We also show how to remove the freshness assumption from the CRS by leveraging the power of a random oracle. Our MPC protocol achieves the standard notion of MPC security, where parties might have dedicated roles, as is for example the case in Oblivious Transfer protocols. This is in contrast to existing solutions basing MPC on PoWs, which associate roles to pseudonyms but do not link these pseudonyms with the actual parties.
2020
EUROCRYPT
Statistical ZAPR Arguments from Bilinear Maps 📺
Dwork and Naor (FOCS '00) defined ZAPs as 2-message witness-indistinguishable proofs that are public-coin. We relax this to \emph{ZAPs with private Randomness} (ZAPRs), where the verifier can use private coins to sample the first message (independently of the statement being proved), but the proof must remain publicly verifiable given only the protocol transcript. In particular, ZAPRs are \emph{reusable}, meaning that the first message can be reused for multiple proofs without compromising security. Known constructions of ZAPs from trapdoor permutations or bilinear maps are only computationally WI (and statistically sound). Two recent results of Badrinarayanan-Fernando-Jain-Khurana-Sahai and Goyal-Jain-Jin-Malavolta [EUROCRYPT '20] construct the first \emph{statistical ZAP arguments}, which are statistically WI (and computationally sound), from the quasi-polynomial LWE assumption. Here, we construct \emph{statistical ZAPR arguments} from the quasi-polynomial decision-linear (DLIN) assumption on groups with a bilinear map. Our construction relies on a combination of several tools including Groth-Ostrovsky-Sahai NIZK and NIWI [EUROCRYPT '06, CRYPTO '06, JACM '12], ``sometimes-binding statistically hiding commitments'' [Kalai-Khurana-Sahai, EUROCRYPT '18] and the ``MPC-in-the-head'' technique [Ishai-Kushilevitz-Ostrovsky-Sahai, STOC '07].
2020
EUROCRYPT
An Algebraic Attack on Rank Metric Code-Based Cryptosystems 📺
The Rank metric decoding problem is the main problem considered in cryptography based on codes in the rank metric. Very efficient schemes based on this problem or quasi-cyclic versions of it have been proposed recently, such as those in the submissions ROLLO and RQC currently at the second round of the NIST Post-Quantum Cryptography Standardization Process. While combinatorial attacks on this problem have been extensively studied and seem now well understood, the situation is not as satisfactory for algebraic attacks, for which previous work essentially suggested that they were ineffective for cryptographic parameters. In this paper, starting from Ourivski and Johansson's algebraic modelling of the problem into a system of polynomial equations, we show how to augment this system with easily computed equations so that the augmented system is solved much faster via Gröbner bases. This happens because the augmented system has solving degree $r$, $r+1$ or $r+2$ depending on the parameters, where $r$ is the rank weight, which we show by extending results from Verbel \emph{et al.} (PQCrypto 2019) on systems arising from the MinRank problem; with target rank $r$, Verbel \emph{et al.} lower the solving degree to $r+2$, and even less for some favorable instances that they call ``superdetermined''. We give complexity bounds for this approach as well as practical timings of an implementation using \texttt{magma}. This improves upon the previously known complexity estimates for both Gröbner basis and (non-quantum) combinatorial approaches, and for example leads to an attack of complexity $2^{200}$ on ROLLO-I-256, whose claimed security was 256 bits.
2020
EUROCRYPT
SPARKs: Succinct Parallelizable Arguments of Knowledge 📺
We introduce the notion of a Succinct Parallelizable Argument of Knowledge (SPARK). This is an argument system with the following three properties for computing and proving a time T (non-deterministic) computation: - The prover's (parallel) running time is T + polylog T. (In other words, the prover's running time is essentially T for large computation times!) - The prover uses at most polylog T processors. - The communication complexity and verifier complexity are both polylog T. While the third property is standard in succinct arguments, the combination of all three is desirable as it gives a way to leverage moderate parallelism in favor of near-optimal running time. We emphasize that even a factor two overhead in the prover's parallel running time is not allowed. Our main results are the following, all for non-deterministic polynomial-time RAM computation. We construct (1) an (interactive) SPARK based solely on the existence of collision-resistant hash functions, and (2) a non-interactive SPARK based on any collision-resistant hash function and any SNARK with quasi-linear overhead (as satisfied by recent SNARK constructions).
2020
EUROCRYPT
Statistical Zaps and New Oblivious Transfer Protocols 📺
We study the problem of achieving statistical privacy in interactive proof systems and oblivious transfer -- two of the most well studied two-party protocols -- when limited rounds of interaction are available. -- Statistical Zaps: We give the first construction of statistical Zaps, namely, two-round statistical witness-indistinguishable (WI) protocols with a public-coin verifier. Our construction achieves computational soundness based on the quasi-polynomial hardness of learning with errors assumption. -- Three-Round Statistical Receiver-Private Oblivious Transfer: We give the first construction of a three-round oblivious transfer (OT) protocol -- in the plain model -- that achieves statistical privacy for receivers and computational privacy for senders against malicious adversaries, based on polynomial-time assumptions. The round-complexity of our protocol is optimal. We obtain our first result by devising a public-coin approach to compress sigma protocols, without relying on trusted setup. To obtain our second result, we devise a general framework via a new notion of statistical hash commitments that may be of independent interest.
2020
EUROCRYPT
Impossibility Results for Lattice-Based Functional Encryption Schemes 📺
Functional Encryption denotes a form of encryption where a master secret key-holder can control which functions a user can evaluate on encrypted data. Learning With Errors (LWE) (Regev, STOC'05) is known to be a useful cryptographic hardness assumption which implies strong primitives such as, for example, fully homomorphic encryption (Brakerski et al., ITCS'12) and lockable obfuscation (Goyal et al., Wichs et al., FOCS'17). Despite its strength, however, there is just a limited number of functional encryption schemes which can be based on LWE. In fact, there are functional encryption schemes which can be achieved by using pairings but for which no secure instantiations from lattice-based assumptions are known: function-hiding inner product encryption (Lin, Baltico et al., CRYPTO'17) and compact quadratic functional encryption (Abdalla et al., CRYPTO'18). This raises the question whether there are some mathematical barriers which hinder us from realizing function-hiding and compact functional encryption schemes from lattice-based assumptions such as LWE. To study this problem, we prove an impossibility result for function-hiding functional encryption schemes which meet some algebraic restrictions at ciphertext encryption and decryption. Those restrictions are met by many attribute-based, identity-based and functional encryption schemes whose security stems from LWE. Therefore, we see our results as important indications why it is hard to construct new functional encryption schemes from LWE and which mathematical restrictions have to be overcome to construct secure lattice-based functional encryption schemes for new functionalities.
2020
EUROCRYPT
Non-Interactive Zero-Knowledge in Pairing-Free Groups from Weaker Assumptions 📺
We provide new constructions of non-interactive zero-knowledge arguments (NIZKs) for NP from discrete-logarithm-style assumptions over cyclic groups, without relying on pairings. A previous construction from (Canetti et al., Eurocrypt'18) achieves such NIZKs under the assumption that no efficient adversary can break the key-dependent message (KDM) security of (additive) ElGamal with respect to all (even inefficient) functions over groups of size $2^\lambda$, with probability better than $\mathrm{poly}(\lambda)/2^{\lambda}$. This is an extremely strong, non-falsifiable assumption. In particular, even mild (polynomial) improvements over the current best known attacks on the discrete logarithm problem would already contradict this assumption. (Canetti et al. STOC'19) describe how to improve the assumption to rely only on KDM security with respect to all efficient functions, therefore obtaining an assumption that is (in spirit) falsifiable. Our first construction improves this state of affairs. We provide a construction of NIZKs for NP under the CDH assumption together with the assumption that no efficient adversary can break the key-dependent message one-wayness of ElGamal with respect to efficient functions over groups of size $2^\lambda$, with probability better than $\mathrm{poly}(\lambda)/2^{c\lambda}$ (denoted $2^{-c\lambda}$-OWKDM), for a constant $c = 3/4$. Unlike the previous assumption, our assumption leaves an exponential gap between the best known attack and the required security guarantee. We also analyse whether we could build NIZKs when CDH does not hold. As a second contribution, we construct an infinitely often NIZK argument system for NP (where soundness and zero-knowledge are only guaranteed to hold for infinitely many security parameters), under the $2^{-c\lambda}$-OWKDM security of ElGamal with $c = 28/29+o(1)$, together with the existence of low-depth pseudorandom generators.
2020
EUROCRYPT
Security under Message-Derived Keys: Signcryption in iMessage 📺
At the core of Apple's iMessage is a SignCryption scheme that involves symmetric encryption of a message under a key that is derived from the message itself. To capture this, we formalize a primitive we call Encryption under Message-Derived Keys (EMDK). We prove security of the EMDK scheme underlying iMessage. We use this to prove security of the SignCryption scheme itself, with respect to definitions of SignCryption we give that enhance prior ones to cover issues peculiar to messaging protocols. Our provable-security results are quantitative, and we discuss the practical implications for iMessage.
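A toy sketch of the EMDK syntax only, assuming a hash-based toy stream cipher; this is a hypothetical illustration of encrypting under a message-derived key, not iMessage's actual scheme (where the derived key is then protected by the surrounding signcryption):

import hashlib

def kdf(message: bytes) -> bytes:
    # the symmetric key is derived from the message itself
    return hashlib.sha256(b"EMDK-key" + message).digest()

def keystream(key: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def emdk_encrypt(message: bytes):
    k = kdf(message)
    ct = bytes(m ^ s for m, s in zip(message, keystream(k, len(message))))
    return k, ct        # the outer scheme is responsible for protecting k

k, ct = emdk_encrypt(b"hello")
assert bytes(c ^ s for c, s in zip(ct, keystream(k, len(ct)))) == b"hello"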
2020
EUROCRYPT
Succinct Non-Interactive Secure Computation 📺
We present the first maliciously secure protocol for succinct non-interactive secure two-party computation (SNISC): Each player sends just a single message whose length is (essentially) independent of the running time of the function to be computed. The protocol does not require any trusted setup, satisfies superpolynomial-time simulation-based security (SPS), and is based on (subexponential) security of the Learning With Errors (LWE) assumption. We do not rely on SNARKs or "knowledge of exponent"-type assumptions. Since the protocol is non-interactive, the relaxation to SPS security is needed, as standard polynomial-time simulation is impossible; however, a slight variant of our main protocol yields a SNISC with polynomial-time simulation in the CRS model.
2020
EUROCRYPT
Separate Your Domains: NIST PQC KEMs, Oracle Cloning and Read-Only Indifferentiability 📺
It is convenient and common for schemes in the random oracle model to assume access to multiple random oracles (ROs), leaving to implementations the task --we call it oracle cloning-- of constructing them from a single RO. The first part of the paper is a case study of oracle cloning in KEM submissions to the NIST Post-Quantum Cryptography standardization process. We give key-recovery attacks on some submissions arising from mistakes in oracle cloning, and find other submissions using oracle cloning methods whose validity is unclear. Motivated by this, the second part of the paper gives a theoretical treatment of oracle cloning. We give a definition of what is an "oracle cloning method" and what it means for such a method to "work," in a framework we call read-only indifferentiability, a simple variant of classical indifferentiability that yields security not only for usage in single-stage games but also in multi-stage ones. We formalize domain separation, and specify and study many oracle cloning methods, including common domain-separating ones, giving some general results to justify (prove read-only indifferentiability of) certain classes of methods. We are not only able to validate the oracle cloning methods used in many of the unbroken NIST PQC KEMs, but also able to specify and validate oracle cloning methods that may be useful beyond that.
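For concreteness, a minimal sketch of the most common oracle-cloning method studied here, prefix-based domain separation; the fixed-length tag encoding and hash choice are illustrative assumptions:

import hashlib

def cloned_oracle(index: int):
    # derive oracle H_index from one base hash via a fixed-length tag
    tag = index.to_bytes(4, "big")
    def H(x: bytes) -> bytes:
        return hashlib.sha3_256(tag + x).digest()
    return H

H1, H2 = cloned_oracle(1), cloned_oracle(2)
assert H1(b"msg") != H2(b"msg")   # distinct tags separate the query domains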
2020
EUROCRYPT
Efficient Constructions for Almost-everywhere Secure Computation 📺
We study the problem of {\em almost-everywhere reliable message transmission}, a key component in designing efficient and secure MPC protocols for sparsely connected networks. The goal is to design low-degree networks which allow a large fraction of honest nodes to communicate reliably even while linearly many nodes can experience byzantine corruption and deviate arbitrarily from the assigned protocol. In this paper, we achieve a $\log$-degree network with a polylogarithmic work complexity protocol, thereby improving over the state-of-the-art result of Chandran {\em et al.} (ICALP 2010) who required a polylogarithmic-degree network and had a linear work complexity. In addition, we also achieve: (1) a work-efficient version of Dwork et al.'s (STOC 1986) butterfly network; and (2) an improvement upon the state-of-the-art protocol of Ben-Or and Ron (Information Processing Letters 1996) in the randomized corruption model, both in work-efficiency and in resilience.
2020
EUROCRYPT
Broadcast-Optimal Two-Round MPC 📺
An intensive effort by the cryptographic community to minimize the round complexity of secure multi-party computation (MPC) has recently led to optimal two-round protocols from minimal assumptions. Most of the proposed solutions, however, make use of a broadcast channel in every round, and it is unclear if the broadcast channel can be replaced by standard point-to-point communication in a round-preserving manner, and if so, at what cost on the resulting security. In this work, we provide a complete characterization of the trade-off between number of broadcast rounds and achievable security level for two-round MPC tolerating arbitrarily many active corruptions. Specifically, we consider all possible combinations of broadcast and point-to-point rounds against the three standard levels of security for maliciously secure MPC protocols, namely, security with identifiable, unanimous, and selective abort. For each of these notions and each combination of broadcast and point-to-point rounds, we provide either a tight feasibility or an infeasibility result of two-round MPC. Our feasibility results hold assuming two-round OT in the CRS model, whereas our impossibility results hold given any correlated randomness.
2020
EUROCRYPT
Tornado: Automatic Generation of Probing-Secure Masked Bitsliced Implementations 📺
Cryptographic implementations deployed in real-world devices often aim at (provable) security against the powerful class of side-channel attacks while keeping reasonable performance. Last year at Asiacrypt, a new formal verification tool named tightPROVE was put forward to exactly determine whether a masked implementation is secure in the well-deployed probing security model for any given security order t. Also recently, a compiler named Usuba was proposed to automatically generate bitsliced implementations of cryptographic primitives. This paper goes one step further in terms of security and performance with a new automatic tool named Tornado. In a nutshell, from the high-level description of a cryptographic primitive, Tornado produces a functionally equivalent bitsliced masked implementation at any desired order proven secure in the probing model, but additionally in the so-called register probing model, which much better fits the reality of software implementations. This framework is obtained by integrating Usuba with tightPROVE+, which extends tightPROVE with the ability to verify the security of implementations in the register probing model and to fix them by inserting refresh gadgets at carefully chosen locations. We demonstrate Tornado on the lightweight cryptographic primitives selected for the second round of the NIST competition, which to some extent claim to be masking-friendly. It reports the performance of the resulting masked implementations for several masking orders and proves their security in the register probing model.
2020
EUROCRYPT
Mathematics and Cryptography: A Marriage of Convenience? 📺
Invited Paper
Mathematics and cryptography have a long history together, with the ups and downs inherent in any long relationship. Whether it is a marriage of convenience or a love match, their progeny have lives of their own and have had an impact on the world. This invited lecture will briefly recall some high points from the past, give speculation and encouragement for the future of this marriage, and give counseling on how to improve communication, resolve conflicts, and play well together, based on personal experience and lessons learned.
2020
EUROCRYPT
Mind the Composition: Birthday Bound Attacks on EWCDMD and SoKAC21 📺
In an early version of CRYPTO'17, Mennink and Neves proposed \textsf{EWCDMD}, a dual of \textsf{EWCDM}, and showed $n$-bit security, where $n$ is the block size of the underlying block cipher. In CRYPTO'19, Chen et al. proposed permutation based design \textsf{SoKAC21} and showed $2n/3$-bit security, where $n$ is the input size of the underlying permutation. In this paper we show birthday bound attacks on \textsf{EWCDMD} and \textsf{SoKAC21}, invalidating their security claims. Both attacks exploit an inherent composition nature present in the constructions. Motivated by the above two attacks exploiting the composition nature, we consider some generic relevant composition based constructions of ideal primitives (possibly in the ideal permutation and random oracle model) and present birthday bound distinguishers for them. In particular, we demonstrate a birthday bound distinguisher against (1) a secret random permutation followed by a public random function and (2) composition of two secret random functions. Our distinguishers for \textsf{SoKAC21} and \textsf{EWCDMD} are direct consequences of (1) and (2) respectively.
2020
EUROCRYPT
Double-Base Chains for Scalar Multiplications on Elliptic Curves 📺
Double-base chains (DBCs) are widely used to speed up scalar multiplications on elliptic curves. We present three results on DBCs. First, we describe a structure of the set containing all DBCs and propose an iterative algorithm to compute the number of DBCs for a positive integer. This is the first polynomial-time algorithm to compute the number of DBCs for positive integers. Second, we present an asymptotic lower bound of $\frac{\log n}{8.25}$ on the average Hamming weight of DBCs for a positive integer $n$. This result answers an open question about the Hamming weights of DBCs. Third, we propose a new algorithm to generate an optimal DBC for any positive integer. The time complexity of this algorithm is $\mathcal{O}\left(\left(\log n\right)^2 \log\log n\right)$ bit operations and the space complexity is $\mathcal{O}\left(\left(\log n\right)^{2}\right)$ bits of memory. This algorithm accelerates the recoding procedure by more than 6 times compared to the state-of-the-art work of Bernstein, Chuengsatiansup, and Lange. The Hamming weights of optimal DBCs are over 60% smaller than those of NAFs. Experimental results show that scalar multiplication using our optimal DBC is about 13% faster than that using the non-adjacent form on elliptic curves over large prime fields.
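For intuition, a sketch of the classical greedy method that writes n as a sum of terms 2^a 3^b; this yields a plain double-base representation, not the paper's optimal DBCs (whose recoding algorithm is more involved):

def best_2a3b(n: int) -> int:
    """Largest 2^a * 3^b <= n, found by scanning powers of three."""
    best, p3 = 1, 1
    while p3 <= n:
        # largest power of two such that p3 * 2^a still fits under n
        q = p3 << ((n // p3).bit_length() - 1)
        if q > best:
            best = q
        p3 *= 3
    return best

def greedy_dbns(n: int):
    """Greedy double-base representation: subtract the best term, repeat."""
    terms = []
    while n > 0:
        t = best_2a3b(n)
        terms.append(t)
        n -= t
    return terms

assert sum(greedy_dbns(314159)) == 314159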
2020
EUROCRYPT
Low Weight Discrete Logarithms and Subset Sum in $2^{0.65n}$ with Polynomial Memory 📺
We propose two heuristic polynomial-memory collision finding algorithms for the low Hamming weight discrete logarithm problem in any abelian group $G$. The first one is a direct adaptation of the Becker-Coron-Joux (BCJ) algorithm for subset sum to the discrete logarithm setting. The second one significantly improves on this adaptation for all possible weights using a more involved application of the representation technique together with some new Markov chain analysis. In contrast to other low weight discrete logarithm algorithms, our second algorithm's time complexity interpolates to Pollard's $|G|^{\frac 1 2}$ bound for general discrete logarithm instances. We also introduce a new heuristic subset sum algorithm with polynomial memory that improves on BCJ's $2^{0.72n}$ time bound for random subset sum instances $a_1, \ldots, a_n, t \in \mathbb{Z}_{2^n}$. Technically, we introduce a novel nested collision finding technique for subset sum -- inspired by the NestedRho algorithm from Crypto '16 -- that recursively produces collisions. We first show how to instantiate our algorithm with run time $2^{0.649n}$. Using further tricks, we are then able to improve its complexity down to $2^{0.645n}$.
2020
EUROCRYPT
Private Information Retrieval with Sublinear Online Time 📺
Best Young Researcher Award
We present the first protocols for private information retrieval that allow fast (sublinear-time) database lookups without increasing the server-side storage requirements. To achieve these efficiency goals, our protocols work in an offline/online model. In an offline phase, which takes place before the client has decided which database bit it wants to read, the client fetches a short string from the servers. In a subsequent online phase, the client can privately retrieve its desired bit of the database by making a second query to the servers. By pushing the bulk of the server-side computation into the offline phase (which is independent of the client’s query), our protocols allow the online phase to complete very quickly—in time sublinear in the size of the database. Our protocols can provide statistical security in the two-server setting and computational security in the single-server setting. Finally, we prove that, in this model, our protocols are optimal in terms of the trade-off they achieve between communication and running time.
2020
EUROCRYPT
Side-channel Masking with Pseudo-Random Generator 📺
High-order masking countermeasures against side-channel attacks usually require plenty of randomness during their execution. For security against t probes, the classical ISW countermeasure requires O(t^2 s) random bits, where s is the circuit size. However running a True Random Number Generator (TRNG) can be costly in practice and become a bottleneck on embedded devices. In [IKL+13] the authors introduced the notion of robust pseudo-random number generator (PRG), which must remain secure even against an adversary who can probe at most t wires. They showed that when embedding a robust PRG within a private circuit, the number of random bits can be reduced to O(t^4), that is independent of the circuit size s (up to a logarithmic factor). Using bipartite expander graphs, this can be further reduced to O(t^(3+eps)); however the resulting construction is impractical. In this paper we describe a practical construction where the number of random bits is only O(t^2) for security against t probes, without expander graphs; moreover the running time of each pseudo-random generation goes down from O(t^4) to O(t). Our technique consists in using multiple independent PRGs instead of a single one. We show that for ISW circuits, the robustness property of the PRG is not required anymore, which leads to simple and efficient constructions. For example, for AES we only need 48 bytes of randomness to get second-order security (t=2), instead of 2880 in the original Rivain-Prouff countermeasure; when implemented on an ARM-based embedded device with a relatively slow TRNG, we obtain a 50% speed-up compared to Rivain-Prouff.
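A toy sketch of the basic idea of replacing per-gadget TRNG calls with PRG output: the masks come from a single hash-based PRG seeded once from a TRNG. This illustrates only Boolean sharing; the paper's actual contribution (multiple independent PRGs inside ISW private circuits, and the O(t^2) randomness bound) is not reproduced here.

import hashlib, secrets

class ToyPRG:
    """Hash-chain PRG: one TRNG call for the seed, cheap bytes thereafter."""
    def __init__(self, seed: bytes):
        self.state = seed
    def next_byte(self) -> int:
        self.state = hashlib.sha256(self.state).digest()
        return self.state[0]

prg = ToyPRG(secrets.token_bytes(16))    # the only TRNG call

def share(x: int, t: int):
    """Split byte x into t+1 Boolean shares using PRG-derived masks."""
    masks = [prg.next_byte() for _ in range(t)]
    last = x
    for m in masks:
        last ^= m                        # XOR of all shares equals x
    return masks + [last]

def unshare(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

assert unshare(share(0xAB, t=2)) == 0xAB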
2020
EUROCRYPT
Blackbox Secret Sharing Revisited: A Coding-Theoretic Approach with Application to Expansionless Near-Threshold Schemes 📺
A {\em blackbox} secret sharing (BBSS) scheme works in exactly the same way for all finite Abelian groups $G$; it can be instantiated for any such group $G$ and {\em only} black-box access to its group operations and to random group elements is required. A secret is a single group element and each of the $n$ players' shares is a vector of such elements. Share-computation and secret-reconstruction are by integer linear combinations. These do not depend on $G$, and neither do the privacy and reconstruction parameters $t,r$. This classical, fundamental primitive was introduced by Desmedt and Frankel (CRYPTO 1989) in their context of ``threshold cryptography.'' The expansion factor is the total number of group elements in a full sharing divided by $n$. For threshold BBSS with $t$-privacy ($1\leq t \leq n-1$), $t+1$-reconstruction and arbitrary $n$, constructions with minimal expansion $O(\log n)$ exist (CRYPTO 2002, 2005). These results are firmly rooted in number theory; each makes (different) judicious choices of orders in number fields admitting a vector of elements of very large length (in the number field degree) whose corresponding Vandermonde-determinant is sufficiently controlled so as to enable BBSS by a suitable adaptation of Shamir's scheme. Alternative approaches generally lead to very large expansion. The state of the art of BBSS has not changed for the last 15 years. Our contributions are two-fold. (1) We introduce a novel, nontrivial, effective construction of BBSS based on {\em coding theory} instead of number theory. For threshold-BBSS we also achieve minimal expansion factor $O(\log n)$. (2) Our method is more versatile. Namely, we show, for the first time, BBSS that is {\em near-threshold}, i.e., $r-t$ is an arbitrarily small constant fraction of $n$, {\em and} that has expansion factor~$O(1)$, i.e., individual share-vectors of {\em constant} length (``asymptotically expansionless''). The threshold can be concentrated essentially freely across the full range. We also show that expansion is minimal for near-threshold and that such BBSS cannot be attained by previous methods. Our general construction is based on a well-known mathematical principle, the local-global principle. More precisely, we first construct BBSS over local rings through either Reed-Solomon or algebraic geometry codes. We then ``glue'' these schemes together in a dedicated manner to obtain a global secret sharing scheme, i.e., defined over the integers, which, as we finally prove using novel insights, has the desired BBSS properties. Though our main purpose here is advancing BBSS for its own sake, we also briefly address possible protocol applications.
2020
EUROCRYPT
Tight Security Bounds for Double-block Hash-then-Sum MACs 📺
In this work, we study the security of deterministic MAC constructions with a double-block internal state, captured by the double-block hash-then-sum (DBH) paradigm. Most DBH constructions, including PolyMAC, SUM-ECBC, PMAC-Plus, 3kf9 and LightMAC-Plus, have been proved to be pseudorandom up to 2^{2n/3} queries when they are instantiated with an n-bit block cipher, while the best known generic attacks require 2^{3n/4} queries. We close this gap by proving the PRF-security of DBH constructions up to 2^{3n/4} queries (ignoring the maximum message length). The core of the security proof is to refine the Mirror theory that systematically estimates the number of solutions to a system of equations and non-equations, and to apply it to prove the security of the finalization function. Then we identify security requirements of the internal hash functions to ensure 3n/4-bit security of the resulting constructions when combined with the finalization function. Within this framework, we prove the security of DBH whose internal hash function is given as the concatenation of universal hash functions using two independent keys. This class of constructions includes PolyMAC and SUM-ECBC. Moreover, we prove the security of PMAC-Plus, 3kf9 and LightMAC-Plus up to 2^{3n/4} queries.
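The DBH data flow, sketched in Python with SHA-256-based stand-ins (a real instance uses an n-bit block cipher for the finalization; all parameters and helper names here are ours): two independently keyed universal hashes produce the double-block internal state, and the tag is the XOR ("sum") of their encryptions.

import hashlib

P = 2**61 - 1   # prime field for the two polynomial hashes (PolyMAC-style)

def poly_hash(key: int, blocks) -> int:
    # universal polynomial hash: sum of m_i * key^i (mod P)
    h = 0
    for m in reversed(blocks):
        h = (h * key + m) % P
    return h

def E(k: bytes, x: int) -> int:
    # keyed-function stand-in for the block cipher E_k (illustration only)
    d = hashlib.sha256(k + x.to_bytes(8, "big")).digest()
    return int.from_bytes(d[:8], "big")

def dbh_mac(k1, k2, h1, h2, msg_blocks) -> int:
    sigma = poly_hash(h1, msg_blocks)   # first half of the 2n-bit state
    theta = poly_hash(h2, msg_blocks)   # second half, independent key
    return E(k1, sigma) ^ E(k2, theta)  # "hash, then sum"

print(hex(dbh_mac(b"k1", b"k2", 3, 7, [10, 20, 30])))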
2020
EUROCRYPT
On the Quantum Complexity of the Continuous Hidden Subgroup Problem 📺
The Hidden Subgroup Problem (HSP) aims at capturing all problems that may be solvable in quantum polynomial time following the blueprints of Shor's celebrated algorithm. Successful solutions to this problem over various commutative groups allow one to efficiently perform number-theoretic tasks such as factoring or finding discrete logarithms. The latest successful generalization (Eisenträger et al. STOC 2014) considers the problem of finding a full-rank lattice as the hidden subgroup of the continuous vector space R^m, even for large dimensions m. It unlocked new cryptanalytic algorithms (Biasse-Song SODA 2016, Cramer et al. EUROCRYPT 2016 and 2017), in particular to find mildly short vectors in ideal lattices. The cryptanalytic relevance of such a problem raises the question of a more refined and quantitative complexity analysis. In the light of the increasing physical difficulty of maintaining a large entanglement of qubits, the degree of concern may differ depending on whether the above algorithm requires only linearly many qubits or a much larger polynomial number of qubits. This is the question we start addressing with this work. We propose a detailed analysis of (a variation of) the aforementioned HSP algorithm, and conclude on its complexity as a function of all the relevant parameters. Our modular analysis is tailored to support the optimization of future specialization to cases of cryptanalytic interest. We suggest a few ideas in this direction.
2020
EUROCRYPT
Statistical ZAP Arguments 📺
Dwork and Naor (FOCS'00) first introduced and constructed two-message public-coin witness indistinguishable proofs (ZAPs) for NP based on trapdoor permutations. Since then, ZAPs have also been obtained based on the decisional linear assumption on bilinear maps, and indistinguishability obfuscation, and have proven extremely useful in the design of several cryptographic primitives. However, all known constructions of two-message public-coin (or even publicly verifiable) proof systems only guarantee witness indistinguishability against computationally bounded verifiers. In this paper, we construct the first public-coin two-message witness indistinguishable (WI) arguments for NP with {\em statistical} privacy, assuming quasi-polynomial hardness of the learning with errors (LWE) assumption. We also show that the same protocol has a super-polynomial simulator (SPS), which yields the first public-coin SPS statistical zero knowledge argument. Prior to this, there were no known constructions of two-message publicly verifiable WI protocols under lattice assumptions, even satisfying the weaker notion of computational witness indistinguishability.
2020
EUROCRYPT
Tight Time-Space Lower Bounds for Finding Multiple Collision Pairs and Their Applications 📺
We consider a \emph{collision search problem} (CSP), where given a parameter $C$, the goal is to find $C$ collision pairs in a random function $f:[N] \rightarrow [N]$ (where $[N] = \{0,1,\ldots,N-1\}$) using $S$ bits of memory. Algorithms for CSP have numerous cryptanalytic applications such as space-efficient attacks on double and triple encryption. The best known algorithm for CSP is \emph{parallel collision search} (PCS) published by van Oorschot and Wiener, which achieves the time-space tradeoff $T^2 \cdot S = \tilde{O}(C^2 \cdot N)$. In this paper, we prove that any algorithm for CSP satisfies $T^2 \cdot S = \tilde{\Omega}(C^2 \cdot N)$, hence the best known time-space tradeoff is optimal (up to poly-logarithmic factors in $N$). On the other hand, we give strong evidence that proving similar unconditional time-space tradeoff lower bounds on CSP applications (such as breaking double and triple encryption) may be very difficult, and would imply a breakthrough in complexity theory. Hence, we propose a new restricted model of computation and prove that under this model, the best known time-space tradeoff attack on double encryption is optimal.
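A toy version of van Oorschot and Wiener's parallel collision search with distinguished points, to make the $T^2 \cdot S$ tradeoff tangible: only trail endpoints are stored, so memory is the number of trails rather than the number of steps. Parameters are ours, and the rare case where two trails merge without yielding a collision is only flagged, not retried.

import hashlib, random

N, DP_BITS = 2**20, 6   # toy domain; trails are ~2^6 steps on average

def f(x: int) -> int:
    h = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:8], "big") % N

def trail(start: int):
    # iterate f until a distinguished point (low DP_BITS all zero)
    x, steps = start, 0
    while x & ((1 << DP_BITS) - 1) != 0:
        x, steps = f(x), steps + 1
    return x, steps

endpoints = {}          # dp -> (start, trail length): the only stored data (S)
while True:
    s1 = random.randrange(N)
    dp, l1 = trail(s1)
    if dp in endpoints and endpoints[dp][0] != s1:
        s2, l2 = endpoints[dp]
        break
    endpoints[dp] = (s1, l1)

# re-walk both trails, aligned to equal remaining length, to locate the collision
if l1 < l2:
    s1, l1, s2, l2 = s2, l2, s1, l1
for _ in range(l1 - l2):
    s1 = f(s1)
while s1 != s2 and f(s1) != f(s2):
    s1, s2 = f(s1), f(s2)
if s1 != s2:
    print(f"collision: f({s1}) == f({s2})")
else:
    print("trails merged; retry with new starts")  # rare case, not handled here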
2020
EUROCRYPT
Integral Matrix Gram Root and Lattice Gaussian Sampling without Floats 📺
Many advanced lattice-based cryptosystems require sampling lattice points from Gaussian distributions. One challenge for this task is that all current algorithms resort to floating-point arithmetic (FPA) at some point, which has numerous drawbacks in practice: it requires numerical stability analysis, extra storage for high precision, lazy/backtracking techniques for efficiency, and may suffer from weak determinism which can completely break certain schemes. In this paper, we give techniques to implement Gaussian sampling over general lattices without using FPA. To this end, we revisit the approach of Peikert, using perturbation sampling. Peikert's approach uses continuous Gaussian sampling and some decomposition $\Sigma = A A^t$ of the target covariance matrix $\Sigma$. The suggested decomposition, e.g. the Cholesky decomposition, gives rise to a square matrix $A$ with real (not integer) entries. Our idea, in a nutshell, is to replace this decomposition by an integral one. While there is in general no integer solution if we restrict $A$ to being a square matrix, we show that such a decomposition can be efficiently found by allowing $A$ to be wider (say $n \times 9n$). This can be viewed as an extension of Lagrange's four-square theorem to matrices. In addition, we adapt our integral decomposition algorithm to the ring setting: for power-of-2 cyclotomics, we can exploit the tower of rings structure for improved complexity and compactness.
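The scalar case of the four-square analogy is easy to demonstrate: a $1 \times 4$ integer row $a$ with $a a^t = \sigma$ is exactly a four-square decomposition of $\sigma$, which Lagrange's theorem guarantees to exist even though a $1 \times 1$ integer solution (a perfect square root) usually does not. A brute-force sketch (ours, purely illustrative):

from math import isqrt

def four_square(sigma: int):
    # return (a1, a2, a3, a4) with a1^2 + a2^2 + a3^2 + a4^2 == sigma
    for a1 in range(isqrt(sigma) + 1):
        for a2 in range(a1, isqrt(sigma - a1 * a1) + 1):
            rem = sigma - a1 * a1 - a2 * a2
            for a3 in range(a2, isqrt(rem) + 1):
                a4 = isqrt(rem - a3 * a3)
                if a4 * a4 == rem - a3 * a3:
                    return (a1, a2, a3, a4)

print(four_square(2023))  # a 1x4 integer "Gram root" of the scalar 2023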
2020
EUROCRYPT
The Retracing Boomerang Attack 📺
Boomerang attacks are extensions of differential attacks that make it possible to combine two unrelated differential properties of the first and second part of a cryptosystem with probabilities $p$ and $q$ into a new differential-like property of the whole cryptosystem with probability $p^2q^2$ (since each one of the properties has to be satisfied twice). In this paper we describe a new version of boomerang attacks which uses the counterintuitive idea of throwing out most of the data in order to force equalities between certain values on the ciphertext side. In certain cases, this creates a correlation between the four probabilistic events, which increases the probability of the combined property to $p^2q$ and increases the signal-to-noise ratio of the resultant distinguisher. We call this variant a {\it retracing boomerang attack} since we make sure that the boomerang we throw follows the same path on its forward and backward directions. To demonstrate the power of the new technique, we apply it to the case of 5-round AES. This version of AES was repeatedly attacked by a large variety of techniques, but for twenty years its complexity had remained stuck at $2^{32}$. At Crypto'18 it was finally reduced to $2^{24}$ (for full key recovery), and with our new technique we can further reduce the complexity of full key recovery to the surprisingly low value of $2^{16.5}$ (i.e., only $90,000$ encryption/decryption operations are required for a full key recovery on half the rounds of AES). In addition to improving previous attacks, our new technique unveils a hidden relationship between boomerang attacks and two other cryptanalytic techniques, the yoyo game and the recently introduced mixture differentials.
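A worked instance (numbers ours, purely illustrative): for a cipher $E = E_1 \circ E_0$ with a probability-$p$ differential through $E_0$ and a probability-$q$ differential through $E_1$, taking $p = q = 2^{-6}$ gives

$\Pr[\text{classical boomerang}] = p^2 q^2 = 2^{-24}, \qquad \Pr[\text{retracing boomerang}] = p^2 q = 2^{-18},$

so forcing the two $E_1$-trails to coincide saves a factor of $q^{-1} = 2^{6}$ in data, at the cost of discarding the pairs that do not satisfy the forced ciphertext-side equalities.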
2020
EUROCRYPT
Optimal Merging in Quantum $k$-xor and $k$-sum Algorithms 📺
The $k$-xor or Generalized Birthday Problem aims at finding, given $k$ lists of bit-strings, a $k$-tuple among them XORing to 0. If the lists are unbounded, the best classical (exponential) time complexity has remained unbeaten since Wagner's CRYPTO 2002 paper. If the lists are bounded (of the same size) and such that there is a single solution, the \emph{dissection algorithms} of Dinur \emph{et al.} (CRYPTO 2012) improve the memory usage over a simple meet-in-the-middle. In this paper, we study quantum algorithms for the $k$-xor problem. With unbounded lists and quantum access, we improve previous work by Grassi \emph{et al.} (ASIACRYPT 2018) for almost all $k$. Next, we extend our study to lists of any size and with classical access only. We define a set of ``merging trees'' which represent the best known strategies for quantum and classical merging in $k$-xor algorithms, and prove that our method is optimal among these. Our complexities are confirmed by a Mixed Integer Linear Program that computes the best strategy for a given $k$-xor problem. All our algorithms apply also when considering modular additions instead of bitwise xors. This framework enables us to give new improved quantum $k$-xor algorithms for all $k$ and list sizes. Applications include the subset-sum problem, LPN with limited memory and the multiple-encryption problem.
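A toy instance of Wagner's classical merging baseline for $k = 4$ (domain and list sizes are ours; with these parameters a solution exists only with constant probability, so enlarge the lists or re-run if none is found): merge two pairs of lists on the low third of the bits, then look for a full collision between the merged lists.

import random
from collections import defaultdict

n = 24
third = n // 3
mask = (1 << third) - 1
L = [[random.getrandbits(n) for _ in range(2 ** (n // 3))] for _ in range(4)]

def merge(A, B):
    # all pairs a ^ b that vanish on the low `third` bits
    buckets = defaultdict(list)
    for b in B:
        buckets[b & mask].append(b)
    return [(a ^ b, a, b) for a in A for b in buckets[a & mask]]

L12 = merge(L[0], L[1])                     # expected ~2^(n/3) survivors
L34 = merge(L[2], L[3])
table = {v: (c, d) for v, c, d in L34}
for v, a, b in L12:
    if v in table:
        c, d = table[v]
        assert a ^ b ^ c ^ d == 0
        print("4-xor solution:", a, b, c, d)
        break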
2020
EUROCRYPT
Implementing Grover oracles for quantum key search on AES and LowMC 📺
Grover's search algorithm gives a quantum attack against block ciphers by searching for a key that matches a small number of plaintext-ciphertext pairs. This attack uses O(√N) calls to the cipher to search a key space of size N. Previous work in the specific case of AES derived the full gate cost by analyzing quantum circuits for the cipher, but focused on minimizing the number of qubits. In contrast, we study the cost of quantum key search attacks under a depth restriction and introduce techniques that reduce the oracle depth, even if it requires more qubits. As cases in point, we design quantum circuits for the block ciphers AES and LowMC. Our circuits give a lower overall attack cost in both the gate count and depth-times-width cost models. In NIST's post-quantum cryptography standardization process, security categories are defined based on the concrete cost of quantum key search against AES. We present new, lower cost estimates for each category, so our work has immediate implications for the security assessment of post-quantum cryptography. As part of this work, we release Q# implementations of the full Grover oracle for AES-128, -192, -256 and for the three LowMC instantiations used in Picnic, including unit tests and code to reproduce our quantum resource estimates. To the best of our knowledge, these are the first two such full implementations and automatic resource estimations.
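Back-of-the-envelope arithmetic for the depth-restricted setting (standard Grover formulas; the depth budget and key size are our example values): parallelizing Grover over M machines costs roughly a multiplicative sqrt(M) in total work, which is why depth limits matter so much for the NIST categories.

from math import pi, sqrt, log2

k = 128                                  # AES-128: key space of size N = 2^k
iters = (pi / 4) * sqrt(2 ** k)          # sequential Grover iterations
print(f"single machine: ~2^{log2(iters):.1f} oracle calls")

# With a depth budget D on sequential oracle calls, split the key space over M
# machines; each runs Grover on N/M keys, so (pi/4) * sqrt(N/M) <= D gives M.
D = 2 ** 40
M = (pi / 4) ** 2 * 2 ** k / D ** 2
print(f"machines: ~2^{log2(M):.1f}, total oracle calls ~2^{log2(M * D):.1f}")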
2020
EUROCRYPT
Marlin: Preprocessing zkSNARKs with Universal and Updatable SRS 📺
We present a general methodology to construct preprocessing zkSNARKs where the structured reference string (SRS) is universal and updatable. This exploits a novel application of *holographic* IOPs, a natural generalization of holographic PCPs [Babai et al., STOC 1991]. We use our methodology to obtain a preprocessing zkSNARK where the SRS has linear size and arguments have constant size. Our construction improves on Sonic [Maller et al., CCS 2019], the prior state of the art in this setting, in all efficiency parameters: proving is an order of magnitude faster and verification is twice as fast, even with smaller SRS size and argument size. Our construction is most efficient when instantiated in the algebraic group model (also used by Sonic), but we also demonstrate how to realize it under concrete knowledge assumptions. The core of our zkSNARK is a new holographic IOP for rank-1 constraint satisfiability (R1CS), which is the first to achieve linear proof length and constant query complexity (among other efficiency features).
2020
EUROCRYPT
Private Aggregation from Fewer Anonymous Messages 📺
Consider the setup where $n$ parties are each given an element $x_i$ in the finite field $\mathbb{F}_q$ and the goal is to compute the sum $\sum_i x_i$ in a secure fashion and with as little communication as possible. We study this problem in the \emph{anonymized model} of Ishai et al.~(FOCS 2006) where each party may broadcast anonymous messages on an insecure channel. We present a new analysis of the one-round ``split and mix'' protocol of Ishai et al. In order to achieve the same security parameter, our analysis reduces the required number of messages by a $\Theta(\log n)$ multiplicative factor. We also prove lower bounds showing that the dependence of the number of messages on the domain size, the number of parties, and the security parameter is essentially tight. Using a reduction of Balle et al. (2019), our improved analysis of the protocol of Ishai et al. yields, in the same model, an $\left(\varepsilon, \delta\right)$-differentially private protocol for aggregation that, for any constant $\varepsilon > 0$ and any $\delta = \frac{1}{\mathrm{poly}(n)}$, incurs only a constant error and requires only a \emph{constant number of messages} per party. Previously, such a protocol was known only for $\Omega(\log n)$ messages per party.
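The one-round split-and-mix mechanics fit in a few lines (a sketch of the protocol being analyzed; the paper's contribution is the tighter bound on how small the number m of messages can be, not this code): each party splits its input into m additive shares and sends them through an anonymous channel, and the analyst sums the shuffled pool.

import random

q = 2 ** 32
inputs = [17, 42, 99, 7]        # the parties' private values x_i
m = 3                           # anonymous messages per party

pool = []
for x in inputs:
    shares = [random.randrange(q) for _ in range(m - 1)]
    shares.append((x - sum(shares)) % q)   # shares sum to x mod q
    pool.extend(shares)                    # each share is one anonymous message

random.shuffle(pool)            # the anonymizer hides who sent which share
assert sum(pool) % q == sum(inputs) % q
print("aggregate:", sum(pool) % q)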
2020
EUROCRYPT
Two-Round Oblivious Transfer from CDH or LPN 📺
We show a new general approach for constructing maliciously-secure two-round oblivious transfer (OT). Specifically, we provide a generic sequence of transformations to upgrade a very basic notion of two-round OT, which we call elementary OT, to UC-secure OT. We then give simple constructions of elementary OT under the Computational Diffie-Hellman (CDH) assumption or the Learning Parity with Noise (LPN) assumption, yielding the first constructions of malicious (UC-secure) two-round OT under these assumptions. Since two-round OT is complete for two-round 2-party and multi-party computation in the malicious setting, we also achieve the first constructions of the latter under these assumptions.
2020
EUROCRYPT
New Constructions of Statistical NIZKs: Dual-Mode DV-NIZKs and More 📺
Non-interactive zero-knowledge proofs (NIZKs) are important primitives in cryptography. A major challenge since the early works on NIZKs has been to construct NIZKs with a statistical zero-knowledge guarantee against unbounded verifiers. In the common reference string (CRS) model, such "statistical NIZK arguments" are currently known from k-Lin in a pairing-group and from LWE. In the (reusable) designated-verifier model (DV-NIZK), where a trusted setup algorithm generates a reusable verification key for checking proofs, we also have a construction from DCR. If we relax our requirements to computational zero-knowledge, we additionally have NIZKs from factoring and CDH in a pairing group in the CRS model, and from nearly all assumptions that imply public-key encryption (e.g., CDH, LPN, LWE) in the designated-verifier model. Thus, there still remains a gap in our understanding of statistical NIZKs in both the CRS and the designated-verifier models. In this work, we develop new techniques for constructing statistical NIZK arguments. First, we construct statistical DV-NIZK arguments from the k-Lin assumption in pairing-free groups, the QR assumption, and the DCR assumption. These are the first constructions in pairing-free groups and from QR that satisfy statistical zero-knowledge. All of our constructions are secure even if the verification key is chosen maliciously (i.e., they are "malicious-designated-verifier" NIZKs), and moreover, they satisfy a "dual-mode" property where the CRS can be sampled from two computationally indistinguishable distributions: one distribution yields statistical DV-NIZK arguments while the other yields computational DV-NIZK proofs. We then show how to adapt our k-Lin construction in a pairing group to obtain new publicly-verifiable statistical NIZK arguments from pairings with a qualitatively weaker assumption than existing constructions of pairing-based statistical NIZKs. Our constructions follow the classic paradigm of Feige, Lapidot, and Shamir (FLS). While the FLS framework has traditionally been used to construct computational (DV)-NIZK proofs, we newly show that the same framework can be leveraged to construct dual-mode (DV)-NIZKs.
2020
EUROCRYPT
Combiners for Functional Encryption, Unconditionally 📺
Functional encryption (FE) combiners allow one to combine many candidates for a functional encryption scheme, possibly based on different computational assumptions, into another functional encryption candidate with the guarantee that the resulting candidate is secure as long as at least one of the original candidates is secure. The fundamental question in this area is whether FE combiners exist. There has been a series of works (Ananth et al., CRYPTO '16; Ananth-Jain-Sahai, EUROCRYPT '17; Ananth et al., TCC '19) on constructing FE combiners from various assumptions. We give the first unconditional construction of combiners for functional encryption, resolving this question completely. Our construction immediately implies an unconditional universal functional encryption scheme, an FE scheme that is secure as long as any secure FE scheme exists. Previously such results either relied on algebraic assumptions or required subexponential security assumptions.
2020
EUROCRYPT
On the Streaming Indistinguishability of a Random Permutation and a Random Function 📺
An adversary with $S$ bits of memory obtains a stream of $Q$ elements that are uniformly drawn from the set $\{1,2,\ldots,N\}$, either with or without replacement. This corresponds to sampling $Q$ elements using either a random function or a random permutation. The adversary's goal is to distinguish between these two cases. This problem was first considered by Jaeger and Tessaro (EUROCRYPT 2019), who proved that the adversary's advantage is upper bounded by $\sqrt{Q \cdot S/N}$. Jaeger and Tessaro used this bound as a streaming switching lemma which allowed proving that known time-memory tradeoff attacks on several modes of operation (such as counter-mode) are optimal up to a factor of $O(\log N)$ if $Q \cdot S \approx N$. However, the bound's proof assumed an unproven combinatorial conjecture. Moreover, if $Q \cdot S \ll N$ there is a gap between the upper bound of $\sqrt{Q \cdot S/N}$ and the $Q \cdot S/N$ advantage obtained by known attacks. In this paper, we prove a tight upper bound (up to poly-logarithmic factors) of $O(\log Q \cdot Q \cdot S/N)$ on the adversary's advantage in the streaming distinguishing problem. The proof does not require a conjecture and is based on a hybrid argument that gives rise to a reduction from the unique-disjointness communication complexity problem to streaming.
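The matching attack is simple enough to simulate (toy parameters ours): store the first s stream elements and flag any later repeat. A random permutation never triggers, while a random function triggers with probability roughly $Q \cdot S/N$, which is what the paper's upper bound matches up to logarithmic factors.

import random

N, s, Q, trials = 2 ** 16, 16, 2 ** 10, 2000

def repeats(stream, s):
    stored = set(stream[:s])                 # the S-bounded memory
    return any(x in stored for x in stream[s:])

func_hits = sum(repeats([random.randrange(N) for _ in range(Q)], s)
                for _ in range(trials))      # with replacement: random function
perm_hits = sum(repeats(random.sample(range(N), Q), s)
                for _ in range(trials))      # without replacement: always 0

print("empirical advantage:", (func_hits - perm_hits) / trials,
      " vs  Q*S/N =", Q * s / N)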
2020
EUROCRYPT
Fault Template Attacks on Block Ciphers Exploiting Fault Propagation 📺
Fault attacks (FA) are one of the potent practical threats to modern cryptographic implementations. Over the years, FA techniques have evolved, gradually moving towards the exploitation of device-centric properties of the faults. In this paper, we exploit the fact that activation and propagation of a fault through a given combinational circuit (i.e., observability of a fault) is data-dependent. Next, we show that this property of combinational circuits leads to powerful Fault Template Attacks (FTA), even for implementations having dedicated protections against both power and fault-based vulnerabilities. The attacks found in this work are applicable even if the fault injection is made at the middle rounds of a block cipher, which are out of reach for most of the other existing fault analysis strategies. Quite evidently, they also work for a known-plaintext scenario. Moreover, the middle round attacks are entirely blind in the sense that no access to the ciphertexts (correct/faulty) or plaintexts is required. The adversary is only assumed to have the power of repeating an unknown plaintext several times. Practical validation over a hardware implementation of SCA-FA protected PRESENT, and simulated evaluation on a public software implementation of protected AES prove the efficacy of the proposed attacks.
2020
EUROCRYPT
On the Memory-Tightness of Hashed ElGamal 📺
We study the memory-tightness of security reductions in public-key cryptography, focusing in particular on Hashed ElGamal. We prove that any {\em straightline} (i.e., without rewinding) black-box reduction needs memory which grows linearly with the number of queries of the adversary it has access to, as long as this reduction treats the underlying group generically. This makes progress towards proving a conjecture by Auerbach {\em et al.} (CRYPTO 2017), and is also the first lower bound on memory-tightness for a concrete cryptographic scheme (as opposed to generalized reductions across security notions). Our proof relies on compression arguments in the generic group model.
2020
EUROCRYPT
On a Generalization of Substitution-Permutation Networks: The HADES Design Strategy 📺
Keyed and unkeyed cryptographic permutations often iterate simple round functions. Substitution-permutation networks (SPNs) are an approach that has been popular since the mid-1990s. One of the new directions in the design of these round functions is to reduce the substitution (S-Box) layer from a full one to a partial one, uniformly distributed over all the rounds. LowMC and Zorro are examples of this approach. A relevant freedom in the design space is to allow for a highly non-uniform distribution of S-Boxes. However, choosing rounds that are so different from each other is very rarely done, as it makes security analysis and implementation much harder. We develop the design strategy HADES and an analysis framework for it, which despite this increased complexity allows for security arguments against many classes of attacks, similar to earlier simpler SPNs. The framework builds upon the wide trail design strategy, and it additionally allows for security arguments against algebraic attacks, which are much more of a concern when algebraically simple S-Boxes are used. Subsequently, this is put into practice by concrete instances and benchmarks for a use case that generally benefits from a smaller number of S-Boxes and showcases the diversity of design options we support: a candidate cipher natively working with objects in GF(p), for securing data transfers with distributed databases using secure multiparty computation (MPC). Compared to the currently fastest design MiMC, we observe significant improvements in online bandwidth requirements and throughput with a simultaneous reduction of preprocessing effort, while having a comparable online latency.
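A skeleton of the HADES round structure over GF(p) (width, round counts, S-box and matrix below are toy stand-ins of ours, not a proposed instance): full S-box layers at the start and end of the permutation sandwich a middle section of partial rounds that apply a single S-box per round.

p = 2 ** 31 - 1
tw = 4                                     # state width: words in GF(p)
RF, RP = 4, 10                             # full rounds (split around) / partial rounds
# Cauchy matrix 1/(i + j + 1) mod p as an invertible stand-in for the MDS layer
M = [[pow(i + j + 1, -1, p) for j in range(tw)] for i in range(tw)]

def sbox(x):
    return pow(x, 5, p)                    # x^5; gcd(5, p - 1) = 1, so it permutes GF(p)

def mix(state):
    return [sum(M[i][j] * state[j] for j in range(tw)) % p for i in range(tw)]

def permute(state):
    for r in range(RF + RP):
        state = [(x + 31 * r + i + 1) % p for i, x in enumerate(state)]  # round constants
        if RF // 2 <= r < RF // 2 + RP:
            state = [sbox(state[0])] + state[1:]   # partial layer: one S-box
        else:
            state = [sbox(x) for x in state]       # full layer: S-box on every word
        state = mix(state)
    return state

print(permute([1, 2, 3, 4]))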
2020
EUROCRYPT
Security of Hedged Fiat-Shamir Signatures under Fault Attacks 📺
Deterministic generation of per-signature randomness has been a widely accepted solution to mitigate the catastrophic risk of randomness failure in Fiat-Shamir type signature schemes. However, recent studies have practically demonstrated that such de-randomized schemes, including EdDSA, are vulnerable to differential fault attacks, which enable adversaries to recover the entire secret signing key, by artificially provoking randomness reuse or corrupting computation in other ways. In order to balance concerns of both randomness failures and the threat of fault injection, some signature designs are advocating a ``hedged'' derivation of the per-signature randomness, by hashing the secret key, message, and a nonce. Despite the growing popularity of the hedged paradigm in practical signature schemes, to the best of our knowledge, there has been no attempt to formally analyze the fault resilience of hedged signatures. We perform a formal security analysis of the fault resilience of signature schemes constructed via the Fiat-Shamir transform. We propose a model to characterize bit-tampering fault attacks, and investigate their impact across different steps of the signing operation. We prove that, for some types of faults, attacks are mitigated by the hedged paradigm, while attacks remain possible for others. As concrete case studies, we then apply our results to XEdDSA, a hedged version of EdDSA used in the Signal messaging protocol, and to Picnic2, a hedged Fiat-Shamir signature scheme in Round 2 of the NIST Post-Quantum standardization process.
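The hedged derivation itself is one line; the sketch below (names and modulus ours) shows the hash-of-(secret key, message, nonce) pattern the paper analyzes. With the nonce omitted this degenerates to fully deterministic EdDSA-style signing; with a good nonce it is fully randomized; the hedged middle ground tolerates failures of either ingredient.

import hashlib, os

def hedged_randomness(sk: bytes, msg: bytes, q: int) -> int:
    # nonce from the local RNG; even if it is weak or constant, hashing it
    # together with (sk, msg) keeps r unpredictable to outsiders
    nonce = os.urandom(32)
    h = hashlib.sha512(sk + nonce + msg).digest()
    return int.from_bytes(h, "big") % q

r = hedged_randomness(b"secret key", b"message", q=2**255 - 19)  # toy modulus
print(r)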
2020
EUROCRYPT
Compact NIZKs from Standard Assumptions on Bilinear Maps 📺
A non-interactive zero-knowledge (NIZK) protocol enables a prover to convince a verifier of the truth of a statement without leaking any other information by sending a single message. The main focus of this work is on exploring short pairing-based NIZKs for all NP languages based on standard assumptions. In this regime, the seminal work of Groth, Ostrovsky, and Sahai (J.ACM'12) (GOS-NIZK) is still considered to be the state-of-the-art. Although fairly efficient, one drawback of GOS-NIZK is that the proof size is multiplicative in the circuit size computing the NP relation. That is, the proof size grows by $O(|C|k)$, where $C$ is the circuit for the NP relation and $k$ is the security parameter. By now, there have been numerous follow-up works focusing on shortening the proof size of pairing-based NIZKs; however, thus far, all works come at the cost of relying either on a non-standard knowledge-type assumption or a non-static $q$-type assumption. Specifically, improving the proof size of the original GOS-NIZK under the same standard assumption has remained an open problem. Our main result is a construction of a pairing-based NIZK for all of NP whose proof size is additive in $|C|$, that is, the proof size only grows by $|C| + poly(k)$, based on the decisional linear (DLIN) assumption. Since the DLIN assumption is the same assumption underlying GOS-NIZK, our NIZK is a strict improvement on their proof size. As by-products of our main result, we also obtain the following two results: (1) We construct a perfectly zero-knowledge NIZK (NIPZK) for NP relations computable in NC1 with proof size $|w| \cdot poly(k)$ where $|w|$ is the witness length, based on the DLIN assumption. This is the first pairing-based NIPZK for a non-trivial class of NP languages whose proof size is independent of $|C|$ based on a standard assumption. (2) We construct a universally composable (UC) NIZK for NP relations computable in NC1 in the erasure-free adaptive setting whose proof size is $|w| \cdot poly(k)$ from the DLIN assumption. This is an improvement over the recent result of Katsumata, Nishimaki, Yamada, and Yamakawa (CRYPTO'19), which gave a similar scheme based on a non-static $q$-type assumption. The main building block for all of our NIZKs is a constrained signature scheme with decomposable online-offline efficiency. This is a property which we newly introduce in this paper and construct from the DLIN assumption. We believe this construction is of independent interest.
2020
EUROCRYPT
Key Recovery from Gram-Schmidt Norm Leakage in Hash-and-Sign Signatures over NTRU Lattices 📺
In this paper, we initiate the study of side-channel leakage in hash-and-sign lattice-based signatures, with particular emphasis on the two efficient implementations of the original GPV lattice-trapdoor paradigm for signatures, namely NIST second-round candidate Falcon and its simpler predecessor DLP. Both of these schemes implement the GPV signature scheme over NTRU lattices, achieving great speed-ups over the general lattice case. Our results are mainly threefold. First, we identify a specific source of side-channel leakage in most implementations of those schemes, namely, the one-dimensional Gaussian sampling steps within lattice Gaussian sampling. It turns out that the implementations of these steps often leak the Gram-Schmidt norms of the secret lattice basis. Second, we elucidate the link between this leakage and the secret key, by showing that the entire secret key can be efficiently reconstructed solely from those Gram-Schmidt norms. The result makes heavy use of the algebraic structure of the corresponding schemes, which work over a power-of-two cyclotomic field. Third, we concretely demonstrate the side-channel attack against DLP (but not Falcon due to the different structures of the two schemes). The challenge is that timing information only provides an approximation of the Gram-Schmidt norms, so our algebraic recovery technique needs to be combined with pruned tree search in order to apply it to approximate values. Experimentally, we show that around $2^{35}$ DLP traces are enough to reconstruct the entire key with good probability.
2020
EUROCRYPT
Adaptively Secure ABE for DFA from k-Lin and More 📺
In this work, we present: - the first adaptively secure ABE for DFA from the k-Lin assumption in prime-order bilinear groups; this resolves one of the open problems posed by Waters [CRYPTO'12]; - the first ABE for NFA from the k-Lin assumption, provided the number of accepting paths is smaller than the order of the underlying group; the scheme achieves selective security; - the first compact adaptively secure ABE (supporting unbounded multi-use of attributes) for branching programs from the k-Lin assumption, which generalizes and simplifies the recent result of Kowalczyk and Wee for boolean formula (NC1) [EUROCRYPT'19]. Our adaptively secure ABE for DFA relies on a new combinatorial mechanism avoiding the exponential security loss in the number of states when naively combining two recent techniques from CRYPTO'19 and EUROCRYPT'19. This requires us to design a selectively secure ABE for NFA; we give a construction which is sufficient for our purpose and of independent interest. Our ABE for branching programs leverages insights from our ABE for DFA.
2020
EUROCRYPT
Blind Schnorr Signatures and Signed ElGamal Encryption in the Algebraic Group Model 📺
The Schnorr blind signing protocol allows blind issuing of Schnorr signatures, one of the most widely used signatures. Despite its practical relevance, its security analysis is unsatisfactory. The only known security proof is informal and in the combination of the generic group model (GGM) and the random oracle model (ROM) assuming that the ``ROS problem'' is hard. The situation is similar for (Schnorr-)signed ElGamal encryption, a simple CCA2-secure variant of ElGamal. We analyze the security of these schemes in the algebraic group model (AGM), an idealized model closer to the standard model than the GGM. We first prove tight security of Schnorr signatures from the discrete logarithm assumption (DL) in the AGM+ROM. We then give a rigorous proof for blind Schnorr signatures in the AGM+ROM assuming hardness of the one-more discrete logarithm problem and ROS. As ROS can be solved in sub-exponential time using Wagner's algorithm, we propose a simple modification of the signing protocol, which leaves the signatures unchanged. It is therefore compatible with systems that already use Schnorr signatures, such as blockchain protocols. We show that the security of our modified scheme relies on the hardness of a problem related to ROS that appears much harder. Finally, we give tight reductions, again in the AGM+ROM, of the CCA2 security of signed ElGamal encryption to DDH and signed hashed ElGamal key encapsulation to DL.
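For reference, the blind issuing equations in a toy Schnorr group (a schoolbook sketch with insecure parameters, and without the paper's proposed ROS-hardening modification): the user blinds the signer's commitment with $\alpha, \beta$ and unblinds the response, so the signer never sees the message or the final signature.

import hashlib, random

p, q, g = 2039, 1019, 4                 # toy group: order-q subgroup of Z_p^*, p = 2q + 1
x = random.randrange(1, q)              # signer's secret key
y = pow(g, x, p)                        # signer's public key

def H(R: int, m: bytes) -> int:
    return int.from_bytes(hashlib.sha256(str(R).encode() + m).digest(), "big") % q

m = b"blind me"

r = random.randrange(1, q); R = pow(g, r, p)            # signer: commitment

alpha, beta = random.randrange(q), random.randrange(q)  # user: blinding
R1 = (R * pow(g, alpha, p) * pow(y, beta, p)) % p
c1 = H(R1, m)                                           # the real challenge
c = (c1 + beta) % q                                     # blinded challenge to signer

s = (r + c * x) % q                                     # signer: response

s1 = (s + alpha) % q                                    # user: unblind
# (R1, s1) is an ordinary Schnorr signature on m under y
assert pow(g, s1, p) == (R1 * pow(y, c1, p)) % p
print("blind signature verifies")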
2020
EUROCRYPT
Lower Bounds for Leakage-Resilient Secret Sharing 📺
Threshold secret sharing allows a dealer to split a secret into $n$ shares such that any authorized subset of cardinality at least $t$ of those shares efficiently reveals the secret, while at the same time any unauthorized subset of cardinality less than $t$ contains no information about the secret. Leakage-resilience additionally requires that the secret remains hidden even if one is given a bounded amount of additional leakage from every share. In this work, we study leakage-resilient secret sharing schemes and prove a lower bound on the share size and the required amount of randomness of any information-theoretically secure scheme. We prove that for any information-theoretically secure leakage-resilient secret sharing scheme either the amount of randomness across all shares or the share size has to be linear in $n$. More concretely, for a secret sharing scheme with $p$-bit long shares, $\ell$-bit leakage per share, where $\widehat{t}$ shares uniquely define the remaining $n - \widehat{t}$ shares, it has to hold that $p \ge \frac{\ell (n - \widehat{t})}{\widehat{t}}$. We use this lower bound to gain further insights into a question that was recently posed by Benhamouda et al. (CRYPTO'18), who ask to what extent existing regular secret sharing schemes already provide protection against leakage. The authors proved that Shamir's secret sharing is $1$-bit leakage-resilient for reconstruction thresholds $t \geq 0.85n$ and conjectured that it is also $1$-bit leakage-resilient for any other threshold that is a constant fraction of the total number of shares. We do not disprove their conjecture, but show that it is the best one could possibly hope for. Concretely, we show that for large enough $n$ and any constant $0< c < 1$ it holds that Shamir's secret sharing scheme is \emph{not} leakage-resilient for $t \leq \frac{cn}{\log n}$. In contrast to the setting with information-theoretic security, we show that our lower bound does not hold in the computational setting. That is, we show how to construct a leakage-resilient secret sharing scheme in the random oracle model that is secure against computationally bounded adversaries and violates the lower bound stated above.
2020
EUROCRYPT
Secure Multi-party Quantum Computation with a Dishonest Majority 📺
The cryptographic task of secure multi-party (classical) computation has received a lot of attention in the last decades. Even in the extreme case where a computation is performed between k mutually distrustful players, and security is required even for the single honest player if all other players are colluding adversaries, secure protocols are known. For quantum computation, on the other hand, protocols allowing arbitrary dishonest majority have only been proven for k=2. In this work, we generalize the approach taken by Dupuis, Nielsen and Salvail (CRYPTO 2012) in the two-party setting to devise a secure, efficient protocol for multi-party quantum computation for any number of players k, and prove security against up to k-1 colluding adversaries. The quantum round complexity of the protocol for computing a quantum circuit of {CNOT, T} depth d is O(k (d + log n)), where n is the security parameter. To achieve efficiency, we develop a novel public verification protocol for the Clifford authentication code, and a testing protocol for magic-state inputs, both using classical multi-party computation.
2020
EUROCRYPT
Sigma protocols for MQ, PKP and SIS, and fishy signature schemes 📺
This work presents sigma protocols to prove knowledge of: - a solution to a system of quadratic polynomials, - a solution to an instance of the Permuted Kernel Problem and - a witness for a variety of lattice statements (including SIS). Our sigma protocols have soundness error 1/q', where q' is any number bounded by the size of the underlying finite field. This is much better than existing proofs, which have soundness error 2/3 or (q'+1)/2q'. The prover and verifier time of our proofs is O(q'). We achieve this by first constructing so-called sigma protocols with helper, which are sigma protocols where the prover and the verifier are assisted by a trusted third party, and then eliminating the helper from the proof with a "cut-and-choose" protocol. We apply the Fiat-Shamir transform to obtain signature schemes with security proof in the QROM. We show that the resulting signature schemes, which we call the "MUltivariate quaDratic FIat-SHamir" scheme (MUDFISH) and the "ShUffled Solution to Homogeneous linear SYstem FIat-SHamir" scheme (SUSHSYFISH), are more efficient than existing signatures based on the MQ problem and the Permuted Kernel Problem. Our proof system can be used to improve the efficiency of applications relying on (generalizations of) Stern's protocol. We show that the proof size of our SIS proof is smaller than that of Stern's protocol by an order of magnitude and that our proof is more efficient than existing post-quantum secure SIS proofs.
2020
FSE
Cryptanalysis of OCB2: the attacks and the story behind 📺
Invited talk
I will talk about OCB2, an authenticated encryption (AE) mode of operation proposed in 2004. It is a very popular scheme for its innovative design. The tweakable block cipher-based modular architecture of OCB2 influenced countless subsequent schemes. However, our paper presented at CRYPTO 2019 showed that it is completely broken with a negligible amount of computation. In addition to the description of our attacks, I will tell a bit more of the story behind this break, how it started and evolved, hoping that it contributes to our understanding of practical provable security.
2020
FSE
Tweakable Block Cipher-Based Cryptography 📺
Invited talk
A tweakable block cipher (TBC) basically consists of a block cipher with an extra input, the tweak, that allows one to select a permutation from a family of keyed permutations. Since their first formalization by Liskov et al. at CRYPTO 2002, TBCs have recently gained popularity as they can easily instantiate beyond-birthday-bound operating modes. In particular, these modes are potentially very attractive for lightweight cryptography, where it is crucial to reach a security as high as possible for a state as small as possible. In this talk, we will review the latest advances in tweakable block ciphers. First, we will recall how to design TBCs from an existing primitive or from scratch. Then, using the example of lightweight authenticated encryption, we will study why TBCs are very competitive primitives in that scenario. Finally, we will exhibit other possible future usages of TBCs. Throughout the talk, we will try to identify several possibly interesting open research problems.
2020
PKC
Fast, Compact, and Expressive Attribute-Based Encryption 📺
Attribute-based encryption (ABE) is an advanced cryptographic tool and useful to build various types of access control systems. Toward the goal of making ABE more practical, we propose key-policy (KP) and ciphertext-policy (CP) ABE schemes, which are the first to simultaneously support unbounded sizes of attribute sets and policies with negation and multi-use of attributes, allow fast decryption, and achieve adaptive security under a standard assumption. Our schemes are more expressive than previous schemes and efficient enough. To achieve the adaptive security along with the other properties, we refine the technique introduced by Kowalczyk and Wee (Eurocrypt’19) so that we can apply the technique to more expressive ABE schemes. Furthermore, we also present a new proof technique that allows us to remove redundant elements used in their ABE schemes. We implement our schemes at a 128-bit security level and present their benchmarks for an ordinary personal computer and smartphones. They show that all algorithms run within one second on the personal computer when they handle any policy or attribute set with one hundred attributes.
2020
PKC
Adaptive Simulation Security for Inner Product Functional Encryption 📺
Inner product functional encryption ($\mathsf{IPFE}$) [1] is a popular primitive which enables inner product computations on encrypted data. In $\mathsf{IPFE}$, the ciphertext is associated with a vector $\boldsymbol{x}$, the secret key is associated with a vector $\boldsymbol{y}$, and decryption reveals the inner product $\langle \boldsymbol{x},\boldsymbol{y} \rangle$. Previously, it was known how to achieve adaptive indistinguishability ($\mathsf{IND}$) based security for $\mathsf{IPFE}$ from the $\mathsf{DDH}$, $\mathsf{DCR}$ and $\mathsf{LWE}$ assumptions [8]. However, in the stronger simulation ($\mathsf{SIM}$) based security game, it was only known how to support a restricted adversary that makes all its key requests either before or after seeing the challenge ciphertext, but not both. In more detail, Wee [46] showed that the $\mathsf{DDH}$-based scheme of Agrawal et al. (Crypto 2016) achieves semi-adaptive simulation-based security, where the adversary must make all its key requests after seeing the challenge ciphertext. On the other hand, O’Neill showed that all $\mathsf{IND}$-secure $\mathsf{IPFE}$ schemes (which may be based on $\mathsf{DDH}$, $\mathsf{DCR}$ and $\mathsf{LWE}$) satisfy $\mathsf{SIM}$-based security in the restricted model where the adversary makes all its key requests before seeing the challenge ciphertext. In this work, we resolve the question of $\mathsf{SIM}$-based security for $\mathsf{IPFE}$ by showing that variants of the $\mathsf{IPFE}$ constructions by Agrawal et al., based on $\mathsf{DDH}$, Paillier and $\mathsf{LWE}$, satisfy the strongest possible adaptive $\mathsf{SIM}$-based security, where the adversary can make an unbounded number of key requests both before and after seeing the (single) challenge ciphertext. This establishes optimal security of the $\mathsf{IPFE}$ schemes, under all hardness assumptions on which it can (presently) be based.
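A toy run of the DDH-based IPFE functionality discussed above (a sketch of the textbook scheme over an insecure 11-bit group so the final small discrete logarithm is trivial; parameters and bounds are ours): the functional key for $\boldsymbol{y}$ decrypts exactly $\langle \boldsymbol{x},\boldsymbol{y} \rangle$ and nothing more.

import random

p, q, g = 2039, 1019, 4                     # order-q subgroup of Z_p^*
l = 3                                       # vector length

s = [random.randrange(q) for _ in range(l)] # msk
h = [pow(g, si, p) for si in s]             # mpk

x = [2, 0, 5]                               # plaintext vector
r = random.randrange(q)
ct0 = pow(g, r, p)
ct = [(pow(h[i], r, p) * pow(g, x[i], p)) % p for i in range(l)]

y = [1, 3, 2]                               # functional key for y
sk_y = sum(s[i] * y[i] for i in range(l)) % q

num = 1
for i in range(l):
    num = (num * pow(ct[i], y[i], p)) % p
gz = (num * pow(ct0, q - sk_y, p)) % p      # equals g^{<x, y>}

z = next(z for z in range(200) if pow(g, z, p) == gz)   # small dlog by search
assert z == sum(xi * yi for xi, yi in zip(x, y))
print("<x, y> =", z)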
2020
PKC
Verifiable Inner Product Encryption Scheme 📺
In the standard setting of functional encryption (FE), we assume both the Central Authority (CA) and the encryptors to run their respective algorithms faithfully. Badrinarayanan et al. [ASIACRYPT 2016] proposed the concept of verifiable FE, which essentially guarantees that dishonest encryptors and authorities, even when colluding together, are not able to generate ciphertexts and tokens that give “inconsistent” results. They also provide a compiler turning any perfectly correct FE into a verifiable FE, but do not give efficient constructions. In this paper we improve on this situation by considering Inner-Product Encryption (IPE), which is a special case of functional encryption and a primitive that has attracted wide interest from both practitioners and researchers in the last decade. Specifically, we construct the first efficient verifiable IPE (VIPE) scheme according to the inner-product functionality of Katz, Sahai and Waters [EUROCRYPT 2008]. To instantiate the general construction of Badrinarayanan et al. we need to solve several additional challenges. In particular, we construct the first efficient perfectly correct IPE scheme. Our VIPE satisfies unconditional verifiability, whereas its privacy relies on the DLin assumption.
2020
PKC
A New Paradigm for Public-Key Functional Encryption for Degree-2 Polynomials 📺
We give the first public-key functional encryption that supports the generation of functional decryption keys for degree-2 polynomials, with succinct ciphertexts, whose semi-adaptive simulation-based security is proven under standard assumptions. At the heart of our new paradigm lies a so-called partially function-hiding functional encryption scheme for inner products, which admits public-key instances, and that is sufficient to build functional encryption for degree-2 polynomials. Doing so, we improve upon prior works, such as the constructions from Lin (CRYPTO 17) or Ananth and Sahai (EUROCRYPT 17), both of which rely on function-hiding inner product FE, which can only exist in the private-key setting. The simplicity of our construction yields the most efficient FE for quadratic functions from standard assumptions (even those satisfying a weaker security notion). The interest of our methodology is that the FE for quadratic functions that builds upon any partially function-hiding FE for inner products inherits the security properties of the latter. In particular, we build a partially function-hiding FE for inner products that enjoys simulation security, in the semi-adaptive setting, where the challenge sent from the adversary can be chosen adaptively after seeing the public key (but before corrupting functional decryption keys). This is in contrast to prior public-key FE for quadratic functions from Baltico et al. (CRYPTO 17), which only achieved indistinguishability-based, selective security. As a bonus, we show that we can obtain security against Chosen-Ciphertext Attacks straightforwardly. Even though this is the de facto security notion for encryption, this was not achieved by prior functional encryption schemes for quadratic functions, where the generic Fujisaki-Okamoto transformation (CRYPTO 99) does not apply.
2020
PKC
Master-Key KDM-Secure IBE from Pairings 📺
Identity-based encryption (IBE) is a generalization of public-key encryption (PKE) by allowing encryptions to be made to user identities. In this work, we seek to obtain IBE schemes that achieve key-dependent-message (KDM) security with respect to messages that depend on the master secret key. Previous KDM-secure schemes only achieved KDM security in simpler settings, in which messages may only depend on user secret keys. An important motivation behind studying master-KDM security is the application of this notion in obtaining generic constructions of KDM-CCA secure PKE, a primitive notoriously difficult to realize. We give the first IBE that achieves master-KDM security from standard assumptions in pairing groups. Our construction is modular and combines techniques from KDM-secure PKE based on hash-proof systems, together with IBE that admits a tight security proof in the multi-challenge setting, which happens to be unexpectedly relevant in the context of KDM security. In fact, to the best of our knowledge, this is the first setting where techniques developed in the context of realizing tightly secure cryptosystems have led to a new feasibility result. As a byproduct, our KDM-secure IBE, and thus the resulting KDM-CCA-secure PKE both enjoy a tight security reduction, independent of the number of challenge ciphertexts, which was not achieved before.
2020
PKC
Hierarchical Identity-Based Encryption with Tight Multi-challenge Security 📺
We construct the first hierarchical identity-based encryption (HIBE) scheme with tight adaptive security in the multi-challenge setting, where adversaries are allowed to ask for ciphertexts for multiple adaptively chosen identities. Technically, we develop a novel technique that can tightly introduce randomness into user secret keys for hierarchical identities in the multi-challenge setting, which cannot be easily achieved by the existing techniques for tightly multi-challenge secure IBE. In contrast to the previous constructions, the security of our scheme is independent of the number of user secret key queries and that of challenge ciphertext queries. We prove the tight security of our scheme based on the Matrix Decisional Diffie-Hellman Assumption, which is an abstraction of standard and simple decisional Diffie-Hellman assumptions, such as the k-Linear and SXDH assumptions. Finally, we also extend our ideas to achieve tight chosen-ciphertext security and anonymity, respectively. These security notions for HIBE have not been tightly achieved in the multi-challenge setting before.
2020
PKC
The Usefulness of Sparsifiable Inputs: How to Avoid Subexponential iO 📺
We consider the problem of removing subexponential reductions to indistinguishability obfuscation (iO) in the context of obfuscating probabilistic programs. Specifically, we show how to apply complexity absorption (Zhandry, Crypto 2016) to the recent notion of probabilistic indistinguishability obfuscation (piO, Canetti et al. TCC 2015). As a result, we obtain a variant of piO which allows us to obfuscate a large class of probabilistic programs, from polynomially secure indistinguishability obfuscation and extremely lossy functions. In particular, our piO variant is able to obfuscate circuits with specific input domains regardless of the performed computation. We then revisit several (direct or indirect) applications of piO, and obtain – a fully homomorphic encryption scheme (without circular security assumptions), – a multi-key fully homomorphic encryption scheme with threshold decryption, – an encryption scheme secure under arbitrary key-dependent messages, – a spooky encryption scheme for all circuits, – a function secret sharing scheme with additive reconstruction for all circuits, all from polynomially secure iO, extremely lossy functions, and, depending on the scheme, also other (but polynomial and comparatively mild) assumptions. All of these assumptions are implied by polynomially secure iO and the (non-polynomial, but very well-investigated) exponential DDH assumption. Previously, all the above applications required assuming the subexponential security of iO (and more standard assumptions).
2020
PKC
Witness Maps and Applications 📺
We introduce the notion of Witness Maps as a cryptographic notion of a proof system. A Unique Witness Map (UWM) deterministically maps all witnesses for an $\mathbf{NP}$ statement to a single representative witness, resulting in a computationally sound, deterministic-prover, non-interactive witness independent proof system. A relaxation of UWM, called Compact Witness Map (CWM), maps all the witnesses to a small number of witnesses, resulting in a “lossy” deterministic-prover, non-interactive proof-system. We also define a Dual Mode Witness Map (DMWM) which adds an “extractable” mode to a CWM. Our main construction is a DMWM for all $\mathbf{NP}$ relations, assuming sub-exponentially secure indistinguishability obfuscation ($i\mathcal{O}$), along with standard cryptographic assumptions. The DMWM construction relies on a CWM and a new primitive called Cumulative All-Lossy-But-One Trapdoor Functions (C-ALBO-TDF), both of which are in turn instantiated based on $i\mathcal{O}$ and other primitives. Our instantiation of a CWM is in fact a UWM; in turn, we show that a UWM implies Witness Encryption. Along the way to constructing UWM and C-ALBO-TDF, we also construct, from standard assumptions, Puncturable Digital Signatures and a new primitive called Cumulative Lossy Trapdoor Functions (C-LTDF). The former improves upon a construction of Bellare et al. (Eurocrypt 2016), who relied on sub-exponentially secure $i\mathcal{O}$ and sub-exponentially secure OWF. As an application of our constructions, we show how to use a DMWM to construct the first leakage and tamper-resilient signatures with a deterministic signer, thereby solving a decade-old open problem posed by Katz and Vaikuntanathan (Asiacrypt 2009), by Boyle, Segev and Wichs (Eurocrypt 2011), as well as by Faonio and Venturi (Asiacrypt 2016). Our construction achieves the optimal leakage rate of $1 - o(1)$.
2020
PKC
Memory-Tight Reductions for Practical Key Encapsulation Mechanisms 📺
The efficiency of a black-box reduction is an important goal of modern cryptography. Traditionally, the time complexity and the success probability were considered as the main aspects of efficiency measurements. In CRYPTO 2017, Auerbach et al. introduced the notion of memory-tightness in cryptographic reductions and showed a memory-tight reduction of the existential unforgeability of the RSA-FDH signature scheme. Unfortunately, their techniques do not extend directly to reductions involving intricate RO-programming. The problem seems to be inherent, as all the other existing results on memory-tightness are lower bounds and impossibility results. In fact, Auerbach et al. conjectured that a memory-tight reduction for security of Hashed-ElGamal KEM is impossible. We refute the above conjecture. Using a simple RO simulation technique, we provide memory-tight reductions of security of the Cramer-Shoup and the ECIES version of Hashed-ElGamal KEM. We prove memory-tight reductions for different variants of the Fujisaki-Okamoto transformation. We analyze the modular transformations introduced by Hofheinz, Hövelmanns and Kiltz (TCC 2017). In addition to the constructions involving implicit rejection, we present a memory-tight reduction for the security of the transformation $\mathsf{QFO}_m^{\perp}$. Our techniques can withstand correctness-errors and are applicable to several lattice-based KEM candidates.
2020
PKC
Toward RSA-OAEP Without Random Oracles 📺
We show new partial and full instantiation results under chosen-ciphertext security for the widely implemented and standardized RSA-OAEP encryption scheme of Bellare and Rogaway (EUROCRYPT 1994) and two variants. Prior work on such instantiations either showed negative results or settled for “passive” security notions like IND-CPA. More precisely, recall that RSA-OAEP adds redundancy and randomness to a message before composing two rounds of an underlying Feistel transform, whose round functions are modeled as random oracles (ROs), with RSA. Our main results are: Either of the two oracles (while still modeling the other as a RO) can be instantiated in RSA-OAEP under IND-CCA2 using mild standard-model assumptions on the round functions and generalizations of algebraic properties of RSA shown by Barthe, Pointcheval, and Zanella-Béguelin (CCS 2012). The algebraic properties are only shown to hold at practical parameters for a small encryption exponent ($e=3$), but we argue they have value for larger $e$ as well. Both oracles can be instantiated simultaneously for two variants of RSA-OAEP, called “t-clear” and “s-clear” RSA-OAEP. For this we use extractability-style assumptions in the sense of Canetti and Dakdouk (TCC 2010) on the round functions, as well as novel yet plausible “XOR-type” assumptions on RSA. While admittedly strong, such assumptions may nevertheless be necessary at this point to make positive progress. In particular, our full instantiations evade impossibility results of Shoup (J. Cryptology 2002), Kiltz and Pietrzak (EUROCRYPT 2009), and Bitansky et al. (STOC 2014). Moreover, our results for s-clear RSA-OAEP yield the most efficient RSA-based encryption scheme proven IND-CCA2 in the standard model (using bold assumptions on cryptographic hashing) to date.
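For orientation, the OAEP transform itself: a two-round Feistel over (message plus redundancy, randomness) with round oracles G and H, after which the RSA permutation is applied. The SHA-256 stand-ins and toy lengths below are ours, and we do not model the t-clear/s-clear variants, which differ in how the padded parts are arranged around the RSA application.

import hashlib, os

K0, K1 = 16, 16                 # randomness / redundancy lengths in bytes

def G(r: bytes, n: int) -> bytes:
    # expand the random seed r to n bytes (first Feistel round function)
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(b"G" + i.to_bytes(4, "big") + r).digest()
        i += 1
    return out[:n]

def Hf(x: bytes) -> bytes:
    # second Feistel round function, compressing to K0 bytes
    return hashlib.sha256(b"H" + x).digest()[:K0]

def oaep_pad(m: bytes) -> bytes:
    r = os.urandom(K0)
    X = bytes(a ^ b for a, b in zip(m + b"\x00" * K1, G(r, len(m) + K1)))
    Y = bytes(a ^ b for a, b in zip(r, Hf(X)))
    return X + Y                # this value is then fed to the RSA permutation

print(oaep_pad(b"attack at dawn!!").hex())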
2020
PKC
Public-Key Puncturable Encryption: Modular and Compact Constructions 📺
We revisit the method of designing public-key puncturable encryption schemes and present a generic conversion by leveraging the techniques of distributed key-distribution and revocable encryption. In particular, we first introduce a refined version of identity-based revocable encryption, named key-homomorphic identity-based revocable key encapsulation mechanism with extended correctness. Then, we propose a generic construction of puncturable key encapsulation mechanism from the former by merging the idea of distributed key-distribution. Compared to the state-of-the-art, our generic construction supports an unbounded number of punctures and multiple tags per message, thus achieving more fine-grained revocation of decryption capability. Further, it does not rely on random oracles, does not suffer from non-negligible correctness error, and results in a variety of efficient schemes with distinct features. More precisely, we obtain the first scheme with very compact ciphertexts in the standard model, and the first scheme with support for both unbounded size of tags per ciphertext and unbounded punctures as well as a constant-time puncture operation. Moreover, we get a comparable scheme proven secure under the standard DBDH assumption, which enjoys both faster encryption and decryption than previous works based on the same assumption, especially when the number of tags associated with the ciphertext is large.
2020
PKC
Flexible Authenticated and Confidential Channel Establishment (fACCE): Analyzing the Noise Protocol Framework 📺
The Noise protocol framework is a suite of channel establishment protocols, of which each individual protocol ensures various security properties of the transmitted messages, but keeps specification, implementation, and configuration relatively simple. Implementations of the Noise protocols are themselves, due to the employed primitives, very performant. Thus, despite its relative youth, Noise is already used by large-scale deployed applications such as WhatsApp and Slack. Though the Noise specification describes and claims the security properties of the protocol patterns very precisely, there has been no computational proof yet. We close this gap. Noise uses only a limited number of cryptographic primitives, which makes it an ideal candidate for reduction-based security proofs. Due to its patterns’ characteristics as channel establishment protocols, and the usage of established keys within the handshake, the authenticated and confidential channel establishment (ACCE) model (Jager et al. CRYPTO 2012) seems a perfect fit for an analysis of Noise. However, the ACCE model strictly divides protocols into two non-overlapping phases: the pre-accept phase (i.e., the channel establishment) and the post-accept phase (i.e., the channel). In contrast, Noise allows the transmission of encrypted messages as soon as any key is established (for instance, before authentication between parties has taken place), and then incrementally increases the channel’s security guarantees. By proposing a generalization of the original ACCE model, we capture security properties of such staged channel establishment protocols flexibly – comparably to the multi-stage key exchange model (Fischlin and Günther CCS 2014). We give security proofs for eight of the 15 basic Noise patterns in the full version (EPRINT 2019/436) and exemplify them by the proof of the XK pattern in this article.
2020
PKC
Limits on the Efficiency of (Ring) LWE Based Non-interactive Key Exchange 📺
$$\mathsf{LWE}$$-based key-exchange protocols lie at the heart of post-quantum public-key cryptography. However, all existing protocols either lack the non-interactive nature of Diffie-Hellman key exchange or a polynomial $$\mathsf{LWE}$$-modulus, resulting in unwanted efficiency overhead. We study the possibility of designing non-interactive $$\mathsf{LWE}$$-based protocols with polynomial $$\mathsf{LWE}$$-modulus. To this end, we identify and formalize simple non-interactive and polynomial $$\mathsf{LWE}$$-modulus variants of existing protocols, where Alice and Bob simultaneously exchange one or more (ring) $$\mathsf{LWE}$$ samples with polynomial $$\mathsf{LWE}$$-modulus and then run individual key reconciliation functions to obtain the shared key. We point out central barriers and show that such non-interactive key-exchange protocols are impossible if: (1) the reconciliation functions first compute the inner product of the received $$\mathsf{LWE}$$ sample with their private $$\mathsf{LWE}$$ secret. This impossibility is information theoretic. (2) One of the reconciliation functions does not depend on the error of the transmitted $$\mathsf{LWE}$$ sample. This impossibility assumes the hardness of $$\mathsf{LWE}$$. We give further evidence that progress in either direction, of giving an $$\mathsf{LWE}$$-based $$\mathrm{NIKE}$$ protocol or proving impossibility of one, will lead to progress on some other well-studied questions in cryptography. Overall, our results show possibilities and challenges in designing simple (ring) $$\mathsf{LWE}$$-based non-interactive key exchange protocols.
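A toy numeric sketch (illustrative parameters only) of the protocol shape studied here: each party publishes one LWE sample for a common public matrix A, and the two noisy inner products agree only up to a small error term that the reconciliation functions would have to remove.

    import random

    q, n = 12289, 8
    def small(): return random.randint(-2, 2)      # small secret/error entries

    A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
    s = [small() for _ in range(n)]                # Alice's secret
    t = [small() for _ in range(n)]                # Bob's secret

    # Exchanged LWE samples: b_A = A^T s + e_A and b_B = A t + e_B (mod q)
    bA = [(sum(A[i][j] * s[i] for i in range(n)) + small()) % q for j in range(n)]
    bB = [(sum(A[i][j] * t[j] for j in range(n)) + small()) % q for i in range(n)]

    # Both sides approximate s^T A t (mod q), but with different noise:
    kA = sum(bB[i] * s[i] for i in range(n)) % q
    kB = sum(bA[j] * t[j] for j in range(n)) % q
    diff = min((kA - kB) % q, (kB - kA) % q)
    print(diff)   # small compared to q; reconciliation must erase it

The impossibility results above say, roughly, that no reconciliation function of the restricted forms considered can erase this gap with a polynomial modulus.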
2020
PKC
PAKEs: New Framework, New Techniques and More Efficient Lattice-Based Constructions in the Standard Model 📺
Password-based authenticated key exchange (PAKE) allows two parties with a shared password to agree on a session key. In the last decade, the design of PAKE protocols from lattice assumptions has attracted lots of attention. However, existing solutions in the standard model do not have appealing efficiency. In this work, we first introduce a new PAKE framework. We then provide two realizations in the standard model, under the Learning With Errors (LWE) and Ring-LWE assumptions, respectively. Our protocols are much more efficient than previous proposals, thanks to three novel technical ingredients that may be of independent interest. The first ingredient consists of two approximate smooth projective hash (ASPH) functions from LWE, as well as two ASPHs from Ring-LWE. The latter are the first ring-based constructions in the literature, one of which only has a quasi-linear runtime while its function value contains $$\varTheta(n)$$ field elements (where n is the degree of the polynomial defining the ring). The second ingredient is a new key reconciliation scheme that is approximately rate-optimal and that leads to a very efficient key derivation for PAKE protocols. The third one is a new authentication code that allows verifying a MAC with a noisy key.
2020
PKC
Constraining and Watermarking PRFs from Milder Assumptions 📺
Constrained pseudorandom functions (C-PRFs) let the possessor of a secret key delegate the ability to evaluate the function on certain authorized inputs, while keeping the remaining function values pseudorandom. A constraint-hiding constrained PRF (CHC-PRF) additionally conceals the predicate that determines which inputs are authorized. These primitives have a wealth of applications, including watermarking schemes, symmetric deniable encryption, and updatable garbled circuits. Recent works have constructed (CH)C-PRFs from rather aggressive parameterizations of Learning With Errors (LWE) with subexponential modulus-noise ratios, even for relatively simple “puncturing” or $$\text{NC}^1$$ circuit constraints. This corresponds to strong lattice assumptions and inefficient constructions, and stands in contrast to LWE-based unconstrained PRFs and fully homomorphic encryption schemes, which can be based on quasi-polynomial or even (nearly) polynomial modulus-noise ratios. In this work we considerably improve the LWE assumptions needed for building (constraint-hiding) constrained PRFs and watermarking schemes. In particular, for CHC-PRFs and related watermarking schemes we improve the modulus-noise ratio to $$\lambda^{O((d + \log\lambda)\log\lambda)}$$ for depth-d circuit constraints, which is merely quasi-polynomial for $$\text{NC}^1$$ circuits and closely related watermarking schemes. For (constraint-revealing) C-PRFs for $$\text{NC}^1$$ we do even better, obtaining a nearly polynomial $$\lambda^{\omega(1)}$$ ratio. These improvements are partly enabled by slightly modifying the definition of C-PRFs, in a way that is still compatible with many of their applications. Finally, as a contribution of independent interest we build CHC-PRFs for special constraint classes from generic, weaker assumptions: we obtain bit-fixing constraints based on the minimal assumption of one-way functions, and hyperplane-membership constraints based on key-homomorphic PRFs.
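For intuition, the textbook constrained PRF obtainable from one-way functions is the GGM tree with prefix constraints, sketched below; this only illustrates the concept of a constrained key, and is not the paper's bit-fixing or circuit construction (which must also hide the constraint).

    import hashlib

    def step(node: bytes, bit: str) -> bytes:
        # One GGM tree step: a length-doubling PRG, split by the input bit.
        return hashlib.sha256(node + bit.encode()).digest()

    def ggm_prf(key: bytes, x: str) -> bytes:
        node = key
        for b in x:                       # x is a bit-string such as "0110"
            node = step(node, b)
        return node

    def constrain_prefix(key: bytes, prefix: str) -> bytes:
        # Constrained key for {x : x starts with prefix}: the subtree root.
        node = key
        for b in prefix:
            node = step(node, b)
        return node

    def eval_constrained(ck: bytes, prefix: str, x: str) -> bytes:
        assert x.startswith(prefix)       # only authorized inputs evaluate
        node = ck
        for b in x[len(prefix):]:
            node = step(node, b)
        return node

    k = b"\x00" * 32
    ck = constrain_prefix(k, "01")
    assert eval_constrained(ck, "01", "0110") == ggm_prf(k, "0110")

Values outside the prefix remain pseudorandom given ck, since reaching them would require inverting PRG steps above the released subtree.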
2020
PKC
Bringing Order to Chaos: The Case of Collision-Resistant Chameleon-Hashes 📺
Chameleon-hash functions, introduced by Krawczyk and Rabin at NDSS 2000, are trapdoor collision-resistant hash functions parametrized by a public key. If the corresponding secret key is known, arbitrary collisions for the hash function can be efficiently found. Chameleon-hash functions have prominent applications in the design of cryptographic primitives, such as lifting non-adaptively secure signatures to adaptively secure ones. Recently, this primitive also received a lot of attention as a building block in more complex cryptographic applications ranging from editable blockchains to advanced signature and encryption schemes. We observe that in the latter applications various different notions of collision-resistance are used, and it is not always clear if the respective notion really covers what seems intuitively required by the application. Therefore, we revisit existing collision-resistance notions in the literature, study their relations, and—using the example of the recent redactable blockchain proposals—discuss which practical impact different notions of collision-resistance might have. Moreover, we provide a stronger, and arguably more desirable, notion of collision-resistance than what is known from the literature. Finally, we present a surprisingly simple and efficient black-box construction of chameleon-hash functions achieving this strong notion.
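A worked toy example of a classic discrete-log chameleon hash in the style of Krawczyk and Rabin (tiny parameters, illustration only): anyone can evaluate the hash, while the trapdoor holder can solve for a randomizer that maps any new message to any previously seen hash value.

    import random

    p, q, g = 2039, 1019, 4        # safe prime p = 2q + 1; g generates the order-q subgroup

    x = random.randrange(1, q)     # trapdoor (secret key)
    h = pow(g, x, p)               # public hashing key

    def chash(m: int, r: int) -> int:
        # CH(m, r) = g^m * h^r mod p = g^(m + x*r) mod p
        return (pow(g, m, p) * pow(h, r, p)) % p

    m1, r1 = 123, random.randrange(q)
    m2 = 456
    # Trapdoor collision: solve m1 + x*r1 = m2 + x*r2 (mod q) for r2.
    r2 = ((m1 - m2) * pow(x, -1, q) + r1) % q
    assert chash(m1, r1) == chash(m2, r2)

Which of the competing collision-resistance notions a scheme like this satisfies is exactly the kind of question the paper systematizes.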
2020
PKC
Concretely-Efficient Zero-Knowledge Arguments for Arithmetic Circuits and Their Application to Lattice-Based Cryptography 📺
In this work we present a new interactive Zero-Knowledge Argument of knowledge for general arithmetic circuits. Our protocol is based on the “MPC-in-the-head” paradigm of Ishai et al. (STOC 2007) and follows the recent “MPC-in-the-head with Preprocessing” approach proposed by Katz, Kolesnikov and Wang (ACM CCS 2018). However, in contrast to Katz et al., who used the “cut-and-choose” approach for pre-processing, we show how to incorporate the well-known “sacrificing” paradigm into “MPC-in-the-head”, which reduces the proof size when working over arithmetic circuits. Our argument system uses only lightweight symmetric-key primitives and utilizes a simplified version of the so-called SPDZ protocol. Based on specific properties of our protocol we then show how it can be used to construct an efficient Zero-Knowledge Argument of Knowledge for instances of the Short Integer Solution (SIS) problem. We present different protocols that are tailored to specific uses of SIS, while utilizing the advantages of our scheme. In particular, we present a variant of our argument system that allows the parties to sample the circuit “on the fly”, which may be of independent interest. We furthermore implemented our Zero-Knowledge argument for SIS and show that using our protocols it is possible to run a complete interactive proof, even for general SIS instances which result in a circuit with $${>}10^6$$ gates, in less than 0.5 s.
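The “sacrificing” step can be shown in the clear over a prime field (secret shares and commitments omitted; this sketch only illustrates the algebra by which one multiplication triple is consumed to check another).

    import random

    P = 2**61 - 1                         # prime field modulus

    def sacrifice_check(a, b, c, x, y, z) -> bool:
        """Consume the triple (x, y, z) to verify that c == a*b (mod P).
        In the protocol, rho and sigma are the only values opened."""
        r = random.randrange(P)           # public random challenge
        rho = (r * a - x) % P
        sigma = (b - y) % P
        return (r * c - z - sigma * x - rho * y - sigma * rho) % P == 0

    a, b = random.randrange(P), random.randrange(P)
    x, y = random.randrange(P), random.randrange(P)
    assert sacrifice_check(a, b, a * b % P, x, y, x * y % P)
    # An incorrect triple fails except with probability about 1/P:
    assert not sacrifice_check(a, b, (a * b + 1) % P, x, y, x * y % P)

If c = ab + Δ with Δ ≠ 0, the checked expression equals rΔ, which is nonzero for all but one challenge r.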
2020
PKC
Updateable Inner Product Argument with Logarithmic Verifier and Applications 📺
We propose an improvement for the inner product argument of Bootle et al. (EUROCRYPT’16). The new argument replaces the unstructured common reference string (the commitment key) by a structured one. We give two instantiations of this argument, for two different distributions of the CRS. In the designated-verifier setting, this structure can be used to reduce verification from linear to logarithmic in the circuit size. The argument can be compiled to the publicly verifiable setting in asymmetric bilinear groups. The new common reference string is easily updateable. The argument can be directly used to improve verification of Bulletproofs range proofs (IEEE SP’18). On the other hand, to use the improved argument to prove circuit satisfiability with logarithmic verification, we adapt recent techniques from Sonic (ACM CCS’19) to work with the new common reference string. The resulting argument is secure under standard assumptions (in the Random Oracle Model), in contrast with Sonic and recent works that improve its efficiency (Plonk, Marlin, AuroraLight), which, apart from the Random Oracle Model, need either the Algebraic Group Model or knowledge-type assumptions.
2020
PKC
On Black-Box Extensions of Non-interactive Zero-Knowledge Arguments, and Signatures Directly from Simulation Soundness 📺
Highly efficient non-interactive zero-knowledge arguments (NIZK) are often constructed for limited languages and it is not known how to extend them to cover wider classes of languages in general. In this work we initiate a study on black-box language extensions for conjunctive and disjunctive relations, that is, building a NIZK system for $$\mathcal{L} \diamond \hat{\mathcal{L}}$$ (with $$\diamond \in \{\wedge, \vee\}$$) based on NIZK systems for languages $$\mathcal{L}$$ and $$\hat{\mathcal{L}}$$. While the conjunctive extension of NIZKs is straightforward, by simply executing the given NIZKs in parallel, it is not known how disjunctive extensions could be achieved in a black-box manner. Besides, observe that the simple conjunctive extension does not work in the case of simulation-sound NIZKs (SS-NIZKs), as pointed out by Sahai (FOCS 1999). Our main contribution is an impossibility result that negates the existence of the above extensions and implies other non-trivial separations among NIZKs, SS-NIZKs, and labelled SS-NIZKs. Motivated by the difficulty of such transformations, we additionally present an efficient construction of signature schemes based on unbounded simulation-sound NIZKs (USS-NIZKs) for any language without language extensions.
2020
PKC
On QA-NIZK in the BPK Model 📺
Recently, Bellare et al. defined subversion-resistance (security in the case where the CRS creator may be malicious) for NIZK. In particular, a Sub-ZK NIZK is zero-knowledge even in the case of a subverted CRS. We study Sub-ZK QA-NIZKs, where the CRS can depend on the language parameter. First, we observe that subversion zero-knowledge (Sub-ZK) in the CRS model corresponds to no-auxiliary-string non-black-box NIZK in the Bare Public Key model, and hence the use of non-black-box techniques is needed to obtain Sub-ZK. Second, we give a precise definition of Sub-ZK QA-NIZKs that are (knowledge-)sound if the language parameter but not the CRS is subverted, and zero-knowledge even if both are subverted. Third, we prove that the most efficient known QA-NIZK for linear subspaces, by Kiltz and Wee, is Sub-ZK under a new knowledge assumption that by itself is secure in (a weaker version of) the algebraic group model. Depending on the parameter setting, it is (knowledge-)sound under different non-falsifiable assumptions, some of which do not belong to the family of knowledge assumptions.
2020
PKC
Improved Discrete Gaussian and Subgaussian Analysis for Lattice Cryptography 📺
Discrete Gaussian distributions over lattices are central to lattice-based cryptography, and to the computational and mathematical aspects of lattices more broadly. The literature contains a wealth of useful theorems about the behavior of discrete Gaussians under convolutions and related operations. Yet despite their structural similarities, most of these theorems are formally incomparable, and their proofs tend to be monolithic and written nearly “from scratch,” making them unnecessarily hard to verify, understand, and extend. In this work we present a modular framework for analyzing linear operations on discrete Gaussian distributions. The framework abstracts away the particulars of Gaussians, and usually reduces proofs to the choice of appropriate linear transformations and elementary linear algebra. To showcase the approach, we establish several general properties of discrete Gaussians, and show how to obtain all prior convolution theorems (along with some new ones) as straightforward corollaries. As another application, we describe a self-reduction for Learning With Errors (LWE) that uses a fixed number of samples to generate an unlimited number of additional ones (having somewhat larger error). The distinguishing features of our reduction are its simple analysis in our framework, and its exclusive use of discrete Gaussians without any loss in parameters relative to a prior mixed discrete-and-continuous approach. As a contribution of independent interest, for subgaussian random matrices we prove a singular value concentration bound with explicitly stated constants, and we give tighter heuristics for specific distributions that are commonly used for generating lattice trapdoors. These bounds yield improvements in the concrete bit-security estimates for trapdoor lattice cryptosystems.
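As one representative example of the statements the framework unifies, a standard convolution theorem reads, informally (the precise smoothing-parameter condition below is a paraphrase, not a quotation of the paper):

$$ x_1 \sim D_{\Lambda, s_1},\; x_2 \sim D_{\Lambda, s_2} \;\Longrightarrow\; x_1 + x_2 \approx D_{\Lambda, \sqrt{s_1^2 + s_2^2}}, \quad \text{provided } \frac{s_1 s_2}{\sqrt{s_1^2 + s_2^2}} \ge \eta_{\varepsilon}(\Lambda), $$

where $$\eta_{\varepsilon}(\Lambda)$$ is the smoothing parameter and the two samples are independent. In the paper's framework, statements of this shape follow from the choice of an appropriate linear transformation plus elementary linear algebra.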
2020
PKC
Almost Tight Security in Lattices with Polynomial Moduli - PRF, IBE, All-but-many LTF, and More 📺
Achieving tight security is a fundamental task in cryptography. While one of the most important purposes of this task is to improve the overall efficiency of a construction (by allowing smaller security parameters), many current lattice-based instantiations do not completely achieve the goal. Particularly, a super-polynomial modulus seems to be necessary in all prior work for (almost) tight schemes that allow the adversary to conduct queries, such as PRF, IBE, and Signatures. As the super-polynomial modulus would affect the noise-to-modulus ratio and thus increase the parameters, this might cancel out the advantages (in efficiency) brought from the tighter analysis. To determine the full power of tight security/analysis in lattices, it is necessary to determine whether the super-polynomial modulus restriction is inherent. In this work, we remove the super-polynomial modulus restriction for many important primitives – PRF, IBE, All-but-many Lossy Trapdoor Functions, and Signatures. The crux relies on an improvement over the framework of Boyen and Li (Asiacrypt 16), and an almost tight reduction from LWE to LWR, which improves prior work by Alwen et al. (Crypto 13), Bogdanov et al. (TCC 16), and Bai et al. (Asiacrypt 15). By combining these two advances, we are able to derive these almost tight schemes under LWE with a polynomial modulus.
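For reference, the LWE-to-LWR step concerns samples of the following standard shapes, where q is the LWE modulus, p < q is the rounding modulus, and the deterministic rounding replaces the explicit noise e:

$$ \text{LWE: } \big(a,\; \langle a, s \rangle + e \bmod q\big), \qquad \text{LWR: } \big(a,\; \lfloor (p/q) \cdot (\langle a, s \rangle \bmod q) \rceil \bmod p\big). $$

Keeping q polynomial while preserving (almost) tightness in this reduction is the technical crux described above.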
2020
PKC
The Randomized Slicer for CVPP: Sharper, Faster, Smaller, Batchier 📺
Following the recent line of work on solving the closest vector problem with preprocessing (CVPP) using approximate Voronoi cells, we improve upon previous results in the following ways: We derive sharp asymptotic bounds on the success probability of the randomized slicer, by modelling the behaviour of the algorithm as a random walk on the coset of the lattice of the target vector. We thereby solve the open question left by Doulgerakis–Laarhoven–De Weger [PQCrypto 2019] and Laarhoven [MathCrypt 2019]. We obtain better trade-offs for CVPP and its generalisations (strictly, in certain regimes), both with and without nearest neighbour searching, as a direct result of the above sharp bounds on the success probabilities. We show how to reduce the memory requirement of the slicer, and in particular the corresponding nearest neighbour data structures, using ideas similar to those proposed by Becker–Gama–Joux [Cryptology ePrint Archive, 2015]. Using $$2^{0.185d + o(d)}$$ memory, we can solve a single CVPP instance in $$2^{0.264d + o(d)}$$ time. We further improve on the per-instance time complexities in certain memory regimes, when we are given a sufficiently large batch of CVPP problem instances for the same lattice. Using $$2^{0.208d + o(d)}$$ memory, we can heuristically solve CVPP instances in $$2^{0.234d + o(d)}$$ amortized time, for batches of size at least $$2^{0.058d + o(d)}$$ . Our random walk model for analysing arbitrary-step transition probabilities in complex step-wise algorithms may be of independent interest, both for deriving analytic bounds through convexity arguments, and for computing optimal paths numerically with a shortest path algorithm. As a side result we apply the same random walk model to graph-based nearest neighbour searching, where we improve upon results of Laarhoven [SOCG 2018] by deriving sharp bounds on the success probability of the corresponding greedy search procedure.
2020
PKC
Tweaking the Asymmetry of Asymmetric-Key Cryptography on Lattices: KEMs and Signatures of Smaller Sizes 📺
Currently, lattice-based cryptosystems are less efficient than their number-theoretic counterparts (based on RSA, discrete logarithm, etc.) in terms of key and ciphertext (signature) sizes. For adequate security the former typically needs thousands of bytes while in contrast the latter only requires at most hundreds of bytes. This significant difference has become one of the main concerns in replacing currently deployed public-key cryptosystems with lattice-based ones. Observing the inherent asymmetries in existing lattice-based cryptosystems, we propose asymmetric variants of the (module-)LWE and (module-)SIS assumptions, which yield further size-optimized KEM and signature schemes than those from standard counterparts. Following the framework of Lindner and Peikert (CT-RSA 2011) and the Crystals-Kyber proposal (EuroS&P 2018), we propose an IND-CCA secure KEM scheme from the hardness of the asymmetric module-LWE (AMLWE), whose asymmetry is fully exploited to obtain shorter public keys and ciphertexts. To target 128-bit quantum security, the public key (resp., ciphertext) of our KEM has only 896 bytes (resp., 992 bytes). Our signature scheme bears most resemblance to and improves upon the Crystals-Dilithium scheme (TCHES 2018). By making full use of the underlying asymmetric module-LWE and module-SIS assumptions and carefully selecting the parameters, we construct an SUF-CMA secure signature scheme with shorter public keys and signatures. For 128-bit quantum security, the public key (resp., signature) of our signature scheme has only 1312 bytes (resp., 2445 bytes). We adapt the best known attacks and their variants to our AMLWE and AMSIS problems and conduct a comprehensive and thorough analysis of several parameter choices (aiming at different security strengths) and their impacts on the sizes, security and error probability of lattice-based cryptosystems. Our analysis demonstrates that AMLWE and AMSIS problems admit more flexible and size-efficient choices of parameters than the respective standard versions.
2020
PKC
MPSign: A Signature from Small-Secret Middle-Product Learning with Errors 📺
We describe a digital signature scheme $$\mathsf{MPSign}$$, whose security relies on the conjectured hardness of the Polynomial Learning With Errors problem ($$\mathsf{PLWE}$$) for at least one defining polynomial within an exponential-size family (as a function of the security parameter). The proposed signature scheme follows the Fiat-Shamir framework and can be viewed as the Learning With Errors counterpart of the signature scheme described by Lyubashevsky at Asiacrypt 2016, whose security relies on the conjectured hardness of the Polynomial Short Integer Solution ($$\mathsf{PSIS}$$) problem for at least one defining polynomial within an exponential-size family. As opposed to the latter, $$\mathsf{MPSign}$$ enjoys a security proof from $$\mathsf{PLWE}$$ that is tight in the quantum-access random oracle model. The main ingredient is a reduction from $$\mathsf{PLWE}$$ for an arbitrary defining polynomial among exponentially many, to a variant of the Middle-Product Learning with Errors problem ($$\mathsf{MPLWE}$$) that allows for secrets that are small compared to the working modulus. We present concrete parameters for $$\mathsf{MPSign}$$ using such small secrets, and show that they lead to significant savings in signature length over Lyubashevsky’s Asiacrypt 2016 scheme (which uses larger secrets) at typical security levels. As an additional small contribution, and in contrast to $$\mathsf{MPSign}$$ (or $$\mathsf{MPLWE}$$), we present an efficient key-recovery attack against Lyubashevsky’s scheme (or the inhomogeneous $$\mathsf{PSIS}$$ problem), when it is used with sufficiently small secrets, showing the necessity of a lower bound on secret size for the security of that scheme.
2020
PKC
Witness Indistinguishability for Any Single-Round Argument with Applications to Access Control 📺
Consider an access policy for some resource which only allows access to users of the system who own a certain set of attributes. Specifically, we consider the case where such an access structure is defined by some monotone function $$f:\{0,1\}^N \rightarrow \{0,1\}$$, belonging to some class of functions $$F$$ (e.g. conjunctions, space-bounded computation), where N is the number of possible attributes. In this work we show that any succinct single-round delegation scheme for the function class $$F$$ can be converted into a succinct single-round private access control protocol. That is, a verifier can be convinced that an approved user (i.e. one which holds an approved set of attributes) is accessing the system, without learning any additional information about the user or the set of attributes. As a main tool of independent interest, we show that assuming a quasi-polynomially secure two-message oblivious transfer scheme with statistical sender privacy (which can be based on quasi-polynomial hardness of the DDH, QR, DCR or LWE assumptions), we can convert any single-round protocol into a witness indistinguishable one, with similar communication complexity.
2020
PKC
Boosting Verifiable Computation on Encrypted Data 📺
We consider the setting in which an untrusted server stores a collection of data and is asked to compute a function over it. In this scenario, we aim for solutions where the untrusted server does not learn information about the data and is prevented from cheating. This problem is addressed by verifiable and private delegation of computation, proposed by Gennaro, Gentry and Parno (CRYPTO’10), a notion that is close to both the active areas of homomorphic encryption and verifiable computation (VC). However, in spite of the efficiency advances in the respective areas, VC protocols that guarantee privacy of the inputs are still expensive. The only exception is a protocol by Fiore, Gennaro and Pastro (CCS’14) that supports arithmetic circuits of degree at most 2. In this paper we propose new efficient protocols for VC on encrypted data that improve over the state of the art solution of Fiore et al. in multiple aspects. First, we can support computations of degree higher than 2. Second, we achieve public delegatability and public verifiability whereas Fiore et al. need the same secret key to encode inputs and verify outputs. Third, we achieve a new property that guarantees that verifiers can be convinced about the correctness of the outputs without learning information on the inputs. The key tool to obtain our new protocols is a new SNARK that can efficiently handle computations over a quotient polynomial ring, such as the one used by Ring-LWE somewhat homomorphic encryption schemes. This SNARK in turn relies on a new commit-and-prove SNARK for proving evaluations on the same point of several committed polynomials. We propose a construction of this scheme under an extractability assumption over bilinear groups in the random oracle model.
2020
PKC
Lossy CSI-FiSh: Efficient Signature Scheme with Tight Reduction to Decisional CSIDH-512 📺
Recently, Beullens, Kleinjung, and Vercauteren (Asiacrypt’19) provided the first practical isogeny-based digital signature, obtained from the Fiat-Shamir (FS) paradigm. They worked with the CSIDH-512 parameters, relying on a new record class group computation. However, as with all standard FS signatures, the security proof is highly non-tight and the concrete parameters are set under the heuristic that the only way to attack the scheme is by finding collisions for a hash function. In this paper, we propose an FS-style signature scheme, called Lossy CSI-FiSh, constructed using the CSIDH-512 parameters and with a security proof based on the “Lossy Keys” technique introduced by Kiltz, Lyubashevsky and Schaffner (Eurocrypt’18). Lossy CSI-FiSh is provably secure under the same assumption which underlies the security of the key exchange protocol CSIDH (Castryck et al. (Asiacrypt’18)) and is almost as efficient as CSI-FiSh. For instance, aiming for small signature size, our scheme is expected to take $$\approx 800$$ ms to sign/verify while producing signatures of size $$\approx 280$$ bytes. This is only about twice as slow as CSI-FiSh, with similar signature size for the same parameter set. As an additional benefit, our scheme is by construction secure both in the classical and the quantum random oracle model.
2020
PKC
Threshold Schemes from Isogeny Assumptions 📺
We initiate the study of threshold schemes based on the Hard Homogeneous Spaces (HHS) framework of Couveignes. Quantum-resistant HHS based on supersingular isogeny graphs have recently become usable thanks to the record class group precomputation performed for the signature scheme CSI-FiSh. Using the HHS equivalent of the technique of Shamir’s secret sharing in the exponents, we adapt isogeny-based schemes to the threshold setting. In particular we present threshold versions of the CSIDH public-key encryption and the CSI-FiSh signature schemes. The main highlight is a threshold version of CSI-FiSh which runs almost as fast as the original scheme, for message sizes as low as 1880 B, public key sizes as low as 128 B, and thresholds up to 56; other speed-size-threshold compromises are possible.
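A compact sketch of the underlying Shamir arithmetic over a toy prime field; in the HHS setting the reconstruction happens “in the exponents”, meaning that each participating party applies the group action scaled by its share times a Lagrange coefficient, rather than ever revealing the share.

    import random

    q = 2**127 - 1                     # toy prime (the class-group order plays this role)

    def share(secret, t, n):
        """Degree-t polynomial with constant term `secret`; n evaluation points."""
        coeffs = [secret] + [random.randrange(q) for _ in range(t)]
        return [(i, sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q)
                for i in range(1, n + 1)]

    def lagrange_at_zero(i, subset):
        num = den = 1
        for j in subset:
            if j != i:
                num = num * (-j) % q
                den = den * (i - j) % q
        return num * pow(den, -1, q) % q

    def reconstruct(shares):
        subset = [i for i, _ in shares]
        return sum(s * lagrange_at_zero(i, subset) for i, s in shares) % q

    shares = share(1234, t=2, n=5)
    assert reconstruct(shares[:3]) == 1234   # any t+1 = 3 shares suffice

In the threshold CSIDH/CSI-FiSh protocols the sum above is never formed explicitly: the parties apply their weighted actions to a curve one after another, composing to the action of the full secret.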
2020
PKC
Topology-Hiding Computation for Networks with Unknown Delays 📺
Topology-Hiding Computation (THC) allows a set of parties to securely compute a function over an incomplete network without revealing information on the network topology. Since its introduction in TCC’15 by Moran et al., the research on THC has focused on reducing the communication complexity, allowing larger graph classes, and tolerating stronger corruption types. All of these results consider a fully synchronous model with a known upper bound on the maximal delay of all communication channels. Unfortunately, in any realistic setting this bound has to be extremely large, which makes all fully synchronous protocols inefficient. In the literature on multi-party computation, this is solved by considering the fully asynchronous model. However, THC is unachievable in this model (and even hard to define), leaving even the definition of a meaningful model as an open problem. The contributions of this paper are threefold. First, we introduce a meaningful model of unknown and random communication delays for which THC is both definable and achievable. The probability distributions of the delays can be arbitrary for each channel, but one needs to make the (necessary) assumption that the delays are independent. The existing fully-synchronous THC protocols do not work in this setting and would, in particular, leak information about the topology. Second, in the model with trusted stateless hardware boxes introduced at Eurocrypt’18 by Ball et al., we present a THC protocol that works for any graph class. Third, we explore what is achievable in the standard model without trusted hardware and present a THC protocol for specific graph types (cycles and trees) secure under the DDH assumption. The speed of all protocols scales with the actual (unknown) delay times, in contrast to all previously known THC protocols whose speed is determined by the assumed upper bound on the network delay.
2020
PKC
Sublinear-Round Byzantine Agreement Under Corrupt Majority 📺
Although Byzantine Agreement (BA) has been studied for three decades, perhaps somewhat surprisingly, there still exist significant gaps in our understanding regarding its round complexity. A long-standing open question is the following: can we achieve BA with sublinear round complexity under corrupt majority? Due to the beautiful works by Garay et al. (FOCS’07) and Fitzi and Nielsen (DISC’09), we have partial and affirmative answers to this question, albeit for the narrow regime $$f = n/2 + o(n)$$, where f is the number of corrupt nodes and n is the total number of nodes. So far, no positive result is known about the setting $$f > 0.51n$$, even for static corruption! In this paper, we make progress along this somewhat stagnant front. We show that there exists a corrupt-majority BA protocol that terminates in $$O(\frac{1}{\epsilon} \log \frac{1}{\delta})$$ rounds in the worst case, satisfies consistency with probability at least $$1 - \delta$$, and tolerates a $$(1-\epsilon)$$ fraction of corrupt nodes. Our protocol is secure against an adversary that can corrupt nodes adaptively during the protocol execution but cannot perform “after-the-fact” removal of honest messages that have already been sent prior to corruption. Our upper bound is optimal up to a logarithmic factor in light of the elegant $$\varOmega(1/\epsilon)$$ lower bound by Garay et al. (FOCS’07).
2020
PKC
Bandwidth-Efficient Threshold EC-DSA 📺
Threshold Signatures allow n parties to share the power of issuing digital signatures so that any coalition of size at least $$t+1$$ can sign, whereas groups of t or fewer players cannot. Over the last few years many schemes addressed the question of realizing efficient threshold variants for the specific case of EC-DSA signatures. In this paper we present new solutions to the problem that aim at reducing the overall bandwidth consumption. Our main contribution is a new variant of the Gennaro and Goldfeder protocol from ACM CCS 2018 that avoids all the required range proofs, while retaining provable security against malicious adversaries in the dishonest majority setting. Our experiments show that – for all levels of security – our signing protocol reduces the bandwidth consumption of the best previously known secure protocols by factors varying between 4.4 and 9, while key generation is consistently about two times less expensive. Furthermore, compared to these same protocols, our signature generation is faster for 192 bits of security and beyond.
2020
PKC
Blazing Fast OT for Three-Round UC OT Extension 📺
Oblivious Transfer (OT) is an important building block for multi-party computation (MPC). Since OT requires expensive public-key operations, efficiency-conscious MPC protocols use an OT extension (OTE) mechanism [Beaver 96, Ishai et al. 03] to provide the functionality of many independent OT instances with the same sender and receiver, using only symmetric-key operations plus a few instances of some base OT protocol. Consequently there is significant interest in constructing OTE-friendly protocols, namely protocols that, when used as base OT for OTE, result in extended OT that are both round-efficient and cost-efficient. We present the most efficient OTE-friendly protocol to date. Specifically: Our base protocol incurs only 3 exponentiations per instance. Our base protocol results in a 3-round extended OT protocol. The extended protocol is UC secure in the Observable Random Oracle Model (ROM) under the CDH assumption. For comparison, the state-of-the-art base OTs that result in 3-round OTE are proven secure only in the programmable ROM, and require 4 exponentiations under Interactive DDH or 6 exponentiations under DDH [Masny-Rindal 19]. We also implement our protocol and benchmark it against the Simplest OT protocol [Chou and Orlandi, Latincrypt 2015], which is the most efficient and widely used OT protocol but not known to suffice for OTE. The computation cost is roughly the same in both cases. Interestingly, our base OT is also 3 rounds. However, we slightly modify the extension mechanism (which normally adds a round) so as to preserve the number of rounds in our case.
2020
PKC
Going Beyond Dual Execution: MPC for Functions with Efficient Verification 📺
The dual execution paradigm of Mohassel and Franklin (PKC’06) and Huang, Katz and Evans (IEEE ’12) shows how to achieve the notion of 1-bit leakage security at roughly twice the cost of semi-honest security for the special case of two-party secure computation. To date, there are no multi-party computation (MPC) protocols that offer such a strong trade-off between security and semi-honest performance. Our main result is to address this shortcoming by designing 1-bit leakage protocols for the multi-party setting, albeit for a special class of functions. We say that a function f(x, y) is efficiently verifiable by g if the running time of g is always smaller than that of f and $$g(x,y,z)=1$$ if and only if $$f(x,y)=z$$. In the two-party setting, we first improve dual execution by observing that the “second execution” can be an evaluation of g instead of f, and that by definition, the evaluation of g is asymptotically more efficient. Our main MPC result is to construct a 1-bit leakage protocol for such functions from any passive protocol for f that is secure up to additive errors and any active protocol for g. An important result by Genkin et al. (STOC ’14) shows how the classic protocols by Goldreich et al. (STOC ’87) and Ben-Or et al. (STOC ’88) naturally support this property, which allows us to instantiate our compiler with two-party and multi-party protocols. A key technical result we prove is that the passive protocol for distributed garbling due to Beaver et al. (STOC ’90) is in fact secure up to additive errors against malicious adversaries, thereby yielding another powerful instantiation of our paradigm in the constant-round multi-party setting. As another concrete example of instantiating our approach, we present a novel protocol for computing perfect matching that is secure in the 1-bit leakage model and whose communication complexity is less than the honest-but-curious implementations of textbook algorithms for perfect matching.
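The notion of “efficiently verifiable by g” is well illustrated by the paper's own perfect-matching example: computing a maximum matching takes super-linear time, while checking a claimed perfect matching is a single linear scan. The helper below is a hypothetical sketch of such a g.

    def is_perfect_matching(adj, matching):
        """g: verify a claimed perfect matching in time linear in the graph size."""
        matched = set()
        for u, v in matching:
            if v not in adj[u] or u in matched or v in matched:
                return False              # not an edge, or a vertex reused
            matched.update((u, v))
        return len(matched) == len(adj)   # every vertex covered exactly once

    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}        # the path 0-1-2-3
    print(is_perfect_matching(adj, [(0, 1), (2, 3)]))   # True
    print(is_perfect_matching(adj, [(1, 2)]))           # False

In the compiler, the active (more expensive) protocol only ever runs this cheap verification, while the passive protocol runs the expensive matching computation.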
2020
PKC
Mon$\mathbb {Z}_{2^{k}}$a: Fast Maliciously Secure Two Party Computation on $\mathbb {Z}_{2^{k}}$ 📺
In this paper we present a new 2-party protocol for secure computation over rings of the form $$\mathbb{Z}_{2^k}$$. Like many recent efficient MPC protocols supporting dishonest majority, our protocol consists of a heavier (input-independent) pre-processing phase and a very efficient online stage. Our offline phase is similar to BeDOZa (Bendlin et al., Eurocrypt 2011) but employs Joye-Libert (JL, Eurocrypt 2013) as the underlying homomorphic cryptosystem and, notably, it can be proven secure without resorting to the expensive sacrifice step. JL turns out to be particularly well suited for the ring setting as it naturally supports $$\mathbb{Z}_{2^k}$$ as the underlying message space. Moreover, it enjoys several additional properties (such as valid ciphertext-verifiability and efficiency) that make it a very good fit for MPC in general. As a main technical contribution we show how to take advantage of all these properties (and of more properties that we introduce in this work, such as a ZK proof of correct multiplication) in order to design a two-party protocol that is efficient, fast and easy to implement in practice. Our solution is particularly well suited for relatively large choices of k (e.g. $$k=128$$), but compares favorably with the state-of-the-art solution of SPD$$\mathbb{Z}_{2^k}$$ (Cramer et al., Crypto 2018) already for the practically very relevant case of $$\mathbb{Z}_{2^{64}}$$.
2020
PKC
Generic Authenticated Key Exchange in the Quantum Random Oracle Model 📺
We propose $$\mathsf{FO}_{\mathsf{AKE}}$$, a generic construction of two-message authenticated key exchange (AKE) from any passively secure public key encryption (PKE) in the quantum random oracle model (QROM). Whereas previous AKE constructions relied on a Diffie-Hellman key exchange or required the underlying PKE scheme to be perfectly correct, our transformation allows arbitrary PKE schemes with non-perfect correctness. Dealing with imperfect schemes is one of the major difficulties in a setting involving active attacks. Our direct construction, when applied to schemes such as the submissions to the recent NIST post-quantum competition, is more natural than previous AKE transformations. Furthermore, we avoid the use of (quantum-secure) digital signature schemes which are considerably less efficient than their PKE counterparts. As a consequence, we can instantiate our AKE transformation with any of the submissions to the recent NIST competition, e.g., ones based on codes and lattices. $$\mathsf{FO}_{\mathsf{AKE}}$$ can be seen as a generalisation of the well-known Fujisaki-Okamoto transformation (for building actively secure PKE from passively secure PKE) to the AKE setting. As a helper result, we also provide a security proof for the Fujisaki-Okamoto transformation in the QROM for PKE with non-perfect correctness which is tighter and tolerates a larger correctness error than previous proofs.
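For background, the basic Fujisaki-Okamoto shape that $$\mathsf{FO}_{\mathsf{AKE}}$$ generalises can be sketched as follows; pke_enc and pke_dec are hypothetical stand-ins for any passively secure PKE, and G, H model the random oracles.

    import hashlib, os

    def G(m): return hashlib.sha256(b"G" + m).digest()          # derandomizing oracle
    def H(m, c): return hashlib.sha256(b"H" + m + c).digest()   # key-derivation oracle

    def encaps(pk, pke_enc):
        m = os.urandom(32)
        c = pke_enc(pk, m, coins=G(m))   # encrypt under coins derived from m
        return c, H(m, c)                # ciphertext and session key

    def decaps(sk, pk, c, pke_enc, pke_dec):
        m = pke_dec(sk, c)
        # Re-encryption check: catches malformed ciphertexts (active attacks).
        if m is None or pke_enc(pk, m, coins=G(m)) != c:
            return None                  # implicit-rejection variants return a pseudorandom key instead
        return H(m, c)

The delicate part addressed by the paper is what happens in the QROM when decryption can fail on honestly generated ciphertexts with small probability.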
2020
PKC
Threshold Ring Signatures: New Definitions and Post-quantum Security 📺
A t-out-of-N threshold ring signature allows t parties to jointly and anonymously compute a signature on behalf of N public keys, selected in an arbitrary manner among the set of all public keys registered in the system. Existing definitions for t-out-of-N threshold ring signatures guarantee security only when the public keys are honestly generated, and many even restrict the ability of the adversary to actively participate in the computation of the signatures. Such definitions do not capture the open settings envisioned for threshold ring signatures, where parties can independently add themselves to the system, and join other parties for the computation of the signature. Furthermore, known constructions of threshold ring signatures are not provably secure in the post-quantum setting, either because they are based on problems that are not post-quantum secure (e.g. Discrete Log, RSA), or because they rely on transformations such as Fiat-Shamir, which are not always secure in the quantum random oracle model (QROM). In this paper, we provide the first definition of t-out-of-N threshold ring signatures against active adversaries who can participate in the system and arbitrarily deviate from the prescribed procedures. Second, we present a post-quantum secure realization based on any (post-quantum secure) trapdoor commitment, which we prove secure in the QROM. Our construction is black-box and it can be instantiated with any trapdoor commitment, thus allowing the use of a variety of hardness assumptions.
2020
PKC
Tight and Optimal Reductions for Signatures Based on Average Trapdoor Preimage Sampleable Functions and Applications to Code-Based Signatures 📺
The GPV construction [GPV08] presents a generic construction of signature schemes in the hash-and-sign paradigm and is used in some lattice-based signatures. This construction requires a family $$\mathcal{F}$$ of trapdoor preimage sampleable functions (TPSF). In this work we extend this notion to the weaker Average TPSF (ATPSF) and show that the GPV construction also holds for ATPSF in the Random Oracle Model (ROM). We also introduce the problem of finding a Claw with a random function (Claw(RF)) and present a tight security reduction to the Claw(RF) problem. Our reduction is also optimal, meaning that an algorithm that solves the Claw(RF) problem breaks the scheme. We extend these results to the quantum setting and prove this same tight and optimal reduction in the QROM. Finally, we apply these results to code-based signatures, notably the Wave signature scheme, and prove security for it in the ROM and the QROM, improving and extending the original analysis of [DST19a].
2020
PKC
Faster Cofactorization with ECM Using Mixed Representations 📺
This paper introduces a novel implementation of the elliptic curve factoring method specifically designed for medium-size integers such as those arising by the billions in the cofactorization step of the Number Field Sieve. In this context, our algorithm requires fewer modular multiplications than any other publicly available implementation. The main ingredients are: the use of batches of primes, fast point tripling, optimal double-base decompositions and Lucas chains, and a good mix of Edwards and Montgomery representations.
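A toy greedy computation of a double-base ({2,3}) decomposition, the kind of representation that makes fast point tripling pay off (illustration only; production ECM code optimizes the cost of the full chain, not merely the number of terms).

    import math

    def nearest_2a3b(n):
        """Closest integer of the form 2^a * 3^b to n >= 1."""
        best, b = 1, 0
        while 3**b <= 2 * n:
            for a in range(int(math.log2(n)) + 2):
                cand = 2**a * 3**b
                if abs(n - cand) < abs(n - best):
                    best = cand
                if cand > 2 * n:
                    break
            b += 1
        return best

    def double_base_greedy(n):
        """Write n as a signed sum of {2,3}-integers."""
        terms = []
        while n != 0:
            sign = 1 if n > 0 else -1
            t = nearest_2a3b(abs(n))
            terms.append((sign, t))
            n -= sign * t
        return terms

    # 10241 = 2^7*3^4 - 2^7 + 1: three terms, evaluable with doublings and triplings.
    print(double_base_greedy(10241))     # [(1, 10368), (-1, 128), (1, 1)]

Scalar multiplication can then be scheduled as a short sequence of doublings, triplings and additions, which is where mixed Edwards/Montgomery representations save modular multiplications.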
2020
PKC
Improved Classical Cryptanalysis of SIKE in Practice 📺
The main contribution of this work is an optimized implementation of the van Oorschot-Wiener (vOW) parallel collision finding algorithm. As is typical for cryptanalysis against conjectured hard problems (e.g. factoring or discrete logarithms), challenges can arise in the implementation that are not captured in the theory, making the performance of the algorithm in practice a crucial element of estimating security. We present a number of novel improvements, both to generic instantiations of the vOW algorithm finding collisions in arbitrary functions, and to its instantiation in the context of the supersingular isogeny key encapsulation (SIKE) protocol, that culminate in an improved classical cryptanalysis of the computational supersingular isogeny (CSSI) problem. In particular, we present a scalable implementation that can be applied to the Round-2 parameter sets of SIKE and can be used to give confidence in their security levels.
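A single-threaded toy version of the vOW distinguished-points technique on a 32-bit random function (the real algorithm runs many such walks in parallel against a fixed-size memory, which is where the practical challenges studied above arise).

    import hashlib, os

    def f(x: bytes) -> bytes:
        """Stand-in random function on 32-bit strings whose collisions we want."""
        return hashlib.sha256(x).digest()[:4]

    def distinguished(x: bytes, bits: int = 10) -> bool:
        return int.from_bytes(x, "big") >> (32 - bits) == 0   # top bits all zero

    def vow_collision():
        seen = {}                              # distinguished point -> (start, length)
        while True:
            start = x = os.urandom(4)
            length = 0
            while not distinguished(x) and length < 20 * 1024:
                x, length = f(x), length + 1
            if not distinguished(x):
                continue                       # abandon rare degenerate walks
            if x in seen and seen[x][0] != start:
                s2, l2 = seen[x]
                a, la, b, lb = start, length, s2, l2
                while la > lb: a, la = f(a), la - 1   # align trail lengths
                while lb > la: b, lb = f(b), lb - 1
                while f(a) != f(b):                   # step to the merge point
                    a, b = f(a), f(b)
                if a != b:
                    return a, b
            else:
                seen[x] = (start, length)

    a, b = vow_collision()
    assert a != b and f(a) == f(b)

The memory (here the dictionary `seen`) holds only distinguished points, so memory size, trail length and the number of parallel walkers can be traded off; tuning that trade-off in practice is a key theme of the paper.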
2020
PKC
A Short-List of Pairing-Friendly Curves Resistant to Special TNFS at the 128-Bit Security Level 📺
There have been notable improvements in discrete logarithm computations in finite fields since 2015 and the introduction of the Tower Number Field Sieve algorithm (TNFS) for extension fields. The Special TNFS is very efficient in finite fields that are target groups of pairings on elliptic curves, where the characteristic is special (e.g. sparse). The key sizes for pairings should be increased, and alternative pairing-friendly curves can be considered. We revisit the Special variant of TNFS for pairing-friendly curves. In this case the characteristic is given by a polynomial of moderate degree (between 4 and 38) and tiny coefficients, evaluated at an integer (a seed). We present a polynomial selection with a new practical trade-off between degree and coefficient size. As a consequence, the security of curves computed by Barbulescu, El Mrabet and Ghammam in 2019 should be revised: we obtain a smaller estimated cost of STNFS for all curves except BLS12 and BN. To obtain TNFS-secure curves, we reconsider the Brezing–Weng generic construction of families of pairing-friendly curves and estimate the cost of our new Special TNFS algorithm for these curves. This improves on the work of Fotiadis and Konstantinou, Fotiadis and Martindale, and Barbulescu, El Mrabet and Ghammam. We obtain a short-list of interesting families of curves that are resistant to the Special TNFS algorithm, of embedding degrees 10 to 16 for the 128-bit security level. We conclude that at the 128-bit security level, BLS-12 and Fotiadis–Konstantinou–Martindale curves with $$k=12$$ over a 440 to 448-bit prime field seem to be the best choice for pairing efficiency. We also give hints at the 192-bit security level.
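As a concrete instance of the polynomial shape being exploited, the standard BLS12 family (embedding degree k = 12) is parametrized by

$$ r(x) = x^4 - x^2 + 1, \qquad p(x) = \frac{(x-1)^2 \, r(x)}{3} + x, \qquad t(x) = x + 1, $$

with the curve defined over the prime field of order p(u) for a sparse seed u; it is precisely this moderate-degree, tiny-coefficient form of the characteristic that the Special TNFS can exploit, and that the short-listed families must resist.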
2020
PKC
Privacy-Preserving Authenticated Key Exchange and the Case of IKEv2 📺
In this paper, we present a strong, formal, and general-purpose cryptographic model for privacy-preserving authenticated key exchange (PPAKE) protocols. PPAKE protocols are secure in the traditional AKE sense but additionally guarantee the confidentiality of the identities used in communication sessions. Our model has several useful and novel features, among others: it is a proper extension of classical AKE models, guarantees in a strong sense that the confidentiality of session keys is independent from the secrecy of the used identities, and it is the first to support what we call dynamic modes, where the responsibility of selecting the identities of the communication partners may vary over several protocol runs. We show the validity of our model by applying it to the cryptographic core of IPsec IKEv2 with signature-based authentication where the need for dynamic modes is practically well-motivated. In our analysis, we not only show that this protocol provides strong classical AKE security guarantees but also that the identities that are used by the parties remain hidden in successful protocol runs. Historically, the Internet Key Exchange (IKE) protocol was the first real-world AKE to incorporate privacy-preserving techniques. However, lately privacy-preserving techniques have gained renewed interest in the design process of important protocols like TLS 1.3 (with encrypted SNI) and NOISE. We believe that our new model can be a solid foundation to analyze these and other practical protocols with respect to their privacy guarantees, in particular, in the now so wide-spread scenario where multiple virtual servers are hosted on a single machine.
2020
PKC
Linearly-Homomorphic Signatures and Scalable Mix-Nets 📺
Anonymity is a primary ingredient for our digital life. Several tools have been designed to address it such as, for authentication, blind signatures, group signatures or anonymous credentials and, for confidentiality, randomizable encryption or mix-nets. When it comes to complex electronic voting schemes, random shuffling of authenticated ciphertexts with mix-nets is the only known tool. However, it requires huge and complex zero-knowledge proofs to guarantee the actual permutation of the initial ciphertexts in a privacy-preserving way. In this paper, we propose a new approach for proving correct shuffling of signed ElGamal ciphertexts: the mix-servers can simply randomize individual ballots (that is, the ciphertexts, the signatures, and the verification keys), with an additional global proof of constant size, and the output will be publicly verifiable. The security proof is in the generic bilinear group model. The computational complexity for each mix-server is linear in the number of ballots. Verification is also linear in the number of ballots, but independent of the number of rounds of mixing. This leads to a new highly scalable technique. Our construction makes use of linearly-homomorphic signatures, with new features, that are of independent interest.
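The elementary step that the mix-servers build on is plain ElGamal re-randomization, sketched here with toy parameters; the paper's contribution is to additionally randomize the attached signatures and verification keys homomorphically, which this sketch does not show.

    import random

    p, q, g = 2039, 1019, 4            # toy group: order-q subgroup of Z_p^*

    sk = random.randrange(1, q)
    pk = pow(g, sk, p)

    def enc(m):
        r = random.randrange(1, q)
        return (pow(g, r, p), m * pow(pk, r, p) % p)

    def rerandomize(c):
        """One mix-server step: refresh the ciphertext without any secret key."""
        c1, c2 = c
        s = random.randrange(1, q)
        return (c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p)

    def dec(c):
        c1, c2 = c
        return c2 * pow(c1, q - sk, p) % p      # c2 / c1^sk

    m = pow(g, 7, p)                   # message encoded in the subgroup
    c = enc(m)
    assert dec(rerandomize(c)) == m    # same plaintext, fresh-looking ciphertext

What usually requires a heavy shuffle proof is showing that a whole batch of such re-randomized ciphertexts is a permutation of the input batch; the linearly-homomorphic signatures replace that with a constant-size global proof.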
2020
PKC
Efficient Redactable Signature and Application to Anonymous Credentials 📺
Let us assume that Alice has received a constant-size signature on a set of messages $$\{m_i\}_{i=1}^n$$ from some organization. Depending on the situation, Alice might need to disclose, prove relations about, or hide some of these messages. Ideally, the complexity of the corresponding protocols should not depend on the hidden messages. In particular, if Alice wants to disclose only k messages, then the authenticity of the latter should be verifiable in at most O(k) operations. Many solutions were proposed over the past decades, but they only provide a partial answer to this problem. In particular, we note that they suffer either from the need to prove knowledge of the hidden elements or from the inability to prove that the latter satisfy some relations. In this paper, we propose a very efficient constant-size redactable signature scheme that addresses all the problems above. Signatures can indeed be redacted to remain valid only on a subset of k messages included in $$\{m_i\}_{i=1}^n$$. The resulting redacted signature consists of 4 elements and can be verified with essentially k exponentiations. Different showings of the same signature can moreover be made unlinkable, leading to a very efficient anonymous credentials system.
2020
TCC
Non-Malleable Codes, Extractors and Secret Sharing for Interleaved Tampering and Composition of Tampering 📺
Non-malleable codes were introduced by Dziembowski, Pietrzak, and Wichs (JACM 2018) as a generalization of standard error correcting codes to handle severe forms of tampering on codewords. This notion has attracted a lot of recent research, resulting in various explicit constructions, which have found applications in tamper-resilient cryptography and connections to other pseudorandom objects in theoretical computer science. We continue the line of investigation on explicit constructions of non-malleable codes in the information theoretic setting, and give explicit constructions for several new classes of tampering functions. These classes strictly generalize several previously studied classes of tampering functions, and in particular extend the well studied split-state model, which is a “compartmentalized” model in the sense that the codeword is partitioned a priori into disjoint intervals for tampering. Specifically, we give explicit non-malleable codes for the following classes of tampering functions. (1) Interleaved split-state tampering: Here the codeword is partitioned in an unknown way by an adversary, and then tampered with by a split-state tampering function. (2) Affine tampering composed with split-state tampering: In this model, the codeword is first tampered with by a split-state adversary, and then the whole tampered codeword is further tampered with by an affine function. In fact our results are stronger, and we can handle affine tampering composed with interleaved split-state tampering. Our results are the first explicit constructions of non-malleable codes in any of these tampering models. As applications, they also directly give non-malleable secret-sharing schemes with binary shares in the split-state joint tampering model and the stronger model of affine tampering composed with split-state joint tampering. We derive all these results from explicit constructions of seedless non-malleable extractors, which we believe are of independent interest. Using our techniques, we also give an improved seedless extractor for an unknown interleaving of two independent sources.
2020
TCC
On Pseudorandom Encodings 📺
We initiate a study of pseudorandom encodings: efficiently computable and decodable encoding functions that map messages from a given distribution to a random-looking distribution. For instance, every distribution that can be perfectly compressed admits such a pseudorandom encoding. Pseudorandom encodings are motivated by a variety of cryptographic applications, including password-authenticated key exchange, “honey encryption” and steganography. The main question we ask is whether every efficiently samplable distribution admits a pseudorandom encoding. Under different cryptographic assumptions, we obtain positive and negative answers for different flavors of pseudorandom encodings, and relate this question to problems in other areas of cryptography. In particular, by establishing a two-way relation between pseudorandom encoding schemes and efficient invertible sampling algorithms, we reveal a connection between adaptively secure multi-party computation and questions in the domain of steganography.
2020
TCC
On the Complexity of Arithmetic Secret Sharing 📺
Since the mid 2000s, asymptotically-good strongly-multiplicative linear (ramp) secret sharing schemes over a fixed finite field have turned out to be a central theoretical primitive in numerous constant-communication-rate results in multi-party cryptographic scenarios, and, surprisingly, in two-party cryptography as well. Known constructions of this most powerful class of arithmetic secret sharing schemes all rely heavily on algebraic geometry (AG), i.e., on dedicated AG codes based on asymptotically good towers of algebraic function fields defined over finite fields. It is a well-known open question, since the first (explicit) constructions of such schemes appeared in CRYPTO 2006, whether the use of “heavy machinery” can be avoided here; i.e., the question is whether the mere existence of such schemes can also be proved by “elementary” techniques only (say, from classical algebraic coding theory), even disregarding effective construction. So far, there is no progress. In this paper we show the theoretical result that, (1) no matter whether this open question has an affirmative answer or not, these schemes can be constructed explicitly by elementary algorithms defined in terms of basic algebraic coding theory. This pertains to all relevant operations associated to such schemes, including, notably, the generation of an instance for a given number of players n, as well as error correction in the presence of corrupt shares. We further show that (2) the algorithms are quasi-linear time (in n); this is (asymptotically) significantly more efficient than the known constructions. That said, the analysis of the mere termination of these algorithms does still rely on algebraic geometry, in the sense that it requires “black-box application” of suitable existence results for these schemes. Our method employs a nontrivial, novel adaptation of a classical (and ubiquitous) paradigm from coding theory that enables transformation of existence results on asymptotically good codes into explicit construction of such codes via concatenation, at some constant loss in the parameters achieved. In a nutshell, our generating idea is to combine a cascade of explicit but “asymptotically-bad-yet-good-enough schemes” with an asymptotically good one in such a judicious way that the latter can be selected with a number of players exponentially smaller than that of the compound scheme. This opens the door to efficient, elementary exhaustive search. In order to make this work, we overcome a number of nontrivial technical hurdles. Our main handles include a novel application of the recently introduced notion of Reverse Multiplication-Friendly Embeddings (RMFE) from CRYPTO 2018, as well as a novel application of a natural variant in arithmetic secret sharing from EUROCRYPT 2008.
2020
TCC
Proof-Carrying Data from Accumulation Schemes 📺
Recursive proof composition has been shown to lead to powerful primitives such as incrementally-verifiable computation (IVC) and proof-carrying data (PCD). All existing approaches to recursive composition take a succinct non-interactive argument of knowledge (SNARK) and use it to prove a statement about its own verifier. This technique requires that the verifier run in time sublinear in the size of the statement it is checking, a strong requirement that restricts the class of SNARKs from which PCD can be built. This in turn restricts the efficiency and security properties of the resulting scheme. Bowe, Grigg, and Hopwood (ePrint 2019/1021) outlined a novel approach to recursive composition, and applied it to a particular SNARK construction which does *not* have a sublinear-time verifier. However, they omit details about this approach and do not prove that it satisfies any security property. Nonetheless, schemes based on their ideas have already been implemented in software. In this work we present a collection of results that establish the theoretical foundations for a generalization of the above approach. We define an *accumulation scheme* for a non-interactive argument, and show that this suffices to construct PCD, even if the argument itself does not have a sublinear-time verifier. Moreover, we give constructions of accumulation schemes for SNARKs, which yield PCD schemes with novel efficiency and security features.
2020
TCC
Linear-Time Arguments with Sublinear Verification from Tensor Codes 📺
Minimizing the computational cost of the prover is a central goal in the area of succinct arguments. In particular, it remains a challenging open problem to construct a succinct argument where the prover runs in linear time and the verifier runs in polylogarithmic time. We make progress towards this goal by presenting a new linear-time probabilistic proof. For any fixed $\epsilon > 0$, we construct an interactive oracle proof (IOP) that, when used for the satisfiability of an N-gate arithmetic circuit, has a prover that uses O(N) field operations and a verifier that uses $O(N^{\epsilon})$ field operations. The sublinear verifier time is achieved in the holographic setting for every circuit (the verifier has oracle access to a linear-size encoding of the circuit that is computable in linear time). When combined with a linear-time collision-resistant hash function, our IOP immediately leads to an argument system where the prover performs O(N) field operations and hash computations, and the verifier performs $O(N^{\epsilon})$ field operations and hash computations (given a short digest of the N-gate circuit).
2020
TCC
On Perfect Correctness in (Lockable) Obfuscation 📺
In a lockable obfuscation scheme a party takes as input a program P, a lock value alpha, a message msg, and produces an obfuscated program P'. The obfuscated program can be evaluated on an input x to learn the message msg if P(x) = alpha. The security of such schemes states that if alpha is randomly chosen (independent of P and msg), then one cannot distinguish an obfuscation of P from a dummy obfuscation. Existing constructions of lockable obfuscation achieve provable security under the Learning with Errors assumption. One limitation of these constructions is that they achieve only statistical correctness and allow for a possible one-sided error where the obfuscated program could output msg on some value x where P(x) \neq alpha. In this work we motivate the problem of studying perfect correctness in lockable obfuscation for the case where the party performing the obfuscation might wish to inject a backdoor or hole in the correctness. We begin by studying the existing constructions and identify two components that are susceptible to imperfect correctness. The first is in the LWE-based pseudo-random generators (PRGs) that are non-injective, while the second is in the last level testing procedure of the core constructions. We address each in turn. First, we build upon previous work to design injective PRGs that are provably secure from the LWE assumption. Next, we design an alternative last level testing procedure that has additional structure to prevent correctness errors. We then provide a surgical proof of security (to avoid redundancy) that connects our construction to the construction by Goyal, Koppula, and Waters (GKW). Specifically, we show how, for a random value alpha, an obfuscation under our new construction is indistinguishable from an obfuscation under the existing GKW construction.
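For orientation, here is a toy and deliberately INSECURE Python sketch of the lockable-obfuscation functionality only (evaluate P, compare against the lock, release msg on a match); real constructions are lattice-based and hide alpha, and the paper's point is precisely that this comparison step must not admit correctness errors. All names are illustrative.

import hashlib, secrets

def obfuscate(P, alpha, msg):
    # Store a salted hash of the lock and msg XORed with a key derived
    # from alpha. This shows the interface, not any real security.
    salt = secrets.token_bytes(16)
    tag = hashlib.sha256(salt + alpha).digest()
    key = hashlib.sha256(b"key" + salt + alpha).digest()
    ct = bytes(m ^ k for m, k in zip(msg, key))  # msg at most 32 bytes here
    def P_obf(x):
        y = P(x)
        if hashlib.sha256(salt + y).digest() == tag:
            k = hashlib.sha256(b"key" + salt + y).digest()
            return bytes(c ^ kk for c, kk in zip(ct, k))
        return None  # a false positive here would need a SHA-256 collision
    return P_obf

P = lambda x: hashlib.sha256(x).digest()    # some program with 256-bit output
alpha = P(b"the right input")               # lock value
P_obf = obfuscate(P, alpha, b"secret msg")
assert P_obf(b"the right input") == b"secret msg"
assert P_obf(b"wrong input") is None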
2020
TCC
Barriers for Succinct Arguments in the Random Oracle Model 📺
We establish barriers on the efficiency of succinct arguments in the random oracle model. We give evidence that, under standard complexity assumptions, there do not exist succinct arguments where the argument verifier makes a small number of queries to the random oracle. The new barriers follow from new insights into how probabilistic proofs play a fundamental role in constructing succinct arguments in the random oracle model. *IOPs are necessary for succinctness.* We prove that any succinct argument in the random oracle model can be transformed into a corresponding interactive oracle proof (IOP). The query complexity of the IOP is related to the succinctness of the argument. *Algorithms for IOPs.* We prove that if a language has an IOP with good soundness relative to query complexity, then it can be decided via a fast algorithm with small space complexity. By combining these results we obtain barriers for a large class of deterministic and non-deterministic languages. For example, a succinct argument for 3SAT with few verifier queries implies an IOP with good parameters, which in turn implies a fast algorithm for 3SAT that contradicts the Exponential-Time Hypothesis. We additionally present results that shed light on the necessity of several features of probabilistic proofs that are typically used to construct succinct arguments, such as holography and state restoration soundness. Our results collectively provide an explanation for "why" known constructions of succinct arguments have a certain structure.
2020
TCC
Schr{\"o}dinger's Pirate: How To Trace a Quantum Decoder 📺
We explore the problem of traitor tracing where the pirate decoder can contain a quantum state. Our main results include: (1) we show how to overcome numerous definitional challenges to give a meaningful notion of tracing for quantum decoders; (2) we give negative results, demonstrating barriers to adapting classical tracing algorithms to the quantum decoder setting; (3) on the other hand, we show how to trace quantum decoders in the setting of (public key) private linear broadcast encryption, capturing a common approach to traitor tracing.
2020
TCC
Expected Constant Round Byzantine Broadcast under Dishonest Majority 📺
Byzantine Broadcast (BB) is a central question in distributed systems, and an important challenge is to understand its round complexity. Under the honest majority setting, it has long been known that there exist randomized protocols that can achieve BB in expected constant rounds, regardless of the number of nodes $n$. However, whether we can match the expected constant round complexity in the corrupt majority setting --- or more precisely, when $f \geq n/2 + \omega(1)$ --- remains unknown, where $f$ denotes the number of corrupt nodes. In this paper, we are the first to resolve this long-standing question. We show how to achieve BB in expected $O((n/(n-f))^2)$ rounds. In particular, even when 99\% of the nodes are corrupt we can achieve expected constant rounds. Our results hold under both a static adversary and a weakly adaptive adversary who cannot perform ``after-the-fact removal'' of messages already sent by a node before it becomes corrupt.
2020
TCC
A Lower Bound for One-Round Oblivious RAM 📺
We initiate a fine-grained study of the round complexity of Oblivious RAM (ORAM). We prove that any one-round balls-in-bins ORAM that does not duplicate balls must have either $\Omega(\sqrt{N})$ bandwidth or $\Omega(\sqrt{N})$ client memory, where $N$ is the number of memory slots being simulated. This shows that such schemes are strictly weaker than general (multi-round) ORAMs or those with server computation, and in particular implies that a one-round version of the original square-root ORAM of Goldreich and Ostrovsky (J. ACM 1996) is optimal. We prove this bound via new techniques that differ from those of Goldreich and Ostrovsky, and of Larsen and Nielsen (CRYPTO 2018), which achieved an $\Omega(\log N)$ bound for balls-in-bins and general multi-round ORAMs respectively. Finally, we give a weaker extension of our bound that allows for limited duplication of balls, and also show that our bound extends to multiple-round ORAMs of a restricted form that include the best known constructions.
2020
TCC
Can a Blockchain Keep a Secret? 📺
Blockchains are gaining traction and acceptance, not just for cryptocurrencies, but increasingly as an architecture for distributed computing. In this work we seek solutions that allow a \emph{public} blockchain to act as a trusted long-term repository of secret information: Our goal is to deposit a secret with the blockchain, specify how it is to be used (e.g., the conditions under which it is released), and have the blockchain keep the secret and use it only in the specified manner (e.g., release it only once the conditions are met). This simple functionality enables many powerful applications, including signing statements on behalf of the blockchain, using it as the control plane for a storage system, performing decentralized program-obfuscation-as-a-service, and many more. Using proactive secret sharing techniques, we present a scalable solution for implementing this functionality on a public blockchain, in the presence of a mobile adversary controlling a small minority of the participants. The main challenge is that, on the one hand, scalability requires that we use small committees to represent the entire system, but, on the other hand, a mobile adversary may be able to corrupt the entire committee if it is small. For this reason, existing proactive secret sharing solutions are either non-scalable or insecure in our setting. We approach this challenge via "player replaceability", which ensures the committee is anonymous until after it performs its actions. Our main technical contribution is a system that allows sharing and re-sharing of secrets among the members of small dynamic committees, without knowing who they are until after they perform their actions and erase their secrets. Our solution handles a fully mobile adversary corrupting roughly 1/4 of the participants at any time, and is scalable in terms of both the number of parties and the number of time intervals.
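The core re-sharing step can be illustrated in a few lines of Python: each member of the old committee Shamir-shares its own share to the new committee and erases it, and the new members combine the received sub-shares with Lagrange coefficients. This is only the basic hand-off, without the anonymity ("player replaceability") machinery the paper builds around it; all parameters are toy values.

import random

p = 2**13 - 1   # small prime field; illustrative only
t = 2           # degree-t sharing; any t+1 shares reconstruct

def share(v, xs):
    coeffs = [v] + [random.randrange(p) for _ in range(t)]
    return {x: sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p for x in xs}

def lagrange0(xs):
    # Lagrange coefficients for interpolating at 0 from the points xs.
    out = []
    for i in xs:
        num = den = 1
        for j in xs:
            if j != i:
                num = num * (-j) % p
                den = den * (i - j) % p
        out.append(num * pow(den, p - 2, p) % p)
    return out

secret = 4242
old_xs, new_xs = [1, 2, 3], [4, 5, 6]
old_shares = share(secret, old_xs)

# Each old member re-shares its share to the new committee, then erases it.
subshares = {i: share(old_shares[i], new_xs) for i in old_xs}
lam = lagrange0(old_xs)
new_shares = {x: sum(l * subshares[i][x] for l, i in zip(lam, old_xs)) % p
              for x in new_xs}

# The new committee now holds a fresh sharing of the same secret.
pts = list(new_shares.items())
lam_new = lagrange0([x for x, _ in pts])
assert sum(l * y for l, (_, y) in zip(lam_new, pts)) % p == secret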
2020
TCC
New Techniques in Replica Encodings with Client Setup 📺
A proof of replication system is a cryptographic primitive that allows a server (or group of servers) to prove to a client that it is dedicated to storing multiple copies or replicas of a file. Until recently, all such protocols required fine-grained timing assumptions on the amount of time it takes for a server to produce such replicas. Damgard, Ganesh, and Orlandi [DGO19] proposed a novel notion that we will call proof of replication with client setup. Here, a client first operates with secret coins to generate the replicas for a file. Such systems do not inherently have to require fine-grained timing assumptions. At the core of their solution to building proofs of replication with client setup is an abstraction called replica encodings. Briefly, these comprise a private coin scheme where a client algorithm given a file m can produce an encoding \sigma. The encodings have the property that, given any encoding \sigma, one can decode and retrieve the original file m. Second, if a server has significantly less than n·|m| bits of storage, it cannot reproduce n encodings. The authors give a construction of encodings from ideal permutations and trapdoor functions. In this work, we make three central contributions: 1) Our first contribution is that we discover and demonstrate that the security argument put forth by [DGO19] is fundamentally flawed. Briefly, the security argument makes assumptions on the attacker's storage behavior that do not capture general attacker strategies. We demonstrate this issue by constructing a trapdoor permutation, secure assuming indistinguishability obfuscation, that serves as a counterexample to their claim (for the parameterization stated). 2) In our second contribution we show that the DGO construction is actually secure in the ideal permutation model from any trapdoor permutation when parameterized correctly. In particular, when the number of rounds in the construction is equal to \lambda·n·b, where \lambda is the security parameter, n is the number of replicas, and b is the number of blocks. To do so we build up a proof approach from the ground up that accounts for general attacker storage behavior, where we create an analysis technique that we call "sequence-then-switch". 3) Finally, we show a new construction that is provably secure in the random oracle (or random function) model, thus requiring less structure on the ideal function.
2020
TCC
Round-Efficient Byzantine Broadcast under Strongly Adaptive and Majority Corruptions 📺
The round complexity of Byzantine Broadcast (BB) has been a central question in distributed systems and cryptography. In the honest majority setting, expected constant round protocols have been known for decades even in the presence of a strongly adaptive adversary. In the corrupt majority setting, however, no protocol with sublinear round complexity is known, even when the adversary is allowed to {\it strongly adaptively} corrupt only 51\% of the players, and even under reasonable setup or cryptographic assumptions. Recall that a strongly adaptive adversary can examine what original message an honest player would have wanted to send in some round, adaptively corrupt the player in the same round and make it send a completely different message instead. In this paper, we are the first to construct a BB protocol with sublinear round complexity in the corrupt majority setting. Specifically, assuming the existence of time-lock puzzles with suitable hardness parameters and other standard cryptographic assumptions, we show how to achieve BB in $(\frac{n}{n-f})^2 \cdot \poly\log \lambda$ rounds with $1-\negl(\lambda)$ probability, where $n$ denotes the total number of players, $f$ denotes the maximum number of corrupt players, and $\lambda$ is the security parameter. Our protocol completes in polylogarithmically many rounds even when 99\% of the players can be corrupt.
2020
TCC
Quantum encryption with certified deletion 📺
Given a ciphertext, is it possible to prove the deletion of the underlying plaintext? Since classical ciphertexts can be copied, clearly such a feat is impossible using classical information alone. In stark contrast to this, we show that quantum encodings enable certified deletion. More precisely, we show that it is possible to encrypt classical data into a quantum ciphertext such that the recipient of the ciphertext can produce a classical string which proves to the originator that the recipient has relinquished any chance of recovering the plaintext should the key be revealed. Our scheme is feasible with current quantum technology: the honest parties only require quantum devices for single-qubit preparation and measurements; the scheme is also robust against noise in these devices. Furthermore, we provide an analysis that is suitable in the finite-key regime.
2020
TCC
Secure Quantum Extraction Protocols 📺
Knowledge extraction, typically studied in the classical setting, is at the heart of several cryptographic protocols. The prospect of quantum computers forces us to revisit the concept of knowledge extraction in the presence of quantum adversaries. We introduce the notion of secure quantum extraction protocols. A secure quantum extraction protocol for an NP relation $R$ is a classical interactive protocol between a sender and a receiver, where the sender gets as input the instance $x$ and witness $w$ while the receiver only gets the instance $x$ as input. There are two properties associated with a secure quantum extraction protocol: (a) {\em Extractability}: for any efficient quantum polynomial-time (QPT) adversarial sender, there exists a QPT extractor that can extract a witness $w'$ such that $(x,w') \in R$ and, (b) {\em Zero-Knowledge}: a malicious receiver, interacting with the sender, should not be able to learn any information about $w$. We study and construct two flavors of secure quantum extraction protocols. \begin{itemize} \item {\bf Security against QPT malicious receivers}: First we consider the setting when the malicious receiver is a QPT adversary. In this setting, we construct a secure quantum extraction protocol for NP assuming the existence of quantum fully homomorphic encryption satisfying some mild properties (already satisfied by existing constructions [Mahadev, FOCS'18, Brakerski CRYPTO'18]) and quantum hardness of learning with errors. The novelty of our construction is a new non-black-box technique in the quantum setting. All previous extraction techniques in the quantum setting were solely based on quantum rewinding. \item {\bf Security against classical PPT malicious receivers}: We also consider the setting when the malicious receiver is a classical probabilistic polynomial time (PPT) adversary. In this setting, we construct a secure quantum extraction protocol for NP solely based on the quantum hardness of learning with errors. Furthermore, our construction satisfies {\em quantum-lasting security}: a malicious receiver cannot later, long after the protocol has been executed, use a quantum computer to extract a valid witness from the transcript of the protocol. \end{itemize} Both of the above extraction protocols are {\em constant round} protocols. We present an application of secure quantum extraction protocols to zero-knowledge (ZK). Assuming the quantum hardness of learning with errors, we present the first construction of constant-round ZK argument systems for NP with: (a) zero-knowledge against QPT malicious verifiers and, (b) soundness against classical PPT adversaries. Moreover, our construction satisfies the stronger (quantum) auxiliary-input zero-knowledge property and thus can be composed with other protocols secure against quantum adversaries.
2020
TCC
On Statistical Security in Two-Party Computation 📺
There has been a large body of work characterizing the round complexity of general-purpose maliciously secure two-party computation (2PC) against probabilistic polynomial time adversaries. This is particularly true for zero knowledge, which is a special case of 2PC. In fact, for zero knowledge, optimal protocols with unconditional security against one of the two players have also been meticulously studied and constructed. On the other hand, general-purpose maliciously secure 2PC with statistical or unconditional security against one of the two participants has remained largely unexplored so far. In this work, we initiate the study of such protocols, which we refer to as 2PC with one-sided statistical security. We completely settle the round complexity of 2PC with one-sided statistical security with respect to black-box simulation by obtaining the following tight results: - In a setting where only one party obtains an output, we design 2PC in 4 rounds with statistical security against receivers and computational security against senders. - In a setting where both parties obtain outputs, we design 2PC in 5 rounds with computational security against the party that obtains output first and statistical security against the party that obtains output last. Katz and Ostrovsky (CRYPTO 2004) showed that 2PC with black-box simulation requires at least 4 rounds when one party obtains an output and 5 rounds when both parties obtain outputs, even when only computational security is desired against both parties. Thus in these settings, not only are our results tight, but they also show that statistical security is achievable at no extra cost to round complexity. This still leaves open the question of whether 2PC can be achieved with black-box simulation in 4 rounds with statistical security against senders and computational security against receivers. Based on a lower bound on computational zero-knowledge proofs due to Katz (TCC 2008), we observe that the answer is negative unless the polynomial hierarchy collapses.
2020
TCC
CP-ABE for Circuits (and more) in the Symmetric Key Setting 📺
The celebrated work of Gorbunov, Vaikuntanathan and Wee [GVW13] provided the first key-policy attribute-based encryption (ABE) scheme for circuits from the Learning With Errors (LWE) assumption. However, the arguably more natural ciphertext-policy variant has remained elusive, and is a central primitive not yet known from LWE. In this work, we construct the first symmetric-key ciphertext-policy attribute-based encryption (CP-ABE) scheme for all polynomial-sized circuits from the LWE assumption. In more detail, the ciphertext for a message m is labelled with an access control policy f, secret keys are labelled with public attributes x from the domain of f, and decryption succeeds to yield the hidden message m if and only if f(x) = 1. The sizes of our public and secret keys do not depend on the size of the circuits supported by the scheme – this enables our construction to support circuits of unbounded size (but bounded depth). Our construction is secure against collusions of unbounded size. We note that the current best CP-ABE schemes [BSW07, Wat11, LOS+10, OT10, LW12, RW13, Att14, Wee14, AHY15, CGW15, AC17, KW19] rely on pairings and only support circuits in the class NC1 (albeit in the public-key setting). We adapt our construction to the public-key setting for the case of bounded-size circuits. The size of the ciphertext and secret key as well as the running time of encryption, key generation and decryption satisfy the efficiency properties desired from CP-ABE, assuming that all algorithms have RAM access to the public key. However, the running time of the setup algorithm and the size of the public key depend on the circuit size bound, restricting the construction to support circuits of a priori bounded size. We remark that the inefficiency of setup is somewhat mitigated by the fact that setup must only be run once. We generalize our construction to consider attribute and function hiding. The compiler of lockable obfuscation upgrades any attribute-based encryption scheme to predicate encryption, i.e., with attribute hiding [GKW17, WZ17]. Since lockable obfuscation can be constructed from LWE, we achieve ciphertext-policy predicate encryption immediately. For function privacy, we show that the most natural notion of function-hiding ABE for circuits, even in the symmetric-key setting, is sufficient to imply indistinguishability obfuscation. We define a suitable weakening of function hiding to sidestep the implication and provide a construction to achieve this notion for both the key-policy and ciphertext-policy cases. Previously, the largest function class for which function-private predicate encryption (supporting unbounded keys) could be achieved was inner product zero testing, by Shen, Shi and Waters [SSW09].
2020
TCC
Optimal Broadcast Encryption from LWE and Pairings in the Standard Model 📺
Broadcast Encryption with optimal parameters was a long-standing problem, whose first solution was provided in an elegant work by Boneh, Waters and Zhandry \cite{BWZ14}. However, this work relied on multilinear maps of logarithmic degree, which is not considered a standard assumption. Recently, Agrawal and Yamada \cite{AY20} improved this state of affairs by providing the first construction of optimal broadcast encryption from Bilinear Maps and Learning With Errors (LWE). However, their proof of security was in the generic bilinear group model. In this work, we improve upon their result by providing a new construction and proof in the standard model. In more detail, we rely on the Learning With Errors (LWE) assumption and the Knowledge of OrthogonALity Assumption (KOALA) \cite{BW19} on bilinear groups. Our construction combines three building blocks: a (computational) nearly linear secret sharing scheme with compact shares which we construct from LWE, an inner-product functional encryption scheme with special properties which is constructed from the bilinear Matrix Decision Diffie Hellman (MDDH) assumption, and a certain form of hyperplane obfuscation, which is constructed using the KOALA assumption. While similar to that of Agrawal and Yamada, our construction provides a new understanding of how to decompose the construction into simpler, modular building blocks with concrete and easy-to-understand security requirements for each one. We believe this sheds new light on the requirements for optimal broadcast encryption, which may lead to new constructions in the future.
2020
TCC
Equipping Public-Key Cryptographic Primitives with Watermarking (or: A Hole Is to Watermark) 📺
Program watermarking enables users to embed an arbitrary string called a mark into a program while preserving the functionality of the program. Adversaries cannot remove the mark without destroying the functionality. Although there exist generic constructions of watermarking schemes for public-key cryptographic (PKC) primitives, those schemes are constructed from scratch and are not efficient. In this work, we present a general framework to equip a broad class of PKC primitives with an efficient watermarking scheme. The class consists of PKC primitives that have a canonical all-but-one (ABO) reduction. Canonical ABO reductions are standard techniques to prove selective security of PKC primitives, where adversaries must commit to a target attribute at the beginning of the security game. Thus, we can obtain watermarking schemes for many existing efficient PKC schemes from standard cryptographic assumptions via our framework. Most well-known selectively secure PKC schemes have canonical ABO reductions. Notably, we can achieve watermarking for public-key encryption whose ciphertexts and secret keys are constant-size, and that is chosen-ciphertext secure. Our approach accommodates the canonical ABO reduction technique to the puncturable pseudorandom function (PRF) technique, which is used to achieve watermarkable PRFs. We find that canonical ABO reductions are compatible with such puncturable PRF-based watermarking schemes.
2020
TCC
Robust Secret Sharing with Almost Optimal Share Size and Security Against Rushing Adversaries 📺
We show a robust secret sharing scheme for a maximal threshold $t < n/2$ that features an optimal overhead in share size, offers security against a rushing adversary, and runs in polynomial time. Previous robust secret sharing schemes for $t < n/2$ either suffered from a suboptimal overhead, offered no (provable) security against a rushing adversary, or ran in superpolynomial time.
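To see what "robust" means operationally, the toy Python sketch below reconstructs a Shamir secret correctly even when t shares are adversarially altered, by exhaustive search over (t+1)-subsets. Note that this naive decoder needs t < n/3 and exponential time; the paper's contribution is precisely achieving the maximal t < n/2, against rushing adversaries, in polynomial time with near-optimal share size, via machinery not shown here.

import random
from itertools import combinations

p, t, n = 2**13 - 1, 2, 7   # t < n/3, so plain unique decoding suffices

def interp(points, x0):
    # Lagrange interpolation of a degree-t polynomial at x0.
    s = 0
    for i, yi in points:
        num = den = 1
        for j, _ in points:
            if j != i:
                num = num * (x0 - j) % p
                den = den * (i - j) % p
        s = (s + yi * num * pow(den, p - 2, p)) % p
    return s

secret = 777
coeffs = [secret] + [random.randrange(p) for _ in range(t)]
shares = {i: sum(c * pow(i, k, p) for k, c in enumerate(coeffs)) % p
          for i in range(1, n + 1)}
for i in random.sample(sorted(shares), t):   # adversary corrupts t shares
    shares[i] = random.randrange(p)

# Try every (t+1)-subset; the honest polynomial is the unique degree-t
# polynomial consistent with at least n - t of the submitted shares.
for subset in combinations(shares.items(), t + 1):
    agree = sum(interp(list(subset), i) == y for i, y in shares.items())
    if agree >= n - t:
        assert interp(list(subset), 0) == secret
        break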
2020
TCC
The Share Size of Secret-Sharing Schemes for Almost All Access Structures and Graphs 📺
The share size of general secret-sharing schemes is poorly understood. The gap between the best known upper bound on the total share size per party of $2^{0.64n}$ (Applebaum et al., STOC 2020) and the best known lower bound of $\Omega(n/\log n)$ (Csirmaz, J. of Cryptology 1997) is huge (where $n$ is the number of parties in the scheme). To gain some understanding on this problem, we study the share size of secret-sharing schemes of almost all access structures, i.e., of almost all collections of authorized sets. This is motivated by the fact that in complexity theory, almost all objects are often the hardest ones (e.g., most Boolean functions require exponential-size circuits). All previous constructions of secret-sharing schemes were for the worst access structures (i.e., all access structures) or for specific families of access structures. We prove upper bounds on the share size for almost all access structures. We combine results on almost all monotone Boolean functions (Korshunov, Probl. Kibern. 1981) and a construction of (Liu and Vaikuntanathan, STOC 2018) and conclude that almost all access structures have a secret-sharing scheme with share size $2^{\tilde{O}(\sqrt{n})}$. We also study graph secret-sharing schemes. In these schemes, the parties are vertices of a graph and a set can reconstruct the secret if and only if it contains an edge. Again, for this family there is a huge gap between the upper bounds -- $O(n/\log n)$ (Erd\"{o}s and Pyber, Discrete Mathematics 1997) -- and the lower bounds -- $\Omega(\log n)$ (van Dijk, Des. Codes Crypto. 1995). We show that for almost all graphs, the share size of each party is $n^{o(1)}$. This result is achieved by using robust 2-server conditional disclosure of secrets protocols, a new primitive introduced and constructed in (Applebaum et al., STOC 2020), and the fact that the size of the maximal independent set in a random graph is small. Finally, using robust conditional disclosure of secrets protocols, we improve the total share size for all very dense graphs.
2020
TCC
Batch Verification for Statistical Zero Knowledge Proofs 📺
A statistical zero-knowledge proof (SZK) for a problem $\Pi$ enables a computationally unbounded prover to convince a polynomial-time verifier that $x \in \Pi$ without revealing any additional information about $x$ to the verifier, in a strong information-theoretic sense. Suppose, however, that the prover wishes to convince the verifier that $k$ separate inputs $x_1,\dots,x_k$ all belong to $\Pi$ (without revealing anything else). A naive way of doing so is to simply run the SZK protocol separately for each input. In this work we ask whether one can do better -- that is, is efficient batch verification possible for SZK? We give a partial positive answer to this question by constructing a batch verification protocol for a natural and important subclass of SZK -- all problems $\Pi$ that have a non-interactive SZK protocol (in the common random string model). More specifically, we show that, for every such problem $\Pi$, there exists an honest-verifier SZK protocol for batch verification of $k$ instances, with communication complexity $poly(n) + k \cdot poly(\log{n},\log{k})$, where $poly$ refers to a fixed polynomial that depends only on $\Pi$ (and not on $k$). This result should be contrasted with the naive solution, which has communication complexity $k \cdot poly(n)$. Our proof leverages a new NISZK-complete problem, called Approximate Injectivity, that we find to be of independent interest. The goal in this problem is to distinguish circuits that are nearly injective, from those that are non-injective on almost all inputs.
2020
TCC
Coupling of Random Systems 📺
This paper makes three contributions. First, we present a simple theory of random systems. The main idea is to think of a probabilistic system as an equivalence class of distributions over deterministic systems. Second, we demonstrate how in this new theory, the optimal information-theoretic distinguishing advantage between two systems can be characterized merely in terms of the statistical distance of probability distributions, providing a more elementary understanding of the distance of systems. In particular, two systems that are epsilon-close in terms of the best distinguishing advantage can be understood as being equal with probability 1-epsilon, a property that holds statically, without even considering a distinguisher, let alone its interaction with the systems. Finally, we exploit this new characterization of the distinguishing advantage to prove that any threshold combiner is an amplifier for indistinguishability in the information-theoretic setting, generalizing and simplifying results from Maurer, Pietrzak, and Renner (CRYPTO 2007).
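A minimal Python sketch of the underlying probabilistic fact may help: for two finite distributions there is a maximal coupling whose two components differ exactly with probability equal to the statistical distance, which is what lets epsilon-close systems be viewed as equal except on an epsilon-probability event. The example distributions are illustrative; the code assumes P differs from Q.

from fractions import Fraction as F

P = {"a": F(1, 2), "b": F(1, 2), "c": F(0)}
Q = {"a": F(1, 4), "b": F(1, 2), "c": F(1, 4)}

overlap = {x: min(P[x], Q[x]) for x in P}   # mass the two distributions share
tv = 1 - sum(overlap.values())               # statistical (total-variation) distance

# Build the maximal coupling: put mass min(P,Q) on the diagonal (X = Y),
# and pair the residual P-mass with the residual Q-mass off the diagonal.
coupling = {(x, x): m for x, m in overlap.items() if m > 0}
resP = {x: P[x] - overlap[x] for x in P if P[x] > overlap[x]}
resQ = {y: Q[y] - overlap[y] for y in Q if Q[y] > overlap[y]}
for x, mx in resP.items():
    for y, my in resQ.items():
        coupling[(x, y)] = coupling.get((x, y), 0) + mx * my / tv

assert sum(coupling.values()) == 1
assert sum(m for (x, y), m in coupling.items() if x != y) == tv  # X != Y w.p. tv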
2020
TCC
Towards Defeating Backdoored Random Oracles: Indifferentiability with Bounded Adaptivity 📺
In the backdoored random-oracle (BRO) model, besides access to a random function $H$, adversaries are provided with a backdoor oracle that can compute arbitrary leakage functions $f$ of the function table of $H$. Thus, an adversary would be able to invert points, find collisions, test for membership in certain sets, and more. This model was introduced in the work of Bauer, Farshim, and Mazaheri (Crypto 2018) and extends the auxiliary-input idealized models of Unruh (Crypto 2007), Dodis, Guo, and Katz (Eurocrypt 2017), Coretti et al. (Eurocrypt 2018), and Coretti, Dodis, and Guo (Crypto 2018). It was shown that certain security properties, such as one-wayness, pseudorandomness, and collision resistance can be re-established by combining two independent BROs, even if the adversary has access to both backdoor oracles. In this work we further develop the technique of combining two or more independent BROs to render their backdoors useless in a more general sense. More precisely, we study the question of building an \emph{indifferentiable} and backdoor-free random function by combining multiple BROs. Achieving full indifferentiability in this model seems very challenging at the moment. We however make progress by showing that the xor combiner goes well beyond security against preprocessing attacks and offers indifferentiability as long as the adaptivity of queries to different backdoor oracles remains logarithmic in the input size of the BROs. We even show that an extractor-based combiner of three BROs can achieve indifferentiability with respect to a linear adaptivity of backdoor queries. Furthermore, a natural restriction of our definition gives rise to a notion of \emph{indifferentiability with auxiliary input}, for which we give two positive feasibility results. To prove these results we build on and refine techniques by Göös et al. (STOC 2015) and Kothari et al. (STOC 2017) for decomposing distributions with high entropy into distributions with more structure and show how they can be applied in the more involved adaptive settings.
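For reference, here is a minimal Python sketch of the xor combiner the paper analyzes, with SHA-256 under distinct prefixes standing in for the two independent backdoored random oracles:

import hashlib

def H1(x: bytes) -> bytes:
    return hashlib.sha256(b"oracle-1" + x).digest()   # stand-in for one BRO

def H2(x: bytes) -> bytes:
    return hashlib.sha256(b"oracle-2" + x).digest()   # independent stand-in

def H(x: bytes) -> bytes:
    # Even if each component leaks through its own backdoor oracle, the paper
    # shows the xor behaves indifferentiably from a random function as long as
    # the adaptivity of queries across the two backdoors stays bounded.
    return bytes(a ^ b for a, b in zip(H1(x), H2(x)))

print(H(b"hello").hex())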
2020
TCC
Revisiting Fairness in MPC: Polynomial Number of Parties and General Adversarial Structures 📺
We investigate fairness in secure multiparty computation when the number of parties n = poly(lambda) grows polynomially in the security parameter, lambda. Prior to this work, efficient protocols achieving fairness with no honest majority and polynomial number of parties were known only for the AND and OR functionalities (Gordon and Katz, TCC'09). We show the following: --We first consider symmetric Boolean functions F : {0,1}^n -> {0,1}, where the underlying function f_{n/2,n/2}: {0, ..., n/2} x {0, ..., n/2} -> {0,1} can be computed fairly and efficiently in the 2-party setting. We present an efficient protocol for any such F tolerating n/2 or fewer corruptions, for n = poly(lambda) number of parties. --We present an efficient protocol for n-party majority tolerating n/2+1 or fewer corruptions, for n = poly(lambda) number of parties. The construction extends to n/2+c or fewer corruptions, for constant c. --We extend both of the above results to more general types of adversarial structures and present instantiations of non-threshold adversarial structures of these types. These instantiations are obtained via constructions of projective planes and combinatorial designs.
2020
TCC
On the Price of Concurrency in Group Ratcheting Protocols 📺
Post-Compromise Security, or PCS, refers to the ability of a given protocol to recover—by means of normal protocol operations—from the exposure of local states of its (otherwise honest) participants. While PCS in the two-party setting has attracted a lot of attention recently, the problem of achieving PCS in the group setting—called group ratcheting here—is much less understood. On the one hand, one can achieve excellent security by simply executing, in parallel, a two-party ratcheting protocol (e.g., Signal) for each pair of members in a group. However, this incurs O(n) communication overhead for every message sent, where n is the group size. On the other hand, several related protocols were recently developed in the context of the IETF Messaging Layer Security (MLS) effort that improve the communication overhead per message to O(log n). However, this reduction of communication overhead involves a great restriction: group members are not allowed to send and recover from exposures concurrently such that reaching PCS is delayed up to n communication time slots (potentially even more). In this work we formally study the trade-off between PCS, concurrency, and communication overhead in the context of group ratcheting. Since our main result is a lower bound, we define the cleanest and most restrictive setting where the tension already occurs: static groups equipped with a synchronous (and authenticated) broadcast channel, where up to t arbitrary parties can concurrently send messages in any given round. Already in this setting, we show in a symbolic execution model that PCS requires Omega(t) communication overhead per message. Our symbolic model permits as building blocks black-box use of (even "dual") PRFs, (even key-updatable) PKE (which in our symbolic definition is at least as strong as HIBE), and broadcast encryption, covering all tools used in previous constructions, but prohibiting the use of exotic primitives. To complement our result, we also prove an almost matching upper bound of O(t(1+log(n/t))), which smoothly increases from O(log n) with no concurrency, to O(n) with unbounded concurrency, matching the previously known protocols.
2020
TCC
Stronger Security and Constructions of Multi-Designated Verifier Signatures 📺
Off-the-Record (OTR) messaging is a two-party message authentication protocol that also provides plausible deniability: there is no record that can later convince a third party what messages were actually sent. The challenge in group OTR is to enable the sender to sign his messages so that group members can verify who sent a message (signatures should be unforgeable, even by group members). Also, we want the off-the-record property: even if some verifiers are corrupt and collude, they should not be able to prove the authenticity of a message to any outsider. Finally, we need consistency, meaning that if any group member accepts a signature, then all of them do. To achieve these properties it is natural to consider Multi-Designated Verifier Signatures (MDVS). However, existing literature defines and builds only limited notions of MDVS, where (a) the off-the-record property (source hiding) only holds when all verifiers could conceivably collude, and (b) the consistency property is not considered. The contributions of this paper are two-fold: stronger definitions for MDVS, and new constructions meeting those definitions. We strengthen source-hiding to support any subset of corrupt verifiers, and give the first formal definition of consistency. We build three new MDVS: one from generic standard primitives (PRF, key agreement, NIZK), one with concrete efficiency, and one from functional encryption.
2020
TCC
Non-interactive classical verification of quantum computation 📺
In a recent breakthrough, Mahadev constructed an interactive protocol that enables a purely classical party to delegate any quantum computation to an untrusted quantum prover. We show that this same task can in fact be performed non-interactively (with setup) and in zero-knowledge. Our protocols result from a sequence of significant improvements to the original four-message protocol of Mahadev. We begin by making the first message instance-independent and moving it to an offline setup phase. We then establish a parallel repetition theorem for the resulting three-message protocol, with an asymptotically optimal rate. This, in turn, enables an application of the Fiat-Shamir heuristic, eliminating the second message and giving a non-interactive protocol. Finally, we employ classical non-interactive zero-knowledge (NIZK) arguments and classical fully homomorphic encryption (FHE) to give a zero-knowledge variant of this construction. This yields the first purely classical NIZK argument system for QMA, a quantum analogue of NP. We establish the security of our protocols under standard assumptions in quantum-secure cryptography. Specifically, our protocols are secure in the Quantum Random Oracle Model, under the assumption that Learning with Errors is quantumly hard. The NIZK construction also requires circuit-private FHE.
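Since the round-collapsing step is the classical Fiat-Shamir transform, a minimal Python sketch of that transform on a Schnorr-style sigma protocol may help fix ideas; the paper applies the same idea, in the quantum random oracle model, to the parallel-repeated Mahadev protocol, which is far more involved. The tiny group parameters below are illustrative only.

import hashlib, secrets

p, q, g = 2879, 1439, 4    # tiny safe-prime group with order-q generator; demo only

x = secrets.randbelow(q)   # prover's witness (a discrete log)
X = pow(g, x, p)           # public statement X = g^x

# Prove knowledge of x non-interactively: the verifier's random challenge
# is replaced by a hash of the transcript so far (Fiat-Shamir).
r = secrets.randbelow(q)
A = pow(g, r, p)                                   # commitment
c = int.from_bytes(hashlib.sha256(f"{g},{X},{A}".encode()).digest(), "big") % q
z = (r + c * x) % q                                # response
proof = (A, z)

# Verification recomputes the challenge and checks a single equation.
A, z = proof
c = int.from_bytes(hashlib.sha256(f"{g},{X},{A}".encode()).digest(), "big") % q
assert pow(g, z, p) == A * pow(X, c, p) % p        # g^z = A * X^c (mod p)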
2020
TCC
Accumulators in (and Beyond) Generic Groups: Non-Trivial Batch Verification Requires Interaction 📺
We prove a tight lower bound on the number of group operations required for batch verification by any generic-group accumulator that stores a less-than-trivial amount of information. Specifically, we show that $\Omega(t \cdot (\lambda / \log \lambda))$ group operations are required for the batch verification of any subset of $t \geq 1$ elements, where $\lambda \in \mathbb{N}$ is the security parameter, thus ruling out non-trivial batch verification in the standard non-interactive manner. Our lower bound applies already to the most basic form of accumulators (i.e., static accumulators that support membership proofs), and holds both for known-order (and even multilinear) groups and for unknown-order groups, where it matches the asymptotic performance of the known bilinear and RSA accumulators, respectively. In addition, it complements the techniques underlying the generic-group accumulators of Boneh, B{\"{u}}nz and Fisch (CRYPTO '19) and Thakur (ePrint '19) by justifying their application of the Fiat-Shamir heuristic for transforming their interactive batch-verification protocols into non-interactive procedures. Moreover, motivated by a fundamental challenge introduced by Aggarwal and Maurer (EUROCRYPT '09), we propose an extension of the generic-group model that enables us to capture a bounded amount of arbitrary non-generic information (e.g., least-significant bits or Jacobi symbols that are hard to compute generically but are easy to compute non-generically). We prove our lower bound within this extended model, which may be of independent interest for strengthening the implications of impossibility results in idealized models.
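For context, here is a toy Python sketch of an RSA-style accumulator over a group of (supposedly) unknown order, the kind of construction the lower bound matches: batch-verifying t elements costs one exponentiation with a roughly t·λ-bit exponent, i.e., on the order of t·λ/log λ group operations. The modulus below is tiny and insecure, and accumulated elements must be distinct primes.

from math import prod

N = 1000003 * 999983        # toy "unknown-order" modulus; NOT secure
g = 65537                   # base element of the group
S = [3, 5, 7, 11, 13]       # accumulated set (distinct primes)

acc = pow(g, prod(S), N)    # accumulator value

def prove(subset):
    # Witness for a subset T: g raised to the product of the other elements.
    rest = prod(e for e in S if e not in subset)
    return pow(g, rest, N)

def verify(acc, subset, w):
    # Batch membership verification: one exponentiation whose exponent is the
    # product of the t elements, so its bit length grows linearly in t.
    return pow(w, prod(subset), N) == acc

T = [5, 11]
w = prove(T)
assert verify(acc, T, w)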
2020
TCC
On the Power of an Honest Majority in Three-Party Computation Without Broadcast 📺
Fully secure multiparty computation (MPC) allows a set of parties to compute some function of their inputs, while guaranteeing correctness, privacy, fairness, and output delivery. Understanding the necessary and sufficient assumptions that allow for fully secure MPC is an important goal. Cleve (STOC'86) showed that full security cannot be obtained in general without an honest majority. Conversely, by Rabin and Ben-Or (FOCS'89), assuming a broadcast channel and an honest majority, any function can be computed with full security. Our goal is to characterize the set of functionalities that can be computed with full security, assuming an honest majority, but no broadcast. This question was fully answered by Cohen et al. (TCC'16) -- for the restricted class of \emph{symmetric} functionalities (where all parties receive the same output). Instructively, their results crucially rely on \emph{agreement} and do not carry over to general \emph{asymmetric} functionalities. In this work, we focus on the case of three-party asymmetric functionalities, providing a variety of necessary and sufficient conditions to enable fully secure computation. An interesting use-case of our results is \emph{server-aided} computation, where an untrusted server helps two parties to carry out their computation. We show that without a broadcast assumption, the resource of an external non-colluding server provides no additional power. Namely, a functionality can be computed with the help of the server if and only if it can be computed without it. For fair coin tossing, we further show that the optimal bias for three-party (server-aided) $r$-round protocol remains $\Theta(1/r)$ (as in the two-party setting).
2020
TCC
Synchronous Constructive Cryptography 📺
This paper proposes a simple synchronous composable security framework as an instantiation of the Constructive Cryptography framework, aiming to capture minimally, without unnecessary artefacts, exactly what is needed to state synchronous security guarantees. The objects of study are specifications (i.e., sets) of systems, and traditional security properties like consistency and validity can naturally be understood as specifications, thus unifying composable and property-based definitions. The framework's simplicity is in contrast to current composable frameworks for synchronous computation which are built on top of an asynchronous framework (e.g. the UC framework), thus not only inheriting artefacts and complex features used to handle asynchronous communication, but adding additional overhead to capture synchronous communication. As a second, independent contribution we demonstrate how secure (synchronous) multi-party computation protocols can be understood as constructing a computer that allows a set of parties to perform an arbitrary, on-going computation. An interesting aspect is that the instructions of the computation need not be fixed before the protocol starts but can also be determined during an on-going computation, possibly depending on previous outputs.
2020
TCC
Characterizing Deterministic-Prover Zero Knowledge 📺
Randomness is typically thought to be essential for zero knowledge protocols. Following this intuition, Goldreich and Oren (Journal of Cryptology 94) proved that auxiliary-input zero knowledge cannot be achieved with a deterministic prover. On the other hand, positive results are only known in the honest-verifier setting, or when the prover is given at least a restricted source of entropy. We prove that once the verifier's auxiliary input is removed (or just bounded), deterministic-prover zero knowledge becomes feasible: - Assuming non-interactive witness-indistinguishable proofs and subexponential indistinguishability obfuscation and one-way functions, we construct deterministic-prover zero-knowledge arguments for $\mathsf{NP} \cap \mathsf{coNP}$ against verifiers with bounded non-uniform auxiliary input. - Assuming also keyless hash functions that are collision-resistant against bounded-auxiliary-input quasipolynomial-time attackers, we construct similar arguments for all of $\mathsf{NP}$. Together with the result of Goldreich and Oren, this characterizes when deterministic-prover zero knowledge is feasible. We also demonstrate the necessity of strong assumptions, by showing that deterministic-prover zero knowledge arguments for a given language imply witness encryption for that language. We further prove that such arguments can always be collapsed to two messages and be made laconic. These implications rely on a more general connection with the notion of predictable arguments by Faonio, Nielsen, and Venturi (PKC 17).
2020
TCC
Zero-Communication Reductions 📺
We introduce a new primitive in information-theoretic cryptography, namely zero-communication reductions (ZCR), with different levels of security. We relate ZCR to several other important primitives, and obtain new results on upper and lower bounds. In particular, we obtain new upper bounds for PSM, CDS and OT complexity of functions, which are exponential in the information complexity of the functions. These upper bounds complement the results of Beimel et al. (2014) which broke the circuit-complexity barrier for ``high complexity'' functions; our results break the barrier of input size for ``low complexity'' functions. We also show that lower bounds on secure ZCR can be used to establish lower bounds for OT-complexity. We recover the known (linear) lower bounds on OT-complexity by Beimel and Malkin (2004) via this new route. We also formulate the lower bound problem for secure ZCR in purely linear-algebraic terms, by defining the invertible rank of a matrix. We present an Invertible Rank Conjecture, whose proof would establish super-linear lower bounds for OT-complexity (and, if accompanied by an explicit construction, would provide explicit functions with super-linear circuit lower bounds).
2020
TCC
NIZK from SNARG 📺
We give a construction of a non-interactive zero-knowledge (NIZK) argument for all NP languages based on a succinct non-interactive argument (SNARG) for all NP languages and a one-way function. The succinctness requirement for the SNARG is rather mild: We only require that the proof size be $|\pi|=\mathsf{poly}(\lambda)(|x|+|w|)^c$ for some constant $c<1/2$, where $|x|$ is the statement length, $|w|$ is the witness length, and $\lambda$ is the security parameter. Notably, we do not require anything about the efficiency of the verification. Based on this result, we also give a generic conversion from a SNARG to a zero-knowledge SNARG assuming the existence of CPA secure public-key encryption. For this conversion, we require a SNARG to have efficient verification, i.e., the computational complexity of the verification algorithm is $\mathsf{poly}(\lambda)(|x|+|w|)^{o(1)}$. Before this work, such a conversion was only known if we additionally assumed the existence of a NIZK. Along the way of obtaining our result, we give a generic compiler to upgrade a NIZK for all NP languages with non-adaptive zero-knowledge to one with adaptive zero-knowledge. Though this can be shown by carefully combining known results, to the best of our knowledge, no explicit proof of this generic conversion has been presented.
2020
TCC
Weakly Extractable One-Way Functions 📺
A family of one-way functions is extractable if given a random function in the family, an efficient adversary can only output an element in the image of the function if it knows a corresponding preimage. This knowledge extraction guarantee is particularly powerful since it does not require interaction. However, extractable one-way functions (EFs) are subject to a strong barrier: assuming indistinguishability obfuscation, no EF can have a knowledge extractor that works against all polynomial-size non-uniform adversaries. This holds even for non-black-box extractors that use the adversary's code. Accordingly, the literature considers either EFs based on non-falsifiable knowledge assumptions, where the extractor is not explicitly given, but it is only assumed to exist, or EFs against a restricted class of adversaries with bounded non-uniform advice. This falls short of cryptography's gold standard of security that requires an explicit reduction against non-uniform adversaries of arbitrary polynomial size. Motivated by this gap, we put forward a new notion of weakly extractable one-way functions (WEFs) that circumvents the known barrier. We then prove that WEFs are inextricably connected to the long-standing question of three-message zero knowledge protocols. We show that different flavors of WEFs are sufficient and necessary for three-message zero knowledge to exist. The exact flavor depends on whether the protocol is computational or statistical zero knowledge and whether it is publicly or privately verifiable. Combined with recent progress on constructing three-message zero knowledge, we derive a new connection between keyless multi-collision resistance, the notion of incompressibility, and the feasibility of non-interactive knowledge extraction. Another interesting corollary of our result is that in order to construct three-message zero knowledge arguments, it suffices to construct such arguments where the honest prover strategy is unbounded.
2020
TCC
Algebraic Distinguishers: From Discrete Logarithms to Decisional Uber Assumptions 📺
The algebraic group model, introduced by Fuchsbauer, Kiltz and Loss (CRYPTO '18), is a substantial relaxation of the generic group model capturing algorithms that may exploit the representation of the underlying group. This idealized yet realistic model was shown useful for reasoning about cryptographic assumptions and security properties defined via computational problems. However, it does not generally capture assumptions and properties defined via decisional problems. As such problems play a key role in the foundations and applications of cryptography, this leaves a significant gap between the restrictive generic group model and the standard model. We put forward the notion of algebraic distinguishers, strengthening the algebraic group model by enabling it to capture decisional problems. Within our framework we then reveal new insights on the algebraic interplay between a wide variety of decisional assumptions. These include the decisional Diffie-Hellman assumption, the family of Linear assumptions in multilinear groups, and the family of Uber assumptions in bilinear groups. Our main technical results establish that, from an algebraic perspective, these decisional assumptions are in fact all polynomially equivalent to either the most basic discrete logarithm assumption or to its higher-order variant, the $q$-discrete logarithm assumption. On the one hand, these results increase the confidence in these strong decisional assumptions, while on the other hand, they make it possible to direct cryptanalytic efforts towards either extracting discrete logarithms or significantly deviating from standard algebraic techniques.
2020
TCC
Continuous Group Key Agreement with Active Security 📺
A continuous group key agreement (CGKA) protocol allows a long-lived group of parties to agree on a continuous stream of fresh secret key material. CGKA protocols allow parties to join and leave mid-session but may neither rely on special group managers, trusted third parties, nor on any assumptions about if, when, or for how long members are online. CGKA captures the core of an emerging generation of highly practical end-to-end secure group messaging (SGM) protocols. In light of their practical origins, past work on CGKA protocols has been subject to stringent engineering and efficiency constraints at the cost of diminished security properties. In this work, we somewhat relax those constraints, instead considering progressively more powerful adversaries. To that end, we present 3 new security notions of increasing strength. Already the weakest of the 3 (passive security) captures attacks to which all prior CGKA constructions are vulnerable. Moreover, the 2 stronger (active security) notions even allow the adversary to use parties' exposed states combined with full network control to mount attacks. In particular, this is closely related to so-called insider attacks which involve malicious group members actively deviating from the protocol. Although insiders are of explicit interest to practical CGKA/SGM designers, our understanding of this class of attackers is still quite nascent. Indeed, we believe ours to be the first security notions in the literature to precisely formulate meaningful guarantees against (a broad class of) insiders. For each of the 3 new security notions we give a new CGKA scheme enjoying sub-linear (potentially even logarithmic) communication complexity in the number of group members (on par with the asymptotics of state-of-the-art practical constructions). We prove each scheme optimally secure, in the sense that the only security violations possible are those necessarily implied by correctness.
2020
TCC
Classical Verification of Quantum Computations with Efficient Verifier 📺
In this paper, we extend the protocol of classical verification of quantum computations (CVQC) recently proposed by Mahadev to make the verification efficient. Our result is obtained in the following three steps: \begin{itemize} \item We show that parallel repetition of Mahadev's protocol has negligible soundness error. This gives the first constant round CVQC protocol with negligible soundness error. In this part, we only assume the quantum hardness of the learning with errors (LWE) problem, as in Mahadev's work. \item We construct a two-round CVQC protocol in the quantum random oracle model (QROM) where a cryptographic hash function is idealized to be a random function. This is obtained by applying the Fiat-Shamir transform to the parallel repetition version of Mahadev's protocol. \item We construct a two-round CVQC protocol with an efficient verifier in the CRS+QRO model where both prover and verifier can access a (classical) common reference string generated by a trusted third party in addition to quantum access to QRO. Specifically, the verifier can verify a $\mathsf{QTIME}(T)$ computation in time $\mathsf{poly}(\lambda,\log T)$ where $\lambda$ is the security parameter. For proving soundness, we assume that a standard model instantiation of our two-round protocol with a concrete hash function (say, SHA-3) is sound and the existence of post-quantum indistinguishability obfuscation and post-quantum fully homomorphic encryption in addition to the quantum hardness of the LWE problem. \end{itemize}
2020
TCC
Blockchains from Non-Idealized Hash Functions 📺
The formalization of concrete, non-idealized hash function properties sufficient to prove the security of Bitcoin and related protocols has been elusive, as all previous security analyses of blockchain protocols have been performed in the random oracle model. In this paper we identify three such properties, and then construct a blockchain protocol whose security can be reduced to them in the standard model assuming a common reference string (CRS). The three properties are: {\em collision resistance}, {\em computational randomness extraction} and {\em iterated hardness}. While the first two properties have been extensively studied, iterated hardness has been empirically stress-tested since the rise of Bitcoin; in fact, as we demonstrate in this paper, any attack against it (assuming the other two properties hold) results in an attack against Bitcoin. In addition, iterated hardness puts forth a new class of search problems which we term {\em iterated search problems} (ISP). ISPs enable the concise and modular specification of blockchain protocols, and may be of independent interest.
2020
TCC
Lower Bounds on the Time/Memory Tradeoff of Function Inversion 📺
We study time/memory tradeoffs of function inversion: an algorithm, i.e., an inverter, equipped with an s-bit advice on a randomly chosen function f:[n]->[n] and using q oracle queries to f, tries to invert a randomly chosen output y of f (i.e., to find x such that f(x)=y). Much progress has been made on adaptive function inversion - the inverter is allowed to make adaptive oracle queries. Hellman [IEEE Transactions on Information Theory '80] presented an adaptive inverter that inverts with high probability a random f. Fiat and Naor [SICOMP '00] proved that for any s,q with s^3 q = n^3 (ignoring low-order terms), an s-advice, q-query variant of Hellman's algorithm inverts a constant fraction of the image points of any function. Yao [STOC '90] proved a lower bound of sq>=n for this problem. Closing the gap between the above lower and upper bounds is a long-standing open question. Very little is known about the non-adaptive variant of the question - the inverter chooses its queries in advance. The only known upper bounds, i.e., inverters, are the trivial ones (with s+q=n), and the only lower bound is the above bound of Yao. In a recent work, Corrigan-Gibbs and Kogan [TCC '19] partially justified the difficulty of finding lower bounds on non-adaptive inverters, showing that a lower bound on the time/memory tradeoff of non-adaptive inverters implies a lower bound on low-depth Boolean circuits - bounds that, for a strong enough choice of parameters, are notoriously hard to prove. We make progress on the above intriguing question, both for the adaptive and the non-adaptive case, proving the following lower bounds on restricted families of inverters: Linear-advice (adaptive inverter): If the advice string is a linear function of f (e.g., A*f, for some matrix A, viewing f as a vector in [n]^n), then s+q is \Omega(n). The bound generalizes to the case where the advice string of f_1 + f_2, i.e., the coordinate-wise addition of the truth tables of f_1 and f_2, can be computed from the description of f_1 and f_2 by a low communication protocol. Affine non-adaptive decoders: If the non-adaptive inverter has an affine decoder - it outputs a linear function, determined by the advice string and the element to invert, of the query answers - then s is \Omega(n) (regardless of q). Affine non-adaptive decision trees: If the non-adaptive inverter is a d-depth affine decision tree - it outputs the evaluation of a decision tree whose nodes compute a linear function of the answers to the queries - and q < cn for some universal c>0, then s is \Omega(n/d \log n).
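For orientation, a single-table sketch of Hellman's method (a simplification: the real construction uses many rerandomized tables to invert a constant fraction of a general f):

```python
import random

def build_table(f, n, m, t, seed=0):
    """Advice: m chains of length t; store only (endpoint -> startpoint)."""
    rng = random.Random(seed)
    table = {}
    for _ in range(m):
        start = rng.randrange(n)
        x = start
        for _ in range(t):
            x = f(x)
        table[x] = start
    return table

def invert(f, table, t, y):
    """Online phase: walk y forward; on hitting a stored endpoint,
    replay that chain from its start to look for a preimage of y."""
    z = y
    for _ in range(t + 1):
        if z in table:
            x = table[z]
            for _ in range(t):
                if f(x) == y:
                    return x
                x = f(x)
        z = f(z)
    return None
```

With m chains of length t the advice is roughly m entries and the online work is O(t^2) evaluations in the worst case, which is the origin of the memory/query tradeoff curves discussed above.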
2020
TCC
Ledger Combiners for Fast Settlement 📺
Blockchain protocols based on variations of the longest-chain rule—whether following the proof-of-work paradigm or one of its alternatives—suffer from a fundamental latency barrier. This arises from the need to collect a sufficient number of blocks on top of a transaction-bearing block to guarantee the transaction’s stability while limiting the rate at which blocks can be created in order to prevent security-threatening forks. Our main result is a black-box security-amplifying combiner based on parallel composition of m blockchains that achieves \Theta(m)-fold security amplification for conflict-free transactions or, equivalently, \Theta(m)-fold reduction in latency. Our construction breaks the latency barrier to achieve, for the first time, a ledger based purely on Nakamoto longest-chain consensus guaranteeing worst-case constant-time settlement for conflict-free transactions: settlement can be accelerated to a constant multiple of block propagation time with negligible error. Operationally, our construction shows how to view any family of blockchains as a unified, virtual ledger without requiring any coordination among the chains or any new protocol metadata. Users of the system have the option to inject a transaction into a single constituent blockchain or---if they desire accelerated settlement---all of the constituent blockchains. Our presentation and proofs introduce a new formalism for reasoning about blockchains, the dynamic ledger, and articulate our constructions as transformations of dynamic ledgers that amplify security. We also illustrate the versatility of this formalism by presenting robust-combiner constructions for blockchains that can protect against complete adversarial control of a minority of a family of blockchains.
2020
TCC
Round Optimal Secure Multiparty Computation from Minimal Assumptions 📺
We construct a four round secure multiparty computation (MPC) protocol in the plain model that achieves security against any dishonest majority. The security of our protocol relies only on the existence of four round oblivious transfer. This culminates a long line of research on constructing round-efficient MPC from minimal assumptions (at least w.r.t. black-box simulation).
2020
TCC
Reusable Two-Round MPC from DDH 📺
We present a reusable two-round multi-party computation (MPC) protocol from the Decisional Diffie-Hellman (DDH) assumption. In particular, we show how to upgrade any secure two-round MPC protocol to allow reusability of its first message across multiple computations, using Homomorphic Secret Sharing (HSS) and pseudorandom functions in NC1 — each of which can be instantiated from DDH. In our construction, if the underlying two-round MPC protocol is secure against semi-honest adversaries (in the plain model) then so is our reusable two-round MPC protocol. Similarly, if the underlying two-round MPC protocol is secure against malicious adversaries (in the common random/reference string model) then so is our reusable two-round MPC protocol. Previously, such reusable two-round MPC protocols were only known under assumptions on lattices. At a technical level, we show how to upgrade any two-round MPC protocol to a first message succinct two-round MPC protocol, where the first message of the protocol is generated independently of the computed circuit (though it is not reusable). This step uses homomorphic secret sharing (HSS) and low-depth pseudorandom functions. Next, we show a generic transformation that upgrades any first message succinct two-round MPC to allow for reusability of its first message.
2020
TCC
Transparent Error Correcting in a Computationally Bounded World 📺
We construct uniquely decodable codes against channels which are computationally bounded. Our construction requires only a public-coin (transparent) setup. All prior work for such channels either required a setup with secret keys and states, could not achieve unique decoding, or achieved worse rates (for a given bound on codeword corruptions). On the other hand, our construction relies on a strong cryptographic hash function with security properties that we only instantiate in the random oracle model.
2020
TCC
On Computational Shortcuts for Information-Theoretic PIR 📺
Information-theoretic {\em private information retrieval} (PIR) schemes have attractive concrete efficiency features. However, in the standard PIR model, the computational complexity of the servers must scale linearly with the database size. We study the possibility of bypassing this limitation in the case where the database is a truth table of a ``simple'' function, such as a union of (multi-dimensional) intervals or convex shapes, a decision tree, or a DNF formula. This question is motivated by the goal of obtaining lightweight {\em homomorphic secret sharing} (HSS) schemes and secure multiparty computation (MPC) protocols for the corresponding families. We obtain both positive and negative results. For ``first-generation'' PIR schemes based on Reed-Muller codes, we obtain computational shortcuts for the above function families, with the exception of DNF formulas for which we show a (conditional) hardness result. For ``third-generation'' PIR schemes based on matching vectors, we obtain stronger hardness results that apply to all of the above families. Our positive results yield new information-theoretic HSS schemes and MPC protocols with attractive efficiency features for simple but useful function families. Our negative results establish new connections between information-theoretic cryptography and fine-grained complexity.
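For intuition about the linear-computation barrier, here is the classic two-server XOR-based PIR sketch (a textbook scheme, not the Reed-Muller or matching-vector schemes studied above); each server's answer inherently requires a full scan of the database:

```python
import secrets

def query(n, i):
    """Client: additively share the i-th unit vector over GF(2)."""
    q0 = [secrets.randbits(1) for _ in range(n)]
    q1 = list(q0)
    q1[i] ^= 1
    return q0, q1

def answer(db, q):
    """Server: XOR of the selected bits -- a linear scan of all of db,
    which is exactly the Omega(n) work the shortcuts above bypass."""
    a = 0
    for bit, sel in zip(db, q):
        a ^= bit & sel
    return a

db = [1, 0, 1, 1, 0, 0, 1, 0]
q0, q1 = query(len(db), 5)
assert answer(db, q0) ^ answer(db, q1) == db[5]
```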
2020
TCC
Constant Ciphertext-Rate Non-Committing Encryption from Standard Assumptions 📺
Non-committing encryption (NCE) is a type of public key encryption which comes with the ability to equivocate ciphertexts to encryptions of arbitrary messages, i.e., it allows one to find coins for key generation and encryption which ``explain'' a given ciphertext as an encryption of any message. NCE is the cornerstone for constructing adaptively secure multiparty computation [Canetti et al. STOC'96] and can be seen as the quintessential notion of security for public key encryption to realize ideal communication channels. A large body of literature investigates the best message-to-ciphertext ratio (i.e., the rate) that one can hope to achieve for NCE. In this work we propose a near-complete resolution of this question and show how to construct NCE with constant rate in the plain model from a variety of assumptions, such as the hardness of the learning with errors (LWE), the decisional Diffie-Hellman (DDH), or the quadratic residuosity (QR) problem. Prior to our work, constructing NCE with constant rate required a trusted setup and indistinguishability obfuscation [Canetti et al. ASIACRYPT'17].
2020
TCC
Super-Linear Time-Memory Trade-Offs for Symmetric Encryption 📺
We build symmetric encryption schemes from a pseudorandom function/permutation with domain size $N$ which have very high security -- in terms of the amount of messages $q$ they can securely encrypt -- assuming the adversary has $S < N$ bits of memory. We aim to minimize the number of calls $k$ we make to the underlying primitive to achieve a certain $q$, or equivalently, to maximize the achievable $q$ for a given $k$. We target in particular $q \gg N$, in contrast to recent works (Jaeger and Tessaro, EUROCRYPT '19; Dinur, EUROCRYPT '20) which aim to beat the birthday barrier with one call when $S < \sqrt{N}$. Our first result gives new and explicit bounds for the Sample-then-Extract paradigm by Tessaro and Thiruvengadam (TCC '18). We show instantiations for which $q =\Omega((N/S)^{k})$. If $S < N^{1- \alpha}$, Thiruvengadam and Tessaro's weaker bounds only guarantee $q > N$ when $k = \Omega(\log N)$. In contrast, here, we show this is true already for $k = O(1/\alpha)$. We also consider a scheme by Bellare, Goldreich and Krawczyk (CRYPTO '99) which evaluates the primitive at $k$ independent random strings, and masks the message with the XOR of the outputs. Here, we show $q= \Omega((N/S)^{k/2})$, using new combinatorial bounds on the list-decodability of XOR codes which are of independent interest. We also study best-possible attacks against this construction.
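To get a feel for the shape of these bounds, a quick numeric sketch (constants and lower-order terms, which the paper's bounds track precisely, are ignored here):

```python
import math

N, S = 2.0 ** 128, 2.0 ** 80          # primitive domain size, adversary memory

for k in (1, 2, 4, 8):
    q_ste = (N / S) ** k              # Sample-then-Extract bound shape
    q_xor = (N / S) ** (k / 2)        # k-call XOR construction bound shape
    print(k, math.log2(q_ste), math.log2(q_xor))   # 48k and 24k bits of data
```

With N/S = 2^48 in this example, even k = 4 calls already supports q far beyond N = 2^128 for the first construction, illustrating the q >> N regime targeted above.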
2020
TCC
A Secret-Sharing Based MPC Protocol for Boolean Circuits with Good Amortized Complexity 📺
We present a new secure multiparty computation protocol in the preprocessing model that allows for the evaluation of a number of instances of a boolean circuit in parallel, with a small online communication complexity per instance of $10$ bits per party and multiplication gate. Our protocol is secure against an active dishonest majority, and can also be transformed, via existing techniques, into a protocol for the evaluation of a single “well-formed” boolean circuit with the same complexity per multiplication gate at the cost of some overhead that depends on the topology of the circuit. Our protocol uses an approach introduced recently in the setting of honest majority and information-theoretic security which, using an algebraic notion called reverse multiplication friendly embeddings, essentially transforms a batch of evaluations of an arithmetic circuit over a small field into one evaluation of another arithmetic circuit over a larger field. To obtain security against a dishonest majority we combine this approach with the well-known SPDZ protocol that operates over a large field. Structurally our protocol is most similar to MiniMAC, a protocol which bases its security on the use of error-correcting codes, but our protocol has a communication complexity which is half of that of MiniMAC when the best available binary codes are used. With respect to a certain variant of MiniMAC that utilizes codes over larger fields, our communication complexity is slightly worse; however, that variant of MiniMAC needs a much larger preprocessing than ours. We also show that our protocol has smaller amortized communication complexity than Committed MPC, a protocol for general fields based on homomorphic commitments, if we use the best available constructions for those commitments. Finally, we construct a preprocessing phase from oblivious transfer based on ideas from MASCOT and Committed MPC.
2020
TCC
Asynchronous Byzantine Agreement with Subquadratic Communication 📺
Understanding the communication complexity of Byzantine agreement (BA) is a fundamental problem in distributed computing. In particular, as protocols are run with a large number of parties (as, e.g., in the context of blockchain protocols), it is important to understand the dependence of the communication on the number of parties~$n$. Although adaptively secure BA protocols with $o(n^2)$ communication are known in the synchronous and partially synchronous settings, no such protocols are known in the fully asynchronous case. We show here an asynchronous BA protocol with subquadratic communication tolerating an adaptive adversary who can corrupt $f<(1-\epsilon)n/3$ of the parties (for any $\epsilon>0$). One variant of our protocol assumes initial setup done by a trusted dealer, after which an unbounded number of BA executions can be run; alternately, we can achieve subquadratic \emph{amortized} communication with no prior setup. We also show that some form of setup is needed for (non-amortized) subquadratic BA tolerating $\Theta(n)$ corrupted parties. As a contribution of independent interest, we show a secure-computation protocol in the same threat model that has $o(n^2)$ communication when computing no-input functionalities with short output (e.g., coin tossing).
2020
TCC
Lower Bounds for Multi-Server Oblivious RAMs 📺
In this work, we consider oblivious RAMs (ORAM) in a setting with multiple servers, where the adversary may corrupt a subset of the servers. We present an $\Omega(\log n)$ overhead lower bound for any k-server ORAM that limits any PPT adversary to distinguishing advantage at most $1/4k$ when only one server is corrupted. In other words, if one insists on negligible distinguishing advantage, then multi-server ORAMs cannot be faster than single-server ORAMs even with polynomially many servers of which only one unknown server is corrupted. Our results apply to ORAMs that may err with probability at most 1/128 as well as scenarios where the adversary corrupts larger subsets of servers. We also extend our lower bounds to other important data structures including oblivious stacks, queues, deques, priority queues and search trees.
2020
TCC
On the Security of Time-Lock Puzzles and Timed Commitments 📺
Time-lock puzzles—problems whose solution requires some amount of \emph{sequential} effort—have recently received increased interest (e.g., in the context of verifiable delay functions). Most constructions rely on the sequential-squaring conjecture that computing $g^{2^T} \bmod N$ for a uniform~$g$ requires at least $T$ (sequential) steps. We study the security of time-lock primitives from two perspectives: 1. We give the first hardness result about the sequential-squaring conjecture. Namely, in a quantitative version of the algebraic group model (AGM) that we call the \emph{strong} AGM, we show that any speed up of sequential squaring is as hard as factoring $N$. 2. We then focus on \emph{timed commitments}, one of the most important primitives that can be obtained from time-lock puzzles. We extend existing security definitions to settings that may arise when using timed commitments in higher-level protocols, and give the first construction of \emph{non-malleable} timed commitments. As a building block of independent interest, we also define (and give constructions for) a related primitive called \emph{timed public-key encryption}.
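A minimal sketch of the sequential-squaring puzzle these primitives are built on (toy primes; the conjecture is that, without the factorization of N, the evaluator cannot do better than the T sequential squarings):

```python
def solve(g: int, T: int, N: int) -> int:
    """Public evaluation of g^(2^T) mod N: T sequential modular squarings."""
    x = g % N
    for _ in range(T):
        x = x * x % N
    return x

def solve_with_trapdoor(g: int, T: int, p: int, q: int) -> int:
    """Puzzle creator's shortcut: reduce the exponent 2^T modulo phi(N)."""
    N, phi = p * q, (p - 1) * (q - 1)
    return pow(g, pow(2, T, phi), N)

p, q = 1000003, 1000033               # toy primes, far too small in practice
assert solve(5, 1000, p * q) == solve_with_trapdoor(5, 1000, p, q)
```

The gap between the two functions (T squarings versus one short exponentiation) is what makes timed commitments and verifiable delay functions possible; the paper's strong-AGM result says that in that model, beating the sequential route is as hard as factoring N.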
2020
TCC
Lossiness and Entropic Hardness for Ring-LWE 📺
The hardness of the Ring Learning with Errors problem (RLWE) is a central building block for efficiency-oriented lattice-based cryptography. Many applications use an ``entropic'' variant of the problem where the so-called ``secret'' is not distributed uniformly as prescribed but instead comes from some distribution with sufficient min-entropy. However, the hardness of the entropic variant has not been substantiated thus far. For standard LWE (not over rings), entropic results are known, using a ``lossiness approach,'' but it was not known how to adapt this approach to the ring setting. In this work we present the first such results, where entropic security is established either under RLWE or under the Decisional Small Polynomial Ratio (DSPR) assumption which is a mild variant of the NTRU assumption. In the context of general entropic distributions, our results in the ring setting essentially match the known lower bounds (Bolboceanu et al., Asiacrypt 2019; Brakerski and Döttling, Eurocrypt 2020).
2020
TCC
Towards Non-Interactive Witness Hiding 📺
Witness hiding proofs require that the verifier cannot find a witness after seeing a proof. The exact round complexity needed for witness hiding proofs has so far remained an open question. In this work, we provide compelling evidence that witness hiding proofs are achievable non-interactively for wide classes of languages. We use non-interactive witness indistinguishable proofs as the basis for all of our protocols. We give four schemes in different settings under different assumptions: – A universal non-interactive proof that is witness hiding as long as any proof system, possibly an inefficient and/or non-uniform scheme, is witness hiding, has a known bound on verifier runtime, and has short proofs of soundness. – A non-uniform non-interactive protocol justified under a worst-case complexity assumption that is witness hiding and efficient, but may not have short proofs of soundness. – A new security analysis of the two-message argument of Pass [Crypto 2003], showing witness hiding for any non-uniformly hard distribution. We propose a heuristic approach to removing the first message, yielding a non-interactive argument. – A witness hiding non-interactive proof system for languages with unique witnesses, assuming the non-existence of a weak form of witness encryption for any language in NP ∩ coNP.
2020
TCC
Multi-key Fully-Homomorphic Encryption in the Plain Model 📺
The notion of multi-key fully homomorphic encryption (multi-key FHE) [Lopez-Alt, Tromer, Vaikuntanathan, STOC'12] was proposed as a generalization of fully homomorphic encryption to the multiparty setting. In a multi-key FHE scheme for $n$ parties, each party can individually choose a key pair and use it to encrypt its own private input. Given n ciphertexts computed in this manner, the parties can homomorphically evaluate a circuit C over them to obtain a new ciphertext containing the output of C, which can then be decrypted via a decryption protocol. The key efficiency property is that the size of the (evaluated) ciphertext is independent of the size of the circuit. Multi-key FHE with one-round decryption [Mukherjee and Wichs, Eurocrypt'16] has found several powerful applications in cryptography over the past few years. However, an important drawback of all such known schemes is that they require a trusted setup. In this work, we address the problem of constructing multi-key FHE in the plain model. We obtain the following results: - A multi-key FHE scheme with one-round decryption based on the hardness of learning with errors (LWE), ring LWE, and decisional small polynomial ratio (DSPR) problems. - A variant of multi-key FHE where we relax the decryption algorithm to be non-compact -- i.e., where the decryption complexity can depend on the size of C -- based on the hardness of LWE. We call this variant multi-homomorphic encryption (MHE). We observe that MHE is already sufficient for some of the applications of multi-key FHE.
2020
TCC
Expected-Time Cryptography: Generic Techniques and Applications to Concrete Soundness 📺
This paper studies concrete security with respect to expected-time adversaries. Our first contribution is a set of generic tools to obtain tight bounds on the advantage of an adversary with expected-time guarantees. We apply these tools to derive bounds in the random-oracle and generic-group models, which we show to be tight. As our second contribution, we use these results to derive concrete bounds on the soundness of public-coin proofs and arguments of knowledge. Under the lens of concrete security, we revisit a paradigm by Bootle et al. (EUROCRYPT '16) that proposes a general Forking Lemma for multi-round protocols which implements a rewinding strategy with expected-time guarantees. We give a tighter analysis, as well as a modular statement. We adapt this to obtain the first quantitative bounds on the soundness of Bulletproofs (Bünz et al., S&P 2018), which we instantiate with our expected-time generic-group analysis to surface the inherent dependence between concrete security and the statement to be proved.
2020
TCC
FHE-Based Bootstrapping of Designated-Prover NIZK 📺
We present a novel tree-based technique that can convert any designated-prover NIZK proof system (DP-NIZK) which maintains zero-knowledge only for a single statement into one that allows proving an unlimited number of statements in ZK, while maintaining all parameters succinct. Our transformation requires leveled fully-homomorphic encryption. We note that single-statement DP-NIZK can be constructed from any one-way function. We also observe a two-way derivation between DP-NIZK and attribute-based signatures (ABS), and as a result derive new constructions of ABS and homomorphic signatures (HS). Our construction improves upon the prior construction of lattice-based DP-NIZK by Kim and Wu (Crypto 2018) since we only require leveled FHE as opposed to HS (which also translates to improved LWE parameters when instantiated). Alternatively, the recent construction of NIZK without preprocessing from either circular-secure FHE (Canetti et al., STOC 2019) or polynomial Learning with Errors (Peikert and Shiehian, Crypto 2019) could be used to obtain a similar final statement. Nevertheless, we note that our statement is formally incomparable to these works (since leveled FHE is not known to imply circular secure FHE or the hardness of LWE). We view this as evidence for the potential in our technique, which we hope can find additional applications in future works.
2020
TCC
Secure Massively Parallel Computation for Dishonest Majority 📺
This work concerns secure protocols in the massively parallel computation (MPC) model, which is one of the most widely-accepted models for capturing the challenges of writing protocols for the types of parallel computing clusters which have become commonplace today (MapReduce, Hadoop, Spark, etc.). Recently, the work of Chan et al. (ITCS ’20) initiated this study, giving a way to compile any MPC protocol into a secure one in the common random string model, achieving the standard secure multi-party computation definition of security with up to 1/3 of the parties being corrupt. We are interested in achieving security for much more than 1/3 corruptions. To that end, we give two compilers for MPC protocols, which assume a simple public-key infrastructure, and achieve semi-honest security for all-but-one corruptions. Our first compiler assumes hardness of the learning-with-errors (LWE) problem, and works for any MPC protocol with “short” output—that is, where the output of the protocol can fit into the storage space of one machine, for instance protocols that output a trained machine learning model. Our second compiler works for any MPC protocol (even ones with a long output, such as sorting) but assumes, in addition to LWE, indistinguishability obfuscation and a circular secure variant of threshold FHE.
2020
TCC
Batch Verification and Proofs of Proximity with Polylog Overhead 📺
Suppose Alice wants to convince Bob of the correctness of k NP statements. Alice could send k witnesses to Bob, but as k grows the communication becomes prohibitive. Is it possible to convince Bob using smaller communication (without making cryptographic assumptions or bounding the computational power of a malicious Alice)? This is the question of batch verification for NP statements. Our main result is a new interactive proof protocol for verifying the correctness of k UP statements (NP statements with a unique witness) using communication that is poly-logarithmic in k (and a fixed polynomial in the length of a single witness). This result is obtained by making progress on a different question in the study of interactive proofs. Suppose Alice wants to convince Bob that a huge dataset has some property. Can this be done if Bob can't even read the entire input? In other words, what properties can be verified in sublinear time? An Interactive Proof of Proximity guarantees that Bob accepts if the input has the property, and rejects if the input is far (say in Hamming distance) from having the property. Two central complexity measures of such a protocol are the query and communication complexities (which should both be sublinear). For every query parameter $q$, and for every language in logspace uniform NC, we construct an interactive proof of proximity with query complexity $q$ and communication complexity $(n/q) \cdot \mathrm{polylog}(n)$. Both results are optimal up to poly-logarithmic factors, under reasonable complexity-theoretic or cryptographic assumptions. The second result, which is our main technical contribution, builds on a distance amplification technique introduced in a beautiful recent work of Ben-Sasson, Kopparty and Saraf [CCC 2018].
2020
TCC
Towards Multiparty Computation Withstanding Coercion of All Parties 📺
Incoercible multi-party computation [Canetti-Gennaro ’96] allows parties to engage in secure computation with the additional guarantee that the public transcript of the computation cannot be used by a coercive external entity to verify representations made by the parties regarding their inputs to and outputs from the computation. That is, any deductions regarding the truthfulness of such representations made by the parties could be made even without access to the public transcript. To date, all incoercible secure computation protocols withstand coercion of only a fraction of the parties, or else assume that all parties use an execution environment that makes some crucial parts of their local states physically inaccessible even to themselves. We consider, for the first time, the setting where all parties are coerced, and the coercer expects to see the entire history of the computation. In this setting we construct: - A general multi-party computation protocol that withstands coercion of all parties, as long as none of the coerced parties cooperates with the coercer, namely they all use the prescribed ``faking algorithm'' upon coercion. We refer to this case as cooperative incoercibility. The protocol uses deniable encryption and indistinguishability obfuscation, and takes 4 rounds of communication. - A general two-party computation protocol that withstands even the ``mixed'' case where some of the coerced parties cooperate with the coercer and disclose their true local states. This protocol is limited to computing functions where the input of one of the parties is taken from a small (poly-size) domain. This protocol uses deniable encryption with public deniability for one of the parties; when instantiated using the deniable encryption of Canetti, Park, and Poburinnaya [Crypto'20], it takes 3 rounds of communication. Finally, we show that protocols with a certain communication pattern cannot be incoercible, even in a weaker setting where only some parties are coerced.
2020
TCC
Mr NISC: Multiparty Reusable Non-Interactive Secure Computation 📺
Reducing interaction in Multiparty Computation (MPC) is a highly desirable goal in cryptography. It is known that 2-round MPC can be based on the minimal assumption of 2-round Oblivious Transfer (OT) [Benhamouda and Lin, Garg and Srinivasan, EC 2018], and 1-round MPC is impossible in general. In this work, we propose a natural ``hybrid'' model, called \emph{multiparty reusable Non-Interactive Secure Computation (mrNISC)}. In this model, parties publish encodings of their private inputs $x_i$ on a public bulletin board, once and for all. Later, any subset $I$ of them can compute \emph{on-the-fly} a function $f$ on their inputs $\vec x_I = {\{x_i\}}_{i \in I}$ by just sending a single message to a stateless evaluator, conveying the result $f(\vec x_I)$ and nothing else. Importantly, the input encodings can be \emph{reused} in any number of on-the-fly computations, and the same classical simulation security guaranteed by multi-round MPC is achieved. In short, mrNISC has a minimal yet ``tractable'' interaction pattern. We initiate the study of mrNISC on several fronts. First, we formalize the model of mrNISC protocols, and present both a UC security definition and a game-based security definition. Second, we construct mrNISC protocols in the plain model with semi-honest and semi-malicious security based on pairing groups. Third, we demonstrate the power of mrNISC by showing two applications: non-interactive MPC (NIMPC) with reusable setup and a distributed version of program obfuscation. At the core of our construction of mrNISC is a witness encryption scheme for a special language that verifies Non-Interactive Zero-Knowledge (NIZK) proofs of the validity of computations over committed values, which is of independent interest.
2020
TCC
Security analysis of SPAKE2+ 📺
We show that a slight variant of Protocol SPAKE2+, which was presented but not analyzed in [Cash, Kiltz, Shoup 2008], is a secure *asymmetric* password-authenticated key exchange protocol (PAKE), meaning that the protocol still provides good security guarantees even if a server is compromised and the password file stored on the server is leaked to an adversary. The analysis is done in the UC framework (i.e., a simulation-based security model), under the computational Diffie-Hellman (CDH) assumption, and modeling certain hash functions as random oracles. The main difference between our variant and the original Protocol SPAKE2+ is that our variant includes standard key confirmation flows; also, adding these flows allows some slight simplification to the remainder of the protocol. Along the way, we also (i) provide the first proof (under the same assumptions) that a slight variant of Protocol SPAKE2 from [Abdalla, Pointcheval 2005] is a secure *symmetric* PAKE in the UC framework (previous security proofs were all in the weaker BPR framework [Bellare, Pointcheval, Rogaway 2000]); (ii) provide a proof (under very similar assumptions) that a variant of Protocol SPAKE2+ that is currently being standardized is also a secure asymmetric PAKE; (iii) repair several problems in earlier UC formulations of secure symmetric and asymmetric PAKE.
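For orientation, a toy sketch of the symmetric SPAKE2 core that the analysis builds on (tiny constants, no key-confirmation flows, and SHA3-256 in place of the protocol's specified hash; in any real instantiation the discrete logs of M and N must be unknown and the group must be cryptographically sized):

```python
import hashlib, secrets

p = 2**127 - 1            # toy prime modulus; illustrative, not secure
g, M, N = 3, 5, 7         # generator and public constants (placeholders)

def H(*parts) -> bytes:
    return hashlib.sha3_256(b"|".join(str(x).encode() for x in parts)).digest()

def spake2_flow(pw: int) -> bytes:
    # Alice masks g^x with M^pw; Bob masks g^y with N^pw.
    x = secrets.randbelow(p - 1)
    X = pow(g, x, p) * pow(M, pw, p) % p
    y = secrets.randbelow(p - 1)
    Y = pow(g, y, p) * pow(N, pw, p) % p
    # Each side strips the other's password mask and raises to its secret:
    # both arrive at g^(xy) only if they used the same password.
    Z_a = pow(Y * pow(pow(N, pw, p), p - 2, p) % p, x, p)
    Z_b = pow(X * pow(pow(M, pw, p), p - 2, p) % p, y, p)
    assert Z_a == Z_b
    return H(X, Y, Z_a, pw)
```

The asymmetric SPAKE2+ variant analyzed above additionally stores only a one-way image of the password on the server and, in the version proved secure, appends explicit key-confirmation messages.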
2020
TCC
The Resiliency of MPC with Low Interaction: The Benefit of Making Errors 📺
We study information-theoretic secure multiparty protocols that achieve full security, including guaranteed output delivery, in the presence of an active adversary that corrupts a constant fraction of the parties. It is known that 2 rounds are insufficient for such protocols even when the adversary corrupts only two parties (Gennaro, Ishai, Kushilevitz, and Rabin; Crypto 2002), and that perfect protocols can be implemented in three rounds as long as the adversary corrupts less than a quarter of the parties (Applebaum, Brakerski, and Tsabary; Eurocrypt 2019). Furthermore, it was recently shown that the quarter threshold is tight for any 3-round \emph{perfectly-secure} protocol (Applebaum, Kachlon, and Patra; FOCS 2020). Nevertheless, one may still hope to achieve a better-than-quarter threshold at the expense of allowing some negligible correctness errors and/or statistical deviations in the security. Our main results show that this is indeed the case. Every function can be computed by 3-round protocols with \emph{statistical} security as long as the adversary corrupts less than a third of the parties. Moreover, we show that any better resiliency threshold requires four rounds. Our protocol is computationally inefficient and has an exponential dependency in the circuit's depth $d$ and in the number of parties $n$. We show that this overhead can be avoided by relaxing security to computational, assuming the existence of a non-interactive commitment (NICOM). Previous 3-round computational protocols were based on stronger public-key assumptions. When instantiated with statistically-hiding NICOM, our protocol provides \emph{everlasting statistical} security, i.e., it is secure against adversaries that are computationally unlimited \emph{after} the protocol execution. To prove these results, we introduce a new hybrid model that allows for 2-round protocols with linear resiliency threshold. Here too we prove that, for perfect protocols, the best achievable resiliency is $n/4$, whereas statistical protocols can achieve a threshold of $n/3$. We also construct the first 2-round $n/3$-statistical verifiable secret sharing that supports second-level sharing and prove a matching lower-bound, extending the results of Patra, Choudhary, Rabin, and Rangan (Crypto 2009). Overall, our results refine the differences between statistical and perfect models of security, and show that there are efficiency gaps even in the regime of realizable thresholds.
2020
TCC
Topology-Hiding Communication from Minimal Assumptions. 📺
Topology-hiding broadcast (THB) enables parties communicating over an incomplete network to broadcast messages while hiding the topology from within a given class of graphs. THB is a central tool underlying general topology-hiding secure computation (THC) (Moran et al. TCC’15). Although broadcast is a privacy-free task, it was recently shown that THB for certain graph classes necessitates computational assumptions, even in the semi-honest setting, and even given a single corrupted party. In this work we investigate the minimal assumptions required for topology-hiding communication—both Broadcast or Anonymous Broadcast (where the broadcaster’s identity is hidden). We develop new techniques that yield a variety of necessary and sufficient conditions for the feasibility of THB/THAB in different cryptographic settings: information theoretic, given existence of key agreement, and given existence of oblivious transfer. Our results show that feasibility can depend on various properties of the graph class, such as connectivity, and highlight the role of different properties of topology when kept hidden, including direction, distance, and/or distance-of-neighbors to the broadcaster. An interesting corollary of our results is a dichotomy for THC with a public number of at least three parties, secure against one corruption: information-theoretic feasibility if all graphs are 2-connected; necessity and sufficiency of key agreement otherwise.
2020
TCC
On the Round Complexity of the Shuffle Model 📺
The shuffle model of differential privacy [Bittau et al. SOSP 2017; Erlingsson et al. SODA 2019; Cheu et al. EUROCRYPT 2019] was proposed as a viable model for performing distributed differentially private computations. Informally, the model consists of an untrusted analyzer that receives messages sent by participating parties via a shuffle functionality, the latter potentially disassociates messages from their senders. Prior work focused on one-round differentially private shuffle model protocols, demonstrating that functionalities such as addition and histograms can be performed in this model with accuracy levels similar to that of the curator model of differential privacy, where the computation is performed by a fully trusted party. A model closely related to the shuffle model was presented in the seminal work of Ishai et al. on establishing cryptography from anonymous communication [FOCS 2006]. Focusing on the round complexity of the shuffle model, we ask in this work what can be computed in the shuffle model of differential privacy with two rounds. Ishai et al. showed how to use one round of the shuffle to establish secret keys between every two parties. Using this primitive to simulate a general secure multi-party protocol increases its round complexity by one. We show how two parties can use one round of the shuffle to send secret messages without having to first establish a secret key, hence retaining round complexity. Combining this primitive with the two-round semi-honest protocol of Applebaum, Brakerski, and Tsabary [TCC 2018], we obtain that every randomized functionality can be computed in the shuffle model with an honest majority, in merely two rounds. This includes any differentially private computation. We hence move to examine differentially private computations in the shuffle model that (i) do not require the assumption of an honest majority, or (ii) do not admit one-round protocols, even with an honest majority. For that, we introduce two computational tasks: common element, and nested common element with parameter $\alpha$. For the common element problem we show that for large enough input domains, no one-round differentially private shuffle protocol exists with constant message complexity and negligible $\delta$, whereas a two-round protocol exists where every party sends a single message in every round. For the nested common element we show that no one-round differentially private protocol exists for this problem with adversarial coalition size $\alpha n$. However, we show that it can be privately computed in two rounds against coalitions of size $cn$ for every $c < 1$. This yields a separation between one-round and two-round protocols. We further show a one-round protocol for the nested common element problem that is differentially private with coalitions of size smaller than $c n$ for all $0 < c < \alpha < 1 / 2$.
2020
TCC
On Average-Case Hardness in TFNP from One-Way Functions 📺
The complexity class TFNP consists of all NP search problems that are total in the sense that a solution is guaranteed to exist for all instances. Over the years, this class has proved to illuminate surprising connections among several diverse subfields of mathematics like combinatorics, computational topology, and algorithmic game theory. More recently, we are starting to better understand its interplay with cryptography. We know that certain cryptographic primitives (e.g. one-way permutations, collision-resistant hash functions, or indistinguishability obfuscation) imply average-case hardness in TFNP and its important subclasses. However, its relationship with the most basic cryptographic primitive -- i.e., one-way functions (OWFs) -- still remains unresolved. Under an additional complexity theoretic assumption, OWFs imply hardness in TFNP (Hubáček, Naor, and Yogev, ITCS 2017). It is also known that average-case hardness in most structured subclasses of TFNP does not imply any form of cryptographic hardness in a black-box way (Rosen, Segev, and Shahaf, TCC 2017) and, thus, one-way functions might be sufficient. Specifically, no negative result which would rule out basing average-case hardness in TFNP \emph{solely} on OWFs is currently known. In this work, we further explore the interplay between TFNP and OWFs and give the first negative results. As our main result, we show that there cannot exist constructions of average-case (and, in fact, even worst-case) hard TFNP problems from OWFs with a certain type of simple black-box security reductions. The class of reductions we rule out is, however, rich enough to capture many of the currently known cryptographic hardness results for TFNP. Our results are established using the framework of black-box separations (Impagliazzo and Rudich, STOC 1989) and involve a novel application of the reconstruction paradigm (Gennaro and Trevisan, FOCS 2000).
2020
TCC
Perfect Zero Knowledge: New Upperbounds and Relativized Separations 📺
We investigate the complexity of problems that admit perfect zero-knowledge interactive protocols and establish new unconditional upper bounds and oracle separation results. We establish our results by investigating certain distribution testing problems: computational problems over high-dimensional distributions represented by succinct Boolean circuits. A relatively less-investigated complexity class SBP emerged as significant in this study. The main results we establish are: (1) An unconditional inclusion: $\mathsf{NIPZK} \subseteq \mathsf{coSBP}$. (2) Construction of a relativized world in which there is a distribution testing problem that lies in $\mathsf{NIPZK}$ but not in $\mathsf{SBP}$, thus giving a relativized separation of $\mathsf{NIPZK}$ (and hence $\mathsf{PZK}$) from $\mathsf{SBP}$. (3) Construction of a relativized world in which there is a distribution testing problem that lies in $\mathsf{PZK}$ but not in $\mathsf{coSBP}$, thus giving a relativized separation of $\mathsf{PZK}$ from $\mathsf{coSBP}$. Results (1) and (3) imply an oracle separating $\mathsf{PZK}$ from $\mathsf{NIPZK}$. Our results refine the landscape of perfect zero-knowledge classes in relation to traditional complexity classes.
2020
TCC
Information-Theoretic 2-Round MPC without Round Collapsing: Adaptive Security, and More 📺
We present simpler and improved constructions of 2-round protocols for secure multi-party computation (MPC) in the semi-honest setting. Our main results are new information-theoretically secure protocols for arithmetic NC1 in two settings: (i) in the plain model, tolerating up to $t < n/2$ corruptions; and (ii) in the OLE-correlation model, tolerating any number of corruptions. Our protocols achieve adaptive security and require only black-box access to the underlying field, whereas previous results only achieve static security and require non-black-box field access. Moreover, both results extend to polynomial-size circuits with computational and adaptive security, while relying on black-box access to a pseudorandom generator. In the OLE correlation model, the extended protocols for circuits tolerate up to $n-1$ corruptions. Along the way, we introduce a conceptually novel framework for 2-round MPC that does not rely on the round collapsing framework underlying all of the recent advances in 2-round MPC.
2020
TCC
Public-Coin Zero-Knowledge Arguments with (almost) Minimal Time and Space Overheads 📺
Zero-knowledge protocols enable the truth of a mathematical statement to be certified by a verifier without revealing any other information. Such protocols are a cornerstone of modern cryptography and are becoming increasingly practical. However, a major bottleneck in deployment is the efficiency of the prover and, in particular, the space-efficiency of the protocol. For every $\mathsf{NP}$ relation that can be verified in time $T$ and space $S$, we construct a public-coin zero-knowledge argument in which the prover runs in time $T \cdot \mathrm{polylog}(T)$ and space $S \cdot \mathrm{polylog}(T)$. Our proofs have length $\mathrm{polylog}(T)$ and the verifier runs in time $T \cdot \mathrm{polylog}(T)$ (and space $\mathrm{polylog}(T)$). Our scheme is in the random oracle model and relies on the hardness of discrete log in prime-order groups. Our main technical contribution is a new space efficient \emph{polynomial commitment scheme} for multi-linear polynomials. Recall that in such a scheme, a sender commits to a given multi-linear polynomial $P:\mathbb{F}^n \to \mathbb{F}$ so that later on it can prove to a receiver statements of the form ``$P(x)=y$''. In our scheme, which builds on commitments schemes of Bootle et al. (Eurocrypt 2016) and B{\"u}nz et al. (S\&P 2018), we assume that the sender is given multi-pass streaming access to the evaluations of $P$ on the Boolean hypercube and we show how to implement both the sender and receiver in roughly time $2^n$ and space $n$ and with communication complexity roughly $n$.
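A minimal sketch of the streaming access model in the last paragraph (the lexicographic streaming order and a prime modulus q are our illustrative choices): one pass over the 2^n hypercube evaluations computes the multilinear extension at a point using O(n) space.

```python
def eval_multilinear(stream, x, q):
    """Evaluate the multilinear extension of P at x in F_q^n, given one
    pass over P's evaluations on {0,1}^n in lexicographic order:
    roughly 2^n field operations, O(n) space."""
    n = len(x)
    acc = 0
    for idx, val in enumerate(stream):
        w = 1
        for i in range(n):
            bit = (idx >> (n - 1 - i)) & 1
            w = w * ((x[i] if bit else (1 - x[i])) % q) % q
        acc = (acc + val * w) % q
    return acc

# P on {0,1}^2 streamed as (P(0,0), P(0,1), P(1,0), P(1,1)).
assert eval_multilinear([1, 2, 3, 4], [0, 1], 97) == 2
```

The committed prover in the paper works in the same regime: it can afford several such passes over the stream but never materializes the full 2^n-entry table.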
2020
TCC
Universal Composition with Global Subroutines: Capturing Global Setup within plain UC 📺
The Global and Externalized UC frameworks [Canetti-Dodis-Pass-Walfish, TCC 07] extend the plain UC framework to additionally handle protocols that use a ``global setup'', namely a mechanism that is also used by entities outside the protocol. These frameworks have broad applicability: Examples include public-key infrastructures, common reference strings, shared synchronization mechanisms, global blockchains, or even abstractions such as the random oracle. However, the need to work in a specialized framework has been a source of confusion, incompatibility, and an impediment to broader use. We show how security in the presence of a global setup can be captured within the plain UC framework, thus significantly simplifying the treatment. This is done as follows: - We extend UC-emulation to the case where both the emulating protocol $\pi$ and the emulated protocol $\phi$ make subroutine calls to protocol $\gamma$ that is accessible also outside $\pi$ and $\phi$. As usual, this notion considers only a single instance of $\phi$ or $\pi$ (alongside $\gamma$). - We extend the UC theorem to hold even with respect to the new notion of UC emulation. That is, we show that if $\pi$ UC-emulates $\phi$ in the presence of $\gamma$, then $\rho^{\phi\rightarrow\pi}$ UC-emulates $\rho$ for any protocol $\rho$, even when $\rho$ uses $\gamma$ directly, and in addition calls many instances of $\phi$, all of which use the same instance of $\gamma$. We prove this extension using the existing UC theorem as a black box, thus further simplifying the treatment. We also exemplify how our treatment can be used to streamline, within the plain UC model, proofs of security of systems that involve global set-up, thus providing greater simplicity and flexibility.
2020
TCHES
Compact Dilithium Implementations on Cortex-M3 and Cortex-M4 📺
We present implementations of the lattice-based digital signature scheme Dilithium for ARM Cortex-M3 and ARM Cortex-M4. Dilithium is one of the three signature finalists of the NIST post-quantum cryptography competition. As our Cortex-M4 target, we use the popular STM32F407-DISCOVERY development board. Compared to the previous speed records on the Cortex-M4 by Ravi, Gupta, Chattopadhyay, and Bhasin, we speed up the key operations NTT and NTT$^{-1}$ by 20%, which together with other optimizations results in speedups of 7%, 15%, and 9% for Dilithium3 key generation, signing, and verification respectively. We also present the first constant-time Dilithium implementation on the Cortex-M3 and use the Arduino Due for benchmarks. For Dilithium3, we achieve on average 2 562 kilocycles for key generation, 10 667 kilocycles for signing, and 2 321 kilocycles for verification. Additionally, we present stack consumption optimizations applying to both our Cortex-M3 and Cortex-M4 implementations. Due to the iterative nature of the Dilithium signing algorithm, there is no optimal way to achieve the best speed and lowest stack consumption at the same time. We present three different strategies for the signing procedure which allow trading more stack and flash memory for faster speed or vice versa. Our implementation of Dilithium3 with the smallest memory footprint uses less than 12 kB. As an additional output of this work, we present the first Cortex-M3 implementations of the key-encapsulation schemes NewHope and Kyber.
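As a point of reference for what the NTT computes, a schoolbook sketch over a toy modulus (q = 257, n = 16 are illustrative stand-ins; Dilithium uses q = 8380417, n = 256, and the O(n log n) butterfly NTT whose speedup is reported above):

```python
def dft_mod(a, q, w):
    """Schoolbook number-theoretic transform: A[i] = sum_j a[j]*w^(ij) mod q.
    O(n^2) for clarity; optimized implementations use butterfly stages."""
    n = len(a)
    return [sum(a[j] * pow(w, i * j, q) for j in range(n)) % q
            for i in range(n)]

q, n = 257, 16                         # toy parameters
w = pow(3, (q - 1) // n, q)            # primitive n-th root of unity mod q
w_inv, n_inv = pow(w, q - 2, q), pow(n, q - 2, q)

a = list(range(n))
A = dft_mod(a, q, w)
assert [x * n_inv % q for x in dft_mod(A, q, w_inv)] == a   # NTT then NTT^-1
```

The forward/inverse pair is exactly the hot loop the 20% speedup targets: polynomial multiplication in Dilithium reduces to two forward transforms, a pointwise product, and one inverse transform.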
2020
TCHES
Ranking Loss: Maximizing the Success Rate in Deep Learning Side-Channel Analysis 📺
The side-channel community recently investigated a new approach, based on deep learning, to significantly improve profiled attacks against embedded systems. Compared to template attacks, deep learning techniques can deal with protected implementations, such as masking or desynchronization, without substantial preprocessing. However, important issues are still open. One challenging problem is to adapt the methods classically used in the machine learning field (e.g. loss function, performance metrics) to the specific side-channel context in order to obtain optimal results. We propose a new loss function derived from the learning-to-rank approach that helps prevent the approximation and estimation errors induced by the classical cross-entropy loss. We theoretically demonstrate that this new function, called Ranking Loss (RkL), maximizes the success rate by minimizing the ranking error of the secret key in comparison with all other hypotheses. The resulting model converges towards the optimal distinguisher when considering the mutual information between the secret and the leakage. Consequently, the approximation error is prevented. Furthermore, the estimation error, induced by the cross-entropy, is reduced by up to 23%. When the ranking loss is used, the convergence towards the best solution is up to 23% faster than with a model using the cross-entropy loss function. We validate our theoretical propositions on public datasets.
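A minimal numpy sketch of a pairwise learning-to-rank surrogate in the spirit of RkL (the exact functional form and the alpha scaling here are our assumptions for illustration, not the paper's definition):

```python
import numpy as np

def ranking_loss(scores, k_star, alpha=1.0):
    """Pairwise logistic ranking surrogate: penalize every wrong key
    hypothesis whose score approaches or exceeds the correct key's,
    i.e., directly optimize the correct key's rank rather than the
    per-class probabilities that cross-entropy targets."""
    s = np.asarray(scores, dtype=float)
    margins = s[k_star] - np.delete(s, k_star)   # correct-vs-wrong gaps
    return float(np.sum(np.log1p(np.exp(-alpha * margins))))

# 256 key-hypothesis scores from a model's output layer; true key byte 42.
loss = ranking_loss(np.random.randn(256), 42)
```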
2020
TCHES
Fill your Boots: Enhanced Embedded Bootloader Exploits via Fault Injection and Binary Analysis 📺
The bootloader of an embedded microcontroller is responsible for guarding the device’s internal (flash) memory, enforcing read/write protection mechanisms. Fault injection techniques such as voltage or clock glitching have been proven successful in bypassing such protection for specific microcontrollers, but this often requires expensive equipment and/or exhaustive search of the fault parameters. When multiple glitches are required (e.g., when countermeasures are in place), this search becomes exponential in complexity and thus infeasible. Another challenge which makes embedded bootloaders notoriously hard to analyse is their lack of debugging capabilities. This paper proposes a grey-box approach that leverages binary analysis and advanced software exploitation techniques combined with voltage glitching to develop a powerful attack methodology against embedded bootloaders. We showcase our techniques with three real-world microcontrollers as case studies: 1) we combine static and on-chip dynamic analysis to enable a Return-Oriented Programming exploit on the bootloader of the NXP LPC microcontrollers; 2) we leverage on-chip dynamic analysis on the bootloader of the popular STM8 microcontrollers to constrain the glitch parameter search, achieving the first fully-documented multi-glitch attack on a real-world target; 3) we apply symbolic execution to precisely aim voltage glitches at target instructions based on the execution path in the bootloader of the Renesas 78K0 automotive microcontroller. For each case study, we show that using inexpensive, open-design equipment, we are able to efficiently breach the security of these microcontrollers and get full control of the protected memory, even when multiple glitches are required. Finally, we identify and elaborate on several vulnerable design patterns that should be avoided when implementing embedded bootloaders.
2020
TCHES
Doppelganger Obfuscation — Exploring the Defensive and Offensive Aspects of Hardware Camouflaging 📺
Hardware obfuscation is widely used in practice to counteract reverse engineering. In recent years, low-level obfuscation via camouflaged gates has been increasingly discussed in the scientific community and industry. In contrast to classical high-level obfuscation, such gates result in recovery of an erroneous netlist. This technology has so far been regarded as a purely defensive tool. We show that low-level obfuscation is in fact a double-edged sword that can also enable stealthy malicious functionalities. In this work, we present Doppelganger, the first generic design-level obfuscation technique that is based on low-level camouflaging. Doppelganger obstructs central control modules of digital designs, e.g., Finite State Machines (FSMs) or bus controllers, resulting in two different design functionalities: an apparent one that is recovered during reverse engineering and the actual one that is executed during operation. Notably, both functionalities are under the designer’s control. In two case studies, we apply Doppelganger to a universal cryptographic coprocessor. First, we show the defensive capabilities by presenting the reverse engineer with a different mode of operation than the one that is actually executed. Then, for the first time, we demonstrate the considerable threat potential of low-level obfuscation. We show how an invisible, remotely exploitable key-leakage Trojan can be injected into the same cryptographic coprocessor just through obfuscation. In both applications of Doppelganger, the resulting design size is indistinguishable from that of an unobfuscated design, depending on the choice of encodings.
2020
TCHES
The design of scalar AES Instruction Set Extensions for RISC-V 📺
Secure, efficient execution of AES is an essential requirement on most computing platforms. Dedicated Instruction Set Extensions (ISEs) are often included for this purpose. RISC-V is a (relatively) new ISA that lacks such a standardized ISE. We survey the state-of-the-art industrial and academic ISEs for AES, implement and evaluate five different ISEs, one of which is novel. We recommend separate ISEs for 32- and 64-bit base architectures, with measured performance improvements for an AES-128 block encryption of 4x and 10x, with a hardware cost of 1.1K and 8.2K gates respectively, when compared to a software-only implementation based on use of T-tables. We also explore how the proposed standard bit-manipulation extension to RISC-V can be harnessed for efficient implementation of AES-GCM. Our work supports the ongoing RISC-V cryptography extension standardisation process.
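For context on the baseline, T-tables fold SubBytes, ShiftRows, and MixColumns into four 8-to-32-bit lookup tables, so one round column costs four lookups and four XORs; in the standard formulation (column indices taken mod 4):

```latex
% One 32-bit output column j of an AES round via T-tables:
s'_j \;=\; T_0[a_{0,j}] \oplus T_1[a_{1,j+1}] \oplus T_2[a_{2,j+2}]
       \oplus T_3[a_{3,j+3}] \oplus rk_j, \qquad j \in \{0,\dots,3\}.
```

The secret-dependent table indices are what make this baseline vulnerable to cache-timing leakage on some platforms, one motivation for dedicated ISEs that compute the S-box in hardware.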
2020
TCHES
Rapidly Verifiable XMSS Signatures 📺
This work presents new speed records for XMSS (RFC 8391) signature verification on embedded devices. For this we make use of a probabilistic method recently proposed by Perin, Zambonin, Martins, Custódio, and Martina (PZMCM) at ISCC 2018, which changes the XMSS signing algorithm to search for rapidly verifiable signatures. We improve the method, ensuring that the added signing cost for the search is independent of the message length. We provide a statistical analysis of the resulting verification speed and support it by experiments. We present a record-setting, RFC-compatible implementation of XMSS verification on the ARM Cortex-M4. At a signing time of about one minute on a general purpose CPU, we create signatures that are verified about 1.44 times faster than traditionally generated signatures. Adding further well-known implementation optimizations to the verification algorithm, we reduce verification time by over a factor of two, from 13.85 million to 6.56 million cycles. In contrast to previous works, we provide a detailed security analysis of the resulting signature scheme under classical and quantum attacks that justifies our selection of parameters. On the way, we fill a gap in the security analysis of XMSS as described in RFC 8391, proving that the modified message hashing in the RFC does indeed mitigate multi-target attacks. This was not shown before and might be of independent interest.
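A simplified sketch of the signing-side search (WOTS-style digits without the checksum chains; ell, w, and the 4-byte randomizer encoding are illustrative choices, not RFC 8391's exact parameters):

```python
import hashlib

def wots_digits(msg: bytes, r: bytes, ell: int, w: int):
    """Base-w digits of H(r || msg) -- the chain positions a WOTS
    signature commits to (w must be a power of two here)."""
    h = hashlib.sha3_256(r + msg).digest()
    digits, acc, bits, per = [], 0, 0, w.bit_length() - 1
    for byte in h:
        acc, bits = (acc << 8) | byte, bits + 8
        while bits >= per and len(digits) < ell:
            bits -= per
            digits.append((acc >> bits) & (w - 1))
    return digits

def pick_fast_randomizer(msg: bytes, tries: int, ell=64, w=16):
    """Verification walks w-1-d hash steps per chain, so larger digit
    sums mean faster verification; search randomizers for the best one."""
    best_r, best_cost = None, None
    for t in range(tries):
        r = t.to_bytes(4, "big")
        cost = sum(w - 1 - d for d in wots_digits(msg, r, ell, w))
        if best_cost is None or cost < best_cost:
            best_r, best_cost = r, cost
    return best_r, best_cost
```

Because each trial hashes a fixed-size digest rather than the message itself, the search cost stays independent of message length, which is the improvement the abstract highlights.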
2020
TCHES
DAPA: Differential Analysis aided Power Attack on (Non-) Linear Feedback Shift Registers 📺
Differential power analysis (DPA) is a form of side-channel analysis (SCA) that performs statistical analysis on the power traces of cryptographic computations. DPA is applicable to many cryptographic primitives, including block ciphers, stream ciphers and even hash-based message authentication code (HMAC). At COSADE 2017, Dobraunig et al. presented a DPA on the fresh re-keying scheme Keymill to extract the bit relations of neighbouring bits in its shift registers, reducing the internal state guessing space from 128 to 4 bits. In this work, we generalise their methodology and combine it with differential analysis; we call the resulting technique differential analysis aided power attack (DAPA). DAPA uncovers more bit relations and takes into account the linear or non-linear functions that feed back to the shift registers (i.e. LFSRs or NLFSRs). Next, we apply our DAPA to LR-Keymill, the improved version of Keymill designed to resist the aforementioned DPA, and break its 67.9-bit security claim with a 4-bit internal state guess. We experimentally verified our analysis. In addition, we improve the previous DPA on Keymill by halving the amount of data resources needed for the attack. We also apply our DAPA to Trivium, a hardware-oriented stream cipher from the eSTREAM portfolio, and reduce the key guessing space from 80 to 14 bits.
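To see why shift-register power leakage exposes exactly such neighbouring-bit relations, consider a toy Hamming-distance leakage model (a common SCA assumption; the register size and function names here are ours): one shift step leaks the sum of XORs of adjacent state bits plus the feedback bit's relation, which is the raw material an attack like DAPA aggregates.

```python
import random

N = 8  # toy register length

def shift(state, fb):
    """Shift the register one step, inserting feedback bit fb."""
    return [fb] + state[:-1]

def hd(a, b):
    return sum(x ^ y for x, y in zip(a, b))

# Hamming-distance power model: one shift step leaks
#   HD(s, shift(s)) = (fb ^ s[0]) + sum_i (s[i] ^ s[i+1]),
# i.e. exactly the neighbouring-bit relations targeted above.
state = [random.randint(0, 1) for _ in range(N)]
fb = random.randint(0, 1)
leak = hd(state, shift(state, fb))
relations = (fb ^ state[0]) + sum(state[i] ^ state[i + 1] for i in range(N - 1))
assert leak == relations
print("leaked HD =", leak, "= sum of neighbouring-bit XORs")
```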
2020
TCHES
Fault Injection as an Oscilloscope: Fault Correlation Analysis 📺
Fault Injection (FI) attacks have become a practical threat to modern cryptographic implementations. Such attacks have recently focused more on exploitation of implementation-centric and device-specific properties of the faults. In this paper, we consider the parallel between SCA attacks and FI attacks; specifically, that many FI attacks rely on the data-dependency of activation and propagation of a fault, and SCA attacks similarly rely on data-dependent power usage. In fact, these are so closely related that we show that existing SCA attacks can be directly applied in a purely FI setting, by translating power FI results to generate FI ‘probability traces’ as an analogue of power traces. We impose only the requirements of the equivalent SCA attack (e.g., knowledge of the input plaintext for CPA on the first round), along with a way to observe the status of the target (whether or not it has failed and been “muted” after a fault). We also analyse existing attacks such as Fault Template Analysis in the light of this parallel, and discuss the limitations of our methodology. To demonstrate that our attacks are practical, we first show that SPA can be used to recover RSA private exponents using FI attacks. Subsequently, we show the generic nature of our attacks by performing DPA on AES after applying FI attacks to several different targets (with AVR, 32-bit ARM and RISC-V CPUs), using different software on each target, and do so with a low-cost (i.e., less than $50) power fault injection setup. We call this technique Fault Correlation Analysis (FCA), since we perform CPA on fault probability traces. To show that this technique is not limited to software, we also present FCA results against the hardware AES engine supported by one of our targets. Our results show that even without access to the ciphertext (e.g., where an FI redundancy countermeasure is in place, or where ciphertext is simply not exposed to an attacker in any circumstance) and in the presence of light jitter, FCA attacks can successfully recover keys on each of these targets.
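A minimal NumPy sketch of the core mechanism, with our own toy data layout: binary mute/no-mute fault outcomes per input and glitch offset are treated as "probability traces", and ordinary CPA (Pearson correlation against a leakage model) is applied to them unchanged.

```python
import numpy as np

def fault_probability_traces(outcomes):
    """outcomes[i, t] in {0, 1}: did input i fault ("mute") when glitched at
    offset t? Averaging repeated runs per input yields probabilities; the
    binary outcomes already serve as the analogue of a power trace."""
    return np.asarray(outcomes, dtype=float)

def cpa(traces, model):
    """Pearson correlation of a leakage model (one value per input) against
    every time offset; the key guess maximizing |rho| wins, exactly as in
    classical power CPA."""
    t = traces - traces.mean(axis=0)
    m = model - model.mean()
    return (t * m[:, None]).sum(axis=0) / np.sqrt(
        (t ** 2).sum(axis=0) * (m ** 2).sum())

# usage sketch: model[i] = hamming_weight(sbox[pt[i] ^ key_guess]), as in
# first-round AES CPA, with traces = fault_probability_traces(outcomes)
```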
2020
TCHES
Polynomial Multiplication in NTRU Prime: Comparison of Optimization Strategies on Cortex-M4 📺
This paper proposes two different methods to perform NTT-based polynomial multiplication in polynomial rings that do not naturally support such a multiplication. We demonstrate these methods on the NTRU Prime key-encapsulation mechanism (KEM) proposed by Bernstein, Chuengsatiansup, Lange, and Vredendaal, which uses a polynomial ring that is, by design, not amenable to use with NTT. One of our approaches uses Good’s trick and focuses on speed and supporting more than one parameter set with a single implementation. The other approach uses a mixed-radix NTT and focuses on the use of smaller multipliers and less memory. On an ARM Cortex-M4 microcontroller, we show that our three NTT-based implementations, one based on Good’s trick and two on mixed-radix NTTs, provide between 17% and 32% faster polynomial multiplication. For the parameter set ntrulpr761, this results in between 9% and 16% faster total operations (sum of key generation, encapsulation, and decapsulation) and requires between 15% and 39% less memory than the current state-of-the-art NTRU Prime implementation on this platform, which uses Toom-Cook-based polynomial multiplication.
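For readers unfamiliar with Good's trick, the sketch below (toy sizes of our choosing) shows the index permutation at its heart: for N = p·q with gcd(p, q) = 1, the CRT map i ↦ (i mod p, i mod q) turns a length-N cyclic convolution into a p×q two-dimensional one with no twiddle-factor corrections, so an NTT can be applied along an axis of suitable length.

```python
from math import gcd

def goods_permutation(p, q):
    """Good's trick index map for N = p*q with gcd(p, q) = 1: forward map
    i -> (i mod p, i mod q) and its inverse recovered via the CRT."""
    assert gcd(p, q) == 1
    N = p * q
    fwd = [(i % p, i % q) for i in range(N)]
    inv = {}
    for i, (a, b) in enumerate(fwd):
        inv[(a, b)] = i  # the unique i with i = a (mod p) and i = b (mod q)
    return fwd, inv

fwd, inv = goods_permutation(3, 5)
assert all(inv[fwd[i]] == i for i in range(15))
```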
2020
TCHES
The Area-Latency Symbiosis: Towards Improved Serial Encryption Circuits 📺
The bit-sliding paper of Jean et al. (CHES 2017) showed that the smallest-size circuit for SPN-based block ciphers such as AES, SKINNY and PRESENT can be achieved via bit-serial implementations. Their technique decreases the bit size of the datapath and naturally leads to a significant loss in latency (as well as the maximum throughput). Their designs complete a single round of the encryption in 168 (resp. 68) clock cycles for 128 (resp. 64) bit blocks. A follow-up work by Banik et al. (FSE 2020) introduced the swap-and-rotate technique that both eliminates this loss in latency and achieves even smaller footprints. In this paper, we extend these results on bit-serial implementations all the way to four authenticated encryption schemes from NIST LWC. Our first focus is to decrease latency and improve throughput with the use of the swap-and-rotate technique. Our block cipher implementations have the most efficient round operations in the sense that a round function of an n-bit block cipher is computed in exactly n clock cycles. This leads to implementations that are similar in size to the state of the art, but have much lower latency (savings of up to 20 percent). We then extend our technique to 4- and 8-bit implementations. Although these results are promising, block ciphers themselves are not end-user primitives, as they need to be used in conjunction with a mode of operation. Hence, in the second part of the paper, we use our serial block ciphers to bootstrap four active NIST authenticated encryption candidates: SUNDAE-GIFT, Romulus, SAEAES and SKINNY-AEAD. In the wake of this effort, we provide the smallest block-cipher-based authenticated encryption circuits known in the literature so far.
2020
TCHES
Side-Channel Analysis of the Xilinx Zynq UltraScale+ Encryption Engine 📺
The Xilinx Zynq UltraScale+ (ZU+) is a powerful and flexible System-on-Chip (SoC) computing platform for next generation applications such as autonomous driving or industrial Internet-of-Things (IoT) based on 16 nm production technology. The devices are equipped with a secure boot mechanism in order to provide confidentiality, integrity, and authenticity of the configuration files that are loaded during power-up. This includes a dedicated encryption engine which features a protocol-based countermeasure against passive Side-Channel Attacks (SCAs) called key rolling. The mechanism ensures that the same key is used only for a certain number of data blocks that has to be defined by the user. However, a suitable choice for the key rolling parameter depends on the power leakage behavior of the chip and is not published by the manufacturer. To close this gap, this paper presents the first publicly known side-channel analysis of the ZU+ encryption unit. We conduct a black-box reverse engineering of the internal hardware architecture of the encryption engine using Electromagnetic (EM) measurements from a decoupling capacitor of the power supply. Then, we illustrate a sophisticated methodology that involves the first five rounds of an AES encryption to attack the 256-bit secret key. We apply the elaborated attack strategy using several new Deep Learning (DL)-based evaluation methods for cryptographic implementations. Even though we are unable to recover all bytes of the secret key, the experimental results still allow us to provide concrete recommendations for the key rolling parameter under realistic conditions. This eventually helps to configure the secure boot mechanism of the ZU+ and similar devices appropriately.
2020
TCHES
Re-Consolidating First-Order Masking Schemes: Nullifying Fresh Randomness 📺
Application of masking, known as the most robust and reliable countermeasure to side-channel analysis attacks, to various cryptographic algorithms has attracted a lion’s share of research effort. The difficulty originates from the fact that the overhead of applying such an algorithmic-level countermeasure might not be affordable. This includes the area and latency overheads and the amount of fresh randomness required to fulfill the resulting design’s security properties. There are already techniques applicable in hardware platforms that take glitches into account. Among them, classical threshold implementations force the designers to use at least three shares in the underlying masking. The other schemes, which can deal with two shares, often necessitate the use of fresh randomness. In this work, we present a technique allowing us to use two shares to realize the first-order glitch-extended probing secure masked realization of several functions, including the S-boxes of the Midori, PRESENT, PRINCE, and AES ciphers, without any fresh randomness.
2020
TCHES
Keep it Unsupervised: Horizontal Attacks Meet Deep Learning 📺
To mitigate side-channel attacks, real-world implementations of public-key cryptosystems adopt state-of-the-art countermeasures based on randomization of the private or ephemeral keys. Usually, for each private key operation, a “scalar blinding” is performed using 32 or 64 randomly generated bits. Nevertheless, horizontal attacks based on a single trace still pose serious threats to protected ECC or RSA implementations. If the secrets learned through a single-trace attack contain too many wrong (or noisy) bits, the cryptanalysis methods for recovering remaining bits become impractical due to time and computational constraints. This paper proposes a deep learning-based framework to iteratively correct partially correct private keys resulting from a clustering-based horizontal attack. By testing the trained network on scalar multiplication (or exponentiation) traces, we demonstrate that a deep neural network can significantly reduce the number of wrong bits from randomized scalars (or exponents). When a simple horizontal attack can recover around 52% of the bits of multiple attacked private keys, the proposed iterative framework improves the private key accuracy to above 90% on average and to 100% for at least one of the attacked keys. Our attack model remains fully unsupervised and excludes the need to know where the error or noisy bits are located in each separate randomized private key.
2020
TCHES
Exploring Crypto-Physical Dark Matter and Learning with Physical Rounding: Towards Secure and Efficient Fresh Re-Keying 📺
State-of-the-art re-keying schemes can be viewed as a tradeoff between efficient but heuristic solutions based on binary field multiplications, which are only secure if implemented with a sufficient amount of noise, and formal but more expensive solutions based on weak pseudorandom functions, which remain secure if the adversary accesses their output in full. Recent results on “crypto dark matter” (TCC 2018) suggest that low-complexity pseudorandom functions can be obtained by mixing linear functions over different small moduli. In this paper, we conjecture that by mixing some matrix multiplications in a prime field with a physical mapping similar to the leakage functions exploited in side-channel analysis, we can build efficient re-keying schemes based on “crypto-physical dark matter”, that remain secure against an adversary who can access noise-free measurements. We provide a first analysis of the security and implementation properties that such schemes provide. Precisely, we first show that they are more secure than the initial (heuristic) proposal by Medwed et al. (AFRICACRYPT 2010). For example, they can resist attacks put forward by Belaid et al. (ASIACRYPT 2014), satisfy some relevant cryptographic properties and can be connected to a “Learning with Physical Rounding” problem that shares some similarities with standard learning problems. We next show that they are significantly more efficient than the weak pseudorandom function proposed by Dziembowski et al. (CRYPTO 2016), by exhibiting hardware implementation results.
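As a loose illustration of the "Learning with Physical Rounding" setting, the toy model below is entirely our own choice of sizes and leakage abstraction: the device computes a public matrix-vector product with the key over a prime field, and the adversary sees only a physical rounding of the result, modelled here as the coordinate-wise Hamming weight.

```python
import numpy as np

P = 31  # toy prime modulus (the paper's field is larger)

def lwpr_sample(G_x, k):
    """The device computes y = G_x @ k over GF(P); the adversary observes only
    a physical rounding of y, modelled here as per-coordinate Hamming weight
    (one common leakage abstraction, chosen by us for illustration)."""
    y = (G_x @ k) % P
    return np.array([bin(int(v)).count("1") for v in y])

rng = np.random.default_rng(0)
k = rng.integers(0, P, size=4)          # long-term key
G_x = rng.integers(0, P, size=(4, 4))   # public, input-dependent matrix
print(lwpr_sample(G_x, k))
```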
2020
TCHES
Fixslicing AES-like Ciphers: New bitsliced AES speed records on ARM-Cortex M and RISC-V 📺
The fixslicing implementation strategy was originally introduced as a new representation for the hardware-oriented GIFT block cipher to achieve very efficient software constant-time implementations. In this article, we show that the fundamental idea underlying the fixslicing technique is not of interest only for GIFT, but can be applied to other ciphers as well. In particular, we study the benefits of fixslicing in the case of AES and show that it reduces the number of operations required by the linear layer by 52% when compared to the current fastest bitsliced implementation on 32-bit platforms. Overall, we report that fixsliced AES-128 reaches 80 and 91 cycles per byte on ARM Cortex-M and E31 RISC-V processors respectively (assuming pre-computed round keys), improving the previous records on those platforms by 21% and 26%. In order to highlight that our work also directly improves masked implementations that rely on bitslicing, we report implementation results when integrating first-order masking that outperform the fastest results reported in the literature on ARM Cortex-M4 by 12%. Finally, we demonstrate the genericity of the fixslicing technique for AES-like designs by applying it to the Skinny-128 tweakable block ciphers.
2020
TCHES
Redundant Code-based Masking Revisited 📺
Masking schemes are a popular countermeasure against side-channel attacks. To mask bytes, the two classical options are Boolean masking and polynomial masking. The latter lends itself to redundant masking, where leakage emanates from more shares than are strictly necessary to reconstruct, raising the obvious question of how well such “redundant” leakage can be exploited by a side-channel adversary. We revisit the recent work by Chabanne et al. (CHES’18) and show that, contrary to their conclusions, said leakage can, in theory, always be exploited. For the Hamming weight scenario in the low-noise regime, we heuristically determine how security degrades in terms of the number of redundant shares for first- and second-order secure polynomial masking schemes. Furthermore, we leverage a well-established link between linear secret sharing schemes and coding theory to determine when different masking schemes will end up with essentially equivalent leakage profiles. Surprisingly, we conclude that for typical field sizes and security orders, Boolean masking is a special case of polynomial masking. We also identify quasi-Boolean masking schemes as a special class of redundant polynomial masking and point out that the popular “Frobenius-stable” sets of interpolation points typically lead to such quasi-Boolean masking schemes, with subsequently degraded leakage performance.
2020
TCHES
Concrete quantum cryptanalysis of binary elliptic curves 📺
This paper analyzes and optimizes quantum circuits for computing discrete logarithms on binary elliptic curves, including reversible circuits for fixed-base-point scalar multiplication and the full stack of relevant subroutines. The main optimization target is the size of the quantum computer, i.e., the number of logical qubits required, as this appears to be the main obstacle to implementing Shor’s polynomial-time discrete-logarithm algorithm. The secondary optimization target is the number of logical Toffoli gates. For an elliptic curve over a field of 2^n elements, this paper reduces the number of qubits to 7n + ⌊log2(n)⌋ + 9. At the same time this paper reduces the number of Toffoli gates to 48n^3 + 8n^(log2(3)+1) + 352n^2·log2(n) + 512n^2 + O(n^(log2(3))) with double-and-add scalar multiplication, and a logarithmic factor smaller with fixed-window scalar multiplication. The number of CNOT gates is also O(n^3). Exact gate counts are given for various sizes of elliptic curves currently used for cryptography.
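As a quick worked example, the snippet below evaluates the headline qubit count 7n + ⌊log2(n)⌋ + 9 for a few field degrees matching binary curves in common use (the curve choices are ours):

```python
from math import floor, log2

def qubits(n):
    """Logical qubits from the paper's formula: 7n + floor(log2(n)) + 9."""
    return 7 * n + floor(log2(n)) + 9

for n in (233, 283, 571):   # degrees of e.g. NIST B-233, B-283, B-571
    print(f"n = {n}: {qubits(n)} qubits")
```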
2020
TCHES
A Hybrid-CPU-FPGA-based Solution to the Recovery of Sha256crypt-hashed Passwords 📺
This paper presents an accelerator design for the password recovery of sha256crypt based on hybrid CPU-FPGA devices. By applying the brute-force attack computation model proposed in this paper, we decompose the sha256crypt function into two types of operations, namely data dispatching and block transforming. The data dispatching operation generates message blocks and the block transforming operation transforms message blocks into digests. These two operations are efficiently accelerated by the customized data dispatch unit and the pipelined block transform unit, respectively. Difficulties of adopting the pipeline technique are also addressed with the following techniques. The group scheduling is used to solve the data dependency that stalls the pipeline. The look-ahead execution eliminates the uncertainty of the execution path. The data path pruning and spatial-temporal multiplexing reduce the resource overhead of non-computing units. The proposed accelerator design is implemented and evaluated on the Xilinx Zynq-7000 XC7Z030-3 SoC. Our experimental results show that the proposed accelerator can improve energy efficiency by 2.54x over the state-of-the-art password recovery tool Hashcat running on an NVIDIA GTX1080Ti GPU. Compared with the pure FPGA-based implementation in John-the-Ripper, the proposed accelerator improves energy efficiency by 1.64x and improves resource efficiency by 1.69x.
2020
TCHES
On the spectral features of robust probing security 📺
In this work we provide a spectral formalization of non-interference in the presence of glitches. Our goal is to present new theoretical and practical tools to reason about robust-d-probing security. We show that the current understanding of extended probes lends itself to probes that participate, during gadget composition, in the creation of additional extended probes. In turn, this enables a natural extension of non-interference definitions into robust ones to build a new reasoning framework that can formally explain some semi-formal results that have already appeared in the past and be used to synthesize new robust-d-SNI gadgets.
2020
TCHES
Plaintext: A Missing Feature for Enhancing the Power of Deep Learning in Side-Channel Analysis? Breaking multiple layers of side-channel countermeasures 📺
Deep learning (DL) has proven to be very effective for image recognition tasks, with a large body of research on various model architectures for object classification. Straightforward application of DL to side-channel analysis (SCA) has already shown promising success, with experimentation on open-source variable key datasets showing that secret keys can be revealed with hundreds of traces even in the presence of countermeasures. This paper aims to further improve the application of DL for SCA, by enhancing the power of DL when targeting the secret key of cryptographic algorithms protected with SCA countermeasures. We propose a new model, a CNN-based model with Plaintext feature extension (CNNP), together with multiple convolutional filter kernel sizes and structures with deeper and narrower neural networks, which has empirically proven its effectiveness by outperforming reference profiling attack methods such as template attacks (TAs), convolutional neural networks (CNNs) and multilayer perceptron (MLP) models. Our model generates state-of-the-art results when attacking the ASCAD variable-key database, which has a restricted number of training traces per key, recovering the key within 40 attack traces in comparison with the hundreds of traces required by straightforward machine learning (ML) application. During the profiling stage an attacker needs no additional knowledge on the implementation, such as the masking scheme or random mask values, only the ability to record the power consumption or electromagnetic field traces, plaintext/ciphertext and the key. Additionally, no heuristic pre-processing is required in order to break the high-order masking countermeasures of the target implementation.
2020
TCHES
Rejection Sampling Schemes for Extracting Uniform Distribution from Biased PUFs 📺
This paper presents an efficient fuzzy extractor (FE) construction for secure cryptographic key generation from physically unclonable functions (PUFs). The proposed FE, named acceptance-or-rejection (AR)-based FE, utilizes a new debiasing scheme to extract a uniform distribution from a biased PUF response. The proposed debiasing scheme employs the principle of rejection sampling, and can extract a longer debiased bit string compared to those of conventional debiasing schemes. In addition, the proposed AR-based FE is extended to ternary PUF responses (i.e., ternary encoding of a PUF response). These responses can be derived according to cell-wise reliability of the PUF and are promising for extraction of stable and high-entropy responses from common PUFs. The performance of the AR-based FEs is evaluated through an experimental simulation of PUF-based key generation and compared with conventional FEs. We confirm that the proposed AR-based FE can achieve the highest efficiency in terms of PUF and nonvolatile memory (NVM) sizes for various PUF conditions among the conventional counterparts. More precisely, the AR-based FE can realize a 128-bit key generation with an up to 55% smaller PUF size or an up to 72% smaller NVM size than other conventional FEs. In addition, the ternary AR-based FE is up to 55% more efficient than the binary version, and can also achieve up to 63% higher efficiency than conventional counterparts. Furthermore, we show that the AR-based FE can be applied to PUFs with local biases (e.g., biases depending on cell location in SRAM PUFs), unlike all the conventional schemes, for which only global (or identical) biases are assumed.
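The principle of rejection-sampling debiasing can be illustrated in a few lines of Python. This generic sketch accepts bits with bias-dependent probabilities so that accepted bits are exactly uniform; the AR-based FE in the paper additionally makes the accept/reject decisions reproducible from public helper data, which is not modelled here.

```python
import random

def debias(bits, p):
    """Given i.i.d. Bernoulli(p) bits, accept a 1 with probability
    min(1, (1-p)/p) and a 0 with probability min(1, p/(1-p)); conditioned on
    acceptance, the output bits are uniform."""
    acc1 = min(1.0, (1 - p) / p)
    acc0 = min(1.0, p / (1 - p))
    return [b for b in bits if random.random() < (acc1 if b else acc0)]

biased = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]
u = debias(biased, 0.7)
print(len(u), sum(u) / len(u))  # mean near 0.5, at the cost of ~40% discarded bits
```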
2020
TCHES
Second-Order Masked Lookup Table Compression Scheme 📺
Masking by lookup table randomisation is a well-known technique used to achieve side-channel attack resistance for software implementations, particularly against DPA attacks. The randomised table technique for first- and second-order security requires about m·2^n bits of RAM to store an (n,m)-bit masked S-box lookup table. Table compression helps in reducing the amount of memory required, and this is useful for highly resource-constrained IoT devices. Recently, Vadnala (CT-RSA 2017) proposed a randomised table compression scheme for first- and second-order security in the probing leakage model. This scheme reduces the RAM memory required by about a factor of 2^l, where l is a compression parameter. Vivek (Indocrypt 2017) demonstrated an attack against the second-order scheme of Vadnala. Hence achieving table compression at second and higher orders is an open problem. In this work, we propose a second-order secure randomised table compression scheme which works for any (n,m)-bit S-box. Our proposal is a variant of Vadnala’s scheme that is not only secure but also significantly improves the time-memory trade-off. Specifically, we improve the online execution time by a factor of 2^(n−l). Our proposed scheme is proved 2-SNI secure in the probing leakage model. We have implemented our method for AES-128 on a 32-bit ARM Cortex processor. We are able to reduce the memory required to store a randomised S-box table for second-order AES-128 implementation to 59 bytes.
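For orientation, the uncompressed baseline that such compression schemes start from is the classical first-order randomised table, sketched below for an (8,8)-bit S-box (identity table used as a placeholder): it occupies the full m·2^n bits (256 bytes here), which the compression parameter l then trades against online recomputation.

```python
import os

SBOX = bytes(range(256))  # placeholder S-box; any (8,8)-bit table works

def masked_table(r_in, r_out):
    """First-order randomised table T'(x) = S(x ^ r_in) ^ r_out: a lookup at
    the masked index never touches the unmasked value."""
    return bytes(SBOX[x ^ r_in] ^ r_out for x in range(256))

r_in, r_out = os.urandom(1)[0], os.urandom(1)[0]
T = masked_table(r_in, r_out)
x = 0x3A
assert T[x ^ r_in] == SBOX[x] ^ r_out  # output stays masked by r_out
```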
2020
TCHES
Improving the Performance of the Picnic Signature Scheme 📺
Picnic is a digital signature algorithm designed to provide security against attacks by quantum computers. The design uses only symmetric-key primitives, and is an efficient instantiation of the MPC-in-the-head paradigm. In this work, we explore the Picnic design in great detail. We investigate and benchmark different parameter choices and show that there exist better parameter choices than those in the current specification. We also present improvements to the MPC protocol that shorten signatures and reduce signing time. The proposed MPC changes tailor the protocol to the circuit of interest in Picnic, but may also be of independent interest. Taken together, these changes give a new instantiation of Picnic that signs messages 7.9 to 13.9 times faster, and verifies signatures 4.5 to 5.5 times faster than the existing “Picnic2” design, while having nearly the same signature sizes.
2020
TCHES
Faster Montgomery and double-add ladders for short Weierstrass curves 📺
The Montgomery ladder and Joye ladder are well-known algorithms for elliptic curve scalar multiplication with a regular structure. The Montgomery ladder is best known for its implementation on Montgomery curves, which requires 5M + 4S + 1m + 8A per scalar bit, and 6 field registers. Here (M, S, m, A) represent respectively field Multiplications, Squarings, multiplications by a curve constant, and Additions or subtractions. This ladder is also complete, meaning that it works on all input points and all scalars. Many protocols do not use Montgomery curves, but instead use prime-order curves in short Weierstrass form. These have historically been much slower, with ladders costing at least 14 multiplications or squarings per bit: 8M + 6S + 27A for the Montgomery ladder and 8M + 6S + 30A for the Joye ladder. In 2017, Kim et al. improved the Montgomery ladder to 8M + 4S + 12A + 1H per bit using 9 registers, where the H represents a halving. Hamburg simplified Kim et al.’s formulas to 8M + 4S + 8A + 1H per bit using 6 registers. Here we present improved formulas which compute the Montgomery ladder on short Weierstrass curves using 8M + 3S + 7A per bit, and requiring 6 registers. We also give formulas for the Joye ladder that use 9M + 3S + 7A per bit, requiring 5 registers. One of our new formulas supports very efficient 4-way vectorization. We also discuss curve invariants, exceptional points, side-channel protection and how to set up and finish these ladder operations. Finally, we show a novel technique to make these ladders complete when the curve order is not divisible by 2 or 3, at a modest increase in cost. A sample implementation of these techniques is given in the supplementary material, also posted at https://github.com/bitwiseshiftleft/ladder_formulas
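The sketch below shows only the control-flow skeleton that all these ladders share: one point addition and one doubling per scalar bit, in the same order regardless of the bit value. It uses textbook affine formulas over a toy curve of our choosing, ignores some exceptional cases, and branches on the bit for readability where a real implementation would use a constant-time conditional swap; it does not reproduce the optimized 8M + 3S + 7A ladder-step formulas.

```python
# Toy short Weierstrass curve y^2 = x^3 + ax + b over GF(p); P = (3, 6) lies on it.
p, a, b = 97, 2, 3
O = None  # point at infinity

def add(P, Q):
    """Textbook affine addition/doubling, ignoring some exceptional cases."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ladder(k, P, bits):
    """Montgomery-ladder skeleton: one addition and one doubling per scalar
    bit, in the same order whatever the bit is; a hardened implementation
    replaces the branch with a constant-time conditional swap."""
    R0, R1 = O, P
    for i in reversed(range(bits)):
        if (k >> i) & 1:
            R0, R1 = add(R0, R1), add(R1, R1)
        else:
            R0, R1 = add(R0, R0), add(R0, R1)
    return R0

print(ladder(5, (3, 6), 3))  # 5P on the toy curve
```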
2020
TCHES
Modeling Soft Analytical Side-Channel Attacks from a Coding Theory Viewpoint 📺
One important open question in side-channel analysis is to find out whether all the leakage samples in an implementation can be exploited by an adversary, as suggested by masking security proofs. For attacks exploiting a divide-and-conquer strategy, the answer is negative: only the leakages corresponding to the first/last rounds of a block cipher can be exploited. Soft Analytical Side-Channel Attacks (SASCA) have been introduced as a powerful solution to mitigate this limitation. They represent the target implementation and its leakages as a code (similar to a Low Density Parity Check code) that is decoded thanks to belief propagation. Previous works have shown the low data complexities that SASCA can reach in practice. In this paper, we revisit these attacks by modeling them with a variation of the Random Probing Model used in masking security proofs, that we denote as the Local Random Probing Model (LRPM). Our study establishes interesting connections between this model and the erasure channel used in coding theory, leading to the following benefits. First, the LRPM allows bounding the security of concrete implementations against SASCA in a fast and intuitive manner. We use it in order to confirm that the leakage of any operation in a block cipher can be exploited, although the leakages of external operations dominate in known-plaintext/ciphertext attack scenarios. Second, we show that the LRPM is a tool of choice for the (nearly worst-case) analysis of masked implementations in the noisy leakage model, taking advantage of all the operations performed, and leading to new tradeoffs between their amount of randomness and physical noise level. Third, we show that it can considerably speed up the evaluation of other countermeasures such as shuffling.
2020
TCHES
RISQ-V: Tightly Coupled RISC-V Accelerators for Post-Quantum Cryptography 📺
Empowering electronic devices to support Post-Quantum Cryptography (PQC) is a challenging task. PQC introduces new mathematical elements and operations which are usually not easy to implement on standard processors. Especially for low-cost and resource-constrained devices, hardware acceleration is usually required. In addition, as the standardization process of PQC is still ongoing, a focus on maintaining flexibility is mandatory. To cope with such requirements, hardware/software co-design techniques have recently been used for developing complex and highly customized PQC solutions. However, while most of the previous works have developed loosely coupled PQC accelerators, the design of tightly coupled accelerators and Instruction Set Architecture (ISA) extensions for PQC has been barely explored. To this end, we present RISQ-V, an enhanced RISC-V architecture that integrates a set of powerful tightly coupled accelerators to speed up lattice-based PQC. RISQ-V efficiently reuses processor resources and reduces the amount of memory accesses. This significantly increases the performance while keeping the silicon area overhead low. We present three contributions. First, we propose a set of powerful hardware accelerators deeply integrated into the RISC-V pipeline. Second, we extend the RISC-V ISA with 29 new instructions to efficiently perform operations for lattice-based cryptography. Third, we implement our RISQ-V in ASIC technology and on FPGA. We evaluate the performance of NewHope, Kyber, and Saber on RISQ-V. Compared to the pure software implementation on RISC-V, our co-design implementations show a speedup factor of up to 11.4 for NewHope, 9.6 for Kyber, and 2.7 for Saber. For the ASIC implementation, the energy consumption was reduced by factors of up to 9.5 for NewHope, 7.7 for Kyber, and 2.1 for Saber. The cell count of the CPU was increased by a factor of 1.6 compared to the original RISC-V design, which can be considered a moderate increase for the achieved performance gain.
2020
TCHES
Minerva: The curse of ECDSA nonces. Systematic analysis of lattice attacks on noisy leakage of bit-length of ECDSA nonces 📺
Best Paper CHES 2020
We present our discovery of a group of side-channel vulnerabilities in implementations of the ECDSA signature algorithm in a widely used Atmel AT90SC FIPS 140-2 certified smartcard chip and five cryptographic libraries (libgcrypt, wolfSSL, MatrixSSL, SunEC/OpenJDK/Oracle JDK, Crypto++). Vulnerable implementations leak the bit-length of the scalar used in scalar multiplication via timing. Using the leaked bit-length, we mount a lattice attack on a 256-bit curve, after observing enough signing operations. We propose two new methods to recover the full private key, requiring just 500 signatures for simulated leakage data, 1200 for real cryptographic library data, and 2100 for smartcard data. The number of signatures needed for a successful attack depends on the chosen method and its parameters as well as on the noise profile, influenced by the type of leakage and the computation platform used. We use the set of vulnerabilities reported in this paper, together with the recently published TPM-FAIL vulnerability [MSE+20], as a basis for real-world benchmark datasets to systematically compare our newly proposed methods and all previously published applicable lattice-based key recovery methods. The resulting exhaustive comparison highlights the methods’ sensitivity to their proper parametrization and demonstrates that our methods are more efficient in most cases. For the TPM-FAIL dataset, we decreased the number of required signatures from approximately 40 000 to a mere 900.
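To make the lattice step concrete, here is a sketch of how signatures with bounded nonces become a Hidden Number Problem instance, together with the standard (d+2)-dimensional Boneh–Venkatesan-style basis built from it. This is the textbook embedding, with names and the Fraction-based scaling chosen by us; an actual attack would clear denominators and run LLL/BKZ, e.g. via fpylll (not called here).

```python
from fractions import Fraction

def hnp_samples(sigs, hashes, n):
    """Each ECDSA signature (r, s) on hash h satisfies
    k = s^-1*h + s^-1*r*x (mod n); with k known to be short, this yields one
    HNP sample (t, u) with k - t*x - u = 0 (mod n)."""
    samples = []
    for (r, s), h in zip(sigs, hashes):
        s_inv = pow(s, -1, n)
        samples.append((s_inv * r % n, s_inv * h % n))
    return samples

def hnp_basis(samples, n, b):
    """Classic (d+2)-dimensional embedding: the vector
    (k_1, ..., k_d, B*x/n, B) with B = 2^b bounding the nonces is an
    unusually short lattice point, so lattice reduction recovers x."""
    d = len(samples)
    B = Fraction(1 << b)
    M = [[Fraction(0)] * (d + 2) for _ in range(d + 2)]
    for i in range(d):
        M[i][i] = Fraction(n)
        M[d][i] = Fraction(samples[i][0])      # t_i
        M[d + 1][i] = Fraction(samples[i][1])  # u_i
    M[d][d] = B / n
    M[d + 1][d + 1] = B
    return M  # clear denominators, then reduce with LLL/BKZ
```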
2020
TCHES
DANA: Universal Dataflow Analysis for Gate-Level Netlist Reverse Engineering 📺
Reverse engineering of integrated circuits, i.e., understanding the internals of Integrated Circuits (ICs), is required for many benign and malicious applications. Examples of the former are detection of patent infringements, hardware Trojans or Intellectual Property (IP) theft, as well as interface recovery and defect analysis, while malicious applications include IP theft and finding insertion points for hardware Trojans. However, regardless of the application, the reverse engineer initially starts with a large unstructured netlist, forming an incomprehensible sea of gates. This work presents DANA, a generic, technology-agnostic, and fully automated dataflow analysis methodology for flattened gate-level netlists. By analyzing the flow of data between individual Flip Flops (FFs), DANA recovers high-level registers. The key idea behind DANA is to combine independent metrics based on structural and control information with a powerful automated architecture. Notably, DANA works without any thresholds, scenario-dependent parameters, or other “magic” values that the user must choose. We evaluate DANA on nine modern hardware designs, ranging from cryptographic co-processors, over CPUs, to the OpenTitan, a state-of-the-art System-on-Chip (SoC), which is maintained by the lowRISC initiative with supporting industry partners like Google and Western Digital. Our results demonstrate almost perfect recovery of registers for all case studies, regardless of whether they were synthesized as FPGA or ASIC netlists. Furthermore, we explore two applications for dataflow analysis: we show that the raw output of DANA often already allows the identification of crucial components and high-level architecture features, and we also demonstrate its applicability for detecting simple hardware Trojans. Hence, DANA can be applied universally as the first step when investigating unknown netlists and provides major guidance for human analysts by structuring and condensing the otherwise incomprehensible sea of gates. Our implementation of DANA and all synthesized netlists are available as open source on GitHub.
2020
TCHES
Strength in Numbers: Improving Generalization with Ensembles in Machine Learning-based Profiled Side-channel Analysis 📺
The adoption of deep neural networks for profiled side-channel attacks provides powerful options for leakage detection and key retrieval of secure products. When training a neural network for side-channel analysis, it is expected that the trained model can implement an approximation function that can detect leaking side-channel samples and, at the same time, be insensitive to noisy (or non-leaking) samples. This outlines a generalization situation where the model can identify the main representations learned from the training set in a separate test set. This paper discusses how output class probabilities represent a strong metric when conducting side-channel analysis. Further, we observe that these output probabilities are sensitive to small changes, like selecting specific test traces or the weight initialization of a neural network. Next, we discuss hyperparameter tuning, where one commonly uses only a single model out of dozens of trained models, each of which will result in different output probabilities. We show how ensembles of machine learning models based on averaged class probabilities can improve generalization. Our results emphasize that ensembles increase a profiled side-channel attack’s performance and reduce the variance of results stemming from different hyperparameters, regardless of the selected dataset or leakage model.
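A minimal sketch of the ensemble strategy, with our own function names and data layout: class probabilities from several trained models are averaged, and the averaged probabilities are then accumulated into per-key-guess log-likelihoods for ranking.

```python
import numpy as np

def ensemble_key_ranking(prob_sets, labels_for_guess):
    """prob_sets: list of (n_traces, n_classes) softmax outputs, one per
    trained model; labels_for_guess[g][i]: class of trace i under key guess g.
    Averages the models' class probabilities, then accumulates per-guess
    log-likelihoods and ranks the guesses."""
    avg = np.clip(np.mean(prob_sets, axis=0), 1e-40, 1.0)
    scores = [np.log(avg[np.arange(len(lab)), lab]).sum()
              for lab in labels_for_guess]
    return np.argsort(scores)[::-1]  # best-supported key guess first
```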
2020
TCHES
Retrofitting Leakage Resilient Authenticated Encryption to Microcontrollers 📺
The security of Internet of Things (IoT) devices relies on fundamental concepts such as cryptographically protected firmware updates. In this context attackers usually have physical access to a device and therefore side-channel attacks have to be considered. This makes the protection of required cryptographic keys and implementations challenging, especially for commercial off-the-shelf (COTS) microcontrollers that typically have no hardware countermeasures. In this work, we demonstrate how unprotected hardware AES engines of COTS microcontrollers can be efficiently protected against side-channel attacks by constructing a leakage resilient pseudo random function (LR-PRF). Using this side-channel protected building block, we implement a leakage resilient authenticated encryption with associated data (AEAD) scheme that enables secured firmware updates. We use concepts from leakage resilience to retrofit side-channel protection on unprotected hardware AES engines by means of software-only modifications. The LR-PRF construction leverages frequent key changes and low data complexity together with key dependent noise from parallel hardware to protect against side-channel attacks. Contrary to most other protection mechanisms such as time-based hiding, no additional true randomness is required. Our concept relies on parallel S-boxes in the AES hardware implementation, a feature that is fortunately present in many microcontrollers as a measure to increase performance. In a case study, we implement the protected AEAD scheme for two popular ARM Cortex-M microcontrollers with differing parallelism. We evaluate the protection capabilities in realistic IoT attack scenarios, where non-invasive EM probes or power consumption measurements are employed by the attacker. We show that the concept provides the side-channel hardening that is required for the long-term security of IoT devices.
2020
TCHES
Remove Some Noise: On Pre-processing of Side-channel Measurements with Autoencoders 📺
In profiled side-channel analysis, deep learning-based techniques have proven to be very successful even when attacking targets protected with countermeasures. Still, there is no guarantee that deep learning attacks will always succeed. Various countermeasures make attacks significantly more complex, and such countermeasures can be further combined to make the attacks even more challenging. An intuitive solution to improve the performance of attacks would be to reduce the effect of countermeasures. This paper investigates whether we can consider certain types of hiding countermeasures as noise and then use a deep learning technique called the denoising autoencoder to remove that noise. We conduct a detailed analysis of six different types of noise and countermeasures, separately or combined, and show that the denoising autoencoder improves the attack performance significantly.
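As an illustration, here is a small convolutional denoising autoencoder in Keras of the general kind used for such pre-processing; the trace length, layer sizes and hyperparameters are arbitrary choices of ours, not the paper's architecture. It is trained on (noisy trace, clean trace) pairs and then applied to measurements before the actual attack.

```python
from tensorflow.keras import layers, models

TRACE_LEN = 700  # arbitrary trace length (divisible by 4 for the two poolings)

inp = layers.Input(shape=(TRACE_LEN, 1))
x = layers.Conv1D(32, 11, padding="same", activation="relu")(inp)
x = layers.MaxPooling1D(2, padding="same")(x)
x = layers.Conv1D(16, 11, padding="same", activation="relu")(x)
encoded = layers.MaxPooling1D(2, padding="same")(x)           # bottleneck
x = layers.Conv1D(16, 11, padding="same", activation="relu")(encoded)
x = layers.UpSampling1D(2)(x)
x = layers.Conv1D(32, 11, padding="same", activation="relu")(x)
x = layers.UpSampling1D(2)(x)
out = layers.Conv1D(1, 11, padding="same")(x)                 # reconstructed trace

autoencoder = models.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(noisy_traces, clean_traces, epochs=50, batch_size=128)
# denoised = autoencoder.predict(attack_traces)  # then run the usual attack
```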
2020
TCHES
Unrolled Cryptography on Silicon: A Physical Security Analysis 📺
Cryptographic primitives with low-latency performance have gained momentum lately due to an increased demand for real-time applications. Block ciphers such as PRINCE enable data encryption (resp. decryption) within a single clock cycle at a moderately high operating frequency when implemented in a fully-unrolled fashion. Unsurprisingly, many typical environments for unrolled ciphers require protection against physical adversaries as well. Yet, recent works suggest that most common SCA countermeasures are hard to apply to low-latency circuits. Hardware masking, for example, requires register stages to offer resistance, thus adding delay and defeating the purpose of unrolling. On another note, it has been indicated that unrolled primitives without any additional means of protection offer an intrinsic resistance to SCA attacks due to their parallelism, asynchronicity and speed of execution. In this work, we take a closer look at the physical security properties provided by unrolled cryptographic IC implementations. We are able to confirm that the nature of unrolling indeed bears the potential to decrease the susceptibility of cipher implementations significantly when reset methods are applied. With respect to certain adversarial models, e.g., ciphertext-only access, an amazingly high level of protection can be achieved. While this seems to be a great result for cryptographic hardware engineers, there is an attack vector hidden in plain sight which still threatens the security of unrolled implementations remarkably – namely the static power consumption of CMOS-based circuits. We point out that essentially all reasons which make it hard to extract meaningful information from the dynamic behavior of unrolled primitives are not an issue when exploiting the static currents for key recovery. Our evaluation is based on real-silicon measurements of an unrolled PRINCE core in a custom 40nm ASIC. The presented results serve as a neat educational case study to demonstrate the broad differences between dynamic and static power information leakage in the light of technological advancement.
2020
TCHES
High-speed Instruction-set Coprocessor for Lattice-based Key Encapsulation Mechanism: Saber in Hardware 📺
In this paper, we present an instruction set coprocessor architecture for lattice-based cryptography and implement the module lattice-based post-quantum key encapsulation mechanism (KEM) Saber as a case study. To achieve fast computation time, the architecture is fully implemented in hardware, including CCA transformations. Since polynomial multiplication plays a performance-critical role in the module and ideal lattice-based public-key cryptography, a parallel polynomial multiplier architecture is proposed that overcomes memory access bottlenecks and results in a highly parallel yet simple and easy-to-scale design. Such multipliers can compute a full multiplication in 256 cycles, but are designed to target any area/performance trade-offs. Besides optimizing polynomial multiplication, we make important design decisions and perform architectural optimizations to reduce the overall cycle counts as well as improve resource utilization. For the module dimension 3 (security comparable to AES-192), the coprocessor computes CCA key generation, encapsulation, and decapsulation in only 5,453, 6,618 and 8,034 cycles respectively, making it the fastest hardware implementation of Saber to our knowledge. On a Xilinx UltraScale+ XCZU9EG-2FFVB1156 FPGA, the entire instruction set coprocessor architecture runs at 250 MHz clock frequency and consumes 23,686 LUTs, 9,805 FFs, and 2 BRAM tiles (including 5,113 LUTs and 3,068 FFs for the Keccak core).
2020
TCHES
Strengthening Sequential Side-Channel Attacks Through Change Detection 📺
The sequential structure of some side-channel attacks makes them subject to error propagation, i.e. when an error occurs during the recovery of some part of a secret key, all the following guesses might as well be chosen randomly. We propose a methodology that strengthens sequential attacks by automatically identifying and correcting errors. The core ingredient of our methodology is a change-detection test that monitors the distribution of the distinguisher values used to reconstruct the secret key. Our methodology includes an error-correction procedure that can cope both with false positives of the change-detection test, and inaccuracies of the estimated location of the wrong key guess. The proposed methodology is general and can be included in several attacks. As meaningful examples, we conduct two different side-channel attacks against RSA-2048: a horizontal power-analysis attack based on correlation and a vertical timing attack. Our experiments show that, in all the considered cases, strengthened attacks outperform their original counterparts and alternative solutions that are based on thresholds. In particular, strengthened attacks achieve high success rates even when the side-channel measurements are noisy or limited in number, without prohibitively increasing the computing time.
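The methodology's core ingredient can be approximated with a standard change-detection test. Below is a plain two-sided CUSUM over the stream of distinguisher values, our simplified stand-in for the paper's test; the slack and threshold values are illustrative tuning parameters.

```python
def cusum(values, mean_ref, slack=0.5, threshold=8.0):
    """Two-sided CUSUM over the stream of distinguisher values produced while
    recovering key parts in sequence: returns the index where their
    distribution appears to shift, i.e. where a wrong guess likely started
    propagating; returns None if no change is detected."""
    hi = lo = 0.0
    for i, v in enumerate(values):
        hi = max(0.0, hi + (v - mean_ref) - slack)
        lo = max(0.0, lo - (v - mean_ref) - slack)
        if hi > threshold or lo > threshold:
            return i  # back-track and re-guess around this position
    return None
```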
2020
TCHES
Investigating Profiled Side-Channel Attacks Against the DES Key Schedule 📺
Recent publications describe profiled single trace side-channel attacks (SCAs) against the DES key-schedule of a “commercially available security controller”. They report a significant reduction of the average remaining entropy of cryptographic keys after the attack, with surprisingly large, key-dependent variations of attack results, and individual cases with remaining key entropies as low as a few bits. Unfortunately, they leave important questions unanswered: Are the reported wide distributions of results plausible - can this be explained? Are the results device-specific or more generally applicable to other devices? What is the actual impact on the security of 3-key triple DES? We systematically answer those and several other questions by analyzing two commercial security controllers and a general purpose microcontroller. We observe a significant overall reduction and, importantly, also observe a large key-dependent variation in single DES key security levels, i.e. 49.4 bit mean and 0.9 % of keys < 40 bit (first investigated security controller; other results similar). We also observe a small fraction of keys with exceptionally low security levels that can be called weak keys. It is unclear whether a device’s side-channel security should be assessed based on such rare weak key outliers. We generalize results to other leakage models by attacking the hardware DES accelerator of a general purpose microcontroller exhibiting a different leakage model. A highly simplified leakage simulation also confirms the wide distribution and shows that security levels are predictable to some extent. Through extensive investigations we find that the actual weakness of keys mainly stems from the specific switching noise they cause. Based on our investigations we expect that widely distributed results and weak outliers should be expected for all profiled attacks against (insufficiently protected) key-schedules, regardless of the algorithm and specific implementation. Finally, we describe a sound approach to estimate actual 3-key triple-DES security levels from empirical single DES results and find that the impact on the security of 3-key triple-DES is limited, i.e. 96.1 bit mean and 0.24 % of key-triples < 80 bit for the same security controller.
2020
TCHES
A Novel Evaluation Metric for Deep Learning-Based Side Channel Analysis and Its Extended Application to Imbalanced Data 📺
Since Kocher (CRYPTO’96) proposed the timing attack, side channel analysis (SCA) has shown great potential to break cryptosystems via physical leakage. Recently, deep learning techniques have been widely used in SCA and show equivalent and even better performance compared to traditional methods. However, it remains unknown why and when deep learning techniques are effective and efficient for SCA. Masure et al. (IACR TCHES 2020(1):348–375) illustrated that the deep learning paradigm is suitable for evaluating implementations against SCA from a worst-case scenario point of view, yet their work is limited to balanced data and a specific loss function. Besides, deep learning metrics are not consistent with side channel metrics. In most cases, they are deceptive in foreseeing the feasibility and complexity of mounting a successful attack, especially for imbalanced data. To mitigate the gap between deep learning metrics and side channel metrics, we propose a novel Cross Entropy Ratio (CER) metric to evaluate the performance of deep learning models for SCA. CER is closely related to the traditional side channel metrics Guessing Entropy (GE) and Success Rate (SR) and fits the deep learning scenario. Besides, we show that it works stably, while deep learning metrics such as accuracy become rather unreliable when the training data tends to be imbalanced. However, estimating CER can be done as easily as natural metrics in deep learning algorithms, with low computational complexity. Furthermore, we adapt the CER metric to a new kind of loss function, namely the CER loss function, designed specifically for deep learning in the side channel scenario. In this way, we link the SCA objective directly to deep learning optimization. Our experiments on several datasets show that, for SCA with imbalanced data, the CER loss function outperforms the Cross Entropy loss function in various conditions.
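A small sketch of how such a metric can be computed from a trained model's output probabilities, following the description above (cross entropy under the correct key hypothesis divided by the average over wrong hypotheses); array layouts and names are our own.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Empirical cross entropy of model outputs against a labelling."""
    p = np.clip(probs[np.arange(len(labels)), labels], 1e-40, 1.0)
    return float(-np.log(p).mean())

def cer(probs, labels_per_key, correct_key):
    """Cross entropy under the correct key hypothesis divided by the average
    cross entropy under the wrong ones; values well below 1 indicate the
    model actually separates the correct key from the alternatives."""
    ce_right = cross_entropy(probs, labels_per_key[correct_key])
    ce_wrong = np.mean([cross_entropy(probs, labels_per_key[k])
                        for k in range(len(labels_per_key))
                        if k != correct_key])
    return ce_right / ce_wrong
```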
2020
TCHES
Splitting the Interpose PUF: A Novel Modeling Attack Strategy 📺
We demonstrate that the Interpose PUF proposed at CHES 2019, an Arbiter PUF-based design for so-called Strong Physical Unclonable Functions (PUFs), can be modeled by novel machine learning strategies up to very substantial sizes and complexities. Our attacks require in the most difficult cases considerable, but realistic, numbers of CRPs, while consuming only moderate computation times, ranging from few seconds to few days. The attacks build on a new divide-and-conquer approach that allows us to model the two building blocks of the Interpose PUF separately. For non-reliability based Machine Learning (ML) attacks, this eventually leads to attack times on (k_up, k_down)-Interpose PUFs that are comparable to the ones against max{k_up, k_down}-XOR Arbiter PUFs, refuting the original claim that Interpose PUFs could provide security similar to (k_down + k_up/2)-XOR Arbiter PUFs (CHES 2019). On the technical side, our novel divide-and-conquer technique might also be useful in analyzing other designs, where XOR Arbiter PUF challenge bits are unknown to the attacker.
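Such modeling attacks rest on the well-known additive delay model of the Arbiter PUF, in which each component is a linear threshold function of parity features of the challenge. A minimal NumPy sketch of that component model (toy 64-stage instance with Gaussian weights of our choosing):

```python
import numpy as np

def features(challenge):
    """Parity feature map of the additive delay model:
    phi_i = prod_{j >= i} (1 - 2*c_j), plus a constant 1 for the bias."""
    c = 1 - 2 * np.asarray(challenge)   # {0,1} -> {+1,-1}
    return np.append(np.cumprod(c[::-1])[::-1], 1.0)

def arbiter_response(w, challenge):
    """One Arbiter PUF as a linear threshold function of the features; XOR
    and Interpose constructions combine several such components, which is
    what divide-and-conquer modeling exploits."""
    return int(np.dot(w, features(challenge)) > 0)

rng = np.random.default_rng(1)
w = rng.normal(size=65)                             # 64 stage weights + bias
print(arbiter_response(w, rng.integers(0, 2, 64)))
```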
2020
TCHES
CPAmap: On the Complexity of Secure FPGA Virtualization, Multi-Tenancy, and Physical Design 📺
With virtualized Field Programmable Gate Arrays (FPGAs) on the verge of being deployed to the cloud computing domain, there is a rising interest in resolving recently identified security issues. Those issues result from different trusted and untrusted entities sharing the FPGA fabric and the Power Distribution Network. Researchers were able to perform both side-channel and fault attacks between logically isolated designs on the same FPGA fabric, compromising security of cryptographic modules and other critical implementations. Side-channel attacks specifically are enabled by the vast degree of freedom given to developers when making use of the basic FPGA resources. Both ring oscillators as well as long delay lines, implemented using low-level FPGA primitives, have been shown to provide sufficient data for simple or correlation-based power analysis attacks. In order to develop new or apply known countermeasures onto designs and implementations in a virtualized multi-tenant FPGA, we seek to fully understand the underlying mechanisms and dependencies of chip-internal side-channel attacks. Although the impact of process variation and other physical design parameters on side-channel vulnerability has been investigated in previous works, remote attacks between logically isolated partitions in multi-tenant FPGAs introduce new and unique challenges. Thus, we systematically analyze the impact of physical mapping of both attacker and victim design on the success of correlation power analysis attacks on the Advanced Encryption Standard (AES). We report our findings on a Xilinx Zynq 7000-based platform, which show that the effect of global and local placement as well as routing and process variation on the success of side-channel attacks almost exceeds the impact of hiding countermeasures. This result reveals fundamental challenges in secure virtualization of FPGAs, which have been mostly ignored so far. Eventually, our results may also help vendors and hypervisors in developing zero overhead side-channel countermeasures based on adequate global and local placement of isolated designs on a multi-tenant FPGA.
2020
TCHES
Revisiting a Methodology for Efficient CNN Architectures in Profiling Attacks 📺
This work provides a critical review of the paper by Zaid et al. titled “Methodology for Efficient CNN Architectures in Profiling attacks”, which was published in TCHES Volume 2020, Issue 1. That work studies the design of CNN networks to perform side-channel analysis of multiple implementations of the AES for embedded devices. Based on the authors’ code and public data sets, we were able to cross-check their results and perform a thorough analysis. We correct multiple misconceptions by carefully inspecting different elements of the model architectures proposed by Zaid et al. First, by providing a better understanding of the internal workings of these models, we can trivially reduce their number of parameters on average by 52%, while maintaining a similar performance. Second, we demonstrate that the convolutional filter’s size is not strictly related to the amount of misalignment in the traces. Third, we show that increasing the filter size and the number of convolutions actually improves the performance of a network. Our work demonstrates once again that reproducibility and review are important pillars of academic research. Therefore, we provide the reader with an online Python notebook that allows some of our experiments to be reproduced, and additional example code is made available on GitHub.
2020
TCHES
JackHammer: Efficient Rowhammer on Heterogeneous FPGA-CPU Platforms 📺
After years of development, FPGAs are finally making an appearance on multi-tenant cloud servers. Heterogeneous FPGA-CPU microarchitectures require reassessment of common assumptions about isolation and security boundaries, as they introduce new attack vectors and vulnerabilities. In this work, we analyze the memory and cache subsystem and study Rowhammer and cache attacks enabled by two proposed heterogeneous FPGA-CPU platforms from Intel: the Arria 10 GX with an integrated FPGA-CPU platform, and the Arria 10 GX PAC expansion card which connects the FPGA to the CPU via the PCIe interface. We demonstrate JackHammer, a novel, efficient, and stealthy Rowhammer attack from the FPGA to the host’s main memory. Our results indicate that a malicious FPGA can perform twice as fast as a typical Rowhammer attack from the CPU on the same system and causes around four times as many bit flips as the CPU attack. We demonstrate the efficacy of JackHammer from the FPGA through a realistic fault attack on the WolfSSL RSA signing implementation that reliably causes a fault after an average of fifty-eight RSA signatures, 25% faster than a CPU Rowhammer attack. In some scenarios our JackHammer attack produces faulty signatures more than three times more often and almost three times faster than a conventional CPU Rowhammer attack. Finally, we systematically analyze new cache attacks in these environments following demonstration of a cache covert channel across FPGA and CPU.
2020
TCHES
The Long and Winding Path to Secure Implementation of GlobalPlatform SCP10 📺
GlobalPlatform (GP) card specifications define rigorous security requirements for smart cards. Increasingly powerful cards within an open ecosystem of multiple players make asymmetric-key protocols necessary. In this paper, we analyze SCP10, the Secure Channel Protocol (SCP) that relies on RSA for key exchange and authentication. Our findings are twofold. First, we demonstrate several flaws in the design of SCP10. We discuss the scope of the identified flaws by presenting several attack scenarios in which a malicious attacker can recover all the messages protected by SCP10. We provide a full implementation of these attacks. For instance, an attacker can obtain the freshly generated session keys in less than three hours. Second, we propose a secure implementation of SCP10 and discuss how it can mitigate the discovered flaws. Finally, we measure the overhead incurred by the implemented countermeasures.
2020
TCHES
ISA Extensions for Finite Field Arithmetic: Accelerating Kyber and NewHope on RISC-V 📺
We present and evaluate a custom extension to the RISC-V instruction set for finite field arithmetic. The result serves as a very compact approach to software-hardware co-design of PQC implementations in the context of small embedded processors such as smartcards. The extension provides instructions that implement finite field operations with subsequent reduction of the result. As small finite fields are used in various PQC schemes, such instructions can provide a considerable speedup for an otherwise software-based implementation. Furthermore, we create a prototype implementation of the presented instructions for the extendable VexRiscv core, integrate the result into a chip design, and evaluate the design on two different FPGA platforms. The effectiveness of the extension is evaluated by using the instructions to optimize the Kyber and NewHope key-encapsulation schemes. To that end, we also present an optimized software implementation for the standard RISC-V instruction set for the polynomial arithmetic underlying those schemes, which serves as a basis for comparison. Both variants are tuned on an assembler level to optimally use the processor pipelines of contemporary RISC-V CPUs. The result shows a speedup for the polynomial arithmetic of up to 85% over the basic software implementation. Using the custom instructions drastically reduces the code and data size of the implementation without introducing runtime-performance penalties, at a small cost in circuit size. When used in the selected schemes, the custom instructions can be used to replace a full general purpose multiplier to achieve very compact implementations.
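To illustrate what such an instruction buys, the sketch below gives plausible semantics for a fused multiply-and-reduce operation and uses it in the Cooley–Tukey butterfly that dominates the Kyber/NewHope NTTs. The instruction semantics and names are our illustration, not the actual encoding proposed in the paper.

```python
KYBER_Q, NEWHOPE_Q = 3329, 12289

def mulmod(ra, rb, q):
    """Possible semantics of a fused multiply-and-reduce instruction: one
    instruction standing in for a multiply followed by a Barrett/Montgomery
    reduction sequence in the NTT inner loop."""
    return (ra * rb) % q

def ntt_butterfly(a, b, zeta, q):
    """Cooley-Tukey butterfly, the operation dominating Kyber/NewHope NTTs."""
    t = mulmod(zeta, b, q)
    return (a + t) % q, (a - t) % q

print(ntt_butterfly(5, 7, 17, KYBER_Q))
```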
2020
TCHES
Single-Trace Attacks on Keccak 📺
Since its selection as the winner of the SHA-3 competition, Keccak, with all its variants, has found a large number of applications. It is, for instance, a common building block in schemes submitted to NIST's post-quantum cryptography project. In many of these applications, Keccak processes ephemeral secrets. In such a setting, side-channel adversaries are limited to a single observation, meaning that differential attacks are inherently prevented. Whether a single trace of Keccak can already be sufficient for key recovery, however, has so far been unknown. In this paper, we change the above by presenting the first single-trace attack targeting Keccak. Our method is based on soft-analytical side-channel attacks and thus combines template matching with message passing in a graphical model of the attacked algorithm. As a straightforward model of Keccak does not yield satisfactory results, we describe several optimizations for the modeling and the message-passing algorithm. Their combination allows attaining high attack performance in terms of both success rate and computational runtime. We evaluate our attack assuming generic software (microcontroller) targets and thus use simulations in the generic noisy Hamming-weight leakage model. Hence, we assume relatively modest profiling capabilities of the adversary. Nonetheless, the attack can reliably recover secrets in a large number of evaluated scenarios at realistic noise levels. Consequently, we demonstrate the need for countermeasures even in settings where DPA is not a threat.
2020
TCHES
Parameterized Hardware Accelerators for Lattice-Based Cryptography and Their Application to the HW/SW Co-Design of qTESLA 📺
This paper presents a set of efficient and parameterized hardware accelerators that target post-quantum lattice-based cryptographic schemes, including a versatile cSHAKE core, a binary-search CDT-based Gaussian sampler, and a pipelined NTT-based polynomial multiplier, among others. Unlike much prior work, the accelerators are fully open-sourced, are designed to be constant-time, and can be parameterized at compile-time to support different parameters without re-writing the hardware implementation. These flexible, publicly available accelerators are leveraged to demonstrate the first RISC-V hardware-software co-design of the post-quantum lattice-based signature scheme qTESLA with provably secure parameters. In particular, this work demonstrates that NIST's Round 2 level 1 and level 3 qTESLA variants achieve a 40-100x speedup for key generation, about a 10x speedup for signing, and about a 16x speedup for verification, compared to the baseline RISC-V software-only implementation. For instance, this corresponds to execution in 7.7, 34.4, and 7.8 milliseconds for key generation, signing, and verification, respectively, for qTESLA's level 1 parameter set on an Artix-7 FPGA, demonstrating the feasibility of the scheme for embedded applications.
2020
TCHES
Generic Side-channel attacks on CCA-secure lattice-based PKE and KEMs 📺
In this work, we demonstrate generic and practical EM side-channel assisted chosen-ciphertext attacks on multiple LWE/LWR-based Public Key Encryption (PKE) schemes and Key Encapsulation Mechanisms (KEMs) secure in the chosen ciphertext model (IND-CCA security). We show that the EM side-channel information can be efficiently utilized to instantiate a plaintext checking oracle, which provides binary information about the output of decryption, typically concealed within IND-CCA secure PKE/KEMs, thereby enabling our attacks. First, we identify EM-based side-channel vulnerabilities in the error correcting codes (ECC), enabling us to distinguish based on the value/validity of decrypted codewords. We also identify similar vulnerabilities in the Fujisaki-Okamoto transform, which leaks information about decrypted messages and applies to schemes that do not use ECC. We subsequently exploit these vulnerabilities to demonstrate practical attacks applicable to six CCA-secure lattice-based PKE/KEMs competing in the second round of the NIST standardization process. We perform experimental validation of our attacks on implementations taken from the open-source pqm4 library, running on the ARM Cortex-M4 microcontroller. Our attacks lead to complete key recovery in a matter of minutes on all the targeted schemes, thus showing the effectiveness of our attack.
2020
TCHES
Cortex-M4 optimizations for {R,M} LWE schemes 📺
This paper proposes various optimizations for lattice-based key encapsulation mechanisms (KEMs) using the Number Theoretic Transform (NTT) on the popular ARM Cortex-M4 microcontroller. Improvements come in the form of faster code using more efficient modular reductions, optimized small-degree polynomial multiplications, and more aggressive layer merging in the NTT, but also in the form of reduced stack usage. We test our optimizations in software implementations of Kyber and NewHope, both round 2 candidates in the NIST post-quantum project, and also NewHope-Compact, a recently proposed variant of NewHope with smaller parameters. Our software is the first implementation of NewHope-Compact on the Cortex-M4 and shows speed improvements over previous high-speed implementations of Kyber and NewHope. Moreover, it gives a common framework to compare those schemes with the same level of optimization. Our results show that NewHope-Compact is the fastest scheme, followed by Kyber, and finally NewHope, which seems to suffer from its large modulus and error distribution for small dimensions.
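One of the standard "more efficient modular reductions" in Cortex-M4 NTT code is signed Montgomery reduction, which replaces a division by two multiplications and a shift. A minimal C sketch for Kyber's q = 3329 with R = 2^16 follows; it mirrors the textbook technique rather than any specific routine from the paper.

```c
/* Signed Montgomery reduction for q = 3329 with R = 2^16: given a with
 * |a| < 2^15 * q, returns r with r = a * 2^-16 (mod q) and |r| < q.
 * QINV = q^-1 mod 2^16 (3329 * 62209 = 1 mod 65536). Textbook sketch. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define Q    3329
#define QINV 62209u

static int16_t montgomery_reduce(int32_t a)
{
    /* Low 16 bits of a * q^-1, taken as a signed value in [-2^15, 2^15).
     * The uint-to-int16 cast wraps on all mainstream compilers. */
    int16_t t = (int16_t)((uint32_t)a * QINV);
    return (int16_t)((a - (int32_t)t * Q) >> 16);
}

int main(void)
{
    for (int32_t x = -109000000; x < 109000000; x += 1000003) {
        int32_t r = montgomery_reduce(x);
        assert(r > -Q && r < Q);
        assert(((int64_t)r * 65536 - x) % Q == 0); /* r == x * 2^-16 mod q */
    }
    puts("montgomery_reduce verified on sampled inputs");
    return 0;
}
```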
2020
TCHES
Understanding Screaming Channels: From a Detailed Analysis to Improved Attacks 📺
Recently, some wireless devices have been found vulnerable to a novel class of side-channel attacks, called Screaming Channels. These leaks appear when sensitive signals from the processor are unintentionally broadcast by a radio transmitter placed on the same chip. Previous work focuses on identifying the root causes and on mounting an attack at a distance considerably larger than the one achievable with conventional electromagnetic side channels, demonstrated in the low-noise environment of an anechoic chamber. However, a detailed understanding of the leak, attacks that take full advantage of the novel vector, and security evaluations in more practical scenarios are still missing. In this paper, we conduct a thorough experimental analysis of the peculiar properties of Screaming Channels. For example, we learn about the coexistence of intended and unintended data, the role of distance and other parameters on the strength of the leak, the distortion of the leak model, and the portability of the profiles. With such insights, we build better attacks. We profile a device connected via cable with 10000·500 traces. Then, 5 months later, we attack a different instance at 15 m in an office environment. We recover the AES-128 key with 5000·1000 traces and key enumeration up to 2^23. Leveraging spatial diversity, we mount some attacks in the presence of obstacles. As a first example of application to a real system, we show a proof-of-concept attack against the authentication method of Google Eddystone beacons. On the one hand, this work lowers the bar for more realistic attacks, highlighting the importance of the novel attack vector. On the other hand, it provides a broader security evaluation of the leaks, helping defenders and radio designers to evaluate the risk and the need for countermeasures.
2020
TCHES
Fixslicing: A New GIFT Representation: Fast Constant-Time Implementations of GIFT and GIFT-COFB on ARM Cortex-M 📺
The GIFT family of lightweight block ciphers, published at CHES 2017, offers excellent hardware performance figures and has been used, in full or in part, in several candidates of the ongoing NIST lightweight cryptography competition. However, implementing GIFT efficiently in software seems complex due to the bit permutation composing its linear layer (a feature shared with the PRESENT cipher). In this article, we exhibit a new non-trivial representation of the GIFT family of block ciphers over several rounds. This new representation, which we call fixslicing, allows extremely efficient software bitsliced implementations of GIFT using only a few rotations, surprisingly placing GIFT as a very efficient candidate on microcontrollers. Our constant-time implementations show that, on ARM Cortex-M3, 128 bits of data can be ciphered with only about 800 cycles for GIFT-64 and about 1300 cycles for GIFT-128 (assuming pre-computed round keys). In particular, this is much faster than the impressive PRESENT implementation published at CHES 2017, which requires 2116 cycles in the same setting, and the current best reported AES constant-time implementation, which requires 1617 cycles. This work impacts GIFT directly, but also improves software implementations of all other cryptographic primitives based on it or strongly related to it.
2020
TCHES
From A to Z: Projective coordinates leakage in the wild 📺
At EUROCRYPT 2004, Naccache et al. showed that the projective coordinates representation of the resulting point of an elliptic curve scalar multiplication potentially allows recovering some bits of the scalar. However, this attack has received little attention from the scientific community, and the status of deployed mitigations against it in widely adopted cryptography libraries is unknown. In this paper, we aim to fill this gap by analyzing several cryptography libraries in this context. To demonstrate the applicability of the attack, we use a side-channel attack to exploit this vulnerability within libgcrypt in the context of ECDSA. To the best of our knowledge, this is the first practical attack instance. It targets the insecure binary extended Euclidean algorithm implementation, using a microarchitectural side-channel attack that allows recovering the projective representation of the output point of scalar multiplication during ECDSA signature generation. We captured 100k traces to estimate the number of traces an attacker would need to compromise the libgcrypt ECDSA implementation, resulting in fewer than 2k traces for the commonly used elliptic curve secp256r1, demonstrating the attack's feasibility. During exploitation, we found two additional vulnerabilities. However, we remark that the purpose of this paper is not merely to exploit a library, but to provide an analysis of the status of the projective coordinates vulnerability in widely deployed open-source libraries, filling the gap between its original description in the academic literature and the adoption of countermeasures to thwart it in real-world applications.
2020
TCHES
Defeating State-of-the-Art White-Box Countermeasures with Advanced Gray-Box Attacks 📺
The goal of white-box cryptography is to protect secret keys embedded in cryptographic software deployed in an untrusted environment. In this article, we revisit state-of-the-art countermeasures employed in white-box cryptography and discuss possible ways to combine them. Then we analyze the different gray-box attack paths and study their performance in terms of required traces and computation time. Afterward, we propose a new paradigm for gray-box attacks against white-box cryptography, which exploits the data dependency of the target implementation. We demonstrate that our approach provides substantial complexity improvements over the existing attacks. Finally, we showcase this new technique by breaking the three winning AES-128 white-box implementations from the WhibOx 2019 white-box cryptography competition.
2020
TCHES
High-Speed Masking for Polynomial Comparison in Lattice-based KEMs 📺
With the NIST post-quantum standardization competition entering the second round, interest in practical implementation results for the remaining NIST candidates is steadily growing. Implementations on embedded devices, especially, are often not protected against side-channel attacks such as differential power analysis. In this regard, the application of countermeasures against side-channel attacks to candidates of the NIST standardization process is still an understudied topic. Our work aims to contribute to the NIST competition by enabling a more realistic judgment of the overhead cost introduced by side-channel countermeasures applied to lattice-based KEMs that achieve CCA security via the Fujisaki-Okamoto transform. We present a novel higher-order masking scheme that enables an efficient comparison of polynomials, as previous techniques based on arithmetic-to-Boolean conversions render this (generally inexpensive) component extremely expensive in the masked case. Our approach has linear complexity in the number of shares, compared to the quadratic complexity of previous contributions, and it applies to lattice-based schemes with a prime modulus. It comes with a proof in the probing model and an efficient implementation on an ARM Cortex-M4F microcontroller, which was defined as a preferred evaluation platform for embedded implementations by NIST. Our algorithm can be executed in only 1.5-2.2 milliseconds on the target platform (depending on the masking order) and is therefore well suited even for lightweight applications. While previous work conducted practical side-channel experiments using only 5,000-100,000 power traces, we confirm the absence of first-order leakage in this work by collecting 1 million power traces and applying the t-test methodology.
2020
TCHES
Protecting against Statistical Ineffective Fault Attacks 📺
Statistical Ineffective Fault Attacks (SIFA) pose a threat for many practical implementations of symmetric primitives. Countermeasures against both power analysis and fault attacks typically do not prevent straightforward SIFA attacks, which require only very limited knowledge about the concrete implementation. Therefore, the exploration of countermeasures against SIFA that do not rely on protocols or physical protection mechanisms is of great interest. In this paper, we describe different countermeasure strategies against SIFA. First, we introduce an abstraction layer between the algorithmic specification of a cipher and its implementation in hardware or software to study and describe resistance against SIFA. We then show that by basing the masked implementation on permutations as building blocks, we can build circuits that withstand single-fault SIFA and DPA attacks. We show how this approach can be applied to 3-bit, 4-bit, and 5-bit S-boxes and the AES S-box. Additionally, we present a strategy based on fine-grained fault detection suitable for protecting any circuit against SIFA attacks. Although this approach may lead to a higher implementation cost due to the fine-grained detection needed, it can be used to protect arbitrary circuits and can be generalized to cover multi-fault SIFA. For single-fault SIFA protection, our countermeasures only have a small computational overhead compared to a simple combination of masking and duplication.
2020
TCHES
Side-Channel Countermeasures’ Dissection and the Limits of Closed Source Security Evaluations 📺
We take advantage of a recently published open-source implementation of the AES, protected with a mix of countermeasures against side-channel attacks, to discuss both the challenges in protecting COTS devices against such attacks and the limitations of closed source security evaluations. The target implementation has been proposed by the French ANSSI (Agence Nationale de la Sécurité des Systèmes d'Information) to stimulate research on the design and evaluation of side-channel secure implementations. It combines additive and multiplicative secret sharings into an affine masking scheme that is additionally mixed with a shuffled execution. Its preliminary leakage assessment did not detect data dependencies with up to 100,000 measurements. We first exhibit the gap between such a preliminary leakage assessment and advanced attacks by demonstrating how a countermeasure dissection exploiting a mix of dimensionality reduction, multivariate information extraction, and key enumeration can recover the full key with fewer than 2,000 measurements. We then discuss the relevance of open-source evaluations for analyzing such implementations efficiently, by pointing out that certain steps of the attack are hard to automate without implementation knowledge (even with machine learning tools), while performing them manually is straightforward. Our findings are not due to design flaws, but rather to the general difficulty of preventing side-channel attacks in COTS devices with limited noise. We anticipate that high security on such devices requires significantly more shares.
2020
TCHES
A Fast and Accurate Guessing Entropy Estimation Algorithm for Full-key Recovery 📺
Guessing entropy (GE) is a widely adopted metric that measures the average computational cost needed for a successful side-channel analysis (SCA). However, with current estimation methods, where the evaluator has to average the correct key rank over many independent side-channel leakage measurement sets, full-key GE estimation is impractical due to its prohibitive computing requirements. A recent estimation method based on posterior probabilities, although scalable, is not accurate. We propose a new guessing entropy estimation algorithm (GEEA) based on theoretical distributions of the ranking score vectors. By discovering the relationship of GE with pairwise success rates and utilizing it, GEEA uses a sum of many univariate Gaussian probabilities instead of multivariate Gaussian probabilities, significantly improving the computation efficiency. We show that GEEA is more accurate and efficient than all current GE estimation methods. To the best of our knowledge, it is the only practical full-key GE evaluation on given experimental data sets to which the evaluator has access. Moreover, it can accurately predict the GE for sizes larger than the experimental data sets, providing a comprehensive security evaluation.
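The identity that makes such a pairwise decomposition possible is that the expected rank of the correct key equals one plus the sum, over all wrong keys, of the probability that the wrong key outscores the correct one; if each score difference is modeled as a univariate Gaussian, the sum is cheap to evaluate. The toy C sketch below illustrates only this idea with synthetic means and variances, not the paper's actual estimator.

```c
/* Toy illustration of rank estimation from pairwise Gaussian score
 * differences: E[rank] = 1 + sum_k Pr[score_k >= score_correct].
 * The per-key means/variances below are synthetic placeholders.
 * Compile with -lm. */
#include <math.h>
#include <stdio.h>

/* Pr[X > 0] for X ~ N(mu, sigma^2). */
static double prob_positive(double mu, double sigma)
{
    return 0.5 * erfc(-mu / (sigma * sqrt(2.0)));
}

int main(void)
{
    const int nkeys = 256;               /* one subkey byte */
    double ge = 1.0;
    for (int k = 1; k < nkeys; k++) {
        /* d_k = score(wrong key k) - score(correct key); with enough
         * traces its mean drifts negative. Placeholder model: */
        double mu    = -0.5 - 0.01 * k;
        double sigma = 1.0;
        ge += prob_positive(mu, sigma);
    }
    printf("estimated guessing entropy: %.3f\n", ge);
    return 0;
}
```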
2020
TCHES
Highly Efficient Architecture of NewHope-NIST on FPGA using Low-Complexity NTT/INTT 📺
NewHope-NIST is a promising ring learning with errors (RLWE)-based post-quantum cryptography (PQC) scheme for key encapsulation mechanisms. Its performance on the field-programmable gate array (FPGA) affects the applicability of NewHope-NIST. In RLWE-based PQC algorithms, the number theoretic transform (NTT) is one of the most time-consuming operations. In this paper, a low-complexity NTT and inverse NTT (INTT) are used to implement a highly efficient NewHope-NIST on FPGA. First, both the pre-processing of the NTT and the post-processing of the INTT are merged into the fast Fourier transform (FFT) algorithm, which saves N and 2N modular multiplications for the N-point NTT and INTT, respectively. Second, a compact butterfly unit and an efficient modular reduction for the modulus 12289 are proposed for the low-complexity NTT/INTT architecture, which achieves an improvement of approximately 3× in the area-time product (ATP) compared with the results of state-of-the-art designs. Finally, a highly efficient architecture with doubled bandwidth and timing hiding for NewHope-NIST is presented. The implementation results on an FPGA show that our design is at least 2.5× faster and has a 4.9× smaller ATP compared with the results of state-of-the-art designs of NewHope-NIST on similar platforms.
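For reference, the following is a textbook iterative (Gentleman-Sande style) NTT over Z_12289 in C, with twiddle factors generated at run time and reductions done with the generic '%' operator. It is only the software baseline; the paper's contribution is precisely to replace such generic butterflies and reductions with a low-complexity hardware architecture.

```c
/* Textbook iterative NTT over Z_q, q = 12289 (q - 1 = 3 * 2^12, so
 * 2^k-th roots of unity exist up to k = 12). Baseline only: generic
 * '%' reduction, no merging or specialized butterfly. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define Q 12289u
#define N 256u                /* transform size, must divide 4096 */

static uint32_t mulq(uint32_t a, uint32_t b) { return (a * b) % Q; }

static uint32_t powq(uint32_t b, uint32_t e)
{
    uint32_t r = 1;
    for (; e; e >>= 1, b = mulq(b, b))
        if (e & 1) r = mulq(r, b);
    return r;
}

/* Find a primitive N-th root of unity mod q. */
static uint32_t find_root(void)
{
    for (uint32_t x = 2; x < Q; x++) {
        uint32_t w = powq(x, (Q - 1) / N);
        if (powq(w, N / 2) != 1) return w;   /* order is exactly N */
    }
    return 0;
}

/* In-place decimation-in-frequency NTT (output in bit-reversed order). */
static void ntt(uint32_t a[N], uint32_t omega)
{
    for (uint32_t len = N / 2; len >= 1; len >>= 1) {
        uint32_t w = powq(omega, N / (2 * len));
        for (uint32_t start = 0; start < N; start += 2 * len) {
            uint32_t t = 1;                  /* running twiddle w^j */
            for (uint32_t j = start; j < start + len; j++) {
                uint32_t u = a[j], v = a[j + len];
                a[j]       = (u + v) % Q;    /* Gentleman-Sande butterfly */
                a[j + len] = mulq((u + Q - v) % Q, t);
                t = mulq(t, w);
            }
        }
    }
}

int main(void)
{
    uint32_t omega = find_root();
    assert(omega && powq(omega, N) == 1);
    uint32_t a[N];
    for (uint32_t i = 0; i < N; i++) a[i] = i % Q;
    ntt(a, omega);
    printf("omega = %u, a[0] after NTT = %u\n", omega, a[0]);
    return 0;
}
```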
2020
TCHES
FENL: an ISE to mitigate analogue micro-architectural leakage 📺
Ge et al. [GYH18] propose the augmented ISA (or aISA), a central tenet of which is the selective exposure of micro-architectural resources via a less opaque abstraction than normal. The aISA proposal is motivated by the need for control over such resources, for example to implement robust countermeasures against microarchitectural attacks. In this paper, we apply an aISA-style approach to challenges stemming from analogue micro-architectural leakage; examples include power-based Hamming weight and distance leakage from relatively fine-grained resources (e.g., pipeline registers), which are not exposed in, and so cannot be reliably controlled via, a normal ISA. Specifically, we design, implement, and evaluate an ISE named FENL: the ISE acts as a fence for leakage, preventing interaction between, and hence leakage from, instructions before and after it in program order. We demonstrate that the implementation and use of FENL has relatively low overhead, and represents an effective tool for systematically localising and reducing leakage.
2020
TCHES
Dismantling DST80-based Immobiliser Systems 📺
Car manufacturers deploy vehicle immobiliser systems in order to prevent car theft. However, in many cases the underlying cryptographic primitives used to authenticate a transponder are proprietary in nature and thus not open to public scrutiny. In this paper we publish the proprietary Texas Instruments DST80 cipher used in immobilisers of several manufacturers. Additionally, we expose serious flaws in immobiliser systems of major car manufacturers such as Toyota, Kia, Hyundai and Tesla. Specifically, by voltage glitching the firmware protection mechanisms of the microcontroller, we extracted the firmware from several immobiliser ECUs and reverse engineered the key diversification schemes employed within. We discovered that Kia and Hyundai immobiliser keys have only three bytes of entropy and that Toyota only relies on publicly readable information such as the transponder serial number and three constants to generate cryptographic keys. Furthermore, we present several practical attacks which can lead to recovering the full 80-bit cryptographic key in a matter of seconds or permanently disabling the transponder. Finally, even without key management or configuration issues, we demonstrate how an attacker can recover the cryptographic key using a profiled side-channel attack. We target the key loading procedure and investigate the practical applicability in the context of portability. Our work once again highlights the issues automotive vendors face in implementing cryptography securely.
2020
TCHES
Efficient and Private Computations with Code-Based Masking 📺
Code-based masking is a very general type of masking scheme that covers Boolean masking, inner product masking, direct sum masking, and so on. The merits of the generalization are twofold. Firstly, the higher algebraic complexity of the sharing function decreases the information leakage in “low noise conditions” and may increase the “statistical security order” of an implementation (with linear leakages). Secondly, the underlying error-correction codes can offer improved fault resistance for the encoded variables. Nevertheless, this higher algebraic complexity also implies additional challenges. On the one hand, a generic multiplication algorithm applicable to any linear code is still unknown. On the other hand, masking schemes with higher algebraic complexity usually come with implementation overheads, as for example witnessed by inner-product masking. In this paper, we contribute to these challenges in two directions. Firstly, we propose a generic algorithm that allows us (to the best of our knowledge for the first time) to compute on data shared with linear codes. Secondly, we introduce a new amortization technique that can significantly mitigate the implementation overheads of code-based masking, and illustrate this claim with a case study. Precisely, we show that, although performing every single code-based masked operation is relatively complex, processing multiple secrets in parallel leads to much better performances. This property enables code-based masked implementations of the AES to compete with the state-of-the-art in randomness complexity. Since our masked operations can be instantiated with various linear codes, we hope that these investigations open new avenues for the study of code-based masking schemes, by specializing the codes for improved performances, better side-channel security or improved fault tolerance.
2020
TCHES
Persistent Fault Attack in Practice 📺
Persistent fault analysis (PFA) is a novel fault analysis technique proposed at CHES 2018 and demonstrated with rowhammer-based fault injections. However, whether such analysis can be applied to the traditional fault attack scenario, together with its difficulty in practice, has not been carefully investigated. In this paper, for the first time, a persistent fault attack is conducted on an unprotected AES implemented on an ATmega163L microcontroller. Several critical challenges are solved with our new improvements, including (1) how to decide whether the fault is injected in the SBox; (2) how to use maximum likelihood estimation to minimize the number of required ciphertexts; (3) how to utilize an unknown fault in the SBox to extract the key. Our experiments show that, to break AES with physical laser injections despite all these challenges, the minimum and average numbers of required ciphertexts are 926 and 1641, respectively. This is a reduction of about 38% and 28% compared to the 1493 and 2273 ciphertexts required in previous work, where both the fault value and location had to be known. Furthermore, our analysis is extended to the PRESENT cipher. By applying persistent fault analysis to the penultimate round, the full 80-bit PRESENT key can be recovered. Eventually, an experimental validation is performed to confirm the accuracy of our attack, with further insights. This paper solves the challenges in most aspects of practice and also demonstrates the feasibility and universality of PFA on SPN block ciphers.
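The observation at the heart of PFA-style key recovery is simple to state: if the faulted S-box never outputs some value y*, then a last-round ciphertext byte c = S[x] XOR k never takes the value y* XOR k, so the ciphertext value that never appears reveals the key byte. The simulated C sketch below assumes, unlike the harder setting handled in the paper, that y* is known.

```c
/* Simulated persistent-fault key recovery for one last-round byte:
 * c = S[x] ^ k, where the faulted S-box never outputs YSTAR. The
 * ciphertext value that never occurs equals YSTAR ^ k. YSTAR is
 * assumed known here; the paper also handles unknown fault values. */
#include <stdio.h>
#include <stdlib.h>

#define YSTAR 0x63   /* S-box output removed by the fault (assumed known) */
#define KEY   0x4B   /* last-round key byte to recover (simulation only)  */

int main(void)
{
    int seen[256] = {0};
    srand(12345);
    for (int i = 0; i < 5000; i++) {          /* collect ciphertext bytes */
        int y;
        do { y = rand() & 0xFF; } while (y == YSTAR); /* faulted S-box out */
        seen[y ^ KEY] = 1;
    }
    for (int c = 0; c < 256; c++)
        if (!seen[c])
            printf("never-seen ciphertext byte 0x%02X -> key = 0x%02X\n",
                   c, c ^ YSTAR);
    return 0;
}
```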
2020
TCHES
When one vulnerable primitive turns viral: Novel single-trace attacks on ECDSA and RSA 📺
Microarchitecture-based side-channel attacks are common threats nowadays. Intel SGX technology provides strong isolation from an adversarial OS; however, it does not guarantee protection against side-channel attacks. In this paper, we analyze the security of the mbedTLS binary GCD algorithm, an implementation that offers interesting challenges when compared, for example, with OpenSSL, due to the usage of very tight loops in the former. Using practical experiments, we demonstrate that the mbedTLS binary GCD implementation is vulnerable to side-channel analysis using the SGX-Step framework against mbedTLS-based SGX enclaves. We analyze the security of some use cases of this algorithm in this library, resulting in the discovery of a new vulnerability in the ECDSA code path that allows a single-trace attack against this implementation. This vulnerability is interesting in three respects: it resides in the implementation of a countermeasure, which makes it more dangerous due to the false sense of security the countermeasure currently offers; it reduces mbedTLS ECDSA security to an integer factorization problem; and an unexpected GCD call inside the ECDSA code path compromises the countermeasure. We also cover an orthogonal use case, this time inside the mbedTLS RSA code path, during the computation of a CRT parameter when loading a private key. The attack also exploits the binary GCD implementation threat, showing how a single vulnerable primitive leads to multiple vulnerabilities. We demonstrate both security threats with end-to-end attacks using 1000 trials each, showing that in both cases single-trace attacks can be achieved with success rates very close to 100%.
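To see why a binary GCD is such an attractive single-trace target, consider its textbook form: every branch in the tight loops below is taken or not depending on operand bits, so an adversary who can observe the branch sequence (e.g., with SGX-Step-style single-stepping) reconstructs information about the secret inputs. This is a generic sketch, not the mbedTLS code.

```c
/* Textbook binary GCD. Every branch below depends on operand bits, so a
 * controlled-channel or single-trace adversary observing the branch
 * sequence learns information about secret inputs. Generic sketch only. */
#include <stdint.h>
#include <stdio.h>

static uint64_t binary_gcd(uint64_t u, uint64_t v)
{
    if (u == 0) return v;
    if (v == 0) return u;
    int shift = 0;
    while (((u | v) & 1) == 0) { u >>= 1; v >>= 1; shift++; }
    while ((u & 1) == 0) u >>= 1;          /* leaks low zero bits of u */
    do {
        while ((v & 1) == 0) v >>= 1;      /* leaks low zero bits of v */
        if (u > v) { uint64_t t = u; u = v; v = t; }  /* leaks comparison */
        v -= u;
    } while (v != 0);
    return u << shift;
}

int main(void)
{
    printf("gcd(240, 46) = %lu\n", (unsigned long)binary_gcd(240, 46));
    return 0;
}
```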
2020
TCHES
Time-memory trade-off in Toom-Cook multiplication: an application to module-lattice based cryptography 📺
Since the introduction of the ring learning with errors problem, the number theoretic transform (NTT) based polynomial multiplication algorithm has been studied extensively. Due to its faster quasilinear time complexity, it has been the preferred choice of cryptographers for realizing ring learning with errors cryptographic schemes. Compared to NTT, Toom-Cook or Karatsuba based polynomial multiplication algorithms, though known for a long time, still have a fledgling presence in the context of post-quantum cryptography. In this work, we observe that the pre- and post-processing steps in Toom-Cook based multiplications can be expressed as linear transformations. Based on this observation, we propose two novel techniques that can increase the efficiency of Toom-Cook based polynomial multiplications. Evaluation is reduced by a factor of 2 (we call this method precomputation), and interpolation is reduced from quadratic to linear complexity (we call this method lazy interpolation). As a practical application, we apply our algorithms to the Saber post-quantum key-encapsulation mechanism. We discuss in detail the various implementation aspects of applying our algorithms to Saber. We show that our algorithm can improve the efficiency of the computationally costly matrix-vector multiplication by 12-37% compared to previous methods on their respective platforms. Secondly, we propose different methods to reduce the memory footprint of Saber for Cortex-M4 microcontrollers. Our implementation shows a 2.6 to 5.7 KB reduction in memory usage with respect to the smallest implementation in the literature.
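To recall the shape of an evaluation/interpolation multiplier, the C sketch below runs one level of Karatsuba (the 2-way case of Toom-Cook) over coefficients mod 2^13, a Saber-like power-of-two modulus. The evaluation step (forming a0+a1 and b0+b1) and the interpolation step (recombining three half-size products) are exactly the linear pre- and post-processing maps that the paper's precomputation and lazy interpolation restructure; sizes and constants here are illustrative.

```c
/* One level of Karatsuba for polynomials of degree < N with coefficients
 * mod 2^13. Evaluation and interpolation are the linear maps discussed
 * above; schoolbook base case; sizes illustrative. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define N    16
#define MASK 0x1FFFu                        /* coefficients mod 2^13 */

static void schoolbook(const uint16_t *a, const uint16_t *b,
                       uint16_t *c, int n)  /* c has 2n-1 coefficients */
{
    memset(c, 0, (2 * n - 1) * sizeof *c);
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            c[i + j] = (c[i + j] + a[i] * b[j]) & MASK;
}

static void karatsuba(const uint16_t a[N], const uint16_t b[N],
                      uint16_t c[2 * N - 1])
{
    enum { H = N / 2 };
    uint16_t as[H], bs[H], lo[2*H-1], hi[2*H-1], mid[2*H-1];
    for (int i = 0; i < H; i++) {           /* evaluation */
        as[i] = (a[i] + a[i + H]) & MASK;
        bs[i] = (b[i] + b[i + H]) & MASK;
    }
    schoolbook(a,     b,     lo,  H);       /* a0*b0 */
    schoolbook(a + H, b + H, hi,  H);       /* a1*b1 */
    schoolbook(as,    bs,    mid, H);       /* (a0+a1)(b0+b1) */
    memset(c, 0, (2 * N - 1) * sizeof *c);
    for (int i = 0; i < 2 * H - 1; i++) {   /* interpolation */
        c[i]       = (c[i] + lo[i]) & MASK;
        c[i + H]   = (c[i + H] + mid[i] - lo[i] - hi[i]) & MASK;
        c[i + 2*H] = (c[i + 2*H] + hi[i]) & MASK;
    }
}

int main(void)
{
    uint16_t a[N], b[N], c[2 * N - 1], ref[2 * N - 1];
    for (int i = 0; i < N; i++) {
        a[i] = (uint16_t)(i * 37 + 1) & MASK;
        b[i] = (uint16_t)(i * 91 + 5) & MASK;
    }
    karatsuba(a, b, c);
    schoolbook(a, b, ref, N);
    printf("match: %d\n", memcmp(c, ref, sizeof c) == 0);
    return 0;
}
```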
2020
TCHES
A Compact and Scalable Hardware/Software Co-design of SIKE 📺
We present efficient and compact hardware/software co-design implementations of the Supersingular Isogeny Key Encapsulation (SIKE) protocol on field-programmable gate arrays (FPGAs). In order to be better equipped for different post-quantum scenarios, our architectures were designed to feature high flexibility by covering all the currently available parameter sets and supporting primes up to 1016 bits. In particular, any of the current SIKE parameters equivalent to the post-quantum security of AES-128/192/256 and SHA3-256 can be selected and run on the fly. This security scalability, together with the small footprint and efficiency of our architectures, makes them ideal for embedded applications in a post-quantum world. In addition, the proposed implementations exhibit regular, constant-time execution, which provides protection against timing and simple side-channel attacks. Our results demonstrate that supersingular isogeny-based primitives such as SIDH and SIKE can indeed be deployed for embedded applications with competitive performance. For example, our smallest architecture, based on a 128-bit MAC unit, takes only 3415 slices, 21 BRAMs, and 57 DSPs on a Virtex 7 690T and can perform key generation, encapsulation, and decapsulation in 14.4, 24.4, and 26.0 milliseconds for SIKEp434 and in 52.3, 86.4, and 93.2 milliseconds for SIKEp751, respectively.
2020
TCHES
FEDS: Comprehensive Fault Attack Exploitability Detection for Software Implementations of Block Ciphers 📺
Fault injection attacks are one of the most powerful forms of cryptanalytic attacks on ciphers. A single, precisely injected fault during the execution of a cipher like the AES can completely reveal the key within a few milliseconds. Software implementations of ciphers therefore need to be thoroughly evaluated for such attacks. In recent years, automated tools have been developed to perform these evaluations. These tools work either on the cipher algorithm or on its implementations. Tools that work at the algorithm level can provide a comprehensive assessment of fault attack vulnerability for different fault attacks and different fault models. Their application is, however, restricted because every realization of the cipher has unique vulnerabilities. On the other hand, tools that work on cipher implementations have a much wider application but are often restricted by the range of fault attacks and the number of fault models they can evaluate. In this paper, we propose a framework, called FEDS, that uses a combination of compiler techniques and model checking to merge the advantages of both algorithmic-level tools and implementation-level tools. Like the algorithmic-level tools, FEDS can provide a comprehensive assessment of fault attack exploitability considering a wide range of fault attacks and fault models. Like implementation-level tools, FEDS works with implementations and therefore has wide application. We demonstrate the versatility of FEDS by evaluating seven different implementations of AES (including a bitsliced implementation) and implementations of CLEFIA and CAMELLIA for Differential Fault Attacks. The framework automatically identifies exploitable instructions in all implementations. Further, we present an application of FEDS in a Fault Attack Aware Compiler that can automatically identify and protect exploitable regions of the code. We demonstrate that the compiler can generate significantly more efficient code than a naïvely protected equivalent, while maintaining the same level of protection.
2020
TCHES
Low-Latency Hardware Masking with Application to AES 📺
During the past two decades, a great deal of research has been published on masked hardware implementations of AES and other cryptographic primitives. Unfortunately, many hardware masking techniques can lead to increased latency compared to unprotected circuits for algorithms such as AES, due to the high degree of the nonlinear functions in their designs. In this paper, we present a hardware masking technique which does not increase the latency for such algorithms. It is based on the LUT-based Masked Dual-Rail with Pre-charge Logic (LMDPL) technique presented at CHES 2014. First, we show 1-glitch extended strong non-interference of a nonlinear LMDPL gadget under the 1-glitch extended probing model. We then use this knowledge to design an AES implementation which computes a full AES-128 operation in 10 cycles and a full AES-256 operation in 14 cycles. We perform practical side-channel analysis of our implementation using the Test Vector Leakage Assessment (TVLA) methodology and analyze univariate as well as bivariate t-statistics to demonstrate its DPA resistance level.
2020
TCHES
On the Security Goals of White-Box Cryptography 📺
We discuss existing and new security notions for white-box cryptography and comment on their suitability for Digital Rights Management and Mobile Payment Applications, the two prevalent use cases of white-box cryptography. In particular, we put forward indistinguishability for white-box cryptography with hardware-binding (IND-WHW) as a new security notion that we deem central. We also discuss the security property of application-binding and explain the issues faced when defining it as a formal security notion. Based on our proposed notion for hardware-binding, we describe a possible white-box competition setup which assesses white-box implementations w.r.t. hardware-binding. Our proposed competition setup allows us to capture hardware-binding in a practically meaningful way. While some symmetric encryption schemes have been proven to admit plain white-box implementations, we show that not all secure symmetric encryption schemes are white-boxeable in the plain white-box attack scenario, i.e., without hardware-binding. Thus, even strong assumptions such as indistinguishability obfuscation cannot be used to provide secure white-box implementations for arbitrary ciphers. Perhaps surprisingly, our impossibility result does not carry over to the hardware-bound scenario. In particular, Alpirez Bock, Brzuska, Fischlin, Janson and Michiels (ePrint 2019/1014) proved a rather general feasibility result in the hardware-bound model. Equally important, the apparent theoretical distinction between the plain white-box model and the hardware-bound white-box model also translates into practically reduced attack capabilities, as we explain in this paper.
2020
TOSC
Dumbo, Jumbo, and Delirium: Parallel Authenticated Encryption for the Lightweight Circus 📺
With the trend to connect more and more devices to the Internet, authenticated encryption has become a major backbone in securing the communication, not only between these devices and servers, but also the direct communication among these devices. Most authenticated encryption algorithms used in practice are developed to perform well on modern high-end devices, but are not necessarily suited for usage on resource-constrained devices. We present a lightweight authenticated encryption scheme, called Elephant. Elephant retains the advantages of GCM such as parallelism, but is tailored to the needs of resource-constrained devices. The two smallest instances of Elephant, Dumbo and Jumbo, are based on the 160-bit and 176-bit Spongent permutation, respectively, and are particularly suited for hardware; the largest instance of Elephant, Delirium, is based on 200-bit Keccak and is developed towards software use. All three instances are parallelizable, have a small state size while achieving a high level of security, and are constant time by design.
2020
TOSC
Pyjamask: Block Cipher and Authenticated Encryption with Highly Efficient Masked Implementation 📺
This paper introduces Pyjamask, a new block cipher family and authenticated encryption proposal submitted to the NIST lightweight cryptography standardization process. Pyjamask targets side-channel resistance as one of its main goals. More precisely, it strongly minimizes the number of nonlinear gates used in its internal primitive in order to allow efficient masked implementations, especially for high-order masking in software. Compared to other block ciphers, our proposal thus has among the smallest numbers of binary AND computations per input bit at the time of writing. Even though Pyjamask minimizes such an important criterion, it remains rather lightweight and efficient, thanks to a general bitslice construction that enables the computation of all nonlinear gates in parallel. For authenticated encryption, we adopt the provably secure AEAD mode OCB, which has been extensively studied and has the benefit of offering full parallelization. Of course, other block cipher-based modes can be considered as well if other performance profiles are targeted. The paper first gives the specification of the Pyjamask block cipher and the associated AEAD proposal. We also provide a detailed design rationale for the block cipher, which is guided by our aim of software efficiency in the presence of high-order masking. The security of the design is analyzed against the most commonly known cryptanalysis techniques. We finally describe efficient (masked) implementations in software and provide implementation results with aggressive performance for masking of very high orders (up to 128). We also provide a rough estimation of the hardware performance, which remains much better than that of an AES round-based implementation.
2020
TOSC
Xoodyak, a lightweight cryptographic scheme 📺
In this paper, we present Xoodyak, a cryptographic primitive that can be used for hashing, encryption, MAC computation, and authenticated encryption. Essentially, it is a duplex object extended with an interface that allows absorbing strings of arbitrary length, encrypting them, and squeezing output of arbitrary length. It inherently hashes the history of all operations in its state, allowing its resistance against generic attacks to be derived from that of the full-state keyed duplex. Internally, it uses the Xoodoo[12] permutation that, with its width of 48 bytes, allows for very compact implementations. The choice of 12 rounds justifies a security claim in the hermetic philosophy: it implies that there are no shortcut attacks with higher success probability than generic attacks. The claimed security strength is 128 bits. We illustrate the versatility of Xoodyak by describing a number of use cases, including the ones requested by NIST in the lightweight competition. For those use cases, we translate the relatively detailed security claim that we make for Xoodyak into simple ones.
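Abstractly, a duplex object keeps a fixed-width state and alternates between XORing input into the outer part, permuting, and extracting output from the outer part. The C sketch below shows only this mode-level skeleton on a 48-byte state, with a toy mixing function standing in where Xoodoo[12] would sit; it is not Xoodyak (the rates, padding, and Cyclist interface differ) and carries no security claim.

```c
/* Mode-level skeleton of a duplex object on a 48-byte state (Xoodoo
 * width). toy_permute is a placeholder where Xoodoo[12] would sit;
 * this is NOT Xoodyak, just the outer absorb/squeeze shape. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define STATE_BYTES 48
#define RATE_BYTES  16   /* illustrative rate, not Xoodyak's actual rates */

static void toy_permute(uint8_t s[STATE_BYTES])   /* placeholder mixer */
{
    for (int r = 0; r < 12; r++)
        for (int i = 0; i < STATE_BYTES; i++)
            s[i] = (uint8_t)((s[i] ^ (uint8_t)(s[(i + 1) % STATE_BYTES] << 1)
                              ^ (uint8_t)(r * 0x9E + i))
                             + s[(i + 5) % STATE_BYTES]);
}

typedef struct { uint8_t s[STATE_BYTES]; } duplex_t;

static void duplex_init(duplex_t *d) { memset(d->s, 0, STATE_BYTES); }

static void duplex_absorb(duplex_t *d, const uint8_t *in, size_t len)
{
    while (len > 0) {
        size_t n = len < RATE_BYTES ? len : RATE_BYTES;
        for (size_t i = 0; i < n; i++) d->s[i] ^= in[i];  /* outer part */
        d->s[n] ^= 0x01;                 /* simple padding/domain byte */
        toy_permute(d->s);
        in += n; len -= n;
    }
}

static void duplex_squeeze(duplex_t *d, uint8_t *out, size_t len)
{
    while (len > 0) {
        size_t n = len < RATE_BYTES ? len : RATE_BYTES;
        memcpy(out, d->s, n);            /* read from the outer part */
        toy_permute(d->s);
        out += n; len -= n;
    }
}

int main(void)
{
    duplex_t d;
    uint8_t tag[16];
    duplex_init(&d);
    duplex_absorb(&d, (const uint8_t *)"hello duplex", 12);
    duplex_squeeze(&d, tag, sizeof tag);
    for (int i = 0; i < 16; i++) printf("%02x", tag[i]);
    printf("\n");
    return 0;
}
```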
2020
TOSC
SKINNY-AEAD and SKINNY-Hash 📺
We present the family of authenticated encryption schemes SKINNY-AEAD and the family of hashing schemes SKINNY-Hash. All of the schemes employ a member of the SKINNY family of tweakable block ciphers, which was presented at CRYPTO 2016, as the underlying primitive. In particular, for authenticated encryption, we show how to instantiate members of SKINNY in the Deoxys-I-like ΘCB3 framework to fulfill the submission requirements of the NIST lightweight cryptography standardization process. For hashing, we use SKINNY to build a function with larger internal state and employ it in a sponge construction. To highlight the extensive amount of third-party analysis that SKINNY obtained since its publication, we briefly survey the existing cryptanalysis results for SKINNY-128-256 and SKINNY-128-384 as of February 2020. In the last part of the paper, we provide a variety of ASIC implementations of our schemes and propose new simple SKINNY-AEAD and SKINNY-Hash variants with a reduced number of rounds while maintaining a very comfortable security margin. https://csrc.nist.gov/Projects/Lightweight-Cryptography
2020
TOSC
WAGE: An Authenticated Encryption with a Twist 📺
This paper presents WAGE, a new lightweight sponge-based authenticated cipher whose underlying permutation is based on a 37-stage Galois NLFSR over F_{2^7}. At its core, the round function of the permutation consists of the well-analyzed Welch-Gong permutation (WGP), a primitive feedback polynomial, a newly designed 7-bit S-box SB, and partial word-wise XORs. The construction of the permutation is carried out such that the design of individual components is tightly coupled with cryptanalysis and hardware efficiency. As such, we analyze the security of WAGE against differential, linear, algebraic, and meet/miss-in-the-middle attacks. For 128-bit authenticated encryption security, WAGE achieves a throughput of 535 Mbps with a hardware area of 2540 GE in the ASIC ST Micro 90 nm standard cell library. Additionally, WAGE is designed with a twist: its underlying permutation can be efficiently turned into a pseudorandom bit generator based on the WG transformation (WG-PRBG), whose output bits have theoretically proven randomness properties.
2020
TOSC
Saturnin: a suite of lightweight symmetric algorithms for post-quantum security 📺
The cryptographic algorithms needed to ensure the security of our communications have a cost. For devices with little computing power, whose number is expected to grow significantly with the spread of the Internet of Things (IoT), this cost can be a problem. A simple answer to this problem is a compromise on the security level: through a weaker round function or a smaller number of rounds, the security level can be decreased in order to cheapen the implementation of the cipher. At the same time, quantum computers are expected to disrupt the state of the art in cryptography in the near future. For public-key cryptography, NIST has organized a dedicated process to standardize new algorithms. The impact of quantum computing is harder to assess in the symmetric case, but its study is an active research area. In this paper, we specify a new block cipher, Saturnin, and its usage in different modes to provide hashing and authenticated encryption in such a way that we can rigorously argue its security in the post-quantum setting. Its security analysis follows naturally from that of the AES, while our use of components that are easily implemented in a bitsliced fashion ensures a low cost for our primitives. Our aim is to provide a new lightweight suite of algorithms that performs well on small devices, in particular micro-controllers, while providing a high security level even in the presence of quantum computers. Saturnin is a 256-bit block cipher with a 256-bit key and an additional 9-bit parameter for domain separation. Using it, we built two authenticated ciphers and a hash function.
• Saturnin-CTR-Cascade is an authenticated cipher using the counter mode and a separate MAC. It requires two passes over the data, but its implementation does not require the inverse block cipher.
• Saturnin-Short is an authenticated cipher intended for messages with a length strictly smaller than 128 bits, which uses only one call to Saturnin to provide confidentiality and integrity.
• Saturnin-Hash is a 256-bit hash function.
In this paper, we specify this suite of algorithms and argue about their security in both the classical and the post-quantum setting. https://project.inria.fr/saturnin/
2020
TOSC
The Subterranean 2.0 Cipher Suite 📺
This paper presents the Subterranean 2.0 cipher suite that can be used for hashing, MAC computation, stream encryption and several types of authenticated encryption schemes. At its core it has a duplex object with a 257-bit state and a lightweight single-round permutation. This makes Subterranean 2.0 very well suited for low-area and low-energy implementations in dedicated hardware.
2020
TOSC
Spook: Sponge-Based Leakage-Resistant Authenticated Encryption with a Masked Tweakable Block Cipher 📺
This paper defines Spook: a sponge-based authenticated encryption with associated data algorithm. It is primarily designed to provide security against side-channel attacks at a low energy cost. For this purpose, Spook mixes a leakage-resistant mode of operation with bitslice ciphers, enabling efficient and low-latency implementations. The leakage-resistant mode of operation leverages a re-keying function to prevent differential side-channel analysis, a duplex sponge construction to efficiently process the data, and a tag verification based on a Tweakable Block Cipher (TBC) providing strong data integrity guarantees in the presence of leakage. The underlying bitslice ciphers are optimized for masking countermeasures against side-channel attacks. Spook is an efficient single-pass algorithm. It ensures state-of-the-art black-box security with several prominent features: (i) nonce misuse-resilience, (ii) beyond-birthday security with respect to the TBC block size, and (iii) multi-user security at minimum cost with a public tweak. Besides the specifications and design rationale, we provide the first software and hardware implementation results for (unprotected) Spook, which confirm the limited overhead that the use of two primitives sharing internal components implies. We also show that the integrity of Spook with leakage, so far analyzed with unbounded leakage for the duplex sponge and a strongly protected TBC modeled as leak-free, can be proven with a much weaker unpredictability assumption for the TBC. We finally discuss external cryptanalysis results and tweaks to improve both the security margins and the efficiency of Spook.
2020
TOSC
ESTATE: A Lightweight and Low Energy Authenticated Encryption Mode 📺
NIST has recently initiated a standardization project for efficient lightweight authenticated encryption schemes. SUNDAE, a candidate in this project, achieves optimal state size, which results in low circuit overhead on top of the underlying block cipher. In addition, SUNDAE provides security in the nonce-misuse scenario as well. However, beyond the block cipher circuit, SUNDAE also requires additional circuitry for multiplication by a primitive element, as well as an additional block cipher invocation to create the starting state. In this paper, we propose a new lightweight and low-energy authenticated encryption family, called ESTATE, that significantly improves the design of SUNDAE in terms of implementation costs (both hardware area and energy) and efficient processing of short messages. In particular, ESTATE does not require an additional multiplication circuit, and it reduces the number of block cipher calls by one. Moreover, it provides integrity security even under the release of unverified plaintext (RUP) model. ESTATE is based on short-tweak tweakable block ciphers (tBC, where the small 't' denotes short tweaks), and we instantiate it with two recently designed tBCs: TweAES and TweGIFT. We also propose a low-latency variant of ESTATE, called sESTATE, that uses a round-reduced (6-round) variant of TweAES called TweAES-6. We provide comprehensive FPGA-based hardware implementations for all three instances. The results show that ESTATE_TweGIFT-128 (681 LUTs, 263 slices) consumes much less area than SUNDAE_GIFT-128 (931 LUTs, 310 slices). For the AES variants, along with the area efficiency (ESTATE_TweAES consumes 1901 LUTs, 602 slices, while SUNDAE_AES-128 needs 1922 LUTs, 614 slices), we also achieve higher throughput for short messages (for a 16-byte message, a throughput of 1251.10 and 945.36 Mbps for ESTATE_TweAES and SUNDAE_AES-128, respectively).
2020
TOSC
Isap v2.0 📺
We specify Isap v2.0, a lightweight permutation-based authenticated encryption algorithm that is designed to ease protection against side-channel and fault attacks. This design is an improved version of the previously published Isap v1.0, and offers increased protection against implementation attacks as well as more efficient implementations. Isap v2.0 is a candidate in NIST’s LightWeight Cryptography (LWC) project, which aims to identify and standardize authenticated ciphers that are well-suited for applications in constrained environments. We provide a self-contained specification of the new Isap v2.0 mode and discuss its design rationale. We formally prove the security of the Isap v2.0 mode in the leakage-resilient setting. Finally, in an extensive implementation overview, we show that Isap v2.0 can be implemented securely with very low area requirements. https://isap.iaik.tugraz.at
2020
TOSC
From Combined to Hybrid: Making Feedback-based AE even Smaller 📺
At CHES 2017, Chakraborti et al. proposed COFB, a rate-1 sequential block cipher-based authenticated encryption (AE) scheme with only a 1.5n-bit state, where n denotes the block size. They used a novel approach, the so-called combined feedback, where each block cipher input has a combined effect of the previous block cipher output and the current plaintext block. In this paper, we first study the security of a general rate-1 feedback-based AE scheme in terms of its overall internal state size. For a large class of feedback functions, we show that the overlying AE scheme can be attacked in 2^r queries if the internal state size is n + r bits for some r ≥ 0. This automatically shows that a birthday-bound (i.e., 2^(n/2)-query) secure AE scheme must have at least a 1.5n-bit state, whence COFB is almost optimal (it uses a 1.5n-bit state and provides security up to 2^(n/2)/n queries). We propose a new feedback function, called the hybrid feedback or HyFB, which is a hybrid composition of plaintext and ciphertext feedbacks. HyFB has the key advantage of a lower XOR count than the combined feedback function. This essentially helps in reducing the hardware footprint. Based on HyFB, we propose a new AE scheme, called HyENA, that achieves the state size, rate, and security of COFB. In addition, HyENA has a significantly lower XOR count than COFB, whence it is expected to have a smaller implementation.
2020
TOSC
LM-DAE: Low-Memory Deterministic Authenticated Encryption for 128-bit Security 📺
This paper proposes a new lightweight deterministic authenticated encryption (DAE) scheme providing 128-bit security. Lightweight DAE schemes are practically important because resource-restricted devices sometimes cannot afford to manage a nonce properly. For this purpose, we first design a new mode, LM-DAE, that has a minimal state size and uses a tweakable block cipher (TBC). The design can be implemented with low memory and is advantageous in threshold implementations (TI) as a side-channel attack countermeasure. LM-DAE further reduces the implementation cost by eliminating the inverse tweak schedule needed in previous TBC-based DAE modes. LM-DAE is proven to be indistinguishable from an ideal DAE up to O(2^n) query complexity for block size n. To achieve 128-bit security, the underlying TBC must handle a 128-bit block, a 128-bit key, and a (128+4)-bit tweak, where the 4-bit tweak comes from the domain separation. To satisfy this requirement, we extend SKINNY-128-256 with an additional 4-bit tweak by applying the elastic-tweak construction proposed by Chakraborti et al. We evaluate the hardware performance of the proposed scheme with and without TI. Our LM-DAE implementation achieves 3,717 gates, roughly 15% fewer than state-of-the-art nonce-based schemes, thanks to the removal of the inverse tweak schedule.
2020
TOSC
Lightweight AEAD and Hashing using the Sparkle Permutation Family 📺
We introduce the Sparkle family of permutations operating on 256, 384 and 512 bits. These are combined with the Beetle mode to construct a family of authenticated ciphers, Schwaemm, with security levels ranging from 120 to 250 bits. We also use them to build new sponge-based hash functions, Esch256 and Esch384. Our permutations are among those with the lowest footprint in software, without sacrificing throughput. These properties are allowed by our use of an ARX component (the Alzette S-box) as well as a carefully chosen number of rounds. The corresponding analysis is enabled by the long trail strategy which gives us the tools we need to efficiently bound the probability of all the differential and linear trails for an arbitrary number of rounds. We also present a new application of this approach where the only trails considered are those mapping the rate to the outer part of the internal state, such trails being the only relevant trails for instance in a differential collision attack. To further decrease the number of rounds without compromising security, we modify the message injection in the classical sponge construction to break the alignment between the rate and our S-box layer.
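The ARX component mentioned above, the Alzette S-box, acts on a 64-bit value split into two 32-bit words with four add-rotate-xor rounds and a per-instance constant. The C transcription below uses the rotation amounts (31,24), (17,17), (0,31), (24,16) as we read them from the Alzette design; the official Sparkle reference code should be treated as authoritative for the constants.

```c
/* The Alzette 64-bit ARX-box used inside Sparkle: four ARX rounds on a
 * pair of 32-bit words with a per-instance constant c. Rotation amounts
 * per our reading of the design; consult the official Sparkle/Alzette
 * reference for the normative version. */
#include <stdint.h>
#include <stdio.h>

static uint32_t rot(uint32_t x, unsigned n)   /* rotate right by n */
{
    return n ? (x >> n) | (x << (32 - n)) : x;
}

static void alzette(uint32_t *x, uint32_t *y, uint32_t c)
{
    *x += rot(*y, 31); *y ^= rot(*x, 24); *x ^= c;
    *x += rot(*y, 17); *y ^= rot(*x, 17); *x ^= c;
    *x += rot(*y,  0); *y ^= rot(*x, 31); *x ^= c;
    *x += rot(*y, 24); *y ^= rot(*x, 16); *x ^= c;
}

int main(void)
{
    uint32_t x = 0x12345678, y = 0x9ABCDEF0;
    alzette(&x, &y, 0xB7E15162);  /* a Sparkle-style round constant */
    printf("%08x %08x\n", x, y);
    return 0;
}
```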
2020
TOSC
Highly Secure Nonce-based MACs from the Sum of Tweakable Block Ciphers 📺
Tweakable block ciphers (TBCs) have proven highly useful for boosting the security guarantees of authentication schemes. In 2017, Cogliati et al. proposed two MACs combining TBCs and universal hash functions: a nonce-based MAC called NaT and a deterministic MAC called HaT. While both constructions provide high security, their properties are complementary: NaT is almost fully secure when nonces are respected (i.e., n-bit security, where n is the block size of the TBC, and no security degradation in terms of the number of MAC queries when nonces are unique), while its security degrades gracefully to the birthday bound (n/2 bits) when nonces are misused. HaT has n-bit security and can be used naturally as a nonce-based MAC when a message contains a nonce. However, it does not have full security even if nonces are unique. This work proposes two highly secure and efficient MACs to fill the gap: NaT2 and eHaT. Both provide (almost) full security if nonces are unique and more than n/2-bit security when nonces can repeat. Based on NaT and HaT, we aim at achieving these properties in a modular approach. Our first proposal, Nonce-as-Tweak2 (NaT2), is the sum of two NaT instances. Our second proposal, enhanced Hash-as-Tweak (eHaT), extends HaT by adding the output of an additional nonce-dependent call to the TBC and prepending the nonce to the message. Despite the conceptual simplicity, the security proofs are involved. For NaT2 in particular, we rely on the recent proof framework for Double-block Hash-then-Sum by Kim et al. from Eurocrypt 2020.
2020
TOSC
Forking Tweakable Even-Mansour Ciphers 📺
A forkcipher is a keyed, tweakable function mapping an n-bit input to a 2n-bit output, which is equivalent to concatenating the outputs of two permutations. A forkcipher can be a useful primitive for designing authenticated encryption schemes for short messages. A forkcipher is typically designed within the iterate-fork-iterate (IFI) paradigm, while the provable security of such a construction has not been widely explored. In this paper, we propose a method of constructing a forkcipher using public permutations as its building primitives. It can be seen as applying the IFI paradigm to the tweakable Even-Mansour ciphers, so our construction is dubbed the forked tweakable Even-Mansour (FTEM) cipher. Our main result is to prove that a (1, 1)-round FTEM cipher (applying a single-round TEM to a plaintext, followed by two independent copies of a single-round TEM) is secure up to 2^(2n/3) queries in the ideal permutation model.
2020
TOSC
Fake Near Collisions Attacks 📺
Fast near collision attacks on the stream ciphers Grain v1 and A5/1 were presented at Eurocrypt 2018 and Asiacrypt 2019, respectively. They use the fact that the entire internal state can be split into two parts, so that the second part can be recovered from the first, which itself can be found using the keystream prefix and some guesses of the key material. In this paper, we reevaluate the complexity of these attacks and show that they are actually inferior to previously known results. Basically, we show that their complexity is actually much higher, and we point out the main problems of these papers based on information-theoretic ideas. We also check that some distributions do not have the predicted entropy loss claimed by the authors. Checking cryptographic attacks of galactic complexity is difficult in general. In particular, as these attacks involve many steps, it is hard to identify precisely where they are flawed. But for the attack against A5/1, the flaw could have been avoided had the authors provided a full experiment of their attack, since the overall claimed complexity was lower than 2^32 in both time and memory.
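Checking a claimed entropy loss empirically is conceptually simple: sample the distribution, estimate probabilities by frequency, and compute the Shannon entropy H = -sum p*log2(p). The C sketch below performs that check on a toy biased 8-bit distribution; obtaining samples of the ciphers' actual internal-state distributions is of course the hard part.

```c
/* Minimal empirical Shannon-entropy check: sample a distribution,
 * estimate probabilities from frequencies, compute H = -sum p*log2(p).
 * The toy biased 8-bit distribution stands in for the internal-state
 * distributions whose claimed entropy loss is being verified.
 * Compile with -lm. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    enum { VALS = 256, SAMPLES = 1 << 20 };
    static long count[VALS] = {0};
    srand(42);
    for (long i = 0; i < SAMPLES; i++) {
        int v = rand() & 0xFF;
        if (rand() % 4 == 0) v &= 0x7F;  /* mild bias: toy "entropy loss" */
        count[v]++;
    }
    double h = 0.0;
    for (int v = 0; v < VALS; v++) {
        if (count[v] == 0) continue;
        double p = (double)count[v] / SAMPLES;
        h -= p * log2(p);
    }
    printf("empirical entropy: %.4f bits (uniform would be 8)\n", h);
    return 0;
}
```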
2020
TOSC
Catching the Fastest Boomerangs: Application to SKINNY 📺
In this paper we describe a new tool to search for boomerang distinguishers. One limitation of the MILP model of Liu et al. is that it handles only one round for the middle part, while Song et al. have shown that dependencies could affect many more rounds, for instance up to 6 rounds for SKINNY. Thus we describe a new approach that turns an MILP model for truncated characteristics into an MILP model for truncated boomerang characteristics, automatically handling the middle rounds. We then show a new CP model to search for the best possible instantiations to identify good boomerang distinguishers. Finally, we systematize the method initiated by Song et al. to precisely compute the probability of a boomerang. As a result, we found many new boomerang distinguishers up to 24 rounds in the TK3 model. In particular, we improved by a factor of 2^{30} the probability of the best known distinguisher against 18-round SKINNY-128/256.
2020
TOSC
Towards Low-Energy Leakage-Resistant Authenticated Encryption from the Duplex Sponge Construction 📺
The ongoing NIST lightweight cryptography standardization process highlights the importance of resistance to side-channel attacks, which has renewed interest in Authenticated Encryption schemes (AEs) with light(er)-weight side-channel secure implementations. To address this challenge, our first contribution is to investigate the leakage-resistance of a generic duplex-based stream cipher. When the capacity of the duplex is c bits, we prove the classical bound, i.e., ≈ 2^{c/2}, under an assumption of non-invertible leakage. Based on this, we propose a new 1-pass AE mode TETSponge, which carefully combines a tweakable block cipher that must have strong protections against side-channel attacks and is scarcely used, and a duplex-style permutation that only needs weak side-channel protections and is used to frugally process the message and associated data. It offers: (i) provable integrity (resp. confidentiality) guarantees in the presence of leakage during both encryption and decryption (resp. encryption only), and (ii) some level of nonce misuse robustness. We conclude that TETSponge is an appealing option for the implementation of low-energy AE in settings where side-channel attacks are a concern. We also provide the first rigorous methodology for the leakage-resistance of sponge/duplex-based AEs based on a minimal non-invertibility assumption on leakages, which leads to various insights on designs and implementations.
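For readers unfamiliar with the duplex, the following minimal sketch shows generic duplex-based stream encryption (not TETSponge itself). SHA-256 stands in for the b-bit permutation purely to keep the example self-contained; it is not actually a permutation.

```python
import hashlib

RATE, CAP = 8, 24          # toy parameters: 8-byte rate, 24-byte capacity

def toy_permutation(state: bytes) -> bytes:
    """Stand-in for the b-bit public permutation (SHA-256 is NOT a
    permutation; it is used only to make the sketch self-contained)."""
    return hashlib.sha256(b"P" + state).digest()

def duplex_stream_encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    """Generic duplex-based stream encryption: initialize the state with
    key and nonce (assumed to fit within b bytes), then per block squeeze
    keystream from the outer (rate) part and absorb the ciphertext back."""
    state = toy_permutation((key + nonce).ljust(RATE + CAP, b"\x00"))
    out = bytearray()
    for i in range(0, len(msg), RATE):
        block = msg[i:i + RATE]
        ks = state[:RATE]
        ct = bytes(m ^ k for m, k in zip(block, ks))
        out += ct
        # duplexing: overwrite the outer part with the ciphertext
        state = toy_permutation(ct.ljust(RATE, b"\x00") + state[RATE:])
    return bytes(out)
```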
2020
TOSC
Duel of the Titans: The Romulus and Remus Families of Lightweight AEAD Algorithms 📺
In this article, we propose two new families of very lightweight and efficient authenticated encryption with associated data (AEAD) modes, Romulus and Remus, that provide security beyond the birthday bound with respect to the block length n. The former uses a tweakable block cipher (TBC) as internal primitive and can be proven secure in the standard model. The latter uses a block cipher (BC) as internal primitive and can be proven secure in the ideal cipher model. Both our modes make it very easy to switch from the nonce-respecting to the nonce-misuse scenario. Previous constructions, such as ΘCB3, are quite computationally efficient, yet need a large memory for implementation, which makes them unsuitable for platforms where lightweight cryptography should play a key role. Romulus and Remus break this barrier by introducing a new architecture evolved from the BC mode COFB. They achieve the best of what is possible with a TBC, namely the optimal computational efficiency (rate-1 operation) and the minimum state size of a TBC mode (i.e., (n + t) bits for an n-bit block, t-bit tweak TBC), with almost equivalent provable security as ΘCB3. Actually, our comparisons show that both our designs present superior performance when compared to all other recent lightweight AEAD modes, be they BC-based, TBC-based or sponge-based, in the nonce-respecting or nonce-misuse scenario. We finally describe how to instantiate the Romulus and Remus modes using the Skinny lightweight tweakable block cipher proposed at CRYPTO 2016, including hardware implementation results.
2020
TOSC
Improved Attacks on sLiSCP Permutation and Tight Bound of Limited Birthday Distinguishers 📺
Limited birthday distinguishers (LBDs) are widely used tools for the cryptanalysis of cryptographic permutations. In this paper we propose LBDs on several variants of the sLiSCP permutation family that are building blocks of two round-2 candidates of the NIST lightweight standardization process: Spix and SpoC. We improve the number of steps with respect to the previously known best results, which used the rebound attack. We improve the techniques used for solving the middle part, called inbound, and we relax the external conditions in order to extend the previous attacks. The lower bound on the complexity of LBDs had been proved only against functions. In this paper, we prove the bound against permutations for the first time, which shows that the known upper bounds are tight.
2020
TOSC
Combiners for AEAD 📺
The Authenticated Encryption with Associated Data (AEAD) primitive, which integrates confidentiality and integrity services under a single roof, has found widespread adoption in industry and became indispensable in practical protocol design. Recognizing this, academic research put forward a large number of candidate constructions, many of which come with provable security guarantees. Nevertheless, the recent past has been shaken up by the discovery of vulnerabilities, some of them fatal, in well-regarded schemes, stemming from weak underlying primitives, flawed security arguments, implementation-level vulnerabilities, and so on. Simply reacting to such findings by replacing broken candidates with better(?) ones is in many cases undue, costly, and sometimes just impossible. On the other hand, as attack techniques and opportunities change over time, it seems venturous to propose any specific scheme if the intended lifetime of its application is, say, twenty years. In this work we study a workable approach towards increasing the resilience against unforeseen breaks of AEAD primitives. Precisely, we consider the ability to combine two AEAD schemes into one such that the resulting AEAD scheme is secure as long as at least one of its components is (or: as long as at most one component is broken). We propose a series of such combiners, some of which work with fully generic AEAD components while others assume specific internal structures of the latter (like an encrypt-then-MAC design). We complement our results by proving the optimality of our constructions, showing the impossibility of combiners that get along with fewer invocations of the component algorithms.
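The combiner interface can be illustrated with plain nesting: scheme B encrypts scheme A's output. This is only a sketch of the idea, with toy encrypt-then-MAC AEADs built from SHA-256; the paper's combiners come with actual proofs and are more careful than naive nesting.

```python
import hashlib, hmac

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Counter-mode keystream from SHA-256 (toy stand-in, not a real AE)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def toy_aead_enc(key: bytes, nonce: bytes, ad: bytes, msg: bytes) -> bytes:
    """Stand-in AEAD (encrypt-then-MAC); illustrative only."""
    ct = bytes(m ^ k for m, k in zip(msg, _keystream(key, nonce, len(msg))))
    tag = hmac.new(key, nonce + ad + ct, hashlib.sha256).digest()[:16]
    return ct + tag

def combined_enc(kA: bytes, kB: bytes, nonce: bytes,
                 ad: bytes, msg: bytes) -> bytes:
    """Cascade combiner sketch: scheme B encrypts scheme A's output, so a
    break of one component alone does not expose the plaintext. This shows
    the interface only, not one of the paper's proven constructions."""
    inner = toy_aead_enc(kA, nonce, ad, msg)
    return toy_aead_enc(kB, nonce, ad, inner)
```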
2020
TOSC
Increasing Precision of Division Property 📺
In this paper we propose new techniques related to the division property. We describe for the first time a practical algorithm for computing the propagation tables of 16-bit Super-Sboxes, increasing the precision of the division property by removing many false division trails. We also improve the complexity of the procedure introduced by Lambin et al. (Designs, Codes and Cryptography, 2020) to extend a cipher with linear mappings, and show how to decrease the number of transitions to look for. While search procedures for integral distinguishers most often rely on MILP or SAT solvers for their ease of programming the propagation constraints, such generic solvers can only handle small 4/8-bit Sboxes. Thus we developed an ad-hoc tool handling larger Sboxes and all the improvements described in the paper. As a result, we found new integral distinguishers on SKINNY-64, HIGHT and Midori-64.
2020
TOSC
Low AND Depth and Efficient Inverses: a Guide on S-boxes for Low-latency Masking 📺
In this work, we perform an extensive investigation and construct a portfolio of S-boxes suitable for secure lightweight implementations, which aligns well with the ongoing NIST Lightweight Cryptography competition. In particular, we target good functional properties on the one hand and efficient implementations in terms of AND depth and AND gate complexity on the other. Moreover, we also consider the implementation of the inverse S-box and the possibility for it to share resources with the forward S-box. We take our exploration beyond the conventional small (and even) S-box sizes. Our investigation is twofold: (1) we note that implementations of existing S-boxes are not optimized for the criteria which define masking complexity (AND depth and AND gate complexity) and improve a tool published at FSE 2016 by Stoffelen in order to fill this gap. (2) We search for new S-box designs which take these implementation properties into account from the start. We perform a systematic search based on the properties of not only the S-box but also its inverse as well as an exploration of larger S-box sizes using length-doubling structures. The result of our investigation is not only a wide selection of very good S-boxes, but we also provide complete descriptions of their circuits, enabling their integration into future work.
2020
TOSC
Tightness of the Suffix Keyed Sponge Bound 📺
Generic attacks are a vital ingredient in the evaluation of the tightness of security proofs. In this paper, we evaluate the tightness of the suffix keyed sponge (SuKS) bound. As its name suggests, SuKS is a sponge-based construction that absorbs the key after absorbing the data, but before producing an output. This absorption of the key can be done via an easy-to-invert operation, like an XOR, or a hard-to-invert operation, like a PRF. Using SuKS with a hard-to-invert absorption provides benefits with respect to its resistance against side-channel attacks, and such a construction is used as part of the authenticated encryption scheme Isap. We derive two key recovery attacks against SuKS with easy-to-invert key absorption, and a forgery in case of hard-to-invert key absorption. The attacks closely match the terms in the PRF security bound of SuKS by Dobraunig and Mennink, ToSC 2019(4), and thereby show that these terms are justified, even if the function used to absorb the key is a PRF, and regardless of whether SuKS is used as a PRF or a MAC.
2020
TOSC
Swap and Rotate: Lightweight Linear Layers for SPN-based Blockciphers 📺
In CHES 2017, Jean et al. presented a paper on “Bit-Sliding” in which the authors proposed lightweight constructions for SPN-based block ciphers like AES, PRESENT and SKINNY. The main idea behind these constructions was to reduce the length of the datapath to 1 bit and to reformulate the linear layer for these ciphers so that they require fewer scan flip-flops (which have built-in multiplexer functionality and are thus larger in area than simple flip-flops). In this paper, we develop their idea even further in a few separate directions. First, we prove that given an arbitrary linear transformation, it is always possible to construct the linear layer using merely 2 scan flip-flops. This points to an optimistic avenue to follow to gain further GE reductions, yet the straightforward application of the techniques in our proof to PRESENT and GIFT leads to inefficient implementations of the linear layer, as reducing ourselves to the 2-scan-flip-flop setting requires thousands of clock cycles and leads to very high latency. Equipped with the well-established formalism on permutation groups, we explore whether we can reduce the number of clock cycles to a practical level, i.e., a few hundred, by adding a few more pairs of scan flip-flops. For PRESENT, we show that 4 (resp. 8, 12) scan flip-flops are sufficient to complete the permutation layer in 384 (resp. 256, 128) clock cycles. For GIFT, we show that 4 (resp. 8, 10) scan flip-flops correspond to 320 (resp. 192, 128) clock cycles. Finally, in order to provide the best of both worlds (i.e., circuit area and latency), we push our scan flip-flop choices even further to completely eliminate the latency incurred by the permutation layer, without compromising our stringent GE budget. We show that not only are 12 scan flip-flops sufficient to execute the PRESENT permutation in 64 clock cycles, but also that the same scan flip-flops can be used readily in a combined encryption-decryption circuit. Our final designs of PRESENT and GIFT beat the records of Jean et al. and Banik et al. in both the latency and the circuit-size metric. We believe that the techniques presented in our work can also be used for choosing bit-sliding-friendly linear-layer permutations for future SPN-based designs.
2020
TOSC
Cryptanalysis of Forkciphers 📺
The forkcipher framework was designed in 2018 by Andreeva et al. for authenticated encryption of short messages. Two dedicated ciphers were proposed in this framework: ForkAES, based on the AES (and its tweakable variant Kiasu-BC), and ForkSkinny, based on Skinny. The main motivation is that the forked ciphers should keep the same security as the underlying ciphers, but offer better performance thanks to the larger output. Recent cryptanalysis results at ACNS ’19 have shown that ForkAES actually offers a reduced security margin compared to the AES, with an 8-round attack, and this was taken into account in the design of ForkSkinny. In this paper, we present new cryptanalysis results on forkciphers. First we improve the previous attack on ForkAES in order to attack the full 10 rounds. This is the first attack challenging the security of full ForkAES. Then we present the first analysis of ForkSkinny, showing that the best attacks on Skinny can be extended by one round for most ForkSkinny variants, and by up to three rounds for ForkSkinny-128-256. This allows us to evaluate the security degradation between ForkSkinny and the underlying block cipher. Our analysis shows that all components of a forkcipher must be carefully designed: the attack against ForkAES uses the weak diffusion of the middle rounds in reconstruction queries (going from one ciphertext to the other), while the attack against ForkSkinny uses a weakness of the tweakey schedule in encryption queries (when one branch of the tweakey schedule is skipped).
2020
TOSC
Spectral analysis of ZUC-256 📺
In this paper we develop a number of generic techniques and algorithms in spectral analysis of large linear approximations for use in cryptanalysis. We apply the developed tools for cryptanalysis of ZUC-256 and give a distinguishing attack with complexity around 2^{236}. Although the attack is only 2^{20} times faster than exhaustive key search, the result indicates that ZUC-256 does not provide a source with full 256-bit entropy in the generated keystream, which would be expected from a 256-bit key. To the best of our knowledge, this is the first known academic attack on full ZUC-256 with a computational complexity that is below exhaustive key search.
2020
TOSC
Algebraic and Higher-Order Differential Cryptanalysis of Pyjamask-96 📺
Cryptographic competitions, like the ongoing NIST call for lightweight cryptography, always provide a thriving research environment, where new interesting ideas are proposed and new cryptographic insights are made. One proposal for this NIST call that was accepted for the second round is Pyjamask. Pyjamask is an authenticated encryption scheme that builds upon two block ciphers, Pyjamask-96 and Pyjamask-128, which aim to minimize the number of AND operations at the cost of a very strong linear layer. A side effect of this goal is a slow growth in the algebraic degree. In this paper, we focus on the block cipher Pyjamask-96 and are able to provide a theoretical key-recovery attack reaching 14 (out of 14) rounds, as well as a practical attack on 8 rounds. We do this by combining higher-order differentials with an in-depth analysis of the system of equations obtained for 2.5 rounds of Pyjamask-96. The AEAD scheme Pyjamask itself is not threatened by the work in this paper.
2020
TOSC
Cryptanalysis of the Legendre PRF and Generalizations 📺
The Legendre PRF relies on the conjectured pseudorandomness properties of the Legendre symbol with a hidden shift. Originally proposed as a PRG by Damgård at CRYPTO 1988, it was recently suggested as an efficient PRF for multiparty computation purposes by Grassi et al. at CCS 2016. Moreover, the Legendre PRF is being considered for usage in the Ethereum 2.0 blockchain. This paper improves previous attacks on the Legendre PRF and its higher-degree variant due to Khovratovich by reducing the time complexity from O(p log p/M) to O(p log^2 p/M^2) Legendre symbol evaluations when M ≤ (p log^2 p)^{1/4} queries are available. The practical relevance of our improved attack is demonstrated by breaking three concrete instances of the PRF proposed by the Ethereum foundation. Furthermore, we generalize our attack in a nontrivial way to the higher-degree variant of the Legendre PRF and we point out a large class of weak keys for this construction. Lastly, we provide the first security analysis of two additional generalizations of the Legendre PRF originally proposed by Damgård in the PRG setting, namely the Jacobi PRF and the power residue PRF.
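The primitive under attack is compact enough to state in a few lines of Python. The bit convention below (nonzero quadratic residue maps to 1) is an assumption for illustration; the attacks rely only on the keyed-shift structure, not on the convention.

```python
def legendre_bit(a: int, p: int) -> int:
    """Map the Legendre symbol of a mod p (p an odd prime) to a bit,
    via Euler's criterion: a^((p-1)/2) mod p. Convention here: 1 if a is
    a nonzero quadratic residue, else 0 (a labelled assumption; papers
    differ in how the symbol is turned into a bit)."""
    return 1 if pow(a % p, (p - 1) // 2, p) == 1 else 0

def legendre_prf(key: int, x: int, p: int) -> int:
    """Legendre PRF with hidden shift: output the bit of (key + x) mod p."""
    return legendre_bit(key + x, p)

# Example: query the PRF on consecutive inputs, the kind of data the
# attacks exploit. The prime here is a toy choice (the Mersenne prime M61).
p = 2**61 - 1
key = 123456789
stream = [legendre_prf(key, x, p) for x in range(64)]
```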
2020
TOSC
On the Feistel Counterpart of the Boomerang Connectivity Table: Introduction and Analysis of the FBCT 📺
At Eurocrypt 2018, Cid et al. introduced the Boomerang Connectivity Table (BCT), a tool to compute the probability of the middle round of a boomerang distinguisher from the description of the cipher’s Sbox(es). Their new table and the following works led to a refined understanding of boomerangs, and resulted in a series of improved attacks. Still, these works only addressed the case of Substitution-Permutation Networks, and completely left out the case of ciphers following a Feistel construction. In this article, we address this gap by introducing the FBCT, the Feistel counterpart of the BCT. We show that the coefficient at position (Δi, ∇o) corresponds to the number of times the second-order derivative at point (Δi, ∇o) cancels out. We explore the properties of the FBCT and compare it to what is known about the BCT. Taking matters further, we show how to compute the probability of a boomerang switch over multiple rounds with a generic formula.
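Since the abstract gives the defining property, the FBCT of a small S-box can be computed directly: entry (Δi, ∇o) counts the inputs x at which the second-order derivative vanishes. A sketch:

```python
def fbct(sbox: list[int]) -> list[list[int]]:
    """Compute the Feistel Boomerang Connectivity Table: entry (di, do)
    counts the x for which the second-order derivative
    S(x) ^ S(x^di) ^ S(x^do) ^ S(x^di^do) vanishes."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for di in range(n):
        for do in range(n):
            for x in range(n):
                if sbox[x] ^ sbox[x ^ di] ^ sbox[x ^ do] ^ sbox[x ^ di ^ do] == 0:
                    table[di][do] += 1
    return table

# Example on a 4-bit S-box (the PRESENT S-box).
PRESENT_SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
T = fbct(PRESENT_SBOX)
assert T[0][0] == 16   # row and column 0 are always maximal
```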
2020
TOSC
Links between Division Property and Other Cube Attack Variants 📺
A theoretically reliable key-recovery attack should evaluate not only the non-randomness for the correct key guess but also the randomness for the wrong ones. The former has always been the main focus, but the absence of the latter can also cause self-contradictory results. In fact, the theoretical discussion of wrong key guesses is overlooked in quite a few existing key-recovery attacks, especially the previous cube attack variants based on pure experiments. In this paper, we draw links between the division property and several variants of the cube attack. In addition to the zero-sum property, we further prove that the bias phenomenon, the non-randomness widely utilized in dynamic cube attacks and cube testers, can also be reflected by the division property. Based on such links, we are able to provide several results: Firstly, we give a dynamic cube key-recovery attack on full Grain-128. Compared with Dinur et al.’s original one, this attack is supported by a theoretical analysis of the bias based on a more elaborate assumption. Our attack can recover 3 key bits with a complexity of 2^{97.86} and an evaluated success probability of 99.83%. Thus, the overall complexity for recovering the full 128 key bits is 2^{125}. Secondly, now that the bias phenomenon can be efficiently and elaborately evaluated, we further derive new security bounds for Grain-like primitives (namely Grain-128, Grain-128a, Grain-V1, Plantlet) against both the zero-sum and bias cube testers. Our bounds indicate that 256 initialization rounds are not enough to guarantee that Grain-128 resists bias-based cube testers. This provides an efficient tool for determining the number of initialization rounds of newly designed stream ciphers. Thirdly, we improve Wang et al.’s relaxed term enumeration technique proposed at CRYPTO 2018 and extend their results on Kreyvium and ACORN by 1 and 13 rounds (reaching 892 and 763 rounds) with complexities 2^{121.19} and 2^{125.54}, respectively. To our knowledge, our results are the current best key-recovery attacks on these two primitives.
2020
TOSC
Finding Bit-Based Division Property for Ciphers with Complex Linear Layers 📺
The bit-based division property (BDP) is the most effective technique for finding integral characteristics of symmetric ciphers. Recently, automatic search tools have become one of the most popular approaches to evaluating the security of designs against many attacks. Constraint-aided automatic tools for the BDP have been applied to many ciphers with simple linear layers like bit permutations. Constructing models of complex linear layers accurately and efficiently remains hard. A straightforward method proposed by Sun et al. (called the S method) decomposes a complex linear layer into basic operations like COPY and XOR, then models them one by one. However, this method can easily insert invalid division trails into the solution pool, which results in the balanced property being lost sooner than it is for the cipher itself. In order to solve this problem, Zhang and Rijmen propose the ZR method, which links every valid trail to an invertible sub-matrix of the matrix corresponding to the linear layer, and then generates linear inequalities to represent all the invertible sub-matrices. Unfortunately, the ZR method is only applicable to invertible binary matrices (defined in Definition 3). To avoid generating a huge number of inequalities for all the sub-matrices, we build a new model that only requires the sub-matrix corresponding to a valid trail to be invertible. The computing scale of our model can be tackled by most SMT/SAT solvers, which makes our method practical. For applications, we improve the previous BDP results for LED and MISTY1. We also give the 7-round BDP results for Camellia with FL/FL−1, which are the longest to date. Furthermore, we remove the restriction of the ZR method that the matrix has to be invertible, which provides more choices for future designs. Thanks to this, we also reproduce the 5-round key-dependent integral distinguishers proposed at CRYPTO 2016, which cannot be obtained by either the S or the ZR method.
2020
TOSC
Improved Security Bounds for Generalized Feistel Networks 📺
We revisit the security of various generalized Feistel networks. Concretely, for unbalanced, alternating, type-1, type-2, and type-3 Feistel networks built from random functions, we substantially improve the coupling analyses of Hoang and Rogaway (CRYPTO 2010). For a tweakable blockcipher-based generalized Feistel network proposed by Coron et al. (TCC 2010), we present a coupling analysis and for the first time show that with enough rounds, it achieves 2n-bit security; this provides highly secure, double-length tweakable blockciphers.
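A type-2 generalized Feistel round, one of the variants analyzed here, can be sketched as follows; the SHA-256-based round function is a stand-in for the random functions in the model, and the per-round domain separation is an illustrative assumption.

```python
import hashlib

def round_func(key: bytes, rnd: int, i: int, block: bytes) -> bytes:
    """Stand-in for the per-round random function F (illustrative;
    assumes branches of at most 32 bytes)."""
    return hashlib.sha256(key + bytes([rnd, i]) + block).digest()[:len(block)]

def type2_gfn_round(key: bytes, rnd: int, branches: list[bytes]) -> list[bytes]:
    """One type-2 generalized Feistel round on an even number of equal-size
    branches: each odd-indexed branch absorbs F of its left neighbour,
    then all branches rotate left by one position."""
    out = list(branches)
    for i in range(0, len(branches), 2):
        f = round_func(key, rnd, i, branches[i])
        out[i + 1] = bytes(a ^ b for a, b in zip(branches[i + 1], f))
    return out[1:] + out[:1]

def type2_gfn(key: bytes, branches: list[bytes], rounds: int) -> list[bytes]:
    for r in range(rounds):
        branches = type2_gfn_round(key, r, branches)
    return branches
```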
2020
TOSC
On the Composition of Single-Keyed Tweakable Even-Mansour for Achieving BBB Security 📺
Observing the growing popularity of random permutation (RP)-based designs (e.g., Sponge), Bart Mennink in CRYPTO 2019 initiated an interesting line of research on RP-based pseudorandom functions (PRFs). Both of his constructions are claimed to achieve beyond-the-birthday-bound (BBB) security of 2n/3 bits (n being the input block size in bits), but require two instances of RPs and can handle only one-block inputs. In this work, we extend research in this direction by providing two new BBB-secure constructions obtained by composing the tweakable Even-Mansour cipher appropriately. Our first construction requires only one instance of an RP and only one key. Our second construction extends the first to a nonce-based Message Authentication Code (MAC) using a universal hash to deal with multi-block inputs. We show that the hash key can be derived from the original key when the underlying hash is the Poly hash. We provide matching attacks for both constructions to demonstrate the tightness of the proven security bounds.
2020
TOSC
Key Assignment Schemes with Authenticated Encryption, revisited 📺
A popular cryptographic option to implement Hierarchical Access Control in organizations is to combine a key assignment scheme with a symmetric encryption scheme. In brief, key assignment associates with each object in the hierarchy a unique symmetric key, and provides all higher-ranked “authorized” subjects with a method to recover it. This setup allows for encrypting the payloads associated with the objects so that they can be accessed by the authorized and remain inaccessible for the unauthorized. Both key assignment and symmetric encryption have been researched for roughly four decades now, and a plethora of efficient constructions have been the result. Surprisingly, a treatment of the joint primitive (key assignment combined with encryption, as used in practice) in the framework of provable security was conducted only very recently, leading to a publication in ToSC 2018(4). We first carefully revisit this publication. We then argue that there are actually two standard use cases for the combined primitive, which also require individual treatment. We correspondingly propose a fresh set of security models and provably secure constructions for each of them. Perhaps surprisingly, the two constructions call for different symmetric encryption primitives: While standard AEAD is the right tool for the one, we identify a less common tool called Encryptment as best fitting the other.
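The flavor of key assignment is easiest to see for a tree-shaped hierarchy, where each class key is derived top-down from its parent's key; the hash-based derivation below is a generic illustration, not one of the paper's constructions, which treat general hierarchies and formal security models.

```python
import hashlib

def derive_key(parent_key: bytes, child_label: bytes) -> bytes:
    """Top-down key derivation sketch: any subject holding a class key can
    recompute the keys of all lower-ranked classes, but not vice versa
    (assuming the hash is one-way). Tree hierarchies only; illustrative."""
    return hashlib.sha256(b"KAS" + parent_key + child_label).digest()

root = b"\x00" * 32                   # key of the highest-ranked class
dept = derive_key(root, b"department/42")
team = derive_key(dept, b"team/7")    # anyone holding `dept` can derive this
# The payload of each object is then encrypted under its class key,
# e.g., with an AEAD scheme, as discussed above.
```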
2020
TOSC
Beyond-Birthday-Bound Secure Cryptographic Permutations from Ideal Ciphers with Long Keys 📺
Coron et al. showed a construction of a 3-round 2n-bit cryptographic permutation from three independent n-bit ideal ciphers with n-bit keys (TCC 2010). Guo and Lin showed a construction of a (2d − 1)-round dn-bit cryptographic permutation from 2d − 1 independent n-bit ideal ciphers with kn-bit keys, where d = k + 1 (Cryptography and Communications, 2015). These constructions have an indifferentiability security bound of O(q^2/2^n) against adversaries that make at most q queries. The bound is commonly referred to as birthday-bound security. In this paper, we show that a 5-round version of Coron et al.’s construction and a (2d + 1)-round version of Guo and Lin’s construction yield a cryptographic permutation with an indifferentiability security bound of O(q^2/2^{2n}), i.e., by adding two more rounds, these constructions have beyond-birthday-bound security. Furthermore, under the assumption that q ≤ 2^n, we show that Guo and Lin’s construction with 2d + 2l − 1 rounds yields a cryptographic permutation with a security bound of O(q^2/2^{(l+1)n}), where 1 ≤ l ≤ d − 1, i.e., the security bound exponentially improves by adding every two more rounds, up to 4d − 3 rounds. To the best of our knowledge, our result gives the first cryptographic permutation that is built from n-bit ideal ciphers and has a full n-bit indifferentiability security bound.
2020
TOSC
On the Security of Sponge-type Authenticated Encryption Modes 📺
The sponge duplex is a popular mode of operation for constructing authenticated encryption schemes. In fact, one can assess the popularity of this mode from the fact that around 25 out of the 56 round-1 submissions to the ongoing NIST lightweight cryptography (LwC) standardization process are based on this mode. Among these, 14 sponge-type constructions were selected for the second round, which consists of 32 submissions. In this paper, we generalize the duplexing interface of the duplex mode, which we call Transform-then-Permute. It encompasses Beetle as well as a new sponge-type mode SpoC (both are round-2 submissions to NIST LwC). We show a tight security bound for Transform-then-Permute based on a b-bit permutation, which reduces to finding an exact estimation of the expected number of multi-chains (defined in this paper). As a corollary of our general result, the authenticated encryption advantage of Beetle and SpoC is about T(D + r2^r)/2^b, where T, D and r denote the number of offline queries (related to the time complexity of the attack), the number of construction queries (related to the data complexity), and the rate of the construction (related to efficiency), respectively. Previously the same bound had been proved for Beetle under the limitation that T ≪ min{2^r, 2^{b/2}} (which compels choosing a larger permutation with a higher rate). In the context of the NIST LwC requirements, SpoC based on a 192-bit permutation achieves the desired security with a 64-bit rate, which is not achieved by either duplex or Beetle (as per the previous analysis).
2020
TOSC
Optimizing Implementations of Linear Layers 📺
In this paper, we propose a new heuristic algorithm to search for efficient implementations (in terms of Xor count) of linear layers used in symmetric-key cryptography. It is observed that the implementation cost of an invertible matrix is related to its matrix decomposition if the sequential-Xor (s-Xor) metric is considered; thus reducing the implementation cost is equivalent to constructing an optimized matrix decomposition. The basic idea of this work is to find various matrix decompositions for a given matrix and optimize those decompositions to pick the best implementation. In order to optimize matrix decompositions, we present several matrix multiplication rules over F2, which are proved to be very powerful in reducing the implementation cost. We illustrate this heuristic by searching for implementations of several matrices proposed recently and of matrices already used in block ciphers and hash functions, and the results show that our heuristic performs as well as or outperforms Paar’s and Boyar-Peralta’s heuristics in most cases.
2020
TOSC
Beyond-Birthday-Bound Security for 4-round Linear Substitution-Permutation Networks 📺
Recent works of Cogliati et al. (CRYPTO 2018) have initiated provable treatments of Substitution-Permutation Networks (SPNs), one of the most popular approaches to constructing modern blockciphers. Such theoretical SPN models may employ non-linear diffusion layers, which enable beyond-birthday-bound provable security. However, for the model of real-world blockciphers, i.e., SPN models with linear diffusion layers, existing provable results are capped at birthday security up to $2^{n/2}$ adversarial queries, where $n$ is the size of the idealized S-boxes. In this paper, we overcome this birthday barrier and prove that a 4-round SPN with linear diffusion layers and independent round keys is secure up to $2^{2n/3}$ queries. For this, we identify conditions on the linear layers that are sufficient for such security, which, unsurprisingly, turn out to be slightly stronger than Cogliati et al.'s conditions for birthday security. This provides additional theoretical support for real-world SPN blockciphers.
2020
TOSC
Efficient MILP Modelings for Sboxes and Linear Layers of SPN ciphers 📺
Mixed Integer Linear Programming (MILP) solvers are regularly used by designers for providing security arguments and by cryptanalysts for searching for new distinguishers. For both applications, bitwise models are more refined and allow analyzing properties of primitives more accurately than word-oriented models. Yet, they are much heavier than the latter. In this work, we first propose many new algorithms for efficiently modeling differential propagation through Sboxes. Notably, we manage to represent the AES Sbox with three times fewer inequalities than before. Then, we present two new algorithms inspired by coding theory to model complex linear layers without dummy variables. This permits us to represent many diffusion matrices, notably those of Skinny-128 and AES, in a much more compact way. To demonstrate the impact of our new models on the solving time, we ran experiments for both Skinny-128 and AES. Finally, our new models allowed us to computationally prove that there are no impossible differentials for 5-round AES and 13-round Skinny-128 with exactly one input and one output active byte, even if the details of both the Sbox and the linear layer are taken into account.
2020
TOSC
Design of Symmetric-Key Primitives for Advanced Cryptographic Protocols 📺
While traditional symmetric algorithms like AES and SHA3 are optimized for efficient hardware and software implementations, a range of emerging applications using advanced cryptographic protocols such as multi-party computation and zero-knowledge proofs require optimization with respect to a different metric: arithmetic complexity. In this paper we study the design of secure cryptographic algorithms optimized to minimize this metric. We begin by identifying the differences in the design space between such arithmetization-oriented ciphers and traditional ones, with particular emphasis on the available tools, efficiency metrics, and relevant cryptanalysis. This discussion highlights a crucial point --- the considerations for designing arithmetization-oriented ciphers are oftentimes different from the considerations arising in the design of software- and hardware-oriented ciphers. The natural next step is to identify sound principles to securely navigate this new terrain, and to materialize these principles into concrete designs. To this end, we present the Marvellous design strategy which provides a generic way to easily instantiate secure and efficient algorithms for this emerging domain. We then show two examples for families following this approach. These families --- Vision and Rescue --- are benchmarked with respect to three use cases: the ZK-STARK proof system, proof systems based on Rank-One Constraint Satisfaction (R1CS), and Multi-Party Computation (MPC). These benchmarks show that our algorithms achieve a highly compact algebraic description, and thus benefit the advanced cryptographic protocols that employ them. Evidence is provided that this is the case also in real-world implementations.
2020
TOSC
Dasta – Alternative Linear Layer for Rasta 📺
Progress in the areas of multi-party computation (MPC) and fully homomorphic encryption (FHE) has created a demand for new design strategies that minimize the number of multiplications in symmetric primitives. Rasta is an approach for a family of stream ciphers with an exceptionally low AND depth, which equals the number of ANDs per encrypted bit. This is achieved in particular by randomizing parts of the computation with the help of a PRNG, implying that the security arguments rely on the provided randomness and that encryption/decryption is potentially slowed down by this generation. In this paper we propose a variant of Rasta that achieves the same performance with respect to the AND depth and the number of ANDs per encrypted bit, but does not rely on a PRNG, i.e., it is based on fixed linear layers.
2020
TOSC
Fast Decryption: a New Feature of Misuse-Resistant AE 📺
Misuse-resistant AE (MRAE) is a class of authenticated encryption (AE) that resists potential misuse (repetition) of nonces. MRAE has received significant attention since its initial proposal by Rogaway and Shrimpton. They showed a generic MRAE construction called SIV. SIV has become a de-facto scheme for MRAE; however, one notable drawback is its two-pass operation for both encryption and decryption. This implies that MRAE built on SIV is slower than integrated nonce-based AE schemes, such as OCB. In this paper, we propose a new method to improve this situation. In particular, our MRAE proposal (decryption-fast SIV, or DFV) allows decryption as fast as a plain decryption, hence theoretically doubling the decryption speed compared with the original SIV, while keeping the encryption speed equivalent to SIV. We present several generic compositions for DFV and their instantiations.
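For reference, here is a sketch of the SIV baseline that DFV improves upon, with toy primitives (HMAC as the PRF, a SHA-256 counter-mode keystream as the encryption layer). It makes the two-pass decryption cost visible: decrypt first, then recompute and check the tag.

```python
import hashlib, hmac

def _keystream(key: bytes, iv: bytes, n: int) -> bytes:
    """Counter-mode keystream from SHA-256 (toy stand-in)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def siv_encrypt(k1: bytes, k2: bytes, nonce: bytes, msg: bytes) -> bytes:
    """SIV baseline: derive the IV/tag as a PRF over nonce and message
    (first pass), then encrypt under that IV (second pass)."""
    iv = hmac.new(k1, nonce + msg, hashlib.sha256).digest()[:16]
    ct = bytes(m ^ k for m, k in zip(msg, _keystream(k2, iv, len(msg))))
    return iv + ct

def siv_decrypt(k1: bytes, k2: bytes, nonce: bytes, blob: bytes):
    """Decryption is also two-pass: decrypt, then recompute and verify
    the tag over the recovered plaintext. This is the cost DFV removes."""
    iv, ct = blob[:16], blob[16:]
    msg = bytes(c ^ k for c, k in zip(ct, _keystream(k2, iv, len(ct))))
    tag = hmac.new(k1, nonce + msg, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, iv):
        return None    # authentication failure
    return msg
```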
2020
TOSC
Differential Attacks on CRAFT Exploiting the Involutory S-boxes and Tweak Additions 📺
CRAFT is a lightweight tweakable block cipher proposed at FSE 2019, which allows countermeasures against Differential Fault Attacks to be integrated into the cipher at the algorithmic level with ease. CRAFT employs a lightweight and involutory S-box and linear layer, such that the encryption function can be turned into decryption at a low cost. Besides, the tweakey schedule algorithm of CRAFT is extremely simple, where four 64-bit round tweakeys are generated and repeatedly used. Due to a combination of these features, which makes CRAFT exceedingly lightweight, we find that some input difference at a particular position can be preserved through any number of rounds if the input pair follows certain truncated differential trails. Interestingly, in contrast to traditional differential analysis, the validity of this invariant property is affected by the positions where the constant additions take place. We use this property to construct “weak-tweakey” truncated differential distinguishers of CRAFT in the single-key model. Subsequently, we show how the tweak additions allow us to convert these weak-tweakey distinguishers into ordinary secret-key distinguishers, based on which key-recovery attacks can be performed. Moreover, we show how to construct MILP models to search for truncated differential distinguishers exploiting this invariant property. As a result, we find a 15-round truncated differential distinguisher of CRAFT and extend it to a 19-round key-recovery attack with 2^{60.99} data, 2^{68} memory, 2^{94.59} time complexity, and success probability 80.66%. Also, we find a 14-round distinguisher with probability 2^{-43} (experimentally verified), a 16-round distinguisher with probability 2^{-55}, and a 20-round weak-key distinguisher (2^{118} weak keys) with probability 2^{-63}. Experiments on round-reduced versions of the distinguishers show that the experimental probabilities are sometimes higher than predicted. Finally, we note that our result is far from threatening the security of the full CRAFT.
2020
TOSC
On the Security Margin of TinyJAMBU with Refined Differential and Linear Cryptanalysis 📺
This paper presents the first third-party security analysis of TinyJAMBU, which is one of the 32 second-round candidates in NIST’s lightweight cryptography standardization process. TinyJAMBU adopts an NLFSR-based keyed permutation that computes only a single NAND gate as a non-linear component per round. The designers evaluated the minimum number of active AND gates; however, such a counting method neglects the dependency between multiple AND gates. There also exist previous works considering such dependencies with stricter models; however, those are known to be too slow. In this paper, we present a new model that provides a good balance of efficiency and accuracy by only taking into account the first-order correlation of AND gates that frequently occurs in TinyJAMBU. With the refined model, we show a 338-round differential with probability 2^{-62.68} that leads to a forgery attack breaking 64-bit security. This implies that the security margin of TinyJAMBU with respect to the number of unattacked rounds is approximately 12%. We also show a differential on the full 384 rounds with probability 2^{-70.64}, thus the security margin of the full rounds with respect to the data complexity, namely the gap between the claimed security bits and the attack complexity, is less than 8 bits. Our attacks also point out structural weaknesses of the mode that essentially come from the minimal state size required to be lightweight.
2020
TOSC
Practical seed-recovery for the PCG Pseudo-Random Number Generator 📺
The Permuted Congruential Generators (PCG) are popular conventional (non-cryptographic) pseudo-random generators designed in 2014. They are used by default in the NumPy scientific computing package. Even though they are not of cryptographic strength, their designer stated that predicting their output should nevertheless be “challenging”. In this article, we present a practical algorithm that recovers all the hidden parameters and reconstructs the successive internal states of the generator. This enables us to predict the next “random” numbers and to output the seeds of the generator. We have successfully executed the reconstruction algorithm using 512 bytes of challenge input; in the worst case, the process takes 20 000 CPU hours. This reconstruction algorithm makes use of cryptanalytic techniques, both symmetric and lattice-based. In particular, the most computationally expensive part is a guess-and-determine procedure that solves about 2^{52} instances of the Closest Vector Problem on a very small lattice.
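For concreteness, here is the classic 32-bit member of the PCG family (XSH-RR 64/32): a 64-bit LCG state followed by a permuted output function. The constants are the reference ones to the best of our knowledge; the paper's target is the family's structure, in particular the PCG64 variant used by NumPy.

```python
MASK64 = (1 << 64) - 1

class PCG32:
    """PCG XSH-RR 64/32 sketch: a 64-bit linear congruential state whose
    output is a rotated xorshift of the old state. The multiplier is the
    reference one to the best of our knowledge (illustrative)."""
    MULT = 6364136223846793005

    def __init__(self, seed: int, seq: int):
        self.state = seed & MASK64
        self.inc = ((seq << 1) | 1) & MASK64   # increment must be odd

    def next32(self) -> int:
        old = self.state
        # LCG state update: the linear part the seed-recovery attack targets
        self.state = (old * self.MULT + self.inc) & MASK64
        # permuted output: xorshift-high, then a data-dependent rotation
        xorshifted = (((old >> 18) ^ old) >> 27) & 0xFFFFFFFF
        rot = old >> 59
        return ((xorshifted >> rot) | (xorshifted << (32 - rot))) & 0xFFFFFFFF

gen = PCG32(seed=42, seq=54)
outputs = [gen.next32() for _ in range(8)]
```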
2020
TOSC
Extended Truncated-differential Distinguishers on Round-reduced AES 📺
Distinguishers on round-reduced AES have attracted considerable attention in recent years. While the number of rounds covered in key-recovery attacks has not increased, subspace, yoyo, mixture-differential, and multiple-of-n cryptanalysis have advanced the understanding of the properties of the cipher. For substitution-permutation networks, integral attacks are a suitable target for extension since they usually end after a linear layer sums several subcomponents. Based on results by Patarin, Chen et al. already observed that the expected number of collisions for a sum of permutations differs slightly from that for a random primitive. However, their targets remained lightweight primitives. The present work illustrates how the well-known integral distinguisher on three-round AES resembles a sum of PRPs and can be extended to truncated-differential distinguishers over 4 and 5 rounds. In contrast to previous distinguishers by Grassi et al., our approach allows us to prepend a round that starts from a diagonal subspace. We demonstrate how the prepended round can be used for key recovery with a new differential key-recovery attack on six-round AES. Moreover, we show how the prepended round can also be integrated to form a six-round distinguisher. For all distinguishers and the key-recovery attack, our results are supported by implementations with Cid et al.’s established Small-AES version. While the distinguishers do not threaten the security of the AES, they help shed more light on its properties.
2020
TOSC
On the Usage of Deterministic (Related-Key) Truncated Differentials and Multidimensional Linear Approximations for SPN Ciphers 📺
Among the few works realising the search for truncated differentials (TD) and multidimensional linear approximations (MDLA) that hold with certainty, the optimality of the distinguisher should be confirmed via an exhaustive search over all possible input differences/masks, which cannot be afforded when the internal state of the primitive has a considerable number of words. The incomplete search is also a long-standing problem in the search for optimal impossible differentials (ID) and zero-correlation linear approximations (ZCLA), since all available automatic tools operate under fixed input and output differences/masks, and testing all possible combinations of differences/masks is impracticable for now. In this paper, we start by introducing an automatic approach based on the constraint satisfaction problem for the exploration of deterministic TDs and MDLAs. Since we transform the exhaustive search into an inherent feature of the searching model, the issue of incomplete search is settled. This tool is applied to search for related-key (RK) TDs of AES-192, and a new related-key differential-linear (DL) distinguisher is identified with a TD holding with certainty. Due to the novel property of the distinguisher, the previous RK DL attack on AES-192 is improved. Also, the new distinguisher is explained from the viewpoint of the differential-linear connectivity table (DLCT) and thus can be regarded as the first application of the DLCT in the related-key attack scenario. As the second application of the tool, we propose a method to construct (RK) IDs and ZCLAs automatically. Benefiting from the control of the nonzero fixed differential pattern and the inherent feature of exhaustive search, the new searching scheme can discover longer distinguishers and hence offers some advantages over previous methods. This technique is implemented with several primitives, and the provable security bounds of SKINNY and Midori64 against impossible differential distinguishing attacks are generalised.
2020
TOSC
Exploring Secret Keys in Searching Integral Distinguishers Based on Division Property 📺
Division property, proposed by Todo at EUROCRYPT 2015, is a generalized integral property. Subsequently, the conventional bit-based division property (CBDP) and the bit-based division property using three subsets (BDPT) were proposed by Todo and Morii at FSE 2016. At ASIACRYPT 2016, Xiang et al. extended the Mixed Integer Linear Programming (MILP) method to search for integral distinguishers based on CBDP, and at ASIACRYPT 2019, Wang et al. proposed an MILP-aided method for searching integral distinguishers based on BDPT. Although BDPT is powerful in searching for integral distinguishers, its accuracy is not perfect. For the block cipher SPECK32, as the block size is only 32 bits, we can experimentally observe the behaviors of all the plaintexts under a fixed key. By testing 2^{10} random secret keys, we experimentally found a better integral distinguisher of 6-round SPECK32 with 30 active bits. But this experimental integral distinguisher cannot be proved by existing methods, so there still exists a gap between the proved distinguisher and the experimental one. To fill the gap, we explore secret keys in searching for integral distinguishers based on BDPT. We put forward a situation where the “XOR with the secret key” operation can be bypassed. Based on the new BDPT propagation rule, an improved automatic algorithm for searching integral distinguishers is proposed. For SPECK32, our improved algorithm can find the 6-round integral distinguisher with 2^{30} chosen plaintexts. The gap between the proved distinguisher and the experimental one is thus filled. Moreover, we apply this improved method to search for integral distinguishers of the SPECK, KATAN/KTANTAN, SIMON, SIMECK, SIMON(102), PRESENT and RECTANGLE block ciphers. The integral distinguishers found by our improved method are better than or consistent with the previous longest distinguishers.
2020
TOSC
Cryptanalysis of Curl-P and Other Attacks on the IOTA Cryptocurrency 📺
We present attacks on the cryptography formerly used in the IOTA blockchain, including, under certain conditions, the ability to forge signatures. We developed practical attacks on IOTA’s cryptographic hash function Curl-P-27, allowing us to quickly generate short colliding messages. These collisions work even for messages of the same length. Exploiting these weaknesses in Curl-P-27, we broke the EU-CMA security of the former IOTA Signature Scheme (ISS). Finally, we show that in a chosen-message setting we could forge signatures and multi-signatures of valid spending transactions (called bundles in IOTA).
2020
TOSC
Deck-Based Wide Block Cipher Modes and an Exposition of the Blinded Keyed Hashing Model 📺
We present two tweakable wide block cipher modes from doubly-extendable cryptographic keyed (deck) functions and a keyed hash function: double-decker and docked-double-decker. Double-decker is a direct generalization of Farfalle-WBC of Bertoni et al. (ToSC 2017(4)), and is a four-round Feistel network on two arbitrarily large branches, where the middle two rounds call deck functions and the first and last rounds call the keyed hash function. Docked-double-decker is a variant of double-decker where the bulk of the input to the deck functions is moved to the keyed hash functions. We prove that the distinguishing advantage of the resulting wide block ciphers is simply two times the sum of the pseudorandom function distinguishing advantage of the deck function and the blinded keyed hashing distinguishing advantage of the keyed hash functions. We demonstrate that blinded keyed hashing is more general than the conventional notion of XOR-universality, and that it allows us to instantiate our constructions with keyed hash functions that have a very strong claim on bkh security but not necessarily on XOR-universality, such as Xoofffie (ePrint 2018/767). The bounds of double-decker and docked-double-decker are moreover reduced tweak-dependent, informally meaning that collisions on the keyed hash function for different tweaks only have a limited impact. We describe two use cases that can exploit this property opportunistically to get stronger security than what would be achieved with prior solutions: SSD encryption, where each sector can only be written to a limited number of times, and incremental tweaks, where one includes the state of the system in the variable-length tweak and appends new data incrementally.
2020
TOSC
Efficient Side-Channel Secure Message Authentication with Better Bounds 📺
We investigate constructing message authentication schemes from symmetric cryptographic primitives, with the goal of achieving security when most intermediate values during tag computation and verification are leaked (i.e., mode-level leakage-resilience). Existing efficient proposals typically follow the plain Hash-then-MAC paradigm T = TGen_K(H(M)). When the domain of the MAC function TGen_K is {0,1}^{128}, e.g., when instantiated with the AES, forgery is possible within time 2^{64} and data complexity 1. To dismiss such cheap attacks, we propose two modes: LRW1-based Hash-then-MAC (LRWHM), which is built upon the LRW1 tweakable blockcipher of Liskov, Rivest, and Wagner, and Rekeying Hash-then-MAC (RHM), which employs internal rekeying. Built upon secure AES implementations, LRWHM is provably secure up to (beyond-birthday) 2^{78.3} time complexity, while RHM is provably secure up to 2^{121} time. Thus in practice, their main security threat is expected to be side-channel key recovery attacks against the AES implementations. Finally, we benchmark the performance of instances of our modes based on the AES and SHA3 and confirm their efficiency.
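The attack surface of plain Hash-then-MAC is easy to see in code: only a 128-bit digest enters the keyed part, so an offline 2^{64} collision search on H yields two messages with equal tags under any key. The stand-ins below (truncated SHA-256 for H, HMAC for TGen_K) are illustrative assumptions, not the paper's instantiations.

```python
import hashlib, hmac

def hash_then_mac(key: bytes, msg: bytes) -> bytes:
    """Plain Hash-then-MAC, T = TGen_K(H(M)), with a 128-bit hash domain
    as in the abstract's example. Any offline collision H(M) = H(M')
    (about 2^64 work for a 128-bit digest) makes the tags of M and M'
    collide for every key, i.e., a forgery with data complexity 1."""
    digest128 = hashlib.sha256(msg).digest()[:16]   # H into {0,1}^128
    return hmac.new(key, digest128, hashlib.sha256).digest()[:16]
```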
2020
TOSC
Iterative Block Ciphers from Tweakable Block Ciphers with Long Tweaks 📺
We consider the problem of constructing a secure block cipher from a tweakable block cipher (TBC) with long tweaks. Given a TBC with n-bit blocks and Γn-bit tweaks for Γ ≥ 1, one of the constructions by Minematsu in DCC 2015 shows that a simple iteration of the TBC for 3d rounds yields a block cipher with dn-bit blocks that is secure up to 2^{dn/2} queries, where d = Γ + 1. In this paper, we show three results.
1. Iteration of 3d − 2 rounds is enough for security up to 2^{dn/2} queries, i.e., the security remains the same even if we reduce the number of rounds by two.
2. When the number of queries is limited to 2^n, d + 1 rounds are enough, and with d + l rounds for 1 ≤ l ≤ d − 1, the security bound improves as l grows.
3. A d-round construction gives a block cipher secure up to 2^{n/2} queries, i.e., it achieves classical birthday-bound security.
Our results show that a block cipher with beyond-birthday-bound (BBB) security (with respect to n) is obtained with as few as d + 1 rounds, and we draw the security spectrum of the (d + l)-round version in the range 1 ≤ l ≤ d − 1 and l = 2d − 2 for BBB security, and l = 0 for birthday-bound security.
2020
TOSC
INT-RUP Secure Lightweight Parallel AE Modes 📺
Owing to the growing demand for lightweight cryptographic solutions, NIST has initiated a standardization process for lightweight cryptographic algorithms. Specific to authenticated encryption (AE), the NIST draft demands that the scheme should have one primary member that has a key length of 128 bits, and it should be secure for at least 2^{50} − 1 byte queries and 2^{112} computations. Popular (lightweight) modes, such as OCB, OTR, CLOC, SILC, JAMBU, COFB, SAEB, Beetle, SUNDAE etc., require at least 128-bit primitives to meet the NIST criteria, as all of them are just birthday-bound secure. Furthermore, most of them are sequential, and they either use a two-pass mode or do not offer any security when the adversary has access to unverified plaintext (the RUP model). In this paper, we propose two new designs for lightweight AE modes, called LOCUS and LOTUS, structurally similar to OCB and OTR, respectively. These modes achieve notably higher AE security bounds with lighter primitives (only a 64-bit tweakable block cipher). In particular, they satisfy the NIST requirements: secure as long as the data complexity is less than 2^{64} bytes and the time complexity is less than 2^{128}, even when instantiated with a primitive with a 64-bit block and a 128-bit key. Both these modes are fully parallelizable and provide full integrity security under the RUP model. We use TweGIFT-64[4,16,16,4] (also referred to as TweGIFT-64), a tweakable variant of the GIFT block cipher, to instantiate our AE modes. TweGIFT-64-LOCUS and TweGIFT-64-LOTUS are significantly light in hardware implementation. To justify this, we provide our FPGA-based implementation results, which demonstrate that TweGIFT-64-LOCUS consumes only 257 slices and 690 LUTs, while TweGIFT-64-LOTUS consumes only 255 slices and 664 LUTs.
2020
TOSC
Release of Unverified Plaintext: Tight Unified Model and Application to ANYDAE 📺
Authenticated encryption schemes are usually expected to offer confidentiality and authenticity. In case of release of unverified plaintext (RUP), an adversary gets separate access to the decryption and verification functionalities, and has more power in breaking the scheme. Andreeva et al. (ASIACRYPT 2014) formalized RUP security using plaintext awareness, informally meaning that the decryption functionality gives no extra power in breaking confidentiality, and INT-RUP security, covering authenticity in case of RUP. We describe a single, unified model, called AERUP security, that ties together these notions: we prove that an authenticated encryption scheme is AERUP secure if and only if it is conventionally secure, plaintext aware, and INT-RUP secure. We next present ANYDAE, a generalization of SUNDAE of Banik et al. (ToSC 2018/3). ANYDAE is a lightweight deterministic scheme that is based on a block cipher with block size n and arbitrary mixing functions that all operate on an n-bit state. It is particularly efficient for short messages, it does not rely on a nonce, and it provides maximal robustness to a lack of secure state. Whereas SUNDAE is not secure under release of unverified plaintext (a fairly simple attack can be mounted in constant time), ANYDAE is. We make handy use of the AERUP security model to prove that ANYDAE achieves both conventional security and RUP security, provided that certain modest conditions on the mixing functions are met. We describe two simple instances, called MONDAE and TUESDAE, that conform to these conditions and that are competitive with SUNDAE in terms of efficiency and optimality.
2020
TOSC
Lightweight Iterative MDS Matrices: How Small Can We Go? 📺
As perfect building blocks for the diffusion layers of many symmetric-key primitives, the construction of MDS matrices with lightweight circuits has received much attention from the symmetric-key community. One promising way of realizing low-cost MDS matrices is the iterative construction: a low-cost matrix becomes MDS after raising it to a certain power. To be more specific, if A^t is MDS, then one can implement A instead of A^t to achieve the MDS property, at the expense of an increased latency of t clock cycles. In this work, we identify the exact lower bound on the number of nonzero blocks for a 4 × 4 block matrix to be potentially iterative-MDS. Subsequently, we show that the theoretically lightest 4 × 4 iterative MDS block matrix (whose entries or blocks are 4 × 4 binary matrices) with minimal nonzero blocks costs at least 3 XOR gates, and a concrete example achieving the 3-XOR bound is provided. Moreover, we prove that there is no hope for previous constructions (GFS, LFS, DSI, and sparse DSI) to beat this bound. Since the circuit latency is another important factor, we also consider the lower bound on the number of iterations for certain iterative MDS matrices. Guided by these bounds and based on the ideas employed to identify them, we explore the design space of lightweight iterative MDS matrices with other dimensions and report improved results. Whenever we are unable to find better results, we try to determine the bound of the optimal solution. As a result, the optimality of some previous results is proved.
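The area/latency trade-off of the iterative construction can be demonstrated in a few lines: implement the sparse matrix A once and clock it t times, which matches applying the denser (hence costlier) A^t once. The 4 × 4 binary matrix below is a toy example and is not claimed to be (iterative-)MDS.

```python
import numpy as np

def apply_matrix(A: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Multiply a binary matrix by a bit vector over GF(2)."""
    return (A @ x) % 2

def iterate(A: np.ndarray, x: np.ndarray, t: int) -> np.ndarray:
    """Hardware view: one cheap circuit for A, clocked t times."""
    for _ in range(t):
        x = apply_matrix(A, x)
    return x

# Toy 4x4 binary example (illustrative only, not an MDS construction):
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 1, 0, 0]], dtype=int)   # sparse, low XOR count
x = np.array([1, 0, 1, 1], dtype=int)
t = 4
At = np.linalg.matrix_power(A, t) % 2     # the denser one-shot matrix
assert np.array_equal(iterate(A, x, t), apply_matrix(At, x))
```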
2020
TOSC
Improved Security Evaluation of SPN Block Ciphers and its Applications in the Single-key Attack on SKINNY 📺
In this paper, a new method for evaluating the integral property, truncated and impossible differentials for substitution-permutation network (SPN) block ciphers is proposed. The main assumption is an explicit description/expression of the internal state words in terms of the plaintext (ciphertext) words. By counting the number of times these words occur in the internal state expression, we can evaluate the resistance of a given block cipher to integral and impossible/truncated differential attacks more accurately than previous methods. More precisely, we explore the cryptographic consequences of uneven frequency of occurrences of plaintext (ciphertext) words appearing in the algebraic expression of the internal state words. This approach gives a new family of distinguishers employing different concepts such as the integral property, impossible/truncated differentials and the so-called zero-sum property. We then provide algorithms to determine the maximum number of rounds of such new types of distinguishers for SPN block ciphers. The potential and efficiency of this relatively simple method is confirmed through applications. For instance, in the case of SKINNY block cipher, several 10-round integral distinguishers, all of the 11-round impossible differentials, and a 7-round truncated differential could be determined. For the last case, using a single pair of plaintexts differing in three words so that (a = b = c) ≠ (a’ = b’ = c’), we are able to distinguish 7-round SKINNY from random permutations. More importantly, exploiting our distinguishers, we give the first practical attack on 11-round SKINNY-128-128 in the single-key setting (a theoretical attack reaches 16 rounds). Finally, using the same ideas, we provide a concise explanation on the existing distinguishers for round-reduced AES.
2020
TOSC
Cube-Based Cryptanalysis of Subterranean-SAE 📺
Subterranean 2.0, designed by Daemen, Massolino and Rotella, is a Round 2 candidate of the NIST Lightweight Cryptography Standardization process. In the official document of Subterranean 2.0, the designers have analyzed state collisions in unkeyed absorbing by reducing the number of rounds to absorb the message from 2 to 1. However, little cryptanalysis of the authenticated encryption scheme Subterranean-SAE has been published. For Subterranean-SAE, the designers introduce 8 blank rounds to separate the controllable input and output, and expect that 8 blank rounds achieve sufficient diffusion. Therefore, it is meaningful to investigate the security when the number of blank rounds is reduced. Moreover, the designers make no security claim but expect a non-trivial effort to achieve full-state recovery in a nonce-misuse scenario. In this paper, we present the first practical full-state recovery attack in a nonce-misuse scenario with data complexity of 2^13 32-bit blocks. In addition, in a nonce-respecting scenario and if the number of blank rounds is reduced to 4, we can mount a key-recovery attack with 2^122 calls to the internal permutation of Subterranean-SAE and 2^69.5 32-bit blocks. A distinguishing attack with 2^33 calls to the internal permutation of Subterranean-SAE and 2^33 32-bit blocks is achieved as well. Our cryptanalysis does not threaten the security claim for Subterranean-SAE, and we hope it can enhance the understanding of Subterranean-SAE.
2020
TOSC
Security of the Suffix Keyed Sponge 📺
We formalize and analyze the general suffix keyed sponge construction, a pseudorandom function built on top of a cryptographic permutation. The construction hashes its data using the (keyless) sponge construction, transforms part of the state using the secret key, and generates the tag from the output of a final permutation call. In its simplest form, if the key and tag size are at most the rate of the sponge, one can see the suffix keyed sponge as a simple sponge function evaluation whose input is the plaintext appended with the key. The suffix keyed sponge is, however, much more general: the key and tag size may exceed the rate without any need to make extra permutation calls. We prove that the suffix keyed sponge construction achieves birthday-bound PRF security in the capacity, even if key and tag size exceed the rate. Furthermore, we prove that if the absorption of the key into the state happens in a leakage resilient manner, the suffix keyed sponge itself is leakage resilient as well. Our findings show that the suffix keyed sponge compares favorably with the hash-then-MAC construction. For instance, to reach a security level of k bits, the side-channel protected component in the suffix keyed sponge just needs to process k bits of input besides the key, whereas schemes following the hash-then-MAC construction need a side-channel protected MAC function that processes 2k bits of input besides the key. Moreover, even if we just consider black-box attacks, the MAC function in a hash-then-MAC scheme needs to be cryptographically strong whereas in the suffix keyed sponge the key may be absorbed by a simple XOR. The security proofs are performed using the H-coefficient technique, and make effective use of the multicollision limit function results of Daemen et al. (ASIACRYPT 2017), both for arguing that state manipulation larger than the rate is tolerated after key processing and for upper bounding the amount of leakage an attacker may gain about the secret key.
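A minimal sketch of the construction's data flow, assuming a toy 128-bit ARX permutation and a one-word rate (all names and parameters are ours; real instantiations use a cryptographic permutation and proper padding):

```python
MASK = 0xFFFFFFFF

def perm(state):
    """Toy ARX permutation on four 32-bit words (illustration only)."""
    a, b, c, d = state
    for r in range(12):
        a = (a + b) & MASK; d ^= a; d = ((d << 16) | (d >> 16)) & MASK
        c = (c + d) & MASK; b ^= c; b = ((b << 12) | (b >> 20)) & MASK
        a ^= r  # break symmetry between rounds
    return [a, b, c, d]

RATE = 1  # one 32-bit word of rate; three words of capacity

def suffix_keyed_sponge(key, msg_words, tag_words=1):
    state = [0, 0, 0, 0]
    for w in msg_words:          # keyless sponge absorption (padding omitted)
        state[0] ^= w
        state = perm(state)
    state[0] ^= key & MASK       # the key transforms part of the state
    state = perm(state)          # final permutation call
    return state[:tag_words]     # tag taken from the output

print(hex(suffix_keyed_sponge(0xDEADBEEF, [1, 2, 3])[0]))
```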
2020
TOSC
Vectorized linear approximations for attacks on SNOW 3G 📺
SNOW 3G is a stream cipher designed in 2006 by ETSI/SAGE, serving in 3GPP as one of the standard algorithms for data confidentiality and integrity protection. It is also included in the 4G LTE standard. In this paper we derive vectorized linear approximations of the finite state machine in SNOW 3G. In particular, we show one 24-bit approximation with a bias around 2^-37 and one byte-oriented approximation with a bias around 2^-40. We then use the approximations to launch attacks on SNOW 3G. The first approximation is used in a distinguishing attack with an expected complexity of 2^172, and the second one can be used in a standard fast correlation attack resulting in key recovery with an expected complexity of 2^177. If the key length in SNOW 3G were increased to 256 bits, these results show that there would be academic attacks on such a version faster than exhaustive key search.
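For intuition, the correlation (and hence bias) of a single-mask linear approximation can be computed exhaustively over a small component; vectorized approximations generalize this to tuples of masks. A toy example over a random 8-bit S-box (parameters ours):

```python
import random

random.seed(0)
S = list(range(256))
random.shuffle(S)                 # toy 8-bit S-box

def parity(x):
    return bin(x).count("1") & 1

def correlation(u, v):
    """cor(u, v) = 2^-8 * sum_x (-1)^(u.x XOR v.S(x)); the bias is cor/2."""
    acc = sum(1 if parity((u & x) ^ (v & S[x])) == 0 else -1
              for x in range(256))
    return acc / 256

# scan all nonzero output masks for one fixed input mask
best = max(((u, v, correlation(u, v)) for u in [0x1B] for v in range(1, 256)),
           key=lambda t: abs(t[2]))
print(best)
```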
2020
TOSC
Weak Keys in the Rekeying Paradigm: Application to COMET and mixFeed 📺
In this paper, we study a group of AEAD schemes that use rekeying as a technique to increase efficiency by reducing the state size of the algorithm. We provide a unified model to study the behavior of the keys used in these schemes, called Rekey-and-Chain (RaC). This model helps understand the design of several AEAD schemes. We show generic attacks on these schemes based on the existence of certain types of weak keys. We also show that the borderline between multi-key and single-key analyses of these schemes is not solid, and the analysis can be performed independently of the master key, sometimes leading to practical attacks in the multi-key setting. More importantly, the multi-key analysis can be applied in the single-key setting, since each message is encrypted with a different key. Consequently, we show gaps in the security analysis of COMET and mixFeed in the single-key setting, which led the designers to provide overly optimistic security claims. In the case of COMET, full key recovery can be performed with 2^64 online queries and 2^64 offline queries in the single-key setting, or 2^46 online queries per user and 2^64 offline queries in the multi-key setting with ∼ 0.5 million users. In the case of mixFeed, we improve the adversarial forgery advantage in the single-key setting by a factor of 2^67 compared to what the designers claim. More importantly, our result is just a lower bound on this advantage, since we show that the gap in the analysis of mixFeed depends on properties of the AES key schedule that are not well understood and require more cryptanalytic effort to find a tighter advantage. After we reported these findings, the designers updated their security analyses and accommodated the proposed attacks.
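A stripped-down model of a rekey-and-chain key sequence, with SHA-256 standing in for the block-cipher calls (labels and derivation are hypothetical, not the exact key schedules of COMET or mixFeed); weak keys then correspond to collisions or short cycles in this chain:

```python
import hashlib

def prf(key: bytes, label: bytes) -> bytes:
    """Toy stand-in for the block-cipher call used to derive keys."""
    return hashlib.sha256(key + label).digest()[:16]

def rekey_and_chain(master: bytes, n: int):
    """Message key i comes from chain key i; the chain then advances.
    Weak keys correspond to collisions or short cycles in this walk."""
    k, msg_keys = master, []
    for _ in range(n):
        msg_keys.append(prf(k, b"msg"))
        k = prf(k, b"chain")
    return msg_keys

print(rekey_and_chain(b"\x00" * 16, 3)[0].hex())
```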
2020
TOSC
Comprehensive security analysis of CRAFT 📺
CRAFT is a lightweight tweakable block cipher designed to provide efficient protection against differential fault attacks. It has 32 rounds and produces a ciphertext from a 64-bit plaintext using a 128-bit key and a 64-bit public tweak. In this paper, compared to the designers’ analysis, we provide a more detailed analysis of CRAFT against differential and zero-correlation cryptanalysis, aiming to provide better distinguishers for the reduced rounds of the cipher. Our distinguishers for reduced-round CRAFT cover more rounds than the designers’ analysis. In our analysis, we observed that, for any number of rounds, the differential effect of CRAFT has a much higher probability than any single differential trail. As an example, while the best trail for 11 rounds of the cipher has a probability of at least 2^-80, we present a differential with probability 2^-49.79, containing 2^29.66 optimal trails, all with the same optimum probability of 2^-80. Next, we use a partitioning technique based on optimal expandable truncated trails to provide a better estimate of the differential effect in CRAFT. Thanks to this technique, we are able to find differential distinguishers for 9, 10, 11, 12, 13, and 14 rounds of the cipher in the single-tweak model with probabilities of at least 2^-40.20, 2^-45.12, 2^-49.79, 2^-54.49, 2^-59.13, and 2^-63.80, respectively. These probabilities should be compared with the best distinguishers provided by the designers in the same model for 9 and 10 rounds of the cipher, with probabilities of at least 2^-54.67 and 2^-62.61, respectively. In addition, we consider the security of CRAFT against the new concept of related-tweak zero-correlation (ZC) linear cryptanalysis and present a new distinguisher covering 14 rounds of the cipher, while the best previous ZC distinguisher covered 13 rounds. Thanks to the related-tweak ZC distinguisher for 14 rounds of the cipher, we also present 14-round integral distinguishers in the related-tweak mode of the cipher. Although the provided analysis does not compromise the cipher, we think it gives better insight into the design of CRAFT.
2020
TOSC
Improved Meet-in-the-Middle Preimage Attacks against AES Hashing Modes 📺
Hashing modes are ways to convert a block cipher into a hash function, and those with AES as the underlying block cipher are referred to as AES hashing modes. In 2011, Sasaki introduced the first preimage attack against AES hashing modes with the AES block cipher reduced to 7 rounds, using the meet-in-the-middle method. In his attack, the key schedules are not taken into account; hence, the same attack applies to all three versions of AES. In this paper, by introducing neutral bits from the key, an extra degree of freedom is gained, which is utilized in two ways, i.e., to reduce the time complexity and to extend the attack to more rounds. As an immediate result, the complexities of 7-round pseudo-preimage attacks are reduced from 2^120 to 2^104, 2^96, and 2^96 for AES-128, AES-192, and AES-256, respectively. By carefully choosing the neutral bits from the key to cancel those from the state, the attack is extended to 8 rounds for AES-192 and AES-256 with complexities 2^112 and 2^96, respectively. Similar results are obtained for Kiasu-BC, a tweakable block cipher based on AES-128; interestingly, the additional input tweak helps reduce the complexity and extend the attack to one more round. To the best of our knowledge, these are the first preimage attacks against 8-round AES hashing modes.
2020
TOSC
Analyzing the Linear Keystream Biases in AEGIS 📺
AEGIS is one of the authenticated encryption designs selected for the final portfolio of the CAESAR competition. It combines the AES round function and simple Boolean operations to update its large state and extract a keystream to achieve an excellent software performance. In 2014, Minaud discovered slight biases in the keystream based on linear characteristics. For family member AEGIS-256, these could be exploited to undermine the confidentiality faster than generic attacks, but this still requires very large amounts of data. For final portfolio member AEGIS-128, these attacks are currently less efficient than generic attacks. We propose improved keystream approximations for the AEGIS family, but also prove upper bounds below 2^-128 for the squared correlation contribution of any single suitable linear characteristic.
2020
TOSC
Multiple Linear Cryptanalysis Using Linear Statistics 📺
We propose an improved and extended approach to the multiple linear cryptanalysis presented by A. Biryukov et al. at CRYPTO 2004 that exploits dominant and statistically independent linear trails. While they presented only rank-based attacks with success probability 1, we present threshold-based attacks as well as rank-based ones, using a newly introduced statistic that is a linear combination of the component statistics for the trails and approximates the LLR statistic. The rank-based Algorithm 1 style attack yields the same estimate for the gain as Biryukov et al.’s Algorithm 1 style attack. For each of the threshold-based Algorithm 1 style and Algorithm 2 style attacks, we provide a formula for its advantage in terms of the correlations of the trails, the data complexity, and the success probability, for the case where the target success probability is less than 1. Combining the threshold-based attacks with the rank-based ones, we get attacks each of which has a better estimate for the advantage than the threshold-based one when the target success probability is close to 1. We then extend the methods to a new framework of multiple linear attacks exploiting close-to-dominant linear trails that may not be statistically independent. We apply the methods to full DES and get linear attacks using 4 linear trails with about the same or better complexity compared to those presented at ASIACRYPT 2017 that use 4 additional trails. With data complexity less than 2^41, the attack has better complexity than existing attacks on DES.
2020
TOSC
New Techniques for Searching Differential Trails in Keccak 📺
Keccak-f is the permutation used in the NIST SHA-3 hash function standard. Inspired by the previous exhaustive differential trail search methods of Mella et al. at ToSC 2017, we introduce in this paper new algorithms to cover 3-round trail cores with propagation weight up to 53, up from the previous best weight 45. To achieve this goal, the concept of an ideal improvement assumption is proposed to construct theoretical representatives of subspaces, so as to efficiently cover the search space of 3-round trail cores with at least one out-kernel α state. Of particular note is that the exhaustiveness of the 3-round trail core search with at least one out-kernel α state is only experimentally verified. With the knowledge of all 3-round trail cores of weight up to 53, lower bounds on 4/5/6-round trails are tightened to 56/58/108, from the previous 48/50/92, respectively.
2020
TOSC
Improving the MILP-based Security Evaluation Algorithm against Differential/Linear Cryptanalysis Using A Divide-and-Conquer Approach 📺
In recent years, Mixed Integer Linear Programming (MILP) has been widely used in cryptanalysis of symmetric-key primitives. For differential and linear cryptanalysis, MILP can be used to solve two kinds of problems: calculation of the minimum number of differentially/linearly active S-boxes, and search for the best differential/linear characteristics. There are already numerous papers published in this area. However, the efficiency is still unsatisfactory for many symmetric-key primitives. In this paper, we greatly improve the efficiency of the MILP-based search algorithm for both problems. Each of the two problems for an r-round cipher can be converted to an MILP model whose feasible region is the set of all possible r-round differential/linear characteristics. Generally, high-probability differential/linear characteristics are likely to have a low number of active S-boxes at a certain round. Inspired by the idea of a divide-and-conquer approach, we divide the set of all possible differential/linear characteristics into several smaller subsets, then search them separately. That is to say, the search of the whole set is split into easier searches of smaller subsets, and optimal solutions within the smaller subsets are combined to give the optimal solution within the whole set. In addition, we use several techniques to further improve the efficiency of the search algorithm. As applications, we apply our search algorithm to five lightweight block ciphers: PRESENT, GIFT-64, RECTANGLE, LBLOCK and TWINE. For each cipher, we obtain better results than the best-known ones obtained with the MILP method. For the minimum number of differentially/linearly active S-boxes, we reach 31/31, 16/15, 16/16, 20/20 and 20/20 rounds for the five ciphers, respectively. For the best differential/linear characteristics, we reach 18/18, 15/13, 15/14, 16/15 and 15/16 rounds for the five ciphers, respectively.
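As a point of reference for what such MILP models look like, here is a minimal PuLP sketch of the standard word-level model counting active S-boxes across a chain of rounds linked by an MDS layer of branch number 5. The divide-and-conquer splitting described above would add constraints fixing the activity count at a chosen round and solve each case separately. The model and its parameters are illustrative, not any of the five ciphers; it requires the `pulp` package.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

R = 4  # rounds; 4-word state, MDS linear layer of branch number 5
prob = LpProblem("min_active_sboxes", LpMinimize)
a = [[LpVariable(f"a_{r}_{i}", cat=LpBinary) for i in range(4)]
     for r in range(R)]
d = [LpVariable(f"d_{r}", cat=LpBinary) for r in range(R - 1)]

prob += lpSum(a[r][i] for r in range(R) for i in range(4))  # objective
prob += lpSum(a[0]) >= 1                                    # non-trivial trail
for r in range(R - 1):
    # branch-number constraint: if active, at least 5 active in+out words
    prob += lpSum(a[r]) + lpSum(a[r + 1]) >= 5 * d[r]
    for i in range(4):
        prob += a[r][i] <= d[r]
        prob += a[r + 1][i] <= d[r]

# Divide-and-conquer idea: fix the activity count at one round and solve
# each case separately, e.g.  prob += lpSum(a[1]) == k  for each k.
prob.solve()
print(int(prob.objective.value()))  # 10 for this toy chain (1-4-1-4 pattern)
```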
2019
CHES
RISCV and Security: how, when and why? 📺
Invited talk
In this talk we will provide an overview of the current activities of the RISCV Foundation, including the creation of a Security Standing Committee about a year ago which is in charge of assessing new threats and opportunities in security in the RISCV world; we will discuss progress being made by the security-related task groups. The first one is working on specifying extensions of the base instruction set architecture (ISA) that will enable high-performance and high security cryptographic operations (AES, SHA-2, Public Key Cryptography); the second one is looking at creating extensions and hardware/software specifications to enable a trusted execution environment built on top of a RISCV processor; we will also provide details on the activities of the Security Standing Committee itself, and what some of the plans are to tackle the newest microarchitectural cache timing side-channel attacks such as Spectre, Meltdown, Foreshadow, etc. We will review some additional work on secure RISCV and existing security extension initiatives by academia around the world. Finally, we will describe some approaches of how a side-channel and DPA-resistant RISCV CPU could be built and elaborate on the research we have been focused on in the past months.
2019
CHES
Developing High-Performance Mechanically-Verified Cryptographic Code 📺
Invited talk
Project Everest is constructing a high-performance, standards-compliant, formally verified implementation of the HTTPS ecosystem, including TLS, X.509, and the core cryptographic algorithms. This talk will present an overview of how we verify our implementations are correct, cryptographically secure, and resilient to basic side channels. We will focus on our EverCrypt cryptographic provider, a comprehensive collection of verified, high-performance cryptographic functionalities available via a carefully designed API. The API provably supports agility (choosing between multiple algorithms for the same functionality) and multiplexing (choosing between multiple implementations of the same algorithm). Through a combination of abstraction and zero-cost generic programming, we show how agility can simplify verification without sacrificing performance, and we demonstrate how C and assembly can be composed and verified against shared specifications. The result is several functionalities whose performance matches or exceeds the best unverified implementations. Altogether, EverCrypt consists of over 100K verified lines of specs, code, and proofs, and it produces over 45K lines of C and assembly code.
2019
CRYPTO
Cryptanalysis of OCB2: Attacks on Authenticity and Confidentiality 📺
Best paper
We present practical attacks on OCB2. This mode of operation of a blockcipher was designed with the aim to provide particularly efficient and provably-secure authenticated encryption services, and since its proposal about 15 years ago it belongs to the top performers in this realm. OCB2 was included in an ISO standard in 2009. An internal building block of OCB2 is the tweakable blockcipher obtained by operating a regular blockcipher in $$\text{XEX}^*$$ mode. The latter provides security only when evaluated in accordance with certain technical restrictions that, as we note, are not always respected by OCB2. This leads to devastating attacks against OCB2’s security promises: We develop a range of very practical attacks that, amongst others, demonstrate universal forgeries and full plaintext recovery. We complete our report with proposals for (provably) repairing OCB2. To our understanding, as a direct consequence of our findings, OCB2 is currently in a process of removal from ISO standards. Our attacks do not apply to OCB1 and OCB3, and our privacy attacks on OCB2 require an active adversary.
2019
CRYPTO
Quantum Cryptanalysis in the RAM Model: Claw-Finding Attacks on SIKE 📺
Best Young Researcher Paper
We introduce models of computation that enable direct comparisons between classical and quantum algorithms. Incorporating previous work on quantum computation and error correction, we justify the use of the gate-count and depth-times-width cost metrics for quantum circuits. We demonstrate the relevance of these models to cryptanalysis by revisiting, and increasing, the security estimates for the Supersingular Isogeny Diffie–Hellman (SIDH) and Supersingular Isogeny Key Encapsulation (SIKE) schemes. Our models, analyses, and physical justifications have applications to a number of memory intensive quantum algorithms.
2019
CRYPTO
Fully Secure Attribute-Based Encryption for t-CNF from LWE 📺
Best young researcher
Attribute-based Encryption (ABE), first introduced by [SW05, GPSW06], is a public key encryption system that can support multiple users with varying decryption permissions. One of the main properties of such schemes is the supported function class of policies. While there are fully secure constructions from bilinear maps for a fairly large class of policies, the situation with lattice-based constructions is less satisfactory and many efforts were made to close this gap. Prior to this work, the only known fully secure lattice construction was for the class of point functions (also known as IBE). In this work we construct for the first time a lattice-based (ciphertext-policy) ABE scheme for the function class t-CNF, which consists of CNF formulas where each clause depends on at most t bits of the input, for any constant t. This class includes NP-verification policies, bit-fixing policies and t-threshold policies. Towards this goal we also construct a fully secure single-key constrained PRF from OWF for the same function class, which might be of independent interest.
2019
CRYPTO
Noninteractive Zero Knowledge for NP from (Plain) Learning with Errors 📺
We finally close the long-standing problem of constructing a noninteractive zero-knowledge (NIZK) proof system for any NP language with security based on the plain Learning With Errors (LWE) problem, and thereby on worst-case lattice problems. Our proof system instantiates the framework recently developed by Canetti et al. [EUROCRYPT’18], Holmgren and Lombardi [FOCS’18], and Canetti et al. [STOC’19] for soundly applying the Fiat–Shamir transform using a hash function family that is correlation intractable for a suitable class of relations. Previously, such hash families were based either on “exotic” assumptions (e.g., indistinguishability obfuscation or optimal hardness of certain LWE variants) or, more recently, on the existence of circularly secure fully homomorphic encryption (FHE). However, none of these assumptions are known to be implied by plain LWE or worst-case hardness. Our main technical contribution is a hash family that is correlation intractable for arbitrary size-S circuits, for any polynomially bounded S, based on plain LWE (with small polynomial approximation factors). The construction combines two novel ingredients: a correlation-intractable hash family for log-depth circuits based on LWE (or even the potentially harder Short Integer Solution problem), and a “bootstrapping” transform that uses (leveled) FHE to promote correlation intractability for the FHE decryption circuit to arbitrary (bounded) circuits. Our construction can be instantiated in two possible “modes,” yielding a NIZK that is either computationally sound and statistically zero knowledge in the common random string model, or vice-versa in the common reference string model.
2019
CRYPTO
Lattice-Based Zero-Knowledge Proofs: New Techniques for Shorter and Faster Constructions and Applications 📺
We devise new techniques for design and analysis of efficient lattice-based zero-knowledge proofs (ZKP). First, we introduce one-shot proof techniques for non-linear polynomial relations of degree $$k\ge 2$$, where the protocol achieves a negligible soundness error in a single execution, and thus performs significantly better in both computation and communication compared to prior protocols requiring multiple repetitions. Such proofs with degree $$k\ge 2$$ have been crucial ingredients for important privacy-preserving protocols in the discrete logarithm setting, such as Bulletproofs (IEEE S&P ’18) and arithmetic circuit arguments (EUROCRYPT ’16). In contrast, one-shot proofs in lattice-based cryptography have previously only been shown for the linear case ($$k=1$$) and a very specific quadratic case ($$k=2$$), which are obtained as a special case of our technique. Moreover, we introduce two speedup techniques for lattice-based ZKPs: a CRT-packing technique supporting “inter-slot” operations, and “NTT-friendly” tools that permit the use of fully-splitting rings. The former technique comes at almost no cost to the proof length, and the latter one barely increases it, which can be compensated for by tweaking the rejection sampling parameters while still having faster computation overall. To illustrate the utility of our techniques, we show how to use them to build efficient relaxed proofs for important relations, namely proof of commitment to bits, one-out-of-many proof, range proof and set membership proof. Despite their relaxed nature, we further show how our proof systems can be used as building blocks for advanced cryptographic tools such as ring signatures. Our ring signature achieves a dramatic improvement in length over all the existing proposals from lattices at the same security level. The computational evaluation also shows that our construction is highly likely to outperform all the relevant works in running times. Being efficient in both aspects, our ring signature is particularly suitable for both small-scale and large-scale applications such as cryptocurrencies and e-voting systems. No trusted setup is required for any of our proposals.
2019
CRYPTO
Efficient Lattice-Based Zero-Knowledge Arguments with Standard Soundness: Construction and Applications 📺
We provide new zero-knowledge argument of knowledge systems that work directly for a wide class of languages, namely, ones involving the satisfiability of matrix-vector relations and integer relations commonly found in constructions of lattice-based cryptography. Prior to this work, practical arguments for lattice-based relations either have a constant soundness error $$(2/3)$$, or consider a weaker form of soundness, namely, extraction only guarantees that the prover is in possession of a witness that “approximates” the actual witness. Our systems do not suffer from these limitations. The core of our new argument systems is an efficient zero-knowledge argument of knowledge of a solution to a system of linear equations, where variables of this solution satisfy a set of quadratic constraints. This argument enjoys standard soundness, a small soundness error $$(1/poly)$$, and a complexity linear in the size of the solution. Using our core argument system, we construct highly efficient argument systems for a variety of statements relevant to lattices, including linear equations with short solutions and matrix-vector relations with hidden matrices. Based on our argument systems, we present several new constructions of common privacy-preserving primitives in the standard lattice setting, including a group signature, a ring signature, an electronic cash system, and a range proof protocol. Our new constructions are one to three orders of magnitude more efficient than the state of the art (in the standard lattice setting). This illustrates the efficiency and expressiveness of our argument system.
2019
CRYPTO
Algebraic Techniques for Short(er) Exact Lattice-Based Zero-Knowledge Proofs 📺
A key component of many lattice-based protocols is a zero-knowledge proof of knowledge of a vector $$\vec{s}$$ with small coefficients satisfying $$A\vec{s}=\vec{u}\bmod q$$. While there exist fairly efficient proofs for a relaxed version of this equation which prove the knowledge of $$\vec{s}'$$ and c satisfying $$A\vec{s}'=\vec{u}c$$ where $$\Vert \vec{s}'\Vert \gg \Vert \vec{s}\Vert$$ and c is some small element in the ring over which the proof is performed, the proofs for the exact version of the equation are considerably less practical. The best such proof technique is an adaptation of Stern’s protocol (Crypto ’93), for proving knowledge of nearby codewords, to larger moduli. The scheme is a $$\varSigma$$-protocol, each of whose iterations has soundness error $$2/3$$, and thus requires over 200 repetitions to obtain soundness error of $$2^{-128}$$, which is the main culprit behind the large size of the proofs produced. In this paper, we propose the first lattice-based proof system that significantly outperforms Stern-type proofs for proving knowledge of a short $$\vec{s}$$ satisfying $$A\vec{s}=\vec{u}\bmod q$$. Unlike Stern’s proof, which is combinatorial in nature, our proof is more algebraic and uses various relaxed zero-knowledge proofs as sub-routines. The main savings in our proof system comes from the fact that each round has soundness error of $$1/n$$, where n is the number of columns of A. For typical applications, n is a few thousand, and therefore our proof needs to be repeated around 10 times to achieve a soundness error of $$2^{-128}$$. For concrete parameters, it produces proofs that are around an order of magnitude smaller than those produced using Stern’s approach.
2019
CRYPTO
Seedless Fruit Is the Sweetest: Random Number Generation, Revisited 📺
The need for high-quality randomness in cryptography makes random-number generation one of its most fundamental tasks. A recent important line of work (initiated by Dodis et al., CCS ’13) focuses on the notion of robustness for pseudorandom number generators (PRNGs) with inputs. These are primitives that use various sources to accumulate sufficient entropy into a state, from which pseudorandom bits are extracted. Robustness ensures that PRNGs remain secure even under state compromise and adversarial control of entropy sources. However, the achievability of robustness inherently depends on a seed, or, alternatively, on an ideal primitive (e.g., a random oracle), independent of the source of entropy. Both assumptions are problematic: seed generation requires randomness to start with, and it is arguable whether the seed or the ideal primitive can be kept independent of the source. This paper resolves this dilemma by putting forward new notions of robustness which enable both (1) seedless PRNGs and (2) primitive-dependent adversarial sources of entropy. To bypass obvious impossibility results, we make a realistic compromise by requiring that the source produce sufficient entropy even given its evaluations of the underlying primitive. We also provide natural, practical, and provably secure constructions based on hash-function designs from compression functions, block ciphers, and permutations. Our constructions can be instantiated with minimal changes to industry-standard hash functions SHA-2 and SHA-3, or key derivation function HKDF, and can be downgraded to (online) seedless randomness extractors, which are of independent interest. On the way we consider both a computational variant of robustness, where attackers only make a bounded number of queries to the ideal primitive, as well as a new information-theoretic variant, which dispenses with this assumption to a certain extent, at the price of requiring a high rate of injected weak randomness (as it is, e.g., plausible on Intel’s on-chip RNG). The latter notion enables applications such as everlasting security. Finally, we show that the CBC extractor, used by Intel’s on-chip RNG, is provably insecure in our model.
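The accumulate-then-extract interface of a PRNG with input can be sketched in a few lines with SHA-256. This is a naive illustration of the interface only, not the paper's provably robust constructions:

```python
import hashlib

class SketchPRNG:
    """Accumulate-then-extract interface of a PRNG with input.
    Naive SHA-256 chaining for illustration; not the paper's construction."""
    def __init__(self):
        self.state = b"\x00" * 32
    def refresh(self, entropy: bytes):
        # accumulate entropy from a (possibly adversarial) source
        self.state = hashlib.sha256(b"in" + self.state + entropy).digest()
    def next(self, n: int = 32) -> bytes:
        out = hashlib.sha256(b"out" + self.state).digest()
        # advance the state so past outputs stay safe after compromise
        self.state = hashlib.sha256(b"upd" + self.state).digest()
        return out[:n]

rng = SketchPRNG()
rng.refresh(b"some entropy from a source")
print(rng.next(16).hex())
```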
2019
CRYPTO
Nonces Are Noticed: AEAD Revisited 📺
We draw attention to a gap between theory and usage of nonce-based symmetric encryption, under which the way the former treats nonces can result in violation of privacy in the latter. We bridge the gap with a new treatment of nonce-based symmetric encryption that modifies the syntax (decryption no longer takes a nonce), upgrades the security goal (asking that not just messages, but also nonces, be hidden) and gives simple, efficient schemes conforming to the new definitions. We investigate both basic security (holding when nonces are not reused) and advanced security (misuse resistance, providing best-possible guarantees when nonces are reused).
2019
CRYPTO
How to Build Pseudorandom Functions from Public Random Permutations 📺
Pseudorandom functions are traditionally built upon block ciphers, but with the trend of permutation based cryptography, it is a natural question to investigate the design of pseudorandom functions from random permutations. We present a generic study of how to build beyond birthday bound secure pseudorandom functions from public random permutations. We first show that a pseudorandom function based on a single permutation call cannot be secure beyond the $$2^{n/2}$$ birthday bound, where n is the state size of the function. We next consider the Sum of Even-Mansour (SoEM) construction, that instantiates the sum of permutations with the Even-Mansour construction. We prove that SoEM achieves tight $$2n/3$$-bit security if it is constructed from two independent permutations and two randomly drawn keys. We also demonstrate a birthday bound attack if either the permutations or the keys are identical. Finally, we present the Sum of Key Alternating Ciphers (SoKAC) construction, a translation of Encrypted Davies-Meyer Dual to a public permutation based setting, and show that SoKAC achieves tight $$2n/3$$-bit security even when a single key is used.
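A toy instantiation of the SoEM shape, with two independent 16-bit random permutations and two keys each folded in single-key Even-Mansour fashion (parameters are illustrative; see the paper for the exact variants and their proofs):

```python
import random

n = 16                                     # toy block size in bits
rng = random.Random(2019)
P1 = list(range(1 << n)); rng.shuffle(P1)  # public random permutations
P2 = list(range(1 << n)); rng.shuffle(P2)

def soem(x, k1, k2):
    """Sum of two Even-Mansour ciphers; independent P1, P2 and k1, k2 are
    exactly what the 2n/3-bit security bound requires."""
    return (P1[x ^ k1] ^ k1) ^ (P2[x ^ k2] ^ k2)

print(hex(soem(0x1234, 0xA5A5, 0x5A5A)))
```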
2019
CRYPTO
New Results on Modular Inversion Hidden Number Problem and Inversive Congruential Generator 📺
The Modular Inversion Hidden Number Problem (MIHNP), introduced by Boneh, Halevi and Howgrave-Graham at Asiacrypt 2001, is briefly described as follows: Let $${\mathrm{MSB}}_{\delta}(z)$$ refer to the $$\delta$$ most significant bits of z. Given many samples $$\left(t_{i}, {\mathrm{MSB}}_{\delta}((\alpha + t_{i})^{-1} \bmod p)\right)$$ for random $$t_i \in \mathbb{Z}_p$$, the goal is to recover the hidden number $$\alpha \in \mathbb{Z}_p$$. MIHNP is an important class of Hidden Number Problem. In this paper, we revisit the Coppersmith technique for solving the class of modular polynomial equations derived from the problem of recovering the hidden number $$\alpha$$ in MIHNP. For any positive integer constant d, let integer $$n=d^{3+o(1)}$$. Given a sufficiently large modulus p and $$n+1$$ samples of MIHNP, we present a heuristic algorithm to recover the hidden number $$\alpha$$ with probability close to 1 when $$\delta/\log_2 p>\frac{1}{d+1}+o(\frac{1}{d})$$. The overall time complexity of the attack is polynomial in $$\log_2 p$$, where the complexity of the LLL algorithm grows as $$d^{\mathcal{O}(d)}$$ and the complexity of the Gröbner basis computation grows as $$(2d)^{\mathcal{O}(n^2)}$$. When $$d>2$$, this asymptotic bound outperforms $$\delta/\log_2 p>\frac{1}{3}$$, which is the asymptotic bound proposed by Boneh, Halevi and Howgrave-Graham at Asiacrypt 2001. It is the first time that a better bound for solving MIHNP is given, which implies that the conjecture that MIHNP is hard whenever $$\delta/\log_2 p<\frac{1}{3}$$ is broken. Moreover, we also obtain the best result to date for attacking the Inversive Congruential Generator (ICG).
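Generating MIHNP samples is straightforward; the hard part, which the paper addresses with Coppersmith's technique, is recovering α from them. A toy sampler (parameters are illustrative):

```python
import random

p = (1 << 127) - 1        # Mersenne prime as a toy modulus
delta = 24                # number of most significant bits revealed
alpha = random.randrange(1, p)

def msb(z: int) -> int:
    return z >> (p.bit_length() - delta)

samples = []
while len(samples) < 8:
    t = random.randrange(p)
    if (alpha + t) % p == 0:
        continue          # inverse undefined for this t
    samples.append((t, msb(pow(alpha + t, -1, p))))  # Python 3.8+ inverse
print(samples[0])
```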
2019
CRYPTO
On the Shortness of Vectors to Be Found by the Ideal-SVP Quantum Algorithm 📺
The hardness of finding short vectors in ideals of cyclotomic number fields (hereafter, Ideal-SVP) can serve as a worst-case assumption for numerous efficient cryptosystems, via the average-case problems Ring-SIS and Ring-LWE. For a while, it could be assumed that the Ideal-SVP problem was as hard as the analogous problem for general lattices (SVP), even when considering quantum algorithms. But in the last few years, a series of works has led to a quantum algorithm for Ideal-SVP that outperforms what can be done for general SVP in certain regimes. More precisely, it was demonstrated (under certain hypotheses) that one can find in quantum polynomial time a vector longer by a factor at most $$\alpha = \exp({\widetilde{O}(n^{1/2})})$$ than the shortest non-zero vector in a cyclotomic ideal lattice, where n is the dimension. In this work, we explore the constants hidden behind this asymptotic claim. While these algorithms have quantum steps, the steps that impact the approximation factor $$\alpha$$ are entirely classical, which allows us to estimate it experimentally using only classical computing. Moreover, we design heuristic improvements for those steps that significantly decrease the hidden factors in practice. Finally, we derive new provable effective lower bounds based on volumetric arguments. This study allows us to predict the crossover point with classical lattice reduction algorithms, and thereby determine the relevance of this quantum algorithm in any cryptanalytic context. For example, we predict that this quantum algorithm provides shorter vectors than BKZ-300 (roughly the weakest security level of NIST lattice-based candidates) for cyclotomic rings of rank larger than about 24000.
2019
CRYPTO
Proofs of Replicated Storage Without Timing Assumptions 📺
In this paper we provide a formal treatment of proofs of replicated storage, a novel cryptographic primitive recently proposed in the context of the cryptocurrency Filecoin. In a nutshell, a proof of replicated storage is a solution to the following problem: A user stores a file m on n different servers to ensure that the file will be available even if some of the servers fail. Using proofs of retrievability, the user could check that every server is indeed storing the file. However, what if the servers collude and, in order to save on resources, decide to only store one copy of the file? A proof of replicated storage guarantees that, unless the (potentially colluding) servers are indeed reserving the space necessary to store n copies of the file, the user will not accept the proofs. While some candidate proofs of replicated storage have already been proposed, their soundness relies on timing assumptions, i.e., the user must reject the proof if the prover does not reply within a certain time bound. In this paper we provide the first construction of a proof of replication which does not rely on any timing assumptions.
2019
CRYPTO
Simple Proofs of Space-Time and Rational Proofs of Storage 📺
We introduce a new cryptographic primitive: Proofs of Space-Time (PoSTs) and construct an extremely simple, practical protocol for implementing these proofs. A PoST allows a prover to convince a verifier that she spent a “space-time” resource (storing data—space—over a period of time). Formally, we define the PoST resource as a trade-off between CPU work and space-time (under reasonable cost assumptions, a rational user will prefer to use the lower-cost space-time resource over CPU work).Compared to a proof-of-work, a PoST requires less energy use, as the “difficulty” can be increased by extending the time period over which data is stored without increasing computation costs. Our definition is very similar to “Proofs of Space” [ePrint 2013/796, 2013/805] but, unlike the previous definitions, takes into account amortization attacks and storage duration. Moreover, our protocol uses a very different (and much simpler) technique, making use of the fact that we explicitly allow a space-time tradeoff, and doesn’t require any non-standard assumptions (beyond random oracles). Unlike previous constructions, our protocol allows incremental difficulty adjustment, which can gracefully handle increases in the price of storage compared to CPU work. In addition, we show how, in a crypto-currency context, the parameters of the scheme can be adjusted using a market-based mechanism, similar in spirit to the difficulty adjustment for PoW protocols.
2019
CRYPTO
Non-malleable Codes for Decision Trees 📺
We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by decision trees of depth $$d = n^{1/4-o(1)}$$. In particular, each bit of the tampered codeword is set arbitrarily after adaptively reading up to d arbitrary locations within the original codeword. Prior to this work, no efficient unconditional non-malleable codes were known for decision trees beyond depth $$O(\log^2 n)$$. Our result also yields efficient, unconditional non-malleable codes that are $$\exp(-n^{\varOmega(1)})$$-secure against constant-depth circuits of $$\exp(n^{\varOmega(1)})$$-size. Prior work of Chattopadhyay and Li (STOC 2017) and Ball et al. (FOCS 2018) only provides protection against $$\exp(O(\log^2 n))$$-size circuits with $$\exp(-O(\log^2 n))$$-security. We achieve our result through simple non-malleable reductions of decision tree tampering to split-state tampering. As an intermediary, we give a simple and generic reduction of leakage-resilient split-state tampering to split-state tampering with improved parameters. Prior work of Aggarwal et al. (TCC 2015) only provides a reduction to split-state non-malleable codes with decoders that exhibit particular properties.
2019
CRYPTO
Explicit Rate-1 Non-malleable Codes for Local Tampering 📺
This paper constructs high-rate non-malleable codes in the information-theoretic plain model against tampering functions with bounded locality. We consider $$\delta$$-local tampering functions; namely, each output bit of the tampering function is a function of (at most) $$\delta$$ input bits. This work presents the first explicit and efficient rate-1 non-malleable code for $$\delta$$-local tampering functions, where $$\delta = \xi \lg n$$ and $$\xi < 1$$ is any positive constant. As a corollary, we construct the first explicit rate-1 non-malleable code against NC$$^0$$ tampering functions. Before our work, no explicit construction for a constant-rate non-malleable code was known even for the simplest 1-local tampering functions. Ball et al. (EUROCRYPT 2016), and Chattopadhyay and Li (STOC 2017) provided the first explicit non-malleable codes against $$\delta$$-local tampering functions. However, these constructions are rate-0 even when the tampering functions have 1-locality. In the CRS model, Faust et al. (EUROCRYPT 2014) constructed efficient rate-1 non-malleable codes for $$\delta = O(\log n)$$ local tampering functions. Our main result is a general compiler that bootstraps a rate-0 non-malleable code against leaky input and output local tampering functions to construct a rate-1 non-malleable code against $$\xi \lg n$$-local tampering functions, for any positive constant $$\xi < 1$$. Our explicit construction instantiates this compiler using an appropriate encoding by Ball et al. (EUROCRYPT 2016).
2019
CRYPTO
Continuous Space-Bounded Non-malleable Codes from Stronger Proofs-of-Space 📺
Non-malleable codes are encoding schemes that provide protections against various classes of tampering attacks. Recently Faust et al. (CRYPTO 2017) initiated the study of space-bounded non-malleable codes that provide such protections against tampering within small-space devices. They put forward a construction based on any non-interactive proof-of-space (NIPoS). However, the scheme only protects against an a priori bounded number of tampering attacks. We construct non-malleable codes that are resilient to an unbounded polynomial number of space-bounded tamperings. Towards that we introduce a stronger variant of $$\text{NIPoS}$$ called proof-extractable $$\text{NIPoS}$$ ($$\text{PExt-NIPoS}$$), and propose two approaches to constructing such a primitive. Using a new proof strategy we show that the generic encoding scheme of Faust et al. achieves unbounded tamper-resilience when instantiated with a $$\text{PExt-NIPoS}$$. We show two methods to construct $$\text{PExt-NIPoS}$$:
1. The first method uses a special family of “memory-hard” graphs, called challenge-hard graphs (CHG), a notion we introduce here. We instantiate such a family of graphs based on an extension of stacks of localized expanders (first used by Ren and Devadas in the context of proof-of-space). In addition, we show that the graph construction used as a building block for the proof-of-space by Dziembowski et al. (CRYPTO 2015) satisfies challenge-hardness as well. These two CHG instantiations lead to continuous space-bounded NMC with different features in the random oracle model.
2. Our second instantiation relies on a new measurable property, called uniqueness of $$\text{NIPoS}$$. We show that standard extractability can be upgraded to proof-extractability if the $$\text{NIPoS}$$ also has uniqueness. We propose a simple heuristic construction of $$\text{NIPoS}$$ that achieves (partial) uniqueness, based on a candidate memory-hard function in the standard model and a publicly verifiable computation with small-space verification. Instantiating the encoding scheme of Faust et al. with this $$\text{NIPoS}$$, we obtain a continuous space-bounded NMC that supports the “most practical” parameters, complementing the provably secure but “relatively impractical” CHG-based constructions. Additionally, we revisit the construction of Faust et al. and observe that due to the lack of uniqueness of their $$\text{NIPoS}$$, the resulting encoding schemes yield “highly impractical” parameters in the continuous setting.
We conclude the paper with a comparative study of all our non-malleable code constructions with an estimation of concrete parameters.
2019
CRYPTO
Synchronous, with a Chance of Partition Tolerance 📺
Murphy, Murky, Mopey, Moody, and Morose decide to write a paper together over the Internet and submit it to the prestigious CRYPTO’19 conference that has the most amazing PC. They encounter a few problems. First, not everyone is online every day: some are lazy and go skiing on Mondays; others cannot use git correctly and they are completely unaware that they are losing messages. Second, a small subset of the co-authors may be secretly plotting to disrupt the project (e.g., because they are writing a competing paper in stealth).Suppose that each day, sufficiently many honest co-authors are online (and use git correctly); moreover, suppose that messages checked into git on Monday can be correctly received by honest and online co-authors on Tuesday or any future day. Can the honest co-authors successfully finish the paper in a small number of days such that they make the CRYPTO deadline; and perhaps importantly, can all the honest co-authors, including even those who are lazy and those who sometimes use git incorrectly, agree on the final theorem?
2019
CRYPTO
Subvector Commitments with Application to Succinct Arguments 📺
We put forward the notion of subvector commitments (SVC): an SVC allows one to open a committed vector at a set of positions, where the opening size is independent of the length of the committed vector and of the number of positions to be opened. We propose two constructions under variants of the root assumption and the CDH assumption, respectively. We further generalize SVC to a notion called linear map commitments (LMC), which allows one to open a committed vector to its images under linear maps with a single short message, and propose a construction over pairing groups. Equipped with these newly developed tools, we revisit the “CS proofs” paradigm [Micali, FOCS 1994] which turns any argument with a public-coin verifier into a non-interactive argument using the Fiat-Shamir transform in the random oracle model. We propose a compiler that turns any (linear, resp.) PCP into a non-interactive argument, using exclusively SVCs (LMCs, resp.). For approximately 80 bits of soundness, we highlight the following new implications:
1. There exists a succinct non-interactive argument of knowledge (SNARK) with public-coin setup with proofs of size 5360 bits, under the adaptive root assumption over class groups of imaginary quadratic orders against adversaries with runtime $$2^{128}$$. At the time of writing, this is the shortest SNARK with public-coin setup.
2. There exists a non-interactive argument with private-coin setup, where proofs consist of 2 group elements and 3 field elements, in the generic bilinear group model.
2019
CRYPTO
Batching Techniques for Accumulators with Applications to IOPs and Stateless Blockchains 📺
We present batching techniques for cryptographic accumulators and vector commitments in groups of unknown order. Our techniques are tailored for distributed settings where no trusted accumulator manager exists and updates to the accumulator are processed in batches. We develop techniques for non-interactively aggregating membership proofs that can be verified with a constant number of group operations. We also provide a constant-sized batch non-membership proof for a large number of elements. These proofs can be used to build the first positional vector commitment (VC) with constant-sized openings and constant-sized public parameters. As a core building block for our batching techniques we develop several succinct proof systems in groups of unknown order. These extend a recent construction of a succinct proof of correct exponentiation, and include a succinct proof of knowledge of an integer discrete logarithm between two group elements. We circumvent an impossibility result for Sigma-protocols in these groups by using a short trapdoor-free CRS. We use these new accumulator and vector commitment constructions to design a stateless blockchain, where nodes only need a constant amount of storage in order to participate in consensus. Further, we show how to use these techniques to reduce the size of IOP instantiations, such as STARKs. The full version of the paper is available online [BBF18b].
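The flavor of aggregated membership proofs can be seen in a toy RSA accumulator, where a single exponentiation verifies a witness covering several elements (toy modulus; in a real deployment the group order must be unknown to all parties):

```python
# Toy RSA accumulator; in practice N comes from a trusted setup or a class
# group so that the group order is unknown to everyone.
p, q = 1000003, 1000033
N = p * q
g = 3

elems = [5, 7, 11]        # accumulated elements, represented by primes
prod = 1
for e in elems:
    prod *= e
acc = pow(g, prod, N)

w7 = pow(g, prod // 7, N)              # membership witness for 7
assert pow(w7, 7, N) == acc

w_batch = pow(g, prod // (5 * 11), N)  # aggregated witness for {5, 11}
assert pow(w_batch, 5 * 11, N) == acc  # one check verifies both members
print("individual and batched membership verified")
```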
2019
CRYPTO
On the Plausibility of Fully Homomorphic Encryption for RAMs 📺
We initiate the study of fully homomorphic encryption for RAMs (RAM-FHE). This is a public-key encryption scheme where, given an encryption of a large database D, anybody can efficiently compute an encryption of P(D) for an arbitrary RAM program P. The running time over the encrypted data should be as close as possible to the worst case running time of P, which may be sub-linear in the data size. A central difficulty in constructing a RAM-FHE scheme is hiding the sequence of memory addresses accessed by P. This is particularly problematic because an adversary may homomorphically evaluate many programs over the same ciphertext, therefore effectively “rewinding” any mechanism for making memory accesses oblivious. We identify a necessary prerequisite towards constructing RAM-FHE that we call rewindable oblivious RAM (rewindable ORAM), which provides security even in this strong adversarial setting. We show how to construct rewindable ORAM using symmetric-key doubly efficient PIR (SK-DEPIR) (Canetti-Holmgren-Richelson, Boyle-Ishai-Pass-Wootters: TCC ’17). We then show how to use rewindable ORAM, along with virtual black-box (VBB) obfuscation for specific circuits, to construct RAM-FHE. The latter primitive can be heuristically instantiated using existing indistinguishability obfuscation candidates. Overall, we obtain a RAM-FHE scheme where the multiplicative overhead in running time is polylogarithmic in the database size N. Our basic scheme is single-hop, but we also extend it to obtain multi-hop RAM-FHE with overhead $$N^\epsilon$$ for arbitrarily small $$\epsilon > 0$$. We view our work as the first evidence that RAM-FHE is likely to exist.
2019
CRYPTO
Homomorphic Time-Lock Puzzles and Applications 📺
Time-lock puzzles allow one to encrypt messages for the future, by efficiently generating a puzzle with a solution s that remains hidden until time $$\mathcal {T}$$ has elapsed. The solution is required to be concealed from the eyes of any algorithm running in (parallel) time less than $$\mathcal {T}$$. We put forth the concept of homomorphic time-lock puzzles, where one can evaluate functions over puzzles without solving them, i.e., one can manipulate a set of puzzles with solutions $$(s_1, \dots , s_n)$$ to obtain a puzzle that solves to $$f(s_1, \ldots , s_n)$$, for any function f. We propose candidate constructions under concrete cryptographic assumptions for different classes of functions. Then we show how homomorphic time-lock puzzles overcome the limitations of classical time-lock puzzles by proposing new protocols for applications of interest, such as e-voting, multi-party coin flipping, and fair contract signing.
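Homomorphic time-lock puzzles build on the classical RSW puzzle, where the generator uses the trapdoor φ(N) to compute the result of 2^T squarings with two exponentiations, while a solver must perform them sequentially. A toy version (parameters far too small for real use):

```python
import random
from math import gcd

p, q = 1000003, 1000033   # toy primes; real puzzles need a large RSA modulus
N, phi = p * q, (p - 1) * (q - 1)
T = 1000                  # number of sequential squarings

x = random.randrange(2, N)
while gcd(x, N) != 1:
    x = random.randrange(2, N)

# generator: two exponentiations thanks to the trapdoor phi(N)
y_fast = pow(x, pow(2, T, phi), N)

# solver: T sequential squarings, which is the point of the puzzle
y = x
for _ in range(T):
    y = y * y % N
assert y == y_fast
print("puzzle solved after", T, "squarings")
```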
2019
CRYPTO
Symmetric Primitives with Structured Secrets 📺
Securely managing encrypted data on an untrusted party is a challenging problem that has motivated the study of a wide variety of cryptographic primitives. A special class of such primitives allows an untrusted party to transform a ciphertext encrypted under one key to a ciphertext under another key, using some auxiliary information that does not leak the underlying data. Prominent examples of such primitives in the symmetric setting are key-homomorphic (weak) PRFs, updatable encryption, and proxy re-encryption. Although these primitives differ significantly in terms of their constructions and security requirements, they share two important properties: (a) they have secrets with structure or extra functionality, and (b) all known constructions of these primitives satisfying reasonably strong definitions of security are based on concrete public-key assumptions, e.g., DDH and LWE. This raises the question of whether these objects inherently belong to the world of public-key primitives, or whether they can potentially be built from simple symmetric-key objects such as pseudorandom functions. In this work, we show that the latter possibility is unlikely. More specifically, we show that:
– Any (bounded) key-homomorphic weak PRF with an abelian output group implies a (bounded) input-homomorphic weak PRF, which has recently been shown to imply not only public-key encryption but also a variety of primitives such as PIR, lossy TDFs, and even IBE.
– Any ciphertext-independent updatable encryption scheme that is forward and post-compromise secure implies PKE. Moreover, any symmetric-key proxy re-encryption scheme with reasonably strong security guarantees implies a forward and post-compromise secure ciphertext-independent updatable encryption, and hence PKE.
In addition, we show that unbounded (or exact) key-homomorphic weak PRFs over abelian groups are impossible in the quantum world. In other words, over abelian groups, bounded key-homomorphism is the best that we can hope for in terms of post-quantum security. Our attack also works over other structured primitives with abelian groups and exact homomorphisms, including homomorphic one-way functions and input-homomorphic weak PRFs.
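Key-homomorphism, the structural property at the center of this work, is easy to state concretely: for the classic random-oracle construction F(k, x) = H(x)^k in a group, keys add while outputs multiply. A toy check (Mersenne-prime modulus chosen for convenience; illustration only):

```python
import hashlib

q = (1 << 127) - 1        # Mersenne prime; the group is Z_q* (toy choice)

def H(x: bytes) -> int:
    """Hash into the group (random-oracle stand-in)."""
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % q or 1

def F(k: int, x: bytes) -> int:
    return pow(H(x), k, q)

k1, k2, x = 12345, 67890, b"input"
# exact key-homomorphism over an abelian group:
assert F(k1 + k2, x) == F(k1, x) * F(k2, x) % q
print("F(k1 + k2, x) == F(k1, x) * F(k2, x)")
```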
2019
CRYPTO
Unifying Leakage Models on a Rényi Day 📺
In the last decade, several works have focused on finding the best way to model the leakage in order to obtain provably secure implementations. One of the most realistic models is the noisy leakage model, introduced in [PR13, DDF14] together with secure constructions. These works suffer from various limitations, in particular the use of ideal leak-free gates in [PR13] and an important loss (in the size of the field) in the reduction in [DDF14]. In this work, we provide new strategies to prove the security of masked implementations and start by unifying the different noisiness metrics used in prior works by relating all of them to a standard notion in information theory: the pointwise mutual information. Based on this new interpretation, we define two new natural metrics and analyze the security of known compilers with respect to these metrics. In particular, we prove (1) a tighter bound for reducing the noisy leakage models to the probing model using our first new metric, (2) better bounds for amplification-based security proofs using the second metric. To support that the improvements we obtain are not only a consequence of the use of alternative metrics, we show that for concrete representation of leakage (e.g., “Hamming weight + Gaussian noise”), our approach significantly improves the parameters compared to prior works. Finally, using the Rényi divergence, we quantify concretely the advantage of an adversary in attacking a block cipher depending on the number of leakage acquisitions available to it.
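The “Hamming weight + Gaussian noise” leakage representation mentioned above can be simulated directly; the toy maximum-likelihood key recovery below shows why the noise level and the number of acquisitions govern the adversary's advantage (all parameters are ours):

```python
import random

random.seed(1)
SIGMA = 1.0               # Gaussian noise level

def hw(x):
    return bin(x).count("1")

def leak(x):
    return hw(x) + random.gauss(0, SIGMA)  # "Hamming weight + Gaussian noise"

secret = 0x3C
msgs = [random.randrange(256) for _ in range(200)]
obs = [leak(secret ^ m) for m in msgs]     # leakage of x = k XOR m

def loglik(k):
    # Gaussian log-likelihood (up to constants) of key hypothesis k
    return -sum((o - hw(k ^ m)) ** 2 for o, m in zip(obs, msgs))

guess = max(range(256), key=loglik)
print(hex(guess), "vs secret", hex(secret))
```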
2019
CRYPTO
Leakage Certification Revisited: Bounding Model Errors in Side-Channel Security Evaluations 📺
Leakage certification aims at guaranteeing that the statistical models used in side-channel security evaluations are close to the true statistical distribution of the leakages, hence can be used to approximate a worst-case security level. Previous works in this direction were only qualitative: for a given amount of measurements available to an evaluation laboratory, they rated a model as “good enough” if the model assumption errors (i.e., the errors due to an incorrect choice of model family) were small with respect to the model estimation errors. We revisit this problem by providing the first quantitative tools for leakage certification. For this purpose, we provide bounds for the (unknown) Mutual Information metric that corresponds to the true statistical distribution of the leakages based on two easy-to-compute information theoretic quantities: the Perceived Information, which is the amount of information that can be extracted from a leaking device thanks to an estimated statistical model, possibly biased due to estimation and assumption errors, and the Hypothetical Information, which is the amount of information that would be extracted from a hypothetical device exactly following the model distribution. This positive outcome derives from the observation that while the estimation of the Mutual Information is in general a hard problem (i.e., estimators are biased and their convergence is distribution-dependent), it is significantly simplified in the case of statistical inference attacks where a target random variable (e.g., a key in a cryptographic setting) has a constant (e.g., uniform) probability. Our results therefore provide a general and principled path to bound the worst-case security level of an implementation. They also significantly speed up the evaluation of any profiled side-channel attack, since they imply that the estimation of the Perceived Information, which embeds an expensive cross-validation step, can be bounded by the computation of a cheaper Hypothetical Information, for any estimated statistical model.
2019
CRYPTO
Security in the Presence of Key Reuse: Context-Separable Interfaces and Their Applications 📺
Key separation is often difficult to enforce in practice. While key reuse can be catastrophic for security, we know of a number of cryptographic schemes for which it is provably safe. But existing formal models, such as the notions of joint security (Haber-Pinkas, CCS ’01) and agility (Acar et al., EUROCRYPT ’10), do not address the full range of key-reuse attacks—in particular, those that break the abstraction of the scheme, or exploit protocol interactions at a higher level of abstraction. This work attends to these vectors by focusing on two key elements: the game that codifies the scheme under attack, as well as its intended adversarial model; and the underlying interface that exposes secret key operations for use by the game. Our main security experiment considers the implications of using an interface (in practice, the API of a software library or a hardware platform such as TPM) to realize the scheme specified by the game when the interface is shared with other unspecified, insecure, or even malicious applications. After building up a definitional framework, we apply it to the analysis of two real-world schemes: the EdDSA signature algorithm and the Noise protocol framework. Both provide some degree of context separability, a design pattern for interfaces and their applications that aids in the deployment of secure protocols.
2019
CRYPTO
The Communication Complexity of Threshold Private Set Intersection 📺
Threshold private set intersection enables Alice and Bob who hold sets $$S_{\mathsf {A}}$$ and $$S_{\mathsf {B}}$$ of size n to compute the intersection $$S_{\mathsf {A}} \cap S_{\mathsf {B}} $$ if the sets do not differ by more than some threshold parameter $$t$$ . In this work, we investigate the communication complexity of this problem and we establish the first upper and lower bounds. We show that any protocol has to have a communication complexity of $$\varOmega (t)$$ . We show that an almost matching upper bound of $$\tilde{\mathcal {O}}(t)$$ can be obtained via fully homomorphic encryption. We present a computationally more efficient protocol based on weaker assumptions, namely additively homomorphic encryption, with a communication complexity of $$\tilde{\mathcal {O}}(t ^2)$$ . For applications like biometric authentication, where a given fingerprint has to have a large intersection with a fingerprint from a database, our protocols may result in significant communication savings. Prior to this work, all previous protocols had a communication complexity of $$\varOmega (n)$$ . Our protocols are the first ones with communication complexities that mainly depend on the threshold parameter $$t$$ and only logarithmically on the set size n.
2019
CRYPTO
Adaptively Secure MPC with Sublinear Communication Complexity 📺
A central challenge in the study of MPC is to balance between security guarantees, hardness assumptions, and resources required for the protocol. In this work, we study the cost of tolerating adaptive corruptions in MPC protocols under various corruption thresholds. In the strongest setting, we consider adaptive corruptions of an arbitrary number of parties (potentially all) and achieve the following results:
– A two-round secure function evaluation (SFE) protocol in the CRS model, assuming LWE and indistinguishability obfuscation (iO). The communication, the CRS size, and the online-computation are sublinear in the size of the function. The iO assumption can be replaced by secure erasures. Previous results required either the communication or the CRS size to be polynomial in the function size.
– Under the same assumptions, we construct a “Bob-optimized” 2PC (where Alice talks first, Bob second, and Alice learns the output). That is, the communication complexity and total computation of Bob are sublinear in the function size and in Alice’s input size. We prove impossibility of “Alice-optimized” protocols.
– Assuming LWE, we bootstrap adaptively secure NIZK arguments to achieve proof size sublinear in the circuit size of the NP-relation.
On a technical level, our results are based on laconic function evaluation (LFE) (Quach, Wee, and Wichs, FOCS’18) and shed light on an interesting duality between LFE and FHE. Next, we analyze adaptive corruptions of all-but-one of the parties and show a two-round SFE protocol in the threshold PKI model (where keys of a threshold FHE scheme are pre-shared among the parties) with communication complexity sublinear in the circuit size, assuming LWE and NIZK. Finally, we consider the honest-majority setting, and show a two-round SFE protocol with guaranteed output delivery under the same constraints.
2019
CRYPTO
Communication Lower Bounds for Statistically Secure MPC, With or Without Preprocessing 📺
We prove a lower bound on the communication complexity of unconditionally secure multiparty computation, both in the standard model with $$n=2t+1$$ parties of which t are corrupted, and in the preprocessing model with $$n=t+1$$ . In both cases, we show that for any $$g \in \mathbb {N}$$ there exists a Boolean circuit C with g gates, where any secure protocol implementing C must communicate $$\varOmega (n g)$$ bits, even if only passive and statistical security is required. The results easily extend to constructing similar circuits over any fixed finite field. This shows that for all sizes of circuits, the O(n) overhead of all known protocols when t is maximal is inherent. It also shows that security comes at a price: namely, the circuit we consider could be computed among n parties with communication of only O(g) bits if no security were required. Our results extend to the case where the threshold t is suboptimal. For the honest majority case, this shows that the known optimizations via packed secret-sharing can only be obtained if one accepts that the threshold is $$t= (1/2 - c)n$$ for a constant c. For the honest majority case, we also show an upper bound that matches the lower bound up to a constant factor (existing upper bounds are a factor $$\lg n$$ off for Boolean circuits).
2019
CRYPTO
Communication-Efficient Unconditional MPC with Guaranteed Output Delivery 📺
We study the communication complexity of unconditionally secure MPC with guaranteed output delivery over point-to-point channels for corruption threshold $$t < n/3$$ . We ask the question: “is it possible to construct MPC in this setting s.t. the communication complexity per multiplication gate is linear in the number of parties?” While a number of works have focused on reducing the communication complexity in this setting, the answer to the above question has remained elusive for over a decade. We resolve the above question in the affirmative by providing an MPC with communication complexity $$O(Cn\kappa + n^3\kappa )$$ where $$\kappa $$ is the size of an element in the field, C is the size of the (arithmetic) circuit, and n is the number of parties. This represents a strict improvement over the previously best known communication complexity of $$O(Cn\kappa +D_Mn^2\kappa +n^3\kappa )$$ where $$D_M$$ is the multiplicative depth of the circuit. To obtain this result, we introduce a novel technique called 4-consistent tuples of sharings which we believe to be of independent interest.
2019
CRYPTO
Efficient Collision Attack Frameworks for RIPEMD-160 📺
RIPEMD-160 is an ISO/IEC standard and has been applied to generate the Bitcoin address with SHA-256. Due to the complex dual-stream structure, the first collision attack on reduced RIPEMD-160 presented by Liu, Mendel and Wang at Asiacrypt 2017 only reaches 30 steps, having a time complexity of $$2^{70}$$. Apart from that, several semi-free-start collision attacks have been published for reduced RIPEMD-160 with the start-from-the-middle method. Inspired by such start-from-the-middle structures, we propose two novel efficient collision attack frameworks for reduced RIPEMD-160 by making full use of the weakness of its message expansion. These two frameworks are called the dense-left-and-sparse-right (DLSR) framework and the sparse-left-and-dense-right (SLDR) framework. As it turns out, the DLSR framework is more efficient than the SLDR framework since one more step can be fully controlled, though with an extra $$2^{32}$$ memory complexity. To construct the best differential characteristics for the DLSR framework, we carefully build the linearized part of the characteristics and then solve the corresponding nonlinear part using a guess-and-determine approach. Based on the newly discovered differential characteristics, we provide colliding message pairs for the first practical collision attacks on 30 and 31 (out of 80) steps of RIPEMD-160 with time complexity $$2^{35.9}$$ and $$2^{41.5}$$ respectively. In addition, benefiting from the partial calculation, we can attack 33 and 34 (out of 80) steps of RIPEMD-160 with time complexity $$2^{67.1}$$ and $$2^{74.3}$$ respectively. When applying the SLDR framework to the differential characteristic used in the Asiacrypt 2017 paper, we significantly improve the time complexity by a factor of $$2^{13}$$. However, it still cannot compete with the results obtained from the DLSR framework. To the best of our knowledge, these are the best collision attacks on reduced RIPEMD-160 with respect to the number of steps, including the first colliding message pairs for 30 and 31 steps of RIPEMD-160.
2019
CRYPTO
Improving Attacks on Round-Reduced Speck32/64 Using Deep Learning 📺
This paper has four main contributions. First, we calculate the predicted difference distribution of Speck32/64 with one specific input difference under the Markov assumption completely for up to eight rounds and verify that this yields a globally fairly good model of the difference distribution of Speck32/64. Secondly, we show that contrary to conventional wisdom, machine learning can produce very powerful cryptographic distinguishers: for instance, in a simple low-data, chosen plaintext attack on nine rounds of Speck, we present distinguishers based on deep residual neural networks that achieve a mean key rank roughly five times lower than an analogous classical distinguisher using the full difference distribution table. Thirdly, we develop a highly selective key search policy based on a variant of Bayesian optimization which, together with our neural distinguishers, can be used to reduce the remaining security of 11-round Speck32/64 to roughly 38 bits. This is a significant improvement over previous literature. Lastly, we show that our neural distinguishers successfully use features of the ciphertext pair distribution that are invisible to all purely differential distinguishers even given unlimited data. While our attack is based on a known input difference taken from the literature, we also show that neural networks can be used to rapidly (within a matter of minutes on our machine) find good input differences without using prior human cryptanalysis. Supplementary code and data for this paper is available at https://github.com/agohr/deep_speck.
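For readers who want to reproduce such experiments, here is a minimal pure-Python sketch of Speck32/64 itself, following the public Speck specification (the paper's neural distinguishers and key search are not reproduced here); the final assertion checks the designers' published test vector.

```python
# Speck32/64: 16-bit words, 22 rounds, rotation constants alpha=7, beta=2.
MASK = 0xFFFF

def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK

def round_fn(x, y, k):
    x = ((ror(x, 7) + y) & MASK) ^ k
    y = rol(y, 2) ^ x
    return x, y

def expand_key(k, rounds=22):
    # k = [l2, l1, l0, k0] as 16-bit words (key written l2 || l1 || l0 || k0)
    l = [k[2], k[1], k[0]]
    ks = [k[3]]
    for i in range(rounds - 1):           # key schedule reuses the round fn
        lx, kx = round_fn(l[i], ks[i], i)
        l.append(lx)
        ks.append(kx)
    return ks

def encrypt(pt, ks):
    x, y = pt
    for k in ks:
        x, y = round_fn(x, y, k)
    return x, y

# Official test vector: key 1918 1110 0908 0100, pt 6574 694c -> ct a868 42f2
ks = expand_key([0x1918, 0x1110, 0x0908, 0x0100])
assert encrypt((0x6574, 0x694C), ks) == (0xA868, 0x42F2)
```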
2019
CRYPTO
Correlation of Quadratic Boolean Functions: Cryptanalysis of All Versions of Full $\mathsf {MORUS}$ 📺
We show that the correlation of any quadratic Boolean function can be read out from its so-called disjoint quadratic form. We further propose a polynomial-time algorithm that can transform an arbitrary quadratic Boolean function into its disjoint quadratic form. With this algorithm, the exact correlation of quadratic Boolean functions can be computed efficiently. We apply this method to analyze the linear trails of $$\mathsf {MORUS}$$ (one of the seven finalists of the CAESAR competition), which are found with the help of a generic model for linear trails of $$\mathsf {MORUS}$$-like key-stream generators. In our model, any tool for finding linear trails of block ciphers can be used to search for trails of $$\mathsf {MORUS}$$-like key-stream generators. As a result, a set of trails with correlation $$2^{-38}$$ is identified for all versions of full $$\mathsf {MORUS}$$, while the correlations of previously published best trails for $$\mathsf {MORUS}$$-640 and $$\mathsf {MORUS}$$-1280 are $$2^{-73}$$ and $$2^{-76}$$ respectively (ASIACRYPT 2018). This significantly improves the complexity of the attack on $$\mathsf {MORUS}$$-1280-256 from $$2^{152}$$ to $$2^{76}$$. These new trails also lead to the first distinguishing and message-recovery attacks on $$\mathsf {MORUS}$$-640-128 and $$\mathsf {MORUS}$$-1280-128 with surprisingly low complexities around $$2^{76}$$. Moreover, we observe that the condition for exploiting these trails in an attack can be more relaxed than previously thought, which shows that the new trails are superior to previously published ones in terms of both correlation and the number of ciphertext blocks involved.
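As a sanity check on the quantity being computed, the following brute-force sketch (illustrative only, not the paper's polynomial-time algorithm) evaluates the correlation of a small quadratic Boolean function that is already in disjoint quadratic form, where each disjoint product contributes a factor 1/2.

```python
# cor(f) = 2^{-n} * sum_x (-1)^{f(x)}, brute-forced over all inputs.
from itertools import product

def correlation(f, n):
    return sum((-1) ** f(x) for x in product((0, 1), repeat=n)) / 2 ** n

# f(x) = x0*x1 XOR x2*x3 is a disjoint quadratic form; each disjoint
# product has correlation 1/2, so cor(f) = (1/2) * (1/2) = 1/4.
f = lambda x: (x[0] & x[1]) ^ (x[2] & x[3])
print(correlation(f, 4))  # 0.25
```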
2019
CRYPTO
Low-Memory Attacks Against Two-Round Even-Mansour Using the 3-XOR Problem 📺
The iterated Even-Mansour construction is an elegant construction that idealizes block cipher designs such as the AES. In this work we focus on the simplest variant, the 2-round Even-Mansour construction with a single key. This is the most minimal construction that offers security beyond the birthday bound: there is a security proof up to $$2^{2n/3}$$ evaluations of the underlying permutations and encryption, and the best known attacks have a complexity of roughly $$2^n/n$$ operations.We show that attacking this scheme with block size n is related to the 3-XOR problem with element size $$\ell = 2n$$, an important algorithmic problem that has been studied since the nineties. In particular the 3-XOR problem is known to require at least $$2^{\ell /3}$$ queries, and the best known algorithms require around $$2^{\ell /2}/\ell $$ operations: this roughly matches the known bounds for the 2-round Even-Mansour scheme. Using this link we describe new attacks against the 2-round Even-Mansour scheme. In particular, we obtain the first algorithms where both the data and the memory complexity are significantly lower than $$2^{n}$$. From a practical standpoint, previous works with a data and/or memory complexity close to $$2^n$$ are unlikely to be more efficient than a simple brute-force search over the key. Our best algorithm requires just $$\lambda n$$ known plaintext/ciphertext pairs, for some constant $$0< \lambda < 1$$, $$2^n/\lambda n$$ time, and $$2^{\lambda n}$$ memory. For instance, with $$n=64$$ and $$\lambda = 1/2$$, the memory requirement is practical, and we gain a factor 32 over brute-force search. We also describe an algorithm with asymptotic complexity $$\mathcal {O}(2^{n} \ln ^2 n/n^2)$$, improving the previous asymptotic complexity of $$\mathcal {O}(2^n/n)$$, using a variant of the 3-SUM algorithm of Baran, Demaine, and Pătraşcu.
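For concreteness, here is a toy sketch of the single-key 2-round Even-Mansour construction under attack, E(x) = k XOR P2(k XOR P1(k XOR x)), instantiated with random 16-bit permutations; the block size and the permutations are illustrative stand-ins, and the 3-XOR-based attacks themselves are beyond this sketch.

```python
import random

random.seed(1)
N = 1 << 16                       # toy block size n = 16
P1 = list(range(N)); random.shuffle(P1)   # public random permutations
P2 = list(range(N)); random.shuffle(P2)

def encrypt(k, x):
    # single-key 2-round Even-Mansour: whiten, permute, whiten, permute, whiten
    return P2[P1[x ^ k] ^ k] ^ k

k = random.randrange(N)
print(hex(encrypt(k, 0x1234)))
```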
2019
CRYPTO
How to Record Quantum Queries, and Applications to Quantum Indifferentiability 📺
The quantum random oracle model (QROM) has become the standard model in which to prove the post-quantum security of random-oracle-based constructions. Unfortunately, none of the known proof techniques allow the reduction to record information about the adversary’s queries, a crucial feature of many classical ROM proofs, including all proofs of indifferentiability for hash function domain extension. In this work, we give a new QROM proof technique that overcomes this “recording barrier”. We do so by giving a new “compressed oracle” which allows for efficient on-the-fly simulation of random oracles, roughly analogous to the usual classical simulation. We then use this new technique to give the first proof of quantum indifferentiability for the Merkle-Damgård domain extender for hash functions. We also give a proof of security for the Fujisaki-Okamoto transformation; previous proofs required modifying the scheme to include an additional hash term. Given the threat posed by quantum computers and the push toward quantum-resistant cryptosystems, our work represents an important tool for efficient post-quantum cryptosystems.
2019
CRYPTO
Quantum Security Proofs Using Semi-classical Oracles 📺
We present an improved version of the one-way to hiding (O2H) Theorem by Unruh (J. ACM 2015). Our new O2H Theorem gives higher flexibility (arbitrary joint distributions of oracles and inputs, multiple reprogrammed points) as well as tighter bounds (removing square-root factors, taking parallelism into account). The improved O2H Theorem makes use of a new variant of quantum oracles, semi-classical oracles, where queries are partially measured. The new O2H Theorem allows us to get better security bounds in several public-key encryption schemes.
2019
CRYPTO
Quantum Indistinguishability of Random Sponges 📺
In this work we show that the sponge construction can be used to construct quantum-secure pseudorandom functions. As our main result we prove that random sponges are quantum indistinguishable from random functions. In this setting the adversary is given superposition access to the input-output behavior of the construction but not to the internal function. Our proofs hold under the assumption that the internal function is a random function or permutation. We then use this result to obtain a quantum-security version of a result by Andreeva, Daemen, Mennink, and Van Assche (FSE’15) which shows that a sponge that uses a secure PRP or PRF as internal function is a secure PRF. This result also proves that the recent attacks against CBC-MAC in the quantum-access model by Kaplan, Leurent, Leverrier, and Naya-Plasencia (Crypto’16) and Santoli, and Schaffner (QIC’16) can be prevented by introducing a state with a non-trivial inner part. The proof of our main result is derived by analyzing the joint distribution of any q input-output pairs. Our method analyzes the statistical behavior of the considered construction in great detail. The used techniques might prove useful in future analysis of different cryptographic primitives considering quantum adversaries. Using Zhandry’s PRF/PRP switching lemma we then obtain that quantum indistinguishability also holds if the internal block function is a random permutation.
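The following is a minimal sketch of the keyed-sponge data flow analyzed above, with the key placed in the inner (capacity) part of the state, matching the "non-trivial inner part" remark; the SHA-256-based internal function and the rate/capacity split are illustrative stand-ins, not the paper's construction (in particular, the toy internal function is not actually a permutation).

```python
import hashlib

RATE, CAP = 8, 24   # bytes; 32-byte state = rate part || capacity part

def perm(state: bytes) -> bytes:
    # toy 32-byte internal function stand-in; illustrates the data flow only
    return hashlib.sha256(state).digest()

def sponge_prf(key: bytes, msg: bytes, outlen: int = 16) -> bytes:
    state = bytes(RATE) + key.ljust(CAP, b"\x00")[:CAP]  # key in inner state
    msg += b"\x80" + bytes((-len(msg) - 1) % RATE)       # simple padding
    for i in range(0, len(msg), RATE):                   # absorb phase
        block = msg[i:i + RATE] + bytes(CAP)
        state = perm(bytes(a ^ b for a, b in zip(state, block)))
    out = b""
    while len(out) < outlen:                             # squeeze phase
        out += state[:RATE]
        state = perm(state)
    return out[:outlen]

print(sponge_prf(b"secret key", b"hello sponge").hex())
```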
2019
CRYPTO
Revisiting Post-quantum Fiat-Shamir 📺
The Fiat-Shamir transformation is a useful approach to building non-interactive arguments (of knowledge) in the random oracle model. Unfortunately, existing proof techniques are incapable of proving the security of Fiat-Shamir in the quantum setting. The problem stems from (1) the difficulty of quantum rewinding, and (2) the inability of current techniques to adaptively program random oracles in the quantum setting. In this work, we show how to overcome the limitations above in many settings. In particular, we give mild conditions under which Fiat-Shamir is secure in the quantum setting. As an application, we show that existing lattice signatures based on Fiat-Shamir are secure without any modifications.
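As a reminder of the transformation being analyzed, here is a minimal classical sketch of Fiat-Shamir applied to Schnorr's identification protocol; the toy group parameters and the hash are illustrative, and nothing here models the quantum random oracle access the paper is concerned with.

```python
import hashlib, secrets

# toy Schnorr group: q divides p - 1 and g = 2^((p-1)/q) mod p has order q
p, q = 2267, 103            # p prime, p - 1 = 2 * 11 * 103
g = pow(2, (p - 1) // q, p)

def H(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1    # secret key
    return x, pow(g, x, p)              # (sk, pk)

def sign(x, msg):
    r = secrets.randbelow(q - 1) + 1
    a = pow(g, r, p)                    # prover's commitment
    c = H(a, msg)                       # Fiat-Shamir: challenge = hash
    z = (r + c * x) % q                 # response
    return a, z

def verify(y, msg, sig):
    a, z = sig
    c = H(a, msg)
    return pow(g, z, p) == (a * pow(y, c, p)) % p

x, y = keygen()
assert verify(y, b"hello", sign(x, b"hello"))
```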
2019
CRYPTO
Security of the Fiat-Shamir Transformation in the Quantum Random-Oracle Model 📺
The famous Fiat-Shamir transformation turns any public-coin three-round interactive proof, i.e., any so-called $$\Sigma {\text {-protocol}}$$ , into a non-interactive proof in the random-oracle model. We study this transformation in the setting of a quantum adversary that in particular may query the random oracle in quantum superposition. Our main result is a generic reduction that transforms any quantum dishonest prover attacking the Fiat-Shamir transformation in the quantum random-oracle model into a similarly successful quantum dishonest prover attacking the underlying $$\Sigma {\text {-protocol}}$$ (in the standard model). Applied to the standard soundness and proof-of-knowledge definitions, our reduction implies that both these security properties, in both the computational and the statistical variant, are preserved under the Fiat-Shamir transformation even when allowing quantum attacks. Our result improves and completes the partial results that have been known so far, but it also proves wrong certain claims made in the literature. In the context of post-quantum secure signature schemes, our results imply that for any $$\Sigma {\text {-protocol}}$$ that is a proof-of-knowledge against quantum dishonest provers (and that satisfies some additional natural properties), the corresponding Fiat-Shamir signature scheme is secure in the quantum random-oracle model. For example, we can conclude that the non-optimized version of Fish, which is the bare Fiat-Shamir variant of the NIST candidate Picnic, is secure in the quantum random-oracle model.
2019
CRYPTO
Unconditionally Secure Computation Against Low-Complexity Leakage 📺
We consider the problem of constructing leakage-resilient circuit compilers that are secure against global leakage functions with bounded output length. By global, we mean that the leakage can depend on all circuit wires and output a low-complexity function (represented as a multi-output Boolean circuit) applied on these wires. In this work, we design compilers both in the stateless (a.k.a. single-shot leakage) setting and the stateful (a.k.a. continuous leakage) setting that are unconditionally secure against $$\mathsf {AC}^0$$ leakage and similar low-complexity classes. In the stateless case, we show that the original private circuits construction of Ishai, Sahai, and Wagner (Crypto 2003) is actually secure against $$\mathsf {AC}^0$$ leakage. In the stateful case, we modify the construction of Rothblum (Crypto 2012), obtaining a simple construction with unconditional security. Prior works that designed leakage-resilient circuit compilers against $$\mathsf {AC}^0$$ leakage had to rely either on secure hardware components (Faust et al., Eurocrypt 2010, Miles-Viola, STOC 2013) or on (unproven) complexity-theoretic assumptions (Rothblum, Crypto 2012).
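For context, the following sketch implements the masked AND gadget at the heart of the Ishai-Sahai-Wagner private-circuits construction referenced above: each value is split into XOR shares and cross products are refreshed with pairwise randomness. It is a correctness illustration only, not a secure implementation (Python's random module is not a CSPRNG).

```python
import functools, random

def xor(bits):
    return functools.reduce(lambda a, b: a ^ b, bits, 0)

def share(bit, n):
    # n XOR shares of a bit: n-1 random shares plus one correcting share
    s = [random.randrange(2) for _ in range(n - 1)]
    return s + [bit ^ xor(s)]

def isw_and(a, b):
    # a, b: share vectors of length n; returns shares of AND(value(a), value(b))
    n = len(a)
    r = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r[i][j] = random.randrange(2)                       # fresh mask
            r[j][i] = r[i][j] ^ (a[i] & b[j]) ^ (a[j] & b[i])   # ISW refresh
    c = [a[i] & b[i] for i in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                c[i] ^= r[i][j]
    return c

n = 3
c = isw_and(share(1, n), share(1, n))
assert xor(c) == 1    # AND(1, 1) = 1, recovered by unmasking the shares
```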
2019
CRYPTO
Tight Leakage-Resilient CCA-Security from Quasi-Adaptive Hash Proof System 📺
We propose the concept of quasi-adaptive hash proof system (QAHPS), where the projection key is allowed to depend on the specific language for which hash values are computed. We formalize leakage-resilient (LR) ardency for QAHPS by defining two statistical properties, including LR-$$\langle \mathscr {L}_0, \mathscr {L}_1 \rangle $$-universal and LR-$$\langle \mathscr {L}_0, \mathscr {L}_1 \rangle $$-key-switching. We provide a generic approach to tightly leakage-resilient CCA (LR-CCA) secure public-key encryption (PKE) from LR-ardent QAHPS. Our approach is reminiscent of the seminal work of Cramer and Shoup (Eurocrypt’02), and employs three QAHPS schemes, one for generating a uniform string to hide the plaintext, and the other two for proving the well-formedness of the ciphertext. The LR-ardency of QAHPS makes possible the tight LR-CCA security. We give instantiations based on the standard k-Linear (k-LIN) assumptions over asymmetric and symmetric pairing groups, respectively, and obtain fully compact PKE with tight LR-CCA security. The security loss is $${{O}}(\log {Q_{{e}}})$$ where $${Q_{{e}}}$$ denotes the number of encryption queries. Specifically, our tightly LR-CCA secure PKE instantiation from SXDH has only 4 group elements in the public key and 7 group elements in the ciphertext, thus is the most efficient one.
2019
CRYPTO
Non-malleable Secret Sharing in the Computational Setting: Adaptive Tampering, Noisy-Leakage Resilience, and Improved Rate 📺
We revisit the concept of non-malleable secret sharing (Goyal and Kumar, STOC 2018) in the computational setting. In particular, under the assumption of one-to-one one-way functions, we exhibit a computationally private, threshold secret sharing scheme satisfying all of the following properties.
– Continuous non-malleability: No computationally-bounded adversary tampering independently with all the shares can produce mauled shares that reconstruct to a value related to the original secret. This holds even in case the adversary can tamper continuously, for an unbounded polynomial number of times, with the same target secret sharing, where the next sequence of tampering functions, as well as the subset of shares used for reconstruction, can be chosen adaptively based on the outcome of previous reconstructions.
– Resilience to noisy leakage: Non-malleability holds even if the adversary can additionally leak information independently from all the shares. There is no bound on the length of leaked information, as long as the overall leakage does not decrease the min-entropy of each share by too much.
– Improved rate: The information rate of our final scheme, defined as the ratio between the size of the message and the maximal size of a share, asymptotically approaches 1 when the message length goes to infinity.
Previous constructions achieved information-theoretic security, sometimes even for arbitrary access structures, at the price of at least one of the following limitations: (i) non-malleability only holds against one-time tampering attacks; (ii) non-malleability holds against a bounded number of tampering attacks, but both the choice of the tampering functions and of the sets used for reconstruction is non-adaptive; (iii) information rate asymptotically approaching zero; (iv) no security guarantee in the presence of leakage.
2019
CRYPTO
Leakage Resilient Secret Sharing and Applications 📺
A secret sharing scheme allows a dealer to share a secret among a set of n parties such that any authorized subset of the parties can recover the secret, while any unauthorized subset learns no information about the secret. A leakage-resilient secret sharing scheme (introduced in independent works by Goyal and Kumar, STOC ’18 and Benhamouda, Degwekar, Ishai and Rabin, CRYPTO ’18) additionally requires the secrecy to hold against every unauthorized set of parties even if they obtain some bounded leakage from every other share. The leakage is said to be local if it is computed independently for each share. So far, the only known constructions of local leakage resilient secret sharing schemes are for threshold access structures for very low (O(1)) or very high ($$n - o(\log n)$$) thresholds. In this work, we give a compiler that takes a secret sharing scheme for any monotone access structure and produces a local leakage resilient secret sharing scheme for the same access structure, with only a constant-factor asymptotic blow-up in the sizes of the shares. Furthermore, the resultant secret sharing scheme has optimal leakage-resilience rate, i.e., the ratio between the leakage tolerated and the size of each share can be made arbitrarily close to 1. Using this secret sharing scheme as the main building block, we obtain the following results:
– Rate-preserving non-malleable secret sharing: We give a compiler that takes any secret sharing scheme for a 4-monotone access structure (a 4-monotone access structure has the property that any authorized set has size at least 4) with rate R and converts it into a non-malleable secret sharing scheme for the same access structure with rate $$\varOmega (R)$$ . The previous such non-zero rate construction (Badrinarayanan and Srinivasan, EUROCRYPT ’19) achieved a rate of $$\varTheta (R/{t_{\max }\log ^2 n})$$ , where $$t_{\max }$$ is the maximum size of any minimal set in the access structure. As a special case, for any threshold $$t \ge 4$$ and an arbitrary $$n \ge t$$ , we get the first constant-rate construction of t-out-of-n non-malleable secret sharing.
– Leakage-tolerant multiparty computation for general interaction patterns: For any function f, we give a reduction from constructing a leakage-tolerant secure multi-party computation protocol for computing f that obeys any given interaction pattern to constructing a secure (but not necessarily leakage-tolerant) protocol for a related function that obeys the star interaction pattern. Together with the known results for the star interaction pattern, this gives leakage tolerant MPC for any interaction pattern with statistical/computational security. This improves upon the result of Halevi et al. (ITCS 2016), who presented such a reduction in a leak-free environment.
2019
CRYPTO
Stronger Leakage-Resilient and Non-Malleable Secret Sharing Schemes for General Access Structures 📺
In this work we present a collection of compilers that take secret sharing schemes for an arbitrary access structure as input and produce either leakage-resilient or non-malleable secret sharing schemes for the same access structure. A leakage-resilient secret sharing scheme hides the secret from an adversary, who has access to an unqualified set of shares, even if the adversary additionally obtains some size-bounded leakage from all other secret shares. A non-malleable secret sharing scheme guarantees that a secret that is reconstructed from a set of tampered shares is either equal to the original secret or completely unrelated. To the best of our knowledge we present the first generic compiler for leakage-resilient secret sharing for general access structures. In the case of non-malleable secret sharing, we strengthen previous definitions, provide separations between them, and construct a non-malleable secret sharing scheme for general access structures that fulfills the strongest definition with respect to independent share tampering functions. More precisely, our scheme is secure against concurrent tampering: The adversary is allowed to (non-adaptively) tamper the shares multiple times, and in each tampering attempt can freely choose the qualified set of shares to be used by the reconstruction algorithm to reconstruct the tampered secret. This is a strong analogue of the multiple-tampering setting for split-state non-malleable codes and extractors. We show how to use leakage-resilient and non-malleable secret sharing schemes to construct leakage-resilient and non-malleable threshold signatures. Classical threshold signatures allow distributing the secret key of a signature scheme among a set of parties, such that certain qualified subsets can sign messages. We construct threshold signature schemes that remain secure even if an adversary leaks from or tampers with all secret shares.
2019
CRYPTO
Memory-Hard Functions from Cryptographic Primitives 📺
Memory-hard functions (MHFs) are moderately-hard functions which enforce evaluation costs both in terms of time and memory (often, in the form of a trade-off). They are used e.g. for password protection, password-based key-derivation, and within cryptocurrencies, and have received a considerable amount of theoretical scrutiny over the last few years. However, analyses see MHFs as modes of operation of some underlying hash function $$\mathcal {H}$$, modeled as a monolithic random oracle. This is however a very strong assumption, as such hash functions are built from much simpler primitives, following somewhat ad-hoc design paradigms. This paper initiates the study of how to securely instantiate $$\mathcal {H}$$ within MHF designs using common cryptographic primitives like block ciphers, compression functions, and permutations. Security here will be in a model in which the adversary has parallel access to an idealized version of the underlying primitive. We will provide provably memory-hard constructions from all the aforementioned primitives. Our results are generic, in that we will rely on hard-to-pebble graphs designed in prior works to obtain our constructions. One particular challenge we encounter is that $$\mathcal {H}$$ is usually required to have large outputs (to increase memory hardness without changing the description size of MHFs), whereas the underlying primitives generally have small output sizes.
2019
CRYPTO
Data-Independent Memory Hard Functions: New Attacks and Stronger Constructions 📺
Memory-hard functions (MHFs) are a key cryptographic primitive underlying the design of moderately expensive password hashing algorithms and egalitarian proofs of work. Over the past few years several increasingly stringent goals for an MHF have been proposed including the requirement that the MHF have high sequential space-time (ST) complexity, parallel space-time complexity, amortized area-time (aAT) complexity and sustained space complexity. Data-Independent Memory Hard Functions (iMHFs) are of special interest in the context of password hashing as they naturally resist side-channel attacks. iMHFs can be specified using a directed acyclic graph (DAG) G with $$N=2^n$$ nodes and low indegree and the complexity of the iMHF can be analyzed using a pebbling game. Recently, Alwen et al. [ABH17] constructed a DAG called DRSample that has aAT complexity at least $$\varOmega \!\left( N^2/{\text {log}} N\right) $$ . Asymptotically DRSample outperformed all prior iMHF constructions including Argon2i, winner of the password hashing competition (aAT cost $${\mathcal {O}} \!\left( N^{1.767}\right) $$ ), though the constants in these bounds are poorly understood. We show that the greedy pebbling strategy of Boneh et al. [BCS16] is particularly effective against DRSample, e.g., the aAT cost is $${\mathcal {O}} (N^2/{\text {log}} N)$$ . In fact, our empirical analysis reverses the prior conclusion of Alwen et al. that DRSample provides stronger resistance to known pebbling attacks for practical values of $$N \le 2^{24}$$ . We construct a new iMHF candidate (DRSample+BRG) by using the bit-reversal graph to extend DRSample. We then prove that the construction is asymptotically optimal under every MHF criterion, and we empirically demonstrate that our iMHF provides the best resistance to known pebbling attacks. For example, we show that any parallel pebbling attack either has aAT cost $$\omega (N^2)$$ or requires at least $$\varOmega (N)$$ steps with $$\varOmega (N/{\text {log}} N)$$ pebbles on the DAG. This makes our construction the first practical iMHF with a strong sustained space-complexity guarantee and immediately implies that any parallel pebbling has aAT complexity $$\varOmega (N^2/{\text {log}} N)$$ . We also prove that any sequential pebbling (including the greedy pebbling attack) has aAT cost $$\varOmega \!\left( N^2\right) $$ and, if a plausible conjecture holds, any parallel pebbling has aAT cost $$\varOmega (N^2 \log \log N/{\text {log}} N)$$ —the best possible bound for an iMHF. We implement our new iMHF and demonstrate that it is just as fast as Argon2. Along the way we propose a simple modification to the Argon2 round function that increases an attacker’s aAT cost by nearly an order of magnitude without increasing running time on a CPU. Finally, we give a pebbling reduction that proves that in the parallel random oracle model (PROM) the cost of evaluating an iMHF like Argon2i or DRSample+BRG is given by the pebbling cost of the underlying DAG. Prior pebbling reductions assumed that the iMHF round function concatenates input labels before hashing and did not apply to practical iMHFs such as Argon2i, DRSample or DRSample+BRG where input labels are instead XORed together.
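A minimal sketch of the bit-reversal graph component used in the DRSample+BRG candidate above: two length-N chains, with a cross edge from node i in the first chain to the node at the bit-reversed index in the second chain. How the paper stitches BRG onto DRSample is not reproduced here, and the indexing conventions are illustrative.

```python
def bit_reverse(i, n):
    # reverse the n-bit binary representation of i, e.g. 001 -> 100
    return int(format(i, f"0{n}b")[::-1], 2)

def brg_edges(n):
    N = 1 << n
    edges = []
    for v in range(1, N):                  # first chain: 0 -> 1 -> ... -> N-1
        edges.append((v - 1, v))
    for v in range(1, N):                  # second chain: N -> ... -> 2N-1
        edges.append((N + v - 1, N + v))
    for i in range(N):                     # bit-reversal cross edges
        edges.append((i, N + bit_reverse(i, n)))
    return edges

print(brg_edges(3))
```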
2019
CRYPTO
Simultaneous Amplification: The Case of Non-interactive Zero-Knowledge 📺
In this work, we explore the question of simultaneous privacy and soundness amplification for non-interactive zero-knowledge argument systems (NIZK). We show that any $$\delta_s$$-sound and $$\delta_z$$-zero-knowledge NIZK candidate satisfying $$\delta_s+\delta_z=1-\epsilon$$, for any constant $$\epsilon >0$$, can be turned into a computationally sound and zero-knowledge candidate with the only extra assumption of a subexponentially secure public-key encryption. We develop novel techniques to leverage the use of the leakage simulation lemma (Jetchev-Pietrzak, TCC 2014) to argue amplification. A crucial component of our result is a new notion for secret sharing $$\mathsf {NP}$$ instances. We believe that this may be of independent interest. To achieve this result we analyze the following two transformations:
– Parallel repetition: We show that using parallel repetition any $$\delta_s$$-sound and $$\delta_z$$-zero-knowledge $$\mathsf {NIZK}$$ candidate can be turned into a (roughly) $$\delta^n_s$$-sound and $$1-(1-\delta_{z})^n$$-zero-knowledge candidate, where n is the repetition parameter.
– MPC-based repetition: We propose a new transformation that amplifies zero-knowledge in the same way that parallel repetition amplifies soundness. We show that using this transformation any $$\delta_s$$-sound and $$\delta_z$$-zero-knowledge $$\mathsf {NIZK}$$ candidate can be turned into a (roughly) $$1-(1-\delta_s)^n$$-sound and $$2\cdot \delta^n_{z}$$-zero-knowledge candidate.
We then show that using these transformations in a zig-zag fashion we can obtain our result. Finally, we also present a simple transformation which directly turns any $$\mathsf {NIZK}$$ candidate satisfying $$\delta_s,\delta_z<1/3 -1/\mathsf {poly}(\lambda )$$ into a secure one.
2019
CRYPTO
The Privacy Blanket of the Shuffle Model 📺
This work studies differential privacy in the context of the recently proposed shuffle model. Unlike in the local model, where the server collecting privatized data from users can track back an input to a specific user, in the shuffle model users submit their privatized inputs to a server anonymously. This setup yields a trust model which sits in between the classical curator and local models for differential privacy. The shuffle model is the core idea in the Encode, Shuffle, Analyze (ESA) model introduced by Bittau et al. (SOSP 2017). Recent work by Cheu et al. (EUROCRYPT 2019) analyzes the differential privacy properties of the shuffle model and shows that in some cases shuffled protocols provide strictly better accuracy than local protocols. Additionally, Erlingsson et al. (SODA 2019) provide a privacy amplification bound quantifying the level of curator differential privacy achieved by the shuffle model in terms of the local differential privacy of the randomizer used by each user. In this context, we make three contributions. First, we provide an optimal single message protocol for summation of real numbers in the shuffle model. Our protocol is very simple and has better accuracy and communication than the protocols for this same problem proposed by Cheu et al. Optimality of this protocol follows from our second contribution, a new lower bound for the accuracy of private protocols for summation of real numbers in the shuffle model. The third contribution is a new amplification bound for analyzing the privacy of protocols in the shuffle model in terms of the privacy provided by the corresponding local randomizer. Our amplification bound generalizes the results by Erlingsson et al. to a wider range of parameters, and provides a whole family of methods to analyze privacy amplification in the shuffle model.
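To make the trust model concrete, here is a toy sketch of the shuffle-model pipeline: each user applies a local randomizer, the shuffler forwards the reports in random order, and the analyzer debiases the sum. One-bit randomized response is used as an illustrative randomizer; the paper's optimal real-summation protocol is different.

```python
import random

def local_randomizer(bit, p_truth=0.75):
    # 1-bit randomized response: report truthfully with probability p_truth
    return bit if random.random() < p_truth else 1 - bit

def shuffle_and_estimate(bits, p_truth=0.75):
    reports = [local_randomizer(b, p_truth) for b in bits]
    random.shuffle(reports)          # the shuffler: analyzer sees only the
    s = sum(reports)                 # anonymous multiset of reports
    n = len(bits)
    # E[report] = (2p - 1) * bit + (1 - p), so invert that affine map:
    return (s - (1 - p_truth) * n) / (2 * p_truth - 1)

data = [random.randrange(2) for _ in range(10000)]
print(sum(data), round(shuffle_and_estimate(data)))
```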
2019
CRYPTO
Realizing Chosen Ciphertext Security Generically in Attribute-Based Encryption and Predicate Encryption 📺
We provide generic and black box transformations from any chosen plaintext secure Attribute-Based Encryption (ABE) or One-sided Predicate Encryption system into a chosen ciphertext secure system. Our transformation requires only the IND-CPA security of the original ABE scheme coupled with a pseudorandom generator (PRG) with a special security property. In particular, we consider a PRG with an n bit input $$s \in \{0,1\}^n$$ and $$n \cdot \ell $$ bit output $$y_1, \ldots , y_n$$ where each $$y_i$$ is an $$\ell $$ bit string. Then for a randomly chosen s the following two distributions should be computationally indistinguishable. In the first distribution $$r_{s_i, i} = y_i$$ and $$r_{\bar{s}_i, i}$$ is chosen randomly for $$i \in [n]$$. In the second distribution all $$r_{b, i}$$ are chosen randomly for $$i \in [n], b \in \{0,1\}$$.We show that such PRGs can be built from either the computational Diffie-Hellman assumption (in non-bilinear groups) or the Learning with Errors (LWE) assumption (and potentially other assumptions). Thus, one can transform any IND-CPA secure system into a chosen ciphertext secure one by adding either assumption. (Or by simply assuming an existing PRG is hinting secure.) In addition, our work provides a new approach and perspective for obtaining chosen ciphertext security in the basic case of public key encryption.
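The two distributions in the special PRG security property above can be made concrete with a small sampler; the SHA-256-based stand-in PRG and the sizes n and ell below are purely illustrative.

```python
import hashlib, secrets

n, ell = 8, 16   # illustrative: n-bit seed, n output blocks of ell bits

def prg(seed: bytes):
    # stretch the seed into y_1, ..., y_n, each ell bits (ell//8 bytes)
    return [hashlib.sha256(seed + bytes([i])).digest()[: ell // 8]
            for i in range(n)]

def dist_real():
    s = secrets.randbits(n)
    y = prg(s.to_bytes((n + 7) // 8, "big"))
    r = [[secrets.token_bytes(ell // 8) for _ in range(n)] for _ in range(2)]
    for i in range(n):
        r[(s >> i) & 1][i] = y[i]   # r_{s_i, i} = y_i; the other cell random
    return r

def dist_ideal():
    # all r_{b, i} uniformly random
    return [[secrets.token_bytes(ell // 8) for _ in range(n)] for _ in range(2)]

print(dist_real()[0][0].hex(), dist_ideal()[0][0].hex())
```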
2019
CRYPTO
Match Me if You Can: Matchmaking Encryption and Its Applications 📺
We introduce a new form of encryption that we name matchmaking encryption (ME). Using ME, sender S and receiver R (each with its own attributes) can both specify policies the other party must satisfy in order for the message to be revealed. The main security guarantee is that of privacy-preserving policy matching: During decryption nothing is leaked beyond the fact that a match occurred/did not occur. ME opens up new ways of secretly communicating, and enables several new applications where both participants can specify fine-grained access policies to encrypted data. For instance, in social matchmaking, S can encrypt a file containing his/her personal details and specify a policy so that the file can be decrypted only by his/her ideal partner. On the other end, a receiver R will be able to decrypt the file only if S corresponds to his/her ideal partner defined through a policy. On the theoretical side, we define security for ME, as well as provide generic frameworks for constructing ME from functional encryption. These constructions need to face the technical challenge of simultaneously checking the policies chosen by S and R, to avoid any leakage. On the practical side, we construct an efficient identity-based scheme for equality policies, with provable security in the random oracle model under the standard BDH assumption. We implement and evaluate our scheme and provide experimental evidence that our construction is practical. We also apply identity-based ME to a concrete use case, in particular for creating an anonymous bulletin board over a Tor network.
2019
CRYPTO
ABE for DFA from k-Lin 📺
We present the first attribute-based encryption (ABE) scheme for deterministic finite automaton (DFA) based on static assumptions in bilinear groups; this resolves an open problem posed by Waters (CRYPTO 2012). Our main construction achieves selective security against unbounded collusions under the standard k-linear assumption in prime-order bilinear groups, whereas previous constructions all rely on q-type assumptions.
2019
CRYPTO
Attribute Based Encryption (and more) for Nondeterministic Finite Automata from LWE 📺
Constructing Attribute Based Encryption (ABE) [56] for uniform models of computation from standard assumptions is an important problem, about which very little is known. The only known ABE schemes in this setting that (i) avoid reliance on multilinear maps or indistinguishability obfuscation, (ii) support unbounded length inputs and (iii) permit unbounded key requests to the adversary in the security game, are by Waters from Crypto, 2012 [57] and its variants. Waters provided the first ABE for Deterministic Finite Automata (DFA) satisfying the above properties, from a parametrized or “q-type” assumption over bilinear maps. Generalizing this construction to Nondeterministic Finite Automata (NFA) was left as an explicit open problem in the same work, and has seen no progress to date. Constructions from other assumptions such as more standard pairing based assumptions, or lattice based assumptions have also proved elusive. In this work, we construct the first symmetric key attribute based encryption scheme for nondeterministic finite automata (NFA) from the learning with errors (LWE) assumption. Our scheme supports unbounded length inputs as well as unbounded length machines. In more detail, secret keys in our construction are associated with an NFA M of unbounded length, ciphertexts are associated with a tuple $$(\mathbf {x}, m)$$ where $$\mathbf {x}$$ is a public attribute of unbounded length and m is a secret message bit, and decryption recovers m if and only if $$M(\mathbf {x})=1$$. Further, we leverage our ABE to achieve (restricted notions of) attribute hiding analogous to the circuit setting, obtaining the first predicate encryption and bounded key functional encryption schemes for NFA from LWE. We achieve machine hiding in the single/bounded key setting to obtain the first reusable garbled NFA from standard assumptions. In terms of lower bounds, we show that secret key functional encryption even for DFAs, with security against unbounded key requests implies indistinguishability obfuscation ($$\mathsf {iO}$$) for circuits; this suggests a barrier in achieving full fledged functional encryption for NFA.
2019
CRYPTO
The Distinction Between Fixed and Random Generators in Group-Based Assumptions 📺
There is surprisingly little consensus on the precise role of the generator g in group-based assumptions such as DDH. Some works consider g to be a fixed part of the group description, while others take it to be random. We study this subtle distinction from a number of angles. In the generic group model, we demonstrate the plausibility of groups in which random-generator DDH (resp. CDH) is hard but fixed-generator DDH (resp. CDH) is easy. We observe that such groups have interesting cryptographic applications. We find that seemingly tight generic lower bounds for the Discrete-Log and CDH problems with preprocessing (Corrigan-Gibbs and Kogan, Eurocrypt 2018) are not tight in the sub-constant success probability regime if the generator is random. We resolve this by proving tight lower bounds for the random generator variants; our results formalize the intuition that using a random generator will reduce the effectiveness of preprocessing attacks. We observe that DDH-like assumptions in which exponents are drawn from low-entropy distributions are particularly sensitive to the fixed- vs. random-generator distinction. Most notably, we discover that the Strong Power DDH assumption of Komargodski and Yogev (Eurocrypt 2018) used for non-malleable point obfuscation is in fact false precisely because it requires a fixed generator. In response, we formulate an alternative fixed-generator assumption that suffices for a new construction of non-malleable point obfuscation, and we prove the assumption holds in the generic group model. We also give a generic group proof for the security of fixed-generator, low-entropy DDH (Canetti, Crypto 1997).
2019
CRYPTO
Unifying Computational Entropies via Kullback–Leibler Divergence 📺
We introduce hardness in relative entropy, a new notion of hardness for search problems which on the one hand is satisfied by all one-way functions and on the other hand implies both next-block pseudoentropy and inaccessible entropy, two forms of computational entropy used in recent constructions of pseudorandom generators and statistically hiding commitment schemes, respectively. Thus, hardness in relative entropy unifies the latter two notions of computational entropy and sheds light on the apparent “duality” between them. Additionally, it yields a more modular and illuminating proof that one-way functions imply next-block inaccessible entropy, similar in structure to the proof that one-way functions imply next-block pseudoentropy (Vadhan and Zheng, STOC ‘12).
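For reference, the divergence in the title is the standard information-theoretic quantity; a minimal statement for distributions P and Q over a finite set X:

```latex
\[
  \mathrm{D}\!\left(P \,\middle\|\, Q\right)
    \;=\; \sum_{x \in X} P(x)\,\log \frac{P(x)}{Q(x)},
\]
% with the usual conventions that 0 * log(0/q) = 0, and that the divergence
% is infinite when P is not absolutely continuous with respect to Q.
```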
2019
CRYPTO
Trapdoor Hash Functions and Their Applications 📺
We introduce a new primitive, called trapdoor hash functions (TDH), which are hash functions $$\mathsf {H}: \{0,1\}^n \rightarrow \{0,1\}^\lambda $$ with additional trapdoor function-like properties. Specifically, given an index $$i\in [n]$$, TDHs allow for sampling an encoding key $$\mathsf {ek}$$ (that hides i) along with a corresponding trapdoor. Furthermore, given $$\mathsf {H}(x)$$, a hint value $$\mathsf {E}(\mathsf {ek},x)$$, and the trapdoor corresponding to $$\mathsf {ek}$$, the $$i^{th}$$ bit of x can be efficiently recovered. In this setting, one of our main questions is: How small can the hint value $$\mathsf {E}(\mathsf {ek},x)$$ be? We obtain constructions where the hint is only one bit long based on DDH, QR, DCR, or LWE. This primitive opens a floodgate of applications for low-communication secure computation. We mainly focus on two-message protocols between a receiver and a sender, with private inputs x and y, resp., where the receiver should learn f(x, y). We wish to optimize the (download) rate of such protocols, namely the asymptotic ratio between the size of the output and the sender’s message. Using TDHs, we obtain:
1. The first protocols for (two-message) rate-1 string OT based on DDH, QR, or LWE. This has several useful consequences, such as:
(a) The first constructions of PIR with communication cost poly-logarithmic in the database size based on DDH or QR. These protocols are in fact rate-1 when considering block PIR.
(b) The first constructions of a semi-compact homomorphic encryption scheme for branching programs, where the encrypted output grows only with the program length, based on DDH or QR.
(c) The first constructions of lossy trapdoor functions with input to output ratio approaching 1 based on DDH, QR or LWE.
(d) The first constant-rate LWE-based construction of a 2-message “statistically sender-private” OT protocol in the plain model.
2. The first rate-1 protocols (under any assumption) for n parallel OTs and matrix-vector products from DDH, QR or LWE.
We further consider the setting where f evaluates a RAM program y with running time $$T\ll |x|$$ on x. We obtain the first protocols with communication sublinear in the size of x, namely $$T\cdot \sqrt{|x|}$$ or $$T\cdot \sqrt[3]{|x|}$$, based on DDH or, resp., pairings (and correlated-input secure hash functions).
2019
CRYPTO
CCA Security and Trapdoor Functions via Key-Dependent-Message Security 📺
We study the relationship among public-key encryption (PKE) satisfying indistinguishability against chosen plaintext attacks (IND-CPA security), that against chosen ciphertext attacks (IND-CCA security), and trapdoor functions (TDF). Specifically, we aim at finding a unified approach and some additional requirement to realize IND-CCA secure PKE and TDF based on IND-CPA secure PKE, and show the following two main results. As the first main result, we show how to achieve IND-CCA security via a weak form of key-dependent-message (KDM) security. More specifically, we construct an IND-CCA secure PKE scheme based on an IND-CPA secure PKE scheme and a secret-key encryption (SKE) scheme satisfying one-time KDM security with respect to projection functions (projection-KDM security). Projection functions are very simple functions with respect to which KDM security has been widely studied. Since the existence of projection-KDM secure PKE implies that of the above two building blocks, as a corollary of this result, we see that the existence of IND-CCA secure PKE is implied by that of projection-KDM secure PKE. As the second main result, we extend the above construction of IND-CCA secure PKE into that of TDF by additionally requiring a mild requirement for each building block. Our TDF satisfies adaptive one-wayness. We can instantiate our TDF based on a wide variety of computational assumptions. Especially, we obtain the first TDF (with adaptive one-wayness) based on the sub-exponential hardness of the constant-noise learning-parity-with-noise (LPN) problem.
2019
CRYPTO
Zero-Knowledge Proofs on Secret-Shared Data via Fully Linear PCPs 📺
We introduce and study the notion of fully linear probabilistically checkable proof systems. In such a proof system, the verifier can make a small number of linear queries that apply jointly to the input and a proof vector. Our new type of proof system is motivated by applications in which the input statement is not fully available to any single verifier, but can still be efficiently accessed via linear queries. This situation arises in scenarios where the input is partitioned or secret-shared between two or more parties, or alternatively is encoded using an additively homomorphic encryption or commitment scheme. This setting appears in the context of secure messaging platforms, verifiable outsourced computation, PIR writing, private computation of aggregate statistics, and secure multiparty computation (MPC). In all these applications, there is a need for fully linear proof systems with short proofs. While several efficient constructions of fully linear proof systems are implicit in the interactive proofs literature, many questions about their complexity are open. We present several new constructions of fully linear zero-knowledge proof systems with sublinear proof size for “simple” or “structured” languages. For example, in the non-interactive setting of fully linear PCPs, we show how to prove that an input vector $$x\in {\mathbb {F}}^n$$, for a finite field $${\mathbb {F}}$$, satisfies a single degree-2 equation with a proof of size $$O(\sqrt{n})$$ and $$O(\sqrt{n})$$ linear queries, which we show to be optimal. More generally, for languages that can be recognized by systems of constant-degree equations, we can reduce the proof size to $$O(\log n)$$ at the cost of $$O(\log n)$$ rounds of interaction. We use our new proof systems to construct new short zero-knowledge proofs on distributed and secret-shared data. These proofs can be used to improve the performance of the example systems mentioned above. Finally, we observe that zero-knowledge proofs on distributed data provide a general-purpose tool for protecting MPC protocols against malicious parties. Applying our short fully linear PCPs to “natural” MPC protocols in the honest-majority setting, we can achieve unconditional protection against malicious parties with sublinear additive communication cost. We use this to improve the communication complexity of recent honest-majority MPC protocols. For instance, using any pseudorandom generator, we obtain a 3-party protocol for Boolean circuits in which the amortized communication cost is only one bit per AND gate per party (compared to 10 bits in the best previous protocol), matching the best known protocols for semi-honest parties.
2019
CRYPTO
Non-Uniformly Sound Certificates with Applications to Concurrent Zero-Knowledge 📺
We introduce the notion of non-uniformly sound certificates: succinct single-message (unidirectional) argument systems that satisfy a “best-possible security” against non-uniform polynomial-time attackers. In particular, no polynomial-time attacker with s bits of non-uniform advice can find significantly more than s accepting proofs for false statements. Our first result is a construction of non-uniformly sound certificates for all $$\mathbf{NP }$$ in the random oracle model, where the attacker’s advice can depend arbitrarily on the random oracle. We next show that the existence of non-uniformly sound certificates for $$\mathbf{P }$$ (and collision resistant hash functions) yields a public-coin constant-round fully concurrent zero-knowledge argument for $$\mathbf{NP } $$.
2019
CRYPTO
On Round Optimal Statistical Zero Knowledge Arguments 📺
We construct the first three message statistical zero knowledge arguments for all of NP, matching the known lower bound. We do so based on keyless multi-collision resistant hash functions and the Learning with Errors assumption—the same assumptions used to obtain round optimal computational zero knowledge. The main component in our construction is a statistically witness indistinguishable argument of knowledge based on a new notion of statistically hiding commitments with subset opening.
2019
CRYPTO
It Wasn’t Me! 📺
Ring signatures, introduced by [RST01], are a variant of digital signatures which certify that one among a particular set of parties has endorsed a message while hiding which party in the set was the signer. Ring signatures are designed to allow anyone to attach anyone else’s name to a signature, as long as the signer’s own name is also attached. But what guarantee do ring signatures provide if a purported signatory wishes to denounce a signed message—or alternatively, if a signatory wishes to later come forward and claim ownership of a signature? Prior security definitions for ring signatures do not give a conclusive answer to this question: under most existing definitions, the guarantees could go either way. That is, it is consistent with some standard definitions that a non-signer might be able to repudiate a signature that he did not produce, or that this might be impossible. Similarly, a signer might be able to later convincingly claim that a signature he produced is indeed his own, or not. Any of these guarantees might be desirable. For instance, a whistleblower might have reason to want to later claim an anonymously released signature, or a person falsely implicated in a crime associated with a ring signature might wish to denounce the signature that is framing them and damaging their reputation. In other circumstances, it might be desirable that even under duress, a member of a ring cannot produce proof that he did or did not sign a particular signature. In any case, a guarantee one way or the other seems highly desirable. In this work, we formalize definitions and give constructions of the new notions of repudiable, unrepudiable, claimable, and unclaimable ring signatures. Our repudiable construction is based on VRFs, which are implied by several number-theoretic assumptions (including strong RSA or bilinear maps); our claimable construction is a black-box transformation from any standard ring signature scheme to a claimable one; and our unclaimable construction is derived from the lattice-based ring signatures of [BK10], which rely on hardness of SIS. Our repudiable construction also provides a new construction of standard ring signatures.
2019
CRYPTO
Two-Party ECDSA from Hash Proof Systems and Efficient Instantiations 📺
ECDSA is a widely adopted digital signature standard. Unfortunately, efficient distributed variants of this primitive are notoriously hard to achieve, and known solutions often require expensive zero-knowledge proofs to deal with malicious adversaries. For the two-party case, Lindell [Lin17] recently managed to get an efficient solution which, to achieve simulation-based security, relies on an interactive, non-standard assumption on Paillier’s cryptosystem. In this paper we generalize Lindell’s solution using hash proof systems. The main advantage of our generic method is that it results in a simulation-based security proof without resorting to non-standard interactive assumptions. Moving to concrete constructions, we show how to instantiate our framework using class groups of imaginary quadratic fields. Our implementations show that the practical impact of dropping such interactive assumptions is minimal. Indeed, while for 128-bit security our scheme is marginally slower than Lindell’s, for 256-bit security it turns out to be better both in key generation and signing time. Moreover, in terms of communication cost, our implementation significantly reduces both the number of rounds and the number of transmitted bits, without exception.
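The difficulty in distributing ECDSA stems from its signing equation, which mixes the inverse of the nonce k with the secret key x multiplicatively. As a hedged, minimal sketch (not the paper's protocol), the plain single-party equation looks as follows; all values here are toy placeholders, and r is taken as given rather than derived from the point k·G:

```python
import hashlib

# secp256k1 group order (a standard public constant)
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

x = 123456789          # toy signing key (hypothetical value)
k = 987654321          # toy nonce
r = 555555555          # stand-in for the x-coordinate of k*G (not computed here)
h = int.from_bytes(hashlib.sha256(b"message").digest(), "big") % n

s = pow(k, -1, n) * (h + r * x) % n          # the ECDSA signing equation
# Sanity check of the algebra a verifier relies on: k = s^{-1}(h + r*x) mod n.
assert k == pow(s, -1, n) * (h + r * x) % n
print(hex(s))
```

Two-party protocols such as the ones discussed above must compute s jointly while neither party ever learns x or k in full, which is what makes the multiplicative term $$k^{-1}(h + rx)$$ hard to distribute.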
2019
CRYPTO
Asymmetric Message Franking: Content Moderation for Metadata-Private End-to-End Encryption 📺
Content moderation is crucial for stopping abusive and harassing messages in online platforms. Existing moderation mechanisms, such as message franking, require platform providers to be able to associate user identifiers to encrypted messages. These mechanisms fail in metadata-private messaging systems, such as Signal, where users can hide their identities from platform providers. The key technical challenge preventing moderation is achieving cryptographic accountability while preserving deniability. In this work, we resolve this tension with a new cryptographic primitive: asymmetric message franking (AMF) schemes. We define strong security notions for AMF schemes, including the first formal treatment of deniability in moderation settings. We then construct, analyze, and implement an AMF scheme that is fast enough to use for content moderation of metadata-private messaging.
2019
CRYPTO
Statistical Zeroizing Attack: Cryptanalysis of Candidates of BP Obfuscation over GGH15 Multilinear Map 📺
We present a new cryptanalytic algorithm against obfuscations based on the GGH15 multilinear map. Our algorithm, the statistical zeroizing attack, directly distinguishes two distributions from obfuscation while following the zeroizing attack paradigm, that is, it uses evaluations of zeros of obfuscated programs. Our attack breaks the recent indistinguishability obfuscation candidate suggested by Chen et al. (CRYPTO’18) for the optimal parameter settings. More precisely, we show that there are two functionally equivalent branching programs whose CVW obfuscations can be efficiently distinguished by computing the sample variance of evaluations. This statistical attack gives a new perspective on the security of indistinguishability obfuscation: one should consider the shape of the distributions of evaluations of an obfuscation to ensure security. In other words, while most of the previous (weak) security proofs have been studied with respect to an algebraic attack model or an ideal model, our attack shows that this algebraic security is not enough to achieve indistinguishability obfuscation. In particular, we show that the obfuscation scheme suggested by Bartusek et al. (TCC’18) does not achieve the desired security in a certain parameter regime, in which their algebraic security proof still holds. The correctness of the statistical zeroizing attack holds under a mild assumption on the preimage sampling algorithm with a lattice trapdoor. We experimentally verify this assumption for the obfuscation implemented by Halevi et al. (ACM CCS’17).
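To illustrate the statistical idea in the abstract (and only that idea; this is not the paper's attack), a distinguisher can separate two sources with identical means but different variances simply by thresholding the sample variance. All distributions and the threshold below are made up for the toy:

```python
import random
import statistics

def evals(sigma, trials=2000):
    # Stand-in for zero-test evaluations of one obfuscated program.
    return [random.gauss(0.0, sigma) for _ in range(trials)]

samples_a = evals(sigma=1.0)   # hypothetical program A: variance 1.0
samples_b = evals(sigma=1.5)   # functionally equivalent program B: variance 2.25

threshold = 1.625              # midpoint between the two assumed true variances
for name, s in (("A", samples_a), ("B", samples_b)):
    v = statistics.variance(s)
    print(name, "sample variance:", round(v, 3),
          "-> guess:", "B" if v > threshold else "A")
```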
2019
CRYPTO
Indistinguishability Obfuscation Without Multilinear Maps: New Paradigms via Low Degree Weak Pseudorandomness and Security Amplification 📺
The existence of secure indistinguishability obfuscators ($$i\mathcal{O}$$) has far-reaching implications, significantly expanding the scope of problems amenable to cryptographic study. All known approaches to constructing $$i\mathcal{O}$$ rely on d-linear maps. While secure bilinear maps are well established in the cryptographic literature, the security of candidates for $$d>2$$ is poorly understood. We propose a new approach to constructing $$i\mathcal{O}$$ for general circuits. Unlike all previously known realizations of $$i\mathcal{O}$$, we avoid the use of d-linear maps of degree $$d \ge 3$$. At the heart of our approach is the assumption that a new weak pseudorandom object exists. We consider two related variants of these objects, which we call perturbation resilient generator ($$\varDelta$$RG) and pseudo flawed-smudging generator ($$\mathrm{PFG}$$), respectively. At a high level, both objects are polynomially expanding functions whose outputs partially hide (or smudge) small noise vectors when added to them. We further require that they are computable by a family of degree-3 polynomials over $$\mathbb{Z}$$. We show how they can be used to construct functional encryption schemes with weak security guarantees. Finally, we use novel amplification techniques to obtain full security. As a result, we obtain $$i\mathcal{O}$$ for general circuits assuming: (i) subexponentially secure LWE; (ii) bilinear maps; (iii) $$\mathrm{poly}(\lambda)$$-secure 3-block-local PRGs; and (iv) $$\varDelta$$RGs or $$\mathrm{PFG}$$s.
2019
CRYPTO
Watermarking PRFs from Lattices: Stronger Security via Extractable PRFs 📺
A software watermarking scheme enables one to embed a “mark” (i.e., a message) within a program while preserving the program’s functionality. Moreover, there is an extraction algorithm that recovers an embedded message from a program. The main security goal is that it should be difficult to remove the watermark without destroying the functionality of the program. Existing constructions of watermarking focus on watermarking cryptographic functions like pseudorandom functions (PRFs); even in this setting, realizing watermarking from standard assumptions remains difficult. The first lattice-based construction of secret-key watermarking due to Kim and Wu (CRYPTO 2017) only ensures mark-unremovability against an adversary who does not have access to the mark-extraction oracle. The construction of Quach et al. (TCC 2018) achieves the stronger notion of mark-unremovability even if the adversary can make extraction queries, but has the drawback that the watermarking authority (who holds the watermarking secret key) can break pseudorandomness of all PRF keys in the family (including unmarked keys). In this work, we construct new lattice-based secret-key watermarking schemes for PRFs that both provide unremovability against adversaries that have access to the mark-extraction oracle and offer a strong and meaningful notion of pseudorandomness even against the watermarking authority (i.e., the outputs of unmarked keys are pseudorandom almost everywhere). Moreover, security of several of our schemes can be based on the hardness of computing nearly polynomial approximations to worst-case lattice problems. This is a qualitatively weaker assumption than that needed for existing lattice-based constructions of watermarking (that support message-embedding), all of which require quasi-polynomial approximation factors. Our constructions rely on a new cryptographic primitive called an extractable PRF, which may be of independent interest.
2019
CRYPTO
Watermarking Public-Key Cryptographic Primitives 📺
A software watermarking scheme enables users to embed a message or mark within a program while preserving its functionality. Moreover, it is difficult for an adversary to remove a watermark from a marked program without corrupting its behavior. Existing constructions of software watermarking from standard assumptions have focused exclusively on watermarking pseudorandom functions (PRFs). In this work, we study watermarking public-key primitives such as the signing key of a digital signature scheme or the decryption key of a public-key (predicate) encryption scheme. While watermarking public-key primitives might intuitively seem more challenging than watermarking PRFs, our constructions only rely on simple assumptions. Our watermarkable signature scheme can be built from the minimal assumption of one-way functions, while our watermarkable public-key encryption scheme can be built from most standard algebraic assumptions that imply public-key encryption (e.g., factoring, discrete log, or lattice assumptions). Our schemes also satisfy a number of appealing properties: public marking, public mark-extraction, and collusion resistance. Our schemes are the first to simultaneously achieve all of these properties. The key enabler of our new constructions is a relaxed notion of functionality-preserving. While traditionally, we require that a marked program (approximately) preserve the input/output behavior of the original program, in the public-key setting, preserving the “functionality” does not necessarily require preserving the exact input/output behavior. For instance, if we want to mark a signing algorithm, it suffices that the marked algorithm still output valid signatures (even if those signatures might be different from the ones output by the unmarked algorithm). Similarly, if we want to mark a decryption algorithm, it suffices that the marked algorithm correctly decrypt all valid ciphertexts (but may behave differently from the unmarked algorithm on invalid or malformed ciphertexts). Our relaxed notion of functionality-preserving captures the essence of watermarking and still supports the traditional applications, but provides additional flexibility to enable new and simple realizations of this powerful cryptographic notion.
2019
CRYPTO
SpOT-Light: Lightweight Private Set Intersection from Sparse OT Extension 📺
We describe a novel approach for two-party private set intersection (PSI) with semi-honest security. Compared to existing PSI protocols, ours has a more favorable balance between communication and computation. Specifically, our protocol has the lowest monetary cost of any known PSI protocol, when run over the Internet using cloud-based computing services (taking into account current rates for CPU + data). On slow networks (e.g., 10 Mbps) our protocol is actually the fastest. Our novel underlying technique is a variant of oblivious transfer (OT) extension that we call sparse OT extension. Conceptually it can be thought of as a communication-efficient multipoint oblivious PRF evaluation. Our sparse OT technique relies heavily on manipulating high-degree polynomials over large finite fields (i.e., elements whose representation requires hundreds of bits). We introduce extensive algorithmic and engineering improvements for interpolation and multi-point evaluation of such polynomials, which we believe will be of independent interest. Finally, we present an extensive empirical comparison of state-of-the-art PSI protocols in several application scenarios and along several dimensions of measurement: running time, communication, peak memory consumption, and—arguably the most relevant metric for practice—monetary cost.
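For readers unfamiliar with the polynomial arithmetic the abstract mentions, here is a hedged, naive sketch of interpolation and multi-point evaluation over a prime field. The field, points, and the O(n²) Lagrange method are illustrative only; the paper develops far faster algorithms:

```python
P = 2**61 - 1  # toy Mersenne-prime field; not the paper's parameters

def interpolate(points):
    # Naive O(n^2) Lagrange interpolation; returns coefficients, low degree first.
    xs, ys = zip(*points)
    n = len(points)
    coeffs = [0] * n
    for i in range(n):
        basis, denom = [1], 1
        for j in range(n):
            if j == i:
                continue
            new = [0] * (len(basis) + 1)
            for k, c in enumerate(basis):      # multiply basis by (X - xs[j])
                new[k] = (new[k] - xs[j] * c) % P
                new[k + 1] = (new[k + 1] + c) % P
            basis = new
            denom = denom * (xs[i] - xs[j]) % P
        scale = ys[i] * pow(denom, P - 2, P) % P
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs

def evaluate(coeffs, x):
    acc = 0
    for c in reversed(coeffs):                 # Horner's rule
        acc = (acc * x + c) % P
    return acc

pts = [(1, 5), (2, 11), (3, 19)]               # lie on f(x) = x^2 + 3x + 1
f = interpolate(pts)
assert all(evaluate(f, x) == y for x, y in pts)
print([evaluate(f, x) for x in (4, 5)])        # naive multi-point evaluation
```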
2019
CRYPTO
Universally Composable Secure Computation with Corrupted Tokens 📺
We introduce the corrupted token model. This model generalizes the tamper-proof token model proposed by Katz (EUROCRYPT ’07), relaxing the trust assumption on the honest behavior of tokens. Our model is motivated by the real-world practice of outsourcing hardware production to possibly corrupted manufacturers. We capture the malicious behavior of token manufacturers by allowing the adversary to corrupt the tokens of honest players at the time of their creation. We show that under minimal complexity assumptions, i.e., the existence of one-way functions, it is possible to UC-securely realize (a variant of) the tamper-proof token functionality of Katz in the corrupted token model with n stateless tokens, assuming that the adversary corrupts at most $$n-1$$ of them (for any $$n>0$$). We apply this result to existing multi-party protocols in Katz’s model to achieve UC-secure MPC in the corrupted token model assuming only the existence of one-way functions. Finally, we show how to obtain the above results using tokens of small size that take only short inputs. The technique in this result can also be used to improve the assumption of UC-secure hardware obfuscation recently proposed by Nayak et al. (NDSS ’17). While their construction requires the existence of collision-resistant hash functions, we can obtain the same result from only one-way functions. Moreover, using our main result we can improve the trust assumption on the tokens as well.
2019
CRYPTO
Reusable Non-Interactive Secure Computation 📺
We consider the problem of Non-Interactive Two-Party Secure Computation (NISC), where Rachel wishes to publish an encryption of her input x, in such a way that any other party, who holds an input y, can send her a single message which conveys to her the value f(x, y), and nothing more. We demand security against malicious parties. While such protocols are easy to construct using garbled circuits and general non-interactive zero-knowledge proofs, this approach inherently makes a non-black-box use of the underlying cryptographic primitives and is infeasible in practice. Ishai et al. (Eurocrypt 2011) showed how to construct NISC protocols that only use parallel calls to an ideal oblivious transfer (OT) oracle, and additionally make only a black-box use of any pseudorandom generator. Combined with the efficient 2-message OT protocol of Peikert et al. (Crypto 2008), this leads to a practical approach to NISC that has been implemented in subsequent works. However, a major limitation of all known OT-based NISC protocols is that they are subject to selective failure attacks that allow a malicious sender to entirely compromise the security of the protocol when the receiver’s first message is reused. Motivated by the failure of the OT-based approach, we consider the problem of basing reusable NISC on parallel invocations of a standard arithmetic generalization of OT known as oblivious linear-function evaluation (OLE). We obtain the following results: We construct an information-theoretically secure reusable NISC protocol for arithmetic branching programs and general zero-knowledge functionalities in the OLE-hybrid model. Our zero-knowledge protocol only makes an absolute constant number of OLE calls per gate in an arithmetic circuit whose satisfiability is being proved. We also get reusable NISC in the OLE-hybrid model for general Boolean circuits using any one-way function. We complement this by a negative result, showing that reusable NISC is impossible to achieve in the OT-hybrid model. This provides a formal justification for the need to replace OT by OLE. We build a universally composable 2-message reusable OLE protocol in the CRS model that can be based on the security of Paillier encryption and requires only a constant number of modular exponentiations. This provides the first arithmetic analogue of the 2-message OT protocols of Peikert et al. (Crypto 2008). By combining our NISC protocol in the OLE-hybrid model and the 2-message OLE protocol, we get protocols with new attractive asymptotic and concrete efficiency features. In particular, we get the first (designated-verifier) NIZK protocols for NP where, following a statement-independent preprocessing, both proving and verifying are entirely “non-cryptographic” and involve only a constant computational overhead. Furthermore, we get the first statistical designated-verifier NIZK argument for NP under an assumption related to factoring.
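The OLE correlation itself is simple to state. As a hedged illustration (a trusted-dealer idealization, not the paper's protocol), the sender holds a line (a, b) over a field, the receiver holds a point x, and the receiver learns only the value of the line at that point; the field modulus below is a toy choice:

```python
import random

P = 2**61 - 1  # toy field modulus

def dealer_ole(a, b, x):
    # Ideal functionality: output to the receiver, nothing to the sender.
    return (a * x + b) % P

a, b = random.randrange(P), random.randrange(P)   # sender's input: a line
x = random.randrange(P)                           # receiver's input: a point
y = dealer_ole(a, b, x)
assert y == (a * x + b) % P
# A single (x, y) pair leaves one consistent (a, b) per choice of a, so the
# line stays hidden up to that constraint -- the arithmetic analogue of OT.
```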
2019
CRYPTO
Efficient Pseudorandom Correlation Generators: Silent OT Extension and More 📺
Secure multiparty computation (MPC) often relies on correlated randomness for better efficiency and simplicity. This is particularly useful for MPC with no honest majority, where input-independent correlated randomness enables a lightweight “non-cryptographic” online phase once the inputs are known. However, since the amount of randomness typically scales with the circuit size of the function being computed, securely generating correlated randomness forms an efficiency bottleneck, involving a large amount of communication and storage. A natural tool for addressing the above limitations is a pseudorandom correlation generator (PCG). A PCG allows two or more parties to securely generate long sources of useful correlated randomness via a local expansion of correlated short seeds and no interaction. PCGs enable MPC with silent preprocessing, where a small amount of interaction used for securely sampling the seeds is followed by silent local generation of correlated pseudorandomness. A concretely efficient PCG for Vector-OLE correlations was recently obtained by Boyle et al. (CCS 2018) based on variants of the learning parity with noise (LPN) assumption over large fields. In this work, we initiate a systematic study of PCGs and present concretely efficient constructions for several types of useful MPC correlations. We obtain the following main contributions: PCG foundations. We give a general security definition for PCGs. Our definition suffices for any MPC protocol satisfying a stronger security requirement that is met by existing protocols. We prove that a stronger security requirement is indeed necessary, and justify our PCG definition by ruling out a stronger and more natural definition. Silent OT extension. We present the first concretely efficient PCG for oblivious transfer correlations. Its security is based on a variant of the binary LPN assumption and any correlation-robust hash function. We expect it to provide a faster alternative to the IKNP OT extension protocol (Crypto 2003) when communication is the bottleneck. We present several applications, including protocols for non-interactive zero-knowledge with bounded-reusable preprocessing from binary LPN, and concretely efficient related-key oblivious pseudorandom functions. PCGs for simple 2-party correlations. We obtain PCGs for several other types of useful 2-party correlations, including (authenticated) one-time truth-tables and Beaver triples. While the latter PCGs are slower than our PCG for OT, they are still practically feasible. These PCGs are based on a host of assumptions and techniques, including specialized homomorphic secret sharing schemes and pseudorandom generators tailored to their structure. Multiparty correlations. We obtain PCGs for multiparty correlations that can be used to make the (input-dependent) online communication of MPC protocols scale linearly with the number of parties, instead of quadratically.
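To see why correlated randomness makes the online phase "non-cryptographic", consider the Beaver triple correlation the abstract mentions. A hedged toy (dealer and shares simulated in the clear; not a PCG) showing how a pre-shared triple (a, b, c) with c = a·b lets two parties multiply secret-shared values using only openings and local arithmetic:

```python
import random

P = 2**61 - 1  # toy field modulus

def share(v):
    s0 = random.randrange(P)
    return s0, (v - s0) % P

# Offline phase: a dealer samples a random triple and shares it.
a, b = random.randrange(P), random.randrange(P)
c = a * b % P
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# Online phase: multiply secret-shared x and y with no cryptography at all.
x, y = 12345, 67890
x0, x1 = share(x); y0, y1 = share(y)
d = (x0 - a0 + x1 - a1) % P               # parties open d = x - a
e = (y0 - b0 + y1 - b1) % P               # parties open e = y - b
z0 = (c0 + d * b0 + e * a0 + d * e) % P   # party 0 also adds the d*e term
z1 = (c1 + d * b1 + e * a1) % P
assert (z0 + z1) % P == x * y % P         # z = c + db + ea + de = xy
```

A PCG's job is to replace the dealer: the parties locally expand short correlated seeds into many such triples.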
2019
CRYPTO
Non-interactive Non-malleability from Quantum Supremacy 📺
We construct non-interactive non-malleable commitments without setup in the plain model, under well-studied assumptions. First, we construct non-interactive non-malleable commitments w.r.t. commitment for $$\epsilon \log \log n$$ tags for a small constant $$\epsilon > 0$$, under the following assumptions: (1) sub-exponential hardness of factoring or discrete log, and (2) quantum sub-exponential hardness of learning with errors (LWE). Second, as our key technical contribution, we introduce a new tag amplification technique. We show how to convert any non-interactive non-malleable commitment w.r.t. commitment for $$\epsilon \log \log n$$ tags (for any constant $$\epsilon >0$$) into a non-interactive non-malleable commitment w.r.t. replacement for $$2^n$$ tags. This part only assumes the existence of sub-exponentially secure non-interactive witness indistinguishable (NIWI) proofs, which can be based on sub-exponential security of the decisional linear assumption. Interestingly, for the tag amplification technique, we crucially rely on the leakage lemma due to Gentry and Wichs (STOC 2011). For the construction of non-malleable commitments for $$\epsilon \log \log n$$ tags, we rely on quantum supremacy. This use of quantum supremacy in classical cryptography is novel, and we believe it will have future applications. We provide one such application to two-message witness indistinguishable (WI) arguments from (quantum) polynomial hardness assumptions.
2019
CRYPTO
Cryptographic Sensing 📺
Is it possible to measure a physical object in a way that makes the measurement signals unintelligible to an external observer? Alternatively, can one learn a natural concept by using a contrived training set that makes the labeled examples useless without the line of thought that has led to their choice? We initiate a study of “cryptographic sensing” problems of this type, presenting definitions, positive and negative results, and directions for further research.
2019
CRYPTO
Public-Key Cryptography in the Fine-Grained Setting 📺
Cryptography is largely based on unproven assumptions, which, while believable, might fail. Notably, if $$P = NP$$, or if we live in Pessiland, then all current cryptographic assumptions will be broken. A compelling question is whether any interesting cryptography might exist in Pessiland. A natural approach to tackle this question is to base cryptography on an assumption from fine-grained complexity. Ball, Rosen, Sabin, and Vasudevan [BRSV’17] attempted this, starting from popular hardness assumptions, such as the Orthogonal Vectors (OV) Conjecture. They obtained problems that are hard on average, assuming that OV and other problems are hard in the worst case. They obtained proofs of work, and hoped to use their average-case hard problems to build a fine-grained one-way function. Unfortunately, they proved that constructing one using their approach would violate a popular hardness hypothesis. This motivates the search for other fine-grained average-case hard problems. The main goal of this paper is to identify sufficient properties for a fine-grained average-case assumption that imply cryptographic primitives such as fine-grained public key cryptography (PKC). Our main contribution is a novel construction of a cryptographic key exchange, together with the definition of a small number of relatively weak structural properties, such that if a computational problem satisfies them, our key exchange has provable fine-grained security guarantees, based on the hardness of this problem. We then show that a natural and plausible average-case assumption for the key problem Zero-k-Clique from fine-grained complexity satisfies our properties. We also develop fine-grained one-way functions and hardcore bits even under these weaker assumptions. Where previous works had to assume random oracles or the existence of strong one-way functions to get a key exchange computable in O(n) time secure against $$O(n^2)$$ adversaries (see [Merkle’78] and [BGI’08]), our assumptions seem much weaker. Our key exchange has a similar gap between the computation of the honest party and the adversary as prior work, while being non-interactive, implying fine-grained PKC.
2019
CRYPTO
Exploring Constructions of Compact NIZKs from Various Assumptions 📺
A non-interactive zero-knowledge (NIZK) protocol allows a prover to non-interactively convince a verifier of the truth of the statement without leaking any other information. In this study, we explore shorter NIZK proofs for all $$\mathbf{NP}$$ languages. Our primary interest is NIZK proofs from falsifiable pairing/pairing-free group-based assumptions. Thus far, NIZKs in the common reference string model (CRS-NIZKs) for $$\mathbf{NP}$$ based on falsifiable pairing-based assumptions all require a proof size at least as large as $$O(|C| \kappa)$$, where C is a circuit computing the $$\mathbf{NP}$$ relation and $$\kappa$$ is the security parameter. This holds true even for the weaker designated-verifier NIZKs (DV-NIZKs). Notably, constructing a (CRS, DV)-NIZK with proof size achieving an additive overhead $$O(|C|) + \mathsf{poly}(\kappa)$$, rather than a multiplicative overhead $$|C| \cdot \mathsf{poly}(\kappa)$$, based on any falsifiable pairing-based assumption is an open problem. In this work, we present various techniques for constructing NIZKs with compact proofs, i.e., proofs smaller than $$O(|C|) + \mathsf{poly}(\kappa)$$, and make progress regarding the above situation. Our result is summarized below. We construct CRS-NIZK for all $$\mathbf{NP}$$ with proof size $$|C| + \mathsf{poly}(\kappa)$$ from a (non-static) falsifiable Diffie-Hellman (DH) type assumption over pairing groups. This is the first CRS-NIZK to achieve a compact proof without relying on either lattice-based assumptions or non-falsifiable assumptions. Moreover, a variant of our CRS-NIZK satisfies universal composability (UC) in the erasure-free adaptive setting. Although it is limited to $$\mathbf{NP}$$ relations in $$\mathbf{NC}^1$$, the proof size is $$|w| \cdot \mathsf{poly}(\kappa)$$ where w is the witness, and in particular, it matches the state-of-the-art UC-NIZK proposed by Cohen, shelat, and Wichs (CRYPTO’19) based on lattices. We construct (multi-theorem) DV-NIZKs for $$\mathbf{NP}$$ with proof size $$|C| + \mathsf{poly}(\kappa)$$ from the computational DH assumption over pairing-free groups. This is the first DV-NIZK that achieves a compact proof from a standard DH type assumption. Moreover, if we further assume the $$\mathbf{NP}$$ relation to be computable in $$\mathbf{NC}^1$$ and assume hardness of a (non-static) falsifiable DH type assumption over pairing-free groups, the proof size can be made as small as $$|w| + \mathsf{poly}(\kappa)$$. Another related but independent issue is that all (CRS, DV)-NIZKs require the running time of the prover to be at least $$|C| \cdot \mathsf{poly}(\kappa)$$. Considering that there exist NIZKs with efficient verifiers whose running time is strictly smaller than |C|, it is an interesting problem whether we can construct prover-efficient NIZKs. To this end, we construct prover-efficient CRS-NIZKs for $$\mathbf{NP}$$ with compact proof through a generic construction using laconic functional evaluation schemes (Quach, Wee, and Wichs (FOCS’18)). This is the first NIZK in any model where the running time of the prover is strictly smaller than the time it takes to compute the circuit C computing the $$\mathbf{NP}$$ relation. Finally, perhaps of independent interest, we formalize the notion of homomorphic equivocal commitments, which we use as building blocks to obtain the first result, and show how to construct them from pairing-based assumptions.
2019
CRYPTO
New Constructions of Reusable Designated-Verifier NIZKs 📺
Non-interactive zero-knowledge arguments (NIZKs) for $$\mathsf{NP}$$ are an important cryptographic primitive, but we currently only have instantiations under a few specific assumptions. Notably, we are missing constructions from the learning with errors (LWE) assumption, the Diffie-Hellman (CDH/DDH) assumption, and the learning parity with noise (LPN) assumption. In this paper, we study a relaxation of NIZKs to the designated-verifier setting (DV-NIZK), where a trusted setup generates a common reference string together with a secret key for the verifier. We want reusable schemes, which allow the verifier to reuse the secret key to verify many different proofs, and soundness should hold even if the malicious prover learns whether various proofs are accepted or rejected. Such reusable DV-NIZKs were recently constructed under the CDH assumption, but it was open whether they can also be constructed under LWE or LPN. We also consider an extension of reusable DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK). In this setting, the only trusted setup consists of a common random string. However, there is also an additional untrusted setup in which the verifier chooses a public/secret key needed to generate/verify proofs, respectively. We require that zero-knowledge holds even if the public key is chosen maliciously by the verifier. Such reusable MDV-NIZKs were recently constructed under the “one-more CDH” assumption, but constructions under CDH/LWE/LPN remained open. In this work, we give new constructions of (reusable) DV-NIZKs and MDV-NIZKs using generic primitives that can be instantiated under CDH, LWE, or LPN.
2019
CRYPTO
Scalable Zero Knowledge with No Trusted Setup 📺
One of the approaches to constructing zero knowledge (ZK) arguments relies on “PCP techniques” that date back to influential works from the early 1990s [Babai et al., Arora et al. 1991-2]. These techniques require only minimal cryptographic assumptions, namely, the existence of a family of collision-resistant hash functions [Kilian, STOC 1992], and achieve two remarkable properties: (i) all messages generated by the verifier are public random coins, and (ii) total verification time is merely poly-logarithmic in the time needed to naïvely execute the computation being verified [Babai et al., STOC 1991]. Those early constructions were never realized in code, mostly because proving time was too large. To address this, the model of interactive oracle proofs (IOPs), which generalizes the PCP model, was recently suggested. Proving time for ZK-IOPs was reduced to quasi-linear, even for problems that require nondeterministic exponential time to decide [Ben-Sasson et al., TCC 2016, ICALP 2017]. Despite these recent advances, it was still not clear whether ZK-IOP systems can lead to concretely efficient succinct argument systems. Our main claim is that this is indeed the case. We present a new construction of an IOP of knowledge (which we call a zk-STIK) that improves, asymptotically, on the state of the art: for log-space computations of length T it is the first to achieve $$O(T \log T)$$ arithmetic prover complexity and $$O(\log T)$$ verifier arithmetic complexity. Prior IOPs had additional $$\mathsf{poly} \log T$$ factors in both prover and verifier. Additionally, we report a C++ realization of this system (which we call libSTARK). Compared to prevailing ZK realizations, it has the fastest proving and (total) verification time for sufficiently large sequential computations.
2019
CRYPTO
Libra: Succinct Zero-Knowledge Proofs with Optimal Prover Computation 📺
We present Libra, the first zero-knowledge proof system that has both optimal prover time and succinct proof size/verification time. In particular, if C is the size of the circuit being proved, (i) the prover time is O(C) irrespective of the circuit type; (ii) the proof size and verification time are both $$O(d\log C)$$ for d-depth log-space uniform circuits (such as RAM programs). In addition, Libra features a one-time trusted setup that depends only on the size of the input to the circuit and not on the circuit logic. Underlying Libra is a new linear-time algorithm for the prover of the interactive proof protocol by Goldwasser, Kalai and Rothblum (also known as the GKR protocol), as well as an efficient approach to turn the GKR protocol to zero-knowledge using small masking polynomials. Not only does Libra have excellent asymptotics, but it is also efficient in practice. For example, our implementation shows that it takes 200 seconds to generate a proof for constructing a SHA2-based Merkle tree root on 256 leaves, outperforming all existing zero-knowledge proof systems. Proof size and verification time of Libra are also competitive.
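The engine inside GKR-style provers is the sumcheck protocol. Below is a hedged, self-contained sketch of sumcheck for a multilinear polynomial given by its evaluation table on the Boolean cube, with prover and verifier simulated in one function; it illustrates the primitive and its linear-time folding, not Libra's optimized O(C) prover:

```python
import random

P = 2**61 - 1  # toy field modulus

def sumcheck(table):
    """Prove/verify the sum of a multilinear poly over {0,1}^n; True on accept."""
    claim = sum(table) % P
    while len(table) > 1:
        half = len(table) // 2
        lo, hi = table[:half], table[half:]        # first variable fixed to 0 / 1
        g0, g1 = sum(lo) % P, sum(hi) % P          # round polynomial at 0 and 1
        if (g0 + g1) % P != claim:                 # verifier's consistency check
            return False
        r = random.randrange(P)                    # verifier's random challenge
        claim = (g0 + r * (g1 - g0)) % P           # g(r): g is linear per variable
        table = [(a + r * (b - a)) % P for a, b in zip(lo, hi)]  # fold the table
    return table[0] % P == claim                   # final evaluation check

table = [random.randrange(P) for _ in range(8)]    # n = 3 variables
assert sumcheck(table)                             # honest prover always accepts
```

Each round halves the table, so the total prover work is linear in the table size, which is the starting point for linear-time GKR provers.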
2019
CRYPTO
Highly Efficient Key Exchange Protocols with Optimal Tightness 📺
In this paper we give nearly-tight reductions for modern implicitly authenticated Diffie-Hellman protocols in the style of the Signal and Noise protocols, which are extremely simple and efficient. Unlike previous approaches, the combination of nearly-tight proofs and efficient protocols enables the first real-world instantiations for which the parameters can be chosen in a theoretically sound manner. Our reductions have only a linear loss in the number of users, implying that our protocols are more efficient than the state of the art when instantiated with theoretically sound parameters. We also prove that our security proofs are optimal: a linear loss in the number of users is unavoidable for our protocols for a large and natural class of reductions.
2019
CRYPTO
Strong Asymmetric PAKE Based on Trapdoor CKEM 📺
Password-Authenticated Key Exchange (PAKE) protocols allow two parties that share a password to establish a shared key in a way that is immune to offline attacks. Asymmetric PAKE (aPAKE) [20] adapts this notion to the common client-server setting, where the server stores a one-way hash of the password instead of the password itself, and server compromise allows the adversary to recover the password only via the (inevitable) offline dictionary attack. Most aPAKE protocols, however, allow an attacker to pre-compute a dictionary of hashed passwords, thus instantly learning the password on server compromise. Recently, Jarecki, Krawczyk, and Xu formalized a Universally Composable strong aPAKE (saPAKE) [23], which requires the password hash to be salted so that the dictionary attack can only start after the server compromise leaks the salt and the salted hash. The UC saPAKE protocol shown in [23], called OPAQUE, uses 3 protocol flows, 3–4 exponentiations per party, and relies on the One-More Diffie-Hellman assumption in ROM. We propose an alternative UC saPAKE construction based on a novel use of the encryption+SPHF paradigm for UC PAKE design [19, 26]. Compared to OPAQUE, our protocol uses only 2 flows, has comparable costs, avoids hashing onto a group, and relies on different assumptions, namely Decisional Diffie-Hellman (DDH), Strong Diffie-Hellman (SDH), and an assumption that the Boneh-Boyen function $$f_s(x)=g^{1/(s+x)}$$ [9] is a Salted Tight One-Way Function (STOWF). We formalize a UC model for STOWF and analyze the Boneh-Boyen function as UC STOWF in the generic group model and ROM. Our saPAKE protocol employs a new form of Conditional Key Encapsulation Mechanism (CKEM), a generalization of SPHF, which we call an implicit-statement CKEM. This strengthening of SPHF allows for a UC (sa)PAKE design where only the client commits to its password, and only the server performs an SPHF, compared to the standard UC PAKE design paradigm where the encrypt+SPHF subroutine is used symmetrically by both parties.
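For concreteness, evaluating the Boneh-Boyen function $$f_s(x)=g^{1/(s+x)}$$ means inverting $$s+x$$ in the exponent group. A hedged toy evaluation in a tiny prime-order subgroup of $$\mathbb{Z}_p^*$$ (parameters are illustrative only; the paper analyzes this function as a STOWF in GGM and ROM):

```python
p, q, g = 23, 11, 4       # toy safe-prime setup: g generates the order-11 subgroup

def f(s, x):
    inv = pow((s + x) % q, -1, q)   # 1/(s+x), computed modulo the group order q
    return pow(g, inv, p)

print(f(s=3, x=5))        # one evaluation; recovering x from f(s, x) without s
                          # is the one-wayness the STOWF assumption captures
```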
2019
CRYPTO
Broadcast and Trace with $N^{\varepsilon }$ Ciphertext Size from Standard Assumptions 📺
We construct a broadcast and trace scheme (also known as trace and revoke or broadcast, trace and revoke) with N users, where the ciphertext size can be made as low as $$O(N^\varepsilon)$$, for any arbitrarily small constant $$\varepsilon > 0$$. This improves on the prior best construction of broadcast and trace under standard assumptions by Boneh and Waters (CCS ’06), which had ciphertext size $$O(N^{1/2})$$. While that construction relied on bilinear maps, ours uses a combination of the learning with errors (LWE) assumption and bilinear maps. Recall that, in both broadcast encryption and traitor-tracing schemes, there is a collection of N users, each of which gets a different secret key $${\mathsf{sk}}_i$$. In broadcast encryption, it is possible to create ciphertexts targeted to a subset $$S \subseteq [N]$$ of the users such that only those users can decrypt it correctly. In a traitor tracing scheme, if a subset of users gets together and creates a decoder box D that is capable of decrypting ciphertexts, then it is possible to trace at least one of the users responsible for creating D. A broadcast and trace scheme intertwines the two properties, in a way that results in more than just their union. In particular, it ensures that if a decoder D is able to decrypt ciphertexts targeted toward a set S of users, then it should be possible to trace one of the users in the set S responsible for creating D, even if other users outside of S also participated. As of recently, we have essentially optimal broadcast encryption (Boneh, Gentry, Waters CRYPTO ’05) under bilinear maps and traitor tracing (Goyal, Koppula, Waters STOC ’18) under LWE, where the ciphertext size is at most poly-logarithmic in N. The main contribution of our paper is to carefully combine LWE and bilinear-map based components, and get them to interact with each other, to achieve broadcast and trace.
2019
CRYPTO
Contextual Integrity 📺
Invited Talk
2019
EUROCRYPT
Lower Bounds for Differentially Private RAMs 📺
In this work, we study privacy-preserving storage primitives that are suitable for use in data analysis on outsourced databases within the differential privacy framework. The goal in differentially private data analysis is to disclose global properties of a group without compromising any individual’s privacy. Typically, differentially private adversaries only ever learn global properties. For the case of outsourced databases, the adversary also views the patterns of access to data. Oblivious RAM (ORAM) can be used to hide access patterns, but ORAM might be excessive, as in some settings it could be sufficient to be compatible with differential privacy and only protect the privacy of individual accesses. We consider $$(\epsilon,\delta)$$-Differentially Private RAM, a weakening of ORAM that only protects individual operations and seems better suited for use in data analysis on outsourced databases. As differentially private RAM has weaker security than ORAM, there is hope that we can bypass the $$\varOmega(\log(nb/c))$$ bandwidth lower bound for ORAM by Larsen and Nielsen [CRYPTO ’18] for storing an array of n b-bit entries with a client holding c bits of memory. We answer in the negative and present an $$\varOmega(\log(nb/c))$$ bandwidth lower bound for privacy budgets of $$\epsilon = O(1)$$ and $$\delta \le 1/3$$. The information transfer technique used for ORAM lower bounds does not seem adaptable for use with the weaker security guarantees of differential privacy. Instead, we prove our lower bounds by adapting the chronogram technique to our setting. To our knowledge, this is the first work that uses the chronogram technique for lower bounds on privacy-preserving storage primitives.
2019
EUROCRYPT
Beyond Birthday Bound Secure MAC in Faulty Nonce Model 📺
Encrypt-then-MAC (EtM) is a popular mode for authenticated encryption (AE). Unfortunately, almost all designs following the EtM paradigm, including the AE suites for TLS, are vulnerable against nonce misuse. A single repetition of the nonce value reveals the hash key, leading to a universal forgery attack. There are only two authenticated encryption schemes following the EtM paradigm which can resist nonce misuse attacks, GCM-RUP (CRYPTO-17) and $$\mathsf{GCM/2}^{+}$$ (INSCRYPT-12). However, they are secure only up to the birthday bound in the nonce-respecting setting, resulting in a restriction on the data limit for a single key. In this paper we show that nEHtM, a nonce-based variant of EHtM (FSE-10) constructed using a block cipher, provides beyond-birthday-bound (BBB) unforgeability that gracefully degrades under nonce misuse. We combine nEHtM with the CENC (FSE-06) mode of encryption using the EtM paradigm to realize a nonce-based AE, CWC+. CWC+ is very close (requiring only a few more xor operations) to the CWC AE scheme (FSE-04), and it not only provides BBB security but also security that degrades gracefully under nonce misuse.
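As a reminder of the paradigm under discussion, here is a hedged toy Encrypt-then-MAC composition (not the paper's CWC+): a hash-based CTR keystream plus HMAC over nonce||ciphertext. The toy cipher is for illustration only and must not be used in practice:

```python
import hashlib
import hmac
import os

def ctr_keystream(key, nonce, length):
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def encrypt_then_mac(enc_key, mac_key, nonce, msg):
    ct = bytes(m ^ k for m, k in zip(msg, ctr_keystream(enc_key, nonce, len(msg))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()   # MAC the ciphertext
    return ct, tag

def decrypt(enc_key, mac_key, nonce, ct, tag):
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):                     # verify before decrypting
        raise ValueError("forgery detected")
    return bytes(c ^ k for c, k in zip(ct, ctr_keystream(enc_key, nonce, len(ct))))

ek, mk, n0 = os.urandom(32), os.urandom(32), os.urandom(12)
ct, tag = encrypt_then_mac(ek, mk, n0, b"attack at dawn")
assert decrypt(ek, mk, n0, ct, tag) == b"attack at dawn"
```

The nonce-misuse fragility the abstract describes comes from the MAC layer: in many EtM designs, repeating n0 across two messages leaks the hash key of a polynomial MAC, which this toy sidesteps only by using HMAC.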
2019
EUROCRYPT
Tight Time-Memory Trade-Offs for Symmetric Encryption 📺
Concrete security proofs give upper bounds on the attacker’s advantage as a function of its time/query complexity. Cryptanalysis suggests, however, that other resource limitations – most notably, the attacker’s memory – could make the achievable advantage smaller, and thus these proven bounds too pessimistic. Yet, handling memory limitations has eluded existing security proofs. This paper initiates the study of time-memory trade-offs for basic symmetric cryptography. We show that schemes like counter-mode encryption, which are affected by the Birthday Bound, become more secure (in terms of time complexity) as the attacker’s memory is reduced. One key step of this work is a generalization of the Switching Lemma: For adversaries with S bits of memory issuing q distinct queries, we prove an n-to-n bit random function indistinguishable from a permutation as long as $$S \times q \ll 2^n$$. This result assumes a combinatorial conjecture, which we discuss, and implies right away trade-offs for deterministic, stateful versions of CTR and OFB encryption. We also show an unconditional time-memory trade-off for the security of randomized CTR based on a secure PRF. Via the aforementioned conjecture, we extend the result to assuming a PRP instead, assuming only one-block messages are encrypted. Our results solely rely on standard PRF/PRP security of an underlying block cipher. We frame the core of our proofs within a general framework of indistinguishability for streaming algorithms which may be of independent interest.
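The classical Switching Lemma that this work generalizes rests on a birthday effect: q queries to a random function collide with probability about $$q(q-1)/2^{n+1}$$, while a random permutation never collides on distinct inputs, so counting collisions distinguishes the two. A hedged simulation with toy parameters:

```python
import random

n_bits, q, trials = 16, 100, 2000
domain = 2 ** n_bits
hits = 0
for _ in range(trials):
    outputs = [random.randrange(domain) for _ in range(q)]  # a fresh random function
    if len(set(outputs)) < q:                               # any repeated output
        hits += 1
print("observed collision rate :", hits / trials)
print("birthday estimate q^2/2N:", q * (q - 1) / (2 * domain))
# A random permutation would answer distinct queries with distinct outputs,
# so any collision certifies "function" -- matching the q^2/2^n advantage bound.
```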
2019
EUROCRYPT
Non-Malleable Codes Against Bounded Polynomial Time Tampering 📺
We construct efficient non-malleable codes (NMC) that are (computationally) secure against tampering by functions computable in any fixed polynomial time. Our construction is in the plain (no-CRS) model and requires the assumptions that (1) $$\mathbf{E}$$ is hard for $$\mathbf{NP}$$ circuits of some exponential size $$2^{\beta n}$$ ($$\beta > 0$$) (widely used in the derandomization literature), (2) sub-exponential trapdoor permutations exist, and (3) $$\mathbf{P}$$-certificates with sub-exponential soundness exist. While it is impossible to construct NMC secure against arbitrary polynomial-time tampering (Dziembowski, Pietrzak, Wichs, ICS ’10), the existence of NMC secure against $$O(n^c)$$-time tampering functions (for any fixed c) was shown (Cheraghchi and Guruswami, ITCS ’14) via a probabilistic construction. An explicit construction was given (Faust, Mukherjee, Venturi, Wichs, Eurocrypt ’14) assuming an untamperable CRS with length longer than the runtime of the tampering function. In this work, we show that under computational assumptions, we can bypass these limitations. Specifically, under the assumptions listed above, we obtain non-malleable codes in the plain model against $$O(n^c)$$-time tampering functions (for any fixed c), with codeword length independent of the tampering time bound. Our new construction of NMC draws a connection with non-interactive non-malleable commitments. In fact, we show that in the NMC setting, it suffices to have a much weaker notion called quasi non-malleable commitments—these are non-interactive, non-malleable commitments in the plain model, in which the adversary runs in $$O(n^c)$$ time, whereas the honest parties may run in longer (polynomial) time. We then construct a 4-tag quasi non-malleable commitment from any sub-exponential OWF and the assumption that $$\mathbf{E}$$ is hard for some exponential-size $$\mathbf{NP}$$ circuits, and use tag amplification techniques to support an exponential number of tags.
2019
EUROCRYPT
Continuous Non-Malleable Codes in the 8-Split-State Model 📺
Non-malleable codes (NMCs), introduced by Dziembowski, Pietrzak and Wichs [20], provide a useful message integrity guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. NMCs have emerged as a fundamental object at the intersection of coding theory and cryptography. In particular, progress in the study of non-malleable codes and the related notion of non-malleable extractors has led to new insights and progress on even more fundamental problems like the construction of multi-source randomness extractors. A large body of the recent work has focused on various constructions of non-malleable codes in the split-state model. Many variants of NMCs have been introduced in the literature, e.g., strong NMCs, super strong NMCs and continuous NMCs. The most general, and hence also the most useful notion among these is that of continuous non-malleable codes, that allows for continuous tampering by the adversary. We present the first efficient information-theoretically secure continuously non-malleable code in the constant split-state model. We believe that our main technical result could be of independent interest and some of the ideas could in future be used to make progress on other related questions.
2019
EUROCRYPT
Correlated-Source Extractors and Cryptography with Correlated-Random Tapes 📺
In this paper, we consider the setting where a party uses correlated random tapes across multiple executions of a cryptographic algorithm. We ask if the security properties could still be preserved in such a setting. As examples, we introduce the notions of correlated-tape zero knowledge and correlated-tape multi-party computation, where the zero-knowledge property and the ideal/real-model security must still be preserved even if a party uses correlated random tapes in multiple executions. Our constructions are based on a new type of randomness extractor which we call correlated-source extractors. Correlated-source extractors can be seen as a dual of non-malleable extractors, and allow an adversary to choose several tampering functions which are applied to the randomness source. Correlated-source extractors guarantee that even given the output of the extractor on the tampered sources, the output on the original source is still uniformly random. Given (seeded) correlated-source extractors and resettably-secure computation protocols, we show how to directly get a positive result for both correlated-tape zero knowledge and correlated-tape multi-party computation in the CRS model. This is tight considering the known impossibility results on cryptography with imperfect randomness. Our main technical contribution is an explicit construction of a correlated-source extractor where the length of the seed is independent of the number of tamperings. Additionally, we also provide a (non-explicit) existential result for correlated-source extractors with almost optimal parameters.
2019
EUROCRYPT
Revisiting Non-Malleable Secret Sharing 📺
A threshold secret sharing scheme (with threshold t) allows a dealer to share a secret among a set of parties such that any group of t or more parties can recover the secret, and no group of at most $$t-1$$ parties learns any information about the secret. A non-malleable threshold secret sharing scheme, introduced in the recent work of Goyal and Kumar (STOC’18), additionally protects a threshold secret sharing scheme when its shares are subject to tampering attacks. Specifically, it guarantees that the secret reconstructed from the tampered shares is either the original secret or something that is unrelated to the original secret. In this work, we continue the study of threshold non-malleable secret sharing against the class of tampering functions that tamper each share independently. We focus on achieving greater efficiency and guaranteeing a stronger security property. We obtain the following results: Rate improvement. We give the first construction of a threshold non-malleable secret sharing scheme that has rate $$> 0$$. Specifically, for every $$n, t \ge 4$$, we give a construction of a t-out-of-n non-malleable secret sharing scheme with rate $$\varTheta(\frac{1}{t \log^2 n})$$. In the prior constructions, the rate was $$\varTheta(\frac{1}{n \log m})$$, where m is the length of the secret, and thus the rate tends to 0 as $$m \rightarrow \infty$$. Furthermore, we also optimize the parameters of our construction and give a concretely efficient scheme. Multiple tampering. We give the first construction of a threshold non-malleable secret sharing scheme secure in the stronger setting of bounded tampering, wherein the shares are tampered by multiple (but bounded in number) possibly different tampering functions. The rate of such a scheme is $$\varTheta(\frac{1}{k^3 t \log^2 n})$$, where k is an a priori bound on the number of tamperings. We complement this positive result by proving that it is impossible to have a threshold non-malleable secret sharing scheme that is secure in the presence of an a priori unbounded number of tamperings. General access structures. We extend our results beyond threshold secret sharing and give constructions of rate-efficient, non-malleable secret sharing schemes for more general monotone access structures that are secure against multiple (bounded) tampering attacks.
2019
EUROCRYPT
Multi-party Virtual State Channels 📺
Smart contracts are self-executing agreements written in program code and are envisioned to be one of the main applications of blockchain technology. While they are supported by prominent cryptocurrencies such as Ethereum, their further adoption is hindered by fundamental scalability challenges. For instance, in Ethereum contract execution suffers from a latency of more than 15 s, and the total number of contracts that can be executed per second is very limited. State channel networks are one of the core primitives aiming to address these challenges. They form a second layer over the slow and expensive blockchain, thereby enabling instantaneous contract processing at negligible costs. In this work we present the first complete description of a state channel network that exhibits the following key features. First, it supports virtual multi-party state channels, i.e., state channels that can be created and closed without blockchain interaction and that allow contracts with any number of parties. Second, the worst-case time complexity of our protocol is constant for arbitrarily complex channels. This is in contrast to the existing virtual state channel construction, whose worst-case time complexity is linear in the number of involved parties. In addition to our new construction, we provide a comprehensive model for the modular design and security analysis of our construction.
2019
EUROCRYPT
Aggregate Cash Systems: A Cryptographic Investigation of Mimblewimble 📺
Mimblewimble is an electronic cash system proposed by an anonymous author in 2016. It combines several privacy-enhancing techniques initially envisioned for Bitcoin, such as Confidential Transactions (Maxwell, 2015), non-interactive merging of transactions (Saxena, Misra, Dhar, 2014), and cut-through of transaction inputs and outputs (Maxwell, 2013). As a remarkable consequence, coins can be deleted once they have been spent while maintaining public verifiability of the ledger, which is not possible in Bitcoin. This results in tremendous space savings for the ledger and efficiency gains for new users, who must verify their view of the system. In this paper, we provide a provable-security analysis for Mimblewimble. We give a precise syntax and formal security definitions for an abstraction of Mimblewimble that we call an aggregate cash system. We then formally prove the security of Mimblewimble in this definitional framework. Our results imply in particular that two natural instantiations (with Pedersen commitments and Schnorr or BLS signatures) are provably secure against inflation and coin theft under standard assumptions.
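The Pedersen commitments mentioned in the instantiation are additively homomorphic, which is what lets a transaction prove that inputs minus outputs commit to zero without revealing any amounts. A hedged toy in a tiny modular group (real systems use an elliptic-curve group; the generators here are illustrative and their mutual discrete log is only assumed unknown):

```python
import random

p, q = 23, 11                 # toy safe-prime group: order-q subgroup of Z_23^*
g, h = 4, 9                   # two subgroup generators (mutual dlog assumed unknown)

def commit(v, r):
    return pow(g, v % q, p) * pow(h, r % q, p) % p

v1, r1 = 3, random.randrange(q)
v2, r2 = 5, random.randrange(q)
# Homomorphism: the product of commitments commits to the sum of the values.
assert commit(v1, r1) * commit(v2, r2) % p == commit(v1 + v2, r1 + r2)
# Mimblewimble-style balance check: an output of 8 coins balances inputs 3 + 5.
out, r3 = 8, (r1 + r2) % q
assert commit(v1, r1) * commit(v2, r2) % p == commit(out, r3)
```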
2019
EUROCRYPT
Consensus Through Herding 📺
State Machine Replication (SMR) is an important abstraction for a set of nodes to agree on an ever-growing, linearly-ordered log of transactions. In decentralized cryptocurrency applications, we would like to design SMR protocols that (1) resist adaptive corruptions; and (2) achieve small bandwidth and small confirmation time. All past approaches towards constructing SMR fail to achieve either small confirmation time or small bandwidth under adaptive corruptions (without resorting to strong assumptions such as the erasure model or proof-of-work). We propose a novel paradigm for reaching consensus that departs significantly from classical approaches. Our protocol is inspired by a social phenomenon called herding, where people tend to make choices considered as the social norm. In our consensus protocol, leader election and voting are coalesced into a single (randomized) process: in every round, every node tries to cast a vote for what it views as the most popular item so far: such a voting attempt is not always successful, but rather, successful with a certain probability. Importantly, the probability that the node is elected to vote for v is independent of the probability it is elected to vote for $$v' \ne v$$. We will show how to realize such a distributed, randomized election process using appropriate, adaptively secure cryptographic building blocks. We show that amazingly, not only can this new paradigm achieve consensus (e.g., on a batch of unconfirmed transactions in a cryptocurrency system), but it also allows us to derive the first SMR protocol which, even under adaptive corruptions, requires only polylogarithmically many rounds and polylogarithmically many honest messages to be multicast to confirm each batch of transactions; and importantly, we attain these guarantees under standard cryptographic assumptions.
2019
EUROCRYPT
Homomorphic Secret Sharing from Lattices Without FHE 📺
Homomorphic secret sharing (HSS) is an analog of somewhat- or fully homomorphic encryption (S/FHE) to the setting of secret sharing, with applications including succinct secure computation, private manipulation of remote databases, and more. While HSS can be viewed as a relaxation of S/FHE, the only constructions from lattice-based assumptions to date build atop specific forms of threshold or multi-key S/FHE. In this work, we present new techniques directly yielding efficient 2-party HSS for polynomial-size branching programs from a range of lattice-based encryption schemes, without S/FHE. More concretely, we avoid the costly key-switching and modulus-reduction steps used in S/FHE ciphertext multiplication, replacing them with a new distributed decryption procedure for performing “restricted” multiplications of an input with a partial computation value. Doing so requires new methods for handling the blowup of “noise” in ciphertexts in a distributed setting, and leverages several properties of lattice-based encryption schemes together with new tricks in share conversion. The resulting schemes support a superpolynomial-size plaintext space and negligible correctness error, with share sizes comparable to SHE ciphertexts but with homomorphic multiplication roughly one order of magnitude faster. Over certain rings, our HSS can further support some level of packed SIMD homomorphic operations. We demonstrate the practical efficiency of our schemes within two application settings, where we compare favorably with current best approaches: 2-server private database pattern-match queries, and secure 2-party computation of low-degree polynomials.
2019
EUROCRYPT
Improved Bootstrapping for Approximate Homomorphic Encryption 📺
Since Cheon et al. introduced a homomorphic encryption scheme for approximate arithmetic (Asiacrypt ’17), it has been recognized as suitable for important real-life use cases of homomorphic encryption, including training of machine learning models over encrypted data. A follow-up work by Cheon et al. (Eurocrypt ’18) described an approximate bootstrapping procedure for the scheme. In this work, we improve upon the previous bootstrapping result. We improve the amortized bootstrapping time per plaintext slot by two orders of magnitude, from $$\sim$$1 s to $$\sim$$0.01 s. To achieve this result, we adopt a smart level-collapsing technique for evaluating DFT-like linear transforms on a ciphertext. Also, we replace the Taylor approximation of the sine function with a more accurate and numerically stable Chebyshev approximation, and design a modified version of the Paterson-Stockmeyer algorithm for fast evaluation of Chebyshev polynomials over encrypted data.
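The Taylor-versus-Chebyshev point is easy to see numerically. A hedged plain-numpy illustration (the paper evaluates such polynomials homomorphically; degree and interval below are arbitrary choices):

```python
import math
import numpy as np
from numpy.polynomial import Chebyshev

xs = np.linspace(-np.pi, np.pi, 2001)
cheb = Chebyshev.fit(xs, np.sin(xs), deg=7)        # least-squares Chebyshev fit
taylor = sum((-1) ** k * xs ** (2 * k + 1) / math.factorial(2 * k + 1)
             for k in range(4))                    # degree-7 Taylor polynomial at 0
print("max |sin - cheb|  :", float(np.max(np.abs(np.sin(xs) - cheb(xs)))))
print("max |sin - taylor|:", float(np.max(np.abs(np.sin(xs) - taylor))))
```

At equal degree, the Chebyshev fit spreads its error evenly over the interval while the Taylor polynomial degrades sharply away from 0, which is why it is the better candidate for approximating the modular-reduction sine inside bootstrapping.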
2019
EUROCRYPT
Minicrypt Primitives with Algebraic Structure and Applications 📺
Algebraic structure lies at the heart of Cryptomania as we know it. An interesting question is the following: instead of building (Cryptomania) primitives from concrete assumptions, can we build them from simple Minicrypt primitives endowed with some additional algebraic structure? In this work, we affirmatively answer this question by adding algebraic structure to the following Minicrypt primitives: one-way functions (OWFs), weak unpredictable functions (wUFs), and weak pseudorandom functions (wPRFs). The algebraic structure that we consider is group homomorphism over the input/output spaces of these primitives. We also consider a “bounded” notion of homomorphism where the primitive only supports an a priori bounded number of homomorphic operations in order to capture lattice-based and other “noisy” assumptions. We show that these structured primitives can be used to construct many cryptographic protocols. In particular, we prove that: (1) (bounded) homomorphic OWFs (HOWFs) imply collision-resistant hash functions, Schnorr-style signatures, and chameleon hash functions; (2) (bounded) input-homomorphic weak UFs (IHwUFs) imply CPA-secure PKE, non-interactive key exchange, trapdoor functions, blind batch encryption (which implies anonymous IBE, KDM-secure and leakage-resilient PKE), CCA2 deterministic PKE, and hinting PRGs (which in turn imply a transformation from CPA to CCA security for ABE/1-sided PE); (3) (bounded) input-homomorphic weak PRFs (IHwPRFs) imply PIR, lossy trapdoor functions, OT, and MPC (in the plain model). In addition, we show how to realize any CDH/DDH-based protocol with certain properties in a generic manner using IHwUFs/IHwPRFs, and how to instantiate such a protocol from many concrete assumptions. We also consider primitives with substantially richer structure, namely Ring IHwPRFs and L-composable IHwPRFs. In particular, we show the following: Ring IHwPRFs with certain properties imply FHE; 2-composable IHwPRFs imply (black-box) IBE; and L-composable IHwPRFs imply non-interactive $$(L+1)$$-party key exchange. Our framework allows us to categorize many cryptographic protocols based on which structured Minicrypt primitive implies them. In addition, it potentially makes showing the existence of many cryptosystems from novel assumptions substantially easier in the future.
2019
EUROCRYPT
Attacks only Get Better: How to Break FF3 on Large Domains 📺
We improve the attack of Durak and Vaudenay (CRYPTO’17) on NIST Format-Preserving Encryption standard FF3, reducing the running time from $$O(N^5)$$ to $$O(N^{17/6})$$ for domain $$\mathbb{Z}_N \times \mathbb{Z}_N$$. Concretely, DV’s attack needs about $$2^{50}$$ operations to recover encrypted 6-digit PINs, whereas ours only spends about $$2^{30}$$ operations. In realizing this goal, we provide a pedagogical example of how to use distinguishing attacks to speed up slide attacks. In addition, we improve the running time of DV’s known-plaintext attack on 4-round Feistel of domain $$\mathbb{Z}_N \times \mathbb{Z}_N$$ from $$O(N^3)$$ time to just $$O(N^{5/3})$$ time. We also generalize our attacks to a general domain $$\mathbb{Z}_M \times \mathbb{Z}_N$$, allowing one to recover encrypted SSNs using about $$2^{50}$$ operations. Finally, we provide some proof-of-concept implementations to empirically validate our results.
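As a quick sanity check on the quoted figures (a back-of-envelope estimate that ignores the constants and logarithmic factors hidden by the O-notation, which is why the paper reports roughly $$2^{30}$$ rather than the bare $$2^{28}$$ below):

```python
# Back-of-envelope check of the complexities quoted above. A 6-digit PIN
# split across the two FF3 Feistel branches gives domain Z_N x Z_N with
# N = 10^3; constants and log factors are ignored.
from math import log2

N = 10**3
print(f"DV (CRYPTO'17), O(N^5)      : 2^{log2(N**5):.1f}")       # ~2^49.8
print(f"this work,      O(N^(17/6)) : 2^{log2(N**(17/6)):.1f}")  # ~2^28.2
```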
2019
EUROCRYPT
Session Resumption Protocols and Efficient Forward Security for TLS 1.3 0-RTT 📺
The TLS 1.3 0-RTT mode enables a client reconnecting to a server to send encrypted application-layer data in “0-RTT” (“zero round-trip time”), without the need for a prior interactive handshake. This fundamentally requires the server to reconstruct the previous session’s encryption secrets upon receipt of the client’s first message. The standard techniques to achieve this are Session Caches or, alternatively, Session Tickets. The former provides forward security and resistance against replay attacks, but requires a large amount of server-side storage. The latter requires negligible storage, but provides no forward security and is known to be vulnerable to replay attacks. In this paper, we first formally define session resumption protocols as an abstract perspective on mechanisms like Session Caches and Session Tickets. We give a new generic construction that provably provides forward security and replay resilience, based on puncturable pseudorandom functions (PPRFs). This construction can immediately be used in TLS 1.3 0-RTT and deployed unilaterally by servers, without requiring any changes to clients or the protocol. We then describe two new constructions of PPRFs, which are particularly suitable for use for forward-secure and replay-resilient session resumption in TLS 1.3. The first construction is based on the strong RSA assumption. Compared to standard Session Caches, for “128-bit security” it reduces the required server storage by a factor of almost 20, when instantiated in a way such that key derivation and puncturing together are cheaper on average than one full exponentiation in an RSA group. Hence, a 1 GB Session Cache can be replaced with only about 51 MB of storage, which significantly reduces the amount of secure memory required. For larger security parameters or in exchange for more expensive computations, even larger storage reductions are achieved. The second construction combines a standard binary tree PPRF with a new “domain extension” technique. For a reasonable choice of parameters, this reduces the required storage by a factor of up to 5 compared to a standard Session Cache. It employs only symmetric cryptography, is suitable for high-traffic scenarios, and can serve thousands of tickets per second.
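To make the PPRF ingredient concrete, here is a minimal sketch of the textbook GGM-style binary-tree puncturable PRF that the second construction builds on (this is the standard primitive, not the paper's domain-extended variant; SHA-256 stands in for a length-doubling PRG, and all parameters are illustrative):

```python
# Minimal sketch of a GGM-style binary-tree puncturable PRF. Puncturing at
# input x deletes the root-to-leaf path of x and keeps the sibling seeds
# along it; those siblings still allow evaluation at every other input,
# which is what forward-secure ticket deletion needs.
import hashlib

def prg(seed: bytes, bit: int) -> bytes:
    # Left (bit=0) or right (bit=1) child seed; SHA-256 models the PRG.
    return hashlib.sha256(bytes([bit]) + seed).digest()

def eval_prf(key: bytes, x: int, depth: int) -> bytes:
    node = key
    for i in reversed(range(depth)):            # walk root -> leaf x
        node = prg(node, (x >> i) & 1)
    return node

def puncture(key: bytes, x: int, depth: int):
    # Co-path of x: one (level, sibling seed) pair per tree level.
    copath, node = [], key
    for i in reversed(range(depth)):
        bit = (x >> i) & 1
        copath.append((i, prg(node, 1 - bit)))  # keep the sibling seed
        node = prg(node, bit)                   # descend towards x
    return copath

def eval_punctured(copath, x_punct: int, y: int) -> bytes:
    assert y != x_punct, "cannot evaluate at the punctured point"
    for i, sib in copath:
        # y leaves x's path exactly at their most significant differing bit.
        if (y >> (i + 1)) == (x_punct >> (i + 1)) and ((y ^ x_punct) >> i) & 1:
            node = sib
            for j in reversed(range(i)):        # finish the walk to leaf y
                node = prg(node, (y >> j) & 1)
            return node
    raise AssertionError("unreachable for y != x_punct")

key, depth = b"\x00" * 32, 16                   # demo key; domain of 2^16 tickets
punctured = puncture(key, 42, depth)
assert eval_punctured(punctured, 42, 1337) == eval_prf(key, 1337, depth)
```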
2019
EUROCRYPT
An Analysis of NIST SP 800-90A 📺
We investigate the security properties of the three deterministic random bit generator (DRBG) mechanisms in NIST SP 800-90A [2]. The standard received considerable negative attention due to the controversy surrounding the now retracted DualEC-DRBG, which appeared in earlier versions. Perhaps because of the attention paid to the DualEC, the other algorithms in the standard have received surprisingly patchy analysis to date, despite widespread deployment. This paper addresses a number of these gaps in analysis, with a particular focus on HASH-DRBG and HMAC-DRBG. We uncover a mix of positive and less positive results. On the positive side, we prove (with a caveat) the robustness [13] of HASH-DRBG and HMAC-DRBG in the random oracle model (ROM). Regarding the caveat, we show that if an optional input is omitted, then, contrary to claims in the standard, HMAC-DRBG does not even achieve the (weaker) property of forward security. We then conduct a more informal and practice-oriented exploration of flexibility in the standard. Specifically, we argue that these DRBGs have the property that partial state leakage may lead security to break down in unexpected ways. We highlight implementation choices allowed by the overly flexible standard that exacerbate both the likelihood and the impact of such attacks. While our attacks are theoretical, an analysis of two open-source implementations of CTR-DRBG shows that these potentially problematic implementation choices are made in the real world.
2019
EUROCRYPT
Computationally Volume-Hiding Structured Encryption 📺
We initiate the study of structured encryption schemes with computationally-secure leakage. Specifically, we focus on the design of volume-hiding encrypted multi-maps; that is, of encrypted multi-maps that hide the response length from computationally-bounded adversaries. We describe the first volume-hiding STE schemes that do not rely on naïve padding; that is, padding all tuples to the same length. Our first construction has efficient query complexity and storage but can be lossy. We show, however, that the information loss can be bounded with overwhelming probability for a large class of multi-maps (i.e., with lengths distributed according to a Zipf distribution). Our second construction is not lossy and can achieve storage overhead that is asymptotically better than naïve padding for Zipf-distributed multi-maps. We also show how to further improve the storage when the multi-map is highly concentrated in the sense that it has a large number of tuples with a large intersection. We achieve these results by leveraging computational assumptions; not just for encryption but, more interestingly, to hide the volumes themselves. Our first construction achieves this using a pseudo-random function, whereas our second construction achieves this by relying on the conjectured hardness of the planted densest subgraph problem, a planted variant of the well-studied densest subgraph problem. This assumption was previously used to design public-key encryption schemes (Applebaum et al., STOC ’10) and to study the computational complexity of financial products (Arora et al., ICS ’10).
2019
EUROCRYPT
Locality-Preserving Oblivious RAM 📺
Oblivious RAMs, introduced by Goldreich and Ostrovsky [JACM’96], compile any RAM program into one that is “memory oblivious”, i.e., the access pattern to the memory is independent of the input. All previous ORAM schemes, however, completely break the locality of data accesses (for instance, by shuffling the data to pseudorandom positions in memory). In this work, we initiate the study of locality-preserving ORAMs—ORAMs that preserve locality of the accessed memory regions, while leaking only the lengths of contiguous memory regions accessed. Our main results demonstrate the existence of a locality-preserving ORAM with poly-logarithmic overhead both in terms of bandwidth and locality. We also study the tradeoff between locality, bandwidth and leakage, and show that any scheme that preserves locality and does not leak the lengths of the contiguous memory regions accessed suffers from prohibitive bandwidth overhead. To the best of our knowledge, before our work, the only works combining locality and obliviousness were for symmetric searchable encryption [e.g., Cash and Tessaro (EUROCRYPT’14), Asharov et al. (STOC’16)]. Symmetric searchable encryption ensures obliviousness if each keyword is searched only once, whereas ORAM provides obliviousness to any input program. Thus, our work generalizes that line of work to the much more challenging task of preserving locality in ORAMs.
2019
EUROCRYPT
Private Anonymous Data Access 📺
We consider a scenario where a server holds a huge database that it wants to make accessible to a large group of clients. After an initial setup phase, clients should be able to read arbitrary locations in the database while maintaining privacy (the server does not learn which locations are being read) and anonymity (the server does not learn which client is performing each read). This should hold even if the server colludes with a subset of the clients. Moreover, the run-time of both the server and the client during each read operation should be low, ideally only poly-logarithmic in the size of the database and the number of clients. We call this notion Private Anonymous Data Access (PANDA). PANDA simultaneously combines aspects of Private Information Retrieval (PIR) and Oblivious RAM (ORAM). PIR has no initial setup, and allows anybody to privately and anonymously access a public database, but the server’s run-time is linear in the data size. On the other hand, ORAM achieves poly-logarithmic server run-time, but requires an initial setup after which only a single client with a secret key can access the database. The goal of PANDA is to get the best of both worlds: allow many clients to privately and anonymously access the database as in PIR, while having an efficient server as in ORAM. In this work, we construct bounded-collusion PANDA schemes, where the efficiency scales linearly with a bound on the number of corrupted clients that can collude with the server, but is otherwise poly-logarithmic in the data size and the total number of clients. Our solution relies on standard assumptions, namely the existence of fully homomorphic encryption, and combines techniques from both PIR and ORAM. We also extend PANDA to settings where clients can write to the database.
2019
EUROCRYPT
Reversible Proofs of Sequential Work 📺
Proofs of sequential work (PoSW) are proof systems where a prover, upon receiving a statement $$\chi$$ and a time parameter T, computes a proof $$\phi(\chi,T)$$ which is efficiently and publicly verifiable. The proof can be computed in T sequential steps, but not much less, even by a malicious party having large parallelism. A PoSW thus serves as a proof that T units of time have passed since $$\chi$$ was received. PoSW were introduced by Mahmoody, Moran and Vadhan [MMV11]; a simple and practical construction was only recently proposed by Cohen and Pietrzak [CP18]. In this work we construct a new simple PoSW in the random permutation model which is almost as simple and efficient as [CP18] but conceptually very different. Whereas the structure underlying [CP18] is a hash tree, our construction is based on skip lists and has the interesting property that computing the PoSW is a reversible computation. The fact that the construction is reversible can potentially be used for new applications like constructing proofs of replication. We also show how to “embed” the sloth function of Lenstra and Wesolowski [LW17] into our PoSW to get a PoSW where one additionally can verify correctness of the output much more efficiently than recomputing it (though recent constructions of “verifiable delay functions” subsume most of the applications this construction was aiming at).
2019
EUROCRYPT
Incremental Proofs of Sequential Work 📺
A proof of sequential work allows a prover to convince a verifier that a certain amount of sequential steps have been computed. In this work we introduce the notion of incremental proofs of sequential work, where a prover can carry on the computation done by the previous prover incrementally, without affecting the resources of the individual provers or the size of the proofs. To date, the most efficient instance of proofs of sequential work [Cohen and Pietrzak, Eurocrypt 2018] for N steps requires the prover to have $$\sqrt{N}$$ memory and to run for $$N + \sqrt{N}$$ steps. Using incremental proofs of sequential work we can bring down the prover’s storage complexity to $$\log N$$ and its running time to N. We propose two different constructions of incremental proofs of sequential work: Our first scheme requires a single processor and introduces a poly-logarithmic factor in the proof size when compared with the proposals of Cohen and Pietrzak. Our second scheme assumes $$\log N$$ parallel processors but brings down the overhead of the proof size to a factor of 9. Both schemes are simple to implement and only rely on hash functions (modelled as random oracles).
2019
EUROCRYPT
Tight Proofs of Space and Replication 📺
We construct a concretely practical proof-of-space (PoS) with arbitrarily tight security based on stacked depth robust graphs and constant-degree expander graphs. A proof-of-space (PoS) is an interactive proof system where a prover demonstrates that it is persistently using space to store information. A PoS is arbitrarily tight if the honest prover uses exactly N space and for any $$\epsilon > 0$$ the construction can be tuned such that no adversary can pass verification using less than $$(1-\epsilon)N$$ space. Most notably, the degree of the graphs in our construction is independent of $$\epsilon$$, and the number of layers is only $$O(\log(1/\epsilon))$$. The proof size is $$O(d/\epsilon)$$. The degree d depends on the depth robust graphs, which are only required to maintain $$\Omega(N)$$ depth in subgraphs on 80% of the nodes. Our tight PoS is also secure against parallel attacks. Tight proofs of space are necessary for proof-of-replication (PoRep), which is a publicly verifiable proof that the prover is dedicating unique resources to storing one or more retrievable replicas of a specified file. Our main PoS construction can be used as a PoRep, but data extraction is as inefficient as replica generation. We present a second variant of our construction called ZigZag PoRep that has fast/parallelizable data extraction compared to replica generation and maintains the same space tightness while only increasing the number of levels by roughly a factor of two.
2019
EUROCRYPT
Founding Secure Computation on Blockchains 📺
We study the foundations of secure computation in the blockchain-hybrid model, where a blockchain – modeled as a global functionality – is available as an Oracle to all the participants of a cryptographic protocol. We demonstrate both destructive and constructive applications of blockchains: We show that classical rewinding-based simulation techniques used in many security proofs fail against blockchain-active adversaries that have read and post access to a global blockchain. In particular, we show that zero-knowledge (ZK) proofs with black-box simulation are impossible against blockchain-active adversaries. Nevertheless, we show that achieving security against blockchain-active adversaries is possible if the honest parties are also blockchain active. We construct an $$\omega (1)$$-round ZK protocol with black-box simulation. We show that this result is tight by proving the impossibility of constant-round ZK with black-box simulation. Finally, we demonstrate a novel application of blockchains to overcome the known impossibility results for concurrent secure computation in the plain model. We construct a concurrent self-composable secure computation protocol for general functionalities in the blockchain-hybrid model based on standard cryptographic assumptions. We develop a suite of techniques for constructing secure protocols in the blockchain-hybrid model that we hope will find applications to future research in this area.
2019
EUROCRYPT
Uncovering Algebraic Structures in the MPC Landscape 📺
A fundamental problem in the theory of secure multi-party computation (MPC) is to characterize functions with more than 2 parties which admit MPC protocols with information-theoretic security against passive corruption. This question has seen little progress since the work of Chor and Ishai (1996), which demonstrated difficulties in resolving it. In this work, we make significant progress towards resolving this question in the important case of aggregating functionalities, in which m parties $$P_1,\dots,P_m$$ hold inputs $$x_1,\dots,x_m$$ and an aggregating party $$P_0$$ must learn $$f(x_1,\dots,x_m)$$. We uncover a rich class of algebraic structures that are closely related to secure computability, namely, “Commuting Permutations Systems” (CPS) and its variants. We present an extensive set of results relating these algebraic structures among themselves and to MPC, including new protocols, impossibility results and separations. Our results include a necessary algebraic condition and a slightly stronger sufficient algebraic condition for a function to admit information-theoretically secure MPC protocols. We also introduce and study new models of minimally interactive MPC (called UNIMPC and UNIMPC*), which not only help in understanding our positive and negative results better, but also open up new avenues for studying the cryptographic complexity landscape of multi-party functionalities. Our positive results include novel protocols in these models, which may be of independent practical interest. Finally, we extend our results to a definition that requires UC security as well as semi-honest security (which we term strong security). In this model we are able to carry out the characterization of all computable functions, except for a gap in the case of aggregating functionalities.
2019
EUROCRYPT
Quantum Circuits for the CSIDH: Optimizing Quantum Evaluation of Isogenies 📺
Choosing safe post-quantum parameters for the new CSIDH isogeny-based key-exchange system requires concrete analysis of the cost of quantum attacks. The two main contributions to attack cost are the number of queries in hidden-shift algorithms and the cost of each query. This paper analyzes algorithms for each query, introducing several new speedups while showing that some previous claims were too optimistic for the attacker. This paper includes a full computer-verified simulation of its main algorithm down to the bit-operation level.
2019
EUROCRYPT
A Quantum-Proof Non-malleable Extractor 📺
In privacy amplification, two mutually trusted parties aim to amplify the secrecy of an initial shared secret X in order to establish a shared private key K by exchanging messages over an insecure communication channel. If the channel is authenticated, the task can be solved in a single round of communication using a strong randomness extractor; choosing a quantum-proof extractor allows one to establish security against quantum adversaries. In the case that the channel is not authenticated, this simple solution is no longer secure. Nevertheless, Dodis and Wichs (STOC’09) showed that the problem can be solved in two rounds of communication using a non-malleable extractor, a stronger pseudo-random construction than a strong extractor. We give the first construction of a non-malleable extractor that is secure against quantum adversaries. The extractor is based on a construction by Li (FOCS’12), and is able to extract from sources of min-entropy rate larger than 1/2. Combining this construction with a quantum-proof variant of the reduction of Dodis and Wichs, due to Cohen and Vidick (unpublished), we obtain the first privacy amplification protocol secure against active quantum adversaries.
2019
EUROCRYPT
A Note on the Communication Complexity of Multiparty Computation in the Correlated Randomness Model 📺
Secure multiparty computation (MPC) addresses the challenge of evaluating functions on secret inputs without compromising their privacy. A central question in multiparty computation is to understand the amount of communication needed to securely evaluate a circuit of size s. In this work, we revisit this fundamental question in the setting of information-theoretically secure MPC in the correlated randomness model, where a trusted dealer distributes correlated random coins, independent of the inputs, to all parties before the start of the protocol. This setting is of strong theoretical interest, and has led to the most practically efficient MPC protocols known to date. While it is known that protocols with optimal communication (proportional to input plus output size) can be obtained from the LWE assumption, and that protocols with sublinear communication o(s) can be obtained from the DDH assumption, the question of constructing protocols with o(s) communication remains wide open for the important case of information-theoretic MPC in the correlated randomness model; all known protocols in this model require O(s) communication in the online phase. In this work, we exhibit the first generic multiparty computation protocol in the correlated randomness model with communication sublinear in the circuit size, for a large class of circuits. More precisely, we show the following: any size-s layered circuit (whose nodes can be partitioned into layers so that any edge connects adjacent layers) can be evaluated with $$O(s/\log\log s)$$ communication. Our results hold for both boolean and arithmetic circuits, in the honest-but-curious setting, and do not assume honest majority. For boolean circuits, we extend our results to handle malicious corruption.
2019
EUROCRYPT
Degree 2 is Complete for the Round-Complexity of Malicious MPC 📺
We show, via a non-interactive reduction, that the existence of a secure multi-party computation (MPC) protocol for degree-2 functions implies the existence of a protocol with the same round complexity for general functions. This shows that, when considering the round complexity of MPC, it is sufficient to consider very simple functions. Our completeness theorem applies in various settings: information theoretic and computational, fully malicious and malicious with various types of aborts. In fact, we give a master theorem from which all individual settings follow as direct corollaries. Our basic transformation does not require any additional assumptions and incurs communication and computation blow-up which is polynomial in the number of players and in $$S, 2^D$$, where S and D are the circuit size and depth of the function to be computed. Using one-way functions as an additional assumption, the exponential dependence on the depth can be removed. As a consequence, we are able to push the envelope on the state of the art in various settings of MPC, including the following cases: (i) a 3-round perfectly-secure protocol (with guaranteed output delivery) against an active adversary that corrupts less than 1/4 of the parties; (ii) a 2-round statistically-secure protocol that achieves security with “selective abort” against an active adversary that corrupts less than half of the parties; (iii) assuming one-way functions, a 2-round computationally-secure protocol that achieves security with (standard) abort against an active adversary that corrupts less than half of the parties. This gives a new and conceptually simpler proof of the recent result of Ananth et al. (Crypto 2018). Technically, our non-interactive reduction draws from the encoding method of Applebaum, Brakerski and Tsabary (TCC 2018). We extend these methods to ones that can be meaningfully analyzed even in the presence of malicious adversaries.
2019
EUROCRYPT
Two Round Information-Theoretic MPC with Malicious Security 📺
We provide the first constructions of two round information-theoretic (IT) secure multiparty computation (MPC) protocols in the plain model that tolerate any $$t<n/2$$ malicious corruptions. Our protocols satisfy the strongest achievable standard notions of security in two rounds in different communication models. Previously, IT-MPC protocols in the plain model either required a larger number of rounds or tolerated only a smaller minority of corruptions.
2019
EUROCRYPT
Designated-Verifier Pseudorandom Generators, and Their Applications 📺
We provide a generic construction of non-interactive zero-knowledge (NIZK) schemes. Our construction is a refinement of Dwork and Naor’s (FOCS 2000) implementation of the hidden bits model using verifiable pseudorandom generators (VPRGs). Our refinement simplifies their construction and relaxes the necessary assumptions considerably. As a result of this conceptual improvement, we obtain interesting new instantiations: (i) A designated-verifier NIZK (with unbounded soundness) based on the computational Diffie-Hellman (CDH) problem. If a pairing is available, this NIZK becomes publicly verifiable. This constitutes the first fully secure CDH-based designated-verifier NIZK (and more generally, the first fully secure designated-verifier NIZK from a non-generic assumption which does not already imply publicly-verifiable NIZKs), and it answers an open problem recently raised by Kim and Wu (CRYPTO 2018). (ii) A NIZK based on the learning with errors (LWE) assumption, and assuming a non-interactive witness-indistinguishable (NIWI) proof system for bounded distance decoding (BDD). This simplifies and improves upon a recent NIZK from LWE that assumes a NIZK for BDD (Rothblum et al., PKC 2019).
2019
EUROCRYPT
Reusable Designated-Verifier NIZKs for all NP from CDH 📺
Non-interactive zero-knowledge proofs (NIZKs) are a fundamental cryptographic primitive. Despite a long history of research, we only know how to construct NIZKs under a few select assumptions, such as the hardness of factoring or using bilinear maps. Notably, there are no known constructions based on either the computational or decisional Diffie-Hellman (CDH/DDH) assumption without relying on a bilinear map. In this paper, we study a relaxation of NIZKs in the designated verifier setting (DV-NIZK), in which the public common-reference string is generated together with a secret key that is given to the verifier in order to verify proofs. In this setting, we distinguish between one-time and reusable schemes, depending on whether they can be used to prove only a single statement or arbitrarily many statements. For reusable schemes, the main difficulty is to ensure that soundness continues to hold even when the malicious prover learns whether various proofs are accepted or rejected by the verifier. One-time DV-NIZKs are known to exist for general NP statements assuming only public-key encryption. However, prior to this work, we did not have any construction of reusable DV-NIZKs for general NP statements from any assumption not already known to imply standard NIZKs. In this work, we construct reusable DV-NIZKs for general NP statements under the CDH assumption, without requiring a bilinear map. Our construction is based on the hidden-bits paradigm, which was previously used to construct standard NIZKs. We define a cryptographic primitive called a hidden-bits generator (HBG), along with a designated-verifier variant (DV-HBG), which modularly abstract out how to use this paradigm to get both standard NIZKs and reusable DV-NIZKs. We construct a DV-HBG scheme under the CDH assumption by relying on techniques from the Cramer-Shoup hash-proof system, and this yields our reusable DV-NIZK for general NP statements under CDH. We also consider a strengthening of DV-NIZKs to the malicious designated-verifier setting (MDV-NIZK), where the setup consists of an honestly generated common random string and the verifier then gets to choose his own (potentially malicious) public/secret key pair to generate/verify proofs. We construct MDV-NIZKs under the “one-more CDH” assumption without relying on bilinear maps.
2019
EUROCRYPT
Designated Verifier/Prover and Preprocessing NIZKs from Diffie-Hellman Assumptions 📺
In a non-interactive zero-knowledge (NIZK) proof, a prover can non-interactively convince a verifier of a statement without revealing any additional information. Thus far, numerous constructions of NIZKs have been provided in the common reference string (CRS) model (CRS-NIZK) from various assumptions; however, it still remains a long-standing open problem to construct them from tools such as pairing-free groups or lattices. Recently, Kim and Wu (CRYPTO’18) made great progress regarding this problem and constructed the first lattice-based NIZK in a relaxed model called NIZKs in the preprocessing model (PP-NIZKs). In this model, there is a trusted statement-independent preprocessing phase where secret information is generated for the prover and verifier. Depending on whether that secret information can be made public, PP-NIZK captures CRS-NIZK, designated-verifier NIZK (DV-NIZK), and designated-prover NIZK (DP-NIZK) as special cases. It was left as an open problem by Kim and Wu whether we can construct such NIZKs from weak pairing-free group assumptions such as DDH. As a further matter, all constructions of NIZKs from Diffie-Hellman (DH) type assumptions (regardless of whether they are over pairing-free or pairing groups) require the proof size to have a multiplicative overhead $$|C| \cdot \mathsf{poly}(\kappa)$$, where |C| is the size of the circuit that computes the $$\mathbf{NP}$$ relation. In this work, we make progress on constructing (DV, DP, PP)-NIZKs with varying flavors from DH-type assumptions. Our results are summarized as follows: (i) DV-NIZKs for $$\mathbf{NP}$$ from the CDH assumption over pairing-free groups. This is the first construction of such NIZKs on pairing-free groups and resolves the open problem posed by Kim and Wu (CRYPTO’18). (ii) DP-NIZKs for $$\mathbf{NP}$$ with short proof size from a DH-type assumption over pairing groups. Here, the proof size has an additive overhead $$|C| + \mathsf{poly}(\kappa)$$ rather than a multiplicative overhead $$|C| \cdot \mathsf{poly}(\kappa)$$. This is the first construction of such NIZKs (including CRS-NIZKs) that does not rely on the LWE assumption, fully-homomorphic encryption, indistinguishability obfuscation, or non-falsifiable assumptions. (iii) PP-NIZKs for $$\mathbf{NP}$$ with short proof size from the DDH assumption over pairing-free groups. This is the first PP-NIZK that achieves a short proof size from a weak and static DH-type assumption such as DDH. Similarly to the above DP-NIZK, the proof size is $$|C| + \mathsf{poly}(\kappa)$$. This too serves as a solution to the open problem posed by Kim and Wu (CRYPTO’18). Along the way, we construct two new homomorphic authentication (HomAuth) schemes which may be of independent interest.
2019
EUROCRYPT
Building an Efficient Lattice Gadget Toolkit: Subgaussian Sampling and More 📺
Many advanced lattice cryptography applications require efficient algorithms for inverting the so-called “gadget” matrices, which are used to formally describe a digit decomposition problem that produces an output with specific (statistical) properties. The common gadget inversion problems are the classical (often binary) digit decomposition, subgaussian decomposition, Learning with Errors (LWE) decoding, and discrete Gaussian sampling. In this work, we build and implement an efficient lattice gadget toolkit that provides a general treatment of gadget matrices and algorithms for their inversion/sampling. The main contribution of our work is a set of new gadget matrices and algorithms for efficient subgaussian sampling that have a number of major theoretical and practical advantages over previously known algorithms. Another contribution deals with efficient algorithms for LWE decoding and discrete Gaussian sampling in the Residue Number System (RNS) representation. We implement the gadget toolkit in PALISADE and evaluate the performance of our algorithms both in terms of runtime and noise growth. We illustrate the improvements due to our algorithms by implementing a concrete complex application, key-policy attribute-based encryption (KP-ABE), which was previously considered impractical for CPU systems (except for a very small number of attributes). Our runtime improvements for the main bottleneck operation based on subgaussian sampling range from 18x (for 2 attributes) to 289x (for 16 attributes; the maximum number supported by a previous implementation). Our results are applicable to a wide range of other advanced applications in lattice cryptography, such as GSW-based homomorphic encryption schemes, leveled fully homomorphic signatures, other forms of ABE, some program obfuscation constructions, and more.
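For readers unfamiliar with gadget inversion, here is a minimal sketch of the classical deterministic base-b digit decomposition for the gadget vector g = (1, b, ..., b^(k-1)), the baseline that the paper's subgaussian samplers randomize (the parameters below are illustrative, not the paper's):

```python
# Minimal sketch of classical gadget inversion: for the gadget vector
# g = (1, b, b^2, ..., b^(k-1)), g_inv(a) outputs small digits x with
# <g, x> = a (mod q). This deterministic decomposition is the baseline;
# the paper's subgaussian samplers instead output randomized digits whose
# distribution is subgaussian, which tightens noise-growth bounds.
def g_inv(a, b, k):
    digits = []
    for _ in range(k):
        digits.append(a % b)   # next base-b digit, in [0, b)
        a //= b
    return digits

q, b, k = 2**16, 2, 16         # illustrative parameters: k = ceil(log_b(q))
a = 40503
x = g_inv(a, b, k)
assert sum(d * b**i for i, d in enumerate(x)) % q == a % q
```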
2019
EUROCRYPT
Approx-SVP in Ideal Lattices with Pre-processing 📺
We describe an algorithm to solve the approximate Shortest Vector Problem for lattices corresponding to ideals of the ring of integers of an arbitrary number field K. This algorithm has a pre-processing phase, whose run-time is exponential in $$\log|\Delta|$$, with $$\Delta$$ the discriminant of K. Importantly, this pre-processing phase depends only on K. The pre-processing phase outputs an “advice”, whose bit-size is no more than the run-time of the query phase. Given this advice, the query phase of the algorithm takes as input any ideal I of the ring of integers, and outputs an element of I which is at most $$\exp(\widetilde{O}((\log|\Delta|)^{\alpha+1}/n))$$ times longer than a shortest non-zero element of I (with respect to the Euclidean norm of its canonical embedding). This query phase runs in time and space $$\exp(\widetilde{O}((\log|\Delta|)^{\max(2/3,\,1-2\alpha)}))$$ in the classical setting, and $$\exp(\widetilde{O}((\log|\Delta|)^{1-2\alpha}))$$ in the quantum setting. The parameter $$\alpha$$ can be chosen arbitrarily in [0, 1/2]. Both correctness and cost analyses rely on heuristic assumptions, whose validity is consistent with experiments. The algorithm builds upon the algorithms from Cramer et al. [EUROCRYPT 2016] and Cramer et al. [EUROCRYPT 2017]. It relies on the framework from Buchmann [Séminaire de théorie des nombres 1990], which allows one to merge them and to extend their applicability from prime-power cyclotomic fields to all number fields. The cost improvements are obtained by allowing precomputations that depend on the field only.
2019
EUROCRYPT
The General Sieve Kernel and New Records in Lattice Reduction 📺
We propose the General Sieve Kernel (G6K, pronounced /ʒe.si.ka/), an abstract stateful machine supporting a wide variety of lattice reduction strategies based on sieving algorithms. Using the basic instruction set of this abstract stateful machine, we first give concise formulations of previous sieving strategies from the literature and then propose new ones. We then also give a light variant of BKZ exploiting the features of our abstract stateful machine. This encapsulates several recent suggestions (Ducas at Eurocrypt 2018; Laarhoven and Mariano at PQCrypto 2018) to move beyond treating sieving as a blackbox SVP oracle and to utilise strong lattice reduction as preprocessing for sieving. Furthermore, we propose new tricks to minimise the sieving computation required for a given reduction quality with mechanisms such as recycling vectors between sieves, on-the-fly lifting and flexible insertions akin to Deep LLL and recent variants of Random Sampling Reduction. Moreover, we provide a highly optimised, multi-threaded and tweakable implementation of this machine which we make open-source. We then illustrate the performance of this implementation of our sieving strategies by applying G6K to various lattice challenges. In particular, our approach allows us to solve previously unsolved instances of the Darmstadt SVP (151, 153, 155) and LWE (e.g. (75, 0.005)) challenges. Our solution for the SVP-151 challenge was found 400 times faster than the time reported for the SVP-150 challenge, the previous record. For exact-SVP, we observe a performance crossover between G6K and FPLLL’s state-of-the-art implementation of enumeration at dimension 70.
2019
EUROCRYPT
Misuse Attacks on Post-quantum Cryptosystems 📺
Many post-quantum cryptosystems which have been proposed in the National Institute of Standards and Technology (NIST) standardization process follow the same meta-algorithm, but in different algebras or different encoding methods. They usually propose two constructions, one being weaker and the other requiring a random oracle. We focus on the weak version of nine submissions to NIST. Submitters claim no security when the secret key is used several times. In this paper, we analyze how easy it is to run a key recovery under multiple key reuse. We mount a classical key recovery under plaintext checking attacks (i.e., with a plaintext checking oracle saying if a given ciphertext decrypts well to a given plaintext) and a quantum key recovery under chosen ciphertext attacks. In the latter case, we assume quantum access to the decryption oracle.
2019
EUROCRYPT
On ELFs, Deterministic Encryption, and Correlated-Input Security 📺
We construct deterministic public key encryption secure for any constant number of arbitrarily correlated computationally unpredictable messages. Prior works required either random oracles or non-standard knowledge assumptions. In contrast, our constructions are based on the exponential hardness of DDH, which is plausible in elliptic curve groups. Our central tool is a new trapdoored extremely lossy function, which modifies extremely lossy functions by adding a trapdoor.
2019
EUROCRYPT
New Techniques for Efficient Trapdoor Functions and Applications 📺
We develop techniques for constructing trapdoor functions (TDFs) with short image size and advanced security properties. Our approach builds on the recent framework of Garg and Hajiabadi [CRYPTO 2018]. As applications of our techniques, we obtain: (i) the first construction of deterministic-encryption schemes for block-source inputs (both for the CPA and CCA cases) based on the Computational Diffie-Hellman (CDH) assumption; moreover, by applying our efficiency-enhancing techniques, we obtain CDH-based schemes with ciphertext size linear in plaintext size; and (ii) the first construction of lossy TDFs based on the Decisional Diffie-Hellman (DDH) assumption with image size linear in input size, while retaining the lossiness rate of [Peikert-Waters STOC 2008]. Prior to our work, all constructions of deterministic encryption based even on the stronger DDH assumption incurred a quadratic gap between the ciphertext and plaintext sizes. Moreover, all DDH-based constructions of lossy TDFs had image size quadratic in the input size. At a high level, we break the previous quadratic barriers by introducing a novel technique for encoding input bits via hardcore output bits with the use of erasure-resilient codes. All previous schemes used group elements for encoding input bits, resulting in quadratic expansions.
2019
EUROCRYPT
Symbolic Encryption with Pseudorandom Keys 📺
We give an efficient decision procedure that, on input two (acyclic) expressions making arbitrary use of common cryptographic primitives (namely, encryption and pseudorandom generators), determines (in polynomial time) if the two expressions produce computationally indistinguishable distributions for any cryptographic instantiation satisfying the standard security notions of pseudorandomness and indistinguishability under chosen plaintext attack. The procedure works by mapping each expression to a symbolic pattern that captures, in a fully abstract way, the information revealed by the expression to a computationally bounded observer. Our main result shows that if two expressions are mapped to different symbolic patterns, then there are secure pseudorandom generators and encryption schemes for which the two distributions can be distinguished with overwhelming advantage. At the same time if any two (acyclic) expressions are mapped to the same pattern, then the associated distributions are indistinguishable.
2019
EUROCRYPT
Efficient Circuit-Based PSI with Linear Communication 📺
We present a new protocol for computing a circuit which implements the private set intersection functionality (PSI). Using circuits for this task is advantageous over the usage of specific protocols for PSI, since many applications of PSI do not need to compute the intersection itself but rather functions based on the items in the intersection. Our protocol is the first circuit-based PSI protocol to achieve linear communication complexity. It is also concretely more efficient than all previous circuit-based PSI protocols. For example, for sets of size $$2^{20}$$ it improves the communication of the recent work of Pinkas et al. (EUROCRYPT’18) by more than 10 times, and improves the run time by a factor of 2.8 in the LAN setting and by a factor of 5.8 in the WAN setting. Our protocol is based on a protocol for computing oblivious programmable pseudo-random functions (OPPRFs) and, more specifically, on our technique for amortizing the cost of batching together multiple OPPRF invocations.
2019
EUROCRYPT
An Algebraic Approach to Maliciously Secure Private Set Intersection 📺
Private set intersection (PSI) is an important area of research and has been the focus of many works over the past decades. It describes the problem of finding an intersection between the input sets of at least two parties without revealing anything about the input sets apart from their intersection. In this paper, we present a new approach to compute the intersection between sets based on a primitive called Oblivious Linear Function Evaluation (OLE). On an abstract level, we use this primitive to efficiently add two polynomials in a randomized way while preserving the roots of the added polynomials. Setting the roots of the input polynomials to be the elements of the input sets, this directly yields an intersection protocol with optimal asymptotic communication complexity $$O(m\kappa)$$. We highlight that the protocol is information-theoretically secure against a malicious adversary assuming OLE. We also present a natural generalization of the 2-party protocol for the fully malicious multi-party case. Our protocol does away with expensive (homomorphic) threshold encryption and zero-knowledge proofs. Instead, we use simple combinatorial techniques to ensure security. As a result we get a UC-secure protocol with asymptotically optimal communication complexity $$O((n^2+nm)\kappa)$$, where n is the number of parties, m is the set size and $$\kappa$$ is the security parameter. Apart from yielding an asymptotic improvement over previous works, our protocols are also conceptually simple and require only simple field arithmetic. Along the way we develop techniques that might be of independent interest.
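A toy sketch of the root-preserving randomized addition in the clear (no OLE and no security; random scalars stand in for the random polynomials the actual protocol combines under OLE):

```python
# Toy sketch of the core algebra, in the clear: encode each set as the
# polynomial whose roots are its elements. Every common root of pA and pB
# is a root of s = r1*pA + r2*pB, and with overwhelming probability over
# the randomness no other element of A survives, so the roots of s inside
# A are exactly the intersection. The real protocol uses random
# *polynomials* r1, r2 (combined obliviously via OLE) instead of scalars.
import random

P = 2**61 - 1  # prime field

def eval_from_roots(roots, x):
    acc = 1
    for r in roots:                 # evaluate prod (x - r) mod P
        acc = acc * (x - r) % P
    return acc

A, B = {3, 7, 19, 42}, {5, 7, 42, 99}
r1, r2 = random.randrange(1, P), random.randrange(1, P)

def s(x):                           # s(x) = r1*pA(x) + r2*pB(x) mod P
    return (r1 * eval_from_roots(A, x) + r2 * eval_from_roots(B, x)) % P

assert {a for a in A if s(a) == 0} == A & B   # recovers {7, 42}
```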
2019
EUROCRYPT
On Finding Quantum Multi-collisions 📺
A k-collision for a compressing hash function H is a set of k distinct inputs that all map to the same output. In this work, we show that for any constant k, $$\Theta\left(N^{\frac{1}{2}(1-\frac{1}{2^k-1})}\right)$$ quantum queries are both necessary and sufficient to achieve a k-collision with constant probability. This improves on the best prior upper bound (Hosoyamada et al., ASIACRYPT 2017) and provides the first non-trivial lower bound, completely resolving the problem.
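A quick numeric reading of this exponent (the k = 2 comparison with the known quantum collision bound is a standard fact, added here for context, not a claim from the abstract):

```python
# The exponent (1/2) * (1 - 1/(2^k - 1)) interpolates between the quantum
# collision and preimage regimes: k = 2 gives 1/3, matching the known
# Theta(N^(1/3)) quantum collision bound, and it tends to 1/2 as k grows.
for k in range(2, 7):
    e = 0.5 * (1 - 1 / (2**k - 1))
    print(f"k = {k}: Theta(N^{e:.4f})")   # k=2 -> 0.3333, k=3 -> 0.4286, ...
```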
2019
EUROCRYPT
On Quantum Advantage in Information Theoretic Single-Server PIR 📺
In (single-server) Private Information Retrieval (PIR), a server holds a large database $${\mathtt{DB}}$$ of size n, and a client holds an index $$i \in [n]$$ and wishes to retrieve $${\mathtt{DB}}[i]$$ without revealing i to the server. It is well known that information-theoretic privacy, even against an “honest but curious” server, requires $$\Omega(n)$$ communication complexity. This is true even if quantum communication is allowed, and is due to the ability of such an adversarial server to execute the protocol on a superposition of databases instead of on a specific database (“input purification attack”). Nevertheless, there have been some proposals of protocols that achieve sub-linear communication and appear to provide some notion of privacy. Most notably, a protocol due to Le Gall (ToC 2012) with communication complexity $$O(\sqrt{n})$$, and a protocol by Kerenidis et al. (QIC 2016) with communication complexity $$O(\log(n))$$ and O(n) shared entanglement. We show that, in a sense, input purification is the only potent adversarial strategy, and protocols such as the two protocols above are secure in a restricted variant of the quantum honest-but-curious (a.k.a. specious) model. More explicitly, we propose a restricted privacy notion called anchored privacy, where the adversary is forced to execute on a classical database (i.e. the execution is anchored to a classical database). We show that for measurement-free protocols, anchored security against honest adversarial servers implies anchored privacy even against specious adversaries. Finally, we prove that even with (unlimited) pre-shared entanglement it is impossible to achieve security in the standard specious model with sub-linear communication, thus further substantiating the necessity of our relaxation. This lower bound may be of independent interest (in particular recalling that PIR is a special case of Fully Homomorphic Encryption).
2019
EUROCRYPT
Verifier-on-a-Leash: New Schemes for Verifiable Delegated Quantum Computation, with Quasilinear Resources 📺
The problem of reliably certifying the outcome of a computation performed by a quantum device is rapidly gaining relevance. We present two protocols for a classical verifier to verifiably delegate a quantum computation to two non-communicating but entangled quantum provers. Our protocols have near-optimal complexity in terms of the total resources employed by the verifier and the honest provers, with the total number of operations of each party, including the number of entangled pairs of qubits required of the honest provers, scaling as $$O(g\log g)$$ for delegating a circuit of size g. This is in contrast to previous protocols, whose overhead in terms of resources employed, while polynomial, is far beyond what is feasible in practice. Our first protocol requires a number of rounds that is linear in the depth of the circuit being delegated, and is blind, meaning neither prover can learn the circuit or its input. The second protocol is not blind, but requires only a constant number of rounds of interaction. Our main technical innovation is an efficient rigidity theorem which allows a verifier to test that two entangled provers perform measurements specified by an arbitrary m-qubit tensor product of single-qubit Clifford observables on their respective halves of m shared EPR pairs, with a robustness that is independent of m. Our two-prover classical-verifier delegation protocols are obtained by combining this rigidity theorem with a single-prover quantum-verifier protocol for the verifiable delegation of a quantum computation, introduced by Broadbent.
2019
EUROCRYPT
Ring Signatures: Logarithmic-Size, No Setup—from Standard Assumptions 📺
Ring signatures allow for creating signatures on behalf of an ad hoc group of signers, hiding the true identity of the signer among the group. A natural goal is to construct a ring signature scheme for which the signature size is small as a function of the number of ring members. Moreover, such a construction should not rely on a trusted setup and should be proven secure under falsifiable standard assumptions. Despite many years of research this question is still open. In this paper, we present the first construction of size-optimal ring signatures which do not rely on a trusted setup or the random oracle heuristic. Specifically, our scheme can be instantiated from standard assumptions and the size of signatures grows only logarithmically in the number of ring members. We also extend our techniques to the setting of linkable ring signatures, where signatures created using the same signing key can be linked.
2019
EUROCRYPT
A Modular Treatment of Blind Signatures from Identification Schemes 📺
We propose a modular security treatment of blind signatures derived from linear identification schemes in the random oracle model. To this end, we present a general framework that captures several well-known schemes from the literature and allows us to prove their security. Our modular security reduction introduces a new security notion for identification schemes called One-More Man-in-the-Middle security, which we show to be equivalent to the classical One-More-Unforgeability notion for blind signatures. We also propose a generalized version of the Forking Lemma due to Bellare and Neven (CCS 2006) and show how it can be used to greatly improve the understandability of the classical security proofs for blind signature schemes by Pointcheval and Stern (Journal of Cryptology 2000).
2019
EUROCRYPT
Efficient Verifiable Delay Functions 📺
Best Young Researcher Paper
We construct a verifiable delay function (VDF). A VDF is a function whose evaluation requires running a given number of sequential steps, yet the result can be efficiently verified. VDFs have applications in decentralised systems, such as the generation of trustworthy public randomness in a trustless environment, or resource-efficient blockchains. To construct our VDF, we actually build a trapdoor VDF. A trapdoor VDF is essentially a VDF which can be evaluated efficiently by parties who know a secret (the trapdoor). By setting up this scheme so that the trapdoor is unknown to everyone (including the party running the setup, so that there is no need for a trusted setup environment), we obtain a simple VDF. Our construction is based on groups of unknown order such as an RSA group, or the class group of an imaginary quadratic field. The output of our construction is very short (the result and the proof of correctness are each a single element of the group), and the verification of correctness is very efficient.
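A minimal sketch of the evaluate/prove/verify flow of such a construction, under loudly stated simplifications: the toy modulus below is a product of two known Mersenne primes (so its order is not actually unknown), the hash-to-prime is a naive stand-in, and T is tiny. This is a sketch of the mechanism, not the paper's implementation.

```python
# Minimal VDF sketch over a group of (supposedly) unknown order: evaluation
# is T inherently sequential squarings, y = x^(2^T) mod N, while the short
# proof pi = x^(2^T div l) lets anyone verify y with two exponentiations,
# since 2^T = l*(2^T div l) + (2^T mod l) gives pi^l * x^r = x^(2^T).
import hashlib
from sympy import nextprime

N = (2**127 - 1) * (2**61 - 1)   # toy modulus; real deployments use an RSA
T = 1 << 12                      # modulus or class group of unknown order

def hash_to_prime(x, y):
    h = hashlib.sha256(f"{x}|{y}".encode()).hexdigest()
    return nextprime(int(h, 16) % (1 << 128))  # simplified Fiat-Shamir prime

def evaluate(x):
    y = x
    for _ in range(T):                         # the sequential part
        y = y * y % N
    l = hash_to_prime(x, y)
    pi = pow(x, (1 << T) // l, N)              # proof: x^(2^T div l)
    return y, pi

def verify(x, y, pi):
    l = hash_to_prime(x, y)
    r = pow(2, T, l)                           # 2^T mod l, cheap to compute
    return pow(pi, l, N) * pow(x, r, N) % N == y

y, pi = evaluate(5)
assert verify(5, y, pi)
```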
2019
EUROCRYPT
Quantum Lightning Never Strikes the Same State Twice 📺
Best Paper
Public key quantum money can be seen as a version of the quantum no-cloning theorem that holds even when the quantum states can be verified by the adversary. In this work, we investigate quantum lightning, where no-cloning holds even when the adversary herself generates the quantum state to be cloned. We then study quantum money and quantum lightning, showing the following results: We demonstrate the usefulness of quantum lightning beyond quantum money by showing several potential applications, such as generating random strings with a proof of entropy and completely decentralized cryptocurrency without a blockchain, where transactions are instant and local. We give Either/Or results for quantum money/lightning, showing that either signatures/hash functions/commitment schemes meet very strong recently proposed notions of security, or they yield quantum money or lightning. Given the difficulty in constructing public key quantum money, this suggests that natural schemes do attain strong security guarantees. We show that instantiating the quantum money scheme of Aaronson and Christiano [STOC’12] with indistinguishability obfuscation that is secure against quantum computers yields a secure quantum money scheme. This construction can be seen as an instance of our Either/Or result for signatures, giving the first separation between two security notions for signatures from the literature. Finally, we give a plausible construction for quantum lightning, which we prove secure under an assumption related to the multi-collision resistance of degree-2 hash functions. Our construction is inspired by our Either/Or result for hash functions, and yields the first plausible standard model instantiation of a non-collapsing collision resistant hash function. This improves on a result of Unruh [Eurocrypt’16], which is relative to a quantum oracle.
2019
EUROCRYPT
Secret-Sharing Schemes for General and Uniform Access Structures 📺
A secret-sharing scheme allows some authorized sets of parties to reconstruct a secret; the collection of authorized sets is called the access structure. For over 30 years, it was known that any (monotone) collection of authorized sets can be realized by a secret-sharing scheme whose shares are of size $$2^{n-o(n)}$$, and until recently no better scheme was known. In a recent breakthrough, Liu and Vaikuntanathan (STOC 2018) have reduced the share size to $$O(2^{0.994n})$$. Our first contribution is improving the exponent of secret sharing down to 0.892. For the special case of linear secret-sharing schemes, we get an exponent of 0.942 (compared to 0.999 of Liu and Vaikuntanathan). Motivated by the construction of Liu and Vaikuntanathan, we study secret-sharing schemes for uniform access structures. An access structure is k-uniform if all sets of size larger than k are authorized, all sets of size smaller than k are unauthorized, and each set of size k can be either authorized or unauthorized. The construction of Liu and Vaikuntanathan starts from protocols for conditional disclosure of secrets, constructs secret-sharing schemes for uniform access structures from them, and combines these schemes in order to obtain secret-sharing schemes for general access structures. Our second contribution in this paper is constructions of secret-sharing schemes for uniform access structures. We achieve the following results: (i) A secret-sharing scheme for k-uniform access structures for large secrets in which the share size is $$O(k^2)$$ times the size of the secret. (ii) A linear secret-sharing scheme for k-uniform access structures for a binary secret in which the share size is $$\tilde{O}(2^{h(k/n)n/2})$$ (where h is the binary entropy function). By counting arguments, this construction is optimal (up to polynomial factors). (iii) A secret-sharing scheme for k-uniform access structures for a binary secret in which the share size is $$2^{\tilde{O}(\sqrt{k \log n})}$$. Our third contribution is a construction of ad-hoc PSM protocols, i.e., PSM protocols in which only a subset of the parties will compute a function on their inputs. This result is based on ideas we used in the construction of secret-sharing schemes for k-uniform access structures for a binary secret.
2019
EUROCRYPT
Towards Optimal Robust Secret Sharing with Security Against a Rushing Adversary 📺
Robust secret sharing enables the reconstruction of a secret-shared message in the presence of up to t (out of n) incorrect shares. The most challenging case is when $$n = 2t+1$$, which is the largest t for which the task is still possible, up to a small error probability $$2^{-\kappa}$$ and with some overhead in the share size. Recently, Bishop, Pastro, Rajaraman and Wichs [3] proposed a scheme with an (almost) optimal overhead of $$\widetilde{O}(\kappa)$$. This seems to answer the open question posed by Cevallos et al. [6], who proposed a scheme with overhead of $$\widetilde{O}(n+\kappa)$$ and asked whether the linear dependency on n was necessary or not. However, a subtle issue with Bishop et al.’s solution is that it (implicitly) assumes a non-rushing adversary, and thus it satisfies a weaker notion of security compared to the scheme by Cevallos et al. [6], or to the classical scheme by Rabin and BenOr [13]. In this work, we almost close this gap. We propose a new robust secret sharing scheme that offers full security against a rushing adversary, and that has an overhead of $$O(\kappa n^\varepsilon)$$, where $$\varepsilon > 0$$ is arbitrary but fixed. This $$n^\varepsilon$$-factor is obviously worse than the $$\mathrm{polylog}(n)$$-factor hidden in the $$\widetilde{O}$$ notation of the scheme of Bishop et al. [3], but it greatly improves on the linear dependency on n of the best known scheme that features security against a rushing adversary (when $$\kappa$$ is substantially smaller than n). A small variation of our scheme has the same $$\widetilde{O}(\kappa)$$ overhead as the scheme of Bishop et al. and achieves security against a rushing adversary, but suffers from a (slightly) superpolynomial reconstruction complexity.
2019
EUROCRYPT
Simple Schemes in the Bounded Storage Model 📺
The bounded storage model promises unconditional security proofs against computationally unbounded adversaries, so long as the adversary’s space is bounded. In this work, we develop simple new constructions of two-party key agreement, bit commitment, and oblivious transfer in this model. In addition to simplicity, our constructions have several advantages over prior work, including an improved number of rounds and enhanced correctness. Our schemes are based on Raz’s lower bound for learning parities.
2019
EUROCRYPT
From Collisions to Chosen-Prefix Collisions Application to Full SHA-1 📺
A chosen-prefix collision attack is a stronger variant of a collision attack, where an arbitrary pair of challenge prefixes is turned into a collision. Chosen-prefix collisions are usually significantly harder to produce than (identical-prefix) collisions, but the practical impact of such an attack is much larger. While many cryptographic constructions rely on collision resistance for their security proofs, collision attacks are hard to turn into breaks of concrete protocols, because the adversary has only limited control over the colliding messages. On the other hand, chosen-prefix collisions have been shown to break certificates (by creating a rogue CA) and many internet protocols (TLS, SSH, IPsec). In this article, we propose new techniques to turn collision attacks into chosen-prefix collision attacks. Our strategy is composed of two phases: first, a birthday search that aims at taking the random chaining-variable difference (due to the chosen-prefix model) to a set of pre-defined target differences; then, using a multi-block approach and carefully analysing the clustering effect, we map this new chaining-variable difference to a colliding pair of states using techniques developed for collision attacks. We apply these techniques to MD5 and SHA-1, and obtain improved attacks. In particular, we have a chosen-prefix collision attack against SHA-1 with complexity between $$2^{66.9}$$ and $$2^{69.4}$$ (depending on assumptions about the cost of finding near-collision blocks), while the best previously known attack has complexity $$2^{77.1}$$. This is within a small factor of the complexity of the classical collision attack on SHA-1 (estimated as $$2^{64.7}$$). This represents yet another warning that industries and users have to move away from SHA-1 as soon as possible.
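To make the two-phase strategy concrete, here is a hedged toy sketch of the first (birthday) phase in Python: a truncated SHA-256 stands in for the real compression function, the chaining variable is 24 bits, and the XOR difference, target set and all names are illustrative choices of ours, not the paper's parameters.

    import hashlib, os

    BITS = 24  # toy chaining-variable width; SHA-1's is 160 bits

    def compress(chain, block):
        """Toy Merkle-Damgaard step: truncated SHA-256 stands in for the
        real compression function (purely for illustration)."""
        h = hashlib.sha256(chain.to_bytes(4, 'big') + block).digest()
        return int.from_bytes(h[:BITS // 8], 'big')

    def birthday_phase(p1, p2, targets):
        """Phase 1: append random one-block suffixes to the two prefixes
        until the chaining-variable difference (XOR here, for simplicity)
        lands in the pre-defined set of target differences."""
        c1, c2 = compress(0, p1), compress(0, p2)
        side1, side2 = {}, {}
        while True:
            b = os.urandom(8)
            v1, v2 = compress(c1, b), compress(c2, b)
            side1[v1], side2[v2] = b, b
            for t in targets:
                if (v1 ^ t) in side2:
                    return b, side2[v1 ^ t], t
                if (v2 ^ t) in side1:
                    return side1[v2 ^ t], b, t

    s1, s2, diff = birthday_phase(b"prefix-A", b"prefix-B", {0x000001, 0x000010})
    # Phase 2 (not shown) would cancel `diff` using near-collision blocks.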
2019
EUROCRYPT
Preimage Attacks on Round-Reduced Keccak-224/256 via an Allocating Approach 📺
We present new preimage attacks on standard Keccak-224 and Keccak-256 reduced to 3 and 4 rounds. An allocating approach is used in the attacks: the whole complexity is split across two stages, such that fewer constraints are considered and the complexity is lowered in each stage. Specifically, we try to find a 2-block preimage, instead of a 1-block one, for a given hash value, and the first and second message blocks are found in two stages, respectively. Both message blocks are constrained by a set of newly proposed conditions on the middle state, which are weaker than those imposed by the initial value and the hash value. Thus, the complexities of the two stages are both lower than that of finding a 1-block preimage directly. On top of the basic allocating approach, an improved method is given to balance the complexities of the two stages and hence obtain the optimal attacks. As a result, we present the best theoretical preimage attacks on Keccak-224 and Keccak-256 reduced to 3 and 4 rounds. Moreover, we practically found a (second) preimage for 3-round Keccak-224 with a complexity of $$2^{39.39}$$.
2019
EUROCRYPT
bison – Instantiating the Whitened Swap-Or-Not Construction 📺
We give the first practical instance – bison – of the Whitened Swap-Or-Not construction. After clarifying inherent limitations of the construction, we point out that this way of building block ciphers allows easy and very strong arguments against differential attacks.
2019
EUROCRYPT
Worst-Case Hardness for LPN and Cryptographic Hashing via Code Smoothing 📺
We present a worst case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires an extremely high noise rate of $$1/2-1/\mathrm{poly}(n)$$. Thus we can only show that "very hard" LPN is harder than some "very mildly hard" worst case problem. We note that LPN with noise $$1/2-1/\mathrm{poly}(n)$$ already implies symmetric cryptography. Specifically, we consider the (n, m, w)-nearest codeword problem ((n, m, w)-NCP), which takes as input a generating matrix for a binary linear code in m dimensions and rank n, and a target vector which is very close to the code (Hamming distance at most w), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $$w/m \approx \log^2 n/n$$, (n, m, w)-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $$1/2-1/\mathrm{poly}(n)$$. Our proof relies on a smoothing lemma for codes which we show to have further implications: we show that (n, m, w)-NCP with the aforementioned parameters lies in the complexity class $$\mathrm{Search}\hbox{-}\mathcal{BPP}^\mathcal{SZK}$$ (i.e., reducible to a problem that has a statistical zero knowledge protocol), implying that it is unlikely to be $$\mathcal{NP}$$-hard. We then show that the hardness of LPN with very low noise rate $$\log^2(n)/n$$ implies the existence of collision resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $$\mathcal{BPP}^\mathcal{SZK}$$).
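For readers unfamiliar with LPN, here is a minimal numpy sketch of the sample distribution discussed above (all parameters are illustrative; the noise rate shown mirrors the $$1/2-1/\mathrm{poly}(n)$$ regime of the reduction):

    import numpy as np

    def lpn_samples(secret, m, noise_rate, rng):
        """Draw m LPN samples (A, b) with b = A.s + e over GF(2),
        where each noise bit is 1 with probability noise_rate."""
        n = secret.shape[0]
        A = rng.integers(0, 2, size=(m, n), dtype=np.uint8)
        e = (rng.random(m) < noise_rate).astype(np.uint8)
        b = (A @ secret + e) % 2
        return A, b.astype(np.uint8)

    rng = np.random.default_rng(0)
    n = 32
    s = rng.integers(0, 2, size=n, dtype=np.uint8)
    # The reduction above uses noise 1/2 - 1/poly(n), e.g.:
    A, b = lpn_samples(s, m=1000, noise_rate=0.5 - 1.0 / n**2, rng=rng)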
2019
EUROCRYPT
New Techniques for Obfuscating Conjunctions 📺
A conjunction is a function $$f(x_1,\dots,x_n) = \bigwedge_{i \in S} l_i$$ where $$S \subseteq [n]$$ and each $$l_i$$ is $$x_i$$ or $$\lnot x_i$$. Bishop et al. (CRYPTO 2018) recently proposed obfuscating conjunctions by embedding them in the error positions of a noisy Reed-Solomon codeword and placing the codeword in a group exponent. They prove distributional virtual black box (VBB) security in the generic group model for random conjunctions where $$|S| \ge 0.226n$$. While conjunction obfuscation is known from LWE [31, 47], these constructions rely on substantial technical machinery. In this work, we conduct an extensive study of simple conjunction obfuscation techniques. We abstract the Bishop et al. scheme to obtain an equivalent yet more efficient "dual" scheme that can handle conjunctions over exponential-size alphabets. This scheme admits a straightforward proof of generic group security, which we combine with a novel combinatorial argument to obtain distributional VBB security for |S| of any size. If we replace the Reed-Solomon code with a random binary linear code, we can prove security from standard LPN and avoid encoding in a group. This addresses an open problem posed by Bishop et al. to prove security of this simple approach in the standard model. We give a new construction that achieves information theoretic distributional VBB security and weak functionality preservation for $$|S| \ge n - n^\delta$$ and $$\delta < 1$$. Assuming discrete log and $$\delta < 1/2$$, we satisfy a stronger notion of functionality preservation for computationally bounded adversaries while still achieving information theoretic security.
2019
EUROCRYPT
Distributional Collision Resistance Beyond One-Way Functions 📺
Distributional collision resistance is a relaxation of collision resistance that only requires that it is hard to sample a collision (x, y) where x is uniformly random and y is uniformly random conditioned on colliding with x. The notion lies between one-wayness and collision resistance, but its exact power is still not well understood. On one hand, distributional collision resistant hash functions cannot be built from one-way functions in a black-box way, which may suggest that they are stronger. On the other hand, so far, they have not yielded any applications beyond one-way functions. Assuming distributional collision resistant hash functions, we construct a constant-round statistically hiding commitment scheme. Such commitments are not known based on one-way functions and are impossible to obtain from one-way functions in a black-box way. Our construction relies on the reduction from inaccessible entropy generators to statistically hiding commitments by Haitner et al. (STOC '09). In the converse direction, we show that two-message statistically hiding commitments imply distributional collision resistance, thereby establishing a loose equivalence between the two notions. A corollary of the first result is that constant-round statistically hiding commitments are implied by average-case hardness in the class $$\textsf{SZK}$$ (which is known to imply distributional collision resistance). This implication seems to be folklore, but to the best of our knowledge has not been proven explicitly. We provide yet another proof of this implication, which is arguably more direct than the one going through distributional collision resistance.
2019
EUROCRYPT
Multi-target Attacks on the Picnic Signature Scheme and Related Protocols 📺
Picnic is a signature scheme that was presented at ACM CCS 2017 by Chase et al. and submitted to NIST's post-quantum standardization project. Among all submissions to NIST's project, Picnic is one of the most innovative, making use of recent progress in the construction of practically efficient zero-knowledge (ZK) protocols for general circuits. In this paper, we devise multi-target attacks on Picnic and its underlying ZK protocol, ZKB++. Given access to S signatures, produced by a single or by several users, our attack can (information theoretically) recover the $$\kappa$$-bit signing key of a user with complexity of about $$2^{\kappa-7}/S$$. This is faster than Picnic's claimed $$2^{\kappa}$$ security against classical (non-quantum) attacks by a factor of $$2^7 \cdot S$$ (as each signature contains about $$2^7$$ attack targets). Whereas in most multi-target attacks the attacker can easily sort and match the available targets, this is not the case in our attack on Picnic, as different bits of information are available for each target. Consequently, it is challenging to reach the information theoretic complexity in a computational model, and we had to perform cryptanalytic optimizations by carefully analyzing ZKB++ and its underlying circuit. Our best attack for $$\kappa = 128$$ has time complexity $$T = 2^{77}$$ for $$S = 2^{64}$$. Alternatively, we can reach the information theoretic complexity of $$T = 2^{64}$$ for $$S = 2^{57}$$, given that all signatures are produced with the same signing key. Our attack exploits a weakness in the way that the Picnic signing algorithm uses a pseudo-random generator. The weakness is fixed in the recent Picnic 2.0 version. In addition to our attack on Picnic, we show that a recently proposed improvement of the ZKB++ protocol (due to Katz, Kolesnikov and Wang) is vulnerable to a similar multi-target attack.
2019
EUROCRYPT
Durandal: A Rank Metric Based Signature Scheme 📺
We describe a variation of the Schnorr-Lyubashevsky approach to devising signature schemes that is adapted to rank based cryptography. This new approach enables us to obtain a randomization of the signature, which previously seemed difficult to derive for code-based cryptography. We provide a detailed analysis of attacks and an EUF-CMA proof for our scheme. Our scheme relies on the security of the Ideal Rank Support Learning and the Ideal Rank Syndrome problems and a newly introduced problem: Product Spaces Subspaces Indistinguishability, for which we give a detailed analysis. Overall the parameters we propose are efficient and comparable in terms of signature size to the Dilithium lattice-based scheme, with a signature size of 4 kB for a public key of size less than 20 kB.
2019
EUROCRYPT
SeaSign: Compact Isogeny Signatures from Class Group Actions 📺
We give a new signature scheme for isogenies that combines the class group actions of CSIDH with the notion of Fiat-Shamir with aborts. Our techniques allow signatures of size less than one kilobyte at the 128-bit security level, even with a tight security reduction (to a non-standard problem) in the quantum random oracle model. Hence our signatures are potentially shorter than lattice signatures, but signing and verification are currently very expensive.
2019
EUROCRYPT
Differential Privacy and the People's Data 📺
IACR Distinguished Lecture
2019
FSE
On Invariant Attacks 📺
Invited talk
2019
TCHES
Methodology for Efficient CNN Architectures in Profiling Attacks 📺
The side-channel community recently investigated a new approach, based on deep learning, to significantly improve profiled attacks against embedded systems. Previous works have shown the benefit of using convolutional neural networks (CNNs) to limit the effect of some countermeasures such as desynchronization. Compared with template attacks, deep learning techniques can deal with trace misalignment and the high dimensionality of the data; pre-processing is no longer mandatory. However, the performance of attacks depends to a great extent on the choice of each hyperparameter used to configure a CNN architecture. Hence, we cannot perfectly harness the potential of deep neural networks without a clear understanding of the network's inner workings. To reduce this gap, we propose to clearly explain the role of each hyperparameter during the feature selection phase using specific visualization techniques, including weight visualization, gradient visualization and heatmaps. By highlighting which features are retained by filters, heatmaps come in handy when a security evaluator tries to interpret and understand the efficiency of a CNN. We propose a methodology for building efficient CNN architectures in terms of attack efficiency and network complexity, even in the presence of desynchronization. We evaluate our methodology using public datasets with and without desynchronization. In each case, our methodology outperforms the previous state-of-the-art CNN models while significantly reducing network complexity. Our networks are up to 25 times more efficient than the previous state of the art, while their complexity is up to 31,810 times smaller. Our results show that CNNs do not need to be very complex to perform well in the side-channel context.
2019
TCHES
Recovering the CTR_DRBG state in 256 traces 📺
The NIST CTR_DRBG specification prescribes a maximum size for each random number request, limiting the number of encryptions in CTR mode with the same key to 4096. Jaffe's attack on AES in CTR mode without knowledge of the nonce, from CHES 2007, requires $$2^{16}$$ traces, which is safely above this limit. In this work, we exhibit an attack that requires only 256 traces, which is well within the NIST limits. We use simulated traces to investigate the success probability as a function of the signal-to-noise ratio. We also demonstrate its success in practice by attacking an AES-CTR implementation on a Cortex-M4, among others, and recovering both the key and the nonce. Our traces and code are made openly available for reproducibility.
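A hedged sketch of the CTR structure the attack targets, using the pycryptodome package for AES (the library choice and the exact counter handling are assumptions of this illustration); it shows why a compliant request still hands the attacker up to 4096 encryptions under one key, and why 256 traces fit in a single request:

    from Crypto.Cipher import AES  # pycryptodome (assumed available)

    MAX_BLOCKS_PER_REQUEST = 4096  # NIST limit quoted in the abstract

    def ctr_generate(key, initial_counter, n_blocks):
        """Keystream generation as in a CTR_DRBG generate call: the same
        key encrypts successive counter values, so one request exposes at
        most 4096 encryptions under that key."""
        assert n_blocks <= MAX_BLOCKS_PER_REQUEST
        ecb = AES.new(key, AES.MODE_ECB)
        out = b''
        v = initial_counter
        for _ in range(n_blocks):
            v = (v + 1) % 2**128
            out += ecb.encrypt(v.to_bytes(16, 'big'))
        return out

    ks = ctr_generate(bytes(32), initial_counter=0, n_blocks=256)
    # The paper's attack needs side-channel traces of only 256 such block
    # encryptions, comfortably within a single compliant request.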
2019
TCHES
Lightweight Authenticated Encryption Mode of Operation for Tweakable Block Ciphers 📺
The use of a small block length is a common strategy when designing lightweight (tweakable) block ciphers (TBCs), and several 64-bit primitives have been proposed. However, when such a 64-bit primitive is used for authenticated encryption with birthday-bound security, it offers only 32-bit data complexity, which is within reach of practical attacks. To employ a short block length without compromising security, we propose PFB, a lightweight TBC-based authenticated encryption with associated data mode which achieves beyond-birthday-bound security. For this purpose, we extend iCOFB, which is originally defined with a tweakable random function. Unlike iCOFB, the proposed method can be instantiated with a TBC using a fixed tweak length and can handle variable-length data. Moreover, its security bound is improved and is independent of the data length; this improves the key lifetime, particularly for lightweight block ciphers with a small block size. The proposed method also covers a broader class of feedback functions because of the generalization presented in our proof. We evaluate the concrete hardware performance of PFB, which benefits from the small block length and shows particularly good performance in threshold implementations.
2019
TCHES
SITM: See-In-The-Middle Side-Channel Assisted Middle Round Differential Cryptanalysis on SPN Block Ciphers 📺
Side-channel analysis constitutes a powerful attack vector against cryptographic implementations. Techniques such as power and electromagnetic side-channel analysis have been extensively studied to provide an efficient way to recover the secret key used in cryptographic algorithms. To protect against such attacks, countermeasure designers have developed protection methods, such as masking and hiding, to make the attacks harder. However, due to significant overheads, these protections are sometimes deployed only at the beginning and the end of encryption, which are the main targets for side-channel attacks. In this paper, we present a methodology for side-channel-assisted differential cryptanalysis attacks targeting the middle rounds of block cipher implementations. Such a method presents a powerful attack vector against designs that normally only protect the beginning and end rounds of ciphers. We generalize the attack to SPN-based ciphers and calculate the effort the attacker needs to recover the secret key. We provide experimental results on 8-bit and 32-bit microcontrollers. We provide case studies on state-of-the-art symmetric block ciphers, such as AES, SKINNY, and PRESENT. Furthermore, we show how to attack shuffling-protected implementations.
2019
TCHES
Power Analysis on NTRU Prime 📺
This paper applies a variety of power analysis techniques to several implementations of NTRU Prime, a Round 2 submission to the NIST PQC Standardization Project. The techniques include vertical correlation power analysis, horizontal in-depth correlation power analysis, online template attacks, and chosen-input simple power analysis. The implementations include the reference one, the one optimized using smladx, and three protected ones. Adversaries in this study can fully recover private keys with a single trace of short observation span, with a few template traces from a fully controlled device similar to the target and no a priori power model, or sometimes even with the naked eye. The techniques target the constant-time generic polynomial multiplications in the product scanning method. Though this work focuses on decapsulation, the techniques also apply to the key generation and encapsulation of NTRU Prime. Moreover, they apply to ideal-lattice-based cryptosystems where each private-key coefficient comes from a small set of possibilities.
2019
TCHES
Share-slicing: Friend or Foe? 📺
Masking is a well-loved and widely deployed countermeasure against side-channel attacks, in particular in software. Under certain assumptions (w.r.t. independence and noise level), masking provably prevents attacks up to a certain security order and leads to a predictable increase in the number of required leakages for successful attacks beyond this order. The noise level in typical processors where software masking is used may not be very high, thus low masking orders are not sufficient for real-world security. Higher-order masking, however, comes at a great cost, and therefore a number of techniques have been published over the years that make such implementations more efficient via parallelisation in the form of bit or share slicing. We take two highly regarded schemes (ISW and Barthe et al.), and some corresponding open source implementations that make use of share slicing, and discuss their true security on an ARM Cortex-M0 and an ARM Cortex-M3 processor (both from the LPC series). We show that micro-architectural features of the M0 and M3 undermine the independence assumptions made in masking proofs, and thus their theoretical guarantees do not translate into practice (worse, it seems unpredictable at which order leakage can be expected). Our results demonstrate how difficult it is to link theoretical security proofs to practical real-world security guarantees.
2019
TCHES
CAS-Lock: A Security-Corruptibility Trade-off Resilient Logic Locking Scheme 📺
Logic locking has recently been proposed as a solution for protecting gate-level semiconductor intellectual property (IP). However, numerous attacks have been mounted on this technique, which either compromise the locking key or restore the original circuit functionality. SAT attacks leverage golden IC information to rule out all incorrect key classes, while bypass and removal attacks exploit the limited output corruptibility and/or structural traces of SAT-resistant locking schemes. In this paper, we propose a new lightweight locking technique: CAS-Lock (cascaded locking), which nullifies both SAT and bypass attacks while simultaneously maintaining nontrivial output corruptibility. This property of CAS-Lock is in stark contrast to the well-accepted notion that there is an inherent trade-off between output corruptibility and SAT resistance. We theoretically and experimentally validate the SAT resistance of CAS-Lock, and show that it reduces the attack to brute force, regardless of its construction. Further, we evaluate its resistance to recently proposed approximate SAT attacks (i.e., AppSAT). We also propose a modified version of CAS-Lock (mirrored CAS-Lock, or M-CAS) to protect against removal attacks. M-CAS allows a trade-off evaluation between removal attack and SAT attack resiliency, while incurring minimal area overhead. We also show how M-CAS parameters such as the implemented Boolean function and selected key can be tuned by the designer so that a desired level of protection against all known attacks can be achieved.
2019
TCHES
Improved Heuristics for Short Linear Programs 📺
In this article, we propose new heuristics for minimising the number of XOR gates required to compute a system of linear equations over GF(2). We first revisit the well-known Boyar-Peralta strategy and argue that a proper randomisation process during the selection phases can lead to great improvements. We then propose new selection criteria and explain their rationale. Our new methods outperform state-of-the-art algorithms such as Paar or Boyar-Peralta (or open synthesis tools such as Yosys) when tested on random matrices with various densities. They can be applied to matrices of reasonable sizes (up to about 32 × 32). Notably, we provide a new implementation record for the matrix underlying the MixColumns function of the AES block cipher, requiring only 94 XORs.
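For context, here is a minimal Python sketch of the greedy common-subexpression heuristic attributed to Paar, one of the baselines mentioned above (this is not the paper's new randomised criteria; the toy matrix and names are ours):

    import numpy as np

    def paar_xor_count(M):
        """Greedy heuristic (Paar-style): repeatedly factor out the pair of
        columns that co-occurs in the most rows, counting one XOR per
        factored pair plus the XORs left in the final rows."""
        M = [set(np.nonzero(row)[0]) for row in np.asarray(M)]
        cols = max((max(r) for r in M if r), default=-1) + 1
        xors = 0
        while True:
            best, pair = 1, None
            for i in range(cols):
                for j in range(i + 1, cols):
                    c = sum(1 for r in M if i in r and j in r)
                    if c > best:
                        best, pair = c, (i, j)
            if pair is None:
                break
            i, j = pair
            xors += 1            # compute t = x_i XOR x_j once
            for r in M:
                if i in r and j in r:
                    r -= {i, j}  # replace the pair by the new variable t
                    r.add(cols)
            cols += 1
        # remaining rows cost (popcount - 1) XORs each
        return xors + sum(len(r) - 1 for r in M if r)

    # 4x4 toy matrix over GF(2); rows are output equations.
    M = [[1, 1, 0, 1], [1, 1, 1, 0], [0, 1, 1, 1], [1, 0, 1, 1]]
    print(paar_xor_count(M))  # naive row-by-row cost would be 8 XORs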
2019
TCHES
Cache vs. Key-Dependency: Side Channeling an Implementation of Pilsung 📺
Over the past two decades, cache attacks have been identified as a threat to the security of cipher implementations. These attacks recover secret information by combining observations of the victim cache accesses with the knowledge of the internal structure of the cipher. So far, cache attacks have been applied to ciphers that have fixed state transformations, leaving open the question of whether using secret, key-dependent transformations enhances the security against such attacks. In this paper we investigate this question. We look at an implementation of the North Korean cipher Pilsung, as reverse-engineered by Kryptos Logic. Like AES, Pilsung is a permutation-substitution cipher, but unlike AES, both the substitution and the permutation steps in Pilsung depend on the key, and are not known to the attacker. We analyze Pilsung and design a cache-based attack. We improve the state of the art by developing techniques for reversing secret-dependent transformations. Our attack, which requires an average of eight minutes on a typical laptop computer, demonstrates that secret transformations do not necessarily protect ciphers against side channel attacks.
2019
TCHES
TEDT, a Leakage-Resist AEAD Mode for High Physical Security Applications 📺
We propose TEDT, a new Authenticated Encryption with Associated Data (AEAD) mode leveraging Tweakable Block Ciphers (TBCs). TEDT provides the following features: (i) it offers full leakage resistance, that is, it limits the exploitability of physical leakages via side-channel attacks, even if these leakages happen during every message encryption and decryption operation; moreover, the leakage integrity bound is asymptotically optimal in the multi-user setting; (ii) it offers nonce misuse-resilience, that is, the repetition of nonces does not impact the security of ciphertexts produced with fresh nonces; (iii) it can be implemented with a remarkably low energy cost when strong resistance to side-channel attacks is needed, supports online encryption and handles static and incremental associated data efficiently. Concretely, TEDT encourages so-called leveled implementations, in which two TBCs are implemented: the first one needs strong and energy-demanding protections against side-channel attacks but is used in a limited way, while the other only requires weak and energy-efficient protections and performs the bulk of the computation. As a result, TEDT leads to more energy-efficient implementations compared to traditional AEAD schemes, whose side-channel security requires uniformly protecting every (T)BC execution.
2019
TCHES
Bluethunder: A 2-level Directional Predictor Based Side-Channel Attack against SGX 📺
Software Guard Extensions (SGX) is a hardware-based trusted execution environment (TEE) implemented in recent Intel commodity processors. By isolating the memory of security-critical applications from untrusted software, this mechanism provides users with a strongly shielded environment called an enclave for executing programs safely. However, recent studies have demonstrated that SGX enclaves are vulnerable to side-channel attacks. In order to deal with these attacks, several protection techniques have been studied and utilized. In this paper, we explore a new pattern history table (PHT) based side-channel attack against SGX named Bluethunder, which can bypass existing protection techniques and reveal the secret information inside an enclave. Compared to existing PHT-based attacks (such as Branchscope [ERAG+18]), Bluethunder abuses the 2-level directional predictor in the branch prediction unit, on top of which we develop an exploitation methodology to disclose the input-dependent control flow in an enclave. Since the cost of training the 2-level predictor is low, Bluethunder can achieve a high bandwidth during the attack. We evaluate our attacks on two case studies: extracting the format string information in the vfprintf function in the Intel SGX SDK and attacking the implementation of the RSA decryption algorithm in mbed TLS. Both attacks show that Bluethunder can recover fine-grained information inside an enclave with low training overhead, which outperforms the latest PHT-based side-channel attack (Branchscope) by 52×. Specifically, in the second attack, Bluethunder can recover the RSA private key with 96.76% accuracy in a single run.
2019
TCHES
A Comprehensive Study of Deep Learning for Side-Channel Analysis 📺
Recently, several studies have been published on the application of deep learning to enhance side-channel attacks (SCA). These seminal works have practically validated the soundness of the approach, especially against implementations protected by masking or by jittering. Concurrently, important open issues have emerged. Among them, the relevance of machine (and thereby deep) learning based SCA has been questioned in several papers, based on the lack of relation between accuracy, a typical performance metric used in machine learning, and common SCA metrics like the guessing entropy or the key-discrimination success rate. Also, the impact of classical side-channel countermeasures on the efficiency of deep learning has been questioned, in particular by the semiconductor industry. Both questions highlight the importance of studying the theoretical soundness of deep learning in the context of side channels and of developing means to quantify its efficiency, especially with respect to the optimality bounds published so far in the literature for side-channel leakage exploitation. The first main contribution of this paper directly concerns the latter point. It is indeed proved that minimizing the Negative Log Likelihood (NLL for short) loss function during the training of deep neural networks is actually asymptotically equivalent to maximizing the Perceived Information introduced by Renauld et al. at EUROCRYPT 2011 as a lower bound of the Mutual Information between the leakage and the target secret. Hence, such a training can be considered as an efficient and effective estimation of the PI, and thereby of the MI (known to be complex to accurately estimate in the context of secure implementations). As a second direct consequence of our main contribution, it is argued that, in a side-channel exploitation context, choosing the NLL loss function to drive the training is sound from an information theory point of view. As a third contribution, classical countermeasures like Boolean masking or execution flow shuffling, initially dedicated to classical SCA, are proved to remain sound against deep-learning-based attacks.
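A hedged worked-equation sketch of the claimed NLL/PI link, using standard empirical estimators (our notation, not necessarily the paper's). With a model $$\hat{p}(k \mid l)$$ trained on N labelled leakages,

$$\mathrm{NLL} = -\frac{1}{N}\sum_{i=1}^{N}\log_2 \hat{p}(k_i \mid l_i), \qquad \widehat{\mathrm{PI}}(K;L) = H(K) + \frac{1}{N}\sum_{i=1}^{N}\log_2 \hat{p}(k_i \mid l_i),$$

so $$\widehat{\mathrm{PI}} = H(K) - \mathrm{NLL}$$: since $$H(K)$$ is a constant of the experiment, minimizing the NLL is the same as maximizing the PI estimate, which in turn lower-bounds the mutual information $$I(K;L)$$ for a well-specified model.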
2019
TCHES
AuCPace: Efficient verifier-based PAKE protocol tailored for the IIoT 📺
Connectivity is increasingly integrated into products and devices that previously operated in a stand-alone setting. This observation holds for many consumer applications in the so-called "Internet of Things" (IoT) as well as for corresponding industry applications (IIoT), such as industrial process sensors. Often the only practicable means of authentication for human users is a password. The security of password-based authentication schemes frequently forms the weakest point of the security infrastructure. In this paper we first explain why a tailored protocol designed for the IIoT use case is considered necessary. The differences between the IIoT and conventional Internet use cases result in largely modified threats and require special procedures to allow both convenient and secure use in the highly constrained industrial setting. Specifically, the use of a verifier-based password-authenticated key-exchange (V-PAKE) protocol as a hedge against public-key-infrastructure (PKI) failures is considered important. Availability concerns in case of failures of (part of) the communication infrastructure make local storage of access credentials mandatory. The larger threat of physical attacks makes it important to use memory-hard password hashing. This paper presents a correspondingly tailored protocol, AuCPace, together with a security proof within the Universal Composability (UC) framework considering fully adaptive adversaries. We also introduce a new security notion of partially augmented PAKE, which provides specific performance advantages and makes it suitable for a larger set of IIoT applications. We also present an actual instantiation of our protocol, AuCPace25519, and report performance results on ARM Cortex-M0 and Cortex-M4 microcontrollers. Our implementation realizes new speed records for PAKE and X25519 Diffie-Hellman on the ARM Cortex-M4 architecture.
2019
TCHES
Best Information is Most Successful 📺
Using information-theoretic tools, this paper establishes a mathematical link between the probability of success of a side-channel attack and the minimum number of queries needed to reach a given success rate, valid for any possible distinguishing rule and with the best possible knowledge on the attacker's side. This link is a lower bound on the number of queries that depends strongly on Shannon's mutual information between the traces and the secret key. This leads us to derive upper bounds on the mutual information that are as tight as possible and can be easily calculated. It turns out that, in the case of additive white Gaussian noise, the bound on the probability of success of any attack is directly related to the signal-to-noise ratio. This leads to very easy computations and predictions of the success rate in any leakage model.
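As an illustration of such a link (a generic Fano-style bound consistent with the abstract, not necessarily the paper's exact statement): if each of the q queries leaks at most $$I(X;Y)$$ bits about the key and the attack succeeds with probability $$P_s$$ over a key space $$\mathcal{K}$$, then, writing $$h_2$$ for the binary entropy function,

$$q \;\ge\; \frac{H(K) - h_2(P_s) - (1-P_s)\log_2(|\mathcal{K}|-1)}{I(X;Y)},$$

so the minimum number of queries to reach a given success rate grows as the per-query mutual information shrinks.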
2019
TCHES
Secure Data Retrieval on the Cloud: Homomorphic Encryption meets Coresets 📺
Secure report is the problem of a client that retrieves all records matching specified attributes from a database table at the server (e.g., in the cloud), as in SQL SELECT queries, but where the query and the database are encrypted. Here, only the client has the secret key, but the server is still expected to compute and return the encrypted result. Secure report is theoretically possible with Fully Homomorphic Encryption (FHE). However, the current state-of-the-art solutions are realized by a polynomial of degree at least linear in the number m of records, which is too slow in practice even for very small databases. We present the first solution that is realized by a polynomial of degree independent of the number of records m, as well as the first implementation of an FHE solution to secure report. We achieve this by suggesting a novel paradigm that forges a link between cryptography and modern data summarization techniques known as coresets (core-sets), and sketches in particular. The key idea is to compute only a coreset of the desired report. Since the coreset is small, the client can quickly decode the desired report that the server computes after decrypting the coreset. We implemented our main reporting system in an open source library. This is the first implemented system that can answer such database queries when processing only FHE-encrypted data and queries. As our analysis promises, the experimental results show that we can run secure report queries on billions of records in minutes on an Amazon EC2 server, compared to fewer than a hundred thousand in previous FHE-based solutions.
2019
TCHES
Non-Profiled Deep Learning-based Side-Channel attacks with Sensitivity Analysis 📺
Deep learning has recently been introduced as a new alternative for performing side-channel analysis [MPP16]. Until now, studies have focused on applying deep learning techniques to perform profiled side-channel attacks, where an attacker has full control of a profiling device and is able to collect a large number of traces for different key values in order to characterize the device leakage prior to the attack. In this paper we introduce a new method to apply deep learning techniques in a non-profiled context, where an attacker can only collect a limited number of side-channel traces for a fixed unknown key value from a closed device. We show that by combining key guesses with observations of deep learning metrics, it is possible to recover information about the secret key. The main interest of this method is that it brings the power of deep learning and neural networks to a non-profiled scenario. We show that it is possible to exploit the translation-invariance property of convolutional neural networks [CDP17] against desynchronized traces also during non-profiled side-channel attacks. In this case, we show that this method can outperform classic non-profiled attacks such as Correlation Power Analysis. We also highlight that it is possible to break masked implementations in a black-box setting, without leakage-combination pre-processing and with no assumptions or knowledge about the masking implementation. To carry out the attack, we introduce metrics based on sensitivity analysis that can reveal both the secret key value as well as points of interest, such as leakage and mask locations in the traces. The results of our experiments demonstrate the interest of this new method and show that this attack can be performed in practice.
2019
TCHES
Deep Learning to Evaluate Secure RSA Implementations 📺
This paper presents the results of several successful profiled side-channel attacks against a secure implementation of the RSA algorithm. The implementation was running on an ARM Core SC 100 completed with a certified EAL4+ arithmetic co-processor. The analyses were conducted by three expert teams, each working on a specific attack path and exploiting information extracted either from the electromagnetic emanation or from the power consumption. Particular attention is paid to the description of all the steps that are usually followed during a security evaluation by a laboratory, including the acquisition and preprocessing of the observations, which are practical issues usually put aside in the literature. Remarkably, the profiling portability issue is also taken into account, and different device samples are involved for the profiling and testing phases. Among other aspects, this paper shows the high potential of deep learning attacks against secure implementations of RSA and raises the need for dedicated countermeasures.
2019
TCHES
Towards Globally Optimized Masking: From Low Randomness to Low Noise Rate 📺
We improve the state-of-the-art masking schemes in two important directions. First, we propose a new masked multiplication algorithm that satisfies a recently introduced notion called Probe-Isolating Non-Interference (PINI). It captures a sufficient requirement for designing masked implementations in a trivial way, by combining PINI multiplications and linear operations performed share by share. Our improved algorithm has the best reported randomness complexity for large security orders (while the previous PINI multiplication was best for small orders). Second, we analyze the security of most existing multiplication algorithms in the literature against so-called horizontal attacks, which aim to reduce the noise of the actual leakages measured by an adversary by combining the information of multiple target intermediate values. For this purpose, we leave the (abstract) probing model and consider a specialization of the (more realistic) noisy leakage / random probing models. Our (still partially heuristic but quantitative) analysis allows us to confirm the improved security of an algorithm by Battistello et al. from CHES 2016 in this setting. We then use it to propose new improved algorithms, leading to better tradeoffs between randomness complexity and noise rate, and suggesting the possibility of designing efficient masked multiplication algorithms with constant noise rate in $$\mathbb{F}_2$$.
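For reference, a minimal Python sketch of the classical ISW masked multiplication that such analyses take as a baseline (this is the Ishai-Sahai-Wagner gadget, not the paper's new PINI algorithm; the share count and helper functions are illustrative):

    from functools import reduce
    import secrets

    def share(bit, n):
        """Split a bit into n random shares XORing to the bit."""
        s = [secrets.randbits(1) for _ in range(n - 1)]
        return s + [bit ^ reduce(lambda u, v: u ^ v, s, 0)]

    def unshare(shares):
        return reduce(lambda u, v: u ^ v, shares, 0)

    def isw_and(a, b):
        """Classical ISW masked AND over GF(2): from n-share sharings of
        bits x and y, compute an n-share sharing of x & y using fresh
        randomness r[i][j] for every share pair -- the cost that
        randomness-complexity improvements target."""
        n = len(a)
        r = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                r[i][j] = secrets.randbits(1)
                r[j][i] = (r[i][j] ^ (a[i] & b[j])) ^ (a[j] & b[i])
        c = []
        for i in range(n):
            ci = a[i] & b[i]
            for j in range(n):
                if j != i:
                    ci ^= r[i][j]
            c.append(ci)
        return c

    x, y = share(1, 4), share(1, 4)      # 4 shares = 3rd-order masking
    assert unshare(isw_and(x, y)) == 1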
2019
TCHES
Shaping the Glitch: Optimizing Voltage Fault Injection Attacks 📺
Voltage fault injection is a powerful active side-channel attack that modifies the execution flow of a device by creating disturbances on the power supply line. The attack typically aims at skipping security checks or generating side channels that gradually leak sensitive data, including the firmware code. In this paper we propose a new voltage fault injection technique that generates fully arbitrary voltage glitch waveforms using off-the-shelf and low-cost equipment. To show the effectiveness of our setup, we present new, unpublished firmware extraction attacks on six microcontrollers from three major manufacturers: STMicroelectronics, Texas Instruments and Renesas Electronics, which in 2016 declared markets of $1.5 billion, $800 million and $2.5 billion in units sold, respectively. Among the presented attacks, the most challenging ones exploit multiple vulnerabilities and inject over one million glitches, heavily leveraging the performance and repeatability of the newly proposed technique. We perform a thorough evaluation of arbitrary glitch waveforms by comparing the attack performance against two other major V-FI techniques in the literature. Following a responsible disclosure policy, all the vulnerabilities were timely reported to the manufacturers.
2019
TCHES
Analysis and Improvement of Differential Computation Attacks against Internally-Encoded White-Box Implementations 📺
White-box cryptography is the last security barrier for a cryptographic software implementation deployed in an untrusted environment. The principle of internal encodings is a commonly used white-box technique to protect block cipher implementations. It consists in representing an implementation as a network of look-up tables which are then encoded using randomly generated bijections (the internal encodings). When this approach is implemented based on nibble (i.e., 4-bit wide) encodings, the protected implementation has been shown to be vulnerable to differential computation analysis (DCA). The latter is essentially an adaptation of differential power analysis techniques to computation traces consisting of runtime information, e.g., memory accesses, of the target software. In order to thwart DCA, it has been suggested to use wider encodings, and in particular byte encodings, at least to protect the outer rounds of the block cipher, which are the prime targets of DCA. In this work, we provide an in-depth analysis of when and why DCA works. We pinpoint the properties of the target variables and the encodings that make the attack (in)feasible. In particular, we show that DCA can break encodings wider than 4 bits, such as byte encodings. Additionally, we propose new DCA-like attacks inspired by side-channel analysis techniques. Specifically, we describe a collision attack particularly effective against the internal encoding countermeasure. We also investigate mutual information analysis (MIA), which naturally applies in this context. Compared to the original DCA, these attacks are also passive and they require very limited knowledge of the attacked implementation, but they achieve significant improvements in terms of trace complexity. All the analyses of our work are experimentally backed up with various attack simulation results. We also verified the practicability of our analyses and attack techniques against a publicly available white-box AES implementation protected with byte encodings – which DCA has failed to break before – and against a "masked" white-box AES implementation, which intends to resist DCA.
2019
TCHES
Glitch-Resistant Masking Revisited 📺
Best paper award CHES 2019
Implementing the masking countermeasure in hardware is a delicate task. Various solutions have been proposed for this purpose over the last years: we focus on Threshold Implementations (TIs), Domain-Oriented Masking (DOM), the Unified Masking Approach (UMA) and Generic Low Latency Masking (GLM). The latter generally come with innovative ideas to cope with physical defaults such as glitches. Yet, and in contrast to the situation in software-oriented masking, these schemes have not been formally proven at arbitrary security orders and their composability properties were left unclear. So far, only a 2-cycle implementation of the seminal masking scheme by Ishai, Sahai and Wagner has been shown secure and composable in the robust probing model – a variation of the probing model aimed at capturing physical defaults such as glitches – for any number of shares. In this paper, we argue that this lack of proofs for TIs, DOM, UMA and GLM makes the interpretation of their security guarantees difficult as the number of shares increases. For this purpose, we first put forward that the higher-order variants of all these schemes are affected by (local or composability) security flaws in the (robust) probing model, due to insufficient refreshing. We then show that composability and robustness against glitches cannot be analyzed independently. We finally detail how these abstract flaws translate into concrete (experimental) attacks, and discuss the additional constraints robust probing security implies on the need for registers. Despite not systematically leading to improved complexities at low security orders, e.g., with respect to the required number of measurements for a successful attack, we argue that these weaknesses provide a case for the need for security proofs in the robust probing model (or a similar abstraction) at higher security orders.
2019
TCHES
Reducing a Masked Implementation’s Effective Security Order with Setup Manipulations 📺
Couplings are a type of physical default that can violate the independence assumption needed for the secure implementation of the masking countermeasure. Two recent works by De Cnudde et al. put forward, qualitatively, that couplings can cause information leakages of lower order than theoretically expected. However, the (quantitative) amplitude of these lower-order leakages (e.g., measured as the amplitude of a detection metric such as Welch's t statistic) was usually lower than that of the (theoretically expected) dth-order leakages, so the actual security level of these implementations remained unaffected. In addition, in order to make the couplings visible, the authors sometimes needed to amplify them internally (e.g., by tweaking the placement and routing or iterating linear operations on the shares). In this paper, we first show that the amplitude of low-order leakages in masked implementations can be amplified externally, by tweaking side-channel measurement setups in a way that is under the control of a power analysis adversary. Our experiments put forward that the "effective security order" of both hardware (FPGA) and software (ARM-32) implementations can be reduced, leading to concrete reductions of their security level. For this purpose, we move from the detection-based analyses of previous works to attack-based evaluations, allowing us to confirm the exploitability of the lower-order leakages that we amplify. We also provide a tentative explanation for these effects based on couplings, and describe a model that can be used to predict them as a function of the measurement setup's external resistor and the implementation's supply voltage. We posit that the effective security orders observed are mainly due to "externally-amplified couplings" that can be systematically exploited by actual adversaries.
2019
TCHES
Multi-Tuple Leakage Detection and the Dependent Signal Issue 📺
Leakage detection is a common tool to quickly assess the security of a cryptographic implementation against side-channel attacks. The Test Vector Leakage Assessment (TVLA) methodology using Welch's t-test, proposed by Cryptography Research, is currently the most popular example of such tools, thanks to its simplicity and good detection speed compared to attack-based evaluations. However, like any statistical test, it is based on certain assumptions about the processed samples, and its detection performance strongly depends on parameters like the measurement's signal-to-noise ratio (SNR), the samples' degree of dependency, and their density, i.e., the ratio between the number of informative and non-informative points in the traces. In this paper, we argue that the correct interpretation of leakage detection results requires knowledge of these parameters, which are a priori unknown to the evaluator, and therefore poses a non-trivial challenge (especially if the evaluator is restricted to only one test). For this purpose, we first explore the concept of multi-tuple detection, which is able to exploit differences between multiple informative points of a trace more effectively than tests relying on the minimum p-value of concurrent univariate tests. To this end, we map the common Hotelling's $$T^2$$-test to the leakage detection setting and, further, propose a specialized instantiation of it which trades computational overheads for a dependency assumption. Our experiments show that there is no single test that is the optimal choice for every leakage scenario. Second, we highlight the importance of the assumption that the samples at each point in time are independent, which is frequently made in leakage detection, e.g., with Welch's t-test. Using simulated and practical experiments, we show that (i) this assumption is often violated in practice, and (ii) deviations from it can affect the detection performance, making the correct interpretation of the results more difficult. Finally, we consolidate our findings by providing guidelines on how to use a combination of established and newly-proposed leakage detection tools to infer the measurement parameters. This enables a better interpretation of the tests' results than the current state of the art (yet still relying on heuristics for the most challenging evaluation scenarios).
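A minimal numpy sketch of the univariate Welch's t-test underlying TVLA, as discussed above (the 4.5 threshold is the customary convention; the paper's multi-tuple Hotelling $$T^2$$ variant is not shown, and the simulated leak is our illustration):

    import numpy as np

    def welch_t(traces_fixed, traces_random):
        """Pointwise Welch's t-statistic between two trace sets, as used in
        (non-specific) TVLA; |t| > 4.5 is the customary detection threshold."""
        m1, m2 = traces_fixed.mean(axis=0), traces_random.mean(axis=0)
        v1 = traces_fixed.var(axis=0, ddof=1)
        v2 = traces_random.var(axis=0, ddof=1)
        n1, n2 = len(traces_fixed), len(traces_random)
        return (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2)

    rng = np.random.default_rng(1)
    fixed = rng.normal(0.0, 1.0, size=(5000, 100))
    fixed[:, 40] += 0.2            # a small first-order leak at sample 40
    rand = rng.normal(0.0, 1.0, size=(5000, 100))
    t = welch_t(fixed, rand)
    print(np.flatnonzero(np.abs(t) > 4.5))   # expected to flag sample 40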
2019
TCHES
M&M: Masks and Macs against Physical Attacks 📺
Cryptographic implementations on embedded systems need to be protected against physical attacks. Today, this means that apart from incorporating countermeasures against side-channel analysis, implementations must also withstand fault attacks and combined attacks. Recent proposals in this area have shown that there is a big tradeoff between the implementation cost and the strength of the adversary model. In this work, we introduce a new combined countermeasure, M&M, that combines Masking with information-theoretic MAC tags and infective computation. It works in a stronger adversary model than the existing scheme ParTI, yet is a lot less costly to implement than the provably secure MPC-based scheme CAPA. We demonstrate M&M with an SCA- and DFA-secure implementation of the AES block cipher. We evaluate the side-channel leakage of the second-order secure design with a non-specific t-test and use simulation to validate the fault resistance.
2019
TCHES
Return of the Hidden Number Problem. A Widespread and Novel Key Extraction Attack on ECDSA and DSA 📺
Side channels have long been recognized as a threat to the security of cryptographic applications. Implementations can unintentionally leak secret information through many channels, such as microarchitectural state changes in processors, changes in power consumption, or electromagnetic radiation. As a result of these threats, many implementations have been hardened to defend against these attacks. Despite these mitigations, this work presents a novel side-channel attack against ECDSA and DSA. The attack targets a common implementation pattern that is found in many cryptographic libraries. In fact, about half of the libraries that were tested exhibited the vulnerable pattern. This pattern is exploited in a full proof-of-concept attack against OpenSSL, demonstrating that it is possible to extract a 256-bit ECDSA private key using a simple cache attack after observing only a few thousand signatures. The target of this attack is a previously unexplored part of (EC)DSA signature generation, which explains why mitigations are lacking and the issue is so widespread. Finally, estimates are provided for the minimum number of signatures needed to perform the attack, and countermeasures are suggested to protect against this attack.
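For background, here is the standard rearrangement that turns leaky ECDSA nonces into a hidden number problem (a textbook derivation for illustration; the paper's leak sits in a different, previously unexplored part of signature generation). A signature $$(r, s)$$ on hash $$h$$ with nonce $$k$$ and private key $$d$$ satisfies

$$s = k^{-1}(h + rd) \bmod n \quad\Longrightarrow\quad k = s^{-1}h + s^{-1}r\,d \bmod n.$$

If a side channel reveals that each nonce is small (say $$k_i < 2^{\ell}$$), every signature yields a sample $$(t_i, u_i) = (s_i^{-1}r_i,\ s_i^{-1}h_i)$$ with $$t_i d + u_i \equiv k_i \pmod{n}$$ and $$k_i$$ small, and enough such samples let lattice reduction recover $$d$$.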
2019
TCHES
Error Amplification in Code-based Cryptography 📺
Code-based cryptography is one of the main techniques enabling cryptographic primitives in a post-quantum scenario. In particular, the MDPC scheme is a basic scheme from which many other schemes have been derived. These schemes rely on iterative decoding in the decryption process and thus have a certain small probability p of having a decryption (decoding) error. In this paper we show a very fundamental and important property of code-based encryption schemes. Given one initial error pattern that fails to decode, the time needed to generate another message that fails to decode is much less than 1/p. We show this by developing a method for fast generation of undecodable error patterns (error pattern chaining), which additionally proves that a measure of closeness in ciphertext space can be exploited through its strong linkage to the difficulty of decoding these messages. Furthermore, if side-channel information is also available (time to decode), then the initial error pattern no longer needs to be given, since one can easily be generated in this case. These observations are fundamentally important because they show that a, say, 128-bit encryption scheme is not inherently safe from reaction attacks even if it employs a decoder with a failure rate of $$2^{-128}$$. In fact, unless explicit protective measures are taken, having a failure rate at all – of any magnitude – can pose a security problem because of the error amplification effect of our method. A key-recovery reaction attack was recently shown on the MDPC scheme as well as similar schemes, taking advantage of decoding errors in order to recover the secret key. It was also shown that knowing the number of iterations in the iterative decoding step, which could be obtained via a timing attack, would also enable and enhance such an attack. In this paper we apply our error pattern chaining method to show how to improve the performance of such reaction attacks in the CPA case. We show that after identifying a single decoding error (or a decoding step taking more time than expected in a timing attack), we can adaptively create new error patterns that have a much higher decoding error probability than a random error. This leads to a significant improvement of the attack based on decoding errors in the CPA case, and it also gives the strongest known attack on MDPC-like schemes, both with and without the use of side-channel information.
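A hedged toy sketch of the chaining idea in Python (the decoder is an opaque callable and the neighbourhood move is deliberately simplistic; the paper's actual chaining method is more refined):

    import random

    def chain_failures(e0, n, decodes, tries=1000, seed=0):
        """Toy error-pattern chaining: starting from a weight-w pattern e0
        (a set of error positions) known to fail decoding, repeatedly move
        one error position and keep the result whenever it also fails.
        Patterns close to a failing pattern fail far more often than
        uniformly random ones, so failures accumulate much faster than 1/p."""
        rng = random.Random(seed)
        failures, current = [set(e0)], set(e0)
        for _ in range(tries):
            e = set(current)
            e.discard(rng.choice(sorted(e)))   # drop one error position...
            e.add(rng.randrange(n))            # ...and add another
            if len(e) == len(current) and not decodes(e):
                failures.append(e)
                current = e                    # chain from the new failure
        return failures

    # `decodes` is an opaque stand-in for the scheme's iterative decoder
    # (returning True on successful decoding), to be supplied by the caller.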
2019
TCHES
Implementing RLWE-based Schemes Using an RSA Co-Processor 📺
We repurpose existing RSA/ECC co-processors for (ideal) lattice-based cryptography by exploiting the availability of fast long-integer multiplication. Such co-processors are deployed in smart cards in passports and identity cards, secured microcontrollers and hardware security modules (HSMs). In particular, we demonstrate an implementation of a variant of the Module-LWE-based Kyber Key Encapsulation Mechanism (KEM) that is tailored for high performance on a commercially available smart card chip (SLE 78). To benefit from the RSA/ECC co-processor we use Kronecker substitution in combination with schoolbook and Karatsuba polynomial multiplication. Moreover, we speed up symmetric operations in our Kyber variant using the AES co-processor to implement a PRNG and a SHA-256 co-processor to realise hash functions. This allows us to execute CCA-secure Kyber768 key generation in 79.6 ms, encapsulation in 102.4 ms and decapsulation in 132.7 ms.
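A minimal Python sketch of Kronecker substitution, the step that lets a big-integer (RSA) multiplier carry out the polynomial multiplication; the packing width and toy parameters are our illustrative choices:

    def kronecker_mul(a, b, q, bits):
        """Multiply polynomials a, b (coefficient lists mod q) by packing
        them into integers at 2**bits spacing, doing one big-integer
        multiplication, then unpacking and reducing coefficients mod q.
        `bits` must exceed log2(len(a) * (q-1)**2) to avoid carries."""
        A = sum(c << (bits * i) for i, c in enumerate(a))
        B = sum(c << (bits * i) for i, c in enumerate(b))
        C = A * B                          # the part an RSA co-processor does
        mask = (1 << bits) - 1
        return [((C >> (bits * i)) & mask) % q
                for i in range(len(a) + len(b) - 1)]

    # Toy check against schoolbook multiplication mod q:
    q = 3329                               # Kyber's modulus
    a, b = [1, 2, 3], [4, 5, 6]
    assert kronecker_mul(a, b, q, bits=32) == [4, 13, 28, 27, 18]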
2019
TCHES
3-Share Threshold Implementation of AES S-box without Fresh Randomness 📺
Threshold implementation is studied as a countermeasure against side-channel attacks. Until recently, there was no threshold implementation of the AES and Keccak S-boxes that satisfies an important property called uniformity. In the conventional implementations, intermediate values are remasked to compensate for the lack of uniformity. The remasking consumes thousands of fresh random bits and its implementation cost is a serious concern. Daemen recently proposed a 3-share uniform threshold implementation of the Keccak S-box. This is enabled by a new technique called the changing of the guards, which can be applied to any invertible function. Subsequently, Wegener et al. proposed a 4-share threshold implementation of the AES S-box based on the changing of the guards technique. However, a 3-share threshold implementation of the AES S-box remained open. The difficulty lies in the 2-input multiplication, used in decomposed S-box representations, which is non-invertible because of its different input and output sizes. In this study, this problem is addressed by introducing a certain generalization of the changing of the guards technique. The proposed method provides a generic way to construct a uniform sharing for a target function having different input and output sizes. The key idea is to transform a target function into an invertible one by adding additional inputs and outputs. Based on the proposed technique, the first 3-share threshold implementation of the AES S-box without fresh randomness is presented. Performance evaluation and a simulation-based leakage assessment of the implementation are also presented.
2019
TCHES
Secure Physical Enclosures from Covers with Tamper-Resistance 📺
Ensuring the physical security of multiple-chip embedded systems on a PCB is challenging, since the attacker can control the device in a hostile environment. To detect physical intruders as part of a layered approach to security, it is common to create a physical security boundary that is difficult to penetrate or remove, e.g., enclosures created from tamper-respondent envelopes or covers. Their physical integrity is usually checked by active sensing, i.e., a battery-backed circuit continuously monitors the enclosure. However, adoption is often hampered by the disadvantages of a battery and by the specialized equipment required to create the enclosure. In contrast, we present a batteryless tamper-resistant cover made from standard flexPCB technology, i.e., a commercially widespread, scalable, and proven technology. The cover comprises a fine mesh of electrodes, and an evaluation unit underneath the cover checks their integrity by detecting short and open circuits. Additionally, it measures the capacitances between the electrodes of the mesh. Once its preliminary integrity is confirmed, a cryptographic key is derived from the capacitive measurements representing a PUF, to decrypt and authenticate sensitive data of the enclosed system. We demonstrate the feasibility of our concept, provide details on the layout and electrical properties of the cover, and explain the underlying security architecture. Practical results, including statistics over a set of 115 flexPCB covers, physical attacks, and environmental testing, support our design rationale. Hence, our work opens up a new direction for counteracting physical tampering without the need for batteries, while aiming at a physical security level comparable to FIPS 140-2 level 3.
2019
TCHES
Practical Evaluation of Protected Residue Number System Scalar Multiplication 📺
Residue Number System (RNS) arithmetic is gaining ground in public key cryptography, because it offers fast, efficient, and secure implementations over large prime fields or rings of integers. In this paper, we propose a generic, thorough, and analytic evaluation approach for protected scalar multiplication implementations with RNS and traditional Side Channel Attack (SCA) countermeasures, in an effort to assess the SCA resistance of RNS. This paper constitutes the first robust evaluation of RNS software for Elliptic Curve Cryptography against electromagnetic (EM) side-channel attacks. Four different countermeasures, namely scalar and point randomization, random base permutations, and random moduli operation sequence, are implemented and evaluated using the Test Vector Leakage Assessment (TVLA) methodology and template attacks. More specifically, variations of RNS-based Montgomery Powering Ladder scalar multiplication algorithms are evaluated on an ARM Cortex-A8 processor using an EM probe for acquisition of the traces. We show experimentally and theoretically that new bounds should be put forward when TVLA evaluations on public key algorithms are performed. On the security of RNS, our data- and location-dependent template attacks show that even protected implementations are vulnerable to these attacks. A combination of RNS-based countermeasures is the best way to protect against side-channel leakage.
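For readers unfamiliar with RNS, the following toy Python sketch (moduli chosen for illustration only) shows the property the evaluated countermeasures build on: a big integer lives as independent residues, so multiplications split into small channel operations whose moduli and ordering can be randomized.

    from math import prod

    MODULI = (8191, 8195, 8201, 8207)          # pairwise coprime toy moduli

    def to_rns(x):     return tuple(x % m for m in MODULI)
    def mul_rns(a, b): return tuple((u * v) % m for u, v, m in zip(a, b, MODULI))

    def from_rns(r):                           # CRT reconstruction
        M = prod(MODULI)
        return sum(ri * (M // mi) * pow(M // mi, -1, mi)
                   for ri, mi in zip(r, MODULI)) % M

    x, y = 123456789, 987654321
    assert from_rns(mul_rns(to_rns(x), to_rns(y))) == (x * y) % prod(MODULI)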
2019
TCHES
Improving CEMA using Correlation Optimization 📺
Sensitive cryptographic information, e.g. AES secret keys, can be extracted from the electromagnetic (EM) leakages unintentionally emitted by a device using techniques such as Correlation Electromagnetic Analysis (CEMA). In this paper, we introduce Correlation Optimization (CO), a novel approach that improves CEMA attacks by formulating the selection of useful EM leakage samples in a trace as a machine learning optimization problem. To this end, we propose the correlation loss function, which aims to maximize the Pearson correlation between a set of EM traces and the true AES key during training. We show that CO works with high-dimensional and noisy traces, regardless of time-domain trace alignment and without requiring prior knowledge of the power consumption characteristics of the cryptographic hardware. We evaluate our approach using the ASCAD benchmark dataset and a custom dataset of EM leakages from an Arduino Duemilanove, captured with a USRP B200 SDR. Our results indicate that the masked AES implementation used in all three ASCAD datasets can be broken with a shallow Multilayer Perceptron model, whilst requiring only 1,000 test traces on average. A similar methodology was employed to break the unprotected AES implementation from our custom dataset, using 22,000 unaligned and unfiltered test traces.
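A hedged sketch of the central idea as we read it, a loss equal to the negative Pearson correlation between a model's per-trace output and a leakage hypothesis; all names here are illustrative rather than taken from the paper's code:

    import numpy as np

    def correlation_loss(z, h):
        """Negative Pearson correlation between model outputs z and
        hypothetical leakage values h (1-D arrays over a batch)."""
        zc, hc = z - z.mean(), h - h.mean()
        r = (zc @ hc) / (np.linalg.norm(zc) * np.linalg.norm(hc) + 1e-12)
        return -r                               # minimizing maximizes correlation

    rng = np.random.default_rng(0)
    z = rng.normal(size=256)
    h = 3.0 * z + rng.normal(size=256)          # strongly correlated hypothesis
    print(correlation_loss(z, h))               # close to -1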
2019
TCHES
Fully Automated Differential Fault Analysis on Software Implementations of Block Ciphers 📺
Differential Fault Analysis (DFA) is considered the most popular fault analysis method. While there are techniques that provide some degree of fault analysis automation at the cipher level, it can be shown that software implementations exhibit new vulnerabilities which cannot be found by examining the cipher design specification alone. This work bridges the gap by providing a fully automated way to carry out DFA on assembly implementations of symmetric block ciphers. We use a customized data flow graph to represent the program and develop a novel fault analysis methodology to capture the program behavior under faults. We establish an effective description of DFA as constraints that are passed to an SMT solver. We create a tool that takes assembly code as input, analyzes the dependencies among instructions, automatically attacks vulnerable instructions using the SMT solver, and outputs the attack details that recover the last round key (and possibly the earlier keys). We support our design with evaluations on the lightweight ciphers SIMON, SPECK, and PRIDE, and a current NIST standard, AES. By automated assembly analysis, we were able to find new efficient DFA attacks on SPECK and PRIDE, exploiting implementation-specific vulnerabilities, and to rediscover previously published DFA on SIMON and AES. Moreover, we present a novel DFA on the multiplication operation, which had never been shown for symmetric block ciphers before. Our experimental evaluation also shows reasonable execution times that scale to current cipher designs and easily outclass manual analysis. Moreover, we present a method to check countermeasure-protected implementations in a way that helps implementers decide how many rounds should be protected. We note that this is the first work that automatically carries out DFA on cipher implementations without any plaintext or ciphertext information and can therefore be applied to any input data to the cipher.
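The DFA principle that the tool automates can be illustrated in a few lines (a toy last round with a random S-box and an assumed single-bit fault model; none of this is the paper's tool or target code): each correct/faulty ciphertext pair constrains the last-round key, and intersecting the candidate sets quickly isolates it.

    import random
    random.seed(0)
    SBOX = list(range(256)); random.shuffle(SBOX)      # stand-in bijective S-box
    SINV = [SBOX.index(y) for y in range(256)]

    def last_round(x, k):                  # toy last round: S-box, then key XOR
        return SBOX[x] ^ k

    k_true, candidates = 0xA7, set(range(256))
    for _ in range(8):                     # a few correct/faulty pairs
        x = random.randrange(256)          # unknown intermediate value
        f = 1 << random.randrange(8)       # assumed single-bit fault at S-box input
        c, c_f = last_round(x, k_true), last_round(x ^ f, k_true)
        # k is consistent iff the implied S-box inputs differ in exactly one bit
        candidates &= {k for k in range(256)
                       if bin(SINV[c ^ k] ^ SINV[c_f ^ k]).count("1") == 1}
    print(sorted(candidates))              # typically collapses to [0xA7]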
2019
TCHES
The Curse of Class Imbalance and Conflicting Metrics with Machine Learning for Side-channel Evaluations 📺
We concentrate on machine learning techniques used for profiled side-channel analysis in the presence of imbalanced data. Such scenarios are realistic and occur often, for instance in the Hamming weight or Hamming distance leakage models. In order to deal with the imbalanced data, we use various balancing techniques, and we show that most of them help in mounting successful attacks when the data is highly imbalanced. In particular, the results with the SMOTE technique are encouraging, since we observe scenarios where it reduces the number of necessary measurements by more than a factor of 8. Next, we provide extensive results comparing machine learning and side-channel metrics, where we show that machine learning metrics (and especially accuracy, as the most commonly used one) can be extremely deceptive. This finding creates the need to revisit previous works and their results in order to properly assess the performance of machine learning in side-channel analysis.
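A sketch of the balancing step in code (assuming the third-party imbalanced-learn and scikit-learn packages; data, labels, and model are stand-ins): Hamming-weight labels of a uniformly random byte follow a binomial distribution, so the extreme classes are rare until SMOTE synthesizes additional minority-class traces.

    import numpy as np
    from imblearn.over_sampling import SMOTE          # pip install imbalanced-learn
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 100))                  # stand-in traces
    y = rng.binomial(8, 0.5, size=5000)               # HW labels: heavily imbalanced
    X_bal, y_bal = SMOTE(k_neighbors=3).fit_resample(X, y)
    model = RandomForestClassifier(n_estimators=100).fit(X_bal, y_bal)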
2019
TCHES
New Insights to Key Derivation for Tamper-Evident Physical Unclonable Functions 📺
Several publications have presented tamper-evident Physical Unclonable Functions (PUFs) for secure storage of cryptographic keys and tamper detection. Unfortunately, previously published PUF-based key derivation schemes do not sufficiently take into account the specifics of the underlying application, i.e., an attacker that tampers with the physical parameters of the PUF outside of an idealized noise error model. This is a notable extension of existing schemes for PUF key derivation, which are typically concerned with helper data leakage, i.e., by how much the PUF's entropy is diminished once access to its helper data is gained. To address the specifics of tamper-evident PUFs, we formalize the aspect of tamper-sensitivity, thereby providing a new tool to rate by how much an attacker is allowed to tamper with the PUF. This complements existing criteria such as the effective number of secret bits for entropy and the failure rate for reliability. As a result, it provides a fair comparison among different schemes, independent of the PUF implementation, as its unit is based on the noise standard deviation of the underlying PUF measurement. To overcome the limitations of previous schemes, we then propose an Error-Correcting Code (ECC) based on the Lee metric, i.e., a distance metric well suited to describing the distance between q-ary symbols output from an equidistant quantization, i.e., a higher-order alphabet PUF. This novel approach is required, as the underlying symbols' bits are not i.i.d., which hinders applying previous state-of-the-art approaches. We present the concept of our scheme and demonstrate its feasibility based on an empirical PUF distribution. The benefit of our approach is an increase of over 21% in effective secret bits compared to previous approaches based on equidistant quantization. At the same time, we improve tamper-sensitivity compared to an equiprobable quantization, while ensuring similar reliability and entropy. Hence, this work opens up a new direction in how to interpret the PUF output and details a practically relevant scheme outperforming all previous constructions.
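The Lee metric itself is simple to state; the sketch below (purely illustrative) shows the distance the proposed ECC is designed around, where adjacent quantization bins of a q-ary symbol are close and the distance wraps around modulo q:

    def lee_distance(u, v, q):
        """Lee distance between two vectors of symbols in Z_q."""
        return sum(min((x - y) % q, (y - x) % q) for x, y in zip(u, v))

    assert lee_distance([0], [1], 16) == 1     # neighbouring bins are close
    assert lee_distance([0], [8], 16) == 8     # maximal per-symbol distance
    assert lee_distance([15], [0], 16) == 1    # distance wraps modulo q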
2019
TCHES
Fast, Furious and Insecure: Passive Keyless Entry and Start Systems in Modern Supercars 📺
The security of immobiliser and Remote Keyless Entry systems has been extensively studied over many years. Passive Keyless Entry and Start systems, which are currently deployed in luxury vehicles, have not received much attention besides relay attacks. In this work we fully reverse-engineer a Passive Keyless Entry and Start system and perform a thorough analysis of its security. Our research reveals several security weaknesses. Specifically, we document the use of an inadequate proprietary cipher using 40-bit keys, the lack of mutual authentication in the challenge-response protocol, no enabled firmware readout protection features, and the absence of security partitioning. In order to validate our findings, we implement a full proof-of-concept attack allowing us to clone a Tesla Model S key fob in a matter of seconds with low-cost commercial off-the-shelf equipment. Our findings most likely apply to other manufacturers of luxury vehicles, including McLaren, Karma, and Triumph motorcycles, as they all use the same system developed by Pektron.
2019
TCHES
Covert Gates: Protecting Integrated Circuits with Undetectable Camouflaging 📺
Integrated circuit (IC) camouflaging has emerged as a promising solution for protecting semiconductor intellectual property (IP) against reverse engineering. Existing methods of camouflaging are based on standard cells that can assume one of many Boolean functions, either through variation of transistor threshold voltage or contact configurations. Unfortunately, such methods lead to high area, delay, and power overheads, and are vulnerable to invasive as well as non-invasive attacks based on Boolean satisfiability/VLSI testing. In this paper, we propose, fabricate, and demonstrate a new cell camouflaging strategy, termed 'covert gate', that leverages doping and dummy contacts to create camouflaged cells that are indistinguishable from regular standard cells under modern imaging techniques. We perform a comprehensive security analysis of the covert gate, and show that it achieves high resiliency against SAT and test-based attacks at very low overheads. We also derive models to characterize the covert cells, and develop measures to incorporate them into a gate-level design. Simulation results of overheads and attacks are presented on benchmark circuits.
2019
TCHES
Consolidating Security Notions in Hardware Masking 📺
In this paper, we revisit the security conditions of masked hardware implementations. We describe a new, succinct, information-theoretic condition called d-glitch immunity which is both necessary and sufficient for security in the presence of glitches. We show that this single condition includes, but is not limited to, previous security notions such as those used in higher-order threshold implementations and in abstractions using ideal gates. As opposed to these previously known necessary conditions, our new condition is also sufficient. On the other hand, it excludes avoidable notions such as uniformity. We also treat the notion of (strong) non-interference from an information-theoretic point of view in order to unify the different security concepts and pave the way to the verification of composability in the presence of glitches. We conclude the paper by demonstrating how the condition can be used as an efficient and highly generic flaw detection mechanism for a variety of functions and schemes based on different operations.
2019
TCHES
SMT Attack: Next Generation Attack on Obfuscated Circuits with Capabilities and Performance Beyond the SAT Attacks 📺
In this paper, we introduce the Satisfiability Modulo Theories (SMT) attack on obfuscated circuits. The proposed attack is a superset of the Satisfiability (SAT) attack, with many additional features. It uses one or more theory solvers in addition to its internal SAT solver. For this reason, it is capable of modeling far more complex behaviors and can formulate much stronger attacks. In this paper, we illustrate that the use of theory solvers enables the SMT attack to carry out attacks that are not possible with SAT-formulated attacks. As an example of its capabilities, we use the SMT attack to break a recent obfuscation scheme that uses key values to alter delay properties (setup and hold time) of a circuit to remain SAT-hard. Considering that logic delay is not a Boolean logical property, the targeted obfuscation mechanism is not breakable by a SAT attack. However, in this paper, we illustrate that the proposed SMT attack, by deploying a simple graph theory solver, can model and break this obfuscation scheme in a few minutes. We describe how the SMT attack can be used in one of four different attack modes: (1) we explain how the SMT attack can be reduced to a SAT attack, (2) how it can be carried out using an Eager approach, (3) how it can be carried out using a Lazy approach, and finally (4) we introduce the Accelerated SMT (AccSMT) attack that offers a significant speed-up over the SAT attack. Additionally, we explain how the AccSMT attack can be used as an approximate attack when facing SMT-hard obfuscation schemes.
2019
TCHES
Make Some Noise. Unleashing the Power of Convolutional Neural Networks for Profiled Side-channel Analysis 📺
Profiled side-channel analysis based on deep learning, and more precisely Convolutional Neural Networks, is a paradigm showing significant potential. The results, although scarce for now, suggest that such techniques are even able to break cryptographic implementations protected with countermeasures. In this paper, we start by proposing a new Convolutional Neural Network instance able to reach high performance on a number of considered datasets. We compare our neural network with the one designed for a particular dataset with a masking countermeasure, and we show that both are good designs, but also that neither can be considered superior to the other. Next, we address how the addition of artificial noise to the input signal can actually be beneficial to the performance of the neural network. Such noise addition is equivalent to a regularization term in the objective function. By using this technique, we are able to reduce the number of measurements needed to reveal the secret key by orders of magnitude for both neural networks. Our new convolutional neural network instance with added noise is able to break the implementation protected with the random delay countermeasure by using only 3 traces in the attack phase. To further strengthen our experimental results, we investigate the performance with a varying number of training samples, noise levels, and epochs. Our findings show that adding noise is beneficial throughout all training set sizes and epochs.
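The noise-addition trick is easy to reproduce; a minimal sketch (sigma and shapes are tuning knobs, not values from the paper) that augments training traces before they are fed to the network:

    import numpy as np

    def add_noise(traces, sigma, rng=None):
        """Return a noisy copy of traces (shape: n_traces x n_samples);
        applied during training this acts like a regularizer."""
        rng = rng or np.random.default_rng()
        return traces + rng.normal(0.0, sigma, size=traces.shape)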
2019
TCHES
NTTRU: Truly Fast NTRU Using NTT 📺
We present NTTRU – an IND-CCA2 secure NTRU-based key encapsulation scheme that uses the number theoretic transform (NTT) over the cyclotomic ring Z_7681[X]/(X^768 − X^384 + 1) and produces public keys and ciphertexts of approximately 1.25 KB at the 128-bit security level. The number of cycles on a Skylake CPU of our constant-time AVX2 implementation of the scheme for key generation, encapsulation and decapsulation is approximately 6.4K, 6.1K, and 7.9K, which is more than 30X, 5X, and 8X faster than these respective procedures in the NTRU schemes that were submitted to the NIST post-quantum standardization process. These running times are also, by a large margin, smaller than those for all the other schemes in the NIST process as well as the KEMs based on elliptic curve Diffie-Hellman. We additionally give a simple transformation that allows one to provably deal with small decryption errors in OW-CPA encryption schemes (such as NTRU) when using them to construct an IND-CCA2 key encapsulation.
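As a reference point for the ring in question, here is a naive schoolbook multiplication in Z_7681[X]/(X^768 − X^384 + 1), using X^768 = X^384 − 1 for the reduction (a correctness oracle only; the paper's contribution is that this ring supports a full NTT, replacing the quadratic loop below with quasi-linear work):

    Q, N, H = 7681, 768, 384

    def ring_mul(a, b):
        t = [0] * (2 * N - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                t[i + j] = (t[i + j] + ai * bj) % Q
        for k in range(2 * N - 2, N - 1, -1):   # X^k = X^(k-384) - X^(k-768)
            t[k - N + H] = (t[k - N + H] + t[k]) % Q
            t[k - N] = (t[k - N] - t[k]) % Q
            t[k] = 0
        return t[:N]

    x384 = [0] * 384 + [1] + [0] * 383
    r = ring_mul(x384, x384)                    # X^384 * X^384 = X^384 - 1
    assert r[384] == 1 and r[0] == Q - 1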
2019
TCHES
Static Power SCA of Sub-100 nm CMOS ASICs and the Insecurity of Masking Schemes in Low-Noise Environments 📺
Semiconductor technology scaling has faced tough engineering challenges while moving towards and beyond the deep sub-micron range. One of the most demanding issues, limiting the shrinkage process until the present day, is the difficulty of controlling leakage currents in nanometer-scaled field-effect transistors. Previous articles have shown that this source of energy dissipation, at least in the case of digital CMOS logic, can successfully be exploited as a side channel to recover the secrets of cryptographic implementations. In this work, we present the first fair technology comparison with respect to static power side-channel measurements on real silicon and demonstrate that the effect of down-scaling on the potency of this security threat is huge. To this end, we designed two ASICs in sub-100 nm CMOS nodes (90 nm, 65 nm) and had them fabricated by one of the leading foundries. Our experiments, which we performed at different operating conditions, show consistently that the ASIC technology with the smaller minimum feature size (65 nm) indeed exhibits substantially more informative leakages (factor of ~10) than the 90 nm one, even though all targeted instances have been derived from identical RTL code. However, the contribution of this work extends well beyond a mere technology comparison. With respect to the real-world impact of static power attacks, we present the first realistic scenarios that make it possible to perform a static power side-channel analysis (including noise reduction) without requiring control over the clock signal of the target. Furthermore, as a follow-up to proof-of-concept work indicating the vulnerability of masking schemes to static power attacks, we perform a detailed study on how the reduction of the noise level in static leakage measurements affects the security provided by masked implementations. As a result of this study, we not only find that the threat to masking schemes is indeed real, but also that common leakage assessment techniques, such as Welch's t-test, together with essentially any moment-based analysis of the leakage traces, are simply not sufficient in low-noise contexts. In fact, we are able to show that either a conversion (resp. compression) of the leakage order or the recently proposed χ2 test needs to be considered in assessment and attack to avoid false negatives.
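The two assessment statistics the paper contrasts can be sketched as follows (assuming NumPy/SciPy; thresholds, bin counts, and data are illustrative): a pointwise Welch t-test as used in TVLA, and a χ2 test on binned leakage histograms, which the authors find necessary in low-noise settings where moment-based tests miss leakage.

    import numpy as np
    from scipy import stats

    def tvla_t(traces_fixed, traces_random):
        t, _ = stats.ttest_ind(traces_fixed, traces_random,
                               axis=0, equal_var=False)   # Welch's t-test
        return t                                          # |t| > 4.5 flags leakage

    def chi2_p(samples_a, samples_b, bins=9):
        edges = np.histogram_bin_edges(np.concatenate([samples_a, samples_b]), bins)
        ha, _ = np.histogram(samples_a, edges)
        hb, _ = np.histogram(samples_b, edges)
        keep = (ha + hb) > 0                              # drop empty bins
        _, p, _, _ = stats.chi2_contingency(np.vstack([ha[keep], hb[keep]]))
        return p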
2019
TCHES
Exploring the Effect of Device Aging on Static Power Analysis Attacks 📺
The vulnerability of cryptographic devices to side-channel analysis attacks, and in particular power analysis attacks, has been extensively studied in recent years. Among them, static power analysis attacks have become relevant with the move towards smaller technology nodes, for which the static power is comparable to the dynamic power of a chip, or even dominant in future technology generations. The magnitude of the static power of a chip depends on the physical characteristics of the transistors (e.g., their dimensions) as well as operating conditions (e.g., the temperature) and electrical specifications such as the threshold voltage. In fact, the electrical specifications of transistors deviate from their originally intended ones during the device lifetime due to aging mechanisms. Although device aging has been extensively investigated from a reliability point of view, the impact of aging on the security of devices, and in particular on the vulnerability of devices to power analysis attacks, is yet to be considered. This paper fills the gap and investigates how device aging can affect the susceptibility of a chip to static power analysis attacks. To this end, we conduct both simulations and practical experiments on real silicon. The experimental results are extracted from a realization of the PRESENT cipher fabricated using a 65 nm commercial standard cell library. The results show that the amount of exploitable leakage through the static power consumption as a side channel is reduced when the device is aged. This can be considered a positive development which can (even if only slightly) harden devices against such static power analysis attacks. Additionally, this result is of great interest to static power side-channel adversaries, since state-of-the-art leakage current measurements are conducted over long time periods under increased working temperatures and supply voltages to amplify the exploitable information, which certainly fuels aging-related device degradation.
2019
TCHES
Software Toolkit for HFE-based Multivariate Schemes 📺
In 2017, NIST shook the cryptographic world by starting a process for standardizing post-quantum cryptography. Sixty-four submissions were considered for the first round of the ongoing NIST Post-Quantum Cryptography (PQC) process. Multivariate cryptography is a classical post-quantum candidate that turns out to be the most represented in the signature category. At this stage of the process, it is of primary importance to investigate efficient implementations of the candidates. This article presents MQsoft, an efficient library which permits implementing HFE-based multivariate schemes submitted to the NIST PQC process, such as GeMSS, Gui and DualModeMS. The library is implemented in C, targeting Intel 64-bit processors and using the AVX2 instruction set. We present performance results for our library and its application to GeMSS, Gui and DualModeMS. In particular, we optimize several crucial parts of these schemes. These include root finding for HFE polynomials and the evaluation of multivariate quadratic systems over F2. We propose a new method which accelerates root finding for specific HFE polynomials by a factor of two. For GeMSS and Gui, we obtain a speed-up by a factor between 2 and 19 for the keypair generation, between 1.2 and 2.5 for the signature generation, and between 1.6 and 2 for the verification process. We have also improved the arithmetic in F2^n by a factor of 4 compared to the NTL library. Moreover, a large part of our implementation is protected against timing attacks.
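One of the optimized kernels, evaluating a multivariate quadratic polynomial over F2, admits a compact bit-packed formulation; the sketch below is an editorial illustration of the operation, not MQsoft's vectorized AVX2 code:

    def eval_mq(x, rows, lin, const):
        """p(x) = const ^ <lin,x> ^ XOR_{i<j} Q[i][j]*x_i*x_j over F2;
        x packs the variables bitwise, rows[i] packs row i of Q (bits j>i)."""
        acc = const ^ (bin(lin & x).count("1") & 1)       # constant + linear part
        for i, row in enumerate(rows):
            if (x >> i) & 1:                              # x_i = 1: add row . x
                acc ^= bin(row & x).count("1") & 1
        return acc

    # p = x0*x1 ^ x2 on 3 variables: rows = [0b010, 0, 0], lin = 0b100
    assert eval_mq(0b011, [0b010, 0, 0], 0b100, 0) == 1
    assert eval_mq(0b111, [0b010, 0, 0], 0b100, 0) == 0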
2019
TCHES
Leaky Noise: New Side-Channel Attack Vectors in Mixed-Signal IoT Devices 📺
Microcontrollers and SoC devices are widely used in Internet of Things applications. This raises the question of whether they introduce new security threats unseen in traditional computing systems. In fact, almost all modern SoC chips, particularly in the IoT domain, contain both analog and digital components, for various sensing and transmission tasks. Traditional remotely accessible online systems do not have this property, which can potentially become a security vulnerability. In this paper we demonstrate that such mixed-signal components, namely ADCs, expose a new security threat that allows attackers with ADC access to deduce the activity of a CPU in the system. To prove the leakage, we perform leakage assessment on three individual microcontrollers from two different vendors with various ADC settings. After showing a correlation between CPU activity and ADC noise, we continue with a leakage assessment of modular exponentiation and AES. It is shown that for all of these devices, leakage occurs for at least one algorithm and configuration of the ADC. Finally, we show a full key recovery attack on AES that works despite the limited ADC sampling rate. These results imply that even remotely accessible microcontroller systems should be equipped with proper countermeasures against power analysis attacks, or should restrict access to ADC data.
2019
TCHES
Fast constant-time gcd computation and modular inversion 📺
This paper introduces streamlined constant-time variants of Euclid’s algorithm, both for polynomial inputs and for integer inputs. As concrete applications, this paper saves time in (1) modular inversion for Curve25519, which was previously believed to be handled much more efficiently by Fermat’s method, and (2) key generation for the ntruhrss701 and sntrup4591761 lattice-based cryptosystems.
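The single step at the heart of the construction is the "divstep" iteration on odd f and arbitrary g. Below is a branchy Python reference of the integer case; the paper's point is precisely that the branch can be replaced by constant-time selection and that a proven iteration bound suffices, so the fixed count here is just a generous stand-in:

    def divstep(delta, f, g):
        if delta > 0 and g & 1:
            return 1 - delta, g, (g - f) >> 1
        return 1 + delta, f, (g + (g & 1) * f) >> 1

    def gcd_odd(f, g, iterations=200):
        """gcd of odd f and any g: after enough divsteps, f = +-gcd, g = 0."""
        delta = 1
        for _ in range(iterations):
            delta, f, g = divstep(delta, f, g)
        return abs(f)

    assert gcd_odd(21, 35) == 7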
2019
TCHES
Security on Plastics: Fake or Real? 📺
Electronic devices on plastic foil, also referred to as flexible electronics, are making their way into mainstream applications. In the near future, flexible electronic labels can be embedded in smart blisters, but also used as mainstream technology for flexible medical patches. A key technology for flexible electronics is based on thin-film transistors, which have the potential to be manufactured at low cost, making them an ideal candidate for these applications. Yet, up to now, no one has taken digital security into account in the design of flexible electronics. In this paper, we present, to our knowledge, the first cryptographic core on plastic foil. Two main research challenges arise. The first challenge is related to the reliability of the circuit, which typically decreases as the circuit area increases. By integrating cryptographic modules, we explore the limits of the technology, since the smallest lightweight block ciphers feature a larger area than the largest digital circuit on flex foil reported up to now. The second challenge is related to key hiding. The relatively large features on the chip, and the fact that electronic chips on plastic are used as bare dies, i.e., they are not packaged, make it easy to read out the value of the stored secret key. Because there is no dedicated non-volatile memory technology yet, existing methods for writing data to the flexible chip after fabrication are based on wire cutting with a laser or on inkjet printing. With these techniques, however, it is extremely easy to "see" the value of the secret key under a microscope. We propose a novel solution that allows us to invisibly program the key after fabrication.
2019
TCHES
Sapphire: A Configurable Crypto-Processor for Post-Quantum Lattice-based Protocols 📺
Public key cryptography protocols, such as RSA and elliptic curve cryptography, will be rendered insecure by Shor's algorithm when large-scale quantum computers are built. Cryptographers are working on quantum-resistant algorithms, and lattice-based cryptography has emerged as a prime candidate. However, the high computational complexity of these algorithms makes it challenging to implement lattice-based protocols on low-power embedded devices. To address this challenge, we present Sapphire – a lattice cryptography processor with configurable parameters. Efficient sampling, with a SHA-3-based PRNG, provides two orders of magnitude energy savings; a single-port RAM-based number theoretic transform memory architecture is proposed, which provides 124k-gate area savings; and a low-power modular arithmetic unit accelerates polynomial computations. Our test chip was fabricated in a TSMC 40 nm low-power CMOS process, with the Sapphire cryptographic core occupying 0.28 mm² and consisting of 106k logic gates and 40.25 KB of SRAM. Sapphire can be programmed with custom instructions for polynomial arithmetic and sampling, and it is coupled with a low-power RISC-V microprocessor to demonstrate NIST Round 2 lattice-based CCA-secure key encapsulation and signature protocols Frodo, NewHope, qTESLA, CRYSTALS-Kyber and CRYSTALS-Dilithium, achieving up to an order of magnitude improvement in performance and energy-efficiency compared to state-of-the-art hardware implementations. All key building blocks of Sapphire are constant-time and secure against timing and simple power analysis side-channel attacks. We also discuss how masking-based DPA countermeasures can be implemented on the Sapphire core without any changes to the hardware.
2019
TCHES
Electromagnetic Information Extortion from Electronic Devices Using Interceptor and Its Countermeasure 📺
The problem of information leakage through electromagnetic waves has been extensively discussed in the literature for various devices. Conventionally, devices under such a threat suffer from potential electromagnetic information leakage during their operation, and the information inside a device can be obtained by monitoring the electromagnetic waves leaking at its boundaries. For some devices, however, no leakage of electromagnetic waves is observed, and such devices have not been considered targets of this threat. In light of this circumstance, this paper discusses an "interceptor" that forces information leakage through electromagnetic waves, even from devices in which potential electromagnetic leakage does not occur. The proposed interceptor is a small circuit consisting of an affordable semiconductor chip and wiring, and it is powered by electromagnetic waves irradiated from outside the device as its driving energy. The distance at which information is obtained is controlled by increasing the intensity of the irradiated electromagnetic waves. The paper presents the circuit structure for implementing the proposed interceptor in major input-output devices and cryptographic modules, mounting a pathway designed on the basis of the construction method onto each device. Moreover, it is shown that it is possible to forcefully cause information leakage through electromagnetic waves. To detect the aforementioned threat, the paper also focuses on the changes in the device itself and in the surrounding electromagnetic environment that result from mounting an interceptor, and considers methods of detecting an interceptor by both passive and active monitoring.
2019
TCHES
New Circuit Minimization Techniques for Smaller and Faster AES SBoxes 📺
In this paper we consider various methods and techniques to find the smallest circuit realizing a given linear transformation on n input signals and m output signals, under the constraint of a maximum circuit depth, maxD. Additional requirements may include that input signals arrive at the circuit with different delays, and that output signals be ready at different depths. We apply these methods, and also improve previous results, in order to find hardware circuits for the forward, inverse, and combined AES SBoxes, and for each of them we provide the fastest and smallest combinatorial circuits. Additionally, we propose a novel technique with "floating multiplexers" to minimize the circuit for the combined SBox, where we have two different linear matrices (forward and inverse) combined with multiplexers. The resulting AES SBox solutions are the fastest and smallest known to us.
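The depth formula mentioned for summing signals can be read as a Kraft-inequality-style bound; under that reading (an assumption on our part, stated here for concreteness), signals arriving at depths d_i can be combined with 2-input gates within depth D iff the sum of 2^d_i is at most 2^D, giving:

    from math import ceil, log2

    def min_sum_depth(depths):
        """Minimum depth to sum signals arriving at the given depths
        with 2-input gates (assumed reading of the paper's formula)."""
        return ceil(log2(sum(2 ** d for d in depths)))

    assert min_sum_depth([0, 0, 0, 0]) == 2     # four fresh inputs: balanced tree
    assert min_sum_depth([0, 0, 0, 0, 0]) == 3
    assert min_sum_depth([2, 2, 0]) == 4        # late-arriving signals add depth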
2019
TCHES
On-Device Power Analysis Across Hardware Security Domains 📺
Side-channel power analysis is a powerful method of breaking secure cryptographic algorithms, but typically power analysis is considered to require specialized measurement equipment on or near the device. Assuming an attacker first gained the ability to run code on the non-secure side of a device, they could trigger encryptions and use the on-board ADC to capture power traces of the hardware encryption engine. This is demonstrated on a SAML11, which contains an M23 core with a TrustZone-M implementation as the hardware security barrier. The attack requires 160 × 10^6 traces, or approximately 5 GByte of data, and does not use any external measurement equipment, performing the power analysis entirely with the ADC on board the microcontroller under attack. The attack is demonstrated to work both from the non-secure and the secure environment on the chip, making it a cross-domain power analysis attack. To understand the effect of noise and sample rate reduction, an attack is mounted on the SAML11 hardware AES peripheral using classic external equipment, and results are compared for various sample rates and hardware setups. A discussion of how users of this device can help prevent such remote attacks is also presented, along with metrics that can be used in evaluating other devices. Complete copies of all recorded power traces and the scripts used by the authors are publicly available.
2019
TCHES
Fast and simple constant-time hashing to the BLS12-381 elliptic curve 📺
Pairing-friendly elliptic curves in the Barreto-Lynn-Scott family are seeing a resurgence in popularity because of the recent result of Kim and Barbulescu that improves attacks against other pairing-friendly curve families. One particular Barreto-Lynn-Scott curve, called BLS12-381, is the locus of significant development and deployment effort, especially in blockchain applications. This effort has sparked interest in using the BLS12-381 curve for BLS signatures, which requires hashing to one of the groups of the bilinear pairing defined by BLS12-381. While there is a substantial body of literature on the problem of hashing to elliptic curves, much of this work does not apply to Barreto-Lynn-Scott curves. Moreover, the work that does apply has the unfortunate property that fast implementations are complex, while simple implementations are slow. In this work, we address these issues. First, we show a straightforward way of adapting the "simplified SWU" map of Brier et al. to BLS12-381. Second, we describe optimizations to this map that both simplify its implementation and improve its performance; these optimizations may be of interest in other contexts. Third, we implement and evaluate. We find that our work yields constant-time hash functions that are simple to implement, yet perform within 9% of the fastest, non-constant-time alternatives, which require much more complex implementations.
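The "simplified SWU" map being adapted is short enough to sketch on a toy curve (toy prime and coefficients chosen here with a·b ≠ 0 and p ≡ 3 (mod 4); BLS12-381 itself has a = 0, which is exactly why the paper composes the map with an isogeny):

    p, a, b = 1000003, 1, 3
    xi = next(z for z in range(2, p) if pow(z, (p - 1) // 2, p) == p - 1)

    def g(x):         return (x * x * x + a * x + b) % p
    def is_square(v): return v == 0 or pow(v, (p - 1) // 2, p) == 1
    def sqrt_mod(v):  return pow(v, (p + 1) // 4, p)   # valid since p = 3 mod 4

    def swu(u):
        t = (xi * xi * pow(u, 4, p) + xi * u * u) % p
        if t == 0:                       # exceptional inputs map to a fixed point
            return swu(1)
        x1 = (-b * pow(a, -1, p) * (1 + pow(t, -1, p))) % p
        # g(xi*u^2*x1) = xi^3*u^6*g(x1), so one of the two is always a square.
        x = x1 if is_square(g(x1)) else (xi * u * u * x1) % p
        y = sqrt_mod(g(x))
        assert (y * y - g(x)) % p == 0
        return x, y

    print(swu(5))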
2019
TCHES
Novel Side-Channel Attacks on Quasi-Cyclic Code-Based Cryptography 📺
Chou suggested a constant-time implementation of quasi-cyclic moderate-density parity-check (QC-MDPC) code-based cryptography to mitigate timing attacks at CHES 2016. This countermeasure was later found to be vulnerable to a differential power analysis (DPA) in the private syndrome computation, as described by Rossi et al. at CHES 2017. The proposed DPA, however, still could not completely recover accurate secret indices, requiring the additional solving of linear equations to obtain the entire secret. In this paper, we propose a multiple-trace attack which enables complete recovery of accurate secret indices. We further propose a single-trace attack which can work even when ephemeral keys are used or Rossi et al.'s DPA countermeasures are applied. Our experiments show that BIKE and LEDAcrypt may be vulnerable to our proposed attacks. The experiments are conducted using power consumption traces measured from ChipWhisperer-Lite XMEGA (8-bit processor) and ChipWhisperer UFO STM32F3 (32-bit processor) target boards.
2019
TCHES
Cache-Timing Attacks on RSA Key Generation 📺
During the last decade, constant-time cryptographic software has quickly transitioned from an academic construct to a concrete security requirement for real-world libraries. Most of OpenSSL’s constant-time code paths are driven by cryptosystem implementations enabling a dedicated flag at runtime. This process is perilous, with several examples emerging in the past few years of the flag either not being set or software defects directly mishandling the flag. In this work, we propose a methodology to analyze security-critical software for side-channel insecure code path traversal. Applying our methodology to OpenSSL, we identify three new code paths during RSA key generation that potentially leak critical algorithm state. Exploiting one of these leaks, we design, implement, and mount a single trace cache-timing attack on the GCD computation step. We overcome several hurdles in the process, including but not limited to: (1) granularity issues due to word-size operands to the GCD function; (2) bulk processing of desynchronized trace data; (3) non-trivial error rate during information extraction; and (4) limited high-confidence information on the modulus factors. Formulating lattice problem instances after obtaining and processing this limited information, our attack achieves roughly a 27% success rate for key recovery using the empirical data from 10K trials.
2019
TCHES
The Interpose PUF: Secure PUF Design against State-of-the-art Machine Learning Attacks 📺
The design of a silicon Strong Physical Unclonable Function (PUF) that is lightweight and stable, and which possesses a rigorous security argument, has been a fundamental problem in PUF research since its very beginnings in 2002. Various effective PUF modeling attacks, for example at CCS 2010 and CHES 2015, have shown that, to date, no existing silicon PUF design can meet these requirements. In this paper, we introduce the novel Interpose PUF (iPUF) design, and rigorously prove its security against all known machine learning (ML) attacks, including any currently known reliability-based strategies that exploit the stability of single CRPs (we are the first to provide a detailed analysis of when the reliability-based CMA-ES attack is successful and when it is not applicable). Furthermore, we provide simulations and confirm these in experiments with FPGA implementations of the iPUF, demonstrating its practicality. Our new iPUF architecture thus solves the currently open problem of constructing practical silicon Strong PUFs that are secure against state-of-the-art ML attacks.
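A toy simulation of the additive-delay model behind the iPUF's building blocks (a degenerate (1,1)-style interposition for illustration; the paper's constructions XOR several arbiter chains in each layer):

    import numpy as np
    rng = np.random.default_rng(0)

    def features(c):                 # parity feature map of an arbiter PUF
        s = 1 - 2 * c
        return np.concatenate([np.cumprod(s[::-1])[::-1], [1.0]])

    def apuf(w, c):                  # response = sign of w . phi(c)
        return int(w @ features(c) > 0)

    n = 64
    w_up, w_lo = rng.normal(size=n + 1), rng.normal(size=n + 2)

    def ipuf(c):                     # interpose the upper response mid-challenge
        r = apuf(w_up, c)
        return apuf(w_lo, np.insert(c, n // 2, r))

    print(ipuf(rng.integers(0, 2, size=n)))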
2019
TOSC
ZOCB and ZOTR: Tweakable Blockcipher Modes for Authenticated Encryption with Full Absorption 📺
We define ZOCB and ZOTR for nonce-based authenticated encryption with associated data, and analyze their provable security. These schemes use a tweakable blockcipher (TBC) as the underlying primitive, and fully utilize its input to process a plaintext and associated data (AD). This property is commonly referred to as full absorption, and it has been explored for schemes based on a permutation or a pseudorandom function (PRF). Our schemes improve the efficiency of the TBC-based counterparts of OCB and OTR, called ΘCB3 (Krovetz and Rogaway, FSE 2011) and OTR (Minematsu, EUROCRYPT 2014). Specifically, ΘCB3 and OTR have an independent part to process AD, and our schemes integrate this process into the encryption part of a plaintext by using the tweak input of the TBC. Up to a certain length of AD, ZOCB and ZOTR completely eliminate the independent process for it. Even for longer AD, our schemes process it efficiently by fully using the tweak input of the TBC. For this purpose, building on previous tweak extension schemes for TBCs, we introduce a scheme called XTX*. To our knowledge, ZOCB and ZOTR are the first efficiency improvement of ΘCB3 and OTR in terms of the number of TBC calls. Compared to Sponge-based and PRF-based schemes, ZOCB and ZOTR allow fully parallel computation of the underlying primitive, and have the unique design feature that an authentication tag is independent of a part of AD. We present experimental results illustrating the practical efficiency gain, and we clarify the efficiency cost with a concrete instantiation. The results show that for long input data our schemes have gains, while we have an efficiency loss for short input data.
2019
TOSC
Quantum Security Analysis of AES 📺
In this paper we analyze for the first time the post-quantum security of AES. AES is the most popular and widely used block cipher, established as the encryption standard by NIST in 2001. We consider the secret-key setting and, in particular, AES-256, the recommended primitive and one of the few existing ones that aim at providing post-quantum security of 128 bits. In order to determine the new security margin, i.e., the lowest number of non-attacked rounds in time less than 2^128 encryptions, we first provide generalized and quantized versions of the best known cryptanalysis on reduced-round AES, as well as a discussion of attacks that do not seem to benefit from a significant quantum speed-up. We propose a new framework for structured search that encompasses both the classical and quantum attacks we present, and allows their complexity to be computed efficiently. We believe this framework will be useful for future analysis. Our best attack is a quantum Demirci-Selçuk meet-in-the-middle attack. Unexpectedly, using the ideas underlying its design principle also enables us to obtain new, counter-intuitive classical time-memory-data (TMD) trade-offs. In particular, we can reduce the memory in some attacks against AES-256 and AES-128. One of the building blocks of our attacks is solving efficiently the AES S-box differential equation, with respect to the quantum cost of a reversible S-box. We believe that this generic quantum tool will be useful for future quantum differential attacks. Judging by the results obtained so far, AES seems a resistant primitive in the post-quantum world just as in the classical one, with a bigger security margin with respect to quantum generic attacks.
2019
TOSC
New Conditional Cube Attack on Keccak Keyed Modes 📺
The conditional cube attack on round-reduced Keccak keyed modes was proposed by Huang et al. at EUROCRYPT 2017. In their attack, a conditional cube variable was introduced, whose diffusion was significantly reduced by certain key bit conditions. The attack requires a set of cube variables which are not multiplied in the first round, while the conditional cube variable is not multiplied with other cube variables (called ordinary cube variables) in the first two rounds. This has an impact on the degree of the output of Keccak and hence gives a distinguisher. Later, the MILP method was applied to find ordinary cube variables. However, for some Keccak-based versions with few degrees of freedom, one could not find enough ordinary cube variables, which weakens or even invalidates the conditional cube attack. In this paper, a new conditional cube attack on Keccak is proposed. We remove the limitation that no cube variables multiply with each other in the first round. As a result, some quadratic terms may appear in the first round. We make use of some new bit conditions to prevent the quadratic terms from multiplying with other cube variables in the second round, so that there will be no cubic terms in the first two rounds. Furthermore, we introduce the kernel quadratic term and construct a 6-2-2 pattern to reduce the diffusion of quadratic terms significantly, where the Θ operation, even in the second round, becomes an identity transformation (CP-kernel property) for the kernel quadratic term. Previous conditional cube attacks on Keccak only explored the CP-kernel property of the Θ operation in the first round. Therefore, more degrees of freedom are available for ordinary cube variables and fewer bit conditions are used to remove the cubic terms in the second round, which plays a key role in the conditional cube attack on versions with very few degrees of freedom. We also use the MILP method in the search for cube variables and give key-recovery attacks on round-reduced Keccak keyed modes. As a result, we reduce the time complexity of key-recovery attacks on 7-round Keccak-MAC-512 and 7-round Ketje Sr v2 from 2^111 and 2^99 to 2^72 and 2^77, respectively. Additionally, we have reduced the time complexity of attacks on 9-round KMAC256 and 7-round Ketje Sr v1. Besides, practical attacks on 6-round Ketje Sr v1 and v2 are also given in this paper for the first time.
2019
TOSC
The Exact Security of PMAC with Two Powering-Up Masks 📺
PMAC is a rate-1, parallelizable, block-cipher-based message authentication code (MAC), proposed by Black and Rogaway (EUROCRYPT 2002). Improving the security bound is a main research topic for PMAC. In particular, showing a tight bound has been the primary goal of this research since Luykx et al.'s paper (EUROCRYPT 2016). Regarding the pseudo-random-function (PRF) security of PMAC, a collision of the hash function, or the difference between a random permutation and a random function, offers the lower bound Ω(q^2/2^n) for q queries and block cipher size n. Regarding MAC security (unforgeability), a hash collision for MAC queries, or guessing a tag, offers the lower bound Ω(q_m^2/2^n + q_v/2^n) for q_m MAC queries and q_v verification queries (forgery attempts). The tight upper bound O(q^2/2^n) on the PRF security of PMAC was given by Gaži et al. (ToSC 2017, Issue 1), but their proof requires a 4-wise independent masking scheme that uses 4 n-bit random values. Open problems from their work are: (1) find a masking scheme with three or fewer random values with which PMAC has a tight upper bound for PRF security; (2) find a masking scheme with which PMAC has a tight upper bound for MAC security. In this paper, we consider PMAC with two powering-up masks, which uses two random values for the masking scheme. Using the structure of the powering-up masking scheme, we show that this PMAC has the tight upper bound O(q^2/2^n) for PRF security, which answers open problem (1), and the tight upper bound O(q_m^2/2^n + q_v/2^n) for MAC security, which answers open problem (2). Note that these results deal with two-key PMACs; showing tight upper bounds for PMACs with a single key and/or with one powering-up mask remain open problems.
2019
TOSC
On Beyond-Birthday-Bound Security: Revisiting the Development of ISO/IEC 9797-1 MACs 📺
Best Paper FSE 2020
ISO/IEC 9797-1 is an international standard for block-cipher-based Message Authentication Codes (MACs). The current version, ISO/IEC 9797-1:2011, specifies six single-pass CBC-like MAC structures that are capped at birthday-bound security. For higher, beyond-birthday-bound security, it recommends using the concatenation combiner of two single-pass MACs. In this paper, we show that this suggestion is invalid by presenting a birthday-bound forgery attack on the concatenation combiner, which is essentially based on Joux's multi-collision. Notably, our new forgery attack on the concatenation of two instances of MAC Algorithm 1 with padding scheme 2 requires only 3 queries. Moreover, we look for patches by revisiting the development of ISO/IEC 9797-1 with respect to beyond-birthday-bound security. More specifically, we evaluate the XOR combiner of single-pass CBC-like MACs, which was used in a previous version of ISO/IEC 9797-1.
2019
TOSC
Classification of Balanced Quadratic Functions 📺
S-boxes, typically the only nonlinear part of a block cipher, are the heart of symmetric cryptographic primitives. They significantly impact the cryptographic strength and the implementation characteristics of an algorithm. Due to their simplicity, quadratic vectorial Boolean functions are preferred when efficient implementations for a variety of applications are of concern. Many characteristics of a function stay invariant under affine equivalence. So far, all 6-bit Boolean functions, as well as all 3- and 4-bit permutations, have been classified up to affine equivalence. At FSE 2017, Bozilov et al. presented the first classification of 5-bit quadratic permutations. In this work, we propose an adaptation of their work resulting in a highly efficient algorithm to classify n × m functions for n ≥ m. Our algorithm enables, for the first time, a complete classification of 6-bit quadratic permutations as well as of all balanced quadratic functions for n ≤ 6. These functions can be valuable for new cryptographic algorithm designs with efficient multi-party computation or side-channel analysis resistance as design goals. In addition, we provide a second tool for finding decompositions of length two. We demonstrate its use by decomposing existing higher-degree S-boxes and constructing new S-boxes with good cryptographic and implementation properties.
2019
TOSC
Reconstructing an S-box from its Difference Distribution Table 📺
In this paper we study the problem of recovering a secret S-box from its difference distribution table (DDT). While an interesting theoretical problem in its own right, the ability to recover an S-box from its DDT can be used in cryptanalytic attacks where the attacker can obtain the DDT (e.g., in Bar-On et al.'s attack on GOST), in supporting theoretical analysis of the properties of difference distribution tables (e.g., in Boura et al.'s work), or in the analysis of S-boxes with unknown design criteria (e.g., in Biryukov and Perrin's analysis). We show that, using the well-established relation between the DDT and the linear approximation table (LAT), one can devise an algorithm different from the straightforward guess-and-determine (GD) algorithm proposed by Boura et al. Moreover, we show how to exploit this relation, and embed the knowledge obtained from it in the GD algorithm. We tested our new algorithm on random S-boxes of different sizes, and for random 14-bit bijective S-boxes, our results outperform the GD attack by several orders of magnitude.
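The DDT-LAT relation in question is the classical Fourier duality: the squared Walsh coefficients are signed sums of DDT entries. The check below verifies W(a,b)^2 = Σ_{α,β} (−1)^{a·α ⊕ b·β} DDT[α][β] on a random 4-bit S-box (an editorial illustration, not the paper's code):

    import random
    random.seed(7)
    n = 4; N = 1 << n
    S = list(range(N)); random.shuffle(S)

    DDT = [[0] * N for _ in range(N)]
    for x in range(N):
        for dx in range(N):
            DDT[dx][S[x] ^ S[x ^ dx]] += 1

    def dot(a, b): return bin(a & b).count("1") & 1
    def W(a, b):   return sum((-1) ** (dot(a, x) ^ dot(b, S[x])) for x in range(N))

    for a in range(N):
        for b in range(N):
            rhs = sum((-1) ** (dot(a, al) ^ dot(b, be)) * DDT[al][be]
                      for al in range(N) for be in range(N))
            assert W(a, b) ** 2 == rhs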
2019
TOSC
Efficient Search for Optimal Diffusion Layers of Generalized Feistel Networks 📺
The Feistel construction is one of the most studied ways of building block ciphers. Several generalizations have been proposed in the literature, leading to the Generalized Feistel Network, where the round function first applies a classical Feistel operation in parallel on an even number of blocks, and then a permutation is applied to this set of blocks. In 2010 at FSE, Suzaki and Minematsu studied the diffusion of such constructions, raising the question of how many rounds are required so that each block of the ciphertext depends on all blocks of the plaintext. They gave optimal permutations, with respect to this diffusion criterion, for a Generalized Feistel Network consisting of 2 to 16 blocks, as well as a good candidate for 32 blocks. Later, at FSE 2019, Cauchois et al. went further and were able to propose optimal even-odd permutations for up to 26 blocks. In this paper, we complete the literature by building optimal even-odd permutations for 28, 30, 32, and 36 blocks, which to the best of our knowledge were unknown until now. The main idea behind our constructions and impossibility proofs is a new characterization of the total diffusion of a permutation after a given number of rounds. In fact, we propose an efficient algorithm based on this new characterization which constructs all optimal even-odd permutations for the 28, 30, 32, and 36 block cases and proves a better lower bound for the 34, 38, 40, and 42 block cases. In particular, we improve the 32-block case by exhibiting optimal even-odd permutations with a diffusion round of 9. The existence of such a permutation was an open problem for almost 10 years, and the best known permutation in the literature had a diffusion round of 10. Moreover, our characterization can be implemented very efficiently and allows us to easily re-find all optimal even-odd permutations for up to 26 blocks with a basic exhaustive search.
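Full diffusion of a Generalized Feistel Network is easy to check by propagating dependency sets; the sketch below follows one common convention (Type-II GFN, F from each even block into its odd neighbour, then the block permutation perm mapping old position i to perm[i]) and checks one direction only, whereas the literature's diffusion round takes the maximum over encryption and decryption:

    def diffusion_rounds(perm):
        k = len(perm)
        dep = [{i} for i in range(k)]        # plaintext blocks reaching block i
        for r in range(1, 64):
            mixed = [dep[i] | dep[i - 1] if i % 2 else dep[i] for i in range(k)]
            new = [None] * k
            for i in range(k):
                new[perm[i]] = mixed[i]
            dep = new
            if all(len(d) == k for d in dep):
                return r
        return None

    assert diffusion_rounds([3, 0, 1, 2]) == 4   # cyclic shift on 4 blocks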
2019
TOSC
CRAFT: Lightweight Tweakable Block Cipher with Efficient Protection Against DFA Attacks 📺
Traditionally, countermeasures against physical attacks are integrated into the implementation of cryptographic primitives after the algorithms have been designed for achieving a certain level of cryptanalytic security. This picture has been changed by the introduction of PICARO, ZORRO, and FIDES, where efficient protection against Side-Channel Analysis (SCA) attacks has been considered in their design. In this work we present the tweakable block cipher CRAFT: the efficient protection of its implementations against Differential Fault Analysis (DFA) attacks has been one of the main design criteria, while we provide strong bounds for its security in the related-tweak model. Considering the area footprint of round-based hardware implementations, CRAFT outperforms the other lightweight ciphers with the same state and key size. This holds not only for unprotected implementations but also when fault-detection facilities, side-channel protection, and their combination are integrated into the implementation. In addition to supporting a 64-bit tweak, CRAFT has the additional property that the circuit realizing the encryption can support the decryption functionality as well with very little area overhead.
2019
TOSC
libInterMAC: Beyond Confidentiality and Integrity in Practice 📺
Boldyreva et al. (Eurocrypt 2012) defined a fine-grained security model capturing ciphertext fragmentation attacks against symmetric encryption schemes. The model was extended by Albrecht et al. (CCS 2016) to include an integrity notion. The extended security model encompasses important security goals of SSH that go beyond confidentiality and integrity, including length hiding and denial-of-service resistance properties. Boldyreva et al. also defined and analysed the InterMAC scheme, while Albrecht et al. showed that InterMAC satisfies stronger security notions than all currently available SSH encryption schemes. In this work, we take the InterMAC scheme and make it fully ready for use in practice. This involves several steps. First, we modify the InterMAC scheme to support encryption of arbitrary-length plaintexts, and we replace the use of Encrypt-then-MAC in InterMAC with modern nonce-based authenticated encryption. Second, we describe a reference implementation of the modified InterMAC scheme in the form of the library libInterMAC, and we give a performance analysis of it. Third, to test the practical performance of libInterMAC, we implement several InterMAC-based encryption schemes in OpenSSH and carry out a performance analysis for the use case of file transfer using SCP. We measure the data throughput and the data overhead of using InterMAC-based schemes compared to existing schemes in OpenSSH. Our analysis shows that, for some network set-ups, using InterMAC-based schemes in OpenSSH only moderately affects performance whilst providing stronger security guarantees compared to existing schemes.
2019
TOSC
Constructing Low-latency Involutory MDS Matrices with Lightweight Circuits 📺
MDS matrices are important building blocks providing diffusion functionality for the design of many symmetric-key primitives. In recent years, continuous efforts have been made on the construction of MDS matrices with small area footprints in the context of lightweight cryptography. Just recently, Duval and Leurent (ToSC 2018/FSE 2019) reported some 32 × 32 binary MDS matrices with branch number 5, which can be implemented with only 67 XOR gates, whereas the previously known lightest ones of the same size cost 72 XOR gates. In this article, we focus on the construction of lightweight involutory MDS matrices, which are even more desirable than ordinary MDS matrices, since the same circuit can be reused when the inverse is required. In particular, we identify some involutory MDS matrices which can be realized with only 78 XOR gates with depth 4, whereas the previously known lightest involutory MDS matrices cost 84 XOR gates with the same depth. Notably, the involutory MDS matrix we find is much smaller than the AES MixColumns operation, which requires 97 XOR gates with depth 8 when implemented as a block of combinatorial logic that can be computed in one clock cycle. However, with respect to latency, the AES MixColumns operation is superior to our 78-XOR involutory matrices, since the AES MixColumns can be implemented with depth 3 by using more XOR gates. We prove that the depth of a 32 × 32 MDS matrix with branch number 5 (e.g., the AES MixColumns operation) is at least 3. Then, we enhance Boyar's SLP-heuristic algorithm with circuit depth awareness, such that the depth of its output circuit is limited. Along the way, we give a formula for computing the minimum achievable depth of a circuit implementing the summation of a set of signals with given depths, which is of independent interest. We apply the new SLP heuristic to a large set of lightweight involutory MDS matrices, and we identify a depth 3 involutory MDS matrix whose implementation costs 88 XOR gates, which is superior to the AES MixColumns operation with respect to both lightweightness and latency, and enjoys the extra involution property.
2019
TOSC
Boomerang Connectivity Table Revisited. Application to SKINNY and AES 📺
The boomerang attack is a variant of differential cryptanalysis which regards a block cipher E as the composition of two sub-ciphers, i.e., E = E1 ∘ E0, and which constructs distinguishers for E with probability p^2q^2 by combining differential trails for E0 and E1 with probability p and q respectively. However, the validity of this attack relies on the dependency between the two differential trails. Murphy has shown cases where probabilities calculated as p^2q^2 turn out to be zero, while techniques such as the boomerang switches proposed by Biryukov and Khovratovich give rise to probabilities greater than p^2q^2. To formalize such dependency and obtain a more accurate estimation of the probability of the distinguisher, Dunkelman et al. proposed the sandwich framework that regards E as Ẽ1 ∘ Em ∘ Ẽ0, where the dependency between the two differential trails is handled by a careful analysis of the probability of the middle part Em. Recently, Cid et al. proposed the Boomerang Connectivity Table (BCT), which unifies the previous switch techniques and incompatibilities and evaluates the probability of Em theoretically when Em is composed of a single S-box layer. In this paper, we revisit the BCT and propose a generalized framework which is able to identify the actual boundaries of Em containing the dependency of the two differential trails, and to systematically evaluate the probability of Em with any number of rounds. To demonstrate the power of this new framework, we apply it to two block ciphers, SKINNY and AES. In the application to SKINNY, the probabilities of four boomerang distinguishers are re-evaluated. It turns out that Em involves 5 or 6 rounds and the probabilities of the full distinguishers are much higher than previously evaluated. In the application to AES, the new framework is used to exclude incompatibilities and find high-probability distinguishers of AES-128 under the related-subkey setting. As a result, a 6-round distinguisher with probability 2^(-109.42) is constructed. Lastly, we discuss the relation between the dependency of two differential trails in boomerang distinguishers and the properties of the components of the cipher.
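The BCT that this paper generalizes is directly computable from the definition of Cid et al.: BCT[Δi][∇o] counts the x with S^(-1)(S(x) ⊕ ∇o) ⊕ S^(-1)(S(x ⊕ Δi) ⊕ ∇o) = Δi. A small self-contained example (the S-box is just an arbitrary 4-bit permutation, not one from the paper):

    N = 16
    S = [12, 6, 9, 0, 1, 10, 2, 11, 3, 8, 5, 13, 4, 14, 7, 15]
    Sinv = [S.index(y) for y in range(N)]

    BCT = [[sum(1 for x in range(N)
                if Sinv[S[x] ^ do] ^ Sinv[S[x ^ di] ^ do] == di)
            for do in range(N)]
           for di in range(N)]

    assert BCT[0][0] == N          # the trivial entry is always 2^n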
2019
TOSC
Boomerang Switch in Multiple Rounds. Application to AES Variants and Deoxys 📺
The boomerang attack is a cryptanalysis technique that allows an attacker to concatenate two short differential characteristics. Several research results (ladder switch, S-box switch, sandwich attack, Boomerang Connectivity Table (BCT), ...) showed that the dependency between these two characteristics at the switching round can have a significant impact on the complexity of the attack, or even potentially invalidate it. In this paper, we revisit the issue of the boomerang switching effect, and exploit it in the case where multiple rounds are involved. To support our analysis, we propose a tool called the Boomerang Difference Table (BDT), which can be seen as an improvement of the BCT and allows a systematic evaluation of the boomerang switch through multiple rounds. In order to illustrate the power of this technique, we propose a new related-key attack on 10-round AES-256 which requires only two simple related keys and 2^75 computations. This is a much more realistic scenario than the state-of-the-art 10-round AES-256 attacks, where subkey oracles, or several related keys and high computational power, are needed. Furthermore, we also provide improved attacks against full AES-192 and reduced-round Deoxys.
2019
TOSC
A General Proof Framework for Recent AES Distinguishers 📺
In this paper, a new framework is developed for proving and adapting the recently proposed multiple-of-8 property and mixture-differential distinguishers. The above properties are formulated as immediate consequences of an equivalence relation on the input pairs, under which the difference at the output of the round function is invariant. This approach provides a further understanding of these newly developed distinguishers. For example, it clearly shows that the branch number of the linear layer does not influence the validity of the property, contrary to what was previously believed. We further provide an extension of the mixture-differential distinguishers and multiple-of-8 property to any SPN and to a larger class of subspaces. These adapted properties can then be exhibited in a systematic way for ciphers other than the AES. We illustrate this with the examples of Midori, Klein, LED and Skinny.
2019
TOSC
Zero-Correlation Attacks on Tweakable Block Ciphers with Linear Tweakey Expansion 📺
The design and analysis of dedicated tweakable block ciphers is a quite recent and very active research field that provides an ongoing stream of new insights. For instance, results of Kranz, Leander, and Wiemer from FSE 2017 show that the addition of a tweak using a linear tweak schedule does not introduce new linear characteristics. In this paper, we consider – to the best of our knowledge – for the first time the effect of the tweak on zero-correlation linear cryptanalysis for ciphers that have a linear tweak schedule. It turns out that the tweak can often be used to get zero-correlation linear hulls covering more rounds compared to just searching for zero-correlation linear hulls on the data-path of a cipher. Moreover, this also implies the existence of integral distinguishers on the same number of rounds. We have applied our technique to round-reduced versions of Qarma, Mantis, and Skinny. As a result, we can present – to the best of our knowledge – the best attack (with respect to the number of rounds) on a round-reduced variant of Qarma.
2019
TOSC
Related-Tweak Statistical Saturation Cryptanalysis and Its Application on QARMA 📺
A statistical saturation attack takes advantage of a set of plaintexts with some bits fixed while the others vary randomly, and then tracks the evolution of a non-uniform plaintext distribution through the cipher. Previous statistical saturation attacks were all implemented in the single-key setting, and there were no public attack models in the related-key/tweak setting. In this paper, we propose a new cryptanalytic method which can be seen as a related-key/tweak statistical saturation attack, by revealing the link between related-key/tweak statistical saturation distinguishers and KDIB (Key Difference Invariant Bias) / TDIB (Tweak Difference Invariant Bias) ones. KDIB cryptanalysis was proposed by Bogdanov et al. at ASIACRYPT’13 and utilizes the property that there can exist linear trails such that their biases are deterministically invariant under a key difference. This method extends easily to TDIB distinguishers when the difference is placed in the tweak instead. The link between them provides a new and more efficient way to find related-key/tweak statistical saturation distinguishers in ciphers. Thereafter, an automatic searching algorithm for KDIB/TDIB distinguishers is also given in this paper, which can find word-level KDIB distinguishers for S-box-based key-alternating ciphers. We apply this algorithm to QARMA-64 and give a related-tweak statistical saturation attack on 10-round QARMA-64 with the outer whitening key. Besides, an 11-round attack on QARMA-128 is also given based on the TDIB technique. Compared with previous public attacks on QARMA including the outer whitening key, all attacks presented in this paper are the best ones in terms of the number of rounds.
2019
TOSC
General Diffusion Analysis: How to Find Optimal Permutations for Generalized Type-II Feistel Schemes 📺
Type-II Generalized Feistel Schemes are one of the most popular versions of Generalized Feistel Schemes. Their round function consists of applying a classical Feistel transformation to p sub-blocks of two consecutive words and then shifting the k = 2p words cyclically. The low implementation cost it offers is balanced by low diffusion, limiting its efficiency. The diffusion of such structures may however be improved by replacing the cyclic shift with a different permutation without any additional implementation cost. In this paper, we study ways to determine permutations with the fastest diffusion, called optimal permutations. To do so, two ideas are used. First, we study the natural equivalence classes of permutations that preserve cryptographic properties; second, we use the representation of permutations as coloured trees. For both heuristic and historical reasons, we focus first on even-odd permutations, that is, those permutations for which images of even numbers are odd. We derive from their structure an upper bound on the number of their equivalence classes together with a strategy to perform exhaustive searches on classes. We performed those exhaustive searches for sizes k ≤ 24, while previous exhaustive searches on all permutations were limited to k ≤ 16. For sizes beyond the reach of this method, we use tree representations to find permutations with good intermediate diffusion properties. This heuristic leads to an optimal even-odd permutation for k = 26 and best-known results for sizes k = 64 and k = 128. Finally, we transpose these methods to all permutations. Using a new strategy to exhaust equivalence classes, we perform exhaustive searches on classes for sizes k ≤ 20, whose results confirmed the initial heuristic: there always exist optimal permutations that are even-odd, and furthermore for k = 18 all optimal permutations are even-odd permutations.
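The notion of diffusion speed above is easy to simulate. The sketch below uses one common convention (ours; the paper's exact round and permutation conventions may differ) and counts the rounds a Type-II GFS needs before every word depends on every input word:

```python
def full_diffusion_rounds(perm, max_rounds=64):
    """Rounds until every word of a Type-II GFS depends on every input
    word. Convention: word 2j+1 absorbs F(word 2j) for all j, then the
    word at position i moves to position perm[i]."""
    k = len(perm)
    deps = [{i} for i in range(k)]          # deps[i]: inputs word i depends on
    for r in range(1, max_rounds + 1):
        for j in range(k // 2):             # Feistel layer
            deps[2 * j + 1] |= deps[2 * j]
        new = [set()] * k
        for i in range(k):                  # apply the word permutation
            new[perm[i]] = deps[i]
        deps = new
        if all(len(d) == k for d in deps):
            return r
    return None

k = 8
cyclic = [(i - 1) % k for i in range(k)]    # the classical cyclic shift
print(full_diffusion_rounds(cyclic))        # the cyclic shift is slow (k rounds here)
```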
2019
TOSC
Partitions in the S-Box of Streebog and Kuznyechik 📺
Streebog and Kuznyechik are the latest symmetric cryptographic primitives standardized by the Russian GOST. They share the same S-Box, π, whose design process was not described by its authors. In previous works, Biryukov, Perrin and Udovenko recovered two completely different decompositions of this S-Box. We revisit their results and identify a third decomposition of π. It is an instance of a fairly small family of permutations operating on 2m bits which we call TKlog and which is closely related to finite field logarithms. Its simplicity and the small number of components it uses lead us to claim that it has to be the structure intentionally used by the designers of Streebog and Kuznyechik. The 2m-bit permutations of this type have a very strong algebraic structure: they map multiplicative cosets of the subfield GF(2^m)* to additive cosets of GF(2^m). Furthermore, the function relating each multiplicative coset to the corresponding additive coset is always essentially the same. To the best of our knowledge, we are the first to expose this very strong algebraic structure. We also investigate other properties of the TKlog and show in particular that it can always be decomposed in a fashion similar to the first decomposition of Biryukov et al., thus explaining the relation between the two previous decompositions. It also means that it is always possible to implement a TKlog efficiently in hardware and that it always exhibits a visual pattern in its LAT similar to the one present in π. While we could not find attacks based on these new results, we discuss the impact of our work on the security of Streebog and Kuznyechik. To this end, we provide a new simpler representation of the linear layer of Streebog as a matrix multiplication in the exact same field as the one used to define π. We deduce that this matrix interacts in a non-trivial way with the partitions preserved by π.
2019
TOSC
PEIGEN – a Platform for Evaluation, Implementation, and Generation of S-boxes 📺
In this paper, a platform named PEIGEN is presented to evaluate security, find efficient software/hardware implementations, and generate cryptographic S-boxes. Continuously developed for decades, S-boxes are constantly evolving in terms of the design criteria for both security requirements and software/hardware performance. PEIGEN aims to be a platform covering a comprehensive check-list of design criteria of S-boxes appearing in the literature. To this end, the security requirements are first intensively surveyed, existing tools for S-boxes are then comprehensively compared, and finally our platform PEIGEN is presented. The survey part aims to be a systematic reference for the theoretical study of S-boxes. The platform aims to be an assistant tool for the experimental study and practical use of S-boxes. PEIGEN not only integrates most of the features of existing tools, but is also equipped with functionalities to evaluate new security-related properties, and improves the efficiency of the search algorithms for optimized implementations in several respects. With the help of this powerful platform, many interesting observations are made on the relations between the security notions, as well as on the S-boxes used in existing symmetric-key cryptographic primitives. PEIGEN will become an open platform and welcomes contributions from all parties to help the community facilitate the research and use of S-boxes.
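To give a flavor of the simplest entries on such a check-list, the snippet below computes two classical criteria, differential uniformity and linearity, by brute force. This is only our illustration of the kind of property such a platform evaluates, not PEIGEN's actual interface:

```python
def differential_uniformity(sbox):
    """Largest DDT entry over nonzero input differences (lower is better)."""
    n = len(sbox)
    return max(max(sum(1 for x in range(n) if sbox[x] ^ sbox[x ^ di] == do)
                   for do in range(n))
               for di in range(1, n))

def linearity(sbox):
    """Largest |Walsh coefficient| over nonzero output masks (lower is better)."""
    n = len(sbox)
    return max(abs(sum((-1) ** bin((a & x) ^ (b & sbox[x])).count("1")
                       for x in range(n)))
               for a in range(n) for b in range(1, n))

# The 4-bit PRESENT S-box scores 4 and 8 on these two criteria.
present = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
           0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
print(differential_uniformity(present), linearity(present))
```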
2019
TOSC
A new SNOW stream cipher called SNOW-V 📺
In this paper we propose a new member of the SNOW family of stream ciphers, called SNOW-V. The motivation is to meet an industry demand for very high speed encryption in a virtualized environment, something that can be expected to be relevant in a future 5G mobile communication system. We revise the SNOW 3G architecture to be competitive in such a pure software environment, making use of both existing acceleration instructions for the AES encryption round function and the ability of modern CPUs to handle large vectors of integers (e.g. SIMD instructions). We keep the general design of SNOW 3G, in terms of the linear feedback shift register (LFSR) and Finite State Machine (FSM), but both entities are updated to better align with vectorized implementations. The LFSR part is new and operates at 8 times the speed of the FSM. We have furthermore increased the total state size by using 128-bit registers in the FSM, we use the full AES encryption round function in the FSM update, and, finally, the initialization phase includes a masking with key bits at its end. The result is an algorithm generally much faster than AES-256 and with expected security no worse than that of AES-256.
2019
TOSC
DoveMAC: A TBC-based PRF with Smaller State, Full Security, and High Rate 📺
Recent parallelizable message authentication codes (MACs) have demonstrated the benefit of tweakable block ciphers (TBCs) for authentication with high security guarantees. With ZMAC, Iwata et al. extended this line of research by showing that TBCs can simultaneously increase the number of message bits that are processed per primitive call. However, ZMAC and previous TBC-based MACs needed more memory than sequential constructions. While this aspect is less of an issue on desktop processors, it can be unfavorable on resource-constrained platforms. In contrast, existing sequential MACs limit the number of message bits to the block length of the primitive n or below. This work proposes DoveMAC, a TBC-based PRF that reduces the memory of ZMAC-based MACs to 2n + 2t + 2k bits, where n is the state size, t the tweak length, and k the key length of the underlying primitive. DoveMAC provides (n + min(n, t))/2 bits of security, and processes n + t bits per primitive call. Our construction is the first sequential MAC that combines beyond-birthday-bound security with a rate above n bits per call. By reserving a single tweak bit for domain separation, we derive a single-key variant DoveMAC1k.
2019
TOSC
Revisit Division Property Based Cube Attacks: Key-Recovery or Distinguishing Attacks? 📺
Cube attacks are an important type of key-recovery attacks against stream ciphers. In particular, they have been shown to be powerful against Trivium-like ciphers. Traditional cube attacks are experimental attacks which could only exploit cubes of size less than 40. At CRYPTO 2017, division property based cube attacks were proposed by Todo et al.; an advantage of introducing the division property to cube attacks is that large cube sizes beyond the experimental range can be explored, and so powerful theoretical attacks were mounted on many lightweight stream ciphers. In this paper, we revisit division property based cube attacks. Division property based cube attacks rely on an important assumption, called the Weak Assumption, to support the effectiveness of key recovery. At CRYPTO 2017, Todo et al. stated that the Weak Assumption was expected to hold for theoretically recovered superpolies of Trivium, according to some experimental results on small cubes. In this paper, it is shown that the Weak Assumption often fails in cube attacks against Trivium, and moreover a new method to recover the exact superpoly of a given cube is developed based on the bit-based division property. With our method, for the cube I proposed by Todo et al. at CRYPTO 2017 to attack the 832-round Trivium, we recover its superpoly p_I(x, v) = v_68·v_78 · (x_58 ⊕ v_70) · (x_59·x_60 ⊕ x_34 ⊕ x_61). Furthermore, we prove that some of the best key-recovery results given at CRYPTO 2018 on Trivium are actually distinguishing attacks. We hope this paper gives some new insights into accurately recovering superpolies with the bit-based division property and also draws attention to the validity of division property based cube attacks against stream ciphers.
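The basic operation behind superpoly recovery, summing the cipher's output over a cube, is easy to illustrate. Below is a toy Python example of ours; the Boolean function f is made up for the demonstration and is not Trivium:

```python
from itertools import product
from random import randrange

def cube_sum(f, cube, n_pub, x):
    """XOR f(x, v) over all 2^|cube| assignments of the cube variables,
    with the non-cube public variables fixed to 0; the result is the
    superpoly of the cube evaluated at the secret x."""
    acc = 0
    for bits in product([0, 1], repeat=len(cube)):
        v = [0] * n_pub
        for i, bit in zip(cube, bits):
            v[i] = bit
        acc ^= f(x, v)
    return acc

# Toy "cipher output": f = v0*v1*(x0 ^ v2) ^ v0*x1 ^ x2.
# For the cube I = {0, 1} (with v2 fixed to 0) the superpoly is x0.
def f(x, v):
    return (v[0] & v[1] & (x[0] ^ v[2])) ^ (v[0] & x[1]) ^ x[2]

for _ in range(16):
    x = [randrange(2) for _ in range(3)]
    assert cube_sum(f, [0, 1], 3, x) == x[0]
```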
2019
TOSC
Cryptanalysis of Plantlet 📺
Plantlet is a lightweight stream cipher designed by Mikhalev, Armknecht and Müller in IACR ToSC 2017. It has a Grain-like structure with two state registers of size 40 and 61 bits. In spite of this, the cipher does not seem to lose security against generic Time-Memory-Data Tradeoff attacks due to the novelty of its design. The cipher uses an 80-bit secret key and a 90-bit IV. In this paper, we first present a key-recovery attack on Plantlet that requires around 2^76.26 Plantlet encryptions. The attack leverages the fact that two internal states of Plantlet that differ in the 43rd LFSR location are guaranteed to produce keystream that is either equal or unequal in 45 locations with probability 1. Thus an attacker can, with some probability, guess that when two segments of keystream blocks possess the 45-bit difference just mentioned, they have been produced by two internal states that differ only in the 43rd LFSR location. Thereafter, by solving a system of polynomial equations representing the keystream bits, the attacker can find the secret key if his guess was indeed correct, or reach some kind of contradiction if his guess was incorrect. In the latter event, he would repeat the procedure for other keystream blocks with the given difference. We show that this process, when repeated a finite number of times, does indeed yield the value of the secret key. In the second part of the paper, we observe that the previous attack was limited to internal state differences that occurred at time instances congruent to 0 mod 80. We further observe that by generalizing the attack to include internal state differences in all equivalence classes modulo 80, we lower the total number of keystream bits required to perform the attack and in the process reduce the attack complexity to 2^69.98 Plantlet encryptions.
2019
TOSC
New Related-Tweakey Boomerang and Rectangle Attacks on Deoxys-BC Including BDT Effect 📺
In the CAESAR competition, Deoxys-I and Deoxys-II are two important authenticated encryption schemes submitted by Jean et al. Recently, Deoxys-II, together with Ascon, ACORN, AEGIS-128, OCB and COLM, has been selected for the final CAESAR portfolio. Notably, Deoxys-II is also the primary choice for the use case “Defense in depth”. However, Deoxys-I remains one of the third-round candidates of the CAESAR competition. Both Deoxys-I and Deoxys-II adopt Deoxys-BC-256 and Deoxys-BC-384 as their internal tweakable block ciphers. In this paper, we investigate the security of round-reduced Deoxys-BC-256/-384 and Deoxys-I against related-tweakey boomerang and rectangle attacks with some new boomerang distinguishers. For Deoxys-BC-256, we present 10-round related-tweakey boomerang and rectangle attacks for the popular setting (|tweak|, |key|) = (128, 128), which reach one more round than the previous attacks in this setting. Moreover, an 11-round related-tweakey rectangle attack on Deoxys-BC-256 is given for the first time. We also put forward a 13-round related-tweakey boomerang attack in the popular setting (|tweak|, |key|) = (128, 256) for Deoxys-BC-384, while the previous attacks in this setting only work for 12 rounds at most. In addition, the first 14-round related-tweakey rectangle attack on Deoxys-BC-384 is given when (|tweak| < 98, |key| > 286), which attacks one more round than before. Besides, we give the first 10-round rectangle attack on the authenticated encryption mode Deoxys-I-128-128 with one more round than before, and we also reduce the complexity of the related-tweakey rectangle attack on 12-round Deoxys-I-256-128 by a factor of 2^28. Our attacks cannot be applied to (round-reduced) Deoxys-II.
2019
TOSC
Substitution Attacks against Message Authentication 📺
This work introduces Algorithm Substitution Attacks (ASAs) on message authentication schemes. In light of revelations concerning mass surveillance, ASAs were initially introduced by Bellare, Paterson and Rogaway as a novel attack class against the confidentiality of encryption schemes. Such an attack replaces one or more of the regular scheme algorithms with a subverted version that aims to reveal information to an adversary (engaged in mass surveillance), while remaining undetected by users. While most prior work focused on subverting encryption systems, we study options to subvert symmetric message authentication protocols. In particular we provide powerful generic attacks that apply e.g. to HMAC or Carter–Wegman based schemes, inducing only a negligible implementation overhead. As subverted authentication can act as an enabler for subverted encryption (software updates can be manipulated to include replacements of encryption routines), we consider attacks of the new class highly impactful and dangerous.
2019
TOSC
New Semi-Free-Start Collision Attack Framework for Reduced RIPEMD-160 📺
RIPEMD-160 is a hash function published in 1996, which shares similarities with other hash functions designed in the same time period like MD4, MD5 and SHA-1. However, for RIPEMD-160, no (semi-free-start) collision attacks on the full number of steps are known. Hence, it is still used, e.g., to generate Bitcoin addresses together with SHA-256, and is an ISO/IEC standard. Due to its dual-stream structure, even semi-free-start collision attacks starting from the first step only reach 36 steps; these were first shown by Mendel et al. at Asiacrypt 2013 and later improved by Liu, Mendel and Wang at Asiacrypt 2017. Both attacks are based on a freedom-degree utilization technique similar to the one proposed by Landelle and Peyrin at Eurocrypt 2013. However, the best known semi-free-start collision attack on 36 steps of RIPEMD-160 presented at Asiacrypt 2017 still requires 2^55.1 time and 2^32 memory. Consequently, a practical semi-free-start collision attack for the first 36 steps of RIPEMD-160 still requires a significant amount of resources. Considering the structure of these previous semi-free-start collision attacks for 36 steps of RIPEMD-160, it seems hard to extend them to more steps. Thus, we develop a different semi-free-start collision attack framework for reduced RIPEMD-160 by carefully investigating the message expansion of RIPEMD-160. Our new framework has several advantages. First of all, it allows extending the attacks to more steps. Second, the memory complexity of the attacks is negligible. Hence, we were able to mount semi-free-start collision attacks on 36 and 37 steps of RIPEMD-160 with practical time complexity 2^41 and 2^49, respectively. Additionally, we describe semi-free-start collision attacks on 38 and 40 (out of 80) steps of RIPEMD-160 with time complexity 2^52 and 2^74.6, respectively. To the best of our knowledge, these are the best semi-free-start collision attacks on RIPEMD-160 starting from the first step with respect to the number of steps, including the first practical colliding message pairs for 36 and 37 steps of RIPEMD-160.
2019
TOSC
Security of Symmetric Primitives against Key-Correlated Attacks 📺
We study the security of symmetric primitives against key-correlated attacks (KCA), whereby an adversary can arbitrarily correlate keys, messages, and ciphertexts. Security against KCA is required whenever a primitive should securely encrypt key-dependent data, even when it is used under related keys. KCA is a strengthening of the previously considered notions of related-key attack (RKA) and key-dependent message (KDM) security. This strengthening is strict, as we show that 2-round Even–Mansour fails to be KCA secure even though it is both RKA and KDM secure. We provide feasibility results in the ideal-cipher model for KCAs and show that 3-round Even–Mansour is KCA secure under key offsets in the random-permutation model. We also give a natural transformation that converts any authenticated encryption scheme to a KCA-secure one in the random-oracle model. Conceptually, our results allow for a unified treatment of RKA and KDM security in idealized models of computation.
2019
TOSC
Exhaustive Search for Various Types of MDS Matrices 📺
MDS matrices are used in the design of diffusion layers in many block ciphers and hash functions due to their optimal branch number. But MDS matrices, in general, have costly implementations, so there have been many proposals in search of efficiently implementable MDS matrices. In particular, circulant, Hadamard, and recursive MDS matrices from companion matrices have been widely studied. In a recent work, recursive MDS matrices from sparse DSI matrices were studied; these are of interest due to their low fixed cost in hardware implementation. In this paper, we present results on the exhaustive search for (recursive) MDS matrices over GL(4, F_2). Specifically, circulant MDS matrices of order 4, 5, 6, 7, 8; Hadamard MDS matrices of order 4, 8; recursive MDS matrices from companion matrices of order 4; and recursive MDS matrices from sparse DSI matrices of order 4, 5, 6, 7, 8 are considered. Note that the exhaustive search is impractical with a naive approach. We first use some linear algebra tools to restrict the search to a smaller domain and then apply some space-time trade-off techniques to get the solutions. From the set of solutions in the restricted domain, one can easily generate all the solutions in the full domain. From the experimental results, we can see the (non-)existence of (involutory) MDS matrices for the choices mentioned above. In particular, over GL(4, F_2), we provide companion matrices of order 4 that yield involutory MDS matrices, circulant MDS matrices of order 8, and establish the nonexistence of involutory circulant MDS matrices of order 6, 8, circulant MDS matrices of order 7, sparse DSI matrices of order 4 that yield involutory MDS matrices, and sparse DSI matrices of order 5, 6, 7, 8 that yield MDS matrices. To the best of our knowledge, these results were not known before. For the choices mentioned above, if such MDS matrices exist, we provide base sets of MDS matrices, from which all the MDS matrices with the least cost (with respect to d-XOR and s-XOR counts) can be obtained. We also take this opportunity to present some results on the search for sparse DSI matrices over finite fields that yield MDS matrices. We establish that there is no sparse DSI matrix S of order 8 over GF(2^8) such that S^8 is MDS.
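For intuition on what is being searched for, the following brute-force check of ours computes the word-wise branch number of a binary matrix. It is feasible only for toy sizes; the algebraic restrictions mentioned above are exactly what makes the real search tractable. The example is a 2 × 2 matrix over GF(4), which is MDS since its branch number reaches k + 1 = 3:

```python
def branch_number(rows, m):
    """Word-wise branch number of a binary matrix acting on k words of m
    bits: min over nonzero x of wt(x) + wt(Mx), where wt counts nonzero
    m-bit words. rows[i] is the bitmask of columns set in row i."""
    n = len(rows)
    k = n // m
    mask = (1 << m) - 1
    wt = lambda v: sum(1 for i in range(k) if (v >> (m * i)) & mask)
    def mul(v):
        out = 0
        for i, row in enumerate(rows):
            if bin(row & v).count("1") & 1:   # parity of <row, v>
                out |= 1 << i
        return out
    return min(wt(x) + wt(mul(x)) for x in range(1, 1 << n))

# Block matrix [[I, A], [A, I]] with A = multiplication by a generator
# of GF(4) (alpha^2 = alpha + 1), written as a 4x4 binary matrix.
I2, A = [[1, 0], [0, 1]], [[0, 1], [1, 1]]
rows = []
for block_row in ([I2, A], [A, I2]):
    for r in range(2):
        rows.append(sum(blk[r][c] << (2 * bc + c)
                        for bc, blk in enumerate(block_row)
                        for c in range(2)))
assert branch_number(rows, 2) == 3   # k = 2 words -> branch number 3 is MDS
```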
2018
CRYPTO
TinyKeys: A New Approach to Efficient Multi-Party Computation 📺
We present a new approach to designing concretely efficient MPC protocols with semi-honest security in the dishonest majority setting. Motivated by the fact that within the dishonest majority setting the efficiency of most practical protocols does not depend on the number of honest parties, we investigate how to construct protocols which improve in efficiency as the number of honest parties increases. Our central idea is to take a protocol which is secure for n − 1 corruptions and modify it to use short symmetric keys, with the aim of basing security on the concatenation of all honest parties’ keys. This results in a more efficient protocol tolerating fewer corruptions, whilst also introducing an LPN-style syndrome decoding assumption. We first apply this technique to a modified version of the semi-honest GMW protocol, using OT extension with short keys, to improve the efficiency of standard GMW with fewer corruptions. We also obtain more efficient constant-round MPC, using BMR-style garbled circuits with short keys, and present an implementation of the online phase of this protocol. Our techniques start to improve upon existing protocols when there are around n = 20 parties with h = 6 honest parties, and as these increase we obtain up to a 13× reduction (for n = 400, h = 120) in communication complexity for our GMW variant, compared with the best-known GMW-based protocol modified to use the same threshold.
2018
CRYPTO
Two-Round Multiparty Secure Computation Minimizing Public Key Operations 📺
We show new constructions of semi-honest and malicious two-round multiparty secure computation protocols using only (a fixed) poly(n, λ) invocations of a two-round oblivious transfer protocol (which use expensive public-key operations) and poly(λ, |C|) cheaper one-way function calls, where λ is the security parameter, n is the number of parties, and C is the circuit being computed. All previously known two-round multiparty secure computation protocols required poly(λ, |C|) expensive public-key operations.
2018
CRYPTO
Limits of Practical Sublinear Secure Computation 📺
Secure computations on big data call for protocols that have sublinear communication complexity in the input length. While fully homomorphic encryption (FHE) provides a general solution to the problem, employing it on a large scale is currently quite far from being practical. This is also the case for secure computation tasks that reduce to weaker forms of FHE such as “somewhat homomorphic encryption” or single-server private information retrieval (PIR). Quite unexpectedly, Aggarwal, Mishra, and Pinkas (Eurocrypt 2004), Brickell and Shmatikov (Asiacrypt 2005), and Shelat and Venkitasubramaniam (Asiacrypt 2015) have shown that in several natural instances of secure computation on big data, there are practical sublinear communication protocols that only require sublinear local computation and minimize the use of expensive public-key operations. This raises the question of whether similar protocols exist for other natural problems. In this paper we put forward a framework for separating “practical” sublinear protocols from “impractical” ones, and establish a methodology for identifying “provably hard” big-data problems that do not admit practical protocols. This is akin to the use of NP-completeness to separate hard algorithmic problems from easy ones. We show that while the previous protocols of Aggarwal et al., Brickell and Shmatikov, and Shelat and Venkitasubramaniam are indeed classified as being “practical” in this framework, slight variations of the problems they solve and other natural computational problems on big data are hard. Our negative results are established by showing that the problem at hand is “PIR-hard” in the sense that any secure protocol for the problem implies PIR on a large database. This imposes a barrier on the local computational cost of secure protocols for the problem. We also identify a new natural relaxation of PIR that we call semi-PIR, which is useful for establishing “intermediate hardness” of several practically motivated secure computation tasks. We show that semi-PIR implies slightly sublinear PIR via an adaptive black-box reduction and that ruling out a stronger black-box reduction would imply a major breakthrough in complexity theory. We also establish information-theoretic separations between semi-PIR and PIR, showing that some problems that we prove to be semi-PIR-hard are not PIR-hard.
2018
CRYPTO
Limits on the Power of Garbling Techniques for Public-Key Encryption 📺
Understanding whether public-key encryption can be based on one-way functions is a fundamental open problem in cryptography. The seminal work of Impagliazzo and Rudich [STOC’89] shows that black-box constructions of public-key encryption from one-way functions are impossible. However, this impossibility result leaves open the possibility of using non-black-box techniques for achieving this goal. One of the most powerful classes of non-black-box techniques, which can be based on one-way functions (OWFs) alone, is Yao’s garbled circuit technique [FOCS’86]. As for the non-black-box power of this technique, the recent work of Döttling and Garg [CRYPTO’17] shows that the use of garbling allows us to circumvent known black-box barriers in the context of identity-based encryption. We prove that garbling of circuits that have OWF (or even random oracle) gates in them is insufficient for obtaining public-key encryption. Additionally, we show that this model also captures (non-interactive) zero-knowledge proofs for relations with OWF gates. This indicates that currently known OWF-based non-black-box techniques are perhaps insufficient for realizing public-key encryption.
2018
CRYPTO
Optimizing Authenticated Garbling for Faster Secure Two-Party Computation 📺
Wang et al. (CCS 2017) recently proposed a protocol for malicious secure two-party computation that represents the state-of-the-art with regard to concrete efficiency in both the single-execution and amortized settings, with or without preprocessing. We show here several optimizations of their protocol that result in a significant improvement in the overall communication and running time. Specifically:
– We show how to make the “authenticated garbling” at the heart of their protocol compatible with the half-gate optimization of Zahur et al. (Eurocrypt 2015). We also show how to avoid sending an information-theoretic MAC for each garbled row. These two optimizations give up to a 2.6× improvement in communication, and make the communication of the online phase essentially equivalent to that of state-of-the-art semi-honest secure computation.
– We show various optimizations to their protocol for generating AND triples that, overall, result in a 1.5× improvement in the communication and a 2× improvement in the computation for that step.
2018
CRYPTO
Amortized Complexity of Information-Theoretically Secure MPC Revisited 📺
A fundamental and widely applied paradigm due to Franklin and Yung (STOC 1992) on Shamir-secret-sharing based general n-player MPC shows how one may trade the adversary threshold t against amortized communication complexity, by using a so-called packed version of Shamir’s scheme. For, e.g., the BGW protocol (with active security), this trade-off means that if t + 2k − 2 < n/3, then k parallel evaluations of the same arithmetic circuit on different inputs can be performed at the overall cost corresponding to a single BGW execution. In this paper we propose a novel paradigm for amortized MPC that offers a different trade-off, namely with the size of the field of the circuit which is securely computed, instead of the adversary threshold. Thus, unlike the Franklin-Yung paradigm, this leaves the adversary threshold unchanged. Therefore, for instance, this paradigm may yield constructions enjoying the maximal adversary threshold ⌊(n − 1)/3⌋ in the BGW model (secure channels, perfect security, active adversary, synchronous communication). Our idea is to compile an MPC for a circuit over an extension field to a parallel MPC of the same circuit but with inputs defined over its base field and with the same adversary threshold. Key technical handles are our notion of reverse multiplication-friendly embeddings (RMFE) and our proof, by algebraic-geometric means, that these are constant-rate, as well as efficient auxiliary protocols for creating “subspace-randomness” with good amortized complexity. In the BGW model, we show that the latter can be constructed by combining our tensored-up linear secret sharing with protocols based on hyper-invertible matrices à la Beerliova-Hirt (or variations thereof). Along the way, we suggest alternatives for hyper-invertible matrices with the same functionality but which can be defined over a large enough constant-size field, which we believe is of independent interest. As a demonstration of the merits of the novel paradigm, we show that, in the BGW model and with an optimal adversary threshold ⌊(n − 1)/3⌋, it is possible to securely compute a binary circuit with amortized complexity O(n) bits per gate per instance. Known results would give n log n bits instead. By combining our result with the Franklin-Yung paradigm, and assuming a sub-optimal adversary (i.e., an arbitrarily small ε > 0 fraction below 1/3), this is improved to O(1) bits instead of O(n).
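The Franklin-Yung packing that this trade-off refers to is easy to sketch. Below is a minimal, non-secure toy of ours (single machine, no networking) showing how k secrets ride one polynomial of degree t + k − 1, so that any t + k shares determine all of them:

```python
from random import randrange

P = 2 ** 61 - 1   # prime field for the toy

def interpolate(points, at):
    """Lagrange interpolation over GF(P), evaluated at `at`."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * ((at - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def packed_share(secrets, t, n):
    """Embed k secrets at points -1..-k on a random degree-(t+k-1)
    polynomial; shares are evaluations at 1..n. Any t shares reveal
    nothing; any t+k shares determine the polynomial."""
    base = [((-j - 1) % P, s) for j, s in enumerate(secrets)]
    base += [(n + 1 + j, randrange(P)) for j in range(t)]  # fresh randomness
    return [(i, interpolate(base, i)) for i in range(1, n + 1)]

def packed_open(shares, k):
    return [interpolate(shares, (-j - 1) % P) for j in range(k)]

secrets, t, n = [7, 11, 13], 2, 8
shares = packed_share(secrets, t, n)
assert packed_open(shares[:t + len(secrets)], len(secrets)) == secrets
```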
2018
CRYPTO
Private Circuits: A Modular Approach 📺
We consider the problem of protecting general computations against constant-rate random leakage. That is, the computation is performed by a randomized boolean circuit that maps a randomly encoded input to a randomly encoded output, such that even if the value of every wire is independently leaked with some constant probability p > 0, the leakage reveals essentially nothing about the input. In this work we provide a conceptually simple, modular approach for solving the above problem, providing a simpler and self-contained alternative to previous constructions of Ajtai (STOC 2011) and Andrychowicz et al. (Eurocrypt 2016). We also obtain several extensions and generalizations of this result. In particular, we show that for every leakage probability p < 1, there is a finite basis B such that leakage-resilient computation with leakage probability p can be realized using circuits over the basis B. We obtain similar positive results for the stronger notion of leakage tolerance, where the input is not encoded, but the leakage from the entire computation can be simulated given random p′-leakage of input values alone, for any p < p′ < 1. Finally, we complement this by a negative result, showing that for every basis B there is some leakage probability p < 1 such that for any p′ < 1, leakage tolerance as above cannot be achieved in general. We show that our modular approach is also useful for protecting computations against worst-case leakage. In this model, we require that leakage of any t (adversarially chosen) wires reveal nothing about the input. By combining our construction with a previous derandomization technique of Ishai et al. (ICALP 2013), we show that security in this setting can be achieved with O(t^(1+ε)) random bits, for every constant ε > 0. This (near-optimal) bound significantly improves upon previous constructions that required more than t^3 random bits.
2018
CRYPTO
A New Public-Key Cryptosystem via Mersenne Numbers 📺
In this work, we propose a new public-key cryptosystem whose security is based on the computational intractability of the following problem: Given a Mersenne number p = 2^n − 1, where n is a prime, a positive integer h, and two n-bit integers T, R, decide whether there exist n-bit integers F, G each of Hamming weight less than h such that T = F·R + G modulo p.
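An instance of this problem is straightforward to generate, which makes the hardness assumption easy to state in code. The sketch below is ours; the parameters n and h are illustrative placeholders, not the scheme's actual proposal:

```python
from random import randrange, sample

def sparse(n, weight):
    """Uniform n-bit integer of the given Hamming weight."""
    return sum(1 << i for i in sample(range(n), weight))

def instance(n=1279, h=17):
    """Sample (R, T) with T = F*R + G mod p, p = 2^n - 1 (n prime),
    and wt(F), wt(G) < h. Deciding whether such F, G exist for a given
    (R, T) is the problem the cryptosystem's security rests on."""
    p = (1 << n) - 1
    R = randrange(p)
    F, G = sparse(n, h - 1), sparse(n, h - 1)
    T = (F * R + G) % p
    return R, T, (F, G)   # (F, G) is the planted witness

R, T, (F, G) = instance()
assert T == (F * R + G) % ((1 << 1279) - 1)
```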
2018
CRYPTO
Fast Homomorphic Evaluation of Deep Discretized Neural Networks 📺
The rise of machine learning as a service multiplies scenarios where one faces a privacy dilemma: either sensitive user data must be revealed to the entity that evaluates the cognitive model (e.g., in the Cloud), or the model itself must be revealed to the user so that the evaluation can take place locally. Fully Homomorphic Encryption (FHE) offers an elegant way to reconcile these conflicting interests in the Cloud-based scenario and also preserve non-interactivity. However, due to the inefficiency of existing FHE schemes, most applications prefer to use Somewhat Homomorphic Encryption (SHE), where the complexity of the computation to be performed has to be known in advance, and the efficiency of the scheme depends on this global complexity. In this paper, we present a new framework for homomorphic evaluation of neural networks, which we call FHE–DiNN, whose complexity is strictly linear in the depth of the network and whose parameters can be set beforehand. To obtain this scale-invariance property, we rely heavily on the bootstrapping procedure. We refine the recent FHE construction by Chillotti et al. (ASIACRYPT 2016) in order to increase the message space and apply the sign function (which we use to activate the neurons in the network) during the bootstrapping. We derive some empirical results, using the TFHE library as a starting point, and classify encrypted images from the MNIST dataset with more than 96% accuracy in less than 1.7 s. Finally, as a side contribution, we analyze and introduce some variations to the bootstrapping technique of Chillotti et al. that offer an improvement in efficiency at the cost of increased storage requirements.
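The discretized neurons such a network evaluates are simple objects: integer weighted sums followed by a sign activation. A plaintext sketch of ours (in the actual scheme the sum is computed homomorphically and the sign is applied during bootstrapping):

```python
def dinn_neuron(weights, inputs):
    """One discretized neuron: sign of an integer inner product.
    Everything here is in the clear; the FHE scheme evaluates the same
    function over encrypted inputs."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= 0 else -1

# A toy layer mapping 4 signed inputs to 2 signed outputs.
layer = [[1, -2, 3, -1], [-1, 1, 1, 1]]
x = [1, -1, 1, -1]
print([dinn_neuron(w, x) for w in layer])   # -> [1, -1]
```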
2018
CRYPTO
On the Round Complexity of OT Extension 📺
We show that any OT extension protocol based on one-way functions (or more generally any symmetric-key primitive) either requires an additional round compared to the base OTs or must make a non-black-box use of one-way functions. This result also holds in the semi-honest setting or in the case of certain setup models such as the common random string model. This implies that OT extension in any secure computation protocol must come at the price of an additional round of communication or the non-black-box use of symmetric key primitives. Moreover, we observe that our result is tight in the sense that positive results can indeed be obtained using non-black-box techniques or at the cost of one additional round of communication.
2018
CRYPTO
Fast Large-Scale Honest-Majority MPC for Malicious Adversaries 📺
Protocols for secure multiparty computation enable a set of parties to compute a function of their inputs without revealing anything but the output. The security properties of the protocol must be preserved in the presence of adversarial behavior. The two classic adversary models considered are semi-honest (where the adversary follows the protocol specification but tries to learn more than allowed by examining the protocol transcript) and malicious (where the adversary may follow any arbitrary attack strategy). Protocols for semi-honest adversaries are often far more efficient, but in many cases the security guarantees are not strong enough. In this paper, we present new protocols for securely computing any functionality represented by an arithmetic circuit. We utilize a new method for verifying that the adversary does not cheat, which yields a cost of just twice that of semi-honest protocols in some settings. Our protocols are information-theoretically secure in the presence of malicious adversaries, assuming an honest majority. We present protocol variants for small and large fields, and show how to efficiently instantiate them based on replicated secret sharing and Shamir sharing. As with previous works in this area aiming to achieve high efficiency, our protocol is secure with abort and does not achieve fairness, meaning that the adversary may receive output while the honest parties do not. We implemented our protocol and ran experiments for different numbers of parties, different network configurations and different circuit depths. Our protocol significantly outperforms the previous best for this setting (Lindell and Nof, CCS 2017); for a large number of parties, our implementation runs almost an order of magnitude faster than theirs.
2018
CRYPTO
Non-Malleable Codes for Partial Functions with Manipulation Detection 📺
Non-malleable codes were introduced by Dziembowski, Pietrzak and Wichs (ICS ’10), and their main application is the protection of cryptographic devices against tampering attacks on memory. In this work, we initiate a comprehensive study of non-malleable codes for the class of partial functions, which read/write an arbitrary subset of codeword bits with specific cardinality. Our constructions are efficient in terms of information rate, while allowing the attacker to access asymptotically almost the entire codeword. In addition, they satisfy a notion which is stronger than non-malleability, that we call non-malleability with manipulation detection, guaranteeing that any modified codeword decodes to either the original message or to ⊥. Finally, our primitive implies All-Or-Nothing Transforms (AONTs), and as a result our constructions yield efficient AONTs under standard assumptions (only one-way functions), which, to the best of our knowledge, was an open question until now. In addition to this, we present a number of additional applications of our primitive in tamper resilience.
2018
CRYPTO
Continuously Non-Malleable Codes in the Split-State Model from Minimal Assumptions 📺
At ICS 2010, Dziembowski, Pietrzak and Wichs introduced the notion of non-malleable codes, a weaker form of error-correcting codes guaranteeing that the decoding of a tampered codeword either corresponds to the original message or to an unrelated value. The last few years have established non-malleable codes as one of the recently invented cryptographic primitives with the highest impact and potential, with very challenging open problems and applications. In this work, we focus on so-called continuously non-malleable codes in the split-state model, as proposed by Faust et al. (TCC 2014), where a codeword is made of two shares and an adaptive adversary makes a polynomial number of attempts to tamper with the target codeword, where each attempt is allowed to modify the two shares independently (yet arbitrarily). Achieving continuous non-malleability in the split-state model has so far been very hard. Indeed, the only known constructions require strong setup assumptions (i.e., the existence of a common reference string) and strong complexity-theoretic assumptions (i.e., the existence of non-interactive zero-knowledge proofs and collision-resistant hash functions). As our main result, we construct a continuously non-malleable code in the split-state model without setup assumptions, requiring only one-to-one one-way functions (i.e., essentially optimal computational assumptions). Our result introduces several new ideas that make progress towards understanding continuous non-malleability, and shows interesting connections with protocol-design and proof-approach techniques used in other contexts (e.g., look-ahead simulation in zero-knowledge proofs, non-malleable commitments, and leakage resilience).
2018
CRYPTO
Non-Interactive Zero-Knowledge Proofs for Composite Statements 📺
The two most common ways to design non-interactive zero-knowledge (NIZK) proofs are based on Sigma protocols and QAP-based SNARKs. The former is highly efficient for proving algebraic statements while the latter is superior for arithmetic representations. Motivated by applications such as privacy-preserving credentials and privacy-preserving audits in cryptocurrencies, we study the design of NIZKs for composite statements that compose algebraic and arithmetic statements in arbitrary ways. Specifically, we provide a framework for proving statements that consist of ANDs, ORs and function compositions of a mix of algebraic and arithmetic components. This allows us to explore the full spectrum of trade-offs between proof size, prover cost, and CRS size/generation cost. This leads to proofs for statements of the form: knowledge of x such that SHA(g^x) = y for some public y, where the prover’s work is 500 times fewer exponentiations compared to a QAP-based SNARK at the cost of increasing the proof size to 2404 group and field elements. In application to anonymous credentials, our techniques result in 8 times fewer exponentiations for the prover at the cost of increasing the proof size to 298 elements.
2018
CRYPTO
From Laconic Zero-Knowledge to Public-Key Cryptography 📺
Since its inception, public-key encryption (PKE) has been one of the main cornerstones of cryptography. A central goal in cryptographic research is to understand the foundations of public-key encryption and in particular, base its existence on a natural and generic complexity-theoretic assumption. An intriguing candidate for such an assumption is the existence of a cryptographically hard language in NP with a zero-knowledge protocol. In this work we prove that public-key encryption can be based on the foregoing assumption, as long as the (honest) prover in the zero-knowledge protocol is efficient and laconic. That is, messages that the prover sends should be efficiently computable (given the witness) and short (i.e., of sufficiently sub-logarithmic length). Actually, our result is stronger and only requires the protocol to be zero-knowledge for an honest verifier and sound against computationally bounded cheating provers. Languages in NP with such laconic zero-knowledge protocols are known from a variety of computational assumptions (e.g., Quadratic Residuosity, Decisional Diffie-Hellman, Learning with Errors, etc.). Thus, our main result can also be viewed as giving a unifying framework for constructing PKE which, in particular, captures many of the assumptions that were already known to yield PKE. We also show several extensions of our result. First, that a certain weakening of our assumption on laconic zero-knowledge is actually equivalent to PKE, thereby giving a complexity-theoretic characterization of PKE. Second, a mild strengthening of our assumption also yields a (2-message) oblivious transfer protocol.
2018
CRYPTO
Updatable and Universal Common Reference Strings with Applications to zk-SNARKs 📺
By design, existing (pre-processing) zk-SNARKs embed a secret trapdoor in a relation-dependent common reference string (CRS). The trapdoor is exploited by a (hypothetical) simulator to prove the scheme is zero knowledge, and the secret-dependent structure facilitates a linear-size CRS and linear-time prover computation. If known by a real party, however, the trapdoor can be used to subvert the security of the system. The structured CRS that makes zk-SNARKs practical also makes deploying zk-SNARKs problematic, as it is difficult to argue why the trapdoor would not be available to the entity responsible for generating the CRS. Moreover, for pre-processing zk-SNARKs a new trusted CRS needs to be computed every time the relation is changed. In this paper, we address both issues by proposing a model where a number of users can update a universal CRS. The updatable CRS model guarantees security if at least one of the users updating the CRS is honest. We provide both a negative result, by showing that zk-SNARKs with private secret-dependent polynomials in the CRS cannot be updatable, and a positive result by constructing a zk-SNARK based on a CRS consisting only of secret-dependent monomials. The CRS is of quadratic size, is updatable, and is universal in the sense that it can be specialized into one or more relation-dependent CRSs of linear size with linear-time prover computation.
2018
CRYPTO
A Simple Obfuscation Scheme for Pattern-Matching with Wildcards 📺
We give a simple and efficient method for obfuscating pattern matching with wildcards. In other words, we construct a way to check an input against a secret pattern, which is described in terms of prescribed values interspersed with unconstrained “wildcard” slots. As long as the support of the pattern is sufficiently sparse and the pattern itself is chosen from an appropriate distribution, we prove that a polynomial-time adversary cannot find a matching input, except with negligible probability. We rely upon the generic group heuristic (in a regular group, with no multilinearity). Previous work [9, 10, 32] provided less efficient constructions based on multilinear maps or LWE.
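For clarity, the functionality being obfuscated is just the following plaintext check (our sketch; the actual construction encodes the fixed slots in group elements so that the pattern itself stays hidden):

```python
def matches(pattern, x):
    """pattern[i] is a required bit, or None for a wildcard slot."""
    return all(p is None or p == b for p, b in zip(pattern, x))

pattern = [1, None, 0, None]          # i.e. the pattern 1*0*
assert matches(pattern, [1, 1, 0, 0])
assert not matches(pattern, [0, 1, 0, 0])
```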
2018
CRYPTO
On the Complexity of Compressing Obfuscation 📺
Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct. In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple-to-describe primitives that is known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (PKC 2016) and Bitansky et al. (TCC 2016) by allowing for a broader regime of parameters. We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation. First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives. Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions. Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.
2018
CRYPTO
Quantum FHE (Almost) As Secure As Classical 📺
Fully homomorphic encryption (FHE) schemes allow one to apply arbitrary efficient computation to encrypted data without decrypting it first. In Quantum FHE (QFHE) we may want to apply an arbitrary quantumly efficient computation to (classical or quantum) encrypted data. We present a QFHE scheme with classical key generation (and classical encryption and decryption if the encrypted message is itself classical) with properties comparable to classical FHE. Security relies on the hardness of the learning with errors (LWE) problem with polynomial modulus, which translates to the worst-case hardness of approximating short vector problems in lattices to within a polynomial factor. Up to polynomial factors, this matches the best known assumption for classical FHE. Similarly to the classical setting, relying on LWE alone only implies leveled QFHE (where the public key length depends linearly on the maximal allowed evaluation depth). An additional circular security assumption is required to support completely unbounded depth. Interestingly, our circular security assumption is the same assumption that is made to achieve unbounded-depth multi-key classical FHE. Technically, we rely on the outline of Mahadev (arXiv 2017), which achieves this functionality by relying on a super-polynomial LWE modulus and on a new circular security assumption. We observe a connection between the functionality of evaluating quantum gates and the circuit privacy property of classical homomorphic encryption. While this connection is not sufficient to imply QFHE by itself, it leads us to a path that ultimately allows using classical FHE schemes with polynomial modulus towards constructing QFHE with the same modulus.
2018
CRYPTO
IND-CCA-Secure Key Encapsulation Mechanism in the Quantum Random Oracle Model, Revisited 📺
With the gradual progress of NIST’s post-quantum cryptography standardization, the Round-1 KEM proposals have been posted for the public to discuss and evaluate. Among the IND-CCA-secure KEM constructions, mostly, an IND-CPA-secure (or OW-CPA-secure) public-key encryption (PKE) scheme is first introduced, then some generic transformations are applied to it. All these generic transformations are constructed in the random oracle model (ROM). To fully assess the post-quantum security, security analysis in the quantum random oracle model (QROM) is preferred. However, current works either lack a QROM security proof or just follow Targhi and Unruh’s proof technique (TCC 2016-B) and modify the original transformations by adding an additional hash to the ciphertext to achieve QROM security. In this paper, by using a novel proof technique, we present QROM security reductions for two widely used generic transformations without suffering any ciphertext overhead. Meanwhile, the security bounds are much tighter than the ones derived by utilizing Targhi and Unruh’s proof technique. Thus, our QROM security proofs not only provide a solid post-quantum security guarantee for NIST Round-1 KEM schemes, but also simplify the constructions and reduce the ciphertext sizes. We also provide QROM security reductions for the Hofheinz-Hövelmanns-Kiltz modular transformations (TCC 2017), which can help to obtain a variety of combined transformations with different requirements and properties.
2018
CRYPTO
Pseudorandom Quantum States 📺
We propose the concept of pseudorandom quantum states, which appear random to any quantum polynomial-time adversary. It offers a computational approximation to perfectly random quantum states analogous in spirit to cryptographic pseudorandom generators, as opposed to statistical notions of quantum pseudorandomness that have been studied previously, such as quantum t-designs analogous to t-wise independent distributions. Under the assumption that quantum-secure one-way functions exist, we present efficient constructions of pseudorandom states, showing that our definition is achievable. We then prove several basic properties of pseudorandom states, which show the utility of our definition. First, we show a cryptographic no-cloning theorem: no efficient quantum algorithm can create additional copies of a pseudorandom state, when given polynomially-many copies as input. Second, as expected for random quantum states, we show that pseudorandom quantum states are highly entangled on average. Finally, as a main application, we prove that any family of pseudorandom states naturally gives rise to a private-key quantum money scheme.
2018
CRYPTO
Cryptanalyses of Branching Program Obfuscations over GGH13 Multilinear Map from the NTRU Problem 📺
In this paper, we propose cryptanalyses of all existing indistinguishability obfuscation (iO) candidates based on branching programs (BP) over the GGH13 multilinear map, for all recommended parameter settings. To achieve this, we introduce two novel techniques, program converting using an NTRU solver and matrix zeroizing, which can be applied to a wider range of obfuscation constructions and BPs than previous attacks. We then prove that, for the suggested parameters, the existing general-purpose BP obfuscations over GGH13 do not have the desired security. In particular, the first candidate indistinguishability obfuscation with input-unpartitionable branching programs (FOCS 2013) and the recent BP obfuscation (TCC 2016) are not secure against our attack when they use GGH13 with recommended parameters; previously, no polynomial-time attack was known for these cases. Our attack shows that the lattice dimension of GGH13 must be set much larger than previously thought in order to maintain security. More precisely, the underlying lattice dimension of GGH13 should be set to $$n=\tilde{\varTheta }( \kappa ^2 \lambda )$$ to rule out attacks from the subfield algorithm for NTRU, where $$\kappa $$ is the multilinearity level and $$\lambda $$ the security parameter.
2018
CRYPTO
An Optimal Distributed Discrete Log Protocol with Applications to Homomorphic Secret Sharing 📺
The distributed discrete logarithm (DDL) problem was introduced by Boyle et al. at CRYPTO 2016. A protocol solving this problem was the main tool used in the share conversion procedure of their homomorphic secret sharing (HSS) scheme, which allows non-interactive evaluation of branching programs among two parties over shares of secret inputs. Let g be a generator of a multiplicative group $$\mathbb {G}$$. Given a random group element $$g^{x}$$ and an unknown integer $$b \in [-M,M]$$ for a small M, two parties A and B (that cannot communicate) successfully solve DDL if $$A(g^{x}) - B(g^{x+b}) = b$$. Otherwise, the parties err. In the DDL protocol of Boyle et al., A and B run in time T and have error probability that is roughly linear in M/T. Since it has a significant impact on the HSS scheme’s performance, a major open problem raised by Boyle et al. was to reduce the error probability as a function of T. In this paper we devise a new DDL protocol that substantially reduces the error probability to $$O(M \cdot T^{-2})$$. Our new protocol improves the asymptotic evaluation time complexity of the HSS scheme by Boyle et al. on branching programs of size S from $$O(S^2)$$ to $$O(S^{3/2})$$. We further show that our protocol is optimal up to a constant factor for all relevant cryptographic group families, unless one can solve the discrete logarithm problem in a short interval of length R in time $$o(\sqrt{R})$$. Our DDL protocol is based on a new type of random walk that is composed of several iterations in which the expected step length gradually increases. We believe that this random walk is of independent interest and will find additional applications.
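To make the DDL setting concrete, here is a minimal Python sketch of the baseline "distinguished points" protocol of Boyle et al. (whose error is roughly M/T), not the improved random walk introduced in this paper; the group, hash, and parameters are illustrative toys.

```python
import hashlib

P = 2**61 - 1          # toy prime modulus (no real security)
G = 3                  # toy generator
T = 50_000             # walk budget (the running-time bound T)
D = 10                 # h is "distinguished" if its hash has D trailing zero bits

def distinguished(h: int) -> bool:
    digest = hashlib.sha256(h.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[-2:], "big") % (1 << D) == 0

def steps_to_distinguished(h: int) -> int:
    """Walk h -> h*G until the first distinguished element, counting steps."""
    steps = 0
    while not distinguished(h) and steps < T:
        h = (h * G) % P
        steps += 1
    return steps

# A holds g^x, B holds g^(x+b); they never communicate. Both walks stop at
# the same distinguished point, so the step counts differ by exactly b --
# unless a distinguished point falls inside the length-b gap, an event of
# probability about b/2^D, which is exactly the protocol's error.
x, b = 1234567, 7
gx = pow(G, x, P)
gxb = (gx * pow(G, b, P)) % P
print(steps_to_distinguished(gx) - steps_to_distinguished(gxb))  # prints 7 w.h.p.
```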
2018
CRYPTO
Must the Communication Graph of MPC Protocols be an Expander? 📺
Secure multiparty computation (MPC) on incomplete communication networks has been studied within two primary models: (1) where a partial network is fixed a priori, and thus corruptions can occur dependent on its structure, and (2) where edges in the communication graph are determined dynamically as part of the protocol. Whereas a rich literature has succeeded in mapping out the feasibility and limitations of graph structures supporting secure computation in the fixed-graph model (including strong classical lower bounds), these bounds do not apply in the latter dynamic-graph setting, which has recently seen exciting new results but remains relatively unexplored. In this work, we initiate a similar foundational study of MPC within the dynamic-graph model. As a first step, we investigate the property of graph expansion. All existing protocols (implicitly or explicitly) yield communication graphs which are expanders, but it is not clear whether this is inherent. Our results consist of two types:
Upper bounds: We demonstrate secure protocols whose induced communication graphs are not expanders, within a wide range of settings (computational, information theoretic, with low locality, and adaptive security), each assuming some form of input-independent setup.
Lower bounds: In the setting without setup and with adaptive corruptions, we demonstrate that for certain functionalities, no protocol can maintain a non-expanding communication graph against all adversarial strategies. Our lower bound relies only on protocol correctness (not privacy), and requires a surprisingly delicate argument.
2018
CRYPTO
Simplifying Game-Based Definitions 📺
Often the simplest way of specifying game-based cryptographic definitions is apparently barred because the adversary would have some trivial win. Disallowing or invalidating these wins can lead to complex or unconvincing definitions. We suggest a generic way around this difficulty. We call it indistinguishability up to correctness, or $$\text{IND}|\text{C}$$. Given games $$\text{G}$$ and $$\text{H}$$ and a correctness condition $$\text{C}$$ we define an advantage measure $$\mathbf{Adv}^{\text{indc}}_{\text{G},\text{H},\text{C}}$$ wherein $$\text{G}$$/$$\text{H}$$ distinguishing attacks are effaced to the extent that they are inevitable due to $$\text{C}$$. We formalize this in the language of oracle silencing, an alternative to exclusion-style and penalty-style definitions. We apply our ideas to a domain where game-based definitions have been cumbersome: stateful authenticated-encryption (sAE). We rework existing sAE notions and encompass new ones, like replay-free AE permitting a specified degree of out-of-order message delivery.
2018
CRYPTO
Combiners for Backdoored Random Oracles 📺
We formulate and study the security of cryptographic hash functions in the backdoored random-oracle (BRO) model, whereby a big brother designs a “good” hash function, but can also see arbitrary functions of its table via backdoor capabilities. This model captures intentional (and unintentional) weaknesses due to the existence of collision-finding or inversion algorithms, but goes well beyond them by allowing, for example, searches for structured preimages. The latter can easily break constructions that are secure under random inversions. BROs make the task of bootstrapping cryptographic hardness somewhat challenging. Indeed, with only a single arbitrarily backdoored function no hardness can be bootstrapped, as any construction can be inverted. However, when two (or more) independent hash functions are available, hardness emerges even with unrestricted and adaptive access to all backdoor oracles. At the core of our results lie new reductions from cryptographic problems to the communication complexities of various two-party tasks. Along the way we establish a communication complexity lower bound for set-intersection for cryptographically relevant ranges of parameters and distributions, and where set-disjointness can be easy.
2018
CRYPTO
On Distributional Collision Resistant Hashing 📺
Collision resistant hashing is a fundamental concept that is the basis for many of the important cryptographic primitives and protocols. Collision resistant hashing is a family of compressing functions such that no efficient adversary can find any collision given a random function in the family. In this work we study a relaxation of collision resistance called distributional collision resistance, introduced by Dubrov and Ishai (STOC ’06). This relaxation only guarantees that no efficient adversary, given a random function in the family, can sample a pair (x, y) where x is uniformly random and y is uniformly random conditioned on colliding with x. Our first result shows that distributional collision resistance can be based on the existence of multi-collision resistant hash functions (with no additional assumptions). Multi-collision resistance is another relaxation of collision resistance which guarantees that an efficient adversary cannot find any tuple of $$k>2$$ inputs that collide relative to a random function in the family. The construction is non-explicit, non-black-box, and yields an infinitely-often secure family. This partially resolves a question of Berman et al. (EUROCRYPT ’18). We further observe that in a black-box model such an implication (from multi-collision resistance to distributional collision resistance) does not exist. Our second result is a construction of a distributional collision resistant hash from the average-case hardness of SZK. Previously, this assumption was not known to imply any form of collision resistance (other than the ones implied by one-way functions).
2018
CRYPTO
Fast Distributed RSA Key Generation for Semi-honest and Malicious Adversaries 📺
We present two new, highly efficient protocols for securely generating a distributed RSA key pair in the two-party setting. One protocol is semi-honestly secure and the other maliciously secure. Both are constant-round, do not rely on any specific number-theoretic assumptions, and improve significantly over the state of the art by allowing a slight leakage (which we show does not affect security). For our maliciously secure protocol, our most significant improvement comes from executing most of the protocol in a “strong” semi-honest manner and then doing a single, light zero-knowledge argument of correct execution. We introduce other significant improvements as well. One such improvement comes from showing that certain, limited leakage does not compromise security, which allows us to use lightweight subprotocols. Another improvement, which may be of independent interest, comes in our approach for multiplying two large integers using OT, in the malicious setting, without being susceptible to a selective-failure attack. Finally, we implement our malicious protocol and show that its performance is an order of magnitude better than that of the best previous protocol, which provided only semi-honest security.
2018
CRYPTO
Trapdoor Functions from the Computational Diffie-Hellman Assumption 📺
Trapdoor functions (TDFs) are a fundamental primitive in cryptography. Yet, the current set of assumptions known to imply TDFs is surprisingly limited, when compared to public-key encryption. We present a new general approach for constructing TDFs. Specifically, we give a generic construction of TDFs from any Chameleon Encryption (Döttling and Garg [CRYPTO’17]) satisfying a novel property which we call recyclability. By showing how to adapt current Computational Diffie-Hellman (CDH) based constructions of chameleon encryption to yield recyclability, we obtain the first construction of TDFs with security proved under the CDH assumption. While TDFs from the Decisional Diffie-Hellman (DDH) assumption were previously known, the possibility of basing them on CDH had remained open for more than 30 years.
2018
CRYPTO
Round-Optimal Secure Multiparty Computation with Honest Majority 📺
We study the exact round complexity of secure multiparty computation (MPC) in the honest-majority setting. We construct several round-optimal n-party protocols, tolerating any $$t<\frac{n}{2}$$ corruptions.
1. Security with abort: We give the first construction of two-round MPC for general functions that achieves security with abort against malicious adversaries in the plain model. The security of our protocol relies only on one-way functions.
2. Guaranteed output delivery: We also construct protocols that achieve security with guaranteed output delivery: (i) Against fail-stop adversaries, we construct two-round MPC either in the (bare) public-key infrastructure model with no additional assumptions, or in the plain model assuming two-round semi-honest oblivious transfer. In three rounds, however, we can achieve security assuming only one-way functions. (ii) Against malicious adversaries, we construct three-round MPC in the plain model, assuming public-key encryption and Zaps.
Previously, such protocols were only known based on specific learning assumptions and required the use of common reference strings. All of our results are obtained via general compilers that may be of independent interest.
2018
CRYPTO
On the Exact Round Complexity of Secure Three-Party Computation 📺
We settle the exact round complexity of three-party computation (3PC) in the honest-majority setting, for a range of security notions: selective abort, unanimous abort, fairness, and guaranteed output delivery. Selective abort security, the weakest in the lot, allows the corrupt parties to selectively deprive some of the honest parties of the output. In the mildly stronger version of unanimous abort, either all or none of the honest parties receive the output. Fairness implies that the corrupted parties receive their output only if all honest parties receive output, and lastly, the strongest notion of guaranteed output delivery implies that the corrupted parties cannot prevent honest parties from receiving their output. It is folklore that guaranteed output delivery implies fairness, which implies unanimous abort, which in turn implies selective abort. We focus on two network settings: pairwise-private channels without and with a broadcast channel. In the minimal setting of pairwise-private channels, 3PC with selective abort is known to be feasible in just two rounds, while guaranteed output delivery is infeasible to achieve irrespective of the number of rounds. Settling the quest for the exact round complexity of 3PC in this setting, we show that three rounds are necessary and sufficient for unanimous abort and fairness. Extending our study to the setting with an additional broadcast channel, we show that while unanimous abort is achievable in just two rounds, three rounds are necessary and sufficient for fairness and guaranteed output delivery. Our lower bound results extend to any number of parties in the honest-majority setting and imply tightness of several known constructions. The fundamental concept of garbled circuits underlies all our upper bounds. Concretely, our constructions involve transmitting and evaluating only a constant number of garbled circuits. Assumption-wise, our constructions rely on injective (one-to-one) one-way functions.
2018
CRYPTO
Promise Zero Knowledge and Its Applications to Round Optimal MPC 📺
We devise a new partitioned simulation technique for MPC where the simulator uses different strategies for simulating the view of aborting adversaries and non-aborting adversaries. The protagonist of this technique is a new notion of promise zero knowledge (ZK) where the ZK property only holds against non-aborting verifiers. We show how to realize promise ZK in three rounds in the simultaneous-message model assuming polynomially hard DDH (or QR or $$N^{th}$$-Residuosity). We demonstrate the following applications of our new technique:
We construct the first round-optimal (i.e., four-round) MPC protocol for general functions based on polynomially hard DDH (or QR or $$N^{th}$$-Residuosity).
We further show how to overcome the four-round barrier for MPC by constructing a three-round protocol for “list coin-tossing” – a slight relaxation of coin-tossing that suffices for most conceivable applications – based on polynomially hard DDH (or QR or $$N^{th}$$-Residuosity). This result generalizes to randomized input-less functionalities.
Previously, four-round MPC protocols required sub-exponential-time hardness assumptions, and no multi-party three-round protocols were known for any relaxed security notions with polynomial-time simulation against malicious adversaries. In order to base security on polynomial-time standard assumptions, we also rely upon a leveled rewinding security technique that can be viewed as a polynomial-time alternative to leveled complexity leveraging for achieving “non-malleability” across different primitives.
2018
CRYPTO
Round-Optimal Secure Multi-Party Computation 📺
Secure multi-party computation (MPC) is a central cryptographic task that allows a set of mutually distrustful parties to jointly compute some function of their private inputs, where security should hold in the presence of a malicious adversary that can corrupt any number of parties. Despite extensive research, the precise round complexity of this “standard-bearer” cryptographic primitive is unknown. Recently, Garg, Mukherjee, Pandey and Polychroniadou (EUROCRYPT 2016) demonstrated that the round complexity of any MPC protocol relying on black-box proofs of security in the plain model must be at least four. Following this work, Ananth, Choudhuri and Jain (CRYPTO 2017) and, independently, Brakerski, Halevi, and Polychroniadou (TCC 2017) made progress towards solving this question and constructed four-round protocols based on non-polynomial-time assumptions. More recently, Ciampi, Ostrovsky, Siniscalchi and Visconti (TCC 2017) closed the gap for two-party protocols by constructing a four-round protocol from polynomial-time assumptions. In another work, Ciampi, Ostrovsky, Siniscalchi and Visconti (TCC 2017) showed how to design a four-round multi-party protocol for the specific case of multi-party coin-tossing. In this work, we resolve this question by designing a four-round actively secure multi-party (two or more parties) protocol for general functionalities under standard polynomial-time hardness assumptions with a black-box proof of security.
2018
CRYPTO
Yes, There is an Oblivious RAM Lower Bound! 📺
Best Paper Award
An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky [JACM’96], is a (possibly randomized) RAM for which the memory access pattern reveals no information about the operations performed. The main performance metric of an ORAM is the bandwidth overhead, i.e., the multiplicative factor of extra memory blocks that must be accessed to hide the operation sequence. In their seminal paper introducing the ORAM, Goldreich and Ostrovsky proved an amortized $$\varOmega (\lg n)$$ bandwidth overhead lower bound for ORAMs with memory size n. Their lower bound is very strong in the sense that it applies to the “offline” setting, in which the ORAM knows the entire sequence of operations ahead of time. However, as pointed out by Boyle and Naor [ITCS’16] in the paper “Is there an oblivious RAM lower bound?”, there are two caveats to the lower bound of Goldreich and Ostrovsky: (1) it only applies to “balls in bins” algorithms, i.e., algorithms where the ORAM may only shuffle blocks around and not apply any sophisticated encoding of the data, and (2) it only applies to statistically secure constructions. Boyle and Naor showed that removing the “balls in bins” assumption would result in super-linear lower bounds for sorting circuits, a long-standing open problem in circuit complexity. As a way of circumventing this barrier, they also proposed a notion of an “online” ORAM, which is an ORAM that remains secure even if the operations arrive in an online manner. They argued that most known ORAM constructions work in the online setting as well. Our contribution is an $$\varOmega (\lg n)$$ lower bound on the bandwidth overhead of any online ORAM, even if we require only computational security and allow arbitrary representations of data, thus greatly strengthening the lower bound of Goldreich and Ostrovsky in the online setting. Our lower bound applies to ORAMs with memory size n and any word size $$r \ge 1$$. The bound therefore asymptotically matches the known upper bounds when $$r = \varOmega (\lg ^2 n)$$.
2018
CRYPTO
Constrained PRFs for $\mathrm{NC}^1$ in Traditional Groups 📺
We propose new constrained pseudorandom functions (CPRFs) in traditional groups. By traditional groups we mean cyclic multiplicative groups of prime order that were widely used in the 1980s and 1990s (sometimes called “pairing-free” groups). Our main constructions are as follows.
We propose a selectively single-key secure CPRF for circuits of depth $$O(\log n)$$ (that is, $$NC^1$$ circuits) in traditional groups, where n is the input size. It is secure under the L-decisional Diffie-Hellman inversion (L-DDHI) assumption in the group of quadratic residues $$\mathbb {QR}_q$$ and the decisional Diffie-Hellman (DDH) assumption in a traditional group of order q, in the standard model.
We propose a selectively single-key private bit-fixing CPRF in traditional groups. It is secure under the DDH assumption in any prime-order cyclic group in the standard model.
We propose an adaptively single-key secure CPRF for $$NC^1$$ and a private bit-fixing CPRF in the random oracle model.
To achieve security in the standard model, we develop a new technique using correlated-input secure hash functions.
2018
CRYPTO
The Algebraic Group Model and its Applications 📺
One of the most important and successful tools for assessing hardness assumptions in cryptography is the Generic Group Model (GGM). Over the past two decades, numerous assumptions and protocols have been analyzed within this model. While a proof in the GGM can certainly provide some measure of confidence in an assumption, its scope is rather limited since it does not capture group-specific algorithms that make use of the representation of the group. To overcome this limitation, we propose the Algebraic Group Model (AGM), a model that lies between the Standard Model and the GGM. It is the first restricted model of computation covering group-specific algorithms that still allows one to derive simple and meaningful security statements. To prove its usefulness, we show that several important assumptions, among them the Computational Diffie-Hellman, the Strong Diffie-Hellman, and the interactive LRSW assumptions, are equivalent to the Discrete Logarithm (DLog) assumption in the AGM. On the more practical side, we prove tight security reductions for two important schemes in the AGM to DLog or a variant thereof: the BLS signature scheme and Groth’s zero-knowledge SNARK (EUROCRYPT 2016), which is the most efficient SNARK for which only a proof in the GGM was known. Our proofs are quite simple and therefore less prone to subtle errors than those in the GGM. Moreover, in combination with known lower bounds on the Discrete Logarithm assumption in the GGM, our results can be used to derive lower bounds for all the above-mentioned results in the GGM.
2018
CRYPTO
GGH15 Beyond Permutation Branching Programs: Proofs, Attacks, and Candidates 📺
We carry out a systematic study of the GGH15 graded encoding scheme used with general branching programs. This is motivated by the fact that general branching programs are more efficient than permutation branching programs and also substantially more expressive in the read-once setting. Our main results are as follows:
Proofs. We present new constructions of private constrained PRFs and lockable obfuscation, for constraints (resp. functions to be obfuscated) that are computable by general branching programs. Our constructions are secure under LWE with subexponential approximation factors. Previous constructions of this kind crucially rely on the permutation structure of the underlying branching programs. Using general branching programs allows us to obtain more efficient constructions for certain classes of constraints (resp. functions), while posing new challenges in the proof, which we overcome using new proof techniques.
Attacks. We extend the previous attacks on indistinguishability obfuscation (iO) candidates that use GGH15 encodings. The new attack simply uses the rank of a matrix as the distinguisher, so we call it a “rank attack”. The rank attack breaks, among others, the iO candidate for general read-once branching programs by Halevi, Halevi, Shoup and Stephens-Davidowitz (CCS 2017).
Candidate Witness Encryption and iO. Drawing upon insights from our proofs and attacks, we present simple candidates for witness encryption and iO that resist the existing attacks, using GGH15 encodings. Our candidate for witness encryption crucially exploits the fact that formulas in conjunctive normal form (CNFs) can be represented by general, read-once branching programs.
2018
CRYPTO
Lower Bounds on Lattice Enumeration with Extreme Pruning 📺
At Eurocrypt ’10, Gama, Nguyen and Regev introduced lattice enumeration with extreme pruning: this algorithm is implemented in state-of-the-art lattice reduction software and used in challenge records. They showed that extreme pruning provided an exponential speed-up over full enumeration. However, no limit on its efficiency was known, which was problematic for long-term security estimates of lattice-based cryptosystems. We prove the first lower bounds on lattice enumeration with extreme pruning: if the success probability is lower bounded, we can lower bound the global running time taken by extreme pruning. Our results are based on geometric properties of cylinder intersections and some form of isoperimetry. We discuss their impact on lattice security estimates.
2018
CRYPTO
Dissection-BKW 📺
The slightly subexponential algorithm of Blum, Kalai and Wasserman (BKW) provides a basis for assessing LPN/LWE security. However, its huge memory consumption strongly limits its practical applicability, thereby preventing precise security estimates for cryptographic LPN/LWE instantiations. We provide the first time-memory trade-offs for the BKW algorithm. For instance, we show how to solve LPN in dimension k in time $$2^{\frac{4}{3} \frac{k}{\log k} }$$ and memory $$2^{\frac{2}{3} \frac{k}{\log k} }$$. Using the Dissection technique due to Dinur et al. (Crypto ’12) and a novel, slight generalization thereof, we obtain fine-grained trade-offs for any available (subexponential) memory, while the running time remains subexponential. Reducing the memory consumption of BKW below its running time also allows us to propose the first quantum version, QBKW, of the BKW algorithm.
2018
CRYPTO
Sub-linear Lattice-Based Zero-Knowledge Arguments for Arithmetic Circuits 📺
We propose the first zero-knowledge argument with sub-linear communication complexity for arithmetic circuit satisfiability over a prime $$p$$ whose security is based on the hardness of the short integer solution (SIS) problem. For a circuit with $$N$$ gates, the communication complexity of our protocol is $$O\left( \sqrt{N \lambda \log ^3 N}\right) $$, where $$\lambda $$ is the security parameter. A key component of our construction is a surprisingly simple zero-knowledge proof for pre-images of linear relations whose amortized communication complexity depends only logarithmically on the number of relations being proved. This latter protocol is a substantial improvement, both theoretically and in practice, over the previous results in this line of research of Damgård et al. (CRYPTO 2012), Baum et al. (CRYPTO 2016), Cramer et al. (EUROCRYPT 2017) and del Pino and Lyubashevsky (CRYPTO 2017), and we believe it to be of independent interest.
2018
CRYPTO
Lattice-Based Zero-Knowledge Arguments for Integer Relations 📺
We provide lattice-based protocols for proving relations among committed integers. While the most general zero-knowledge proof techniques can handle arithmetic circuits in the lattice setting, adapting them to prove statements over the integers is non-trivial, at least if we want to handle exponentially large integers while working with a polynomial-size modulus q. For a polynomial L, we provide zero-knowledge arguments allowing a prover to convince a verifier that committed L-bit bitstrings x, y and z are the binary representations of integers X, Y and Z satisfying $$Z=X+Y$$ over $$\mathbb {Z}$$. The complexity of our arguments is only linear in L. Using them, we construct arguments for proving inequalities $$X<Z$$ among committed integers, as well as arguments showing that a committed X belongs to a public interval $$[\alpha ,\beta ]$$, where $$\alpha $$ and $$\beta $$ can be arbitrarily large. Our range arguments have logarithmic cost (i.e., linear in L) in the maximal range magnitude. Using these tools, we obtain zero-knowledge arguments showing that a committed element X does not belong to a public set S using $$\widetilde{\mathcal {O}}(n \cdot \log |S|)$$ bits of communication, where n is the security parameter. We finally give a protocol for arguing that committed L-bit integers X, Y and Z satisfy the multiplicative relation $$Z=XY$$ over the integers, with communication cost subquadratic in L. To this end, we use our protocol for integer addition to prove the correct recursive execution of Karatsuba’s multiplication algorithm. The security of our protocols relies on standard lattice assumptions with polynomial modulus and polynomial approximation factor.
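For intuition, integer addition over binary representations can be reduced to low-degree relations via the schoolbook carry chain (a standard encoding, given here as background; the paper’s exact relation system may differ in details). With bits indexed from the least significant position:
$$c_0 = 0, \qquad z_i = x_i \oplus y_i \oplus c_i, \qquad c_{i+1} = x_i y_i \oplus x_i c_i \oplus y_i c_i \qquad (0 \le i < L),$$
together with $$c_L = 0$$ to exclude overflow. Each relation has degree two over GF(2), and there are L of them, which is consistent with the linear-in-L complexity stated above.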
2018
CRYPTO
Multi-Theorem Preprocessing NIZKs from Lattices 📺
Best Young Researcher Paper
Non-interactive zero-knowledge (NIZK) proofs are fundamental to modern cryptography. Numerous NIZK constructions are known in both the random oracle and the common reference string (CRS) models. In the CRS model, there exist constructions from several classes of cryptographic assumptions such as trapdoor permutations, pairings, and indistinguishability obfuscation. Notably absent from this list, however, are constructions from standard lattice assumptions. While there has been partial progress in realizing NIZKs from lattices for specific languages, constructing NIZK proofs (and arguments) for all of $$\mathsf {NP}$$ from standard lattice assumptions remains open. In this work, we make progress on this problem by giving the first construction of a multi-theorem NIZK argument for $$\mathsf {NP}$$ from standard lattice assumptions in the preprocessing model. In the preprocessing model, a (trusted) setup algorithm generates proving and verification keys. The proving key is needed to construct proofs and the verification key is needed to check proofs. In the multi-theorem setting, the proving and verification keys should be reusable for an unbounded number of theorems without compromising soundness or zero-knowledge. Existing constructions of NIZKs in the preprocessing model (or even the designated-verifier model) that rely on weaker assumptions like one-way functions or oblivious transfer are only secure in a single-theorem setting. Thus, constructing multi-theorem NIZKs in the preprocessing model does not seem to be inherently easier than constructing them in the CRS model. We begin by constructing a multi-theorem preprocessing NIZK directly from context-hiding homomorphic signatures. Then, we show how to efficiently implement the preprocessing step using a new cryptographic primitive called blind homomorphic signatures. This primitive may be of independent interest. Finally, we show how to leverage our new lattice-based preprocessing NIZKs to obtain new malicious-secure MPC protocols purely from standard lattice assumptions.
2018
CRYPTO
SPD$\mathbb {Z}_{2^k}$: Efficient MPC mod $2^k$ for Dishonest Majority 📺
Most multi-party computation protocols allow secure computation of arithmetic circuits over a finite field, such as the integers modulo a prime. In the more natural setting of integer computations modulo $$2^{k}$$, which are useful for simplifying implementations and applications, no solutions with active security are known unless the majority of the participants are honest. We present a new scheme for information-theoretic MACs that are homomorphic modulo $$2^k$$, and are as efficient as the well-known standard solutions that are homomorphic over fields. We apply this to construct an MPC protocol for dishonest majority in the preprocessing model that has efficiency comparable to the well-known SPDZ protocol (Damgård et al., CRYPTO 2012), with operations modulo $$2^k$$ instead of over a field. We also construct a matching preprocessing protocol based on oblivious transfer, which is in the style of the MASCOT protocol (Keller et al., CCS 2016) and almost as efficient.
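The core trick is easy to demonstrate: authenticate a value with a MAC computed in the larger ring $$\mathbb{Z}_{2^{k+s}}$$, where the extra s bits supply the security that multiplication by a key would lose modulo $$2^k$$. Below is a minimal Python sketch of this idea (variable names and parameters are illustrative; the real protocol additionally secret-shares the key, the values, and the MACs among the parties):

```python
import random

k, s = 32, 26
R = 1 << (k + s)                       # MACs live in Z_{2^(k+s)}

alpha = random.randrange(R)            # global MAC key (secret-shared in the real protocol)

def authenticate(x):
    """Authenticate a representative x in Z_{2^(k+s)}; only its low k bits are the value."""
    return x % R, (alpha * x) % R

(x, mx), (y, my) = authenticate(7), authenticate(1 << (k - 1))

# The MAC is additively homomorphic over Z_{2^(k+s)}: parties add value
# shares and MAC shares locally, and the MAC relation is preserved.
z, mz = (x + y) % R, (mx + my) % R
assert mz == (alpha * z) % R
assert z % (1 << k) == (7 + (1 << (k - 1))) % (1 << k)  # low k bits hold the result
```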
2018
CRYPTO
Yet Another Compiler for Active Security or: Efficient MPC Over Arbitrary Rings 📺
We present a very simple yet very powerful idea for turning any passively secure MPC protocol into an actively secure one, at the price of reducing the threshold of tolerated corruptions. Our compiler leads to very efficient MPC protocols for the important case of secure evaluation of arithmetic circuits over arbitrary rings (e.g., the natural case of $${\mathbb {Z}}_{2^{\ell }}\!$$) for a small number of parties. We show this by giving a concrete protocol in the preprocessing model for the popular setting with three parties and one corruption. This is the first protocol for secure computation over rings that achieves active security with constant overhead.
2018
CRYPTO
On Tightly Secure Non-Interactive Key Exchange 📺
We consider the reduction loss of security reductions for non-interactive key exchange (NIKE) schemes. Currently, no tightly secure NIKE schemes exist, and in fact Bader et al. (EUROCRYPT 2016) provide a lower bound (of $$\varOmega (n^2)$$, where $$n$$ is the number of parties an adversary interacts with) on the reduction loss for a large class of NIKE schemes. We offer two results: the first NIKE scheme with a reduction loss of $$n/2$$ that circumvents the lower bound of Bader et al., but is of course still far from tightly secure. Second, we provide a generalization of Bader et al.’s lower bound to a larger class of NIKE schemes (that also covers our NIKE scheme), with an adapted lower bound of $$n/2$$ on the reduction loss. Hence, in that sense, the reduction for our NIKE scheme is optimal.
2018
CRYPTO
Practical and Tightly-Secure Digital Signatures and Authenticated Key Exchange 📺
Tight security is increasingly gaining importance in real-world cryptography, as it allows one to choose cryptographic parameters in a way that is supported by a security proof, without the need to sacrifice efficiency by compensating for the security loss of a reduction with larger parameters. However, for many important cryptographic primitives, including digital signatures and authenticated key exchange (AKE), we are still lacking constructions that are suitable for real-world deployment. We construct the first truly practical signature scheme with tight security in a real-world multi-user setting with adaptive corruptions. The scheme is based on a new way of applying the Fiat-Shamir approach to construct tightly secure signatures from certain identification schemes. We then use this scheme as a building block to construct the first practical AKE protocol with tight security. It allows the establishment of a key within 1 RTT in a practical client-server setting, provides forward security, and is simple and easy to implement, and thus very suitable for practical deployment. It is essentially the “signed Diffie-Hellman” protocol, but with an additional message, which is crucial for achieving tight security. This additional message is used to overcome a technical difficulty in constructing tightly secure AKE protocols. For a theoretically sound choice of parameters and a moderate number of users and sessions, our protocol has computational efficiency comparable to the simple signed Diffie-Hellman protocol with EC-DSA, while for large-scale settings our protocol has even better computational performance, at moderately increased communication complexity.
2018
CRYPTO
Fast Correlation Attack Revisited 📺
A fast correlation attack (FCA) is a well-known cryptanalysis technique for LFSR-based stream ciphers. It exploits the correlation between the initial state of an LFSR and the corresponding keystream, and the goal is to recover the initial state of the LFSR. In this paper, we revisit the FCA from a new point of view based on a finite field, which yields a new property of the FCA when there are multiple linear approximations. Moreover, we propose a novel algorithm based on this new property, which enables us to reduce both the time and data complexities. We finally apply this technique to the Grain family, a well-analyzed class of stream ciphers. There are three stream ciphers in the Grain family: Grain-128a, Grain-128, and Grain-v1; Grain-v1 is in the eSTREAM portfolio and Grain-128a is standardized by ISO/IEC. We break all three, and for Grain-128a in particular, this is the first reported cryptanalysis of its full version.
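As background, the basic correlation attack that FCAs refine can be demonstrated in a few lines: generate a noisy version of an LFSR stream and recover the initial state as the candidate whose output best correlates with it. The toy LFSR, taps, and parameters below are illustrative only (this is the naive exhaustive variant, not the fast algorithm of the paper):

```python
import random

N_BITS, BIAS = 200, 0.25        # keystream length; Pr[noise bit = 1]

def lfsr(state, n):
    """5-bit LFSR for x^5 + x^2 + 1 (primitive): s_{t+5} = s_{t+2} xor s_t."""
    s, out = list(state), []
    for _ in range(n):
        out.append(s[0])
        s = s[1:] + [s[0] ^ s[2]]
    return out

random.seed(1)
secret = [1, 0, 1, 1, 0]
stream = [b ^ (random.random() < BIAS) for b in lfsr(secret, N_BITS)]

# Correlation attack: the correct state agrees with the noisy stream on
# ~75% of positions, wrong states on ~50%, so maximizing agreement wins.
best = max(range(1, 32), key=lambda c: sum(
    a == b for a, b in zip(lfsr([(c >> i) & 1 for i in range(5)], N_BITS), stream)))
assert [(best >> i) & 1 for i in range(5)] == secret
```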
2018
CRYPTO
A Key-Recovery Attack on 855-round Trivium 📺
In this paper, we propose a key-recovery attack on Trivium reduced to 855 rounds. As the output is a complex Boolean polynomial over the secret-key and IV bits, whose solutions are hard to find directly, we propose a novel nullification technique for reducing the output Boolean polynomial of 855-round Trivium. We then determine an upper bound on the degree of the reduced nonlinear Boolean polynomial and detect the right keys. These techniques are applicable to most stream ciphers based on nonlinear feedback shift registers (NFSRs). Our attack on 855-round Trivium has time complexity $$2^{77}$$. As far as we know, this is the best key-recovery attack on round-reduced Trivium. To verify our attack, we also give some experimental data on 721-round reduced Trivium.
2018
CRYPTO
Bernstein Bound on WCS is Tight 📺
In Eurocrypt 2018, Luykx and Preneel described hash-key-recovery and forgery attacks against polynomial-hash-based Wegman-Carter-Shoup (WCS) authenticators. Their attacks require $$2^{n/2}$$ message-tag pairs and recover the hash-key with probability about $$1.34\, \times \, 2^{-n}$$, where n is the bit-size of the hash-key. In Eurocrypt 2005, Bernstein provided an upper bound (known as the Bernstein bound) on the maximum forgery advantage. The bound says that all adversaries making $$O(2^{n/2})$$ queries to WCS have maximum forgery advantage $$O(2^{-n})$$. So, Luykx and Preneel essentially analyze WCS in a range of query complexities where WCS is known to be perfectly secure. Here we revisit the bound and find that WCS remains secure against all adversaries making $$q \ll \sqrt{n} \times 2^{n/2}$$ queries. So it is meaningful to analyze adversaries with beyond-birthday-bound complexities. In this paper, we show that the Bernstein bound is tight by describing two attacks (one in the “chosen-plaintext model” and the other in the “known-plaintext model”) which recover the hash-key (and hence forge) with probability at least $$\frac{1}{2}$$, based on $$\sqrt{n} \times 2^{n/2}$$ message-tag pairs. We also extend the forgery adversary to the Galois Counter Mode (GCM). More precisely, we recover the hash-key of GCM with probability at least $$\frac{1}{2}$$ based on only $$\sqrt{\frac{n}{\ell }} \times 2^{n/2}$$ encryption queries, where $$\ell $$ is the number of blocks present in encryption queries.
2018
CRYPTO
Correcting Subverted Random Oracles 📺
The random oracle methodology has proven to be a powerful tool for designing and reasoning about cryptographic schemes, and can often act as an effective bridge between theory and practice. In this paper, we focus on the basic problem of correcting faulty—or adversarially corrupted—random oracles, so that they can be confidently applied for such cryptographic purposes. We prove that a simple construction can transform a “subverted” random oracle—which disagrees with the original one on a negligible fraction of inputs—into a construction that is indifferentiable from a random function. Our results permit future designers of cryptographic primitives in typical kleptographic settings (i.e., with adversaries who may subvert the implementation of cryptographic algorithms in ways that remain undetectable via black-box testing) to use random oracles as a trusted black box, in spite of not trusting the implementation. Our analysis relies on a general rejection re-sampling lemma, which is a tool of possible independent interest.
2018
CRYPTO
Towards Bidirectional Ratcheted Key Exchange 📺
Ratcheted key exchange (RKE) is a cryptographic technique used in instant messaging systems like Signal and the WhatsApp messenger for attaining strong security in the face of state exposure attacks. RKE received academic attention in the recent works of Cohn-Gordon et al. (EuroS&P 2017) and Bellare et al. (CRYPTO 2017). While the former is analytical in the sense that it aims primarily at assessing the security that one particular protocol does achieve (which might be weaker than the notion that it should achieve), the authors of the latter develop and instantiate a notion of security from scratch, independently of existing implementations. Unfortunately, however, their model is quite restricted: for example, it considers only unidirectional communication and the exposure of only one of the two parties. In this article we resolve the limitations of prior work by developing alternative security definitions, for unidirectional RKE as well as for RKE where both parties contribute. We follow a purist approach, aiming at finding strong yet convincing notions that cover a realistic communication model with fully concurrent operation of both participants. We further propose secure instantiations (as the protocols analyzed or proposed by Cohn-Gordon et al. and Bellare et al. turn out to be weak in our models). While our scheme for the unidirectional case builds on a generic KEM as the main building block (unlike prior work, which explicitly requires Diffie–Hellman), our schemes for bidirectional RKE require a stronger, HIBE-like component.
2018
CRYPTO
Improved Division Property Based Cube Attacks Exploiting Algebraic Properties of Superpoly 📺
The cube attack is an important technique for the cryptanalysis of symmetric-key primitives, especially stream ciphers. Aiming at recovering some secret key bits, the adversary reconstructs a superpoly involving the secret key bits by summing over a set of plaintexts/IVs called a cube. The traditional cube attack only exploits linear/quadratic superpolies. Moreover, for a long time after its proposal, the size of the cubes was largely confined to an experimental range, e.g., typically 40. These limits were first overcome by the division property based cube attacks proposed by Todo et al. at CRYPTO 2017. Based on MILP-modelled division property, for a cube (index set) I, they identify the small (index) subset J of the secret key bits involved in the resultant superpoly. During the precomputation phase, which dominates the complexity of the cube attacks, $$2^{|I|+|J|}$$ encryptions are required to recover the superpoly. Therefore, their attacks are only available when the restriction $$|I|+|J|<n$$ is met. In this paper, we introduce several techniques to improve the division property based cube attacks by exploiting various algebraic properties of the superpoly:
1. We propose the “flag” technique to enhance the preciseness of MILP models, so that the proper non-cube IV assignments can be identified to obtain a non-constant superpoly.
2. A degree evaluation algorithm is presented to upper bound the degree of the superpoly. With the knowledge of its degree, the superpoly can be recovered without constructing its whole truth table. This enables us to explore larger cubes I even if $$|I|+|J|\ge n$$.
3. We provide a term enumeration algorithm for finding the monomials of the superpoly, so that the complexity of many attacks can be further reduced.
As an illustration, we apply our techniques to attack the initialization of several ciphers. Specifically, our key-recovery attacks reach 839-round Trivium, 891-round Kreyvium, 184-round Grain-128a and 750-round Acorn, respectively.
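The cube-sum principle is easy to see on a toy example: summing a keyed Boolean function over all assignments of the cube variables cancels every monomial except those divisible by the full cube, leaving exactly the superpoly. The function below is a hypothetical stand-in for a cipher output bit, not Trivium itself:

```python
from itertools import product

def f(k, v):
    """Toy output bit: v0*v1*k0 xor v0*v1*v2 xor v1*k1 xor k0*k2."""
    return (v[0] & v[1] & k[0]) ^ (v[0] & v[1] & v[2]) ^ (v[1] & k[1]) ^ (k[0] & k[2])

def cube_sum(k, cube_idx, n_iv=3):
    """XOR f over all assignments of the cube variables (non-cube IVs fixed to 0)."""
    acc = 0
    for bits in product([0, 1], repeat=len(cube_idx)):
        v = [0] * n_iv
        for i, b in zip(cube_idx, bits):
            v[i] = b
        acc ^= f(k, v)
    return acc

# For the cube I = {0, 1}: f = v0*v1*(k0 xor v2) xor (terms of lower cube degree),
# so with the non-cube IV v2 = 0 the superpoly is just the key bit k0.
for key in product([0, 1], repeat=3):
    assert cube_sum(list(key), [0, 1]) == key[0]
```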
2018
CRYPTO
Generic Attacks Against Beyond-Birthday-Bound MACs 📺
In this work, we study the security of several recent MAC constructions with provable security beyond the birthday bound. We consider block-cipher-based constructions with a double-block internal state, such as SUM-ECBC, PMAC+, 3kf9, GCM-SIV2, and some variants (LightMAC+, 1kPMAC+). All these MACs have a security proof up to $$2^{2n/3}$$ queries, but there are no known attacks with fewer than $$2^{n}$$ queries. We describe a new cryptanalysis technique for double-block MACs based on finding quadruples of messages with four pairwise collisions in halves of the state. We show how to detect such quadruples in SUM-ECBC, PMAC+, 3kf9, GCM-SIV2 and their variants with $$\mathcal {O}(2^{3n/4})$$ queries, and how to build a forgery attack with the same query complexity. The time complexity of these attacks is above $$2^n$$, but they show that the schemes do not reach full security in the information-theoretic model. Surprisingly, our attack on LightMAC+ also invalidates a recent security proof by Naito. Moreover, we give a variant of the attack against SUM-ECBC and GCM-SIV2 with time and data complexity $$\tilde{\mathcal {O}}(2^{6n/7})$$. As far as we know, this is the first attack with complexity below $$2^n$$ against a deterministic beyond-birthday-bound-secure MAC. As a side result, we also give a birthday attack against 1kf9, a single-key variant of 3kf9 that was withdrawn due to issues with its proof.
2018
CRYPTO
Searchable Encryption with Optimal Locality: Achieving Sublogarithmic Read Efficiency 📺
We propose the first linear-space searchable encryption scheme with constant locality and sublogarithmic read efficiency, strictly improving the previously best known read efficiency bound (Asharov et al., STOC 2016) from $$\varTheta (\log N \log \log N)$$ to $$O(\log ^{\gamma } N)$$, where $$\gamma =\frac{2}{3}+\delta $$ for any fixed $$\delta >0$$ and where N is the number of keyword-document pairs. Our scheme employs four different allocation algorithms for storing the keyword lists, depending on the size of the list considered each time. For our construction we develop (i) new probability bounds for the offline two-choice allocation problem; and (ii) a new I/O-efficient oblivious RAM with $$\tilde{O}(n^{1/3})$$ bandwidth overhead and zero failure probability, both of which can be of independent interest.
2018
CRYPTO
Tight Tradeoffs in Searchable Symmetric Encryption 📺
A searchable symmetric encryption (SSE) scheme enables a client to store data on an untrusted server while supporting keyword searches in a secure manner. Recent experiments have indicated that the practical relevance of such schemes heavily relies on the tradeoff between their space overhead, locality (the number of non-contiguous memory locations that the server accesses with each query), and read efficiency (the ratio between the number of bits the server reads with each query and the actual size of the answer). These experiments motivated Cash and Tessaro (EUROCRYPT ’14) and Asharov et al. (STOC ’16) to construct SSE schemes offering various such tradeoffs, and to prove lower bounds for natural SSE frameworks. Unfortunately, the best-possible tradeoff has not been identified, and there are substantial gaps between the existing schemes and lower bounds, indicating that a better understanding of SSE is needed. We establish tight bounds on the tradeoff between the space overhead, locality and read efficiency of SSE schemes within two general frameworks that capture the memory access pattern underlying all existing schemes. First, we introduce the “pad-and-split” framework, refining that of Cash and Tessaro while still capturing the same existing schemes. Within our framework we significantly strengthen their lower bound, proving that any scheme with locality L must use space $$\varOmega ( N \log N / \log L )$$ for databases of size N. This is a tight lower bound, matching the tradeoff provided by the scheme of Demertzis and Papamanthou (SIGMOD ’17), which is captured by our pad-and-split framework. Then, within the “statistical-independence” framework of Asharov et al. we show that their lower bound is essentially tight: we construct a scheme whose tradeoff matches their lower bound within an additive $$O(\log \log \log N)$$ factor in its read efficiency, once again improving upon the existing schemes. Our scheme offers optimal space and locality, and nearly-optimal read efficiency that depends on the frequency of the queried keywords: for a keyword that is associated with $$n = N^{1 - \epsilon (n)}$$ document identifiers, the read efficiency is $$\omega (1) \cdot {\epsilon }(n)^{-1}+ O(\log \log \log N)$$ when retrieving its identifiers (where the $$\omega (1)$$ term may be arbitrarily small, and $$\omega (1) \cdot {\epsilon }(n)^{-1}$$ is the lower bound proved by Asharov et al.). In particular, for any keyword that is associated with at most $$N^{1 - 1/o(\log \log \log N)}$$ document identifiers (i.e., for any keyword that is not exceptionally common), we provide read efficiency $$O(\log \log \log N)$$ when retrieving its identifiers.
2018
CRYPTO
Hardness of Non-interactive Differential Privacy from One-Way Functions 📺
A central challenge in differential privacy is to design computationally efficient non-interactive algorithms that can answer large numbers of statistical queries on a sensitive dataset. That is, we would like to design a differentially private algorithm that takes a dataset $$D \in X^n$$ consisting of some small number of elements n from some large data universe X, and efficiently outputs a summary that allows a user to efficiently obtain an answer to any query in some large family Q. Ignoring computational constraints, this problem can be solved even when X and Q are exponentially large and n is just a small polynomial; however, all algorithms with remotely similar guarantees run in exponential time. There have been several results showing that, under the strong assumption of indistinguishability obfuscation, no efficient differentially private algorithm exists when X and Q can be exponentially large. However, there are no strong separations between information-theoretic and computationally efficient differentially private algorithms under any standard complexity assumption. In this work we show that, if one-way functions exist, there is no general-purpose differentially private algorithm that works when X and Q are exponentially large and n is an arbitrary polynomial. In fact, we show that this result holds even if X is just subexponentially large (assuming only polynomially-hard one-way functions). This result solves an open problem posed by Vadhan in his recent survey [52].
2018
CRYPTO
Non-malleable Secret Sharing for General Access Structures 📺
Goyal and Kumar (STOC’18) recently introduced the notion of non-malleable secret sharing. Very roughly, the guarantee they seek is the following: the adversary may potentially tamper with all of the shares, and still, either the reconstruction procedure outputs the original secret, or the original secret is “destroyed” and the reconstruction outputs a string which is completely “unrelated” to the original secret. Prior works on non-malleable codes in the 2-split-state model imply constructions which can be seen as 2-out-of-2 non-malleable secret sharing (NMSS) schemes. Goyal and Kumar proposed constructions of t-out-of-n NMSS schemes. These constructions have already been shown to have a number of applications in cryptography. We continue this line of research and construct NMSS for more general access structures. We give a generic compiler that converts any statistical (resp. computational) secret sharing scheme realizing any access structure into another statistical (resp. computational) secret sharing scheme that not only realizes the same access structure but also ensures statistical non-malleability against a computationally unbounded adversary who tampers each of the shares arbitrarily and independently. Instantiating with known schemes, we get unconditional NMSS schemes that realize any access structure generated by polynomial-size monotone span programs. Similarly, we also obtain conditional NMSS schemes realizing access structures in $$\mathbf {monotone \;P}$$ (resp. $$\mathbf {monotone \;NP}$$) assuming one-way functions (resp. witness encryption). Towards considering more general tampering models, we also propose a construction of n-out-of-n NMSS. Our construction is secure even if the adversary could divide the shares into any two (possibly overlapping) subsets and then arbitrarily tamper the shares in each subset. Our construction is based on a property of the inner product and an observation that the inner-product based construction of Aggarwal, Dodis and Lovett (STOC’14) is in fact secure against a tampering class that is stronger than 2 split-states. We also show applications of our construction to the problem of non-malleable message transmission.
2018
CRYPTO
On the Local Leakage Resilience of Linear Secret Sharing Schemes 📺
We consider the following basic question: to what extent are standard secret sharing schemes and protocols for secure multiparty computation that build on them resilient to leakage? We focus on a simple local leakage model, where the adversary can apply an arbitrary function of a bounded output length to the secret state of each party, but cannot otherwise learn joint information about the states. We show that additive secret sharing schemes and high-threshold instances of Shamir’s secret sharing scheme are secure under local leakage attacks when the underlying field is of a large prime order and the number of parties is sufficiently large. This should be contrasted with the fact that any linear secret sharing scheme over a small characteristic field is clearly insecure under local leakage attacks, regardless of the number of parties. Our results are obtained via tools from Fourier analysis and additive combinatorics. We present two types of applications of the above results and techniques. As a positive application, we show that the “GMW protocol” for honest-but-curious parties, when implemented using shared products of random field elements (so-called “Beaver Triples”), is resilient in the local leakage model for sufficiently many parties and over certain fields. This holds even when the adversary has full access to a constant fraction of the views. As a negative application, we rule out multi-party variants of the share conversion scheme used in the 2-party homomorphic secret sharing scheme of Boyle et al. (Crypto 2016).
2018
CRYPTO
Threshold Cryptosystems from Threshold Fully Homomorphic Encryption 📺
We develop a general approach to adding a threshold functionality to a large class of (non-threshold) cryptographic schemes. A threshold functionality enables a secret key to be split into a number of shares, so that only a threshold of parties can use the key, without reconstructing the key. We begin by constructing a threshold fully homomorphic encryption scheme (ThFHE) from the learning with errors (LWE) problem. We next introduce a new concept, called a universal thresholdizer, from which many threshold systems are possible. We show how to construct a universal thresholdizer from our ThFHE. A universal thresholdizer can be used to add threshold functionality to many systems, such as CCA-secure public-key encryption (PKE), signature schemes, pseudorandom functions, and other primitives. In particular, by applying this paradigm to a (non-threshold) lattice signature system, we obtain the first single-round threshold signature scheme from LWE.
2018
CRYPTO
Optimal Channel Security Against Fine-Grained State Compromise: The Safety of Messaging 📺
We aim to understand the best possible security of a (bidirectional) cryptographic channel against an adversary that may arbitrarily and repeatedly learn the secret state of either communicating party. We give a formal security definition and a proven-secure construction. This construction provides better security against state compromise than the Signal Double Ratchet Algorithm or any other known channel construction. To facilitate this we define and construct new forms of public-key encryption and digital signatures that update their keys over time.
2018
CRYPTO
Multi-Input Functional Encryption for Inner Products: Function-Hiding Realizations and Constructions Without Pairings 📺
We present new constructions of multi-input functional encryption (MIFE) schemes for the inner-product functionality that improve the state-of-the-art solution of Abdalla et al. (Eurocrypt 2017) in two main directions. First, we put forward a novel methodology to convert single-input functional encryption for inner products into multi-input schemes for the same functionality. Our transformation is surprisingly simple, general and efficient. In particular, it does not require pairings and it can be instantiated with all known single-input schemes. This leads to two main advances. First, we enlarge the set of assumptions this primitive can be based on, notably obtaining new MIFEs for inner products from plain DDH, LWE, and Decisional Composite Residuosity. Second, we obtain the first MIFE schemes from standard assumptions where decryption works efficiently even for messages of super-polynomial size. Our second main contribution is the first function-hiding MIFE scheme for inner products based on standard assumptions. To this end, we show how to extend the original, pairing-based MIFE by Abdalla et al. in order to make it function hiding, thus obtaining a function-hiding MIFE from the MDDH assumption.
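To ground the single-input starting point of the transformation, here is a toy Python version of a simple DDH-style inner-product FE scheme in the spirit of Abdalla et al.: decryption recovers $$g^{\langle x,y\rangle }$$ and then takes a brute-force discrete log of the small result. The group and parameters are illustrative only and offer no real security:

```python
from random import randrange

P, G, n = 2**61 - 1, 3, 3          # toy group Z_P^* and generator (no real security)
msk = [randrange(P - 1) for _ in range(n)]
mpk = [pow(G, s, P) for s in msk]  # h_i = g^{s_i}

def enc(x):
    r = randrange(P - 1)
    return pow(G, r, P), [pow(mpk[i], r, P) * pow(G, x[i], P) % P for i in range(n)]

def keygen(y):                     # sk_y = <msk, y> mod the group order
    return sum(s * yi for s, yi in zip(msk, y)) % (P - 1)

def dec(sk_y, y, ct):
    c0, cs = ct
    t = 1
    for ci, yi in zip(cs, y):
        t = t * pow(ci, yi, P) % P
    t = t * pow(c0, P - 1 - sk_y, P) % P   # strip g^{r*sk_y}, leaving g^{<x,y>}
    v, acc = 0, 1                          # brute-force discrete log of the small result
    while acc != t:
        v, acc = v + 1, acc * G % P
    return v

x, y = [2, 3, 5], [1, 4, 2]
assert dec(keygen(y), y, enc(x)) == sum(a * b for a, b in zip(x, y))  # 2+12+10 = 24
```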
2018
CRYPTO
Encrypt or Decrypt? To Make a Single-Key Beyond Birthday Secure Nonce-Based MAC 📺
At CRYPTO 2016, Cogliati and Seurin proposed a highly secure nonce-based MAC called the Encrypted Wegman-Carter with Davies-Meyer (EWCDM) construction, defined as E_{K2}(E_{K1}(N) ⊕ N ⊕ H_{Kh}(M)) for a nonce N and a message M. This construction achieves roughly 2^{2n/3}-bit MAC security under the assumption that E is a PRP-secure n-bit block cipher and H is an almost-XOR-universal n-bit hash function. In this paper we propose the Decrypted Wegman-Carter with Davies-Meyer (DWCDM) construction, which is structurally very similar to its predecessor EWCDM except that the outer encryption call is replaced by decryption. The biggest advantage of DWCDM is that it yields a truly single-key MAC: the two block cipher calls can use the same block cipher key K = K1 = K2. Moreover, the hash key can be derived as Kh = E_K(1), as long as |Kh| = n. Whether encryption or decryption is used in the outer layer makes a huge difference; using decryption enables us to apply an extended version of Patarin's mirror theory to the security analysis of the construction. DWCDM is secure beyond the birthday bound, roughly up to 2^{2n/3} MAC queries and 2^n verification queries against nonce-respecting adversaries, and remains secure up to 2^{n/2} MAC queries and 2^n verification queries against nonce-misusing adversaries.
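The structural difference between the two constructions is easy to see in code. Below is a toy Python sketch; the Feistel "block cipher" and polynomial "hash" are stand-ins chosen for self-containedness (they are NOT the secure primitives the security claims assume), but the EWCDM/DWCDM wiring matches the formulas above.

import hashlib

N_BITS = 64
HALF = N_BITS // 2
MASK = (1 << HALF) - 1

def _f(key: bytes, rnd: int, half: int) -> int:
    d = hashlib.sha256(key + bytes([rnd]) + half.to_bytes(4, "big")).digest()
    return int.from_bytes(d[:4], "big")

def E(key: bytes, x: int) -> int:
    # toy 4-round Feistel permutation standing in for an n-bit PRP
    l, r = x >> HALF, x & MASK
    for rnd in range(4):
        l, r = r, l ^ _f(key, rnd, r)
    return (l << HALF) | r

def E_inv(key: bytes, y: int) -> int:
    # the inverse permutation, i.e. "decryption"
    l, r = y >> HALF, y & MASK
    for rnd in reversed(range(4)):
        l, r = r ^ _f(key, rnd, l), l
    return (l << HALF) | r

def H(kh: int, msg: list[int]) -> int:
    # toy polynomial hash standing in for an almost-XOR-universal hash
    p, acc = 2**64 - 59, 0
    for m in msg:
        acc = (acc + m) * kh % p
    return acc & ((1 << N_BITS) - 1)

def ewcdm(k1: bytes, k2: bytes, kh: int, nonce: int, msg: list[int]) -> int:
    return E(k2, E(k1, nonce) ^ nonce ^ H(kh, msg))    # outer call encrypts

def dwcdm(k: bytes, nonce: int, msg: list[int]) -> int:
    kh = E(k, 1)                                       # hash key Kh = E_K(1)
    return E_inv(k, E(k, nonce) ^ nonce ^ H(kh, msg))  # outer call decrypts

print(hex(dwcdm(b"single-key", 42, [1, 2, 3])))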
2018
CRYPTO
Rasta: A Cipher with Low ANDdepth and Few ANDs per Bit 📺
Recent developments in multi-party computation (MPC) and fully homomorphic encryption (FHE) have promoted the design and analysis of symmetric cryptographic schemes that minimize multiplications in one way or another. In this paper, we propose with Rasta a design strategy for symmetric encryption that has ANDdepth d and at the same time needs only d ANDs per encrypted bit. Even for very low values of d between 2 and 6 we can give strong evidence that attacks may not exist. This contributes to a better understanding of the limits of what concrete symmetric-key constructions can theoretically achieve with respect to AND-related metrics, and is to the best of our knowledge the first attempt to minimize both metrics simultaneously. Furthermore, we can give evidence that for choices of d between 4 and 6 the resulting implementation properties may well be competitive, by testing our construction in the use case of removing the large ciphertext expansion when using the BGV scheme.
2018
CRYPTO
Non-Uniform Bounds in the Random-Permutation, Ideal-Cipher, and Generic-Group Models 📺
The random-permutation model (RPM) and the ideal-cipher model (ICM) are idealized models that offer a simple and intuitive way to assess the conjectured standard-model security of many important symmetric-key and hash-function constructions. Similarly, the generic-group model (GGM) captures generic algorithms against assumptions in cyclic groups by modeling encodings of group elements as random injections, and allows one to derive simple bounds on the advantage of such algorithms. Unfortunately, both well-known attacks, e.g., based on rainbow tables (Hellman, IEEE Transactions on Information Theory ’80), and more recent ones, e.g., against the discrete-logarithm problem (Corrigan-Gibbs and Kogan, EUROCRYPT ’18), suggest that the concrete security bounds one obtains from such idealized proofs are often completely inaccurate if one considers non-uniform or preprocessing attacks in the standard model. To remedy this situation, this work (i) defines the auxiliary-input (AI) RPM/ICM/GGM, which capture both non-uniform and preprocessing attacks by allowing an attacker to leak an arbitrary (bounded-output) function of the oracle’s function table; (ii) derives the first non-uniform bounds for a number of important practical applications in the AI-RPM/ICM, including constructions based on the Merkle-Damgård and sponge paradigms, which underlie the SHA hashing standards, and for AI-RPM/ICM applications with computational security; and (iii) using simpler proofs, recovers the AI-GGM security bounds obtained by Corrigan-Gibbs and Kogan against preprocessing attackers for a number of assumptions related to cyclic groups, such as discrete logarithms and Diffie-Hellman problems, and provides new bounds for two assumptions. An important step in obtaining these results is to port the tools used in recent work by Coretti et al. (EUROCRYPT ’18) from the ROM to the RPM/ICM/GGM, resulting in very powerful and easy-to-use tools for proving security bounds against non-uniform and preprocessing attacks.
2018
CRYPTO
Provable Security of (Tweakable) Block Ciphers Based on Substitution-Permutation Networks 📺
Substitution-Permutation Networks (SPNs) refer to a family of constructions which build a wn-bit block cipher from n-bit public permutations (often called S-boxes), alternating keyless and “local” substitution steps utilizing such S-boxes with keyed and “global” permutation steps which are non-cryptographic. Many widely deployed block ciphers are constructed based on SPNs, but there are essentially no provable-security results about them. In this work, we initiate a comprehensive study of the provable security of SPNs as (possibly tweakable) wn-bit block ciphers, when the underlying n-bit permutation is modeled as a public random permutation. When the permutation step is linear (which is the case for most existing designs), we show that 3 SPN rounds are necessary and sufficient for security. On the other hand, even 1-round SPNs can be secure when non-linearity is allowed. Moreover, 2-round non-linear SPNs can achieve “beyond-birthday” security (up to 2^{2n/3} adversarial queries), and, as the number of non-linear rounds increases, our bounds are meaningful for a number of queries approaching 2^n. Finally, our non-linear SPNs can be made tweakable by incorporating the tweak into the permutation layer, and provide good multi-user security. As an application, our construction can turn two public n-bit permutations (or fixed-key block ciphers) into a tweakable block cipher working on wn-bit inputs, with a 6n-bit key and an n-bit tweak (for any w ≥ 2); the tweakable block cipher provides security up to 2^{2n/3} adversarial queries in the random permutation model, while requiring only w calls to each permutation and 3w field multiplications for each wn-bit input.
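As a structural illustration of the wn-bit SPN shape just described (and nothing more: the S-box below is affine and the linear layer is a toy, so this has no cryptographic strength), a sketch in Python:

w, n = 4, 8                                           # four 8-bit cells
SBOX = [(7 * x + 3) % 256 for x in range(256)]        # a public 8-bit bijection
assert len(set(SBOX)) == 256

def linear_layer(cells, round_key):
    # keyed, "global", non-cryptographic step: add key, then rotate cells
    mixed = [c ^ k for c, k in zip(cells, round_key)]
    return mixed[1:] + mixed[:1]

def spn_encrypt(block, round_keys):
    state = list(block)
    for rk in round_keys:
        state = [SBOX[c] for c in state]              # keyless local substitution
        state = linear_layer(state, rk)               # keyed global permutation
    return state

round_keys = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]  # 3 rounds, per the result above
print(spn_encrypt([0x00, 0x11, 0x22, 0x33], round_keys))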
2018
CRYPTO
Verifiable Delay Functions 📺
We study the problem of building a verifiable delay function (VDF). A VDF requires a specified number of sequential steps to evaluate, yet produces a unique output that can be efficiently and publicly verified. VDFs have many applications in decentralized systems, including public randomness beacons, leader election in consensus protocols, and proofs of replication. We formalize the requirements for VDFs and present new candidate constructions that are the first to achieve an exponential gap between evaluation and verification time.
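For intuition, here is a minimal sketch of the VDF interface only. Iterated hashing gives the sequentiality requirement, but verification below redoes all the work; the point of the constructions in this paper is precisely to make verification exponentially cheaper, which this toy does not achieve.

import hashlib

def vdf_eval(x: bytes, t: int) -> bytes:
    """t sequential hash steps: step i depends on step i-1, so evaluation
    cannot be parallelized (under standard assumptions on the hash)."""
    y = x
    for _ in range(t):
        y = hashlib.sha256(y).digest()
    return y

def vdf_verify_naive(x: bytes, t: int, y: bytes) -> bool:
    return vdf_eval(x, t) == y   # as slow as evaluation: the toy's deficiency

y = vdf_eval(b"beacon-seed", t=100_000)
assert vdf_verify_naive(b"beacon-seed", 100_000, y)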
2018
CRYPTO
Proofs of Work From Worst-Case Assumptions 📺
We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of Ball et al. (STOC ’17), which presents PoWs that are based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. These, however, were presented as a ‘proof of concept’ of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs could not be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold, and further show that the PoWs’ structure can be exploited in ways previous heuristic PoWs could not. This creates full PoWs that are provably hard from worst-case assumptions (previously, PoWs were either only based on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS ’16)) while still retaining significant structure to enable extra properties of our PoWs. Namely, we show that the PoWs of Ball et al. (STOC ’17) can be modified to have much faster verification time, can be proved in zero knowledge, and more. Finally, as our PoWs are based on evaluating low-degree polynomials originating from average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. For our context, this implies the required non-amortizability of our PoWs.
2018
CRYPTO
Out-of-Band Authentication in Group Messaging: Computational, Statistical, Optimal 📺
Extensive efforts are currently put into securing messaging platforms, where a key challenge is that of protecting against man-in-the-middle attacks when setting up secure end-to-end channels. The vast majority of these efforts, however, have so far focused on securing user-to-user messaging, and recent attacks indicate that the security of group messaging is still quite fragile. We initiate the study of out-of-band authentication in the group setting, extending the user-to-user setting where messaging platforms (e.g., Telegram and WhatsApp) protect against man-in-the-middle attacks by assuming that users have access to an external channel for authenticating one short value (e.g., two users who recognize each other’s voice can compare a short value). Inspired by the frameworks of Vaudenay (CRYPTO ’05) and Naor et al. (CRYPTO ’06) in the user-to-user setting, we assume that users communicate over a completely-insecure channel, and that a group administrator can out-of-band authenticate one short message to all users. An adversary may read, remove, or delay this message (for all or for some of the users), but cannot undetectably modify it. Within our framework we establish tight bounds on the tradeoff between the adversary’s success probability and the length of the out-of-band authenticated message (which is a crucial bottleneck given that the out-of-band channel is of low bandwidth). We consider both computationally-secure and statistically-secure protocols, and for each flavor of security we construct an authentication protocol and prove a lower bound showing that our protocol achieves essentially the best possible tradeoff. In particular, considering groups that consist of an administrator and k additional users, for statistically-secure protocols we show that at least (k+1)·(log(1/ε) − Θ(1)) bits must be out-of-band authenticated, whereas for computationally-secure ones log(1/ε) + log k bits suffice, where ε is the adversary’s success probability. Moreover, instantiating our computationally-secure protocol in the random-oracle model yields an efficient and practically-relevant protocol (which, alternatively, can also be based on any one-way function in the standard model).
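To make the two bounds tangible, a worked instance in Python with arbitrarily chosen illustrative values k = 15 and ε = 2^-30:

from math import log2

k, eps = 15, 2**-30
statistical = (k + 1) * log2(1 / eps)     # (k+1)*log(1/eps), up to the Theta(1) slack
computational = log2(1 / eps) + log2(k)   # log(1/eps) + log k

print(f"statistical : ~{statistical:.0f} out-of-band bits")    # ~480 bits
print(f"computational: ~{computational:.1f} out-of-band bits")  # ~33.9 bits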
2018
CRYPTO
Faster Homomorphic Linear Transformations in HElib 📺
HElib is a software library that implements homomorphic encryption (HE), with a focus on effective use of “packed” ciphertexts. An important operation is applying a known linear map to a vector of encrypted data. In this paper, we describe several algorithmic improvements that significantly speed up this operation: in our experiments, our new algorithms are 30–75 times faster than those previously implemented in HElib for typical parameters. One application that can benefit from faster linear transformations is bootstrapping (in particular, “thin bootstrapping” as described in [Chen and Han, Eurocrypt 2018]). In some settings, our new algorithms for linear transformations result in a 6× speedup for the entire thin bootstrapping operation. Our techniques also reduce the size of the large public evaluation key, often using 33%–50% less space than the previous HElib implementation. We also implemented a new tradeoff that enables a drastic reduction in size, resulting in a 25× factor or more for some parameters, at the price of only a 2–4× slowdown in running time (and giving up some parallelization opportunities).
2018
CRYPTO
CAPA: The Spirit of Beaver Against Physical Attacks 📺
In this paper we introduce two things. First, the Tile-Probe-and-Fault model, a model generalising the wire-probe model of Ishai et al. and extending it to cover both more realistic side-channel leakage scenarios on a chip and fault and combined attacks. Second, CAPA: a combined Countermeasure Against Physical Attacks. Our countermeasure is motivated by our model, and aims to provide security against higher-order SCA, multiple-shot FA and combined attacks. The tile-probe-and-fault model leads one to naturally look (by analogy) at actively secure multi-party computation protocols. Indeed, CAPA draws much inspiration from the MPC protocol SPDZ. To demonstrate that the model and the CAPA countermeasure are not just theoretical constructions, but could also serve to build practical countermeasures, we present initial experiments of proof-of-concept designs using the CAPA methodology: a hardware implementation of the KATAN and AES block ciphers, as well as a software bitsliced AES S-box implementation. We demonstrate experimentally that the design can resist second-order DPA attacks, even when the attacker is presented with many hundreds of thousands of traces. In addition, our proof-of-concept can also detect faults within our model with high probability, in accordance with the methodology.
2018
CRYPTO
Fast Message Franking: From Invisible Salamanders to Encryptment 📺
Message franking enables cryptographically verifiable reporting of abusive messages in end-to-end encrypted messaging. Grubbs, Lu, and Ristenpart recently formalized the needed underlying primitive, what they call compactly committing authenticated encryption (AE), and analyzed the security of a number of approaches. But all known secure schemes are still slow compared to the fastest standard AE schemes. For this reason Facebook Messenger uses AES-GCM for franking of attachments such as images or videos. We show how to break Facebook’s attachment franking scheme: a malicious user can send an objectionable image to a recipient but that recipient cannot report it as abuse. The core problem stems from the use of fast but non-committing AE, and so we build the fastest compactly committing AE schemes to date. To do so we introduce a new primitive, called encryptment, which captures the essential properties needed. We prove that, unfortunately, schemes with a performance profile similar to AES-GCM won’t work. Instead, we show how to efficiently transform Merkle-Damgård-style hash functions into secure encryptments, and how to efficiently build compactly committing AE from encryptment. Ultimately our main construction allows franking using just a single computation of SHA-256 or SHA-3. Encryptment proves useful for a variety of other applications, such as remotely keyed AE and concealments, and our results imply the first single-pass schemes in these settings as well.
2018
CRYPTO
Indifferentiable Authenticated Encryption 📺
We study Authenticated Encryption with Associated Data (AEAD) from the viewpoint of composition in arbitrary (single-stage) environments. We use the indifferentiability framework to formalize the intuition that a “good” AEAD scheme should have random ciphertexts subject to decryptability. Within this framework, we can then apply the indifferentiability composition theorem to show that such schemes offer extra safeguards wherever the relevant security properties are not known, or cannot be predicted in advance, as in general-purpose crypto libraries and standards. We show, on the negative side, that generic composition (in many of its configurations) and well-known classical and recent schemes fail to achieve indifferentiability. On the positive side, we give a provably indifferentiable Feistel-based construction, which reduces the round complexity from at least 6, needed for blockciphers, to only 3 for encryption. This result is not too far off the theoretical optimum, as we give a lower bound that rules out the indifferentiability of any construction with less than 2 rounds.
2018
CRYPTO
The Curse of Small Domains: New Attacks on Format-Preserving Encryption 📺
Format-preserving encryption (FPE) produces ciphertexts which have the same format as the plaintexts. Building secure FPE is very challenging, and recent attacks (Bellare, Hoang, Tessaro, CCS ’16; Durak and Vaudenay, CRYPTO ’17) have highlighted security deficiencies in the recent NIST SP800-38G standard. This has left open the question of whether practical schemes with high security exist. In this paper, we continue the investigation of attacks against FPE schemes. Our first contribution is new known-plaintext message recovery attacks against Feistel-based FPEs (such as FF1/FF3 from the NIST SP800-38G standard) which improve upon previous work in terms of amortized complexity in multi-target scenarios, where multiple ciphertexts are to be decrypted. Our attacks are also qualitatively better in that they make no assumptions on the correlation between the targets to be decrypted and the known plaintexts. We also surface a new vulnerability specific to FF3 and its handling of odd-length domains, which leads to a substantial speedup in our attacks. We also show the first attacks against non-Feistel-based FPEs. Specifically, we show a strong message-recovery attack for FNR, a construction proposed by Cisco which replaces two rounds in the Feistel construction with a pairwise-independent permutation, following the paradigm of Naor and Reingold (JoC, ’99). We also provide a strong ciphertext-only attack against a variant of the DTP construction by Brightwell and Smith, which is deployed by Protegrity within commercial applications. All of our attacks show that existing constructions fall short of achieving desirable security levels. For Feistel and the FNR schemes, our attacks become feasible on small domains, e.g., 8 bits, for suggested round numbers. Our attack against the DTP construction is practical even for large domains. We provide proof-of-concept implementations of our attacks that verify our theoretical findings.
2018
CRYPTO
Cryptanalysis via Algebraic Spans 📺
We introduce a method for obtaining provable polynomial-time solutions of problems in nonabelian algebraic cryptography. This method is widely applicable, easier to apply, and more efficient than earlier methods. After demonstrating its applicability to the major classic nonabelian protocols, we use this method to cryptanalyze the Triple Decomposition key exchange protocol, the only classic group-theory-based key exchange protocol that could not be cryptanalyzed by earlier methods.
2018
TCHES
FPGA-based Accelerator for Post-Quantum Signature Scheme SPHINCS-256 📺
In recent years, a substantial amount of research has been conducted and progress made in the area of quantum computers. Small functional prototypes have already been reported. If they scale as expected, they will eventually be able to break current public-key cryptosystems. The goal of post-quantum cryptography is to develop cryptographic systems that are secure against attacks originating from both quantum and classical computers. Frequently referenced post-quantum signature schemes are based on the security of hash functions. A promising candidate in this group is SPHINCS-256. This paper presents the first FPGA-based hardware accelerator for SPHINCS-256. It can be implemented on an entry-level FPGA, occupying roughly 19,000 LUTs, 38,000 FFs and 36 BRAMs. On a Kintex-7 Xilinx FPGA, signing takes 1.53 milliseconds, and verification needs only 65 microseconds. The area and throughput of the accelerator are in a range that outperforms today’s widely used RSA signature scheme. The performance can even keep up with ECDSA accelerators. Hence, SPHINCS-256 is a hot candidate to replace RSA and ECDSA in a post-quantum world.
2018
TCHES
High Order Masking of Look-up Tables with Common Shares 📺
Masking is an effective countermeasure against side-channel attacks. In this paper, we improve the efficiency of the high-order masking of look-up tables countermeasure introduced at Eurocrypt 2014, based on a combination of three techniques, and still with a proof of security in the Ishai-Sahai-Wagner (ISW) probing model. The first technique consists in proving security under the stronger t-SNI definition, which enables the use of n = t+1 shares instead of n = 2t+1 against t-th order attacks. The second technique consists in progressively incrementing the number of shares within the countermeasure, from a single share to n, thereby reducing the complexity of the countermeasure. The third technique consists in adapting the common-shares approach introduced by Coron et al. at CHES 2016, so that half of a randomized look-up table can be pre-computed for multiple S-boxes. We show that our techniques perform well in practice. In theory, the combination of the three techniques should lead to a factor 10.7 improvement in efficiency, for a large number of shares. For a practical implementation with a reasonable number of shares, we get a 4.8 speed-up factor for AES.
2018
TCHES
A Cautionary Note When Looking for a Truly Reconfigurable Resistive RAM PUF 📺
The reconfigurable physically unclonable function (PUF) is an advanced security hardware primitive, suitable for applications requiring key renewal or similar refresh functions. The oxygen-vacancy-based resistive RAM (RRAM) has been claimed to be a physically reconfigurable PUF due to its intrinsic switching variability. This paper first analyzes and compares various previously published RRAM-based PUFs using a physics-based RRAM model. We next discuss their possible reconfigurability assuming ideal configuration-to-configuration behavior. However, the RRAM-to-RRAM variability, which mainly originates from a variable number of unremovable vacancies inside the RRAM filament, has been observed to have a significant impact on the reconfigurability. We show by quantitative analysis the clear uniqueness degradation from the ideal situation in all the discussed implementations. We thus conclude that true reconfigurability with RRAM PUFs might be unachievable due to this physical phenomenon.
2018
TCHES
EM Analysis in the IoT Context: Lessons Learned from an Attack on Thread 📺
The distinguishing feature of the Internet of Things is that many devices get interconnected. The threat of side-channel attacks in this setting is less understood than the threat of traditional network and software exploitation attacks that are perceived to be more powerful. This work is a case study of Thread, an emerging network and transport level stack designed to facilitate secure communication between heterogeneous IoT devices. We perform the first side-channel vulnerability analysis of the Thread networking stack. We leverage various network mechanisms to trigger manipulations of the security material or to get access to the network credentials. We choose the most feasible attack vector to build a complete attack that combines network-specific mechanisms and Differential Electromagnetic Analysis. When successfully applied on a Thread network, the attack gives full network access to the adversary. We evaluate the feasibility of our attack in a TI CC2538 setup running OpenThread, a certified open-source implementation of the stack. The full attack does not succeed in our setting. The root cause for this failure is not any particular security feature of the protocol or the implementation, but a side-effect of a feature not related to security. We summarize the problems that we find in the protocol with respect to side-channel analysis, and suggest a range of countermeasures to prevent our attack and the other attack vectors we identified during the vulnerability analysis. In general, we demonstrate that the elaborate security mechanisms of Thread make a side-channel attack non-trivial to mount. Similar to a modern software exploit, it requires chaining multiple vulnerabilities. Nevertheless, such attacks are feasible. Being perhaps too expensive for settings like smart homes, they pose a relatively higher threat to commercial settings. We believe our experience provides a useful lesson to designers of IoT protocols and devices.
2018
TCHES
Stealthy Opaque Predicates in Hardware - Obfuscating Constant Expressions at Negligible Overhead 📺
Opaque predicates are a well-established fundamental building block for software obfuscation. Simplified, an opaque predicate implements an expression that provides constant Boolean output, but appears to have dynamic behavior under static analysis. Even though there has been extensive research regarding opaque predicates in software, techniques for opaque predicates in hardware are barely explored. In this work, we propose a novel technique to instantiate opaque predicates in hardware, such that they (1) are resource-efficient, and (2) are challenging to reverse engineer even with dynamic analysis capabilities. We demonstrate the applicability of opaque predicates in hardware for both protection of intellectual property and obfuscation of cryptographic hardware Trojans. Our results show that we are able to implement stealthy opaque predicates in hardware with minimal overhead in area and no impact on latency.
2018
TCHES
Dismantling the AUT64 Automotive Cipher 📺
AUT64 is a 64-bit automotive block cipher with a 120-bit secret key used in a number of security-sensitive applications such as vehicle immobilization and remote keyless entry systems. In this paper, we present for the first time full details of AUT64 including a complete specification and analysis of the block cipher, the associated authentication protocol, and its implementation in a widely-used vehicle immobiliser system that we have reverse engineered. Secondly, we reveal a number of cryptographic weaknesses in the block cipher design. Finally, we study the concrete use of AUT64 in a real immobiliser system, and pinpoint severe weaknesses in the key diversification scheme employed by the vehicle manufacturer. We present two key-recovery attacks based on the cryptographic weaknesses that, combined with the implementation flaws, break both the 8- and 24-round configurations of AUT64. Our attack on eight rounds requires only 512 plaintext-ciphertext pairs and, in the worst case, just 2^37.3 offline encryptions. In most cases, the attack can be executed within milliseconds on a standard laptop. Our attack on 24 rounds requires 2 plaintext-ciphertext pairs and 2^48.3 encryptions to recover the 120-bit secret key in the worst case. We have strong indications that a large part of the key is kept constant across vehicles, which would enable an attack using a single communication with the transponder and negligible offline computation.
2018
TCHES
Generic Low-Latency Masking in Hardware 📺
In this work, we introduce a generalized concept for low-latency masking that is applicable to any implementation and protection order, and (in its most extreme form) does not require on-the-fly randomness. The main idea of our approach is to avoid collisions of shared variables in nonlinear circuit parts and to skip the share compression. We show the feasibility of our approach on a full implementation of a one-round unrolled Ascon variant and on an AES S-box case study. Additionally, we discuss possible trade-offs to make our approach interesting for practical implementations. As a result, we obtain a first-order masked AES S-box that is calculated in a single clock cycle with rather high implementation costs (60.7 kGE), and a two-cycle variant with much lower implementation costs (6.7 kGE). The side-channel resistance of our Ascon S-box designs up to order three is then verified using the formal analysis tool of [BGI+18]. Furthermore, we introduce a taint-checking-based verification approach that works specifically for our low-latency approach and allows us to verify large circuits like our low-latency AES S-box design in reasonable time.
2018
TCHES
Fast FPGA Implementations of Diffie-Hellman on the Kummer Surface of a Genus-2 Curve 📺
We present the first hardware implementations of Diffie-Hellman key exchange based on the Kummer surface of Gaudry and Schost’s genus-2 curve targeting a 128-bit security level. We describe a single-core architecture for low-latency applications and a multi-core architecture for high-throughput applications. Synthesized on a Xilinx Zynq-7020 FPGA, our architectures perform a key exchange with lower latency and higher throughput than any other reported implementation using prime-field elliptic curves at the same security level. Our single-core architecture performs a scalar multiplication with a latency of 82 microseconds, while our multi-core architecture achieves a throughput of 91,226 scalar multiplications per second. Compared to similar implementations of Microsoft’s Fourℚ on the same FPGA, this translates to an improvement of 48% in latency and 40% in throughput for the single-core and multi-core architectures, respectively. Both our designs exhibit constant-time execution to thwart timing attacks, use the Montgomery ladder for improved resistance against SPA, and support a countermeasure against fault attacks.
2018
TCHES
Hardware Masking, Revisited 📺
Hardware masking schemes have shown many advances in the past few years. Through a series of publications their implementation cost has dropped significantly and flaws have been fixed where present. Despite these advancements, it seems that a limit has been reached when implementing masking schemes on FPGA platforms. Indeed, even with a correct transition from the masking scheme to the masking realization (i.e., when the implementation is not buggy), it has been shown that the implementation can still exhibit unexpected leakage, e.g., through variations in placement and routing. In this work, we show that the reason for such unexpected leakages is the violation of an underlying assumption made by all masking schemes, i.e., that the leakage of the circuit is a linear sum of leakages associated with each share. In addition to the theory of VLSI which supports our claim, we perform a wide range of experiments based on an FPGA to find out under what circumstances this causes a masked hardware implementation to show undesirable leakage. We further illustrate case studies, where publicly-known secure designs exhibit first-order leakage when being operated under certain conditions.
2018
TCHES
Mixing Additive and Multiplicative Masking for Probing Secure Polynomial Evaluation Methods 📺
Masking is a sound countermeasure to protect implementations of block-cipher algorithms against Side Channel Analysis (SCA). Currently, the most efficient masking schemes use Lagrange’s Interpolation Theorem in order to represent any S-box by a polynomial function over a binary finite field. Masking the processing of an S-box is then achieved by masking every operation involved in the evaluation of its polynomial representation. While the common approach requires the use of the well-known Ishai-Sahai-Wagner (ISW) scheme in order to secure this processing, there exist alternatives. In the particular case of power functions, Genelle, Prouff and Quisquater proposed an efficient masking scheme (GPQ). However, no generalization has been suggested for polynomial functions so far. In this paper, we solve the open problem of extending GPQ for polynomials, and we also solve the open problem of proving that both the original scheme and its variants for polynomials satisfy the t-SNI security definition. Our approach to extending GPQ is based on the cyclotomic method and results in an alternate cyclotomic method which is three times faster in practice than the original proposal in almost all scenarios we address. The best-known method for polynomial evaluation is currently CRV, which requires the use of the cyclotomic method for one of its steps. We also show how to plug our alternate cyclotomic approach into CRV and again provide an alternate approach that outperforms the original in almost all scenarios. We consider the masking of n-bit S-boxes for n ∈ [4;8] and in practice we get a 35% efficiency improvement for S-boxes with dimension n ∈ {5,7,8} and 25% for 6-bit S-boxes.
2018
TCHES
Smashing the Implementation Records of AES S-box 📺
The Canright S-box has been known as the most compact S-box design since its introduction back in CHES’05. Boyar and Peralta proposed logic-minimization heuristics that could reduce the gate count of the Canright S-box from 120 gates to 113 gates; however, synthesis results did not reflect much improvement. In CHES’15, Ueno et al. proposed an S-box that has a slightly higher area, but is significantly faster than the previous designs, hence it was the most efficient (measured by area×delay) S-box implementation to date. In this paper, we propose two new designs for the AES S-box. One design has a smaller implementation area than both the Canright and the 113-gate S-boxes. Hence, our first design is the smallest AES S-box to date, breaking the 13-year implementation record of Canright. The second design is faster and smaller than the Ueno S-box. Hence, our second design is both the fastest and the most efficient S-box design to date. While doing so, we also propose new logic-minimization heuristics that outperform the previous algorithms of Boyar-Peralta. Finally, we conduct an exhaustive evaluation of each and every block in the S-box circuit, using both structural and behavioral HDL modeling, to reach the optimum synergy between theoretical algorithms and technology-supported optimization tools. We show that involving the technology-supported CAD tools in the analysis leads to several counter-intuitive results.
2018
TCHES
High-Performance FV Somewhat Homomorphic Encryption on GPUs: An Implementation using CUDA 📺
Homomorphic encryption (HE) offers great capabilities that can solve a wide range of privacy-preserving computing problems. This tool allows anyone to process encrypted data, producing encrypted results that only the decryption key’s owner can decrypt. Although HE has been realized in several public implementations, its performance is quite demanding. The reason for this is attributed to the huge amount of computation required by secure HE schemes. In this work, we present a CUDA-based implementation of the Fan and Vercauteren (FV) Somewhat Homomorphic Encryption (SHE) scheme. We employ several algebraic tools such as the Chinese Remainder Theorem (CRT), Residue Number System (RNS) and Discrete Galois Transform (DGT) to accelerate and facilitate FV computation on GPUs. We also show how the entire FV computation can be done on the GPU without multi-precision arithmetic. We compare our GPU implementation with two mature state-of-the-art implementations: 1) Microsoft SEAL v2.3.0-4 and 2) NFLlib-FV. Our implementation outperforms them, achieving on average 5.37x, 7.37x, 22.22x, 5.11x and 13.18x (resp. 2.03x, 2.94x, 27.86x, 8.53x and 18.69x) speedups for key generation, encryption, decryption, homomorphic addition and homomorphic multiplication against SEAL-FVRNS (resp. NFLlib-FV).
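The RNS idea mentioned above is easy to demonstrate: a big integer becomes a vector of residues modulo pairwise coprime primes, arithmetic becomes independent component-wise work (which is what maps well to GPU threads), and the CRT reconstructs the result. A small Python sketch with illustrative moduli:

from math import prod

MODULI = [10007, 10009, 10037]            # pairwise coprime primes (illustrative)

def to_rns(x: int) -> list[int]:
    return [x % m for m in MODULI]

def rns_mul(a: list[int], b: list[int]) -> list[int]:
    return [(x * y) % m for x, y, m in zip(a, b, MODULI)]  # fully parallelizable

def from_rns(residues: list[int]) -> int:
    # Chinese Remainder Theorem reconstruction
    M = prod(MODULI)
    total = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # Mi^{-1} mod m
    return total % M

x, y = 123456, 654321
assert from_rns(rns_mul(to_rns(x), to_rns(y))) == x * y    # product fits below M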
2018
TCHES
Rhythmic Keccak: SCA Security and Low Latency in HW 📺
Glitches pose a serious problem when securing a cryptographic implementation in hardware. Several masking schemes have been proposed in the literature that provide security even in the presence of glitches. The key property that allows this protection was introduced in threshold implementations as non-completeness. We address crucial points to ensure correct compliance with this property, especially for low-latency implementations. Specifically, we first discuss the existence of a flaw in the DSD 2017 implementation of Keccak by Gross et al. that violates the non-completeness property, and propose a solution. We perform a side-channel evaluation on the first-order and second-order implementations of the proposed design, where no leakage is detected with up to 55 million traces. Then, we present a method to ensure a non-complete scheme of an unrolled implementation applicable to any order of security or algebraic degree of the shared function. By using this method we design a two-round unrolled first-order Keccak-
2018
TCHES
CacheQuote: Efficiently Recovering Long-term Secrets of SGX EPID via Cache Attacks 📺
Intel Software Guard Extensions (SGX) allows users to perform secure computation on platforms that run untrusted software. To validate that the computation is correctly initialized and that it executes on trusted hardware, SGX supports attestation providers that can vouch for the user’s computation. Communication with these attestation providers is based on the Enhanced Privacy ID (EPID) protocol, which not only validates the computation but is also designed to maintain the user’s privacy. In particular, EPID is designed to ensure that the attestation provider is unable to identify the host on which the computation executes. In this work we investigate the security of the Intel implementation of the EPID protocol. We identify an implementation weakness that leaks information via a cache side channel. We show that a malicious attestation provider can use the leaked information to break the unlinkability guarantees of EPID. We analyze the leaked information using a lattice-based approach for solving the hidden number problem, which we adapt to the zero-knowledge proof in the EPID scheme, extending prior attacks on signature schemes.
2018
TCHES
Practical CCA2-Secure and Masked Ring-LWE Implementation 📺
During the last years, public-key encryption schemes based on the hardness of ring-LWE have gained significant popularity. For real-world security applications assuming strong adversary models, a number of practical issues still need to be addressed. In this work we thus present an instance of ring-LWE encryption that is protected against active attacks (i.e., adaptive chosen-ciphertext attacks) and equipped with countermeasures against side-channel analysis. Our solution is based on a post-quantum variant of the Fujisaki-Okamoto (FO) transform combined with provably secure first-order masking. To protect the key and message during decryption, we developed a masked binomial sampler that secures the re-encryption process required by FO. Our work shows that CCA2-secure ring-LWE-based encryption can be achieved with reasonable performance on constrained devices, but it also stresses that the required transformation and the handling of decryption errors imply a performance overhead that has been overlooked by the community so far. With parameters providing 233 bits of quantum security, our implementation requires 4,176,684 cycles for encryption and 25,640,380 cycles for decryption with masking and hiding countermeasures on a Cortex-M4F. The first-order security of our masked implementation is also practically verified using the non-specific t-test evaluation methodology.
2018
TCHES
ExpFault: An Automated Framework for Exploitable Fault Characterization in Block Ciphers 📺
Malicious exploitation of faults for extracting secrets is one of the most practical and potent threats to modern cryptographic primitives. Interestingly, not every possible fault for a cryptosystem is maliciously exploitable, and evaluation of the exploitability of a fault is nontrivial. In order to devise precise defense mechanisms against such rogue faults, comprehensive knowledge is required about the exploitable part of the fault space of a cryptosystem. Unfortunately, the fault space is diversified and of formidable size even when a single cryptographic primitive is considered, and traditional manual fault analysis techniques may often fall short of practically covering such a fault space within reasonable time. An automation for analyzing individual fault instances for their exploitability is thus inevitable. Such an automation is supposed to work as the core engine for analyzing the fault spaces of cryptographic primitives. In this paper, we propose an automation for evaluating the exploitability status of fault instances from block ciphers, mainly in the context of Differential Fault Analysis (DFA) attacks. The proposed framework is generic and scalable, which are perhaps the two most important features for covering diversified fault spaces of formidable size originating from different ciphers. As a proof of concept, we reconstruct some known attack examples on AES and PRESENT using the framework and finally analyze the recently proposed cipher GIFT [BPP+17] for the first time. It is found that the secret key of GIFT can be uniquely determined with a single nibble fault instance injected at the beginning of the 25th round, with a reasonable computational complexity of 2^14.
2018
TCHES
Attacking GlobalPlatform SCP02-compliant Smart Cards Using a Padding Oracle Attack 📺
We describe in this paper how to perform a padding oracle attack against the GlobalPlatform SCP02 protocol. SCP02 is implemented in smart cards and used by transport companies, in the banking world and by mobile network operators (UICC/SIM cards). The attack allows an adversary to efficiently retrieve plaintext bytes from an encrypted data field. We provide results of our experiments done with 10 smart cards from six different card manufacturers, and show that, in our experimental setting, the attack is fully practical. Given that billions of SIM cards are produced every year, the number of affected cards, although difficult to estimate, is potentially high. To the best of our knowledge, this is the first successful attack against SCP02.
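For readers unfamiliar with the technique, the sketch below shows the generic CBC padding-oracle recovery loop (here against PKCS#7-style padding); the SCP02-specific details actually attacked in the paper, i.e. the cipher, padding rule and message layout, differ. The oracle callback is an assumed black box that answers whether a ciphertext decrypted to valid padding.

def recover_block(oracle, prev_block: bytes, target_block: bytes) -> bytes:
    """Recover the plaintext of one CBC block, byte by byte, given an
    oracle(iv, block) -> bool that reports padding validity."""
    size = len(target_block)
    intermediate = bytearray(size)            # D_K(target_block), found backwards
    for pad in range(1, size + 1):
        pos = size - pad
        for guess in range(256):
            forged = bytearray(size)
            forged[pos] = guess
            for i in range(pos + 1, size):    # force already-known bytes to pad
                forged[i] = intermediate[i] ^ pad
            if oracle(bytes(forged), target_block):
                intermediate[pos] = guess ^ pad
                break
    # a production attack must also rule out rare false positives at pad = 1
    return bytes(i ^ p for i, p in zip(intermediate, prev_block))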
2018
TCHES
Beetle Family of Lightweight and Secure Authenticated Encryption Ciphers 📺
This paper presents a lightweight, sponge-based authenticated encryption (AE) family called Beetle. When instantiated with the PHOTON permutation from CRYPTO 2011, Beetle achieves the smallest footprint—consuming only slightly more than 600 LUTs on FPGA while maintaining 64-bit security. This figure is significantly smaller than all known lightweight AE candidates, which consume more than 1,000 LUTs, including the latest COFB-AES from CHES 2017. In order to realize such a small hardware implementation, we equip Beetle with an “extremely tight” security bound. The trick is to use a combined feedback to create a difference between the ciphertext block and the rate part of the next feedback (in a traditional sponge these two values are the same). We are then able to show that Beetle is provably secure up to min{c − log r, b/2, r} bits, where b is the permutation size and r and c are parameters called the rate and capacity, respectively. The tight security bound allows us to select the smallest security parameters, which in turn result in the smallest footprint.
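The combined-feedback trick can be illustrated with a toy duplex-style loop. Everything below (the SHA-256-based "permutation", the byte rotation, the rate/capacity split) is an arbitrary stand-in and not Beetle's actual components; the only point is that the feedback written into the next state differs from the ciphertext block, whereas in a plain duplex sponge the two coincide.

import hashlib

RATE, CAP = 4, 12                              # toy rate/capacity in bytes

def permutation(state: bytes) -> bytes:        # placeholder, NOT PHOTON
    return hashlib.sha256(state).digest()[:RATE + CAP]

def shuffle(rate: bytes) -> bytes:             # any fixed bijection on the rate
    return rate[1:] + rate[:1]

def encrypt_blocks(state: bytes, blocks: list) -> list:
    cts = []
    for m in blocks:                           # each m is RATE bytes
        r, c = state[:RATE], state[RATE:]
        ct = bytes(a ^ b for a, b in zip(r, m))           # C = S_r xor M
        fb = bytes(a ^ b for a, b in zip(shuffle(r), m))  # feedback != C
        state = permutation(fb + c)
        cts.append(ct)
    return cts

print(encrypt_blocks(b"\x00" * (RATE + CAP), [b"test", b"data"]))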
2018
TCHES
Improved High-Order Conversion From Boolean to Arithmetic Masking 📺
Masking is a very common countermeasure against side-channel attacks. When combining Boolean and arithmetic masking, one must be able to convert between the two types of masking, and the conversion algorithm itself must be secure against side-channel attacks. An efficient high-order Boolean-to-arithmetic conversion scheme was recently described at CHES 2017, with complexity independent of the register size. In this paper we describe a simplified variant with fewer mask refreshes, and still with a proof of security in the ISW probing model. In practical implementations, our variant is roughly 25% faster.
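For intuition about the problem being solved, here is the classic first-order Boolean-to-arithmetic conversion due to Goubin (CHES 2001), the baseline that high-order schemes such as the one above generalize. It relies on the fact that (x' ^ u) - u is an affine function of u over GF(2), and never combines x' with r directly.

import secrets

W = 32
MOD = 1 << W

def bool_to_arith(x_masked: int, r: int) -> int:
    """Given Boolean shares (x' = x ^ r, r), return A such that x = A + r mod 2^W."""
    gamma = secrets.randbits(W)
    t = ((x_masked ^ gamma) - gamma) % MOD
    t ^= x_masked
    gamma ^= r                                  # gamma' = gamma ^ r
    a = ((x_masked ^ gamma) - gamma) % MOD
    return a ^ t                                # equals (x' ^ r) - r = x - r

x, r = 0xDEADBEEF, secrets.randbits(W)
a = bool_to_arith(x ^ r, r)
assert (a + r) % MOD == x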
2018
TCHES
Fault Attacks Made Easy: Differential Fault Analysis Automation on Assembly Code 📺
Over the past decades, fault injection attacks have been extensively studied due to their capability to efficiently break cryptographic implementations. Fault injection attack models are normally determined by analyzing the cipher structure and finding exploitable spots in non-linear and permutation layers. However, this level of abstraction is often too high to distinguish vulnerable parts of software implementations, due to specific operations and optimizations. On the other hand, manually analyzing the assembly code requires a non-negligible amount of time and expertise. In this paper, we propose an automated approach for analyzing cipher implementations in assembly. We represent the whole assembly program as a data flow graph so that the vulnerable spots can be found efficiently. Fault propagation is analyzed in a subgraph constructed from each vulnerable spot, allowing equations for Differential Fault Analysis (DFA) to be automatically generated. We have created a tool that implements our approach: DATAC – DFA Automation Tool for Assembly Code. We have successfully used this tool for attacking PRESENT-80, finding implementation-specific vulnerabilities that can be exploited in order to recover the last round key with 16 faults. Our results show that DATAC is useful in finding attack spots that are not visible from the cipher structure, but can be easily exploited when dealing with real-world implementations.
2018
TCHES
Linear Repairing Codes and Side-Channel Attacks 📺
To strengthen the resistance of countermeasures based on secret sharing, several works have suggested to use the scheme introduced by Shamir in 1978, which shares sensitive data via the evaluation of a random d-degree polynomial at n ≥ d + 1 public points. Applying the same principles used against classical Boolean sharing, all these works have assumed that the most efficient attack strategy was to exploit the minimum number of shares required to rebuild the sensitive value, which is d + 1 if the reconstruction is done with Lagrange’s interpolation. In this paper, we first highlight an important difference between Boolean and Shamir’s sharings which implies that, for some signal-to-noise ratios, it is more advantageous for the adversary to observe strictly more than d + 1 shares. We argue that this difference is related to the existence of so-called linear exact repairing codes, which come with reconstruction formulae that need (much) less information (counted in bits) than Lagrange’s interpolation. In particular, this result implies that the choice of the public points in Shamir’s sharing has an impact on the countermeasure strength, which confirms previous observations made by Wang et al. at CARDIS 2016 for the so-called inner product sharing, a generalization of Shamir’s scheme. As another contribution, we exhibit a positive impact of the existence of linear exact repairing schemes: we propose to use them to improve the state-of-the-art multiplication algorithms dedicated to Shamir’s sharing. We argue that the improvement can be effective when the multiplication operation in the sub-fields is at least two times smaller than that of the base field.
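As a refresher on the objects discussed above, a small Python sketch of Shamir's scheme: the secret is the constant term of a random degree-d polynomial, the shares are its evaluations at n ≥ d + 1 public points, and any d + 1 shares reconstruct the secret by Lagrange interpolation at 0. The field and evaluation points are arbitrary illustrative choices.

import secrets

P = 2**61 - 1                                  # prime field

def shamir_share(secret: int, d: int, points: list) -> list:
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(d)]
    def poly(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in points]

def lagrange_at_zero(shares: list) -> int:
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = shamir_share(31337, d=2, points=[1, 2, 3, 4, 5])
assert lagrange_at_zero(shares[:3]) == 31337   # any d + 1 = 3 shares suffice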
2018
TCHES
Leakage Detection with the χ2-Test 📺
We describe how Pearson’s χ2-test can be used as a natural complement to Welch’s t-test for black-box leakage detection. In particular, we show that by using these two tests in combination, we can mitigate some of the limitations due to the moment-based nature of existing detection techniques based on Welch’s t-test (e.g., for the evaluation of higher-order masked implementations with insufficient noise). We also show that Pearson’s χ2-test is naturally suited to analyzing threshold implementations with information lying in multiple statistical moments, and can be easily extended to a distinguisher for key-recovery attacks. As a result, we believe the proposed test and methodology are interesting complementary ingredients of the side-channel evaluation toolbox, for black-box leakage detection and non-profiled attacks, and as a preliminary before more demanding advanced analyses.
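A sketch of the detection flow on synthetic data (binning choices, trace counts and the simulated leak are illustrative, and scipy is assumed to be available): traces from two classes are histogrammed, and Pearson's χ2-test asks whether the two histograms come from the same distribution.

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
fixed = rng.normal(loc=0.02, scale=1.0, size=5000)    # class A, tiny simulated leak
random = rng.normal(loc=0.00, scale=1.0, size=5000)   # class B

bins = np.histogram_bin_edges(np.concatenate([fixed, random]), bins=9)
table = np.array([np.histogram(fixed, bins)[0], np.histogram(random, bins)[0]])

stat, pvalue, dof, _ = chi2_contingency(table)        # Pearson's chi-squared test
print(f"chi2 = {stat:.2f}, dof = {dof}, p = {pvalue:.3g}")
# a consistently tiny p-value flags leakage, complementing Welch's t-test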
2018
TCHES
SAEB: A Lightweight Blockcipher-Based AEAD Mode of Operation 📺
Lightweight cryptography for computationally constrained devices is actively studied. In contrast to the advances in lightweight block ciphers over the last decade, lightweight modes of operation are seemingly less mature, yet they have a large impact on performance. There is therefore a great demand for lightweight modes of operation, especially for authenticated encryption with associated data (AEAD). Among the many known properties of conventional modes of operation, the following four are essential for constrained devices: (1) minimum state size: the state size equals the block size of the block cipher; (2) inverse free: no block cipher decryption is needed; (3) XOR only: only XOR is needed in addition to the block cipher encryption; (4) online: a data block is processed only once. Properties 1 and 4 contribute to small memory usage, and properties 2 and 3 contribute to a small program/circuit footprint. On top of the above properties, a fifth property regarding associated data (AD) is also important for performance: (5) efficient handling of static AD: static AD can be precomputed. We design a lightweight blockcipher-based AEAD mode of operation called SAEB: to the best of our knowledge, the first mode of operation that satisfies all five properties. The performance of SAEB is evaluated on various software and hardware platforms. The evaluation results show that SAEB outperforms conventional blockcipher-based AEAD modes of operation in various performance metrics for lightweight cryptography.
2018
TOSC
Towards Low Energy Stream Ciphers 📺
Energy optimization is an important design aspect of lightweight cryptography. Since low-energy ciphers drain less battery, they are invaluable components of devices that operate on a tight energy budget, such as handheld devices or RFID tags. At Asiacrypt 2015, Banik et al. presented the block cipher family Midori, which was designed to optimize the energy consumed per encryption and which reduces energy consumption by more than 30% compared to previous block ciphers. However, if one has to encrypt/decrypt longer streams of data, i.e. for bulk data encryption/decryption, it is expected that a stream cipher should perform even better than block ciphers in terms of the energy required to encrypt. In this paper, we address the question of designing low-energy stream ciphers. To this end, we analyze the impact of common stream cipher design components on energy consumption. Based on this, we give arguments for why stream ciphers indeed allow encrypting long data streams with less energy than block ciphers, and we validate our findings by implementations. Afterwards, we use the analysis results to identify energy-minimizing design principles for stream ciphers.
2018
TOSC
ShiftRows Alternatives for AES-like Ciphers and Optimal Cell Permutations for Midori and Skinny 📺
We study possible alternatives for ShiftRows to be used as cell permutations in AES-like ciphers. As observed during the design process of the block cipher Midori, when using a matrix with a non-optimal branch number for the MixColumns operation, the choice of the cell permutation, i.e., an alternative for ShiftRows, can actually improve the security of the primitive. In contrast, when using an MDS matrix it is known that one cannot increase the minimum number of active S-boxes by deviating from the ShiftRows-type permutation. However, finding the optimal choice for the cell permutation for a given, non-optimal, MixColumns operation is a highly non-trivial problem. In this work, we propose techniques to speed up the search for the optimal cell permutations significantly. As case studies, we apply those techniques to Midori and Skinny and provide possible alternatives for their cell permutations. We finally state an easy-to-verify sufficient condition on a cell permutation, to be used as an alternative in Midori, that attains a high number of active S-boxes and thus provides good resistance against differential and linear attacks.
2018
TOSC
MDS Matrices with Lightweight Circuits 📺
MDS matrices are an important element for the design of block ciphers such as the AES. In recent years, there has been a lot of work on the construction of MDS matrices with a low implementation cost, in the context of lightweight cryptography. Most of the previous efforts focused on local optimization, constructing MDS matrices with coefficients that can be efficiently computed. In particular, this led to a matrix with a direct xor count of only 106, while a direct implementation of the MixColumn matrix of the AES requires 152 bitwise xors. More recently, techniques based on global optimization have been introduced, where the implementation can reuse some intermediate variables. In particular, Kranz et al. used optimization tools to find a good implementation from the description of an MDS matrix. They have lowered the cost of implementing the MixColumn matrix to 97 bitwise xors, and proposed a new matrix with only 72 bitwise xors, the lowest cost known so far. In this work we propose a different approach to global optimization. Instead of looking for an optimized circuit of a given matrix, we run a search through a space of circuits, to find optimal circuits yielding MDS matrices. This results in MDS matrices with an even lower cost, with only 67 bitwise xors.
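The difference between the local metric (direct xor count) and global optimization is visible even on a two-row toy matrix; the circuit search described above automates exactly this kind of intermediate reuse at scale.

def direct(a: int, b: int, c: int):
    y0 = a ^ b                 # 1 XOR
    y1 = a ^ b ^ c             # 2 more XORs, recomputed from scratch
    return y0, y1              # direct xor count: 3

def shared(a: int, b: int, c: int):
    t = a ^ b                  # 1 XOR, computed once
    return t, t ^ c            # 1 more XOR; global count: 2

assert direct(1, 2, 3) == shared(1, 2, 3)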
2018
TOSC
Separable Statistics and Multidimensional Linear Cryptanalysis 📺
Multidimensional linear cryptanalysis of block ciphers is improved in this work by introducing a number of new ideas. Firstly, formulae are given to compute approximate multidimensional distributions of the encryption algorithm’s internal bits. Conventional statistics like the LLR (Logarithmic Likelihood Ratio) are not suited to Matsui’s Algorithm 2 for large-dimension data, as the observation may depend on too many cipher key bits. So, secondly, a new statistic which reflects the structure of the cipher round is constructed instead. Thirdly, computing the statistic values that will fall into a critical region is presented as an optimisation problem for which an efficient algorithm is suggested. The algorithm works much faster than brute-forcing all relevant key bits to compute the statistic. An attack on 16-round DES was implemented. We obtain an improvement over Matsui’s attack on DES in data and time complexity while keeping the success probability the same. With 2^41.81 plaintext blocks and success rate 0.83 (computed theoretically), we found 2^41.46 key candidates for the 56-bit DES key (close to the theoretically predicted number 2^41.81). The search tree for computing the statistic values which fall into the critical region incorporated 2^45.45 nodes in the experiment, which is at least theoretically inferior to the final brute force. To get success probability 0.85, which is a fairer comparison to Matsui’s results, we would need 2^41.85 data and would have to brute-force 2^41.85 key candidates. That compares favourably with the 2^43 achieved by Matsui.
2018
TOSC
Clustering Related-Tweak Characteristics: Application to MANTIS-6 📺
The TWEAKEY/STK construction is an increasingly popular approach for designing tweakable block ciphers that notably uses a linear tweakey schedule. Several recent attacks have analyzed the implications of this approach for differential cryptanalysis and other attacks that can take advantage of related tweakeys. We generalize the clustering approach of a recent differential attack on the tweakable block cipher MANTIS-5 and describe a tool for efficiently finding and evaluating such clusters. More specifically, we consider the set of all differential characteristics compatible with a given truncated characteristic, tweak difference, and optional constraints for the differential. We refer to this set as a semi-truncated characteristic and estimate its probability by analyzing the distribution of compatible differences at each step. We apply this approach to find a semi-truncated differential characteristic for MANTIS-6 with probability about 2^−67.73 and derive a key-recovery attack with a complexity of about 2^55.09 chosen-plaintext queries and 2^55.52 computations. The data-time product is 2^110.61 << 2^126.
2018
TOSC
Mixture Differential Cryptanalysis: a New Approach to Distinguishers and Attacks on round-reduced AES 📺
At Eurocrypt 2017, the first secret-key distinguisher for 5-round AES - based on the “multiple-of-8” property - was presented. Although it allows one to distinguish a random permutation from an AES-like one, it seems rather hard to exploit such a distinguisher in a key-recovery attack better than brute force. In this paper we introduce “Mixture Differential Cryptanalysis” on round-reduced AES-like ciphers, a way to translate the (complex) “multiple-of-8” 5-round distinguisher into a simpler and more convenient one (though on a smaller number of rounds). Given a pair of chosen plaintexts, the idea is to construct new pairs of plaintexts by mixing the generating variables of the original pair. Here we theoretically prove that for 4-round AES the corresponding ciphertexts of the original pair of plaintexts lie in a particular subspace if and only if the corresponding ciphertext pairs of the new plaintext pairs have the same property. Such a secret-key distinguisher - which is independent of the secret key, of the details of the S-box, and of the MixColumns matrix (except for the branch number, equal to 5) - can be used as a starting point to set up new key-recovery attacks on round-reduced AES. Besides a theoretical explanation, we also provide a practical verification both of the distinguisher and of the attack.
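A minimal sketch of the mixing step described above (illustrative, not the AES-specific subspace argument): view each plaintext as a tuple of generating variables (e.g., the four diagonals of an AES state) and derive new pairs from an original pair by exchanging subsets of coordinates.

def mixture_pairs(p, q):
    # swap every nonempty proper subset of coordinates between p and q
    n = len(p)
    pairs = []
    for mask in range(1, 2 ** n - 1):
        a = tuple(q[i] if mask >> i & 1 else p[i] for i in range(n))
        b = tuple(p[i] if mask >> i & 1 else q[i] for i in range(n))
        pairs.append((a, b))
    return pairs

p = ("A0", "A1", "A2", "A3")  # hypothetical per-diagonal values
q = ("B0", "B1", "B2", "B3")
for a, b in mixture_pairs(p, q)[:3]:
    print(a, b)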
2018
TOSC
Cryptanalysis of AES-PRF and Its Dual 📺
A dedicated pseudorandom function (PRF) called AES-PRF was proposed by Mennink and Neves at FSE 2018 (ToSC 2017, Issue 3). AES-PRF is obtained from AES by using the output of the 5-th round as a feed-forward to the output state. This paper presents an extensive security analysis of AES-PRF and its variants. Specifically, we consider unbalanced variants where the output of the s-th round is used as the feed-forward. We also analyze the security of “dual” constructions of the unbalanced variants, where the input state is used as the feed-forward to the output of the s-th round. We apply impossible differential, zero-correlation linear, traditional differential, zero-correlation linear distinguishing, and meet-in-the-middle attacks to these PRFs and their reduced-round versions. We show that AES-PRF is broken whenever s ≤ 2 or s ≥ 6, or when reduced to 7 rounds, and that Dual-AES-PRF is broken whenever s ≤ 4 or s ≥ 8. Our results on AES-PRF improve the initial security evaluation by the designers in various ways, and our results on Dual-AES-PRF give the first insight into its security.
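A minimal structural sketch of the feed-forward just described, with a toy round function standing in for real AES rounds (this is only the skeleton of the construction, not an AES-PRF implementation): run all rounds, remember the state after round s, and XOR it into the final state.

MASK = 0xFFFFFFFF

def toy_round(state, round_key):
    # placeholder round function; any nonlinear mixing suffices for the sketch
    return ((state * 0x9E3779B1) ^ round_key) & MASK

def feed_forward_prf(x, round_keys, s):
    state, mid = x, None
    for i, rk in enumerate(round_keys, start=1):
        state = toy_round(state, rk)
        if i == s:
            mid = state          # output of the s-th round
    return state ^ mid           # feed-forward into the final output

round_keys = [0x1F2E3D4C + i for i in range(10)]  # hypothetical round keys
print(hex(feed_forward_prf(0xDEADBEEF, round_keys, s=5)))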
2018
TOSC
SUNDAE: Small Universal Deterministic Authenticated Encryption for the Internet of Things 📺
Lightweight cryptography was developed in response to the increasing need to secure devices for the Internet of Things. After significant research effort, many new block ciphers have been designed targeting lightweight settings, optimizing efficiency metrics which conventional block ciphers did not. However, block ciphers must be used in modes of operation to achieve more advanced security goals such as data confidentiality and authenticity, a research area given relatively little attention in the lightweight setting. We introduce a new authenticated encryption (AE) mode of operation, SUNDAE, specially targeted at constrained environments. SUNDAE is smaller in implementation area than other known lightweight modes such as CLOC, JAMBU, and COFB. However, unlike these modes, SUNDAE is designed as a deterministic authenticated encryption (DAE) scheme, meaning it provides maximal security in settings where proper randomness is hard to generate or where secure storage must be minimized due to expense. Unlike other DAE schemes, such as GCM-SIV, SUNDAE can be implemented efficiently on constrained devices as well as on the servers communicating with those devices. We prove SUNDAE secure relative to its underlying block cipher, and provide an extensive implementation study, with results in both software and hardware, demonstrating that SUNDAE offers improved compactness and power consumption in hardware compared to other lightweight AE modes, while simultaneously offering performance comparable to GCM-SIV on parallel high-end platforms.
2018
TOSC
Double-block Hash-then-Sum: A Paradigm for Constructing BBB Secure PRF 📺
SUM-ECBC (Yasuda, CT-RSA 2010) is the first beyond-birthday-bound (BBB) secure block-cipher-based deterministic MAC. After this work, some more BBB secure deterministic MACs were proposed, namely PMAC_Plus (Yasuda, CRYPTO 2011), 3kf9 (Zhang et al., ASIACRYPT 2012) and LightMAC_Plus (Naito, ASIACRYPT 2017). In this paper, we abstract out the inherent design principle of all these BBB secure MACs and present a generic design paradigm for constructing a BBB secure pseudorandom function, namely Double-block Hash-then-Sum, or DbHtS in short. A DbHtS construction, as the name implies, computes a double-block hash on the message and then sums the encrypted outputs of the two hash blocks. Our result shows that if the underlying hash function meets certain security requirements (namely, low cover-free and block-wise universal advantages), the DbHtS construction provides 2n/3-bit security. We demonstrate the applicability of our result by instantiating all the existing beyond-birthday secure deterministic MACs (e.g., SUM-ECBC, PMAC_Plus, 3kf9, LightMAC_Plus) as well as a simple two-keyed variant of each of them and some algebraic-hash-based constructions.
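A minimal sketch of the DbHtS shape (with toy placeholder primitives, not any of the concrete MACs above): hash the message into two n-bit blocks, encrypt each block, and XOR the results.

MASK = 0xFFFFFFFF

def toy_block_cipher(key, block):       # stand-in for a real block cipher
    return ((block ^ key) * 0x85EBCA77) & MASK

def toy_double_block_hash(key, msg):    # stand-in 2n-bit hash (Sigma1, Sigma2)
    s1, s2 = key, ~key & MASK
    for byte in msg:
        s1 = ((s1 ^ byte) * 0xC2B2AE35) & MASK
        s2 = ((s2 + byte) * 0x27D4EB2F) & MASK
    return s1, s2

def dbhts_mac(hash_key, cipher_key, msg):
    sigma1, sigma2 = toy_double_block_hash(hash_key, msg)
    return toy_block_cipher(cipher_key, sigma1) ^ toy_block_cipher(cipher_key, sigma2)

print(hex(dbhts_mac(0x12345678, 0x9ABCDEF0, b"attack at dawn")))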
2018
TOSC
More Accurate Differential Properties of LED64 and Midori64 📺
In differential cryptanalysis, a differential is generally more valuable than any single trail belonging to it. The traditional way to compute the probability of a differential is to sum the probabilities of all trails within it. An automatic tool for the search of differentials based on Mixed Integer Linear Programming (MILP) has been proposed and accomplishes the task of finding multiple trails of a given differential. The question is whether this traditional evaluation of a differential's probability is reliable. In this paper, we focus on two lightweight block ciphers, LED64 and Midori64, and give more accurate estimations of differential probability that take the key schedule into account. Firstly, an automated tool based on the Boolean Satisfiability Problem (SAT) is put forward to accomplish the automatic search of differentials for ciphers with S-boxes and is applied to LED64 and Midori64. Secondly, we provide an automatic approach to detect the right pairs following a given differential, which can be exploited to calculate the differential property. Applying this technique to the STEP function of LED64, we discover some differentials with enhanced probability. As a result, previous attacks relying on high-probability differentials can definitely be improved. Thirdly, we present a method to compute an upper bound on the weak-key ratio for a given differential, which is utilised to analyse 4-round differentials of Midori64. We detect two differentials whose weak-key ratios are much lower than the expected 50%: for more than 78% of the keys, these two differentials are in fact impossible. The idea of estimating an upper bound on the weak-key ratio can be employed for other ciphers and allows us to launch differential attacks more reliably. Finally, we introduce how to compute the enhanced differential probability and evaluate the fraction of keys achieving the improved probability. Such a property may enable an efficient weak-key attack. For a 4-round differential of Midori64, we obtain an improved differential property for a portion of the keys.
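A minimal sketch of the traditional evaluation questioned above: estimate a differential's probability by summing the probabilities 2^-w of all trails of weight w found inside it.

import math

def differential_weight(trail_weights):
    # aggregate weight: -log2 of the summed trail probabilities
    return -math.log2(sum(2.0 ** (-w) for w in trail_weights))

# Hypothetical trail weights returned by an automated search:
weights = [32, 34, 34, 36, 36, 36]
print(differential_weight(weights))  # below 32: the differential beats its best trail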
2018
TOSC
Cryptanalysis of Reduced round SKINNY Block Cipher 📺
SKINNY is a family of lightweight tweakable block ciphers designed to have the smallest hardware footprint. In this paper, we present zero-correlation linear approximations and related-tweakey impossible differential characteristics for different versions of SKINNY. We utilize Mixed Integer Linear Programming (MILP) to search for all zero-correlation linear distinguishers for all variants of SKINNY, where the longest distinguisher found reaches 10 rounds. Using a 9-round characteristic, we present 14- and 18-round zero-correlation attacks on SKINNY-64-64 and SKINNY-64-128, respectively. Also, for SKINNY-n-n and SKINNY-n-2n, we construct 13- and 15-round related-tweakey impossible differential characteristics, respectively. Utilizing these characteristics, we propose a 23-round related-tweakey impossible differential key-recovery attack on SKINNY-n-2n and a 19-round attack on SKINNY-n-n. To the best of our knowledge, the zero-correlation characteristics presented in this paper are the first attempt to investigate the security of SKINNY against this attack, and our results on the related-tweakey impossible differential attack are the best reported ones.
2018
TOSC
Cryptanalysis of Low-Data Instances of Full LowMCv2 📺
LowMC is a family of block ciphers designed for low multiplicative complexity. The specification allows a large variety of instantiations, differing in block size, key size, number of S-boxes applied per round, and allowed data complexity. The number of rounds deemed secure is determined by evaluating a number of attack vectors and taking the number of rounds still secure against the best of these. In this paper, we demonstrate that the attacks considered by the designers of LowMC in version 2 of the round formula were not sufficient to fend off all possible attacks. For instantiations of LowMC in one of its most useful settings, namely with few S-boxes applied per round and only low allowed data complexities, efficient attacks based on difference enumeration techniques can be constructed. We show that it is most effective to consider tuples of differences instead of simple differences, both to increase the range of the distinguishers and to enable key-recovery attacks. All applications of LowMC we are aware of, including signature schemes like Picnic and more recent (ring/group) signature schemes, use version 3 of the round formula for LowMC, which already takes our attack into account.
2018
TOSC
Cube-Attack-Like Cryptanalysis of Round-Reduced Keccak Using MILP 📺
Cube-attack-like cryptanalysis on round-reduced Keccak was proposed by Dinur et al. at EUROCRYPT 2015. It recovers the key in two phases: a preprocessing phase that precomputes a look-up table, and an online phase that queries the output and obtains the cube sum with which the right key can be retrieved by looking up the precomputed table. It was shown that such attacks are efficient specifically for Keccak-based constructions with a small nonce or message block size. In this paper, we provide a mixed integer linear programming (MILP) model for cube-attack-like cryptanalysis on keyed Keccak, which does not impose any unnecessary constraints on cube variables and finds almost optimal cubes by balancing the two phases of the attack. Our model is applied to Ketje Jr, Ketje Sr, a Xoodoo-based authenticated encryption scheme, and Keccak-MAC-512, all of which have a relatively small nonce or message block size. As a result, the time complexities of 5-round attacks on Ketje Jr and 7-round attacks on Ketje Sr can be improved significantly. Meanwhile, 6-round attacks, one more round than the previous best attack, are possible if the key size of Ketje V1 (V2) is reduced to 72 (80) bits. For the Xoodoo-based AE in Ketje style, the attack reaches 6 rounds. Additionally, a 7-round attack on Keccak-MAC-512 is achieved. To verify the correctness of our attacks, a 5-round attack on Ketje V1 is implemented and tested practically. We note that this work does not threaten the security of any Keccak-based construction.
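A minimal sketch of the cube-sum computation at the core of such attacks (generic, not the MILP model): fix the non-cube variables, let the cube variables range over all assignments, and XOR all outputs together.

from itertools import product

def cube_sum(f, cube_positions, fixed):
    acc = 0
    for assignment in product([0, 1], repeat=len(cube_positions)):
        x = list(fixed)
        for pos, bit in zip(cube_positions, assignment):
            x[pos] = bit
        acc ^= f(x)
    return acc

# Toy Boolean function of 4 variables (hypothetical):
f = lambda x: (x[0] & x[1]) ^ (x[1] & x[2] & x[3]) ^ x[3]
# Cube over x1, x2 with x0 = 0, x3 = 1; the sum equals the superpoly value.
print(cube_sum(f, cube_positions=[1, 2], fixed=[0, 0, 0, 1]))  # prints 1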
2018
TOSC
Conditional Linear Cryptanalysis – Cryptanalysis of DES with Less Than 2^42 Complexity 📺
In this paper we introduce a new extension of linear cryptanalysis that may reduce the complexity of attacks by conditioning linear approximations on other linear approximations. We show that the bias of some linear approximations may increase under such conditions, so that after discarding the known plaintexts that do not satisfy the conditions, the bias of the remaining known plaintexts increases. We show that this extension can lead to improved attacks, which may require fewer known plaintexts and less analysis time. We present several types of such conditions, including one that is especially useful for the analysis of Feistel ciphers. We exemplify the usage of such conditions by a careful application of our extension to Matsui's attack on the full 16-round DES, which succeeds in reducing the complexity of the best attack on DES to less than 2^42. We programmed a test implementation of our attack and verified our claimed results with a large number of runs. We also introduce a new type of approximation, which we call scattered approximations, and discuss their applications.
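A minimal illustration of the conditioning idea on toy data (ours, unrelated to the DES analysis): measure the bias of a target parity over all texts, then again over only the texts satisfying a conditioning parity, and observe the larger magnitude.

import random

def bias(samples, approx):
    hits = sum(approx(x, y) == 0 for x, y in samples)
    return hits / len(samples) - 0.5

random.seed(1)

def toy_enc(x):
    # hypothetical toy "cipher": the low-bit parity only holds when bit 1 of x is 0
    y = x ^ 0b1011
    if x & 0b10:
        y ^= random.getrandbits(1)   # noise destroys the approximation
    return y

data = [(x, toy_enc(x)) for x in (random.getrandbits(4) for _ in range(4000))]
approx = lambda x, y: (x ^ y) & 1                 # target linear approximation
cond = [(x, y) for x, y in data if not x & 0b10]  # conditioning approximation

print(abs(bias(data, approx)))   # modest bias magnitude over all data (about 0.25)
print(abs(bias(cond, approx)))   # maximal bias magnitude on the conditioned subset (0.5)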
2018
TOSC
Generating Graphs Packed with Paths: Estimation of Linear Approximations and Differentials 📺
When designing a new symmetric-key primitive, the designer must show resistance to known attacks. Perhaps most prominent among these are linear and differential cryptanalysis. However, it is notoriously difficult to accurately demonstrate, e.g., a block cipher's resistance to these attacks, and thus most designers resort to deriving bounds on the linear correlations and differential probabilities of their design. On the other side of the spectrum, the cryptanalyst is interested in accurately assessing the strength of a linear or differential attack. While several tools have been developed to search for optimal linear and differential trails, e.g., MILP- and SAT-based methods, only a few approaches specifically try to find as many trails of a single approximation or differential as possible. This can result in an overestimate of a cipher's resistance to linear and differential attacks, as was for example the case for PRESENT. In this work, we present a new algorithm for linear and differential trail search. The algorithm represents the problem of estimating approximations and differentials as the problem of finding many long paths through a multistage graph. We demonstrate that this approach allows us to find a very large number of good trails for each approximation or differential. Moreover, we show how the algorithm can be used to efficiently estimate the key-dependent correlation distribution of a linear approximation, facilitating advanced linear attacks. We apply the algorithm to 17 different ciphers, and present new and improved results on several of these.
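A minimal sketch of the multistage-graph view (illustrative, not the paper's algorithm): stages correspond to rounds, vertices to intermediate differences or masks, edge weights to one-round probabilities, and the estimate is the total weight of all source-to-sink paths, computed by a forward pass.

def total_path_weight(stages, src, dst):
    # stages: one dict {(u, v): prob} per round transition
    reach = {src: 1.0}
    for edges in stages:
        nxt = {}
        for (u, v), p in edges.items():
            if u in reach:
                nxt[v] = nxt.get(v, 0.0) + reach[u] * p
        reach = nxt
    return reach.get(dst, 0.0)

# Hypothetical one-round transitions over two vertices "a" and "b":
r = {("a", "a"): 0.25, ("a", "b"): 0.0625, ("b", "a"): 0.0625, ("b", "b"): 0.25}
print(total_path_weight([r, r, r], "a", "a"))  # sums over all 3-round paths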
2018
TOSC
On the Boomerang Uniformity of Cryptographic Sboxes 📺
The boomerang attack is a cryptanalysis technique against block ciphers which combines two differentials, one for the upper part and one for the lower part of the cipher. The dependency between these two differentials then highly affects the complexity of the attack and all its variants. Recently, at Eurocrypt'18, Cid et al. introduced a new tool, called the Boomerang Connectivity Table (BCT), which simplifies this complexity analysis by storing and unifying the different switching probabilities of the cipher's S-box in one table. In this seminal paper a brief analysis of the properties of these tables is provided and some open questions are raised. In particular, it is asked whether S-boxes with optimal BCTs exist for even dimensions, where optimal means that the maximal value in the BCT equals the lowest known differential uniformity. When the dimension is even and differs from 6, such optimal S-boxes correspond to permutations such that the maximal value in their DDT and in their BCT equals 4 (unless APN permutations for such dimensions exist). We provide in this work a more in-depth analysis of Boomerang Connectivity Tables by studying differentially 4-uniform S-boxes more closely. We first completely characterize the BCT of all differentially 4-uniform permutations of 4 bits and then study these objects for some cryptographically relevant families of S-boxes, such as the inverse function and quadratic permutations. These two families provide us with the first examples of differentially 4-uniform S-boxes optimal against boomerang attacks for an even number of variables, answering the above open question.
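Since the definition is short, here is a direct sketch of computing a BCT and the derived boomerang uniformity: BCT[a][b] counts the x with S^-1(S(x) ^ b) ^ S^-1(S(x ^ a) ^ b) = a, and the uniformity is the maximum over nonzero (a, b).

def bct(sbox):
    n = len(sbox)
    inv = [0] * n
    for x, y in enumerate(sbox):
        inv[y] = x
    table = [[0] * n for _ in range(n)]
    for a in range(n):
        for b in range(n):
            for x in range(n):
                if inv[sbox[x] ^ b] ^ inv[sbox[x ^ a] ^ b] == a:
                    table[a][b] += 1
    return table

def boomerang_uniformity(table):
    n = len(table)
    return max(table[a][b] for a in range(1, n) for b in range(1, n))

# Example: the 4-bit PRESENT S-box, a differentially 4-uniform permutation.
PRESENT = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
           0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
print(boomerang_uniformity(bct(PRESENT)))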
2018
TOSC
Lightweight and Side-channel Secure 4 × 4 S-Boxes from Cellular Automata Rules 📺
This work focuses on side-channel resilient design strategies for symmetric-key cryptographic primitives targeting lightweight applications. In light of NIST's lightweight cryptography project, design choices for block ciphers must consider not only security against traditional cryptanalysis, but also side-channel security, while adhering to low area and power requirements. In this paper, we explore design strategies for substitution-permutation network (SPN)-based block ciphers that make them amenable to low-cost threshold implementations (TI) - a provably secure strategy against side-channel attacks. The core building blocks for our strategy are cryptographically optimal 4×4 S-boxes, implemented via repeated iterations of simple cellular automata (CA) rules. We present highly optimized TI circuits for such S-boxes that consume nearly 40% less area and power compared to popular lightweight S-boxes such as those of PRESENT and GIFT. We validate our claims via implementation results on ASIC using 180 nm technology. We also present a comparison of TI circuits for two popular lightweight linear diffusion layer choices - bit permutations and MixColumns using almost-maximum-distance-separable (almost-MDS) matrices. We finally illustrate design paradigms that combine the aforementioned TI circuits for S-boxes and diffusion layers to obtain fully side-channel secure SPN block cipher implementations with low area and power requirements.
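A minimal sketch of how a 4-bit mapping can be generated by iterating a CA rule (the rule number and iteration count below are hypothetical, not the paper's choices): each cell is updated from its 3-cell neighbourhood with cyclic boundary conditions, and only rules and iteration counts yielding bijective, cryptographically strong maps would be kept.

def ca_step(state, rule):
    # state: 4-bit integer; rule: 8-bit truth table indexed by (left, self, right)
    bits = [(state >> i) & 1 for i in range(4)]
    out = 0
    for i in range(4):
        nb = (bits[(i + 1) % 4] << 2) | (bits[i] << 1) | bits[(i - 1) % 4]
        out |= ((rule >> nb) & 1) << i
    return out

def ca_sbox(rule, iterations):
    table = []
    for x in range(16):
        for _ in range(iterations):
            x = ca_step(x, rule)
        table.append(x)
    return table

sbox = ca_sbox(rule=0x5A, iterations=3)  # hypothetical parameters
print(sbox, "bijective:", len(set(sbox)) == 16)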
2018
TOSC
The design of Xoodoo and Xoofff 📺
This paper presents Xoodoo, a 48-byte cryptographic permutation with excellent propagation properties. Its design approach is inspired by Keccak-p, while it is dimensioned like Gimli for efficiency on low-end processors. The structure consists of three planes of 128 bits each, which interact per 3-bit columns through mixing and nonlinear operations, and which otherwise move as three independent rigid objects. We analyze its differential and linear propagation properties and, in particular, prove lower bounds on the weight of trails using the tree search-based technique of Mella et al. (ToSC 2017). Xoodoo’s primary target application is in the Farfalle construction that we instantiate for the doubly-extendable cryptographic keyed (or deck) function Xoofff. Combining a relatively narrow permutation with the parallelism of Farfalle results in very efficient schemes on a wide range of platforms, from low-end devices to high-end processors with vector instructions.
2018
TOSC
Adiantum: length-preserving encryption for entry-level processors 📺
We present HBSH, a simple construction for tweakable length-preserving encryption which supports the fastest options for hashing and stream encryption on processors without AES or other crypto instructions, with a provable quadratic advantage bound. Our composition Adiantum uses NH, Poly1305, XChaCha12, and a single AES invocation. On an ARM Cortex-A7 processor, Adiantum decrypts 4096-byte messages at 10.6 cycles per byte, over five times faster than AES-256-XTS, with a constant-time implementation. We also define HPolyC, which is simpler and has excellent key agility, at 13.6 cycles per byte.
2018
TOSC
Generalized Nonlinear Invariant Attack and a New Design Criterion for Round Constants 📺
The nonlinear invariant attack was introduced at ASIACRYPT 2016 by Todo et al. The attack has received extensive attention from the cryptographic community due to its practical application to the full-round block ciphers SCREAM, iSCREAM, and Midori64. However, the attack heavily relies on the choice of round constants and becomes inefficient when these constants affect the so-called nonlinear invariants nonlinearly. In this article, to eliminate the impact of the round constants, we propose a generalized nonlinear invariant attack which uses a pair of constants in the input of the nonlinear invariants. The efficiency of this extended framework is practically confirmed by mounting a distinguishing attack on a variant of the full-round iSCREAM cipher under a class of 2^80 weak keys. The considered variant of iSCREAM is, however, resistant against the nonlinear invariant attack of Todo et al. Furthermore, we investigate the resistance of block ciphers against generalized nonlinear invariant attacks with respect to the choice of round constants in an extended framework. We introduce the useful concept of closed-loop invariants of the substitution box (S-box) and show that the choice of robust round constants is closely related to the existence of linear structures of the closed-loop invariants of the substitution layer. In particular, we demonstrate that the design criteria for round constants in Beierle et al.'s work at CRYPTO 2017 are not an optimal strategy. The round constants selected using this method may induce certain weaknesses that can be exploited in our generalized nonlinear invariant attack model. This scenario is efficiently demonstrated in the case of a slightly modified variant of the Midori64 block cipher.
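A minimal sketch of the basic notion behind the attack (a checker for plain nonlinear invariants, not the generalized framework): a Boolean function g is a nonlinear invariant for a layer F if g(F(x)) XOR g(x) equals the same constant for every input x.

def is_nonlinear_invariant(g, F, n):
    const = g(F(0)) ^ g(0)
    return all(g(F(x)) ^ g(x) == const for x in range(2 ** n))

# Hypothetical 4-bit permutation and quadratic invariant candidate:
S = [0x0, 0x3, 0x5, 0x6, 0xA, 0x9, 0xF, 0xC,
     0x1, 0x2, 0x4, 0x7, 0xB, 0x8, 0xE, 0xD]
g = lambda x: (((x >> 3) & (x >> 2)) ^ x) & 1   # quadratic Boolean function
print(is_nonlinear_invariant(g, lambda x: x, 4))     # identity layer: True
print(is_nonlinear_invariant(g, lambda x: S[x], 4))  # this S-box layer: False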
2018
TOSC
Nonlinear Approximations in Cryptanalysis Revisited 📺
This work studies deterministic and non-deterministic nonlinear approximations for the cryptanalysis of block ciphers and cryptographic permutations and embeds them into the well-understood framework of linear cryptanalysis. For a deterministic (i.e., with correlation ±1) nonlinear approximation we show that in many cases such a nonlinear approximation implies the existence of a highly biased linear approximation. For non-deterministic nonlinear approximations, by transforming the cipher under consideration, conjugating each keyed instance with a fixed permutation, we are able to transfer many methods from linear cryptanalysis to the nonlinear case. Using this framework we show in particular that there exist ciphers for which some transformed versions are significantly weaker with regard to linear cryptanalysis than their original counterparts.
2018
TOSC
New Yoyo Tricks with AES-based Permutations 📺
At Asiacrypt 2017, Rønjom et al. reported some interesting generic properties of SPNs, leading to what they call the Yoyo trick, and applied it to find the most efficient distinguishers on AES. In this work, we explore the Yoyo idea for distinguishing public permutations for the first time. We introduce the notion of a nested zero difference pattern, which extends the Yoyo idea and helps to compose it with improbable and impossible differential strategies to penetrate a higher number of rounds. We devise a novel inside-out application of Yoyo which enables us to start the Yoyo game from an internal round. As an application, we investigate the AES-based public permutation AESQ used inside the authenticated cipher PAEQ. We achieve the first deterministic distinguisher of AESQ up to 8 rounds and the first 9-round distinguisher of AESQ that starts from the first round with a practical complexity of around 2^26. We manage to augment Yoyo with improbable and impossible differentials, leading to distinguishers on 9, 10, and 12 rounds with complexities of about 2^2, 2^28, and 2^126, respectively. Further, with impossible differentials and a bi-directional Yoyo strategy, we obtain a 16-round impossible differential distinguisher with a complexity of 2^126. Our results outperform all previous records on AESQ by a substantial margin. As another application, we apply the proposed strategies to AES in the known-key setting, leading to one of the best 8-round known-key distinguishers, with a complexity of 2^30. Finally, this work amplifies the scope of the Yoyo technique as a generic cryptanalysis tool.
2018
TOSC
Key Prediction Security of Keyed Sponges 📺
The keyed sponge is a well-accepted method for message authentication. It processes data at a certain rate by sequential evaluation of an underlying permutation. If the key size k is smaller than the rate, currently known bounds are tight, but if it exceeds the rate, the state of the art only dictates security up to 2^(k/2). We take a closer look at the key prediction security of the sponge and close the remaining gap in the existing security analysis: we confirm key security up to close to 2^k, regardless of the rate. The result impacts all applications of the keyed sponge and duplex that process data at a rate smaller than the key size, including the STROBE protocol framework, as well as related constructions such as HMAC-SHA-3 and the sandwich sponge.
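A minimal structural sketch of a keyed sponge MAC with a toy permutation and toy parameters (illustrative only, not SHA-3/Keccak): absorb the key and then the message r bits at a time into a b-bit state, permuting between blocks, and squeeze a tag.

B, R = 64, 16                       # toy state size and rate in bits (hypothetical)

def toy_perm(s):                    # stand-in for a real permutation
    for _ in range(8):
        s = ((s * 0x2545F4914F6CDD1D) ^ (s >> 31)) & (2 ** B - 1)
    return s

def keyed_sponge_mac(key_blocks, msg_blocks, tag_bits=32):
    state = 0
    for blk in key_blocks + msg_blocks:     # absorb at rate R
        state = toy_perm(state ^ (blk & (2 ** R - 1)))
    return state >> (B - tag_bits)          # squeeze the tag

print(hex(keyed_sponge_mac([0xBEEF, 0xCAFE], [0x1234, 0x5678])))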
2018
TOSC
Key Assignment Scheme with Authenticated Encryption 📺
The Key Assignment Scheme (KAS) is a well-studied cryptographic primitive used for hierarchical access control (HAC) in multilevel organisations, where classes of people with higher privileges can access the files of those with lower ones. Our first contribution is the formalization of a new cryptographic primitive, namely KAS-AE, that supports the aforementioned HAC solution with an additional authenticated encryption property. Next, we present three efficient KAS-AE schemes that solve the HAC and the associated authenticated encryption problem more efficiently – both with respect to time and memory – than the existing solutions, which achieve it by executing KAS and AE separately. Our first KAS-AE construction is built by using the cryptographic primitive MLE (EUROCRYPT 2013) as a black box; the other two constructions (which are the most efficient ones) have been derived by cleverly tweaking the hash function FP (Indocrypt 2012) and the authenticated encryption scheme APE (FSE 2014). This high efficiency of our constructions is critically achieved by two techniques: the design of a mechanism for reverse decryption, used to reduce time complexity, and a novel key management scheme for optimizing storage requirements when the organizational hierarchy forms an arbitrary access graph (instead of a linear graph). We observe that constructing a highly efficient KAS-AE scheme using primitives other than MLE, FP, and APE is a non-trivial task. We leave it as an open problem. Finally, we provide a detailed comparison of all the KAS-AE schemes.
2018
TOSC
Sound Hashing Modes of Arbitrary Functions, Permutations, and Block Ciphers 📺
Cryptographic hashing modes come in many flavors, including Merkle-Damgård with various types of strengthening, Merkle trees, and sponge functions. As underlying primitives, these functions use arbitrary functions, permutations, or block ciphers. In this work we provide three simple proofs, one per primitive type, that cover all modes where the input to the primitive consists of message bits, chaining value bits, and bits that depend only on the mode and the message length. Our approach generalizes and simplifies earlier attempts of Dodis et al. (FSE 2009) and Bertoni et al. (Int. J. Inf. Sec. 2014). We prove tight indifferentiability bounds for modes using each of these three primitive types, provided that the mode satisfies some easy-to-verify conditions.
2016
CRYPTO
The Magic of ELFs 📺
Early Career Award
2012
CRYPTO
The End of Crypto 📺
Invited paper