IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
04 June 2018
Rafail Ostrovsky, Giuseppe Persiano, Daniele Venturi, Ivan Visconti
At ICS 2010, Dziembowski, Pietrzak and Wichs introduced the notion of *non-malleable codes*, a weaker form of error-correcting codes guaranteeing that the decoding of a tampered codeword either corresponds to the original message or to an unrelated value. The last few years established non-malleable codes as one of the recently invented cryptographic primitives with the highest impact and potential, with very challenging open problems and applications.
In this work, we focus on so-called *continuously* non-malleable codes in the split-state model, as proposed by Faust et al. (TCC 2014), where a codeword is made of two shares and an adaptive adversary makes a polynomial number of attempts in order to tamper the target codeword, where each attempt is allowed to modify the two shares independently (yet arbitrarily). Achieving continuous non-malleability in the split-state model has been so far very hard. Indeed, the only known constructions require strong setup assumptions (i.e., the existence of a common reference string) and strong complexity-theoretic assumptions (i.e., the existence of non-interactive zero-knowledge proofs and collision-resistant hash functions).
As our main result, we construct a continuously non-malleable code in the split-state model without setup assumptions, requiring only one-to-one one-way functions (i.e., essentially optimal computational assumptions). Our result introduces several new ideas that make progress towards understanding continuous non-malleability, and shows interesting connections with protocol-design and proof-approach techniques used in other contexts (e.g., look-ahead simulation in zero-knowledge proofs, non-malleable commitments, and leakage resilience).
Gaëtan Leurent, Mridul Nandi, Ferdinand Sibleyras
In this work, we study the security of several recent MAC constructions
with provable security beyond the birthday bound. We consider
block-cipher based constructions with a double-block internal state,
such as SUM-ECBC, PMAC+, 3kf9, GCM-SIV2, and some variants (LightMAC+,
1kPMAC+). All these MACs have a security proof up to $2^{2n/3}$
queries, but there are no known attacks with less than $2^{n}$ queries.
We describe a new cryptanalysis technique for double-block MACs based on finding quadruples of messages with four pairwise collisions in halves of the state. We show how to detect such quadruples in SUM-ECBC, PMAC+, 3kf9, GCM-SIV2 and their variants with $\mathcal{O}(2^{3n/4})$ queries, and how to build a forgery attack with the same query complexity. The time complexity of these attacks is above $2^n$, but it shows that the schemes do not reach full security in the information theoretic model. Surprisingly, our attack on LightMAC+ also invalidates a recent security proof by Naito.
Moreover, we give a variant of the attack against SUM-ECBC and GCM-SIV2 with time and data complexity $\tilde{\mathcal{O}}(2^{6n/7})$. As far as we know, this is the first attack with complexity below $2^n$ against a deterministic beyond-birthday-bound secure MAC.
As a side result, we also give a birthday attack against 1kf9, a single-key variant of 3kf9 that was withdrawn due to issues with the proof.
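The quadruple-detection step can be sketched generically: given tags split into two halves, look for four messages whose halves collide pairwise in a cross pattern (left halves collide within each pair, right halves collide across the pairs). The toy search below is a naive illustration of the structure being detected, with hypothetical names; it is not the paper's algorithm, which operates at $2^{3n/4}$ scale with dedicated filtering.

```python
from collections import defaultdict

def find_quadruples(tags):
    """Find index quadruples (a, b, c, d) with L(a)=L(b), L(c)=L(d),
    R(a)=R(c), R(b)=R(d), where tags is a list of (left, right) halves."""
    by_left = defaultdict(list)
    for i, (left, _) in enumerate(tags):
        by_left[left].append(i)
    # All ordered index pairs sharing a left half.
    pairs = [(i, j) for idxs in by_left.values()
             for i in idxs for j in idxs if i < j]
    # Bucket pairs by their (right-of-first, right-of-second) halves,
    # so matching pairs collide cross-wise on the right halves.
    by_rights = defaultdict(list)
    for i, j in pairs:
        by_rights[(tags[i][1], tags[j][1])].append((i, j))
    quads = []
    for i, j in pairs:
        for k, m in by_rights[(tags[i][1], tags[j][1])]:
            if (k, m) != (i, j):
                quads.append((i, j, k, m))
    return quads
```

For example, tags `[(1, 5), (1, 7), (2, 5), (2, 7)]` contain exactly such a quadruple: messages 0 and 1 share a left half, 2 and 3 share a left half, and the right halves collide across the pairs.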
Elette Boyle, Ran Cohen, Deepesh Data, Pavel Hubacek
Secure multiparty computation (MPC) on incomplete communication networks has been studied within two primary models: (1) Where a partial network is fixed a priori, and thus corruptions can occur dependent on its structure, and (2) Where edges in the communication graph are determined dynamically as part of the protocol. Whereas a rich literature has succeeded in mapping out the feasibility and limitations of graph structures supporting secure computation in the fixed-graph model (including strong classical lower bounds), these bounds do not apply in the latter dynamic-graph setting, which has recently seen exciting new results, but remains relatively unexplored.
In this work, we initiate a similar foundational study of MPC within the dynamic-graph model. As a first step, we investigate the property of graph expansion. All existing protocols (implicitly or explicitly) yield communication graphs which are expanders, but it is not clear whether this is inherent.
Our results consist of two types (for constant fraction of corruptions):
* Upper bounds: We demonstrate secure protocols whose induced communication graphs are not expander graphs, within a wide range of settings (computational, information theoretic, with low locality, even with low locality and adaptive security) each assuming some form of input-independent setup.
* Lower bounds: In the setting without setup and adaptive corruptions, we demonstrate that for certain functionalities, no protocol can maintain a non-expanding communication graph against all adversarial strategies. Our lower bound relies only on protocol correctness (not privacy), and requires a surprisingly delicate argument.
More generally, we provide a formal framework for analyzing the evolving communication graph of MPC protocols, giving a starting point for studying the relation between secure computation and further, more general graph properties.
Daniel Smith-Tone
This note was originally written under the name ``On the Security of HMFEv'' and was submitted to PQCrypto 2018. The author was informed by the referees of his oversight of an eprint work of the same name by Hashimoto (eprint report 2017/689) that completely breaks HMFEv, rendering the result on HMFEv obsolete. Still, the author feels that the technique used here is interesting and that, at least in principle, this method could contribute to future cryptanalysis. Thus, with a change of title indicating the direction in which this work is leading, we present the original work with all of its oversights intact and with minimal correction (only references fixed).
At PQCRYPTO 2017, a new multivariate digital signature based on Multi-HFE and utilizing the vinegar modifier was proposed. The vinegar modifier increases the Q-rank of the central map, preventing a direct application of the MinRank attack that defeated Multi-HFE. The authors were, therefore, confident enough to choose aggressive parameters for the Multi-HFE component of the central map (with vinegar variables fixed). Their analysis indicated that the security of the scheme depends on the sum of the number of variables $k$ over the extension field and the number $v$ of vinegar variables with the individual values being unimportant as long as they are not ``too small.'' We analyze the consequences of this choice of parameters and derive some new attacks showing that the parameter $v$ must be chosen with care.
Aggelos Kiayias, Feng-Hao Liu, Yiannis Tselekounis
Non-malleable codes were introduced by Dziembowski, Pietrzak and Wichs (ICS
'10), and their main application is the protection of cryptographic devices
against tampering attacks on memory. In this work, we initiate a comprehensive
study on non-malleable codes for the class of partial functions, that
read/write on an arbitrary subset of codeword bits with specific cardinality.
Our constructions are efficient in terms of information rate, while allowing
the attacker to access asymptotically almost the entire codeword. In addition,
they satisfy a notion which is stronger than non-malleability, that we call
non-malleability with manipulation detection, guaranteeing that any modified
codeword decodes to either the original message or to $\bot$. Finally, our
primitive implies All-Or-Nothing Transforms (AONTs) and as a result our
constructions yield efficient AONTs under standard assumptions (only one-way
functions), which, to the best of our knowledge, was an open question until
now. In addition to this, we present a number of additional applications of
our primitive in tamper resilience.
Xavier Bonnetain, André Schrottenloher
CSIDH is a recent proposal by Castryck, Lange, Martindale, Panny and Renes for post-quantum non-interactive key-exchange. It is similar in design to a scheme by Couveignes, Rostovtsev and Stolbunov, but it replaces ordinary elliptic curves by supersingular elliptic curves, in order to make significant gains in time and key lengths.
Isogeny-based key-exchange on ordinary elliptic curves can be targeted by a quantum subexponential hidden shift algorithm found by Childs, Jao and Soukharev. Although CSIDH uses supersingular curves, it is analogous to the case of ordinary curves, hence this algorithm applies.
In the proposal, the authors suggest a choice of parameters that should ensure security against this.
In this paper, we show that those security parameters were too optimistic. Our result relies on two steps: first, we give a more precise complexity analysis of the hidden shift algorithm in this context, which greatly reduces the number of group actions to compute; second, we show how to compute this group action efficiently.
Our computations show, for example, that the 128-bit classical, 64-bit quantum security parameters proposed actually offer at most $37$ bits of quantum security.
Finally, we extend our analysis to ordinary isogeny computations, and show that an instance proposed by De Feo, Kieffer and Smith and expected to offer $56$ bits of quantum security offers at most $41$ bits of quantum security.
Long Chen, Zhenfeng Zhang, Zhenfei Zhang
In this paper, we propose a new assumption, the Computational Learning With Rounding over rings, which is inspired by the computational Diffie-Hellman problem. Assuming the hardness of ring-LWE, we prove this problem is hard when the secret is small, uniform and invertible.
From a theoretical point of view, we give examples of a key exchange scheme and a public key encryption scheme, and prove the worst-case
hardness for both schemes with the help of a random oracle.
Our result improves both speed, as a result of not requiring Gaussian secrets or noise, and size, as a result of rounding. In practice, our result suggests that decisional ring-LWR based schemes, such as Saber, Round2 and Lizard, which are among the most efficient solutions to the NIST post-quantum cryptography competition, stem from a provably secure design.
There are no hardness results on the decisional ring-LWR with polynomial modulus prior to this work, to the best of our knowledge.
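The rounding operation at the heart of (ring-)LWR replaces sampled noise with deterministic scaling from $\mathbb{Z}_q$ down to $\mathbb{Z}_p$. The toy sketch below illustrates this (hypothetical toy parameters and schoolbook negacyclic multiplication; not the parameters or arithmetic of Saber, Round2 or Lizard):

```python
# Toy (ring-)LWR sample: the "noise" comes from deterministically rounding
# a*s from Z_q down to Z_p, rather than from sampled Gaussians.
q, p, n = 2048, 256, 8  # toy moduli and ring dimension (illustrative only)

def ring_mul(a, s):
    """Schoolbook multiplication in Z_q[x]/(x^n + 1)."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, sj in enumerate(s):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * sj) % q
            else:  # x^n = -1 wraps around with a sign flip
                res[k - n] = (res[k - n] - ai * sj) % q
    return res

def lwr_sample(a, s):
    """Round each coefficient of a*s from Z_q to Z_p: b = round((p/q)*c).
    (Python's round() sends ties to even, a detail that does not matter
    for this illustration.)"""
    return [round(p * c / q) % p for c in ring_mul(a, s)]
```

With $q = 2048$ and $p = 256$, each coefficient loses the low three bits, which is exactly where the hardness-inducing "rounding noise" lives.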
Kurt M. Alonso, Jordi Herrera Joancomartí
Monero is a cryptocurrency that aims to provide transaction confidentiality and untraceability to its users. This is achieved through various cryptographic schemes, which allow concealing amounts and parties in transactions.
Unfortunately, there has been a severe lack of literature describing with sufficient precision the cryptographic schemes at hand.
With this report we aim to bridge this gap by supplying detailed descriptions of the algorithms used in Monero to guarantee privacy to its users. Furthermore, we minimize the use of technical jargon, so as to appeal to a broad readership.
Michael Kounavis, David Durham, Sergej Deutsch, Antonios Papadimitriou, Amitabh Das
We study a new methodology supporting data integrity called 'implicit integrity' and present cryptographic constructions supporting it. Implicit integrity allows for corruption detection without producing, storing or verifying mathematical summaries of the content such as MACs or ICVs. The main idea behind this methodology is that, whereas typical user data demonstrate patterns such as repeated bytes or words, decrypted data resulting from corrupted ciphertexts no longer demonstrate such patterns. Thus, by checking the entropy of some decrypted ciphertexts, corruption can potentially be detected.
We argue that implicit integrity can be associated with a notion of security which is different from the typical requirement that the output of cryptographic systems should be indistinguishable from the output of a random permutation. The notion of security we discuss is that it should be computationally difficult for an adversary to corrupt some ciphertext so that the resulting plaintext demonstrates specific patterns. We further introduce two kinds of adversaries. First, an input-perturbing adversary performs content corruption attacks. Second, an oracle-replacing adversary performs content replay attacks. We discuss requirements for supporting implicit integrity in these two adversary models, and provide security bounds for a construction called IVP, a three-level confusion-diffusion network which can support implicit integrity and is inexpensive to implement.
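The pattern-checking idea can be illustrated with a minimal sketch: typical plaintexts contain low-entropy patterns such as repeated bytes, while the decryption of a corrupted ciphertext looks near-uniform, so the pattern vanishes. The heuristic and its threshold below are arbitrary choices for this sketch, not parameters of IVP.

```python
from collections import Counter

def looks_like_plaintext(block: bytes, min_repeats: int = 4) -> bool:
    """Toy pattern check: does some byte value repeat often enough?
    A corrupted ciphertext's decryption is near-uniform, so for a short
    block the most frequent byte rarely repeats this many times."""
    return max(Counter(block).values()) >= min_repeats
```

A patterned 16-byte block (twelve zero bytes) passes the check, while a block of 16 distinct bytes, like a random-looking decryption of a corrupted ciphertext, fails it.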
Alice Pellet-Mary
We present a quantum polynomial time attack against the GMMSSZ branching program obfuscator of Garg et al. (TCC'16), when instantiated with the GGH13 multilinear map of Garg et al. (EUROCRYPT'13). This candidate obfuscator was proved secure in the weak multilinear map model introduced by Miles et al. (CRYPTO'16).
Our attack uses the short principal ideal solver of Cramer et al. (EUROCRYPT'16), to recover a secret element of the GGH13 multilinear map in quantum polynomial time. We then use this secret element to mount a (classical) polynomial time mixed-input attack against the GMMSSZ obfuscator. The main result of this article can hence be seen as a classical reduction from the security of the GMMSSZ obfuscator to the short principal ideal problem (the quantum setting is then only used to solve this problem in polynomial time).
As an additional contribution, we explain how the same ideas can be adapted to mount a quantum polynomial time attack against the DGGMM obfuscator of D\"ottling et al. (ePrint 2016), which was also proved secure in the weak multilinear map model.
Daniele Micciancio, Jessica Sorrell
The FHEW fully homomorphic encryption scheme (Ducas and Micciancio, Eurocrypt 2015) offers very fast homomorphic NAND-gate computations (on encrypted data) and a relatively fast refreshing procedure that makes it possible to homomorphically evaluate arbitrary NAND boolean circuits. Unfortunately, the refreshing procedure needs to be executed after every single NAND computation, and each refreshing operates on a single encrypted bit, greatly decreasing the overall throughput of the scheme. We give a new refreshing procedure that simultaneously refreshes $n$ FHEW ciphertexts, at a cost comparable to a single-bit FHEW refreshing operation. As a result, the cost of each refreshing is amortized over $n$ encrypted bits, improving the throughput for the homomorphic evaluation of boolean circuits roughly by a factor $n$.
Claude Carlet, Abderrahman Daif, Sylvain Guilley, Cédric Tavernier
Side Channel Attacks (SCA) and Fault Injection Attacks (FIA) allow an opponent to gain partial access to the internal behavior of the hardware. Since the end of the nineties, many works have shown that this type of attack constitutes a serious threat to cryptosystems implemented in embedded devices. In the state of the art, there exist several countermeasures to protect symmetric encryption (especially AES-128). Most of them protect only against one of these two attacks (SCA or FIA). A method called ODSM has been proposed to withstand SCA and FIA, but its implementation in the whole algorithm is a big open problem when no particular hardware protection is possible. In the present paper, we propose a practical masking scheme specifying ODSM which makes it possible to protect symmetric encryption against these two attacks.
Zvika Brakerski, Nico Döttling
We construct a two-message oblivious transfer (OT) protocol without setup that guarantees statistical privacy for the sender even against malicious receivers. Receiver privacy is game-based and relies on the hardness of learning with errors (LWE). This flavor of OT has been a central building block for minimizing the round complexity of witness indistinguishable and zero knowledge proof systems and multi-party computation protocols, as well as for achieving circuit privacy for homomorphic encryption in the malicious setting. Prior to this work, all candidates in the literature from standard assumptions relied on number theoretic assumptions and were thus insecure in the post-quantum setting. This work provides the first (presumed) post-quantum secure candidate and thus makes it possible to instantiate the aforementioned applications in a post-quantum secure manner.
Technically, we rely on the transference principle: Either a lattice or its dual must have short vectors. Short vectors, in turn, can be translated to information loss in encryption. Thus encrypting one message with respect to the lattice and one with respect to its dual guarantees that at least one of them will be statistically hidden.
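The transference principle is commonly made concrete via Banaszczyk's theorem (a standard statement, with generic notation not taken from the paper): for any full-rank lattice $\Lambda \subset \mathbb{R}^n$ with dual $\Lambda^{*}$,

```latex
1 \;\le\; \lambda_1(\Lambda)\cdot\lambda_n(\Lambda^{*}) \;\le\; n .
```

In particular, if $\lambda_1(\Lambda) > \sqrt{n}$ then $\lambda_n(\Lambda^{*}) < \sqrt{n}$, so the dual contains short vectors: a lattice and its dual cannot both avoid them, which is what guarantees that at least one of the two encryptions loses information.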
Sanjam Garg, Mohammad Hajiabadi
Trapdoor functions (TDFs) are a fundamental primitive in cryptography. Yet, the current set of assumptions known to imply TDFs is surprisingly limited, when compared to public-key encryption. We present a new general approach for constructing TDFs. Specifically, we give a generic construction of TDFs from any Hash Encryption (Doettling and Garg [CRYPTO '17]) satisfying a novel property which we call recyclability. By showing how to adapt current Computational Diffie-Hellman (CDH) based constructions of hash encryption to yield recyclability, we obtain the first construction of TDFs with security proved under the CDH assumption. While TDFs from the Decisional Diffie-Hellman (DDH) assumption were previously known, the possibility of basing them on CDH had remained open for more than 30 years.
Alain Couvreur, Matthieu Lequesne, Jean-Pierre Tillich
We present a key recovery attack against Y. Wang's Random Linear Code Encryption (RLCE) scheme recently submitted to the NIST call for post-quantum cryptography. This attack recovers the secret key for all the short key parameters proposed by the author.
Achiya Bar-On, Orr Dunkelman, Nathan Keller, Eyal Ronen, Adi Shamir
Determining the security of AES is a central problem in cryptanalysis, but progress in this area had been slow and only a handful of cryptanalytic techniques led to significant advancements. At Eurocrypt 2017 Grassi et al. presented a novel type of distinguisher for AES-like structures, but so far all the published attacks which were based on this distinguisher were inferior to previously known attacks in their complexity. In this paper we combine the technique of Grassi et al. with several other techniques to obtain the best known key recovery attack on 5-round AES in the single-key model, reducing its overall complexity from about $2^{32}$ to about $2^{22.5}$. Extending our techniques to 7-round AES, we obtain the best known attacks on AES-192 which use practical amounts of data and memory, breaking the record for such attacks which was obtained 18 years ago by the classical Square attack.
Daniel J. Bernstein, Edoardo Persichetti
This paper highlights a particular construction of a correct KEM without failures and without ciphertext expansion from any correct deterministic PKE, and presents a simple tight proof of ROM IND-CCA2 security for the KEM assuming merely OW-CPA security for the PKE. Compared to previous proofs, this proof is simpler, and is also factored into smaller pieces that can be audited independently. In particular, this paper introduces the notion of ``IND-Hash'' security and shows that this allows a new separation between checking encryptions and randomizing decapsulations. The KEM is easy to implement in constant time, given a constant-time implementation of the PKE.
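The shape of such a KEM can be sketched as follows. This is a generic sketch of the familiar pattern (random message, deterministic encryption, hash-derived session key, re-encryption check on decapsulation); the XOR "PKE" is a toy placeholder, not a secure scheme, and the exact hash inputs of the paper's construction may differ.

```python
import hashlib, os

KEYLEN = 32

def toy_enc(pk: bytes, m: bytes) -> bytes:
    """Deterministic toy 'PKE': XOR with the key (involutive, so pk == sk)."""
    return bytes(a ^ b for a, b in zip(m, pk))

def toy_dec(sk: bytes, c: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(c, sk))

def encap(pk: bytes):
    m = os.urandom(KEYLEN)               # random message
    c = toy_enc(pk, m)                   # deterministic encryption
    k = hashlib.sha256(m + c).digest()   # hash-derived session key
    return c, k

def decap(pk: bytes, sk: bytes, c: bytes):
    m = toy_dec(sk, c)
    if toy_enc(pk, m) != c:              # re-encryption check; for a correct
        return None                      # deterministic PKE this never fires
    return hashlib.sha256(m + c).digest()
```

Because decapsulation only re-encrypts and hashes, a constant-time PKE yields a constant-time KEM, matching the paper's remark.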
Aurélien Dupin, Jean-Marc Robert, Christophe Bidan
Location-based services are now quite popular. The variety of applications and their numerous users show it clearly. However, these applications rely on the persons' honesty to use their real location. If they are motivated to lie about their position, they can do so. A location-proof system allows a prover to obtain proofs from nearby witnesses, for being at a given location at a given time. Such a proof can be used to convince a verifier later on.
Yet, provers and witnesses may want to keep their location and their identity private. In this paper, a solution is presented in which a malicious adversary, acting as a prover, cannot cheat on his position and remain undetected. It relies on multi-party computations and group-signature schemes to protect the private information of both the prover and the witnesses against any semi-honest participant.
Bing Zeng
Oblivious transfer is an important tool against malicious cloud service providers. Halevi-Kalai OT, which is based on smooth projective hash (SPH), is a famous and the most efficient framework for $1$-out-of-$2$ oblivious transfer ($\mbox{OT}^{2}_{1}$) against malicious adversaries in the plain model. A natural question, however, which so far has not been answered, is whether its security level can be improved, i.e., whether it can be made fully-simulatable.
In this paper, we present a new SPH variant, which enables a positive answer to the above question. In more detail, it even makes fully-simulatable $\mbox{OT}^{n}_{t}$ ($n,t\in \mathbb{N}$ and $n>t$) possible. We instantiate this new SPH variant not only under the decisional Diffie-Hellman assumption, the decisional $N$-th residuosity assumption and the decisional quadratic residuosity assumption, as existing SPH constructions do, but also under the learning with errors (LWE) problem. Before this paper, it was folklore that it is technically difficult to instantiate SPH under lattice assumptions (e.g., LWE). Considering future quantum adversaries, lattice-based SPH is of particular importance.
Adam Bobowski, Marcin Słowik
We propose a new method for reducing the complexity of pairing comparisons based on polynomials. Though the construction introduces uncertainty into (usually deterministic) checks, it is easily quantifiable and in most cases extremely small. The application to CL-LRSW signature verification under $n$ messages and group order $q$ allows reducing the number of computed pairings from $4n$ down to just $4$, while the introduced uncertainty is just $(2n-1)/q$.
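One standard way to realize such polynomial-based batching (illustrative; not necessarily the authors' exact construction) is to merge the $n$ pairing checks with random powers of a fresh scalar $r \in \mathbb{Z}_q$:

```latex
\prod_{i=1}^{n} e(a_i, b_i)^{\,r^{i}} \;=\; \prod_{i=1}^{n} e(c_i, d_i)^{\,r^{i}} .
```

If some individual equation $e(a_i,b_i) = e(c_i,d_i)$ fails, the discrepancy in the exponent is a nonzero polynomial in $r$ of bounded degree, so by the Schwartz-Zippel lemma the batched check falsely passes with probability at most $\deg/q$; a degree bound of $2n-1$ would correspond to the $(2n-1)/q$ uncertainty quoted above. Bilinearity then lets shared arguments be merged by multi-exponentiation inside a small constant number of pairings.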