IACR News
If you have a news item you wish to distribute, it should be sent to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
23 September 2018
Thom Wiggers
Servers with many cores cost a lot of money and consume large amounts of energy. The developments in hardware for mobile devices have resulted in a surge of relatively cheap, powerful, and low-energy CPUs. In this paper we show how to build a low-energy, eighty-core cluster around twenty ODROID-C2 development boards for under 1500 USD.
The ODROID-C2 is a 46 USD microcomputer that provides a 1.536 GHz quad-core Cortex-A53-based CPU and 2 GB of RAM. We investigate the cluster's application to cryptanalysis by implementing Pollard's rho method to tackle the Certicom ECC2K-130 elliptic curve challenge. We optimise software from the "Breaking ECC2K-130" technical report for the Cortex-A53. To do so, we show how to use microbenchmarking to derive the needed instruction characteristics, which ARM neglected to document for the public. The implementation of the ECC2K-130 attack allows us to compare the proposed platform to various other platforms, including classical desktop CPUs, GPUs, and FPGAs. Although the cluster may be slower than, for example, FPGAs, it provides excellent value for money.
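For readers unfamiliar with the attack's core loop, here is a minimal sketch of Pollard's rho for discrete logarithms, in a toy multiplicative group modulo a prime rather than the ECC2K-130 binary-field elliptic curve; all parameters are illustrative and insecure, and the real attack uses a parallelised, distinguished-point variant of this walk.

```python
# Toy Pollard's rho for discrete logarithms in (Z/pZ)*.
# Illustrative only: ECC2K-130 works over a binary-field elliptic
# curve with a parallel, distinguished-point variant of rho.

def pollard_rho_dlog(g, h, p, q):
    """Find x with g^x = h (mod p), where g has prime order q."""
    def step(y, a, b):
        # Partition the group into three classes for a pseudo-random walk.
        c = y % 3
        if c == 0:
            return (y * y) % p, (2 * a) % q, (2 * b) % q
        elif c == 1:
            return (y * g) % p, (a + 1) % q, b
        else:
            return (y * h) % p, a, (b + 1) % q

    # Floyd cycle finding: tortoise and hare.
    y1, a1, b1 = 1, 0, 0
    y2, a2, b2 = 1, 0, 0
    while True:
        y1, a1, b1 = step(y1, a1, b1)
        y2, a2, b2 = step(*step(y2, a2, b2))
        if y1 == y2:
            break
    # g^{a1} h^{b1} = g^{a2} h^{b2}  =>  x = (a1 - a2)/(b2 - b1) mod q
    r = (b2 - b1) % q
    if r == 0:
        return None  # unlucky walk; retry from a different start
    return ((a1 - a2) * pow(r, -1, q)) % q

p, q = 2027, 1013   # toy safe prime: p = 2q + 1
g = 4               # quadratic residue, hence of order q
x_true = 777
h = pow(g, x_true, p)
print(pollard_rho_dlog(g, h, p, q) == x_true)
```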
Serge Fehr
Hash functions are of fundamental importance in theoretical and in practical cryptography, and with the threat of quantum computers possibly emerging in the future, it is an urgent objective to understand the security of hash functions in the light of potential future quantum attacks. To this end, we reconsider the collapsing property of hash functions, as introduced by Unruh, which replaces the notion of collision resistance when considering quantum attacks. Our contribution is a formalism and a framework that offers significantly simpler proofs for the collapsing property of hash functions. With our framework, we can prove the collapsing property for hash domain extension constructions entirely by means of decomposing the iteration function into suitable elementary composition operations. In particular, given our framework, one can argue purely classically about the quantum-security of hash functions; this is in contrast to previous proofs which are in terms of sophisticated quantum-information-theoretic and quantum-algorithmic reasoning.
Oleg Taraskin, Vladimir Soukharev, David Jao, Jason LeGrow
Password authenticated key establishment (PAKE) is a cryptographic primitive that allows two parties who share a low-entropy secret (a password) to securely establish cryptographic keys in the absence of public key infrastructure. We present the first quantum-resistant password-authenticated key exchange scheme based on supersingular elliptic curve isogenies. The scheme is built upon supersingular isogeny Diffie-Hellman, and uses the password to generate functions which obscure the auxiliary points used in the computation. We include a detailed security proof based on a number of reasonable computational problems on supersingular elliptic curves.
Shashank Agrawal, Peihan Miao, Payman Mohassel, Pratyay Mukherjee
Token-based authentication is commonly used to enable a single-sign-on experience on the web, in mobile applications, and on enterprise networks using a wide range of open standards and network authentication protocols: clients sign on to an identity provider using their username/password to obtain a cryptographic token generated with a master secret key, and store the token for future accesses to various services and applications. The authentication server(s) are single points of failure that, if breached, enable attackers to forge arbitrary tokens or mount offline dictionary attacks to recover client credentials.
Our work is the first to introduce and formalize the notion of password-based threshold token authentication, which distributes the role of an identity provider among $n$ servers. Any $t$ servers can collectively verify passwords and generate tokens, while no $t-1$ servers can forge a valid token or mount offline dictionary attacks. We then introduce PASTA, a general framework that can be instantiated using any threshold token generation scheme, wherein clients can "sign on" using a two-round (optimal) protocol that meets our strong notions of unforgeability and password-safety.
We instantiate and implement our framework in C++ using two threshold message authentication codes (MACs) and two threshold digital signatures with different trade-offs. Our experiments show that the overhead of protecting secrets and credentials against breaches in PASTA, i.e., compared to a naive single-server solution, is extremely low (1-5%) in the most likely setting where clients and servers communicate over the internet. The overhead is higher in the case of MAC-based tokens over a LAN (though still only a few milliseconds) due to public-key operations in PASTA. We show, however, that this cost is inherent by proving a symmetric-key-only solution impossible.
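PASTA's $t$-out-of-$n$ structure rests on threshold secret sharing; as a hedged background sketch (the sharing layer only, not PASTA's actual threshold MAC or signature constructions), here is Shamir secret sharing over a prime field.

```python
# Minimal Shamir t-of-n secret sharing over GF(P) -- the classic
# primitive behind threshold key distribution.  A sketch of the
# sharing layer only, not PASTA's threshold MACs/signatures.
import secrets

P = 2**127 - 1  # Mersenne prime, large enough for a 126-bit secret

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

s = 123456789
shs = share(s, t=3, n=5)
assert reconstruct(shs[:3]) == s    # any 3 of 5 shares suffice
assert reconstruct(shs[1:4]) == s
```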
Alan Szepieniec, Reza Reyhanitabar, Bart Preneel
A multitude of post-quantum key encapsulation mechanisms (KEMs) and public key encryption (PKE) schemes implicitly rely on a protocol by which Alice and Bob exchange public messages and converge on secret values that are identical up to some small noise. By our count, 24 out of 49 KEM or PKE submissions to the NIST Post-Quantum Cryptography Standardization project follow this strategy. Yet the notion of a noisy key agreement (NKA) protocol lacks a formal definition as a primitive in its own right. We provide such a formalization by defining the syntax and security for an NKA protocol. This formalization brings out four generic problems, called A and B State Recovery, Noisy Key Search, and Noisy Key Distinguishing, whose solutions must be hard in the quantum computing model. Informally speaking, these can be viewed as noisy, quantum-resistant counterparts of the problems arising from the classical Diffie-Hellman type protocols.
We show that many existing proposals contain an NKA component that fits our formalization, and we reveal the induced concrete hardness assumptions. The question arises whether considering NKA as an independent primitive can help provide modular designs with improved efficiency and/or proofs.
As the second contribution of this paper, we answer this question positively by presenting a generic transform from a secure NKA protocol to an IND-CCA secure KEM in the quantum random oracle model, with a security bound tightly related to the NKD problem. This transformation is essentially the same as that of the NIST candidate Ramstake. While establishing the security of Ramstake was our initial objective, the collection of tools that came about as a result of this journey is of independent interest.
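To make the NKA pattern concrete, here is a hedged toy sketch of an LWE-style noisy key agreement: both parties converge on values that agree up to small noise, exactly the situation the formalization and KEM transform address. Parameters are illustrative and offer no security, and the reconciliation step that extracts identical key bits is omitted.

```python
# Toy noisy key agreement in the LWE/Diffie-Hellman style.  Both
# sides end up with nearby values; a real scheme adds reconciliation
# or encapsulation to extract identical bits.  No security intended.
import random

q, n = 12289, 64
def small():
    return random.choice([-1, 0, 1])   # small noise/secret entries

A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]

# Alice: secret s, error e; publishes b = A s + e
s = [small() for _ in range(n)]
e = [small() for _ in range(n)]
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(n)]

# Bob: secret s2, error e2; publishes b2 = A^T s2 + e2
s2 = [small() for _ in range(n)]
e2 = [small() for _ in range(n)]
b2 = [(sum(A[j][i] * s2[j] for j in range(n)) + e2[i]) % q for i in range(n)]

# Noisy shared keys: <b2, s> and <b, s2> differ only by <e2,s> - <e,s2>
kA = sum(b2[i] * s[i] for i in range(n)) % q
kB = sum(b[i] * s2[i] for i in range(n)) % q
diff = min((kA - kB) % q, (kB - kA) % q)
print(f"kA={kA} kB={kB} |kA-kB| mod q = {diff} (small, at most 2n)")
```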
Suvradip Chakraborty, C. Pandu Rangan
In this paper, we introduce a new framework for constructing public-key encryption (PKE) schemes resilient to joint post-challenge/after-the-fact leakage and tampering attacks in the bounded leakage and tampering (BLT) model, introduced by Damgård et al. (Asiacrypt 2013). All the prior formulations of PKE schemes considered leakage and tampering attacks only before the challenge ciphertext is made available to the adversary. However, this restriction seems necessary, since achieving security against post-challenge leakage and tampering attacks in its full generality is impossible as shown in previous works. In this paper, we study the post-challenge/after-the-fact security for PKE schemes against bounded leakage and tampering under a restricted yet meaningful and reasonable notion of security, namely, the split-state leakage and tampering model. We show that it is possible to construct secure PKE schemes in this model, tolerating arbitrary (but bounded) leakage and tampering queries; thus overcoming the previous impossibility results.
To this end, we formulate a new notion of security, which we call entropic post-challenge IND-CCA-BLT secure PKE. We first define a weaker notion called entropic restricted post-challenge IND-CCA-BLT secure PKE, which can be instantiated using the (standard) DDH assumption. We then show a generic compiler from our entropic restricted notion to the entropic notion of security using a simulation-extractable non-interactive zero-knowledge argument system. This requires an untamperable common reference string as in previous works. Finally, we demonstrate the usefulness of our entropic notion of security by giving a simple and generic construction of post-challenge IND-CCA-BLT secure PKE scheme in the split-state leakage and tampering model. This also settles the open problem posed by Faonio and Venturi (Asiacrypt 2016).
Benjamin Smith
Diffie-Hellman key exchange is at the foundations of public-key cryptography, but conventional group-based Diffie-Hellman is vulnerable to Shor's quantum algorithm. A range of "post-quantum Diffie-Hellman" protocols have been proposed to mitigate this threat, including the Couveignes, Rostovtsev-Stolbunov, SIDH, and CSIDH schemes, all based on the combinatorial and number-theoretic structures formed by isogenies of elliptic curves. Pre- and post-quantum Diffie-Hellman schemes resemble each other at the highest level, but the further down we dive, the more differences emerge; these differences are critical when we use Diffie-Hellman as a basic component in more complicated constructions. In this survey we compare and contrast pre- and post-quantum Diffie-Hellman algorithms, highlighting some important subtleties.
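For comparison with the isogeny-based proposals the survey covers, the classical group-based protocol fits in a few lines; this sketch uses a deliberately tiny prime for illustration only.

```python
# Classical finite-field Diffie-Hellman, the pre-quantum baseline the
# survey compares against.  The tiny prime is illustrative only; real
# deployments use >= 2048-bit groups or elliptic curves.
import secrets

p = 0xFFFFFFFB   # toy prime (2^32 - 5); insecure, for illustration
g = 7            # not necessarily a generator; fine for the sketch

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent
A = pow(g, a, p)                    # Alice sends A to Bob
B = pow(g, b, p)                    # Bob sends B to Alice
k_alice = pow(B, a, p)              # = g^(ab) mod p
k_bob   = pow(A, b, p)              # = g^(ab) mod p
assert k_alice == k_bob
```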
Falk Schellenberg, Dennis R.E. Gnad, Amir Moradi, Mehdi B. Tahoori
The current practice in board-level integration is to incorporate chips and components from numerous vendors. A fully trusted supply chain for all used components and chipsets is an important, yet extremely difficult to achieve, prerequisite to validate a complete board-level system for safe and secure operation. An increasing risk is that most chips nowadays run software or firmware, typically updated throughout the system lifetime, making it practically impossible to validate the full system at every given point in the manufacturing, integration, and operational life cycle. This risk is elevated in devices that run third-party firmware. In this paper we show that an FPGA used as a common accelerator in various boards can be reprogrammed by software to introduce a sensor, suitable as a remote power-analysis side-channel attack vector at the board level. We show successful power analysis attacks from one FPGA on the board to another chip implementing RSA and AES cryptographic modules. Since the sensor is only mapped through firmware, this threat is very hard to detect, because data can be exfiltrated without requiring inter-chip communication between victim and attacker. Our results also demonstrate that any untrusted chip on the board can potentially launch such attacks on the remaining system.
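The measurement side of such an attack feeds a standard statistical distinguisher; as a hedged illustration (simulated Hamming-weight traces, not the paper's FPGA voltage-sensor measurements), here is a minimal correlation power analysis recovering one key byte of a toy S-box.

```python
# Correlation power analysis (CPA) on simulated traces: recover one
# key byte by correlating Hamming-weight predictions with noisy
# synthetic leakage.  The paper instead measures real leakage via an
# FPGA-internal sensor; this toy uses a random 8-bit S-box.
import random

random.seed(1)
sbox = list(range(256))
random.shuffle(sbox)                 # fixed random bijection as toy S-box

def hw(x):
    return bin(x).count("1")         # Hamming-weight leakage model

key_byte = 0x3C
pts = [random.randrange(256) for _ in range(2000)]
traces = [hw(sbox[p ^ key_byte]) + random.gauss(0, 1.0) for p in pts]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# The key guess whose predictions best correlate with the traces wins.
best = max(range(256),
           key=lambda k: abs(pearson([hw(sbox[p ^ k]) for p in pts], traces)))
print(hex(best), best == key_byte)
```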
Christophe Pfeifer, Patrick Haddad
Recent publications, such as [10] and [13], exploit the advantages of deep-learning techniques in performing side-channel attacks. One example of the side-channel community's interest in such techniques is the release of the public ASCAD database, which provides power-consumption traces of a masked 128-bit AES implementation and is meant to be a common benchmark for comparing the performance of deep-learning techniques. In this paper, we propose two ways of improving the effectiveness of such attacks. The first is a new kind of layer for neural networks, called the "Spread" layer, which is efficient at tackling side-channel attack issues, since it reduces the number of layers required and speeds up the learning phase. Our second proposal is an efficient way to correct the neural network's predictions, based on its confusion matrix. We have validated both methods on the ASCAD database and conclude that they reduce the number of traces required for successful attacks. In this article, we show their effectiveness for first-order and second-order attacks.
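One plausible reading of such a confusion-matrix correction (a hedged sketch, not necessarily the authors' exact procedure) is a Bayesian update: treat a confusion matrix estimated on validation data as the conditional distribution of predictions given true classes, and reweight each new prediction accordingly.

```python
# Hedged sketch of confusion-matrix-based prediction correction:
# take C[t][p] = P(predict p | true class t), estimated on validation
# data, and reweight new predictions by Bayes' rule.  One plausible
# reading of such a correction, not necessarily the paper's method.
import numpy as np

n_classes = 4
# Row-stochastic confusion matrix from validation data (toy numbers).
C = np.array([[0.70, 0.10, 0.10, 0.10],
              [0.05, 0.80, 0.10, 0.05],
              [0.10, 0.10, 0.75, 0.05],
              [0.20, 0.05, 0.05, 0.70]])

prior = np.full(n_classes, 1.0 / n_classes)   # uniform class prior

def correct(pred_label):
    """Return P(true = t | predicted = pred_label) for every t."""
    post = prior * C[:, pred_label]           # Bayes: prior * likelihood
    return post / post.sum()

print(correct(0))   # corrected distribution over the true classes
```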
Ke Gu, Bo Yin
Group signatures are a useful cryptographic primitive that lets every group member sign messages on behalf of the group they belong to; that is, a group member can anonymously sign any message without revealing his/her specific identity. However, group signatures may let signers abuse their signing rights if the scheme has no measures to prevent such abuse. Therefore, the group manager must be able to trace (or reveal) the identity of the signer from the signature when the result of the signature needs to be arbitrated, and revoked group members must fully lose their capability of signing messages on behalf of the group they belong to. A practical model meeting these requirements is verifier-local revocation, which supports the revocation of group members. In this model, the verifiers receive group-member revocation messages from the trusted authority when the relevant signatures need to be verified.
Although many group signature schemes have been proposed, most of them are constructed on pairings. In this paper, we present an efficient group signature scheme without pairings under the verifier-local revocation model, based on the modified EDL signature (first proposed by D. Chaum et al. at CRYPTO '92). Compared with other group signature schemes, the proposed scheme does not employ pairing computations and has constant signing time and signature size; its security can be reduced to the computational Diffie-Hellman (CDH) assumption in the random oracle model. Also, we give a formal security model for group signatures and prove that the proposed scheme has the properties of traceability and anonymity.
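For background, the plain (non-group) EDL signature proves that two group elements share the same discrete logarithm via a Fiat-Shamir proof; the following toy sketch shows that structure with illustrative parameters, not the paper's modified group-signature variant.

```python
# Toy EDL-style signature: y = g^x, z = H(m)^x, plus a Fiat-Shamir
# proof that log_g(y) = log_h(z).  Tiny parameters; plain EDL, not
# the paper's modified variant.  Illustration only.
import hashlib, secrets

p, q = 2027, 1013           # p = 2q + 1, toy safe prime
g = 4                       # generator of the order-q subgroup

def h2grp(m):
    # NOTE: hashing via a known exponent of g is NOT secure EDL
    # hashing; a real scheme hashes onto the group so that the
    # discrete log of h is unknown.  Toy stand-in only.
    d = int.from_bytes(hashlib.sha256(m).digest(), "big")
    return pow(g, (d % (q - 1)) + 1, p)

def h2int(*vals):
    d = hashlib.sha256(repr(vals).encode()).digest()
    return int.from_bytes(d, "big") % q

def sign(x, m):
    h = h2grp(m)
    z = pow(h, x, p)
    k = secrets.randbelow(q)
    u, v = pow(g, k, p), pow(h, k, p)
    c = h2int(g, h, pow(g, x, p), z, u, v, m)
    s = (k + c * x) % q
    return z, c, s

def verify(y, m, sig):
    z, c, s = sig
    h = h2grp(m)
    u = (pow(g, s, p) * pow(y, -c, p)) % p   # recovers g^k
    v = (pow(h, s, p) * pow(z, -c, p)) % p   # recovers h^k
    return c == h2int(g, h, y, z, u, v, m)

x = secrets.randbelow(q); y = pow(g, x, p)
assert verify(y, b"hello", sign(x, b"hello"))
```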
Marc Joye, Yan Michalevsky
We would like to compute RSA signatures with the help of a Hardware Security Module (HSM). But what can we do when we want to use a certain public exponent that the HSM does not allow or support? Surprisingly, this scenario comes up in real-world settings such as code-signing of Intel SGX enclaves. Intel SGX enclaves have to be signed in order to execute in release mode, using a 3072-bit RSA signature scheme with a particular public exponent. However, we encountered commercial hardware security modules that do not support storing RSA keys corresponding to this exponent.
We ask whether it is possible to overcome such a limitation of an HSM and answer it in the affirmative (under stated assumptions). We show how to convert RSA signatures corresponding to one public exponent, to valid RSA signatures corresponding to another exponent. We define security and show that it is not compromised by the additional public knowledge available to an adversary in this setting.
Behzad Abdolmaleki, Helger Lipmaa, Janno Siim, Michał Zając
While the CRS model is widely accepted for construction of non-interactive zero knowledge (NIZK) proofs, from the practical viewpoint, a very important question is to minimize the trust needed from the creators of the CRS. Recently, Bellare et al. defined subversion-resistance (security in the case the CRS creator may be malicious) for NIZK. First, we observe that subversion zero knowledge (Sub-ZK) in the CRS model corresponds to no-auxiliary-string non-black-box NIZK (also known as nonuniform NIZK) in the Bare Public Key (BPK) model. Due to well-known impossibility results, this observation provides a simple proof that the use of non-black-box techniques is needed to obtain Sub-ZK. Second, we prove that the most efficient known QA-NIZK for linear subspaces by Kiltz and Wee is nonuniform zero knowledge in the BPK model under two alternative novel knowledge assumptions, both secure in the subversion generic bilinear group model. We prove that (for a different set of parameters) a slightly less efficient variant of Kiltz-Wee is nonuniform zero knowledge in the BPK model under a known knowledge assumption that is also secure in the subversion generic bilinear group model.
Haibat Khan, Benjamin Dowling, Keith M. Martin
The 3rd Generation Partnership Project (3GPP) recently proposed a standard for 5G telecommunications, containing an identity protection scheme meant to address the long-outstanding privacy problem of permanent subscriber-identity disclosure. The proposal essentially consists of two disjoint phases: an identification phase, followed by an establishment of security context between mobile subscribers and their service providers via symmetric-key based authenticated key agreement. Currently, 3GPP proposes to protect the identification phase with a public-key based solution, and while the current proposal is secure against a classical adversary, the same would not be true of a quantum adversary. 5G specifications target very long-term deployment scenarios (well beyond the year 2030); therefore it is imperative that quantum-secure alternatives be part of the current specification. In this paper, we present such an alternative scheme for the problem of private identification protection. Our solution is compatible with the current 5G specifications, depending mostly on cryptographic primitives already specified in 5G, adding minimal performance overhead and requiring minor changes in existing message structures. Finally, we provide a detailed formal security analysis of our solution in a novel security framework.
Varun Narayanan, Vinod M. Prabhakaran
Secure message transmission and Byzantine agreement have been studied extensively in incomplete networks. However, information theoretically secure multiparty computation (MPC) in incomplete networks is less well understood. In this paper, we characterize the conditions under which a pair of parties can compute oblivious transfer (OT) information theoretically securely against a general adversary structure in an incomplete network of reliable, private channels. We provide characterizations for both semi-honest and malicious models. A consequence of our results is a complete characterization of networks in which a given subset of parties can compute any functionality securely with respect to an adversary structure in the semi-honest case and a partial characterization in the malicious case.
Johannes Bl{\"o}mer, Fabian Eidens, Jakob Juhnke
Despite the recent advances in attribute-based signatures (ABS), no schemes have yet been considered under a strong privacy definition. We enhance the security of ABS by presenting a strengthened simulation-based privacy definition and the first attribute-based signature functionality in the framework of universal composability (UC). Additionally, we show that the UC definition is equivalent to our strengthened experiment-based security definitions.
To achieve this we rely on a general unforgeability and a simulation-based privacy definition that is stronger than standard indistinguishability-based privacy. Further, we show that two extant concrete ABS constructions satisfy this simulation-based privacy definition and are therefore UC secure. The two concrete constructions are the schemes by Sakai et al. (PKC'16) and by Maji et al. (CT-RSA'11). Additionally, we identify the common feature that allows these schemes to meet our privacy definition, giving us further insights into the security requirements of ABS.
Rouzbeh Behnia, Muslum Ozgur Ozmen, Attila A. Yavuz, Mike Rosulek
We introduce a simple yet efficient digital signature scheme that offers the promise of post-quantum security. Our scheme, named $\texttt{TACHYON}$, is based on a novel approach for extending one-time hash-based signatures to (polynomially bounded) many-time signatures, using the additively homomorphic properties of generalized compact knapsack functions. Our design permits $\texttt{TACHYON}$ to achieve several key properties. First, its signing and verification algorithms are the fastest among its current counterparts with a higher level of security. This allows $\texttt{TACHYON}$ to achieve the lowest end-to-end delay among its counterparts, while also making it suitable for resource-limited signers. Second, its private keys can be as small as $\kappa$ bits, where $\kappa$ is the desired security level. Third, unlike most of its lattice-based counterparts, $\texttt{TACHYON}$ does not require any Gaussian sampling during signing, and is therefore free from side-channel attacks targeting this process. We also explore various speed and storage trade-offs for $\texttt{TACHYON}$, thanks to its highly tunable parameters. Some of these trade-offs can speed up $\texttt{TACHYON}$ signing in exchange for larger keys, thereby permitting $\texttt{TACHYON}$ to further improve its end-to-end delay.
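As background on the primitive being extended, here is a minimal Lamport one-time signature, the classic hash-based OTS; this is a generic sketch, not TACHYON itself.

```python
# Minimal Lamport one-time signature -- the classic hash-based OTS
# that schemes like TACHYON extend to many-time use.  One key pair
# must sign at most one message.
import hashlib, secrets

H = lambda b: hashlib.sha256(b).digest()
N = 256  # bit-length of the hashed message

def keygen():
    sk = [[secrets.token_bytes(32) for _ in range(N)] for _ in range(2)]
    pk = [[H(s) for s in row] for row in sk]
    return sk, pk

def bits(msg):
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(N)]

def sign(sk, msg):
    # Reveal one preimage per message bit; this consumes the key.
    return [sk[b][i] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pk[b][i]
               for (i, b), s in zip(enumerate(bits(msg)), sig))

sk, pk = keygen()
sig = sign(sk, b"attack at dawn")
assert verify(pk, b"attack at dawn", sig)
assert not verify(pk, b"attack at dusk", sig)
```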
Sanjam Garg, Romain Gay, Mohammad Hajiabadi
We develop techniques for constructing trapdoor functions (TDFs) with short image size and advanced security properties. Our approach builds on the recent framework of Garg and Hajiabadi [CRYPTO 2018]. As applications of our techniques, we obtain
-- The first construction of lossy TDFs based on the Decisional Diffie-Hellman (DDH) assumption with image size linear in input size, while retaining the lossiness rate of [Peikert-Waters STOC 2008].
-- The first construction of deterministic-encryption schemes for block-source inputs (both for the CPA and CCA cases) based on the Computational Diffie-Hellman (CDH) assumption. Moreover, by applying our efficiency-enhancing techniques, we obtain CDH-based schemes with ciphertext size linear in plaintext size.
Prior to our work, all DDH-based constructions of lossy TDFs had image size quadratic in input size. Moreover, all previous constructions of deterministic encryption based even on the stronger DDH assumption incurred a quadratic gap between the ciphertext and plaintext sizes. At a high level, we break the previous quadratic barriers by introducing novel techniques for encoding input bits via hardcore output bits with the use of erasure-resilient codes. All previous schemes used group elements for encoding input bits, resulting in quadratic blowup.
Si Gao, Elisabeth Oswald, Hua Chen, Wei Xi
As one of the most prevalent SCA countermeasures, masking schemes are designed to defeat a broad range of side-channel attacks. An attack vector suitable for low-order masking schemes is to directly determine the mask(s) for each trace, utilising the fact that an attacker often has access to several leakage points of the mask(s) in use. Good examples of low-order masking implementations are those based on table re-computation, as well as the masking scheme in DPA Contest V4.2. We propose a novel approach based on Independent Component Analysis (ICA) to efficiently utilise the information from several leakage points to reconstruct the respective masks for each trace, and we show that it is a competitive attack vector in practice.
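As a hedged illustration of the ICA idea on synthetic data (not the paper's measurement setup or exact procedure): two leakage points that each linearly mix the mask signal with unrelated activity can be separated into independent components.

```python
# Hedged ICA illustration on synthetic leakage: two observed leakage
# points are linear mixtures of an independent "mask" signal and
# other switching activity; FastICA separates them.  Synthetic data,
# not the paper's traces or exact attack procedure.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000
mask = rng.integers(0, 2, n).astype(float)      # mask-bit leakage
other = rng.laplace(size=n)                     # unrelated activity
S = np.c_[mask - mask.mean(), other]            # independent sources

A = np.array([[1.0, 0.6],                       # unknown mixing at the
              [0.4, 1.0]])                      # two leakage points
X = S @ A.T + 0.05 * rng.normal(size=(n, 2))    # observed leakages

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                    # recovered components

# One recovered component should correlate strongly with the mask.
corrs = [abs(np.corrcoef(S_hat[:, i], mask)[0, 1]) for i in range(2)]
print([round(c, 2) for c in corrs])
```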
George Teseleanu
We present two simple backdoors that can be implemented into Maurer's unified zero-knowledge protocol. Thus, we show that a high level abstraction can replace individual backdoors embedded into protocols for proving knowledge of a discrete logarithm (e.g. the Schnorr and Girault protocols), protocols for proving knowledge of an $e^{th}$-root (e.g. the Fiat-Shamir and Guillou-Quisquater protocols), protocols for proving knowledge of a discrete logarithm representation (e.g. the Okamoto protocol) and protocols for proving knowledge of an $e^{th}$-root representation.
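Maurer's protocol abstracts the common commit-challenge-response structure of these proofs; for concreteness, here is a toy Schnorr proof of knowledge of a discrete logarithm, one instance of that abstraction, shown without any backdoor and with illustrative parameters.

```python
# Toy Schnorr identification: a proof of knowledge of x = log_g(y),
# one concrete instance of Maurer's unified protocol (shown here
# without any backdoor; toy parameters, illustration only).
import secrets

p, q = 2027, 1013       # p = 2q + 1, toy safe prime
g = 4                   # element of order q

x = secrets.randbelow(q)            # prover's secret
y = pow(g, x, p)                    # public key

# Commit - challenge - response
k = secrets.randbelow(q)
t = pow(g, k, p)                    # prover's commitment
c = secrets.randbelow(q)            # verifier's random challenge
s = (k + c * x) % q                 # prover's response

assert pow(g, s, p) == (t * pow(y, c, p)) % p   # verifier's check
```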
Andrey Bogdanov, Matthieu Rivain, Philip S. Vejre, Junwei Wang
At CHES 2016, Bos $\textit{et al.}$ introduced $\textit{differential computational analysis}$ (DCA) as an attack on white-box software implementations of block ciphers. This attack builds on the same principles as DPA in the classical side-channel context, but uses traces consisting of plain values computed by the implementation during execution. This attack was shown to be able to recover the key of many existing AES white-box implementations.
The $\textit{DCA adversary}$ is $\textit{passive}$, and so does not exploit the full power of the white-box setting, implying that many white-box schemes are insecure even in a weaker setting than the one they were designed for. An important problem is therefore how to develop implementations which are resistant to this attack. A natural approach is to apply standard side-channel countermeasures such as $\textit{masking}$ and $\textit{shuffling}$. In this paper, we study the security brought by this approach against the DCA adversary. We show that under some necessary conditions on the underlying randomness generation, these countermeasures provide resistance to standard (first-order) DCA. Furthermore, we introduce $\textit{higher-order DCA}$, and analyze the security of the countermeasures against this attack. This attack is enhanced by introducing a $\textit{multivariate}$ version based on the maximum likelihood approach. We derive analytic expressions for the complexity of the attacks which are backed up through extensive attack experiments. As a result, we can quantify the security level of a masked and shuffled implementation in the (higher-order) DCA setting. This enables a designer to choose appropriate implementation parameters in order to obtain the desired level of protection against passive DCA attacks.
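The masking countermeasure analysed here splits each sensitive value into random shares so that no single computed value depends on the secret; below is a minimal generic Boolean-masking sketch (not a specific white-box design), with a closing note on why higher-order DCA combines share leakages.

```python
# Minimal first-order Boolean masking: a sensitive byte is split into
# two random shares, so no single intermediate value (or single DCA
# trace sample) depends on the secret.  Generic countermeasure
# sketch, not a specific white-box design.
import secrets

def mask(x):
    r = secrets.randbelow(256)       # fresh random mask per execution
    return x ^ r, r                  # shares: (x XOR r, r)

def masked_xor(a_shares, b_shares):
    # XOR is computed share-wise: the secret is never recombined.
    return a_shares[0] ^ b_shares[0], a_shares[1] ^ b_shares[1]

def unmask(shares):
    return shares[0] ^ shares[1]

a, b = 0x5A, 0xC3
sa, sb = mask(a), mask(b)
assert unmask(masked_xor(sa, sb)) == a ^ b
# A higher-order DCA adversary jointly exploits BOTH shares' leakage
# samples, which is why the paper analyzes higher-order and
# multivariate DCA against masked (and shuffled) implementations.
```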