IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage. These updates are also available:
24 September 2018
Nina Bindel, Jacqueline Brendel, Marc Fischlin, Brian Goncalves, Douglas Stebila
Concerns about the impact of quantum computers on currently deployed public key cryptography have instigated research into not only quantum-resistant cryptographic primitives but also how to transition applications from classical to quantum-resistant solutions. One approach to mitigate the risk of quantum attacks and to preserve common security guarantees is the use of hybrid schemes, which combine classically secure and quantum-resistant schemes.
Various academic and industry experiments and draft standards related to the Transport Layer Security (TLS) protocol already use some form of hybrid key exchange; however, sound theoretical approaches to substantiate the design and security of such hybrid key exchange protocols are so far missing.
We initiate the modeling of hybrid authenticated key exchange protocols. We consider security against adversaries with varying levels of quantum power over time, such as adversaries who may become quantum in the future or are quantum in the present. We reach our goal using a three-step approach: First, we introduce security notions for key encapsulation mechanisms (KEMs) that enable a fine-grained distinction between different quantum scenarios. Second, we propose several combiners for constructing hybrid KEMs that correspond closely to recently proposed Internet-Drafts for hybrid key exchange in TLS 1.3. Finally, we present a provably sound design for hybrid key exchange using KEMs as building blocks.
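The combiner idea can be sketched concretely. The following is a minimal illustration, not one of the paper's combiners, and it uses a toy placeholder KEM rather than real classical and post-quantum schemes: run two KEMs independently and derive the session key from both shared secrets and both ciphertexts, so the hybrid key stays secure as long as either ingredient KEM does.

```python
import hashlib, os

class ToyKEM:
    """Placeholder KEM standing in for a real classical or post-quantum KEM.
    (A real deployment would combine e.g. an ECDH-based KEM with a lattice KEM.)"""
    def keygen(self):
        sk = os.urandom(32)
        pk = hashlib.sha256(sk).digest()
        return pk, sk
    def encaps(self, pk):
        ss = os.urandom(32)                      # fresh shared secret
        pad = hashlib.sha256(pk).digest()
        ct = bytes(a ^ b for a, b in zip(ss, pad))   # toy "encryption" of ss
        return ct, ss
    def decaps(self, sk, ct):
        pk = hashlib.sha256(sk).digest()
        pad = hashlib.sha256(pk).digest()
        return bytes(a ^ b for a, b in zip(ct, pad))

def combine(ss1, ss2, ct1, ct2):
    # Derive the hybrid key from both shared secrets, binding in both
    # ciphertexts, in the spirit of dual-PRF-style KEM combiners.
    return hashlib.sha256(b"combiner" + ss1 + ss2 + ct1 + ct2).digest()

def hybrid_encaps(kem1, pk1, kem2, pk2):
    ct1, ss1 = kem1.encaps(pk1)
    ct2, ss2 = kem2.encaps(pk2)
    return (ct1, ct2), combine(ss1, ss2, ct1, ct2)

def hybrid_decaps(kem1, sk1, kem2, sk2, cts):
    ct1, ct2 = cts
    return combine(kem1.decaps(sk1, ct1), kem2.decaps(sk2, ct2), ct1, ct2)
```

Binding the ciphertexts into the key derivation mirrors robustness considerations discussed in the hybrid key exchange literature.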
Aritra Dhar, Ivan Puddu, Kari Kostiainen, Srdjan Capkun
Intel's Software Guard Extensions (SGX) enables isolated execution environments, called enclaves, on untrusted operating systems (OS), and thus it can improve the security of various applications and online services. However, SGX also has well-known limitations. First, its remote attestation mechanism is vulnerable to relay attacks that allow the attacker to redirect attestation, and the subsequent provisioning of secrets, to an unintended platform. Second, attestation keys have been shown to leak, enabling attackers to fake a secure attested environment by emulating it. Third, there exists no secure way for enclaves to communicate with I/O devices and, as a consequence, with the user.
To address these shortcomings, we propose a hardened variant of SGX attestation using proximity verification. We design and implement a system called ProximiTEE, where a simple embedded device with a low TCB is attached to the target platform. The embedded device verifies the proximity of the attested enclave by using distance bounding and secure boot-time initialization, thus allowing secure attestation regardless of a compromised OS or leaked attestation keys. Our boot-time initialization can be seen as a novel variant of ``trust on first use'' (TOFU) that makes deployment of secure attestation easier, reduces the system's attack surface and enables secure revocation. We further leverage the embedded device to build a trusted I/O path between peripherals (e.g., keyboards, displays) and enclaves, by letting it securely mediate every I/O communication between them. Our prototype implementation shows that such proximity verification is reliable in practice.
Iftach Haitner, Nikolaos Makriyannis, Eran Omri
A two-party coin-flipping protocol is $\epsilon$-fair if no efficient adversary can bias the output of the honest party (who always outputs a bit, even if the other party aborts) by more than $\epsilon$. Cleve [STOC '86] showed that $r$-round $o(1/r)$-fair coin-flipping protocols do not exist. Awerbuch et al. [Manuscript '85] constructed a $\Theta(1/\sqrt{r})$-fair coin-flipping protocol, assuming the existence of one-way functions. Moran et al. [Journal of Cryptology '16] constructed an $r$-round coin-flipping protocol that is $\Theta(1/r)$-fair (thus matching the aforementioned lower bound of Cleve [STOC '86]), assuming the existence of oblivious transfer.
The above gives rise to the intriguing question of whether oblivious transfer, or more generally ``public-key primitives'', is required for $o(1/\sqrt r)$-fair coin flipping. This question was partially answered by Dachman-Soled et al. [TCC '11] and Dachman-Soled et al. [TCC '14], who showed that restricted types of fully black-box reductions cannot establish $o(1/\sqrt r)$-fair coin-flipping protocols from one-way functions. In particular, for constant-round coin-flipping protocols, Dachman-Soled et al. showed that black-box techniques from one-way functions can only guarantee fairness of order $1/\sqrt{r}$.
We make progress towards answering the above question by showing that, for any constant $r\in \mathbb N$, the existence of a $1/(c\cdot \sqrt{r})$-fair, $r$-round coin-flipping protocol implies the existence of an infinitely-often key-agreement protocol, where $c$ denotes some universal constant (independent of $r$). Our reduction is non-black-box and makes a novel use of the recent dichotomy for two-party protocols of Haitner et al. [FOCS '18] to facilitate a two-party variant of the recent attack of Beimel et al. [FOCS '18] on multi-party coin-flipping protocols.
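To see why fairness is non-trivial at all, consider the classic naive one-round protocol, which is only $1/4$-fair: an aborting adversary can bias the honest party's fallback coin. The exact bias can be computed by enumeration (an illustrative sketch, not part of the paper's construction):

```python
from fractions import Fraction

def honest_output_distribution():
    """Naive one-round coin flip: A commits to bit a, B replies with bit b,
    A opens, and the output is a XOR b. A malicious A who wants output 0
    aborts whenever a XOR b = 1; the honest B (who must still output a bit)
    then falls back to a fresh coin. Returns Pr[B outputs 0]."""
    p_zero = Fraction(0)
    for a in (0, 1):
        for b in (0, 1):
            pr = Fraction(1, 4)            # a, b uniform and independent
            if a ^ b == 0:
                p_zero += pr               # A opens honestly, B outputs 0
            else:
                p_zero += pr / 2           # A aborts, B's fallback coin is 0 w.p. 1/2
    return p_zero

bias = honest_output_distribution() - Fraction(1, 2)
# bias is 1/4 -- far from the Theta(1/r) achievable with more rounds
```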
Mohammad Hajiabadi
Trapdoor permutations (TDP) are a fundamental primitive in cryptography. Over the years, several variants of this notion have emerged as a result of various applications. However, it is not clear whether these variants may be based on the standard notion of TDPs.
We study the question of whether enhanced trapdoor permutations can be based on classical trapdoor permutations. The main motivation of our work is in the context of existing TDP-based constructions of oblivious transfer and non-interactive zero-knowledge protocols, which require enhancements to the classical TDP notion. We prove that these enhancements are non-trivial, in the sense that there do not exist fully black-box constructions of enhanced TDPs from classical TDPs.
At a technical level, we show that the enhanced TDP security of any construction in the random TDP oracle world can be broken via a polynomial number of queries to the TDP oracle as well as a weakening oracle, which provides inversion with respect to randomness. We also show that the standard one-wayness of a random TDP oracle stays intact in the presence of this weakening oracle.
Ashutosh Dhar Dwivedi, Pawel Morawiecki
We propose a new algorithm, inspired by the Nested Monte-Carlo Search algorithm, to find differential paths in ARX ciphers. To enhance the decision process of our algorithm and to reduce the search space of our heuristic nested tool, we use the concept of a partial difference distribution table (pDDT). The algorithm is applied to reduced-round variants of the SPECK block cipher family. In our previous paper we applied a naive algorithm with a large search space of values and presented results for only one block-size variant of SPECK. Our new approach provides results within a simpler framework and in a very short time (a few minutes) for all larger block-size variants of SPECK. More specifically, we report differential paths for up to 8, 9, 11, 10 and 11 rounds of SPECK32, SPECK48, SPECK64, SPECK96 and SPECK128, respectively. To construct differential characteristics for a large number of rounds, we divide long characteristics into short ones, constructing a long characteristic from two short characteristics. Instead of starting from the first round, we start from the middle and run experiments in the forward as well as the reverse direction. Using this method we improve our results and report differential paths for up to 9, 10, 12, 13 and 15 rounds of SPECK32, SPECK48, SPECK64, SPECK96 and SPECK128, respectively.
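The pDDT idea can be sketched for the core ARX operation, addition modulo $2^n$. The following illustrative Python sketch (not the authors' tool) computes the XOR-differential probability of addition via the well-known Lipmaa-Moriai formula and keeps only differentials at or above a probability threshold:

```python
def eq(x, y, z):
    """Bitmask of positions where x, y and z agree."""
    return ~(x ^ y) & ~(x ^ z)

def xdp_add(alpha, beta, gamma, n):
    """Lipmaa-Moriai: XOR-differential probability of (alpha, beta -> gamma)
    through addition mod 2^n."""
    mask = (1 << n) - 1
    a1, b1, g1 = (alpha << 1) & mask, (beta << 1) & mask, (gamma << 1) & mask
    if eq(a1, b1, g1) & (alpha ^ beta ^ gamma ^ b1) & mask:
        return 0.0                         # impossible differential
    weight = bin(~eq(alpha, beta, gamma) & (mask >> 1)).count("1")
    return 2.0 ** -weight

def partial_ddt(n, threshold):
    """pDDT: keep only differentials through n-bit addition whose probability
    is >= threshold (brute force over all triples, feasible for small n)."""
    table = {}
    for a in range(1 << n):
        for b in range(1 << n):
            for g in range(1 << n):
                p = xdp_add(a, b, g, n)
                if p >= threshold:
                    table[(a, b, g)] = p
    return table

pddt = partial_ddt(4, 0.25)   # only differentials with probability >= 2^-2
```

A real tool builds the pDDT recursively bit by bit rather than by brute force, but the filtering principle is the same.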
Ashutosh Dhar Dwivedi, Gautam Srivastava
In this paper we focus on differential cryptanalysis dedicated to a particular class of cryptographic algorithms, namely ARX ciphers. We propose a new algorithm inspired by the Nested Monte-Carlo Search algorithm to find a differential path in ARX ciphers. We apply our algorithm to a round-reduced variant of the block cipher LEA. We use the concept of a partial difference distribution table (pDDT) in our algorithm to reduce the search space, by keeping only those differentials whose probabilities are greater than or equal to a pre-defined threshold. This removes many differentials which are invalid or whose probabilities are very low, and thus decreases the time our nested algorithm needs to find a differential path. The partial difference distribution table also makes our nested algorithm suitable for ARX ciphers with larger block sizes. Finding long differential characteristics is one of the hardest problems in this area; other algorithms have taken many hours or days to find differential characteristics in ARX ciphers. Our algorithm finds differential characteristics in just a few minutes within a very simple framework. We report a differential path for up to 9 rounds of LEA. To construct differential characteristics for a large number of rounds, we divide long characteristics into short ones, constructing a long characteristic from two short characteristics. Instead of starting from the first round, we start from the middle and run experiments in the forward as well as the reverse direction. Using this method we improve our results and report a differential path for up to 13 rounds. Overall, the best property of our algorithm is its potential to provide state-of-the-art results within a simpler framework and in less time.
Our algorithm is also of interest for future research, as it could be applied to other ARX ciphers within the same straightforward framework.
Yilei Chen, Vinod Vaikuntanathan, Brent Waters, Hoeteck Wee, Daniel Wichs
A traitor tracing scheme is a public key encryption scheme for which there are many secret decryption keys. Any of these keys can decrypt a ciphertext; moreover, even if a coalition of users collude, pool their decryption keys and attempt to create a new decryption key, there is an efficient algorithm to trace the new key to at least one of the colluders.
Recently, Goyal, Koppula and Waters (GKW, STOC 18) provided the first traitor tracing scheme from LWE with ciphertext and secret key sizes that grow polynomially in $\log n$, where $n$ is the number of users. The main technical building block in their construction is a strengthening of (bounded collusion secure) secret-key functional encryption which they refer to as mixed functional encryption (FE).
In this work, we improve upon and extend the GKW traitor tracing scheme: - We provide simpler constructions of mixed FE schemes based on the LWE assumption. Our constructions improve upon the GKW construction in terms of expressiveness, modularity, and security. - We provide a construction of attribute-based traitor tracing for all circuits based on the LWE assumption.
Solaris, Croatia, 17 June - 21 June 2019
Event date: 17 June to 21 June 2019
23 September 2018
Apoorvaa Deshpande, Yael Kalai
We consider the following paradoxical question: Can one prove lack of knowledge? We define the notion of Proofs of Ignorance, construct such proofs, and use these proofs to construct a 2-message witness hiding protocol for all of NP.
More specifically, we define a proof of ignorance (PoI) with respect to any language L in NP and distribution D over instances in L. Loosely speaking, such a proof system allows a prover to generate an instance x according to D along with a proof that she does not know a witness corresponding to x. We construct a PoI protocol for any random self-reducible NP language L that is hard on average. Our PoI protocol is non-interactive assuming the existence of a common reference string.
We use such a PoI protocol to construct a 2-message witness hiding protocol for NP with adaptive soundness. Constructing a 2-message WH protocol for all of NP has been a long standing open problem. We construct our witness hiding protocol using the following ingredients (where T is any super-polynomial function in the security parameter):
1. T-secure PoI protocol,
2. T-secure non-interactive witness indistinguishable (NIWI) proofs,
3. T-secure rerandomizable encryption with strong KDM security,
where the first two ingredients can be constructed based on the T-security of DLIN.
At the heart of our witness-hiding proof is a new non-black-box technique. As opposed to previous works, we do not apply an efficiently computable function to the code of the cheating verifier, rather we resort to a form of case analysis and show that the prover's message can be simulated in both cases, without knowing in which case we reside.
Nir Bitansky, Omer Paneth
The round complexity of zero-knowledge protocols is a long-standing open question, yet to be settled under standard assumptions. So far, the question has appeared equally challenging for relaxations such as weak zero-knowledge and witness hiding. Protocols satisfying these relaxed notions, under standard assumptions, have at least four messages, just like full-fledged zero knowledge. The difficulty in improving round complexity stems from a fundamental barrier: none of these notions can be achieved in three messages via reductions (or simulators) that treat the verifier as a black box.
We introduce a new non-black-box technique and use it to obtain the first protocols that cross this barrier under standard assumptions. Our main results are: - Weak zero-knowledge for NP in three messages, assuming fully-homomorphic encryption, and other standard primitives (known for example under the Learning with Errors assumption). - Weak zero-knowledge for NP intersect coNP in two messages, assuming in addition non-interactive witness-indistinguishable proofs.
We also give a witness-hiding protocol for NP in two messages, assuming in addition witness encryption. This protocol is also publicly verifiable.
Our technique is based on a new homomorphic trapdoor paradigm, which can be seen as a non-black-box analog of the classical Feige-Lapidot-Shamir trapdoor paradigm.
Benny Applebaum, Zvika Brakerski, Rotem Tsabary
We show that any multi-party functionality can be evaluated using a two-round protocol with perfect correctness and perfect semi-honest security, provided that the majority of parties are honest. This settles the round complexity of information-theoretic semi-honest MPC, resolving a longstanding open question (cf. Ishai and Kushilevitz, FOCS 2000). The protocol is efficient for $NC^1$ functionalities. Furthermore, given black-box access to a one-way function, the protocol can be made efficient for any polynomial functionality, at the cost of only guaranteeing computational security.
Technically, we extend and relax the notion of randomized encoding to specifically address multi-party functionalities. The property of a multi-party randomized encoding (MPRE) is that if the functionality $g$ is an encoding of the functionality $f$, then for any (permitted) coalition of players, their respective outputs and inputs in $g$ allow them to simulate their respective inputs and outputs in $f$, without learning anything else, including the other outputs of $f$.
Manfred Lochter
One approach for blockchain-based applications to provide a proof of work is the computation of hash values. In our opinion these computations are a waste of energy. It would be highly desirable to find an alternative method that generates useful output. We show how to substitute hashing by performing multiplications on elliptic curves in order to find distinguished points that can then be used to solve the discrete logarithm problem on a chosen curve. Today's digital infrastructures rely on only a few curves. We argue that the advent of blockchain-based technologies makes the use of only a few standardised curves questionable.
In principle, all cryptanalytic algorithms that use Rabin's idea of distinguished points can be used in blockchain-based attacks. Similar ideas can be used for the number field sieve.
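A minimal sketch of the distinguished-point idea (illustrative only; it uses a toy pseudo-random walk over the integers modulo p as a stand-in for the elliptic-curve scalar-multiplication walk the paper proposes):

```python
def is_distinguished(x, k=4):
    """Rabin-style distinguished-point predicate: the low k bits are zero.
    Along a pseudo-random walk, roughly one point in 2^k is distinguished."""
    return x & ((1 << k) - 1) == 0

def walk(x, p, a=1234567):
    # Toy deterministic pseudo-random step in Z_p, standing in for the
    # elliptic-curve multiplication step of the proposed proof of work.
    return (x * x + a) % p

def mine_distinguished(seed, p, k=4, limit=100000):
    """Iterate the walk until a distinguished point appears; the pair
    (seed, distinguished point) is the 'useful' proof-of-work share,
    later collected to attack the discrete logarithm."""
    x = seed
    for _ in range(limit):
        x = walk(x, p)
        if is_distinguished(x, k):
            return x
    return None
```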
Gilad Asharov, Ilan Komargodski, Wei-Kai Lin, Kartik Nayak, Elaine Shi
Oblivious RAM (ORAM), first introduced in the ground-breaking work of Goldreich and Ostrovsky (STOC '87 and J. ACM '96), is a technique for provably obfuscating programs' access patterns, such that the access patterns leak no information about the programs' secret inputs. To compile a general program to an oblivious counterpart, it is well-known that $\Omega(\log N)$ amortized blowup is necessary, where $N$ is the size of the logical memory. This was shown in Goldreich and Ostrovsky's original ORAM work for statistical security and in a somewhat restricted model (the so-called balls-and-bins model), and recently by Larsen and Nielsen (CRYPTO '18) for computational security.
A long standing open question is whether there exists an optimal ORAM construction that matches the aforementioned logarithmic lower bounds (without making large memory word assumptions, and assuming a constant number of CPU registers). In this paper, we resolve this problem and present the first secure ORAM with $O(\log N)$ amortized blowup, assuming one-way functions. Our result is inspired by and non-trivially improves on the recent beautiful work of Patel et al. (FOCS '18) who gave a construction with $O(\log N\cdot {\sf poly}(\log\log N))$ amortized blowup, assuming one-way functions.
Seyed Farhad Aghili, Hamid Mala
The concept of the Industrial Internet of Things (IIoT) can be defined as the integration of smart sensor networks and the Internet of Things (IoT). This technology can be employed in various industries such as agriculture, the food processing industry, environmental monitoring, security surveillance, and so on. Generally, a smart sensor is a resource-constrained device which is responsible for gathering data from the monitored area. Machine-to-Machine (M2M) communication is one of the most important technologies for exchanging information between entities in industrial areas. However, due to the insecure wireless communication channel and the smart sensors' limitations, security and privacy concerns are important challenges in IIoT environments. The goal of this paper is to address the security flaws of a recent M2M authentication protocol proposed for use in IIoT, including DoS, router impersonation and smart sensor traceability attacks. Moreover, we show that an untrusted smart sensor can obtain the secret key of the router and the session key which another sensor establishes with the target router.
Alex Davidson, Ryo Nishimaki
Constrained pseudorandom functions (CPRFs) allow learning modified PRF keys that can evaluate the PRF on a subset of the input space, or based on some sort of predicate. First introduced by Boneh and Waters [Asiacrypt 2013], they have been shown to be a useful cryptographic primitive with many applications. The full security definition of CPRFs requires the adversary to learn multiple constrained keys, a requirement for all of these applications. Unfortunately, existing constructions of CPRFs satisfying this security notion are only known from exceptionally strong cryptographic assumptions, such as indistinguishability obfuscation and the existence of multilinear maps, even for very weak predicates. CPRFs from more standard assumptions only satisfy security when one key is learnt.
In this work, we give the first construction of a CPRF that can issue a constant number of constrained keys for bit-fixing predicates, from learning with errors (LWE). Our construction immediately leads to the first constructions of broadcast encryption with optimal ciphertext size and non-interactive ID-based key exchange from LWE, for constant-sized user groups. It also satisfies \(1\)-key privacy (otherwise known as constraint-hiding). Finally, our construction achieves fully adaptive security with polynomial security loss; the only construction to achieve such security under a standard assumption.
Our technique represents a noted departure from existing CPRF constructions. We hope that it may lead to future constructions that can expose a greater number of keys, or consider more expressive predicates (such as circuit-based constraints).
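For background on what a constrained key is, the classic GGM tree from any length-doubling PRG already yields a prefix-constrained PRF: the constrained key for a prefix is simply the seed of the corresponding subtree. This sketch is illustrative only (the paper's bit-fixing construction from LWE is substantially different) and uses SHA-256 as the PRG:

```python
import hashlib

def prg(seed):
    """Length-doubling PRG built from SHA-256 (illustrative)."""
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def ggm_eval(key, bits):
    """GGM PRF: walk the binary tree left/right according to the input bits."""
    s = key
    for b in bits:
        s = prg(s)[b]
    return s

def constrain_prefix(key, prefix):
    """Constrained key for all inputs starting with `prefix`:
    just the subtree seed reached by following the prefix."""
    return ggm_eval(key, prefix)

def constrained_eval(ckey, prefix, bits):
    """Evaluate the PRF from a constrained key; only works inside the set."""
    assert bits[:len(prefix)] == prefix, "input outside the constraint"
    return ggm_eval(ckey, bits[len(prefix):])
```

The subtree seed reveals nothing about values outside the prefix, which is exactly the constrained-key security intuition; supporting general bit-fixing predicates and multiple keys is what requires the heavier machinery discussed above.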
F. Betül Durak, Serge Vaudenay
Following up mass surveillance and privacy issues, modern secure communication protocols now seek for more security such as forward secrecy and post-compromise security. They cannot rely on any assumption such as synchronization, predictable sender/receiver roles, or online availability. At EUROCRYPT 2017 and 2018, key agreement with forward secrecy and zero round-trip time (0-RTT) were studied. Ratcheting was introduced to address forward secrecy and post-compromise security in real-world messaging protocols. At CSF 2016 and CRYPTO 2017, ratcheting was studied either without 0-RTT or without bidirectional communication. At CRYPTO 2018, it was done using key-update primitives, which involve hierarchical identity-based encryption (HIBE).
In this work, we define the bidirectional asynchronous ratcheted key agreement (BARK) with formal security notions. We provide a simple security model with a pragmatic approach and design the first secure BARK scheme not using key-update primitives. Our notion offers forward secrecy and post-compromise security. It is asynchronous, with random roles, and 0-RTT. It is based on a cryptosystem, a signature scheme, and a collision-resistant hash function family without key-update primitives or random oracles. We further show that BARK (even unidirectional) implies public-key cryptography, meaning that it cannot solely rely on symmetric cryptography.
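BARK itself requires public-key tools, but the forward-secrecy half of ratcheting is easy to illustrate with the simplest symmetric building block, a hash-chain ratchet (an illustrative sketch, not the BARK scheme): each step derives a message key and a new state, and erasing the old state makes earlier keys irrecoverable after a later compromise.

```python
import hashlib

def ratchet(state):
    """One ratchet step: derive a message key and the next chain state.
    Deleting the old state gives forward secrecy: compromising the current
    `state` does not reveal earlier message keys, by one-wayness of the hash."""
    msg_key = hashlib.sha256(state + b"msg").digest()
    next_state = hashlib.sha256(state + b"chain").digest()
    return msg_key, next_state

state = b"\x00" * 32          # shared state after the initial key agreement
keys = []
for _ in range(3):
    k, state = ratchet(state)
    keys.append(k)            # in practice each old state is erased here
# `state` alone cannot be rolled back to recover keys[0] or keys[1]
```

Note that a pure hash chain gives no post-compromise security: once the state leaks, all future keys are predictable. That is precisely why BARK mixes in fresh public-key material.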
Thom Wiggers
Servers with many cores cost a lot of money and consume large amounts of energy. The developments in hardware for mobile devices have resulted in a surge of relatively cheap, powerful, and low-energy CPUs. In this paper we show how to build a low-energy, eighty-core cluster around twenty ODROID-C2 development boards for under 1500 USD.
The ODROID-C2 is a 46 USD microcomputer that provides a 1.536 GHz quad-core Cortex-A53-based CPU and 2 GB of RAM. We investigate the cluster's application to cryptanalysis by implementing Pollard's rho method to tackle the Certicom ECC2K-130 elliptic curve challenge. We optimise software from the "Breaking ECC2K-130" technical report for the Cortex-A53. To do so, we show how to use microbenchmarking to derive the needed instruction characteristics which ARM neglected to document for the public. The implementation of the ECC2K-130 attack finally allows us to compare the proposed platform to various other platforms, including classical desktop CPUs, GPUs and FPGAs. Although it may still be slower than, for example, FPGAs, our cluster provides a lot of value for money.
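The distinguished-point variant of Pollard's rho used in such attacks can be sketched on a toy group (illustrative only; the actual target is the binary-field curve of the ECC2K-130 challenge): each walker iterates a pseudo-random walk, reports only points whose low bits are zero, and a collision between two reported triples yields the discrete logarithm.

```python
import math, random

def rho_dlog(g, h, p, q, k=4, seed=1):
    """Pollard's rho for x with g^x = h in Z_p^* (g of prime order q), using
    distinguished points (low k bits zero) so that parallel walkers need only
    exchange the occasional distinguished triple (X, a, b) with X = g^a h^b."""
    rng = random.Random(seed)
    store = {}                              # distinguished X -> (a, b)
    while True:
        a, b = rng.randrange(q), rng.randrange(q)
        X = pow(g, a, p) * pow(h, b, p) % p
        for _ in range(50 * (1 << k)):      # one walk, until a report
            if X % 3 == 0:                  # additive-walk step selection
                X, a = X * g % p, (a + 1) % q
            elif X % 3 == 1:
                X, b = X * h % p, (b + 1) % q
            else:
                X, a, b = X * X % p, 2 * a % q, 2 * b % q
            if X & ((1 << k) - 1) == 0:     # distinguished point: report it
                if X in store and store[X] != (a, b):
                    a0, b0 = store[X]       # collision: g^a h^b = g^a0 h^b0
                    d = (b0 - b) % q
                    if math.gcd(d, q) == 1:
                        x = (a - a0) * pow(d, -1, q) % q
                        if pow(g, x, p) == h:
                            return x
                store[X] = (a, b)
                break                       # start a fresh walk
```

On the toy subgroup of order q = 509 generated by 4 in Z_1019^*, `rho_dlog(4, pow(4, 77, 1019), 1019, 509)` recovers the exponent 77.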
Serge Fehr
Hash functions are of fundamental importance in theoretical and in practical cryptography, and with the threat of quantum computers possibly emerging in the future, it is an urgent objective to understand the security of hash functions in the light of potential future quantum attacks. To this end, we reconsider the collapsing property of hash functions, as introduced by Unruh, which replaces the notion of collision resistance when considering quantum attacks. Our contribution is a formalism and a framework that offers significantly simpler proofs for the collapsing property of hash functions. With our framework, we can prove the collapsing property for hash domain extension constructions entirely by means of decomposing the iteration function into suitable elementary composition operations. In particular, given our framework, one can argue purely classically about the quantum-security of hash functions; this is in contrast to previous proofs which are in terms of sophisticated quantum-information-theoretic and quantum-algorithmic reasoning.
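As a reminder of the kind of domain extension the framework covers, here is a minimal Merkle-Damgard iteration in Python. The compression function below is a stand-in built from SHA-256 purely for illustration; it is not a construction from the paper.

```python
import hashlib

def merkle_damgard(compress, iv, message, block_size=64):
    """Plain Merkle-Damgard iteration with length-strengthening padding."""
    # Pad with 0x80, zero bytes, then the 8-byte big-endian message length.
    padded = message + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % block_size)
    padded += len(message).to_bytes(8, "big")
    # Iterate the compression function over the message blocks.
    state = iv
    for i in range(0, len(padded), block_size):
        state = compress(state, padded[i:i + block_size])
    return state

# Toy compression function (illustrative only): hash state and block together.
compress = lambda state, block: hashlib.sha256(state + block).digest()
iv = b"\x00" * 32
digest = merkle_damgard(compress, iv, b"attack at dawn")
```

The point of the framework is that the collapsing property of such an iterated construction can be argued by decomposing the loop body into elementary composition operations, each handled classically.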
Oleg Taraskin, Vladimir Soukharev, David Jao, Jason LeGrow
Password-authenticated key establishment (PAKE) is a cryptographic primitive that allows two parties who share a low-entropy secret (a password) to securely establish cryptographic keys in the absence of public key infrastructure. We present the first quantum-resistant password-authenticated key exchange scheme based on supersingular elliptic curve isogenies. The scheme is built upon supersingular isogeny Diffie-Hellman, and uses the password to generate functions which obscure the auxiliary points used in the computation. We include a detailed security proof based on a number of reasonable computational problems on supersingular elliptic curves.
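The general pattern of letting the password determine the group elements that are exchanged can be sketched in the classical Diffie-Hellman setting with a SPEKE-style generator derivation. This is emphatically not the paper's isogeny construction, only an analogue of the idea; the group size and all names are illustrative (real deployments need groups of at least 2048 bits).

```python
import hashlib
import random

P = 1019   # toy safe prime (P = 2*Q + 1); far too small for real use
Q = 509    # prime order of the subgroup of squares mod P

def pwd_generator(password: bytes) -> int:
    """SPEKE-style: hash the password into the order-Q subgroup by squaring."""
    h = hashlib.sha256(password).digest()
    while True:
        g = pow(int.from_bytes(h, "big") % P, 2, P)
        if g != 1:
            return g
        h = hashlib.sha256(h).digest()  # extremely rare: rehash a degenerate value

def keypair(g: int, rng: random.Random):
    x = rng.randrange(1, Q)
    return x, pow(g, x, P)

def shared_key(their_pub: int, my_priv: int) -> str:
    # Hash the Diffie-Hellman shared value into a session key.
    return hashlib.sha256(pow(their_pub, my_priv, P).to_bytes(2, "big")).hexdigest()

# Both parties derive the generator from the shared password, then run DH.
g = pwd_generator(b"correct horse")
rng = random.Random(2018)
alice_priv, alice_pub = keypair(g, rng)
bob_priv, bob_pub = keypair(g, rng)
```

An eavesdropper who does not know the password does not even know which generator was used, which is the intuition the paper transports to the isogeny setting by obscuring the auxiliary points.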
Shashank Agrawal, Peihan Miao, Payman Mohassel, Pratyay Mukherjee
Token-based authentication is commonly used to enable a single-sign-on experience on the web, in mobile applications and on enterprise networks using a wide range of open standards and network authentication protocols: clients sign on to an identity provider using their username/password to obtain a cryptographic token generated with a master secret key, and store the token for future accesses to various services and applications. The authentication servers are single points of failure: if breached, they enable attackers to forge arbitrary tokens or mount offline dictionary attacks to recover client credentials.
Our work is the first to introduce and formalize the notion of password-based threshold token-based authentication, which distributes the role of an identity provider among $n$ servers. Any $t$ servers can collectively verify passwords and generate tokens, while no coalition of $t-1$ servers can forge a valid token or mount an offline dictionary attack. We then introduce PASTA, a general framework that can be instantiated using any threshold token generation scheme, wherein clients can "sign on" using a two-round (optimal) protocol that meets our strong notions of unforgeability and password-safety.
We instantiate and implement our framework in C++ using two threshold message authentication codes (MACs) and two threshold digital signatures with different trade-offs. Our experiments show that the overhead of protecting secrets and credentials against breaches in PASTA, i.e., compared to a naive single-server solution, is extremely low (1-5%) in the most likely setting where client and servers communicate over the internet. The overhead is higher in the case of MAC-based tokens over a LAN (though still only a few milliseconds) due to public-key operations in PASTA. We show, however, that this cost is inherent by proving a symmetric-key-only solution impossible.
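The $t$-of-$n$ access structure behind such schemes can be illustrated with plain Shamir secret sharing. Note this is only an illustration of the threshold idea: PASTA's threshold MACs and signatures never reconstruct the master key at any single party, and all names and parameters below are mine, not from the paper.

```python
import random

P = 2**127 - 1  # Mersenne prime; the field the shares live in

def share(secret, t, n, seed=None):
    """Split `secret` into n Shamir shares; any t of them reconstruct it."""
    rng = random.Random(seed)
    # Random polynomial of degree t-1 with constant term = secret.
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):   # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from (at least) t shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Fewer than $t$ shares determine a polynomial of too low a degree, so its value at 0 is information-theoretically independent of the secret; the threshold primitives in PASTA apply the same structure to the token-signing key itself.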