IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
06 September 2018
Aljosha Judmayer, Nicholas Stifter, Philipp Schindler, Edgar Weippl
The increasing number of cryptocurrencies, as well as the rising number of actors within each single cryptocurrency, inevitably leads to tensions between the respective communities.
As with open source projects, (protocol) forks are often the result of broad disagreement.
Usually, after a permanent fork, both communities "mine" their own business and the conflict is resolved.
But what if this is not the case?
In this paper, we outline the possibility of malicious forking and consensus techniques that aim at destroying the other branch of a protocol fork.
Thereby, we illustrate how merged mining can be used as an attack method against a permissionless PoW cryptocurrency that itself involuntarily serves as the parent chain for an attacking merge-mined branch of a hard fork.
Kai Bemmann, Johannes Blömer, Jan Bobolz, Henrik Bröcher, Denis Diemert, Fabian Eidens, Lukas Eilers, Jan Haltermann, Jakob Juhnke, Burhan Otour, Laurens Porzenheim, Simon Pukrop, Erik Schilli
We present $\mathsf{CLARC}$ (Cryptographic Library for Anonymous Reputation and Credentials), an anonymous credentials system (ACS) combined with an anonymous reputation system.
Using $\mathsf{CLARC}$, users can receive attribute-based credentials from issuers. They can efficiently prove that their credentials satisfy complex (access) policies in a privacy-preserving way. This implements anonymous access control with complex policies.
Furthermore, $\mathsf{CLARC}$ is the first ACS that is combined with an anonymous reputation system where users can anonymously rate services. A user who gets access to a service via a credential, also anonymously receives a review token to rate the service. If a user creates more than a single rating, this can be detected by anyone, preventing users from spamming ratings to sway public opinion.
To evaluate feasibility of our construction, we present an open-source prototype implementation.
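One common way to make double ratings publicly detectable, sketched below, is to attach to each rating a deterministic tag computed as a PRF of the user's secret and the service identity: two ratings of the same service by the same user then carry identical tags. This is a minimal illustration of the general technique, not necessarily CLARC's exact mechanism, and in a real ACS the tag would additionally be proven well-formed in zero knowledge.

    import hmac, hashlib

    def rating_tag(user_secret: bytes, service_id: bytes) -> bytes:
        # Deterministic per-(user, service) tag: a second rating of the same
        # service by the same user repeats the tag and is publicly linkable.
        # (Illustrative sketch; CLARC's concrete mechanism may differ.)
        return hmac.new(user_secret, service_id, hashlib.sha256).digest()

    seen = set()

    def accept_rating(tag: bytes) -> bool:
        if tag in seen:
            return False       # duplicate rating: detectable by anyone
        seen.add(tag)
        return True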
Dennis Hofheinz, Dingding Jia, Jiaxin Pan
We propose the first identity-based encryption (IBE) scheme that is (almost) tightly secure against chosen-ciphertext attacks. Our scheme is efficient, in the sense that its ciphertext overhead is only seven group elements, three group elements more than that of the state-of-the-art passively (almost) tightly secure IBE scheme. Our scheme is secure in a multi-challenge setting, i.e., in the face of an arbitrary number of challenge ciphertexts. The security of our scheme is based upon the standard symmetric external Diffie-Hellman assumption in pairing-friendly groups, but we also consider (less efficient) generalizations under weaker assumptions.
Jie Chen, Junqing Gong, Hoeteck Wee
In this work, we propose two inner-product encryption (IPE) schemes achieving both adaptive security and full attribute-hiding in the prime-order bilinear group, which improve upon the only existing result satisfying both features, due to Okamoto and Takashima [Eurocrypt '12], in terms of efficiency.
- Our first IPE scheme is based on the standard $k$-Lin assumption and has a shorter master public key and shorter secret keys than Okamoto and Takashima's IPE under the weaker DLIN ($=2$-Lin) assumption.
- Our second IPE scheme is adapted from the first one; its security is based on the XDLIN assumption (as is Okamoto and Takashima's IPE), but it now also enjoys shorter ciphertexts.
Technically, instead of starting from a composite-order IPE and applying existing transformations, we start from an IPE scheme in a very restricted setting, but already in the prime-order group, and then gradually upgrade it to our full-fledged IPE scheme. This method allows us to integrate the framework of Chen et al. [Eurocrypt '15] with recent new techniques [TCC '17, Eurocrypt '18] in an optimized way.
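For context, the zero-inner-product functionality that such IPE schemes implement can be stated as follows (standard notation, not taken verbatim from the paper):

\[
\mathsf{Dec}\big(\mathsf{sk}_{\vec y},\ \mathsf{Enc}(\mathsf{mpk}, \vec x, m)\big) = m \iff \langle \vec x, \vec y \rangle = 0,
\]

and full attribute-hiding requires that ciphertexts reveal nothing about $\vec x$ beyond the predicate bits $[\langle\vec x,\vec y_i\rangle = 0]$ for the queried key vectors $\vec y_i$.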
Ashrujit Ghoshal, Rajat Sadhukhan, Sikhar Patranabis, Nilanjan Datta, Stjepan Picek, Debdeep Mukhopadhyay
This work focuses on side-channel resilient design strategies for symmetric-key cryptographic primitives targeting lightweight applications. In light of NIST's lightweight cryptography project, design choices for block ciphers must consider not only security against traditional cryptanalysis, but also side-channel security, while adhering to low area and power requirements. In this paper, we explore design strategies for substitution-permutation network (SPN)-based block ciphers that make them amenable to low-cost threshold implementations (TI) - a provably secure strategy against side-channel attacks. The core building blocks for our strategy are cryptographically optimal 4x4 S-Boxes, implemented via repeated iterations of simple cellular automata (CA) rules. We present highly optimized TI circuits for such S-Boxes that consume nearly 40% less area and power than popular lightweight S-Boxes such as those of PRESENT and GIFT. We validate our claims via implementation results on ASIC using 180nm technology. We also present a comparison of TI circuits for two popular lightweight linear diffusion layer choices - bit permutations and MixColumns using almost-maximum-distance-separable (almost-MDS) matrices. Finally, we illustrate design paradigms that combine the aforementioned TI circuits for S-Boxes and diffusion layers to obtain fully side-channel secure SPN block cipher implementations with low area and power requirements.
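As a rough illustration of the design approach (a toy sketch of searching CA rules whose iteration yields a bijective 4x4 S-Box; the concrete rule choices and cryptographic optimality criteria are those of the paper), one can enumerate local update rules and keep those producing permutations:

    from itertools import product

    def make_sbox(rule, rounds=3, n=4):
        # Iterate a local CA rule (a function of a cell's 3-bit neighbourhood)
        # synchronously over an n-bit state, `rounds` times.
        def step(s):
            return [rule(s[(i - 1) % n], s[i], s[(i + 1) % n]) for i in range(n)]
        table = []
        for x in range(2 ** n):
            s = [(x >> i) & 1 for i in range(n)]
            for _ in range(rounds):
                s = step(s)
            table.append(sum(b << i for i, b in enumerate(s)))
        return table

    # Search all 3-input local rules whose iteration is a permutation; the
    # paper additionally filters such candidates for cryptographic optimality.
    bijective = []
    for bits in product([0, 1], repeat=8):
        rule = lambda l, c, r, bits=bits: bits[(l << 2) | (c << 1) | r]
        t = make_sbox(rule)
        if len(set(t)) == 16:
            bijective.append(t)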
Keita Xagawa
RaCoSS is a signature scheme based on the syndrome decoding problem over random linear codes, proposed by Fukushima, Roy, Xu, Kiyomoto, Morozov, and Takagi. This scheme was cryptanalyzed by Bernstein, Hülsing, Lange, and Panny (pqc-forum, 23 Dec. 2017).
Roy, Morozov, Fukushima, Kiyomoto, and Takagi recently gave a patch and call the patched scheme RaCoSS-R (ISEC Conf., 25 Jul. 2018).
This short note describes how to break RaCoSS-R by modifying the forgery attack against RaCoSS.
Andreas Wiemers
The success rate is the most common evaluation metric for measuring the performance of a particular side channel attack scenario. We improve on an analytic formula for the success rate.
Wutichai Chongchitmate, Rafail Ostrovsky
Byzantine broadcast is a fundamental primitive for secure computation. In a setting with $n$ parties in the presence of an adversary controlling at most $t$ parties, a lot of progress in optimizing communication complexity has been made for $t < n/2$, but little for the general case $t<n$, especially for information-theoretic security. In particular, all information-theoretically secure broadcast protocols for $\ell$-bit messages and $t<n$ with optimal round complexity $\mathcal{O}(n)$ have, so far, required a communication complexity of $\mathcal{O}(\ell n^2)$. A broadcast extension protocol allows a long message to be broadcast more efficiently using a small number of single-bit broadcasts. Through broadcast extension, the best round complexity achieved so far in the $t<n$ setting with the optimal communication complexity of $\mathcal{O}(\ell n)$ is $\mathcal{O}(n^4)$ rounds.
In this work, we construct a new broadcast extension protocol for $t<n$ with information-theoretic security. Our protocol improves the round complexity to $\mathcal{O}(n^3)$ while maintaining the optimal communication complexity for long messages. Our result narrows the gap between the information-theoretic setting and the computational setting, and between the optimal-communication protocol and the optimal-round protocol in the information-theoretic setting for $t<n$.
Eli Ben-Sasson, Alessandro Chiesa, Michael Riabzev, Nicholas Spooner, Madars Virza, Nicholas P. Ward
We design, implement, and evaluate a zero knowledge succinct non-interactive argument (SNARG) for Rank-1 Constraint Satisfaction (R1CS), a widely-deployed NP language undergoing standardization. Our SNARG has a transparent setup, is plausibly post-quantum secure, and uses lightweight cryptography. A proof attesting to the satisfiability of $n$ constraints has size $O(\log^2 n)$; it can be produced with $O(n \log n)$ field operations and verified with $O(n)$ field operations. At 128 bits of security, proofs are less than 250kB even for several million constraints, more than 10x shorter than prior SNARGs with similar features.
A key ingredient of our construction is a new Interactive Oracle Proof (IOP) for solving a univariate analogue of the classical sumcheck problem [LFKN92], originally studied for multivariate polynomials. Our protocol verifies the sum of entries of a Reed--Solomon codeword over any subgroup of a field.
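The univariate sumcheck rests on a simple fact that is easy to check in code: over a multiplicative subgroup $H$ of a finite field, every monomial $x^k$ with $0 < k < |H|$ sums to zero, so the sum of a polynomial of degree below $|H|$ over $H$ is determined by its constant coefficient. A quick numerical sanity check with toy parameters of our own choosing:

    p = 17                                  # toy prime field F_p
    g, n = 4, 4                             # g generates a subgroup H of order n | p-1
    H = [pow(g, i, p) for i in range(n)]    # H = {1, 4, 16, 13}

    def f(x, coeffs):                       # polynomial with deg f < |H|
        return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p

    coeffs = [7, 2, 5, 11]
    # sum of f over H equals |H| times the constant coefficient c_0
    assert sum(f(a, coeffs) for a in H) % p == (n * coeffs[0]) % p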
We also provide libiop, a library for writing IOP-based arguments, in which a toolchain of transformations enables programmers to write new arguments by writing simple IOP sub-components. We have used this library to specify our construction and prior ones, and plan to open-source it.
Sébastien Canard, David Pointcheval, Quentin Santos, Jacques Traoré
Recent advances in cryptography promise to let us run complex algorithms in the encrypted domain. However, these results remain mostly theoretical, since running times are still much larger than their equivalents in the plaintext domain. In this context, Majority Judgment is a recent proposal for a new voting system with several interesting practical advantages, but one that implies a more involved tallying process than first-past-the-post voting. To protect voters' privacy, such a process needs to be carried out by manipulating only encrypted data.
In this paper, we explore the possibility of computing the (ordered) winners of a Majority Judgment election without leaking any other information, using homomorphic encryption and multiparty computation. We particularly focus on the practicality of such a solution and, for this purpose, optimize both the algorithms and the implementations of several cryptographic building blocks. Our result is very positive, showing that it is now possible to attain practical running times for such a complex privacy-protecting tallying process, even for large-scale elections.
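To make the tallying process concrete, here is the plaintext computation that the paper's protocol performs over encrypted data (a simplified sketch: Majority Judgment orders candidates by median grade, and we omit the tie-breaking procedure between equal medians):

    # Each candidate has one grade per voter; higher grades are better.
    ballots = {
        "Alice": [3, 4, 2, 4, 1],
        "Bob":   [2, 2, 3, 2, 4],
    }

    def majority_grade(grades):
        s = sorted(grades)
        return s[(len(s) - 1) // 2]   # lower median, per Majority Judgment

    # Ordered winners (tie-breaking between equal medians omitted for brevity)
    ranking = sorted(ballots, key=lambda c: majority_grade(ballots[c]), reverse=True)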
Tibor Jager, Rafael Kurek, Jiaxin Pan
We construct efficient and tightly secure pseudorandom functions (PRFs) with only logarithmic security loss and short secret keys. This yields very simple and efficient variants of well-known constructions, including those of Naor-Reingold (FOCS 1997) and Lewko-Waters (ACM CCS 2009). Most importantly, in combination with the construction of Banerjee, Peikert and Rosen (EUROCRYPT 2012) we obtain the currently most efficient LWE-based PRF from a weak LWE assumption with a much smaller modulus than the original construction. In comparison to the only previous construction with this property, which is due to Doettling and Schroeder (CRYPTO 2015), we use a modulus of similar size, but only a single instance of the underlying PRF, instead of $\lambda \cdot \omega(\log \lambda)$ parallel instances, where $\lambda$ is the security parameter. Like Doettling and Schroeder's, our security proof is only almost black-box, due to the fact that the number of queries made by the adversary and its advantage must be known a priori.
Technically, we introduce all-prefix universal hash functions (APUHFs), which are hash functions that are (almost-)universal, even if any prefix of the output is considered. We give simple and very efficient constructions of APUHFs, and show how they can be combined with the augmented cascade of Boneh et al. (ACM CCS 2010) to obtain our results. Along the way, we develop a new and more direct way to prove security of PRFs based on the augmented cascade.
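In our reading, the defining property can be phrased as follows (a paraphrase suggested by the name and the abstract, not the paper's exact formalization): a family $\mathcal{H}$ of functions $h\colon\{0,1\}^m\to\{0,1\}^n$ is all-prefix (almost-)universal if for all distinct inputs $x \neq y$ and every prefix length $1 \le i \le n$,

\[
\Pr_{h \leftarrow \mathcal{H}}\big[\, h(x)_{[1..i]} = h(y)_{[1..i]} \,\big] \le \delta_i,
\]

where $h(z)_{[1..i]}$ denotes the first $i$ output bits and $\delta_i$ is close to the ideal $2^{-i}$.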
Kostas Papagiannopoulos
Side-channel countermeasure designers often face severe performance overheads when trying to protect a device. Widely applied countermeasures such as masking and shuffling entail generating a large amount of random numbers, which can result in a computational bottleneck. To mitigate the randomness cost, this work evaluates low-randomness versions of both masking and shuffling, namely Recycled Randomness Masking (RRM) and Reduced Randomness Shuffling (RRS). These countermeasures employ memory units to store generated random numbers and reuse them in subsequent computations, making them primarily suitable for implementation on devices with sufficient memory. Both RRM and RRS are evaluated using the MI-based framework in the context of horizontal attacks.
The evaluation exhibits the tradeoff between the randomness cost and the noisy-leakage security level offered by the countermeasures, enabling the designer to fine-tune a masking or shuffling scheme and maximize the security level achieved for a given cost.
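As a rough illustration of the recycling idea (a concept sketch under our own simplifications, not the schemes evaluated in the paper), a first-order Boolean masking implementation can draw its masks from a small stored pool instead of making a fresh RNG call per share:

    import secrets

    POOL_SIZE = 4
    pool = [secrets.randbits(8) for _ in range(POOL_SIZE)]  # pre-generated masks
    counter = 0

    def next_mask():
        # Recycling: cycle through the stored pool instead of fresh randomness;
        # this trades noisy-leakage security for a lower randomness cost.
        global counter
        m = pool[counter % POOL_SIZE]
        counter += 1
        return m

    def share(x):
        r = next_mask()
        return (x ^ r, r)   # first-order Boolean sharing (x XOR r, r)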
Luca De Feo, Steven D. Galbraith
We give a new signature scheme for isogenies that combines the class group actions of CSIDH with the notion of Fiat-Shamir with aborts. Our techniques yield signatures of size less than one kilobyte at the 128-bit security level, even with a tight security reduction (to a non-standard problem) in the quantum random oracle model. Hence our signatures are potentially shorter than lattice signatures, but signing and verification are currently very expensive.
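Schematically, Fiat-Shamir with aborts has the following shape (a generic sketch with toy scalar arithmetic and a hash standing in for the class group action; names and parameters are ours, not the paper's):

    import hashlib, secrets

    B, TAU = 1000, 10   # toy bounds: ephemeral range B, bound TAU on |c * s|
    s = 7               # toy secret (stand-in for the secret isogeny exponents)

    def sign(msg: bytes):
        while True:                                   # the "aborts" loop
            y = secrets.randbelow(2 * B + 1) - B      # ephemeral in [-B, B]
            com = hashlib.sha256(str(y).encode()).hexdigest()  # stand-in commitment
            h = hashlib.sha256(com.encode() + msg).digest()
            c = h[0] % 3 - 1                          # challenge in {-1, 0, 1}
            z = y - c * s
            if abs(z) <= B - TAU:                     # reject responses whose range
                return com, c, z                      # would leak the secret

In the actual scheme the commitment is a curve obtained via the class group action on the ephemeral secret, and verification recomputes it from $z$ and $c$.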
Moni Naor, Lior Rotem, Gil Segev
Faced with the threats posed by man-in-the-middle attacks, messaging platforms rely on "out-of-band" authentication, assuming that users have access to an external channel for authenticating one short value. For example, assuming that users who recognize each other's voice can authenticate a short value, Telegram and WhatsApp ask their users to compare $288$-bit and $200$-bit values, respectively. The existing protocols, however, do not take into account the plausible behavior of users who may be "lazy" and only compare parts of these values (rather than their entirety).
Motivated by such security-critical user behavior, we study the security of lazy users in out-of-band authentication. We start by showing that both the protocol implemented by WhatsApp and the statistically-optimal protocol of Naor, Segev and Smith (CRYPTO '06) are completely vulnerable to man-in-the-middle attacks when users consider only half of the out-of-band authenticated value. In this light, we put forward a framework that captures the behavior and security of lazy users. Our notions of security consider both statistical security and computational security, and for each flavor we derive a lower bound on the tradeoff between the number of positions considered by the lazy users and the adversary's forgery probability.
Within our framework we then provide two authentication protocols. First, in the statistical setting, we present a transformation that converts any out-of-band authentication protocol into one that is secure even when executed by lazy users. Instantiating our transformation with a new refinement of the protocol of Naor et al. results in a protocol whose tradeoff essentially matches our lower bound in the statistical setting. Then, in the computational setting, we show that the computationally-optimal protocol of Vaudenay (CRYPTO '05) is secure even when executed by lazy users -- and its tradeoff matches our lower bound in the computational setting.
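The danger of lazy comparison is easy to demonstrate: an attacker only needs to match the positions the user actually checks, so the work of forgery drops exponentially with the number of ignored positions. A toy brute-force illustration (SHA-256 as a stand-in for the authenticated value; parameters ours):

    import hashlib, itertools

    def tag(m: bytes) -> str:
        return hashlib.sha256(m).hexdigest()   # stand-in out-of-band value

    target = tag(b"legitimate session")
    CHECKED = 4   # a lazy user compares only the first 4 hex characters

    for i in itertools.count():
        forged = b"attacker session %d" % i
        if tag(forged)[:CHECKED] == target[:CHECKED]:
            break   # expected after about 16**4 = 65536 attempts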
Jonathan Bootle, Claire Delaplace, Thomas Espitau, Pierre-Alain Fouque, Mehdi Tibouchi
This paper is devoted to analyzing the variant of Regev's learning with errors (LWE) problem in which modular reduction is omitted: namely, the problem (ILWE) of recovering a vector $\vec s\in\mathbb{Z}^n$ given polynomially many samples of the form $(\vec a,\langle\vec a,\vec s\rangle + e)\in\mathbb{Z}^{n+1}$ where $\vec a$ and $e$ follow fixed distributions. Unsurprisingly, this problem is much easier than LWE: under mild conditions on the distributions, we show that the problem can be solved efficiently as long as the variance of $e$ is not superpolynomially larger than that of $\vec a$. We also provide almost tight bounds on the number of samples needed to recover $\vec s$.
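Since no modular reduction is involved, ordinary least squares already illustrates why ILWE is easy in this regime (a numerical toy with parameters of our own choosing, not the paper's estimator analysis):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 32, 4096                      # dimension, number of samples
    s = rng.integers(-4, 5, size=n)      # secret vector in Z^n
    A = rng.integers(-8, 9, size=(m, n))
    e = rng.integers(-8, 9, size=m)      # noise of comparable variance, no mod q
    b = A @ s + e

    # Solve the noisy, unreduced linear system and round to integers
    s_hat = np.rint(np.linalg.lstsq(A, b, rcond=None)[0]).astype(int)
    assert np.array_equal(s_hat, s)      # per-coordinate error ~ sigma_e / sqrt(m)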
Our interest in studying this problem stems from the side-channel attack against the BLISS lattice-based signature scheme described by Espitau et al. at CCS 2017. The attack targets a quadratic function of the secret that leaks in the rejection sampling step of BLISS. The same part of the algorithm also suffers from a linear leakage, but the authors claimed that this leakage could not be exploited due to signature compression: the linear system arising from it turns out to be noisy, and hence key recovery amounts to solving a high-dimensional problem analogous to LWE, which seemed infeasible. However, this noisy linear algebra problem does not involve any modular reduction: it is essentially an instance of ILWE, and can therefore be solved efficiently using our techniques. This allows us to obtain an improved side-channel attack on BLISS, which applies to 100% of secret keys (as opposed to ~7% in the CCS paper), and is also considerably faster.
Prasanna Ravi, Mahabir P. Jhanwar, James Howe, Anupam Chattopadhyay, Shivam Bhasin
The recent lattice-based signature scheme Dilithium, submitted as part of the CRYSTALS (Cryptographic Suite for Algebraic Lattices) package, is one of a number of strong candidates submitted to the NIST standardisation process for post-quantum cryptography. The Dilithium signature scheme is based on the Fiat-Shamir paradigm and can be seen as a variant of the Bai-Galbraith (BG) scheme combined with several improvements from earlier lattice-based signature schemes such as GLP and BLISS. One of the main features of Dilithium is its compressed public key, which is a rounded version of the LWE instance. This implies that Dilithium is not breakable with knowledge of only the secret or the error of the LWE instance, unlike its ancestor lattice-based signature schemes. In this paper, we investigate the security of Dilithium against a combination of side-channel and classical attacks. Side-channel attacks on the schoolbook and optimised polynomial multiplication algorithms in the signing procedure are shown to extract the secret component of the LWE instance, which is just one among the multiple components of Dilithium's secret key. We then propose an alternative signing procedure through which it is possible to forge signatures with only the extracted portion of the secret key, without requiring knowledge of all its elements, thus showing that Dilithium, too, breaks given just the secret portion of the LWE instance, like previous lattice-based schemes.
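To see where the targeted leakage arises, consider schoolbook polynomial multiplication in $\mathbb{Z}_q[x]/(x^n+1)$, sketched below with a toy degree (the attack point, each product $a_i \cdot s_j$ involving a secret coefficient, is marked; this is an illustration of the leaking operation, not the paper's attack code):

    q, n = 8380417, 8   # Dilithium's modulus q; toy degree n for illustration

    def schoolbook_mul(a, s):
        r = [0] * (2 * n)
        for i in range(n):
            for j in range(n):
                # side-channel target: this multiplication touches the
                # secret coefficient s[j] together with public a[i]
                r[i + j] = (r[i + j] + a[i] * s[j]) % q
        # reduce modulo x^n + 1
        return [(r[k] - r[k + n]) % q for k in range(n)]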
David Sommer, Esfandiar Mohammadi, Sebastian Meiser
In recent years, privacy enhancing technologies have gained tremendous momentum and are expected to remain important. Quantifying the degree of privacy offered by any mechanism working on potentially sensitive data is a complex and well-researched topic; epsilon-differential privacy (DP) and its slightly weaker and more versatile variant (epsilon,delta)-approximate differential privacy (ADP) have become the de-facto standard for privacy measures in the literature. Recently, novel variants of (A)DP have focused on giving tighter privacy bounds under continual observation.
In this paper, we unify many of these previous works in a common core theory, focused on the privacy loss of a mechanism. We show that under sequential composition of the mechanism, the privacy loss (represented as a distribution) undergoes a convolution, which in turn enables us to show the central limit theorem for differential privacy: the privacy loss of any mechanism converges to a Gauss distribution. This observation leads us to several practically relevant insights: 1) we show that several of the novel DP-variants are equally expressive as ADP, 2) we improve existing bounds, such as the moments accountant bound, 3) we derive exact ADP guarantees for the Gauss mechanism, i.e., an analytical and simple formula to directly calculate ADP (not an over-approximating bound), 4) we derive exact ADP guarantees for Randomized Response, and 5) we characterize the privacy guarantees of a mechanism by the Gauss distribution to which it converges, its privacy class, and, using normal approximation theorems, derive novel upper and lower ADP bounds for arbitrary mechanisms.
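The convolution view is easy to state operationally (a toy numeric sketch under our own discretization; the paper works with exact privacy loss distributions):

    import numpy as np

    # Toy privacy loss distribution (PLD) on an evenly spaced grid of losses
    losses = np.array([-1.0, 0.0, 1.0])
    probs  = np.array([0.2, 0.5, 0.3])

    # k-fold sequential composition = k-fold convolution of the PLD
    k = 20
    composed = np.array([1.0])
    for _ in range(k):
        composed = np.convolve(composed, probs)
    # composed[j] is the probability of total loss losses[0]*k + j * 1.0;
    # by the central limit theorem it approaches a Gaussian shape as k grows.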
Ritam Bhaumik, Eik List, Mridul Nandi
Strong Pseudo-random Permutations (SPRPs) are important for various applications. In general, it is desirable to base an SPRP on a single-keyed primitive for minimizing the implementation costs. For constructions built on classical block ciphers, Nandi showed at ASIACRYPT'15 that at least two calls to the primitive per processed message block are required for SPRP security, assuming that all further operations are linear. The ongoing trend of using tweakable block ciphers as primitive has already led to MACs or encryption modes with high security and efficiency properties. Thus, three interesting research questions are hovering in the domain of SPRPs: (1) if and to which extent the bound of two calls per block can be reduced with a tweakable block cipher, (2) how concrete constructions could be realized, and (3) whether full $n$-bit security is achievable from primitives with $n$-bit state size.
The present work addresses all three questions. Inspired by Iwata et al.'s ZHash proposal at CRYPTO'17, we propose the ZCZ (ZHash-Counter-ZHash) construction, a single-key variable-input-length SPRP based on a single tweakable block cipher whose tweak length is at least its state size. ZCZ possesses close to optimal properties with regard to both performance and security: not only does it require only asymptotically $3\ell/2$ calls to the primitive for $\ell$-block messages, but we also show that this figure is close to the minimum by a PRP distinguishing attack on any construction with tweak size $\tau = n$ bits and fewer than $(3\ell-1)/2$ calls to the same primitive. Moreover, it provides optimal $n$-bit security for a primitive with $n$-bit state and tweak size.
Yunhua Wen, Shengli Liu
A fuzzy extractor (FE) aims at deriving and reproducing (almost) uniform cryptographic keys from noisy non-uniform sources. To reproduce an identical key $R$ from subsequent readings of a noisy source, it is necessary to eliminate the noises from those readings. To this end, a public helper string $P$, together with key $R$, is produced from the first reading of the source during the initial enrollment phase.
In this paper, we consider computational fuzzy extractors. We formalize the robustly reusable fuzzy extractor (rrFE), which considers reusability and robustness simultaneously in the Common Reference String (CRS) model. Reusability of an rrFE deals with source reuse: it guarantees that the key $R$ output by the fuzzy extractor is pseudo-random even if the initial enrollment is applied to the same source several times, generating multiple public helper strings and keys $(P_i, R_i)$. Robustness of an rrFE deals with active probabilistic polynomial-time adversaries, who may manipulate the public helper string $P_i$ to affect the reproduction of $R_i$; any modification of $P_i$ by the adversary will be detected.
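For readers new to the primitive, the classical code-offset construction conveys the $(\mathsf{Gen},\mathsf{Rep})$ interface (a minimal toy with a repetition code; it illustrates plain fuzzy extraction only, not the reusability or robustness mechanisms of rrFE):

    import secrets, hashlib

    N = 5                                    # repetition-code length

    def encode(bit):  return [bit] * N
    def decode(bits): return int(sum(bits) > N // 2)   # majority vote

    def gen(w):                              # w: list of N noisy source bits
        c = encode(secrets.randbits(1))      # random codeword
        P = [wi ^ ci for wi, ci in zip(w, c)]        # public helper string
        R = hashlib.sha256(bytes(c)).digest()        # extracted key
        return R, P

    def rep(w2, P):                          # w2: fresh, slightly noisy reading
        c2 = encode(decode([wi ^ pi for wi, pi in zip(w2, P)]))
        return hashlib.sha256(bytes(c2)).digest()    # equals R if few bit flips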
Haiyang Xue, Xianhui Lu, Bao Li, Bei Liang, Jingnan He
Motivated by abstracting the common idea behind several implicitly authenticated key exchange (AKE) protocols, we introduce a primitive that we call the double-key key encapsulation mechanism (2-key KEM). It is a special type of KEM involving two pairs of secret and public keys and satisfying certain functional and security properties. Such a 2-key KEM serves as the core building block and provides alternative approaches to simplify the construction of AKE.
To see the usefulness of 2-key KEM, we show how several existing constructions of AKE can be captured as 2-key KEMs and understood in a unified framework, including the widely used HMQV, NAXOS, Okamoto-AKE, and FSXY12-13 schemes. Then, we show 1) how to construct 2-key KEMs from concrete assumptions, 2) how to adapt the classical Fujisaki-Okamoto transformation and KEM combiners to achieve the security requirements of 2-key KEM, and 3) an elegant Kyber-AKE over lattices using the improved Fujisaki-Okamoto technique.
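The syntax suggested by the name can be sketched as an interface (our rendering of the abstract description; the precise functional and security properties are as defined in the paper):

    from typing import Protocol, Tuple

    class TwoKeyKEM(Protocol):
        # Two independent key pairs participate in one encapsulation,
        # mirroring the long-term and ephemeral keys of an AKE protocol.
        def keygen(self) -> Tuple[bytes, bytes]: ...            # (pk, sk)
        def encaps(self, pk1: bytes, pk2: bytes) -> Tuple[bytes, bytes]:
            ...                                                 # (ciphertext, K)
        def decaps(self, sk1: bytes, sk2: bytes, ct: bytes) -> bytes:
            ...                                                 # shared key K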