International Association for Cryptologic Research

IACR News Central


16:17 [Pub][ePrint] Leakage-Resilient Symmetric Encryption via Re-keying, by Michel Abdalla and Sonia Belaïd and Pierre-Alain Fouque

  In this paper, we study whether it is possible to construct an efficient leakage-resilient symmetric scheme using the AES block cipher. We aim at bridging the gap between the theoretical leakage-resilient symmetric primitives used to build encryption schemes and the practical schemes that lack any security proof against side-channel adversaries. Our goal is to construct a leakage-resilient encryption scheme that is as efficient as possible, without changing the cryptographic schemes already implemented. The basic idea, already suggested by Kocher to thwart differential power analysis (DPA), consists in adding a leakage-resilient re-keying scheme on top of the encryption scheme. Indeed, in such an analysis, the adversary queries the encryption box and, from the knowledge of plaintext/ciphertext pairs, can perform a divide-and-conquer key-recovery attack. The method of changing the key for each encryption, or after a small number of encryptions with the same key, is known as re-keying. It prevents DPA adversaries, but not SPA attacks, which use a single leakage trace.

Here, we prove that using a leakage-resilient re-keying scheme on top of an encryption scheme secure in the standard model leads to a leakage-resilient encryption scheme. The main advantage of the AES block cipher is that its implementations are generally heuristically secure against SPA adversaries; this assumption is used in many concrete instantiations of leakage-resilient symmetric primitives. Consequently, if we rely on it and change the key for each new message block, the adversary will not be able to recover any key, provided the re-keying scheme is leakage-resilient. There are essentially two techniques for re-keying, parallel and sequential, but if we want to prevent the adversary from having access to many inputs/outputs, only the sequential method is possible. However, the main drawback of the sequential technique is that, in case of de-synchronization, many useless computations are required. In our re-keying scheme, we use ideas from the skip-list data structure to efficiently recover a specific key.
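A minimal sketch of the sequential re-keying idea, with HMAC-SHA256 standing in both for the re-keying function and for the block cipher (the paper's scheme uses AES and a leakage-resilient re-keying function; all names below are illustrative):

```python
import hmac, hashlib

def next_key(key: bytes) -> bytes:
    # Derive the next session key from the current one (sequential re-keying).
    # HMAC-SHA256 is a stand-in for a leakage-resilient key-derivation function.
    return hmac.new(key, b"rekey", hashlib.sha256).digest()

def keystream_block(key: bytes, block_index: int) -> bytes:
    # Stand-in for one block-cipher call under the current session key.
    return hmac.new(key, block_index.to_bytes(8, "big"), hashlib.sha256).digest()

def encrypt(master_key: bytes, plaintext_blocks):
    key = master_key
    out = []
    for i, block in enumerate(plaintext_blocks):
        pad = keystream_block(key, i)
        out.append(bytes(a ^ b for a, b in zip(block, pad)))
        key = next_key(key)  # fresh key per block: a DPA adversary sees each key only once
    return out
```

Since encryption is an XOR with a key-dependent pad, running `encrypt` again on the ciphertext with the same master key recovers the plaintext.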

16:17 [Pub][ePrint] Towards Key-Length Extension with Optimal Security: Cascade Encryption and Xor-cascade Encryption, by Jooyoung Lee and Martijn Stam

  This paper discusses the provable security of two types of cascade encryption. The first construction $\mathrm{CE}^l$, called $l$-cascade encryption, is obtained by sequentially composing $l$ blockcipher calls with independent keys. The security of $\mathrm{CE}^l$ had been a longstanding open problem until Gaži and Maurer [GM09] proved its security up to $2^{\kappa+\min\{n/2,\kappa\}}$ query complexity for large cascading length, where $\kappa$ and $n$ denote the key size and the block size of the underlying blockcipher, respectively. We improve this limit by proving the security of $\mathrm{CE}^l$ up to $2^{\kappa+\min\{\kappa,n\}-\frac{16}{l}(\frac{n}{2}+2)}$ query complexity; this bound approaches $2^{\kappa+\min\{\kappa,n\}}$ with increasing cascade length $l$.

The second construction $\mathrm{XCE}^l$ is a natural cascade version of the DESX scheme, with intermediate keys xored between blockcipher calls. It can also be viewed as an extension of the double XOR-cascade proposed by Gaži and Tessaro [GT12]. We prove that $\mathrm{XCE}^l$ is secure up to $2^{\kappa+n-\frac{8}{l}(\frac{n}{2}+2)}$ query complexity. As the cascade length $l$ increases, this bound approaches $2^{\kappa+n}$.

In the ideal cipher model, one can obtain all evaluations of the underlying blockcipher by making $2^{\kappa+n}$ queries, so $(\kappa+n)$-bit security is the maximum that key-length extension based on a single $\kappa$-bit-key, $n$-bit blockcipher can achieve. The cascade encryptions $\mathrm{CE}^l$ (with $n\leq\kappa$) and $\mathrm{XCE}^l$ provide almost optimal security for large cascade length.
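The two constructions can be sketched as follows, with a toy Feistel network (built from HMAC) standing in for the ideal blockcipher; this only illustrates how the $l$ calls and the xored intermediate keys are composed, not the security claims:

```python
import hmac, hashlib

# Toy 64-bit block cipher: a balanced 4-round Feistel network whose round
# function is derived from HMAC-SHA256 (illustrative stand-in only).
def feistel_encrypt(key: bytes, x: int, rounds: int = 4) -> int:
    L, R = x >> 32, x & 0xFFFFFFFF
    for r in range(rounds):
        f = int.from_bytes(hmac.new(key, bytes([r]) + R.to_bytes(4, "big"),
                                    hashlib.sha256).digest()[:4], "big")
        L, R = R, L ^ f
    return (L << 32) | R

def cascade(keys, x):
    # l-cascade CE^l: sequentially compose blockcipher calls with independent keys.
    for k in keys:
        x = feistel_encrypt(k, x)
    return x

def xor_cascade(keys, whitening, x):
    # XCE^l: xor an intermediate key before each call, and one after the last
    # call (so len(whitening) == len(keys) + 1).
    for k, w in zip(keys, whitening[:-1]):
        x = feistel_encrypt(k, x ^ w)
    return x ^ whitening[-1]
```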

16:17 [Pub][ePrint] Efficient and Secure Delegation of Group Exponentiation to a Single Server, by Bren Cavallo and Giovanni Di Crescenzo and Delaram Kahrobaei and Vladimir Shpilrain

  We consider the problem of delegating the computation of group operations from a computationally weaker client, holding an input and a description of a function, to a single computationally stronger server holding a description of the same function. Solutions need to satisfy natural correctness, security, privacy and efficiency requirements. We obtain delegated computation protocols for the following functions, defined over an arbitrary commutative group:

- Group inverses, with security and privacy holding against any computationally unrestricted malicious server.

- Group exponentiation, with security and privacy holding against any computationally unrestricted "partially honest" server.

- Group exponentiation, with security and privacy holding against any polynomial-time malicious server, under a pseudorandom generation assumption, and security holding with constant probability.
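For intuition, the classical blinding trick behind delegating a group inverse can be sketched as follows (a toy instantiation in the multiplicative group modulo a Mersenne prime; the paper's actual protocols and verification steps may differ):

```python
import secrets

# Toy commutative group: the multiplicative group mod a prime P.
P = 2**127 - 1  # a Mersenne prime, so every nonzero residue is invertible

def server_invert(b: int) -> int:
    # The untrusted server performs the expensive inversion (via Fermat).
    return pow(b, P - 2, P)

def delegate_inverse(a: int) -> int:
    # The client blinds a with a random r, so the server never sees a itself.
    r = secrets.randbelow(P - 1) + 1
    b = (a * r) % P
    b_inv = server_invert(b)
    # Cheap client-side check: any correct answer satisfies b * b_inv == 1.
    if (b * b_inv) % P != 1:
        raise ValueError("server cheated")
    return (b_inv * r) % P  # (a*r)^(-1) * r = a^(-1)
```

The client only performs multiplications and one comparison; the single modular exponentiation is done by the server.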


16:17 [Pub][ePrint] Leakage Assessment Methodology - a clear roadmap for side-channel evaluations, by Tobias Schneider and Amir Moradi

  Evoked by the increasing need to integrate side-channel countermeasures into security-enabled commercial devices, evaluation labs are seeking a standard approach that enables a fast, reliable and robust evaluation of the side-channel vulnerability of given products. To this end, standardization bodies such as NIST intend to establish a leakage assessment methodology fulfilling these demands. One such proposal is Welch's t-test, put forward by Cryptography Research Inc., which relaxes the dependency between the evaluations and the device's underlying architecture. In this work, we deeply study the theoretical background of the test's different flavors, and present a roadmap which evaluation labs can follow to conduct the tests efficiently and correctly. More precisely, we describe a stable, robust and efficient way to perform the tests at higher orders. Further, we extend the test to multivariate settings, and provide details on how to carry out such a multivariate higher-order test efficiently and rapidly. Including a suggested methodology for collecting the traces for these tests, we present practical case studies where different types of t-tests can exhibit the leakage of supposedly secure designs.
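The first-order, univariate version of the statistic is simple to state; the sketch below computes Welch's t for two sets of leakage samples at one time point (the paper's contributions concern the higher-order and multivariate extensions, which are not shown here):

```python
import math

def welch_t(traces_a, traces_b):
    # First-order Welch's t-statistic between two sets of leakage samples
    # (e.g. traces for fixed vs. random plaintexts at one time point).
    na, nb = len(traces_a), len(traces_b)
    ma = sum(traces_a) / na
    mb = sum(traces_b) / nb
    va = sum((x - ma) ** 2 for x in traces_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in traces_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# A commonly used decision rule: |t| > 4.5 flags a detectable leak at that point.
```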

16:17 [Pub][ePrint] Towards Secure Distance Bounding, by Ioana Boureanu, Aikaterini Mitrokotsa and Serge Vaudenay

  Relay attacks (and, more generally, man-in-the-middle attacks) are a serious threat against many access control and payment schemes.

In this work, we present distance-bounding protocols, explain how these can deter relay attacks, and survey the security models formalizing these protocols. We show several pitfalls that make existing protocols insecure (or, in some cases, at least vulnerable). Then, we introduce the SKI protocol, which resists all popular attack models and features provable security. As far as we know, this is the first protocol with such all-encompassing security guarantees.

16:17 [Pub][ePrint] Triathlon of Lightweight Block Ciphers for the Internet of Things, by Daniel Dinu and Yann Le Corre and Dmitry Khovratovich and Léo Perrin and Johann Großschädl and Alex Biryukov

  In this paper we introduce an open framework for benchmarking lightweight block ciphers on a multitude of embedded platforms. Our framework is able to evaluate execution time, RAM footprint, as well as (binary) code size, and allows a user to define a custom "figure of merit" according to which all evaluated candidates can be ranked. We used the framework to benchmark various implementations of 13 lightweight ciphers, namely AES, Fantomas, HIGHT, LBlock, LED, Piccolo, PRESENT, PRINCE, RC5, Robin, Simon, Speck, and TWINE, on three different platforms: 8-bit ATmega, 16-bit MSP430, and 32-bit ARM. Our results give new insights into the question of how well these ciphers are suited to secure the Internet of Things (IoT). The benchmarking framework provides cipher designers with a tool to compare new algorithms with the state of the art, and allows standardization bodies to conduct a fair and comprehensive evaluation of a large number of candidates.
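As an illustration of what a user-defined figure of merit might look like, the sketch below ranks ciphers by a weighted sum of min-normalized metrics; the framework's actual formula and interface are assumptions here, not taken from the paper:

```python
def figure_of_merit(results, weights=(1.0, 1.0, 1.0)):
    # Rank ciphers by a weighted sum of normalized metrics: execution time,
    # RAM footprint, and binary code size (smaller is better for all three).
    # `results` maps cipher name -> (time, ram, code_size); each metric is
    # divided by the best (smallest) value observed for that metric.
    mins = [min(r[i] for r in results.values()) for i in range(3)]
    def fom(metrics):
        return sum(w * m / mn for w, m, mn in zip(weights, metrics, mins))
    return sorted(results, key=lambda name: fom(results[name]))
```

Adjusting `weights` lets an evaluator emphasize, say, code size over speed, which changes the resulting ranking.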

16:17 [Pub][ePrint] Secure and Efficient Initialization and Authentication Protocols for SHIELD, by Chenglu Jin and Marten van Dijk

  With the globalization of semiconductor production, outsourcing IC fabrication has become a widespread trend. This, however, introduces serious threats from the entire untrusted supply chain. To combat these threats, DARPA (Defense Advanced Research Projects Agency) proposed the SHIELD (Supply Chain Hardware Integrity for Electronics Defense) program to design a secure hardware root-of-trust, called a dielet, to be inserted into the host package of legitimately produced ICs. Dielets are RF-powered and communicate with the outside world through their RF antennas. They have sensors which allow them to passively (without the need for power) record malicious events, which can later be read out during an authentication protocol between the dielet and a server, with a smartphone as intermediary.

In this paper, we propose to use AES counter-mode encryption (as opposed to DARPA's suggested plain AES encryption) as the basis for the authentication process. We show that this leads to several advantages: (1) resistance to a "try-and-check" attack which, in the case of DARPA's authentication protocol, nullifies the effectiveness of one of SHIELD's main goals (being able to detect and trace adversarial activities with significant probability); (2) immunity against differential power analysis and differential fault analysis for free; (3) a 2$\times$ speed-up of the authentication phase by halving the number of communication rounds with the server; and (4) a significant reduction of the power consumption of the dielet by halving the number of needed AES encryptions and by a 4$\times$ reduction of the number of transmitted bits.
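The counter-mode idea can be sketched as follows, with HMAC-SHA256 standing in for AES (an illustrative substitution, not the protocol's actual cipher); the point is that each counter value is processed under the key exactly once, which is what blunts DPA and "try-and-check" strategies:

```python
import hmac, hashlib

def ctr_keystream(key: bytes, counter: int, nblocks: int):
    # Counter-mode keystream: each counter value is encrypted under the key
    # exactly once (HMAC-SHA256 stands in for the AES block-cipher call).
    for i in range(counter, counter + nblocks):
        yield hmac.new(key, i.to_bytes(16, "big"), hashlib.sha256).digest()

def ctr_encrypt(key: bytes, counter: int, msg: bytes) -> bytes:
    # XOR the message against the keystream; the same call decrypts.
    out = bytearray()
    blocks = ctr_keystream(key, counter, -(-len(msg) // 32))  # ceil division
    for i, pad in enumerate(blocks):
        chunk = msg[32 * i:32 * (i + 1)]
        out += bytes(a ^ b for a, b in zip(chunk, pad))
    return bytes(out)
```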

Each dielet needs to go through an initialization phase during which the manufacturer sets a serial ID and cryptographic key.


In this paper we propose the first efficient and secure initialization protocol, in which dielets generate their own serial ID and key using a true random number generator (TRNG). Finally, the area overhead of our authentication and initialization protocols is only a 64-bit NVM, an 8-bit counter, and a TRNG based on a single SRAM cell, together with the corresponding control logic.

16:17 [Pub][ePrint] Faster sieving for shortest lattice vectors using spherical locality-sensitive hashing, by Thijs Laarhoven and Benne de Weger

  Recently, it was shown that angular locality-sensitive hashing (LSH) can be used to significantly speed up lattice sieving, leading to heuristic time and space complexities of $2^{0.3366n + o(n)}$ for solving the shortest vector problem (SVP). We study the possibility of applying other LSH methods to sieving, and show that with the recent spherical LSH method of Andoni et al. we can heuristically solve SVP in time and space $2^{0.2972n + o(n)}$. We further show that a practical variant of the resulting SphereSieve is very similar to Wang et al.'s two-level sieve, with the key difference that we impose an order on the outer list of centers.
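For intuition, the simpler angular (hyperplane) LSH mentioned above can be sketched in a few lines; the spherical LSH of Andoni et al. used in the SphereSieve is more involved and is not shown:

```python
import random

def make_hyperplanes(k: int, n: int, seed: int = 0):
    # k random hyperplanes in dimension n, with Gaussian normal vectors.
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(n)] for _ in range(k)]

def angular_hash(v, hyperplanes):
    # Hyperplane LSH: the hash is the sign pattern of v against the fixed
    # hyperplanes; vectors at a small angle collide with high probability.
    return tuple(1 if sum(h_i * v_i for h_i, v_i in zip(h, v)) >= 0 else 0
                 for h in hyperplanes)
```

In sieving, list vectors are bucketed by such hashes so that near-parallel pairs (whose difference is short) are found without comparing all pairs.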

16:17 [Pub][ePrint] Analyzing Permutations for AES-like Ciphers: Understanding ShiftRows, by Christof Beierle and Philipp Jovanovic and Martin M. Lauridsen and Gregor Leander and Christian Rechberger

  Designing block ciphers and hash functions in a manner that resembles the AES in many aspects has been very popular since Rijndael was adopted as the Advanced Encryption Standard. However, in sharp contrast to the MixColumns operation, the security implications of the way the state is permuted by the operation resembling ShiftRows have never been studied in depth.

Here, we provide the first structured study of the influence of ShiftRows-like operations, or more generally, word-wise permutations, in AES-like ciphers with respect to diffusion properties and resistance against differential and linear attacks. After formalizing the concept of guaranteed trail weights, we show a range of equivalence results for permutation layers in this context. We prove that the trail-weight analysis for arbitrary word-wise permutations, with rotations as a special case, reduces to the consideration of a specific normal form. Using a mixed-integer linear programming approach, we obtain optimal parameters for a wide range of AES-like ciphers, and show improvements on parameters for Rijndael-192, Rijndael-256, PRIMATEs-80 and Prøst-128. As a separate result, we show for specific cases of the state geometry that a seemingly optimal bound on the trail weight can be obtained using cyclic rotations only for the permutation layer, i.e., in a very implementation-friendly way.
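A ShiftRows-like word-wise permutation layer on an $r \times c$ state is just a per-row rotation; a minimal sketch (the rotation constants below are AES's own, used only as an example, not the optimized parameters from the paper):

```python
def shift_rows(state, rotations):
    # Word-wise permutation layer for an r x c AES-like state:
    # rotate row i left by rotations[i] positions.
    # AES itself uses rotations (0, 1, 2, 3) on a 4 x 4 state.
    return [row[k % len(row):] + row[:k % len(row)]
            for row, k in zip(state, rotations)]
```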

16:17 [Pub][ePrint] Attribute-Based Versions of Schnorr and ElGamal, by Javier Herranz

  In this paper we design the first attribute-based cryptosystems that work in the classical discrete-logarithm, pairing-free setting. The attribute-based signature scheme can be seen as an extension of Schnorr signatures, with adaptive security relying on the Discrete Logarithm Assumption, in the random oracle model. The attribute-based encryption schemes can be seen as extensions of the ElGamal cryptosystem, with adaptive security relying on the Decisional Diffie-Hellman Assumption, in the standard model.

The proposed schemes are secure only in a bounded model: the systems admit at most $L$ secret keys, for a bound $L$ that must be fixed at system setup. The efficiency of the cryptosystems then depends on this bound $L$. Although this is an important drawback that can limit the applicability of the proposed schemes in some real-life applications, it turns out that the bounded security of our key-policy attribute-based encryption scheme (in particular, with $L=1$) is enough to implement the generic transformation of Parno, Raykova and Vaikuntanathan at TCC 2012. As a direct result, we obtain a protocol for the verifiable delegation of computation of boolean functions, which does not employ pairings or lattices, and whose adaptive security relies on the Decisional Diffie-Hellman Assumption.
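For reference, the base Schnorr scheme that the attribute-based signature extends can be sketched as follows (toy parameters $p=23$, $q=11$, $g=2$ chosen only for readability; real deployments need a group of at least 256-bit order):

```python
import hashlib, secrets

P, Q, G = 23, 11, 2  # toy group: G = 2 has prime order Q = 11 in Z_23^*

def H(R: int, m: bytes) -> int:
    # Random-oracle-style challenge hash, reduced into the exponent group.
    return int.from_bytes(hashlib.sha256(str(R).encode() + m).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1   # secret key
    return x, pow(G, x, P)             # (x, y = g^x)

def sign(x: int, m: bytes):
    k = secrets.randbelow(Q - 1) + 1   # fresh nonce per signature
    R = pow(G, k, P)
    e = H(R, m)
    s = (k + e * x) % Q
    return R, s

def verify(y: int, m: bytes, sig) -> bool:
    # Accept iff g^s == R * y^e, i.e. g^(k + e*x) == g^k * (g^x)^e.
    R, s = sig
    e = H(R, m)
    return pow(G, s, P) == (R * pow(y, e, P)) % P
```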

19:17 [Pub][ePrint] Key-Homomorphic Constrained Pseudorandom Functions, by Abhishek Banerjee and Georg Fuchsbauer and Chris Peikert and Krzysztof Pietrzak and Sophie Stevens

  A pseudorandom function (PRF) is a keyed function $F \colon \mathcal{K} \times \mathcal{X} \rightarrow \mathcal{Y}$ where, for a random key $k \in \mathcal{K}$, the function $F(k,\cdot)$ is indistinguishable from a uniformly random function, given black-box access. A key-homomorphic PRF has the additional feature that for any keys $k, k'$ and any input $x$, we have $F(k + k', x) = F(k,x) \oplus F(k',x)$ for some group operations $+, \oplus$ on $\mathcal{K}$ and $\mathcal{Y}$, respectively. A constrained PRF for a family of sets $\mathcal{S} \subseteq \mathcal{P}(\mathcal{X})$ has the property that, given any key $k$ and set $S \in \mathcal{S}$, one can efficiently compute a "constrained" key $k_S$ that enables evaluation of $F(k,x)$ on all inputs $x \in S$, while the values $F(k,x)$ for $x \notin S$ remain pseudorandom even given $k_S$.

In this paper we construct PRFs that are simultaneously constrained and key-homomorphic, where the homomorphic property holds even for constrained keys. We first show that the multilinear-map-based bit-fixing and circuit-constrained PRFs of Boneh and Waters (Asiacrypt 2013) can be modified to also be key-homomorphic. We then show that the LWE-based key-homomorphic PRFs of Banerjee and Peikert (Crypto 2014) are essentially already prefix-constrained PRFs, using a (non-obvious) definition of constrained keys and associated group operation. Moreover, the constrained keys themselves are pseudorandom, and the constraining and evaluation functions can all be computed in low depth.

As an application of key-homomorphic constrained PRFs, we construct a proxy re-encryption scheme with fine-grained access control. This scheme allows storing encrypted data on an untrusted server, where each file can be encrypted relative to some attributes, so that only parties whose constrained keys match the attributes can decrypt. Moreover, the server can re-key (arbitrary subsets of) the ciphertexts without learning anything about the plaintexts, thus permitting efficient and fine-grained revocation.
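The key-homomorphic property itself can be illustrated with a toy linear function; this is NOT a secure PRF (it is linear in the key) and only demonstrates the equation $F(k + k', x) = F(k,x) \oplus F(k',x)$, with both group operations taken as addition mod $Q$:

```python
import hashlib

Q = 2**61 - 1  # prime modulus; both keys and outputs live in Z_Q

def H(x: bytes) -> int:
    # Hash the input into Z_Q.
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % Q

def F(k: int, x: bytes) -> int:
    # Toy key-homomorphic function F(k, x) = k * H(x) mod Q.
    # NOT a PRF -- purely an illustration of key homomorphism:
    # F(k + k', x) = (k + k') * H(x) = F(k, x) + F(k', x)  (mod Q).
    return (k * H(x)) % Q
```

The constructions in the paper achieve the same algebraic property while additionally being pseudorandom and supporting constrained keys.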