International Association for Cryptologic Research

IACR News Central

Get an update on changes of the IACR web page here. For questions, contact newsletter (at).


You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

15:17 [Pub][ePrint] Deja Q: Using Dual Systems to Revisit q-Type Assumptions, by Melissa Chase and Sarah Meiklejohn

  After more than a decade of usage, bilinear groups have established their place in the cryptographic canon by enabling the construction of many advanced cryptographic primitives. Unfortunately, this explosion in functionality has been accompanied by an analogous growth in the complexity of the assumptions used to prove security. Many of these assumptions have been gathered under the umbrella of the "uber-assumption," yet certain classes of these assumptions -- namely, q-type assumptions -- are stronger and require larger parameter sizes than their static counterparts.

In this paper, we show that in certain groups, many classes of q-type assumptions are in fact implied by subgroup hiding (a well-established, static assumption). Our main tool in this endeavor is the dual-system technique, as introduced by Waters in 2009. As a case study, we first show that in composite-order groups, we can prove the security of the Dodis-Yampolskiy PRF based solely on subgroup hiding and allow for a domain of arbitrary size (the original proof only allowed a polynomially-sized domain). We then turn our attention to classes of q-type assumptions and show that they are implied -- when instantiated in appropriate groups -- solely by subgroup hiding. These classes are quite general and include assumptions such as q-SDH. Concretely, our result implies that every construction relying on such assumptions for security (e.g., Boneh-Boyen signatures) can, when instantiated in appropriate composite-order bilinear groups, be proved secure under subgroup hiding instead.
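As a toy illustration of the Dodis-Yampolskiy PRF discussed above, the following sketch evaluates F_k(x) = g^{1/(k+x)} in a small prime-order subgroup of Z_p^*. The paper's setting is composite-order bilinear groups; the parameters below are our own choices, insecurely small, and purely illustrative.

```python
# Toy sketch of the Dodis-Yampolskiy PRF F_k(x) = g^(1/(k+x)),
# instantiated (for illustration only) in a prime-order subgroup
# of Z_p^*. These parameters are tiny and NOT secure.

p = 2039          # prime with p - 1 = 2 * 1019
q = 1019          # prime order of the subgroup we work in
g = pow(2, (p - 1) // q, p)  # generator of the order-q subgroup

def dy_prf(k: int, x: int) -> int:
    """F_k(x) = g^{1/(k+x) mod q} mod p; requires k + x != 0 mod q."""
    e = pow((k + x) % q, -1, q)  # modular inverse of the exponent (Python 3.8+)
    return pow(g, e, p)

y = dy_prf(k=123, x=45)
```

A quick sanity check of the inversion: raising the output back to the power k+x must recover g, since the exponents cancel mod q.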

15:17 [Pub][ePrint] How to manipulate curve standards: a white paper for the black hat, by Daniel J. Bernstein and Tung Chou and Chitchanok Chuengsatiansup and Andreas Hülsing and Tanja Lange and Ruben Niederhagen an

  This paper analyzes the cost of breaking ECC under the following assumptions: (1) ECC is using a standardized elliptic curve that was actually chosen by an attacker; (2) the attacker is aware of a vulnerability in some curves that are not publicly known to be vulnerable.

This cost includes the cost of exploiting the vulnerability, but also the initial cost of computing a curve suitable for sabotaging the standard. This initial cost depends upon the acceptability criteria used by the public to decide whether to allow a curve as a standard, and (in most cases) also upon the chance of a curve being vulnerable.

This paper shows the importance of accurately modeling the actual acceptability criteria: i.e., figuring out what the public can be fooled into accepting. For example, this paper shows that plausible models of the "Brainpool acceptability criteria" allow the attacker to target a one-in-a-million vulnerability.
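The cost model described above can be made concrete with a back-of-the-envelope calculation. Assuming, purely for illustration, that a candidate curve is secretly vulnerable with probability eps and passes the public acceptability criteria with probability delta, independently (both numbers below are our own hypothetical inputs, not figures from the paper):

```python
# Hedged sketch of the initial cost in the abstract's model: the attacker
# repeatedly generates candidate curves until one is both acceptable to
# the public and vulnerable, a geometric process with expectation
# 1/(eps * delta) under the (simplifying) independence assumption.

def expected_curves(eps: float, delta: float) -> float:
    """Expected number of candidate curves the attacker must generate."""
    return 1.0 / (eps * delta)

# One-in-a-million vulnerability, criteria accepting ~1 in 4 candidates:
n = expected_curves(1e-6, 0.25)  # about 4 million curve generations
```

Tighter acceptability criteria (smaller delta) raise this initial cost, which is why modeling what the public will actually accept matters.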

15:17 [Pub][ePrint] On the Optimality of Differential Fault Analyses on CLEFIA, by Juliane Krämer and Anke Stüber and Ágnes Kiss

  Differential Fault Analysis is a powerful cryptanalytic tool to reveal secret keys of cryptographic algorithms. By corrupting the computation of an algorithm, an attacker gets additional information about the secret key. In 2012, several Differential Fault Analyses on the AES cipher were analyzed from an information-theoretic perspective. This analysis exposed whether or not the leaked information was fully exploited. It revealed if an analysis was already optimal or if it could still be improved. We applied the same approach to all existing Differential Fault Analyses on the CLEFIA cipher. We show that only some of these attacks are already optimal. We improve those analyses which did not exploit all information. With one exception, all attacks against CLEFIA-128 reach the theoretical limit after our improvement. Our improvement of an attack against CLEFIA-192 and CLEFIA-256 reduces the number of fault injections to the lowest possible number reached to date.
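The information-theoretic limit referred to above can be sketched with simple arithmetic: the number of fault injections needed is at least the key entropy divided by the information each fault can leak. The figures below are our own illustrative assumptions, not values from the paper.

```python
import math

# Hedged sketch of the information-theoretic lower-bound argument:
# a fault model in which each injection reveals at most b bits about
# the key cannot recover an n-bit key in fewer than n / b injections.
# The numbers used here are illustrative, not taken from the paper.

def min_faults(key_bits: int, bits_per_fault: float) -> int:
    """Lower bound on the number of fault injections needed."""
    return math.ceil(key_bits / bits_per_fault)

# e.g. a random byte fault revealing at most 8 bits per injection
# against a 128-bit key:
lower_bound = min_faults(128, 8.0)  # at least 16 fault injections
```

An attack is "optimal" in this sense when its actual injection count meets this bound for its fault model.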

15:17 [Pub][ePrint] A new public key system based on Polynomials over finite fields GF(2), by Gurgen Khachatrian

  In this paper a new public key system based on polynomials over the finite field GF(2) is developed. The security of the system is based on the difficulty of solving discrete logarithms over GF(2^k) with sufficiently large k. The presented system has all the features of ordinary public key schemes, such as public key encryption and digital signatures. The security and implementation aspects of the presented system are also introduced, along with a comparison with other well-known public-key systems.
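To make the underlying hardness setting concrete, here is a minimal sketch of GF(2^k) arithmetic with polynomials packed into integers (bit i holds the coefficient of x^i), plus a brute-force discrete logarithm that is only feasible at toy sizes. GF(2^4) and the modulus below are our own illustrative choices; the scheme requires k large enough that this search is infeasible.

```python
K, MOD = 4, 0b10011  # GF(2^4) with modulus x^4 + x + 1 (primitive)

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication of a and b, reduced mod MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add (XOR) the current shift of a
        b >>= 1
        a <<= 1
        if a >> K & 1:      # reduce when degree reaches K
            a ^= MOD
    return r

def dlog(g: int, h: int) -> int:
    """Brute-force discrete log: smallest e with g^e = h (toy sizes only)."""
    acc, e = 1, 0
    while acc != h:
        acc, e = gf_mul(acc, g), e + 1
    return e
```

With g = 2 (the element x, primitive here), dlog walks the whole multiplicative group of 15 elements; for cryptographic k this walk, and far better algorithms, must be out of reach.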

15:17 [Pub][ePrint] Security Analysis of Multilinear Maps over the Integers, by Hyung Tae Lee and Jae Hong Seo

  At Crypto 2013, Coron, Lepoint, and Tibouchi (CLT) proposed a practical Graded Encoding Scheme (GES) over the integers, which has very similar cryptographic features to ideal multilinear maps. In fact, the scheme of Coron et al. is the second proposal of a secure GES, and has advantages over the first scheme of Garg, Gentry, and Halevi (GGH). For example, unlike the GGH construction, the subgroup decision assumption holds in the CLT construction. Immediately following the elegant innovations of the GES, numerous GES-based cryptographic applications were proposed. Although these applications rely on the security of the underlying GES, the security of the GES has not been analyzed in detail, aside from the original papers produced by Garg et al. and Coron et al.

We present an attack algorithm against the system parameters of the CLT GES. The proposed algorithm's complexity $\tilde{O}(2^{\rho/2})$ is exponentially smaller than the $\tilde{O}(2^{\rho})$ of the previous best attack of Coron et al., where $\rho$ is a function of the security parameter. Furthermore, we identify a flaw in the generation of the zero-testing parameter of the CLT GES, which drastically reduces the running time of the proposed algorithm. The experimental results demonstrate the practicality of our attack.

15:17 [Pub][ePrint] Simple AEAD Hardware Interface (SÆHI) in a SoC: Implementing an On-Chip Keyak/WhirlBob Coprocessor, by Markku-Juhani O. Saarinen

  Simple AEAD Hardware Interface (SÆHI) is a hardware cryptographic interface aimed at CAESAR Authenticated Encryption with Associated Data (AEAD) algorithms. Cryptographic acceleration is typically achieved either with a coprocessor or via instruction set extensions. ISA modifications require re-engineering the CPU core, making the approach inapplicable outside the realm of open source processor cores. Our proposed hardware interface is a memory-mapped cryptographic coprocessor, implementable even on very low-end FPGA evaluation platforms. Algorithms complying to SÆHI must also include C language API drivers that directly utilize the memory mapping in a "bare metal" fashion. This can also be accommodated on MMU systems.

Extended battery life and bandwidth resulting from dedicated cryptographic hardware is vital for currently dominant computing and communication devices: mobile phones, tablets, and Internet-of-Things (IoT) applications. We argue that these should be priority hardware optimization targets for AEAD algorithms with realistic payload sizes.

We demonstrate a fully integrated implementation of the WhirlBob and Keyak AEADs on the FPGA fabric of the Xilinx Zynq 7010. This low-cost System-on-Chip (SoC) also houses a dual-core Cortex-A9 CPU, closely matching the architecture of many embedded devices. The on-chip coprocessor is accessible from user space with a Linux kernel driver. An integration path exists all the way to end-user applications.

15:17 [Pub][ePrint] Vernam Two, by Dan P. Milleville

  The Vernam Algorithm has long been commercially unusable because it requires the key to be transmitted with the ciphertext, an obvious security hazard. The methodology developed here takes a fixed key, XORs 4 randomly selected/extracted segments of this key together to form an 'Effective Key Stream', and XORs that with the plaintext. With both the sender and receiver possessing the fixed key, this new methodology circumvents the requirement to send the key along with the ciphertext.

Considering the drastic reduction in computations needed, this algorithm executes more than four times faster than AES. With our society becoming more and more digitally oriented, faster security is needed.
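A minimal sketch of the described mechanics, under our own assumptions about details the abstract leaves open (segments as long as the message, offsets chosen per message and transmitted in the clear so the receiver can rebuild the stream). This illustrates the construction only and makes no security claim.

```python
import secrets

# Hedged sketch of the described construction: both parties hold a long
# fixed key; the sender XORs four randomly positioned segments of it
# into an "effective key stream" and XORs that with the plaintext.
# How the segment offsets reach the receiver is our own assumption.

def effective_stream(fixed_key: bytes, offsets, length: int) -> bytes:
    stream = bytearray(length)
    for off in offsets:                      # XOR four key segments together
        seg = fixed_key[off:off + length]
        for i in range(length):
            stream[i] ^= seg[i]
    return bytes(stream)

def encrypt(fixed_key: bytes, plaintext: bytes):
    offsets = [secrets.randbelow(len(fixed_key) - len(plaintext))
               for _ in range(4)]
    ks = effective_stream(fixed_key, offsets, len(plaintext))
    return offsets, bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(fixed_key: bytes, offsets, ciphertext: bytes) -> bytes:
    ks = effective_stream(fixed_key, offsets, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))
```

Decryption simply rebuilds the same effective stream from the shared fixed key and the received offsets, then XORs it off.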

15:17 [Pub][ePrint] Reducing Communication Overhead of the Subset Difference Scheme, by Sanjay Bhattacherjee and Palash Sarkar

  In Broadcast Encryption (BE) systems like Pay-TV, AACS, online content sharing and broadcasting, reducing the header length (communication overhead per session) is of practical interest. The Subset Difference (SD) scheme due to Naor-Naor-Lotspiech (NNL) is the most widely used BE scheme. It assumes an underlying full binary tree to assign keys to subsets of users. In this work, we associate short tree structures of height $a$ to nodes in the binary tree to assign keys to more subsets. This ensures that for any set of revoked users, the header length is in general smaller, and never larger, than in the NNL-SD scheme. Experimental studies show that the average header length is always less than that of the NNL-SD scheme and decreases as $a$ increases. User storage in the new scheme is more than that of the NNL-SD scheme but is not prohibitively expensive. By choosing a suitable value of $a$, it is possible to obtain a substantial reduction in the communication overhead at the cost of a tolerable increase in the user storage.

15:17 [Pub][ePrint] The Exact PRF-Security of NMAC and HMAC, by Peter Gaži and Krzysztof Pietrzak and Michal Rybár

  NMAC is a mode of operation which turns a fixed input-length keyed hash function f into a variable input-length function. A practical single-key variant of NMAC called HMAC is a very popular and widely deployed message authentication code (MAC). Security proofs and attacks for NMAC can typically be lifted to HMAC.

NMAC was introduced by Bellare, Canetti and Krawczyk [Crypto'96], who proved it to be a secure pseudorandom function (PRF), and thus also a MAC, assuming that (1) f is a PRF and (2) the function we get when cascading f is weakly collision-resistant.

Unfortunately, HMAC is typically instantiated with cryptographic hash functions like MD5 or SHA-1 for which (2) has been found to be wrong. To restore the provable guarantees for NMAC, Bellare [Crypto'06] showed its security based solely on the assumption that f is a PRF, albeit via a non-uniform reduction.

Our first contribution is a simpler and uniform proof: If f is an ε-secure PRF (against q queries) and a δ-non-adaptively secure PRF (against q queries), then NMAC^f is an (ε+lqδ)-secure PRF against q queries of length at most l blocks each.

We then show that this ε+lqδ bound is basically tight. For the most interesting case where lqδ >= ε, we prove this by constructing an f for which an attack with advantage lqδ exists. This also violates the bound O(lε) on the PRF-security of NMAC recently claimed by Koblitz and Menezes.

Finally, we analyze the PRF-security of a modification of NMAC called NI [An and Bellare, Crypto'99] that differs mainly by using a compression function with an additional keying input. This avoids the constant rekeying on multi-block messages in NMAC and allows for a security proof starting with the standard switch from a PRF to a random function, followed by an information-theoretic analysis. We carry out such an analysis, obtaining a tight lq^2/2^c bound for this step, improving over the trivial bound of l^2q^2/2^c. The proof borrows combinatorial techniques originally developed for proving the security of CBC-MAC [Bellare et al., Crypto'05]. We also analyze a variant of NI that does not include the message length in the last call to the compression function, proving an l^{1+o(1)}q^2/2^c bound in this case.
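For readers unfamiliar with the construction, here is a minimal sketch of the NMAC mode analyzed above: the inner key k1 is cascaded over the message blocks, and the independent outer key k2 is applied once to the result. Modeling the fixed input-length keyed function f as SHA-256 over key||block is our own simplification, not the compression function of any real hash.

```python
import hashlib

BLOCK = 32  # block size of the toy keyed function f, in bytes

def f(key: bytes, block: bytes) -> bytes:
    """Stand-in fixed input-length keyed function (NOT a real compression fn)."""
    return hashlib.sha256(key + block).digest()

def nmac(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    # Pad with zero bytes to a whole number of blocks (toy padding).
    if len(msg) % BLOCK:
        msg += b"\x00" * (BLOCK - len(msg) % BLOCK)
    state = k1
    for i in range(0, len(msg), BLOCK):   # inner cascade f*(k1, m)
        state = f(state, msg[i:i + BLOCK])
    return f(k2, state)                   # outer call under independent key k2
```

HMAC then replaces the two independent keys with values derived from a single key, which is why results for NMAC typically lift to HMAC.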

00:17 [Pub][ePrint] Fast Lattice Point Enumeration with Minimal Overhead, by Daniele Micciancio and Michael Walter

  Enumeration algorithms are the best currently known methods to solve lattice problems, both in theory (within the class of polynomial space algorithms), and in practice (where they are routinely used to evaluate the concrete security of lattice cryptography). However, there is an uncomfortable gap between our theoretical understanding and practical performance of lattice point enumeration algorithms.

The algorithms typically used in practice have worst-case asymptotic running time $2^{O(n^2)}$, but perform extremely well in practice, at least for all values of the lattice dimension for which experimentation is feasible. At the same time, theoretical algorithms (Kannan, Mathematics of Operations Research 12(3):415-440, 1987) are asymptotically superior (achieving $2^{O(n \log n)}$ running time), but they are never used in practice because they incur a substantial overhead that makes them uncompetitive for all reasonable values of the lattice dimension $n$. This gap is especially troublesome when algorithms are run in practice to evaluate the concrete security of a cryptosystem, and then experimental results are extrapolated to much larger dimensions where solving lattice problems is computationally infeasible.

We introduce a new class of (polynomial space) lattice enumeration algorithms that simultaneously achieve asymptotic efficiency (meeting the theoretical $n^{O(n)} = 2^{O(n \log n)}$ time bound) and practicality, matching or surpassing the performance of practical algorithms already in moderately low dimension. Key technical contributions that allow us to achieve this result are a new analysis technique that allows us to greatly reduce the number of recursive calls performed during preprocessing (from super-exponential in $n$ to single exponential, or even polynomial in $n$), a new enumeration technique that can be directly applied to projected lattice (basis) vectors, without the need to remove linear dependencies, and a modified block basis reduction method with fast (logarithmic) convergence properties. The last technique is used to obtain a new SVP enumeration procedure with $\tilde{O}(n^{n/2e})$ running time, matching (even in the constant in the exponent) the optimal worst-case analysis (Hanrot and Stehlé, CRYPTO 2007) of Kannan's theoretical algorithm, but with far superior performance in practice.

We complement our theoretical analysis with a comprehensive set of experiments that not only support our practicality claims, but also allow us to estimate the cross-over point between different versions of enumeration algorithms, as well as asymptotically faster (but not quite practical) algorithms running in single exponential $2^{O(n)}$ time and space.
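To make "lattice point enumeration" concrete, here is a deliberately naive sketch: exhaustively search integer coefficient vectors in a small box and keep the shortest nonzero lattice vector. Real enumeration algorithms, including those in the paper, prune this search using projections of a reduced basis; this toy only conveys the problem being solved, not the paper's method.

```python
from itertools import product

# Naive SVP by brute-force enumeration: given a basis B, try all integer
# coefficient vectors x with |x_i| <= bound and keep the shortest nonzero
# vector sum(x_i * b_i). Only usable in tiny dimension with a small box.

def shortest_vector(basis, bound=3):
    dim = len(basis[0])
    best, best_norm = None, float("inf")
    for coeffs in product(range(-bound, bound + 1), repeat=len(basis)):
        if all(c == 0 for c in coeffs):
            continue                      # skip the zero vector
        v = [sum(c * b[i] for c, b in zip(coeffs, basis))
             for i in range(dim)]
        norm = sum(t * t for t in v)      # squared Euclidean length
        if norm < best_norm:
            best, best_norm = v, norm
    return best, best_norm

# 2-dimensional toy example with basis vectors (3, 1) and (1, 4):
v, n2 = shortest_vector([(3, 1), (1, 4)])
```

The search space grows as (2*bound+1)^n, which is exactly why the asymptotics of smarter enumeration (2^{O(n log n)} vs. 2^{O(n^2)}) matter.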

09:17 [Pub][ePrint] Direct Construction of Recursive MDS Diffusion Layers using Shortened BCH Codes, by Daniel Augot and Matthieu Finiasz

  MDS matrices allow one to build optimal linear diffusion layers in block ciphers. However, MDS matrices cannot be sparse and usually have a large description, inducing costly software/hardware implementations. Recursive MDS matrices solve this problem by focusing on MDS matrices that can be computed as a power of a simple companion matrix, thus having a compact description suitable even for constrained environments. However, up to now, finding recursive MDS matrices required an exhaustive search over families of companion matrices, thus limiting the size of the MDS matrices one could look for. In this article we propose a new direct construction based on shortened BCH codes, allowing us to efficiently construct such matrices for arbitrary parameters. Unfortunately, not all recursive MDS matrices can be obtained from BCH codes, and our algorithm is not always guaranteed to find the best matrices for a given set of parameters.
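The "recursive" structure can be illustrated in a few lines: the diffusion matrix is a power C^k of a sparse companion matrix C over a small field, so hardware can apply C for k clock cycles instead of storing the dense power. The field GF(2^4) and the coefficients below are our own arbitrary choices and are not claimed to yield an MDS matrix.

```python
K, MOD = 4, 0b10011  # GF(2^4) with modulus x^4 + x + 1

def gmul(a: int, b: int) -> int:
    """Multiplication in GF(2^4), polynomials packed into ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> K & 1:
            a ^= MOD
    return r

def companion(coeffs):
    """Companion matrix: a shifted identity plus one row of coefficients."""
    n = len(coeffs)
    C = [[0] * n for _ in range(n)]
    for i in range(n - 1):
        C[i][i + 1] = 1          # shift part: sparse by construction
    C[n - 1] = list(coeffs)      # last row carries the polynomial coefficients
    return C

def matmul(A, B):
    """Matrix product over GF(2^4) (XOR is addition)."""
    n = len(A)
    out = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = 0
            for k in range(n):
                acc ^= gmul(A[i][k], B[k][j])
            out[i][j] = acc
    return out

C = companion([1, 2, 2, 1])  # example coefficients, NOT a verified MDS choice
M = C
for _ in range(3):           # M = C^4: the dense diffusion matrix
    M = matmul(M, C)
```

C has only 7 nonzero entries out of 16, while its fourth power M is dense; a recursive design stores (or wires) only C and iterates it.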