International Association for Cryptologic Research

IACR News Central


15:17 [Pub][ePrint] The Hunting of the SNARK, by Nir Bitansky and Ran Canetti and Alessandro Chiesa and Shafi Goldwasser and Huijia Lin and Aviad Rubinstein and Eran Tromer

  The existence of succinct non-interactive arguments for NP (i.e., non-interactive computationally-sound proofs where the verifier's work is essentially independent of the complexity of the NP nondeterministic verifier) has been an intriguing question for the past two decades. Other than CS proofs in the random oracle model [Micali, FOCS '94], the only existing candidate construction is based on an elaborate assumption that is tailored to a specific protocol [Di Crescenzo and Lipmaa, CiE '08].

We formulate a general and relatively natural notion of an \emph{extractable collision-resistant hash function (ECRH)} and show that, if ECRHs exist, then a modified version of Di Crescenzo and Lipmaa's protocol is a succinct non-interactive argument for NP. Furthermore, the modified protocol is actually a succinct non-interactive \emph{adaptive argument of knowledge (SNARK)}. We then propose several candidate constructions for ECRHs and relaxations thereof.
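Schematically, the extractability requirement (a sketch of the standard formulation of this notion, not necessarily the paper's exact definition) says that any efficient party producing a valid image must "know" a preimage, witnessed by an efficient extractor:

```latex
\forall \text{ p.p.t. } \mathcal{A} \;\; \exists \text{ p.p.t. } \mathcal{E}_{\mathcal{A}} :\quad
\Pr_{k}\!\left[
  y \gets \mathcal{A}(k),\; x' \gets \mathcal{E}_{\mathcal{A}}(k) \;:\;
  \bigl(\exists x : h_k(x) = y\bigr) \wedge h_k(x') \neq y
\right] \leq \mathrm{negl}(\lambda).
```

Collision resistance is required in addition, and the precise treatment of auxiliary input to $\mathcal{A}$ is where much of the subtlety lies.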

We demonstrate the applicability of SNARKs to various forms of delegation of computation, to succinct non-interactive zero knowledge arguments, and to succinct two-party secure computation. Finally, we show that SNARKs essentially imply the existence of ECRHs, thus demonstrating the necessity of the assumption.

Going beyond ECRHs, we formulate the notion of \emph{extractable one-way functions (EOWFs)}. Assuming the existence of a natural variant of EOWFs, we construct a 2-message selective-opening-attack-secure commitment scheme and a 3-round zero-knowledge argument of knowledge. Furthermore, if the EOWFs are concurrently extractable, the 3-round zero-knowledge protocol is also concurrent zero-knowledge.

Our constructions circumvent previous black-box impossibility results regarding these protocols by relying on EOWFs as the non-black-box component in the security reductions.

13:35 [Event][New] COST Action IC1306 - School on Cryptographic Attacks

  From October 13 to October 16
Location: Porto, Portugal

15:17 [Pub][ePrint] Deja Q: Using Dual Systems to Revisit q-Type Assumptions, by Melissa Chase and Sarah Meiklejohn

  After more than a decade of usage, bilinear groups have established their place in the cryptographic canon by enabling the construction of many advanced cryptographic primitives. Unfortunately, this explosion in functionality has been accompanied by an analogous growth in the complexity of the assumptions used to prove security. Many of these assumptions have been gathered under the umbrella of the "uber-assumption," yet certain classes of these assumptions -- namely, q-type assumptions -- are stronger and require larger parameter sizes than their static counterparts.

In this paper, we show that in certain groups, many classes of q-type assumptions are in fact implied by subgroup hiding (a well-established, static assumption). Our main tool in this endeavor is the dual-system technique, as introduced by Waters in 2009. As a case study, we first show that in composite-order groups, we can prove the security of the Dodis-Yampolskiy PRF based solely on subgroup hiding and allow for a domain of arbitrary size (the original proof only allowed a polynomially-sized domain). We then turn our attention to classes of q-type assumptions and show that they are implied -- when instantiated in appropriate groups -- solely by subgroup hiding. These classes are quite general and include assumptions such as q-SDH. Concretely, our result implies that every construction relying on such assumptions for security (e.g., Boneh-Boyen signatures) can, when instantiated in appropriate composite-order bilinear groups, be proved secure under subgroup hiding instead.
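As background for the case study, the Dodis-Yampolskiy PRF maps an input x to g^{1/(k+x)} in a prime-order group. A minimal Python sketch, with toy parameters far too small to be secure (the group and key below are illustrative assumptions, not values from the paper):

```python
# Hypothetical toy parameters (NOT secure): a Schnorr group of prime order q.
p = 2039          # safe prime, p = 2q + 1
q = 1019          # prime order of the subgroup
g = pow(2, 2, p)  # squaring maps into the order-q subgroup

def dy_prf(k: int, x: int) -> int:
    """Dodis-Yampolskiy PRF value g^(1/(k+x)) in the order-q subgroup.

    Requires k + x != 0 (mod q); the inverse is taken mod the group order q.
    """
    e = pow((k + x) % q, -1, q)   # modular inverse of k + x mod q
    return pow(g, e, p)

k = 123  # illustrative secret key
y = dy_prf(k, 7)

# Deterministic: the same key and input always give the same output.
assert dy_prf(k, 7) == y
# Consistency check: raising the output to (k + x) mod q recovers g.
assert pow(y, (k + 7) % q, p) == g
```

The three-argument `pow` with exponent -1 (Python 3.8+) computes the modular inverse, which is where the q-type flavor of the assumption enters: security of this PRF is classically proved from a q-type assumption such as q-DBDHI.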

15:17 [Pub][ePrint] How to manipulate curve standards: a white paper for the black hat, by Daniel J. Bernstein and Tung Chou and Chitchanok Chuengsatiansup and Andreas Hülsing and Tanja Lange and Ruben Niederhagen an

  This paper analyzes the cost of breaking ECC under the following assumptions: (1) ECC is using a standardized elliptic curve that was actually chosen by an attacker; (2) the attacker is aware of a vulnerability in some curves that are not publicly known to be vulnerable.

This cost includes the cost of exploiting the vulnerability, but also the initial cost of computing a curve suitable for sabotaging the standard. This initial cost depends upon the acceptability criteria used by the public to decide whether to allow a curve as a standard, and (in most cases) also upon the chance of a curve being vulnerable.

This paper shows the importance of accurately modeling the actual acceptability criteria: i.e., figuring out what the public can be fooled into accepting. For example, this paper shows that plausible models of the "Brainpool acceptability criteria" allow the attacker to target a one-in-a-million vulnerability.
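The cost model described above can be illustrated with back-of-the-envelope arithmetic; every number below is a hypothetical stand-in, not a figure from the paper:

```python
# Hypothetical illustration of the attacker's initial search cost.
one_in = 10**6        # "one-in-a-million" chance a random curve is vulnerable
p_accept = 0.5        # assumed fraction of candidates passing the public criteria
cost_per_curve = 1    # normalized cost to generate and test one candidate

# Expected number of candidates until a vulnerable curve also passes
# the (modeled) acceptability criteria:
expected_trials = one_in / p_accept
initial_cost = expected_trials * cost_per_curve

assert expected_trials == 2_000_000
```

The point of the model is that a laxer acceptability criterion (larger p_accept) directly lowers the saboteur's up-front cost.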

15:17 [Pub][ePrint] On the Optimality of Differential Fault Analyses on CLEFIA, by Juliane Krämer and Anke Stüber and Ágnes Kiss

  Differential Fault Analysis is a powerful cryptanalytic tool to reveal secret keys of cryptographic algorithms. By corrupting the computation of an algorithm, an attacker gets additional information about the secret key. In 2012, several Differential Fault Analyses on the AES cipher were analyzed from an information-theoretic perspective. This analysis exposed whether or not the leaked information was fully exploited, i.e., whether an analysis was already optimal or could still be improved.

We applied the same approach to all existing Differential Fault Analyses on the CLEFIA cipher. We show that only some of these attacks are already optimal, and we improve those analyses which did not exploit all information. With one exception, all attacks against CLEFIA-128 reach the theoretical limit after our improvement. Our improvement of an attack against CLEFIA-192 and CLEFIA-256 reduces the number of fault injections to the lowest number reached to date.

15:17 [Pub][ePrint] A new public key system based on Polynomials over finite fields GF(2), by Gurgen Khachatrian

  In this paper a new public key system based on polynomials over finite fields GF(2) is developed. The security of the system is based on the difficulty of solving discrete logarithms over GF(2^k) with sufficiently large k. The presented system has all the features of ordinary public key schemes, such as public key encryption and digital signatures. The security and implementation aspects of the presented system are also introduced, along with a comparison with other well-known public-key systems.

15:17 [Pub][ePrint] Security Analysis of Multilinear Maps over the Integers, by Hyung Tae Lee and Jae Hong Seo

  At Crypto 2013, Coron, Lepoint, and Tibouchi (CLT) proposed a practical Graded Encoding Scheme (GES) over the integers, which has very similar cryptographic features to ideal multilinear maps. In fact, the scheme of Coron et al. is the second proposal of a secure GES, and has advantages over the first scheme of Garg, Gentry, and Halevi (GGH). For example, unlike the GGH construction, the subgroup decision assumption holds in the CLT construction. Immediately following the elegant innovations of the GES, numerous GES-based cryptographic applications were proposed. Although these applications rely on the security of the underlying GES, the security of the GES has not been analyzed in detail, aside from the original papers produced by Garg et al. and Coron et al.

We present an attack algorithm against the system parameters of the CLT GES. The proposed algorithm's complexity $\tilde{O}(2^{\rho/2})$ is exponentially smaller than the $\tilde{O}(2^{\rho})$ of the previous best attack of Coron et al., where $\rho$ is a function of the security parameter. Furthermore, we identify a flaw in the generation of the zero-testing parameter of the CLT GES, which drastically reduces the running time of the proposed algorithm. The experimental results demonstrate the practicality of our attack.

15:17 [Pub][ePrint] Simple AEAD Hardware Interface (SÆHI) in a SoC: Implementing an On-Chip Keyak/WhirlBob Coprocessor, by Markku-Juhani O. Saarinen

  Simple AEAD Hardware Interface (SÆHI) is a hardware cryptographic interface aimed at CAESAR Authenticated Encryption with Associated Data (AEAD) algorithms. Cryptographic acceleration is typically achieved either with a coprocessor or via instruction set extensions. ISA modifications require re-engineering the CPU core, making the approach inapplicable outside the realm of open source processor cores. Our proposed hardware interface is a memory-mapped cryptographic coprocessor, implementable even on very low-end FPGA evaluation platforms. Algorithms complying to SÆHI must also include C language API drivers that directly utilize the memory mapping in a "bare metal" fashion. This can also be accommodated on MMU systems.

Extended battery life and bandwidth resulting from dedicated cryptographic hardware are vital for currently dominant computing and communication devices: mobile phones, tablets, and Internet-of-Things (IoT) applications. We argue that these should be priority hardware optimization targets for AEAD algorithms with realistic payloads.

We demonstrate a fully integrated implementation of the WhirlBob and Keyak AEADs on the FPGA fabric of the Xilinx Zynq 7010. This low-cost System-on-Chip (SoC) also houses a dual-core Cortex-A9 CPU, closely matching the architecture of many embedded devices. The on-chip coprocessor is accessible from user space with a Linux kernel driver. An integration path exists all the way to end-user applications.

15:17 [Pub][ePrint] Vernam Two, by Dan P. Milleville

  The Vernam Algorithm has long been unusable commercially because it requires the key to be transmitted with the ciphertext, an obvious security hazard. The methodology developed here takes a fixed key, XORs 4 randomly selected/extracted segments of this key together to form an 'Effective Key Stream', and XORs that with the plaintext. With both the sender and receiver possessing the fixed key, this new methodology circumvents the requirement to send the key along with the ciphertext.
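A minimal Python sketch of the described construction, under stated assumptions: the abstract does not specify segment lengths or how the four offsets are chosen and communicated, so those details below are illustrative.

```python
import secrets

def effective_keystream(fixed_key: bytes, offsets, length: int) -> bytes:
    """XOR 4 segments of the fixed key together, one per offset.

    Segment selection (byte offsets with wrap-around) is an assumption;
    the abstract does not say how segments are chosen or shared.
    """
    stream = bytearray(length)
    for off in offsets:
        for i in range(length):
            stream[i] ^= fixed_key[(off + i) % len(fixed_key)]
    return bytes(stream)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

fixed_key = secrets.token_bytes(256)                   # pre-shared fixed key
offsets = [secrets.randbelow(256) for _ in range(4)]   # 4 random segments

plaintext = b"attack at dawn"
ks = effective_keystream(fixed_key, offsets, len(plaintext))
ciphertext = xor_bytes(plaintext, ks)

# The receiver, knowing the fixed key and the offsets, inverts the XOR.
assert xor_bytes(ciphertext, ks) == plaintext
```

Note that XOR-based stream reuse is the classic weakness of Vernam-style schemes; the sketch only illustrates the mechanics, not a security argument.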

Considering the drastic reduction in computations needed, this algorithm executes at better than four times faster than the AES. With our society becoming more and more digitally oriented, faster security is needed.

15:17 [Pub][ePrint] Reducing Communication Overhead of the Subset Difference Scheme, by Sanjay Bhattacherjee and Palash Sarkar

  In Broadcast Encryption (BE) systems like Pay-TV, AACS, online content sharing and broadcasting, reducing the header length (communication overhead per session) is of practical interest. The Subset Difference (SD) scheme due to Naor-Naor-Lotspiech (NNL) is the most popularly used BE scheme. It assumes an underlying full binary tree to assign keys to subsets of users. In this work, we associate short tree structures of height $a$ to nodes in the binary tree to assign keys to more subsets. This ensures that for any set of revoked users, the header length is in general less and never more than the NNL-SD scheme. Experimental studies show that the average header length is always less than that of the NNL-SD scheme and decreases as $a$ increases. User storage in the new scheme is more than that of the NNL-SD scheme but is not prohibitively expensive. By choosing a suitable value of $a$, it is possible to obtain substantial reduction in the communication overhead at the cost of a tolerable increase in the user storage.

15:17 [Pub][ePrint] The Exact PRF-Security of NMAC and HMAC, by Peter Gazi and Krzysztof Pietrzak and Michal Rybár

  NMAC is a mode of operation which turns a fixed input-length keyed hash function f into a variable input-length function. A practical single-key variant of NMAC called HMAC is a very popular and widely deployed message authentication code (MAC). Security proofs and attacks for NMAC can typically be lifted to HMAC.
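For orientation, a minimal sketch of the NMAC mode; the compression function f and the padding below are toy stand-ins, not the paper's formalization.

```python
import hashlib

C = 32  # chaining-value length in bytes (toy choice)
B = 64  # message-block length in bytes (toy choice)

def f(key: bytes, block: bytes) -> bytes:
    """Toy fixed input-length keyed function standing in for the
    compression function f (an assumption; any PRF-like f fits here)."""
    assert len(key) == C and len(block) == B
    return hashlib.sha256(key + block).digest()

def cascade(k: bytes, blocks) -> bytes:
    """The cascade: iterate f over the message blocks, each call keyed
    by the previous chaining value, starting from k."""
    state = k
    for blk in blocks:
        state = f(state, blk)
    return state

def nmac(k1: bytes, k2: bytes, blocks) -> bytes:
    """NMAC: an outer application of f under the independent key k2,
    applied to the (padded) cascade output."""
    inner = cascade(k1, blocks)
    return f(k2, inner.ljust(B, b"\x00"))  # toy padding of c bits to one block

k1, k2 = b"\x01" * C, b"\x02" * C
msg = [b"A" * B, b"B" * B]
tag = nmac(k1, k2, msg)
assert nmac(k1, k2, msg) == tag         # deterministic
assert nmac(k1, k2, [b"A" * B]) != tag  # different message, different tag
```

HMAC replaces the two independent keys with values derived from a single key via inner/outer pads, which is why results on NMAC typically transfer.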

NMAC was introduced by Bellare, Canetti and Krawczyk [Crypto'96], who proved it to be a secure pseudorandom function (PRF), and thus also a MAC, assuming that (1) f is a PRF and (2) the function we get when cascading f is weakly collision-resistant.
Unfortunately, HMAC is typically instantiated with cryptographic hash functions like MD5 or SHA-1 for which (2) has been found to be wrong. To restore the provable guarantees for NMAC, Bellare [Crypto'06] showed its security based solely on the assumption that f is a PRF, albeit via a non-uniform reduction.

Our first contribution is a simpler and uniform proof: if f is an \eps-secure PRF (against q queries) and a \delta-non-adaptively secure PRF (against q queries), then NMAC^f is an (\eps + lq\delta)-secure PRF against q queries of length at most l blocks each.

We then show that this \eps + lq\delta bound is basically tight. For the most interesting case, where lq\delta >= \eps, we prove this by constructing an f for which an attack with advantage lq\delta exists. This also violates the bound O(l\eps) on the PRF-security of NMAC recently claimed by Koblitz and Menezes.

Finally, we analyze the PRF-security of a modification of NMAC called NI [An and Bellare, Crypto'99] that differs mainly by using a compression function with an additional keying input. This avoids the constant rekeying on multi-block messages in NMAC and allows for a security proof starting with the standard switch from a PRF to a random function, followed by an information-theoretic analysis. We carry out such an analysis, obtaining a tight lq^2/2^c bound for this step, improving over the trivial bound of l^2q^2/2^c. The proof borrows combinatorial techniques originally developed for proving the security of CBC-MAC [Bellare et al., Crypto'05]. We also analyze a variant of NI that does not include the message length in the last call to the compression function, proving an l^{1+o(1)}q^2/2^c bound in this case.