International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

15:17 [Pub][ePrint] Constructing Abelian Surfaces for Cryptography via Rosenhain Invariants, by Craig Costello and Alyson Deines-Schartz and Kristin Lauter and Tonghai Yang

  This paper presents an algorithm to construct cryptographically strong genus 2 curves and their Kummer surfaces via Rosenhain invariants and related Kummer parameters. The most common version of the complex multiplication (CM) algorithm for constructing cryptographic curves in genus 2 relies on the well-studied Igusa invariants and Mestre's algorithm for reconstructing the curve. On the other hand, the Rosenhain invariants typically have much smaller height, so computing them requires less precision, and in addition, the Rosenhain model for the curve can be written down directly given the Rosenhain invariants. Similarly, the parameters for a Kummer surface can be expressed directly in terms of rational functions of theta constants.

CM-values of these functions are algebraic numbers, and when computed to high enough precision, LLL can recognize their minimal polynomials. Motivated by fast cryptography on Kummer surfaces, we investigate a variant of the CM method for computing cryptographically strong Rosenhain models of curves (as well as their associated Kummer surfaces) and use it to generate several example curves at different security levels that are suitable for use in cryptography.
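
For illustration, the recognition step can be sketched in a few lines. The following toy example (not from the paper) uses mpmath's PSLQ integer-relation routine in place of LLL, and the stand-in value sqrt(2)+sqrt(3) rather than an actual CM value of a Rosenhain function:

    # Recognize the minimal polynomial of an algebraic number from a
    # high-precision numerical approximation (PSLQ here stands in for LLL).
    from mpmath import mp, sqrt, pslq

    mp.dps = 60                            # 60 decimal digits of precision
    alpha = sqrt(2) + sqrt(3)              # toy stand-in for a CM value
    powers = [alpha**k for k in range(5)]  # 1, alpha, ..., alpha^4

    # An integer relation among the powers yields the coefficients of a
    # degree-4 polynomial vanishing at alpha: x^4 - 10x^2 + 1.
    print(pslq(powers))                    # e.g. [1, 0, -10, 0, 1]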

15:17 [Pub][ePrint] Note of Multidimensional MITM Attack on 25-Round TWINE-128, by Long Wen and Meiqin Wang and Andrey Bogdanov and Huaifeng Chen

  TWINE is a lightweight block cipher proposed at SAC 2012 by Suzaki et al. TWINE operates on 64-bit blocks and supports 80- or 128-bit keys, the two variants being denoted TWINE-80 and TWINE-128 respectively. TWINE has attracted some attention since its publication, and its security has been analyzed against several cryptanalytic techniques in both single-key and related-key settings. In the single-key setting, the best attack so far is reported by Boztaş et al. at LightSec'13, where a splice-and-cut attack on 21-round TWINE-128 and a multidimensional meet-in-the-middle (MITM) attack on 25-round TWINE-128 are presented. However, as we understand it, the evaluation of the time complexity of the multidimensional MITM attack on 25-round TWINE-128 is questionable. Here we describe the attack in detail and explain our concerns about its time complexity. It turns out that the multidimensional MITM attack on 25-round TWINE-128 may have a time complexity higher than that of exhaustive search.

15:17 [Pub][ePrint] Efficient Authentication and Pseudorandomness from Weaker (Ring-)LPN Assumptions, by Ivan Damgård and Sunoo Park and Sarah Zakarias

  We propose two new approaches to authentication based on the (ring-)LPN problem. In contrast to all known approaches, we can use a noise rate for the LPN problem that is arbitrarily close to 1/2, without this affecting the communication complexity of the protocol, and while doing only (poly-)logarithmic depth computation. At the cost of having the prover keep a small amount of state, our approach allows us to "upgrade" the HB protocol from passive to man-in-the-middle security (the strongest notion) while maintaining its simple structure.

A technical contribution of independent interest is a construction of a poly-logarithmic depth PRF from LPN that is secure if at most a predetermined number $\ell$ of queries are asked; if more queries are asked, the same PRF is still secure, but now under a stronger assumption closely related to LPN. The basic idea of the construction also applies to other problems with a similar structure, such as subset-sum.
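
The underlying LPN problem is easy to state in code. The following minimal sketch (parameters illustrative, not from the paper) generates the samples (a, <a,s> + e) over GF(2) that an adversary sees; the point above is that the Bernoulli noise rate tau may be taken arbitrarily close to 1/2:

    # Generate one LPN sample (a, b) with b = <a, s> + e over GF(2).
    import secrets

    def lpn_sample(s, tau):
        n = len(s)
        a = [secrets.randbelow(2) for _ in range(n)]
        ip = sum(ai & si for ai, si in zip(a, s)) & 1   # inner product mod 2
        e = 1 if secrets.randbelow(10**6) < int(tau * 10**6) else 0
        return a, ip ^ e

    s = [secrets.randbelow(2) for _ in range(128)]      # secret key
    a, b = lpn_sample(s, tau=0.49)                      # noise rate near 1/2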

15:17 [Pub][ePrint] Fast point multiplication algorithms for binary elliptic curves with and without precomputation, by Thomaz Oliveira and Diego F. Aranha and Julio López and Francisco Rodríguez-Henríquez

  In this paper we introduce new methods for computing constant-time variable-base point multiplications over the Galbraith-Lin-Scott (GLS) and the Koblitz families of elliptic curves. Using a left-to-right double-and-add and a right-to-left halve-and-add Montgomery ladder over a GLS curve, we present some of the fastest timings yet reported in the literature for point multiplication. In addition, we combine these two procedures to compute a multi-core protected scalar multiplication. Furthermore, we design, for the first time, a regular $\tau$-adic scalar expansion for Koblitz curves. As a result, using the regular recoding approach, we set the speed record for a single constant-time point multiplication on standardized binary elliptic curves at the $128$-bit security level.
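
As background, the Montgomery ladder pattern referred to above processes one scalar bit per iteration with the same operation sequence regardless of the bit's value. The sketch below shows the structure over plain integers so it stays self-contained; a real curve implementation would replace the additions with point operations and the visible branch with a constant-time conditional swap:

    def montgomery_ladder(k, P, bits):
        """Compute k*P with one 'add' and one 'double' per scalar bit.
        The ladder invariant R1 - R0 = P holds throughout."""
        R0, R1 = 0, P                       # identity and P
        for i in reversed(range(bits)):
            if (k >> i) & 1:
                R0, R1 = R0 + R1, R1 + R1   # add into R0, double R1
            else:
                R0, R1 = R0 + R0, R0 + R1   # double R0, add into R1
        return R0

    assert montgomery_ladder(77, 5, 8) == 77 * 5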

06:17 [Pub][ePrint] FNR : Arbitrary length small domain block cipher proposal, by Sashank Dara, Scott Fluhrer

  We propose a practical flexible-length (arbitrary-length) small-domain block cipher. FNR can encipher small-domain data formats such as IPv4 addresses, port numbers, MAC addresses, IPv6 addresses, and arbitrary short strings and numbers while preserving their input length.

In addition to the classic Feistel network, Naor and Reingold proposed the use of pairwise-independent permutation (PWIP) functions in the first and last rounds of Luby-Rackoff (LR) constructions to provide additional randomness and security. Their PWIP functions, however, are based on Galois fields, and representing GF(2^n) for different input lengths would be complicated to implement. For this reason, the PWIP functions we propose are based on random N × N invertible matrices.
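
The PWIP layer built from random invertible matrices can be sketched as follows. This is a minimal illustration of the idea, not the FNR specification: an affine map x -> Ax + b over GF(2)^n, with A uniformly random and invertible, forms a pairwise-independent permutation family:

    # Sample a random invertible n x n matrix over GF(2) and apply the
    # affine permutation x -> A*x + b; rows are packed into Python ints.
    import random

    def random_invertible_gf2(n):
        while True:
            rows = [random.getrandbits(n) for _ in range(n)]
            m, rank = rows[:], 0
            for col in range(n):            # Gaussian elimination over GF(2)
                piv = next((r for r in range(rank, n) if (m[r] >> col) & 1), None)
                if piv is None:
                    break                   # dependent column: singular, resample
                m[rank], m[piv] = m[piv], m[rank]
                for r in range(n):
                    if r != rank and (m[r] >> col) & 1:
                        m[r] ^= m[rank]
                rank += 1
            if rank == n:
                return rows

    def pwip(A, b, x, n):
        y = 0
        for i in range(n):                  # output bit i = parity of <row i, x>
            y |= (bin(A[i] & x).count("1") & 1) << i
        return y ^ b

    n = 32
    A, b = random_invertible_gf2(n), random.getrandbits(n)
    y = pwip(A, b, 0xDEADBEEF, n)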

In this paper we give the specification of the FNR mode of encryption, together with its properties, features, and limitations.

We describe possible applications of this block cipher for preserving the formats of input types such as IPv4 addresses and credit card numbers, and we report our reference implementation's experimental results and performance numbers in different setups. FNR should be used only when deterministic encryption is needed; it does not provide semantic security.

FNR stands for Flexible Naor and Reingold.

06:17 [Pub][ePrint] System-level non-interference for constant-time cryptography, by Gilles Barthe and Gustavo Betarte and Juan Diego Campo and Carlos Luna and David Pichardie

  Cache-based attacks are a class of side-channel attacks that are particularly effective in virtualized or cloud-based environments, where they have been used to recover secret keys from cryptographic implementations. One common approach to thwart cache-based attacks is to use constant-time implementations, i.e., ones that do not branch on secrets and do not perform memory accesses that depend on secrets. However, there is no rigorous proof that constant-time implementations are protected against concurrent cache attacks in virtualization platforms with shared cache; moreover, many prominent implementations are not constant-time. An alternative approach is to rely on system-level mechanisms. One recent such mechanism is stealth memory, which provisions a small amount of private cache for programs to carry out potentially leaking computations securely. Stealth memory induces a weak form of constant-time, called S-constant-time, which encompasses some widely used cryptographic implementations. However, there is no rigorous analysis of stealth memory and S-constant-time, and no tool support for checking if applications are S-constant-time.

We propose a new information-flow analysis that checks if an x86 application executes in constant-time, or in S-constant-time. Moreover, we prove that constant-time (resp. S-constant-time) programs do not leak confidential information through the cache to other operating systems executing concurrently on virtualization platforms (resp. platforms supporting stealth memory). The soundness proofs are based on new theorems of independent interest, including isolation theorems for virtualization platforms (resp. platforms supporting stealth memory), and proofs that constant-time implementations (resp. S-constant-time implementations) are non-interfering with respect to a strict information-flow policy which disallows that control flow and memory accesses depend on secrets. We formalize our results using the Coq proof assistant and we demonstrate the effectiveness of our analyses on cryptographic implementations, including PolarSSL AES, DES and RC4, SHA256 and Salsa20.
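
To make the constant-time discipline concrete, the following sketch contrasts the two code shapes such an analysis distinguishes. This only illustrates the patterns involved (Python itself gives no timing guarantees, and this is not the paper's tool or its input language):

    def select_leaky(secret_bit, a, b):
        # Rejected: control flow depends on a secret.
        if secret_bit:
            return a
        return b

    def select_ct(secret_bit, a, b):
        # Accepted shape: branch-free arithmetic selection. A table lookup
        # indexed by a secret would likewise be rejected, since the memory
        # address would depend on the secret.
        mask = -secret_bit          # 0 -> ...000, 1 -> ...111
        return (a & mask) | (b & ~mask)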

06:17 [Pub][ePrint] The Hash Function "Fugue", by Shai Halevi and William E. Hall and Charanjit S. Jutla

  We describe Fugue, a hash function supporting inputs of length up to 2^{64}-1 bits and hash outputs of length up to 512 bits. Notably, Fugue is not based on a compression function; rather, it is directly a hash function that supports variable-length inputs.

The starting point for Fugue is the hash function Grindahl, but it extends that design to protect against the kinds of attacks that were developed for Grindahl, as well as for earlier hash functions like SHA-1. A key enhancement is the design of a much stronger round function which replaces the AES round function of Grindahl, using better codes (over longer words) than the AES 4×4 MDS matrix. Also, Fugue makes judicious use of this new round function on a much larger internal state.

The design of Fugue is proof-oriented: the various components are designed in such a way as to allow proofs of security, and yet be efficient to implement. As a result, we can prove that current attack methods cannot find collisions in Fugue any faster than the trivial birthday attack. Although the proof is computer-assisted, the assistance is limited to computing ranks of various matrices.

21:17 [Pub][ePrint] Virtual Proofs of Reality, by Ulrich Rührmair

  In this paper, we discuss the question of how physical statements can be proven remotely over digital communication channels, but without using classical secret keys, and without assuming tamper-resistant and trusted measurement hardware in the location of the prover. Examples of the physical statements we consider are: (i) "the temperature of a certain object is X °C", (ii) "two certain objects are positioned at distance X", or (iii) "a certain object has been irreversibly altered or destroyed". For lack of an established name, we would like to call the corresponding security protocols "virtual proofs of reality" (VPs).

While a host of variants seems conceivable, this paper focuses on VPs in which the verifier has handed over one or more specific physical objects O_i to the prover at some point prior to the VP. These "witness objects" assist the prover during the proof, but shall neither contain classical digital keys nor be assumed tamper-resistant in the classical sense. The prover is allowed to open, inspect and alter these objects in our adversarial model, limited only by current technology, while he shall still be unable to prove false claims to the verifier.

In order to illustrate our concept, we give example protocols built on temperature-sensitive integrated circuits, disordered optical scattering media, and quantum systems. These protocols prove the temperature, destruction/modification, or relative position of witness objects in the prover's location. Full experimental realizations of these schemes are beyond the scope of this paper, but the protocols utilize established technologies from the areas of physical unclonable functions and quantum cryptography, and hence appear plausible even without such a proof. Finally, we also discuss potential advancements of our method in theory, for example "public virtual proofs" that function without exchanging witness objects O_i between the verifier and the prover.

Our work touches upon and partly extends several established cryptographic and security concepts, including physical unclonable functions, quantum cryptography, and interactive proof systems.

21:17 [Pub][ePrint] Adaptive Security of Constrained PRFs, by Georg Fuchsbauer and Momchil Konstantinov and Krzysztof Pietrzak and Vanishree Rao

  Constrained pseudorandom functions have recently been introduced independently by Boneh and Waters [Asiacrypt'13], Kiayias et al. [CCS'13], and Boyle et al. [PKC'14]. In a standard pseudorandom function (PRF), a key $k$ is used to evaluate the PRF on all inputs in the domain. Constrained PRFs additionally offer the functionality to delegate "constrained" keys $k_S$ which allow one to evaluate the PRF only on a subset $S$ of the domain.

The three above-mentioned papers all show that the classical GGM construction [J.ACM'86] of a PRF from a pseudorandom generator (PRG) directly gives a constrained PRF where one can compute constrained keys to evaluate the PRF on all inputs with a given prefix. This constrained PRF has already found many interesting applications.
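
The GGM prefix-constrained PRF is simple enough to sketch directly. In the sketch below the length-doubling PRG is instantiated, purely as an illustrative assumption, by SHA-256; a constrained key for a prefix is just the intermediate seed at the corresponding tree node:

    import hashlib

    def prg_half(seed, bit):
        """Left (bit=0) or right (bit=1) half of the PRG output on seed."""
        return hashlib.sha256(seed + bytes([bit])).digest()

    def ggm_eval(key, bits):
        """Evaluate the GGM PRF by walking the tree along the input bits."""
        s = key
        for b in bits:
            s = prg_half(s, b)
        return s

    def constrain(key, prefix):
        """Constrained key for all inputs starting with `prefix`."""
        return ggm_eval(key, prefix)

    k = b"\x00" * 32
    x = [1, 0, 1, 1, 0, 0, 1, 0]
    k_p = constrain(k, x[:3])                      # key for prefix 101
    assert ggm_eval(k_p, x[3:]) == ggm_eval(k, x)  # same PRF value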

Unfortunately, the existing security proofs only show selective security (by a reduction to the security of the underlying PRG). To get full security, one has to use complexity leveraging, which loses an exponential factor $2^N$ in security, where $N$ is the input length.

The first contribution of this paper is a new reduction that only loses a quasipolynomial factor $q^{\log N}$, where $q$ is the number of adversarial queries. For this we develop a novel proof technique which constructs a distinguisher by interleaving simple guessing steps and hybrid arguments a small number of times. This approach might also be of interest in other contexts where currently the only technique to achieve full security is complexity leveraging.

Our second contribution concerns another constrained PRF, due to Boneh and Waters, which allows constrained keys for the more general class of bit-fixing functions. Their security proof also suffers from a $2^N$ loss. We construct a meta-reduction which shows that any "simple" reduction that proves full security of this construction from a non-interactive hardness assumption must incur an exponential security loss.

21:17 [Pub][ePrint] Using Random Error Correcting Codes in Near-Collision Attacks on Generic Hash-Functions, by Inna Polak, Adi Shamir

  In this paper we consider the problem of finding a near-collision with Hamming distance bounded by $r$ in a generic cryptographic hash function $h$ whose outputs can be modeled as random $n$-bit strings. In 2011, Lamberger suggested a modified version of Pollard's rho method which computes a chain of values by alternately applying the hash function $h$ and an error-correcting code $e$ to a random starting value $x_0$ until it cycles. This turns some (but not all) of the near-collisions in $h$ into full collisions in $f = e \circ h$, which are easy to find. In 2012, Leurent improved Lamberger's memoryless algorithm by using any available amount of memory to store the endpoints of multiple chains of $f$ values, and using van Oorschot and Wiener's algorithm to find many full collisions in $f$, hoping that one of them will be an $r$-near-collision in $h$. This is currently the best known time/memory tradeoff algorithm for the problem.
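
The chain idea is easy to demonstrate at toy scale. In the sketch below (illustrative parameters, not the paper's code), $h$ is SHA-256 truncated to $n$ bits and the "error correction" $e$ is the crudest possible choice, a projection zeroing the last $r$ bits, so any collision in $f = e \circ h$ is by construction an $r$-near-collision in $h$; the paper's contribution is precisely to replace such projections and Hamming codes with random linear codes decoded via a precomputed table:

    import hashlib

    n, r = 32, 8   # toy sizes; the paper's small experiment uses n=160, r=33

    def h(x):
        d = hashlib.sha256(x.to_bytes(16, "big")).digest()
        return int.from_bytes(d, "big") >> (256 - n)   # n-bit hash

    def e(y):
        return (y >> r) << r   # "decode" by zeroing the last r bits

    seen, found = {}, None
    for start in range(1, 10**6):      # fresh chain starts
        x = (1 << 40) + start          # tagged so starts never equal f-values
        while True:
            y = h(x)
            fx = e(y)
            if fx in seen:             # chain merged with an earlier one
                x2, y2 = seen[fx]
                if y2 != y:            # distinct h-outputs, same codeword
                    found = (x2, x, bin(y ^ y2).count("1"))
                break
            seen[fx] = (x, y)
            x = fx                     # chain step: x_{i+1} = e(h(x_i))
        if found:
            print("inputs %d and %d give a %d-near-collision in h" % found)
            break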

The efficiency of both Lamberger's and Leurent's algorithms depends on the quality of their error-correcting code. Since they have to apply error correction to any bit string, they want to use perfect codes, but all the known constructions of such codes can correct only $1$ or $3$ errors. To deal with a larger number of errors, they recommend using a concatenation of many Hamming codes, each capable of correcting a single error in a particular subset of the bits, along with some projections. As we show in this paper, this is a suboptimal choice, which can be considerably improved by using randomly chosen linear codes instead of Hamming codes and storing a precomputed lookup table to make the error-correction process efficient. We show both theoretically and experimentally that this is a better way to utilize the available memory than devoting all of it to the storage of chain endpoints. Compared to Leurent's algorithm, we demonstrate an improvement ratio which grows with the size of the problem. In particular, we experimentally verified an improvement ratio of about $3$ in a small example with $n=160$ and $r=33$ which we implemented on a single PC, and mathematically predicted an improvement ratio of about $730$ in a large example with $n=1024$ and $r=100$, using $2^{40}$ memory.

21:17 [Pub][ePrint] A Simple Recursive Tree Oblivious RAM, by Benny Pinkas and Tzachy Reinman

  Oblivious RAM (ORAM) has received increasing attention in the past few years. The goal of oblivious RAM is to enable a client that can locally store only a small (preferably constant) amount of data to remotely store $N$ data items and access them while hiding the identities of the items being accessed. Most of the earlier ORAM constructions were based on the hierarchical data structure of Goldreich and Ostrovsky. Shi et al. introduced a binary-tree ORAM, which is simpler and more efficient than the classical hierarchical ORAM, and Gentry et al. have since improved that scheme. In this work, we improve these two constructions. Our scheme asymptotically outperforms all previous tree-based ORAM schemes that have constant client memory, with an overhead of $O(\log^{2+\epsilon}(N) \cdot \log^2(\log N))$ per operation for $O(N)$ server storage. Although the best known asymptotic result for ORAM, $O(\log^2(N)/\log(\log N))$, is due to the hierarchical structure of Kushilevitz et al., tree-based ORAM constructions are much simpler.
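
As a minimal illustration of the access pattern that tree-based ORAMs rely on (a sketch of the general idea, not this paper's construction): each item is mapped to a uniformly random leaf of a binary tree, and every access touches exactly the buckets on one root-to-leaf path, so the server observes only random paths:

    import math, secrets

    def path_to_leaf(leaf, depth):
        """Bucket indices (heap numbering, root = 1) on the root-to-leaf path."""
        node = leaf + (1 << depth)   # leaves are nodes 2^depth .. 2^(depth+1)-1
        path = []
        while node >= 1:
            path.append(node)
            node //= 2
        return path[::-1]            # root first

    N = 1024
    depth = math.ceil(math.log2(N))        # tree with 2^depth leaves
    pos = secrets.randbelow(1 << depth)    # item's current random leaf
    print(path_to_leaf(pos, depth))        # the depth+1 buckets one access touches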