International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources for keeping track of changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2014-06-06
06:17 [Pub][ePrint]

We describe Fugue, a hash function supporting inputs of length up to 2^{64}-1 bits and hash outputs of length up to 512 bits. Notably, Fugue is not based on a compression function; rather, it is directly a hash function that supports variable-length inputs.

The starting point for Fugue is the hash function Grindahl, but it extends that design to protect against the kinds of attacks that were developed for Grindahl, as well as for earlier hash functions like SHA-1. A key enhancement is a much stronger round function that replaces the AES round function of Grindahl, using better codes (over longer words) than the 4×4 AES MDS matrix. Fugue also makes judicious use of this new round function on a much larger internal state.

The design of Fugue is proof-oriented: the various components are designed so as to allow proofs of security and yet be efficient to implement. As a result, we can prove that current attack methods cannot find collisions in Fugue any faster than the trivial birthday attack. Although the proof is computer-assisted, the assistance is limited to computing ranks of various matrices.
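For reference, the generic bound invoked here is easy to make concrete (a standard birthday computation, not taken from the paper): for an $s$-bit output, after about $2^{s/2}$ evaluations a collision is expected, since

```latex
\[
\binom{2^{s/2}}{2} \cdot 2^{-s} \approx \tfrac{1}{2},
\qquad\text{i.e. roughly } 2^{512/2} = 2^{256}
\text{ evaluations for Fugue's 512-bit output.}
\]
```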

2014-06-05
21:17 [Pub][ePrint]

In this paper, we discuss how physical statements can be proven remotely over digital communication channels, without using classical secret keys and without assuming tamper-resistant, trusted measurement hardware in the prover's location. Examples of the considered physical statements are: (i) "the temperature of a certain object is X °C", (ii) "two certain objects are positioned at distance X", or (iii) "a certain object has been irreversibly altered or destroyed". In the absence of an established name, we call the corresponding security protocols "virtual proofs of reality" (VPs).

While a host of variants seems conceivable, this paper focuses on VPs in which the verifier has handed over one or more specific physical objects O_i to the prover at some point prior to the VP. These "witness objects" assist the prover during the proof, but shall neither contain classical digital keys nor be assumed tamper-resistant in the classical sense. In our adversarial model, the prover is allowed to open, inspect, and alter these objects, limited only by current technology, while still being unable to prove false claims to the verifier.

To illustrate our concept, we give example protocols built on temperature-sensitive integrated circuits, disordered optical scattering media, and quantum systems. These protocols prove the temperature, destruction/modification, or relative position of witness objects in the prover's location. Full experimental realizations of these schemes are beyond the scope of this paper, but the protocols build on established technologies from the areas of physical unclonable functions and quantum cryptography, and hence appear plausible even without such proof. Finally, we discuss potential advancements of our method in theory, for example "public virtual proofs" that function without exchanging witness objects O_i between the verifier and the prover.
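To make the witness-object idea concrete, here is a minimal, self-contained Python sketch of how a temperature proof could be structured around a temperature-dependent PUF. The `SimulatedPUF` model and all names are illustrative assumptions, not the paper's protocol:

```python
import random

class SimulatedPUF:
    """Toy stand-in for a temperature-dependent witness object:
    its response depends on both the challenge and the temperature."""
    def __init__(self, seed: int):
        self.seed = seed
    def measure(self, challenge: int, temp_c: int) -> int:
        rng = random.Random(f"{self.seed}/{challenge}/{temp_c}")
        return rng.getrandbits(128)

# Enrollment: before handing the witness object over, the verifier
# records challenge-response pairs at every temperature it may later
# need to verify.
puf = SimulatedPUF(seed=42)
temps = [20, 30, 40]
challenges = list(range(100))
crp_table = {(c, t): puf.measure(c, t) for c in challenges for t in temps}

# Proof round: the verifier sends a not-yet-used challenge; the prover
# measures the object at the claimed temperature and responds.
claimed_temp = 30
challenge = random.choice(challenges)
response = puf.measure(challenge, claimed_temp)      # prover side

# Verification: accept iff the response matches the enrolled one for
# the claimed temperature (a real scheme would tolerate measurement
# noise with a threshold rather than demand exact equality).
assert crp_table[(challenge, claimed_temp)] == response
```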

Our work touches upon and partly extends several established cryptographic and security concepts, including physical unclonable functions, quantum cryptography, and interactive proof systems.

21:17 [Pub][ePrint]

Constrained pseudorandom functions were recently introduced independently by Boneh and Waters [Asiacrypt'13], Kiayias et al. [CCS'13], and Boyle et al. [PKC'14]. In a standard pseudorandom function (PRF), a key $k$ is used to evaluate the PRF on all inputs in the domain. Constrained PRFs additionally offer the functionality to delegate "constrained" keys $k_S$ which allow evaluating the PRF only on a subset $S$ of the domain.

The three above-mentioned papers all show that the classical GGM construction [J.ACM'86] of a PRF from a pseudorandom generator (PRG) directly gives a constrained PRF where one can compute constrained keys for evaluating the PRF on all inputs with a given prefix.
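The prefix-constrained GGM construction is simple enough to sketch in a few lines of Python; HMAC-SHA256 stands in for the length-doubling PRG (an illustrative choice, any PRG works):

```python
import hmac, hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    """Length-doubling PRG: seed -> (G_0(seed), G_1(seed))."""
    left  = hmac.new(seed, b"0", hashlib.sha256).digest()
    right = hmac.new(seed, b"1", hashlib.sha256).digest()
    return left, right

def ggm_eval(key: bytes, x: str) -> bytes:
    """Evaluate the GGM PRF on bit-string x by walking down the tree."""
    node = key
    for bit in x:
        node = prg(node)[int(bit)]
    return node

def constrain(key: bytes, prefix: str) -> bytes:
    """A constrained key for all inputs starting with `prefix` is just
    the intermediate tree node reached by walking down that prefix."""
    return ggm_eval(key, prefix)

# The holder of k_S can evaluate exactly the inputs with that prefix:
k = b"\x00" * 32
k_s = constrain(k, "0110")
assert ggm_eval(k_s, "1011") == ggm_eval(k, "0110" + "1011")
```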

This constrained PRF has already found many interesting applications. Unfortunately, the existing security proofs only show selective security (by a reduction to the security of the underlying PRG). To get full security, one has to use complexity leveraging, which loses an exponential factor $2^N$ in security, where $N$ is the input length.

The first contribution of this paper is a new reduction that only loses a quasipolynomial factor $q^{\log N}$, where $q$ is the number of queries made by the adversary. For this we develop a novel proof technique which constructs a distinguisher by interleaving simple guessing steps and hybrid arguments a small number of times. This approach might be of interest also in other contexts where currently the only technique to achieve full security is complexity leveraging.
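To illustrate the gap between the two losses, here is a worked example with our own parameter choices (not from the paper):

```latex
% Input length N = 128 and q = 2^{10} adversarial queries give
\[
\text{complexity leveraging: } 2^{N} = 2^{128}
\qquad\text{vs.}\qquad
\text{this work: } q^{\log N} = \left(2^{10}\right)^{\log_2 128} = 2^{70}.
\]
```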

Our second contribution concerns another constrained PRF, due to Boneh and Waters, which allows constrained keys for the more general class of bit-fixing functions. Their security proof also suffers from a $2^N$ loss. We construct a meta-reduction which shows that any "simple" reduction proving full security of this construction from a non-interactive hardness assumption must incur an exponential security loss.
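For concreteness, a bit-fixing function can be described by a pattern over {0, 1, ?} that fixes some input bits and leaves the rest free; the helper below is an illustrative aid, not notation from the paper:

```python
def matches(pattern: str, x: str) -> bool:
    """A constrained key for pattern p permits evaluation exactly on
    the inputs x that agree with p on every fixed (non-'?') bit."""
    return all(p in ("?", b) for p, b in zip(pattern, x))

assert matches("01?1?", "01010")
assert not matches("01?1?", "11010")
```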

21:17 [Pub][ePrint]

In this paper we consider the problem of finding a near-collision with Hamming distance bounded by $r$ in a generic cryptographic hash function $h$ whose outputs can be modeled as random $n$-bit strings. In 2011, Lamberger suggested a modified version of Pollard's rho method which computes a chain of values by alternately applying the hash function $h$ and an error-correcting code $e$ to a random starting value $x_{0}$ until it cycles. This turns some (but not all) of the near-collisions in $h$ into full collisions in $f=e\circ h$, which are easy to find. In 2012, Leurent improved Lamberger's memoryless algorithm by using any available amount of memory to store the endpoints of multiple chains of $f$ values, and using Van Oorschot and Wiener's algorithm to find many full collisions in $f$, hoping that one of them will be an $r$-near-collision in $h$. This is currently the best known time/memory tradeoff algorithm for the problem.
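The chain structure is easy to sketch. In the toy Python below, a truncated SHA-1 stands in for $h$, and a trivial bit-clearing map stands in for the error-correcting code $e$ (both are illustrative stand-ins, not the paper's parameters):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha1(x).digest()[:4]   # toy 32-bit hash

def e(y: bytes) -> bytes:
    # Crude stand-in for "round to the nearest codeword": clearing the
    # low 2 bits of each byte collapses strings that differ only there,
    # so such near-collisions in h become full collisions in f.
    return bytes(b & 0xFC for b in y)

def f(x: bytes) -> bytes:
    return e(h(x))

# Pollard-rho-style chain: iterate f from a starting value until a
# cycle is detected (Floyd's algorithm). Locating the exact colliding
# pair then takes the usual extra walk from the start points.
tortoise = hare = b"start"
while True:
    tortoise = f(tortoise)
    hare = f(f(hare))
    if tortoise == hare:
        break
```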

The efficiency of both Lamberger's and Leurent's algorithms depends on the quality of their error-correcting code. Since they have to apply error correction to \emph{any} bit string, they want to use perfect codes, but all the known constructions of such codes can correct only $1$ or $3$ errors. To deal with a larger number of errors, they recommend using a concatenation of many Hamming codes, each capable of correcting a single error in a particular subset of the bits, along with some projections. As we show in this paper, this is a suboptimal choice, which can be considerably improved by using randomly chosen linear codes instead of Hamming codes and storing a precomputed lookup table to make the error correction process efficient. We show both theoretically and experimentally that this is a better way to utilize the available memory than devoting all of it to the storage of chain endpoints. Compared to Leurent's algorithm, we demonstrate an improvement ratio which grows with the size of the problem. In particular, we experimentally verified an improvement ratio of about $3$ in a small example with $n=160$ and $r=33$ which we implemented on a single PC, and mathematically predicted an improvement ratio of about $730$ in a large example with $n=1024$ and $r=100$, using $2^{40}$ memory.
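A minimal sketch of the proposed ingredient, a random linear code decoded via a precomputed syndrome table, might look as follows (tiny illustrative parameters, not the paper's):

```python
import itertools
import numpy as np

n, k = 12, 6                        # code length and dimension
rng = np.random.default_rng(1)
A = rng.integers(0, 2, (n - k, k))  # random part of the parity-check matrix
H = np.concatenate([A, np.eye(n - k, dtype=int)], axis=1)   # H = [A | I]

# Precompute, for every syndrome, a minimum-weight error pattern
# (enumerating error weights in increasing order guarantees minimality).
table = {}
for wt in range(n + 1):
    for pos in itertools.combinations(range(n), wt):
        err = np.zeros(n, dtype=int); err[list(pos)] = 1
        table.setdefault(tuple(H @ err % 2), err)
    if len(table) == 2 ** (n - k):
        break

def correct(y: np.ndarray) -> np.ndarray:
    """Map y to the nearest codeword with a single table lookup."""
    return (y + table[tuple(H @ y % 2)]) % 2

y = rng.integers(0, 2, n)
c = correct(y)
assert not np.any(H @ c % 2)        # c is a codeword
```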

21:17 [Pub][ePrint]

Oblivious RAM (ORAM) has received increasing attention in the past few years. The goal of oblivious RAM is to enable a client that can locally store only a small (preferably constant) amount of data to store N data items remotely and to access them while hiding the identities of the items being accessed. Most of the earlier ORAM constructions were based on the hierarchical data structure of Goldreich and Ostrovsky. Shi et al. introduced a binary-tree ORAM, which is simpler and more efficient than the classical hierarchical ORAM, and Gentry et al. subsequently improved the scheme. In this work, we improve these two constructions. Our scheme asymptotically outperforms all previous tree-based ORAM schemes that have constant client memory, with an overhead of O(log^{2+eps}(N) * log^2(log(N))) per operation for an O(N)-storage server. Although the best known asymptotic result for ORAM is due to the hierarchical structure of Kushilevitz et al. (O(log^2(N)/log(log(N)))), tree-based ORAM constructions are much simpler.
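The access pattern shared by this line of tree-based constructions can be sketched compactly. The toy Python below elides bucket-size limits and eviction, and all names are illustrative:

```python
import random

DEPTH = 4                               # tree with 2**DEPTH leaves
tree = {}                               # server side: node id -> bucket
position = {}                           # client side: key -> assigned leaf

def path(leaf: int):
    """Node ids on the path from the given leaf up to the root."""
    node = leaf + 2 ** DEPTH            # heap-style numbering
    while node >= 1:
        yield node
        node //= 2

def access(key, new_value=None):
    leaf = position.setdefault(key, random.randrange(2 ** DEPTH))
    value = None
    for node in path(leaf):             # read the whole path, so the
        bucket = tree.setdefault(node, [])  # server cannot tell which
        for i, (k, v) in enumerate(bucket): # bucket actually held it
            if k == key:
                value = v
                del bucket[i]
                break
    position[key] = random.randrange(2 ** DEPTH)  # remap to a fresh leaf
    tree.setdefault(1, []).append(
        (key, new_value if new_value is not None else value))
    return value

access("x", 42)
assert access("x") == 42
```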

21:17 [Pub][ePrint]

In 1993, Coppersmith introduced the "factorization factory" approach as a means to speed up the Number Field Sieve algorithm (NFS) when factoring batches of integers of similar size: at the expense of a large precomputation whose cost is amortized when considering sufficiently many integers to factor, the complexity of each individual factorization can then be lowered.

We suggest here to extend this idea to the computation of discrete logarithms in finite fields of small characteristic using the Function Field Sieve (FFS), thus referring to this approach as the "FFS factory". In this paper, the benefits of the proposed technique are established through both a theoretical complexity analysis and a practical experiment in which we solved the discrete logarithm problem in fifty different binary fields of sizes ranging from 601 to 699 bits.
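For intuition, the analogous trade-off in Coppersmith's original NFS setting is usually quoted as follows (these are the classical NFS-factory exponents, given here for comparison only; the FFS-factory analysis is in the paper):

```latex
% One precomputation at cost L_N[1/3, 2.007], then each batch member
% is factored at L_N[1/3, 1.639] instead of the usual L_N[1/3, 1.923],
% where
\[
L_N[1/3, c] = \exp\!\left( (c + o(1)) (\ln N)^{1/3} (\ln \ln N)^{2/3} \right).
\]
```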

21:17 [Pub][ePrint]

Homomorphic signatures enable anyone to publicly perform computations on signed data and produce a compact tag to authenticate the results. In this paper, we construct two bounded fully homomorphic signature schemes, as follows.

\begin{itemize}

\item For any two polynomials $d=d(\lambda)$ and $s=s(\lambda)$, where $\lambda$ is the security parameter, our first scheme can evaluate any circuit on the signatures, as long as the depth and size of the circuit are bounded by $d$ and $s$, respectively. The construction relies on indistinguishability obfuscation and injective (or polynomially bounded pre-image size) one-way functions.

\medskip

\item The second scheme, which removes the restriction on the size of the circuits, is an extension of the first one with succinct verification and evaluation keys. More specifically, for an a priori polynomial $d=d(\lambda)$, the scheme can evaluate any circuit on the signatures, as long as the depth of the circuit is bounded by $d$. This scheme is based on differing-inputs obfuscation and collision-resistant hash functions, and relies on a technique called recording hash of circuits.

\end{itemize}

Both schemes enjoy the composition property: outputs of previously derived signatures can be re-used as inputs for new computations. The length of derived signatures in both schemes is independent of the size of the data set. Moreover, both constructions satisfy a strong privacy notion we call {\em semi-strong context hiding}, which requires that the signatures derived by evaluating any circuit on the signatures of two data sets are {\em identical} whenever the evaluations of the circuit on those two data sets are the same.
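The composition property amounts to an interface of the following shape (a conceptual Python sketch; all names are our assumptions, not the paper's notation):

```python
from typing import Any, Protocol, Sequence, Tuple

class BoundedFHS(Protocol):
    def sign(self, sk: Any, message: Any) -> Any: ...
    def eval(self, vk: Any, circuit: Any,
             messages: Sequence[Any], sigs: Sequence[Any]) -> Tuple[Any, Any]:
        """Return (circuit(messages), sigma') authenticating the result."""
    def verify(self, vk: Any, circuit: Any, result: Any, sig: Any) -> bool: ...

# Composition (conceptual usage): sigma_y is a derived signature, yet
# it can be fed back into a later evaluation as an input signature:
#   y, sigma_y = fhs.eval(vk, C1, [m1, m2], [s1, s2])
#   z, sigma_z = fhs.eval(vk, C2, [y], [sigma_y])
```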

2014-06-04
15:17 [Pub][ePrint]

We describe a method to bootstrap a packed BGV ciphertext which does not depend (as much) on any special properties of the plaintext and ciphertext moduli. Prior "efficient" methods such as that of Gentry et al. (PKC 2012) required a ciphertext modulus $q$ which was close to a power of the plaintext modulus $p$. This enables our method to be applied in a larger number of situations. Also, unlike previous methods, our depth grows only as $\log \log q$ rather than $\log q$. The basic bootstrapping technique makes use of a representation of the group $\Z_q^+$ over the finite field $\F_p$ (either based on polynomials or elliptic curves). This technique is then extended to the full BGV packed ciphertext space, using a method whose depth depends only logarithmically on the number of packed elements. This method may be of interest as an alternative to the method of Alperin-Sheriff and Peikert (CRYPTO 2013). To aid efficiency, we utilize the ring/field switching technique of Gentry et al. (SCN 2012, JCS 2013).
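To see what the depth improvement means concretely (our arithmetic, not the paper's):

```latex
% For a ciphertext modulus q \approx 2^{64},
\[
\text{previous methods: depth } \Theta(\log q) \approx 64,
\qquad
\text{this method: depth } \Theta(\log \log q) \approx \log_2 64 = 6.
\]
```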

15:17 [Pub][ePrint]

We generalize correlation-enhanced power analysis collision attacks into moments-correlating DPA. The resulting distinguisher is applicable to both the profiled and non-profiled (collision) settings and is able to exploit information lying in any statistical moment. It also benefits from a simple rule of thumb for estimating its data complexity. Experimental results show that such a tool allows answering some important questions regarding the design of side-channel countermeasures with confidence (e.g., what is the most informative statistical moment in the leakages of a threshold implementation). We further argue that moments-correlating DPA is a natural candidate for leakage detection tests, enjoying the simplicity of correlation power analysis and advanced features for the evaluation of higher-order attacks with an easy-to-compute confidence level.
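A minimal numpy sketch of the profiled variant of this idea might look as follows; the shapes and the leakage model are illustrative assumptions:

```python
import numpy as np

def profile_moments(traces, values, d=2):
    """Estimate the d-th centered moment of the leakage, per time
    sample, for each intermediate value seen during profiling.
    traces: (n_traces, n_samples); values: (n_traces,)."""
    moments = {}
    for v in np.unique(values):
        x = traces[values == v]
        moments[v] = np.mean((x - x.mean(0)) ** d, axis=0)
    return moments

def mcdpa_score(traces, hyp_values, moments, d=2):
    """Correlate fresh traces (centered and raised to the d-th power)
    with the profiled moments predicted by a key hypothesis; returns
    one Pearson correlation per time sample."""
    model = np.array([moments[v] for v in hyp_values])
    x = (traces - traces.mean(0)) ** d
    xc, mc = x - x.mean(0), model - model.mean(0)
    return (xc * mc).sum(0) / np.sqrt((xc**2).sum(0) * (mc**2).sum(0))
```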

15:17 [Pub][ePrint]

In this paper, we introduce a new approach to side-channel key recovery that combines the low time/memory complexity and noise tolerance of standard (divide-and-conquer) differential power analysis with the optimal data complexity of algebraic side-channel attacks. Our fundamental contribution for this purpose is to change the way the problem is expressed, from the system of equations used in algebraic attacks to a code, essentially inspired by low-density parity-check codes. We then show that such codes can be efficiently decoded, taking advantage of the sparsity of the information corresponding to intermediate variables in actual leakage traces. The resulting soft analytical side-channel attacks work under the same profiling assumptions as template attacks, and directly exploit the vectors of probabilities produced by these attacks. As a result, we bridge the gap between popular side-channel distinguishers based on simple statistical tests and previous approaches to analytical side-channel attacks, which so far could only exploit hard information.
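The inner operation of such a soft decoder, propagating probability vectors through an XOR constraint, is compact enough to sketch (a full factor-graph decoder is beyond this note):

```python
import numpy as np

def xor_convolve(p_a: np.ndarray, p_b: np.ndarray) -> np.ndarray:
    """For c = a ^ b over w-bit variables, compute
    P[c = z] = sum_x P[a = x] * P[b = x ^ z]."""
    n = len(p_a)
    p_c = np.zeros(n)
    for x in range(n):
        for z in range(n):
            p_c[z] += p_a[x] * p_b[x ^ z]
    return p_c

# E.g. two templates' probability vectors over a 2-bit variable:
p_a = np.array([0.7, 0.1, 0.1, 0.1])
p_b = np.array([0.25, 0.25, 0.25, 0.25])
print(xor_convolve(p_a, p_b))   # XOR with a uniform variable stays uniform
```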

15:17 [Pub][ePrint]

Combining countermeasures is usually assumed to be the best way to protect embedded devices against side-channel attacks. These combinations are at least expected to increase the number of measurements needed for successful attacks to some reasonable extent, and at best to guarantee a bounded time complexity independent of the number of measurements. This latter guarantee, which is only possible in the context of leakage-resilient constructions, has so far been reached only for stateful (pseudo-random generator) constructions or for large parallel implementations. In this paper, we describe a first proposal of a stateless (pseudo-random function) construction for which we have strong hints that security-bounded implementations are reachable under the constraints of small embedded devices. Our proposal essentially combines the well-known shuffling countermeasure with a tweaked pseudo-random function introduced at CHES 2012. We first detail its performance. Then we analyze it against standard differential power analysis and discuss the different parameters influencing its security bounds. Finally, we put forward that its implementation in 8-bit microcontrollers can provide a better security vs. performance tradeoff than state-of-the-art (combinations of) countermeasures.
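The shuffling ingredient itself is easy to illustrate. The sketch below randomizes the processing order of an S-box layer with fresh randomness per execution, so that a given byte's leakage no longer appears at a fixed point in the trace (placeholder S-box, illustrative names):

```python
import secrets

SBOX = list(range(256))   # placeholder; a real cipher's table goes here

def shuffled_sbox_layer(state: list[int]) -> list[int]:
    out = [0] * len(state)
    order = list(range(len(state)))
    # Fisher-Yates shuffle driven by fresh randomness per execution:
    for i in range(len(order) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        order[i], order[j] = order[j], order[i]
    for idx in order:                 # bytes visited in randomized order
        out[idx] = SBOX[state[idx]]
    return out

print(shuffled_sbox_layer(list(range(16))))
```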