International Association for Cryptologic Research

IACR News Central


21:17 [Pub][ePrint] Adaptive Security of Constrained PRFs, by Georg Fuchsbauer and Momchil Konstantinov and Krzysztof Pietrzak and Vanishree Rao

  Constrained pseudorandom functions were recently introduced independently by Boneh and Waters [Asiacrypt'13], Kiayias et al. [CCS'13], and Boyle et al. [PKC'14]. In a standard pseudorandom function (PRF), a key $k$ is used to evaluate the PRF on all inputs in the domain. Constrained PRFs additionally offer the functionality to delegate "constrained" keys $k_S$, which allow one to evaluate the PRF only on a subset $S$ of the domain.

The three above-mentioned papers all show that the classical GGM construction [J.ACM'86] of a PRF from a pseudorandom generator (PRG) directly gives a constrained PRF in which one can compute constrained keys for evaluating the PRF on all inputs with a given prefix. This constrained PRF has already found many interesting applications.
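To make the prefix-constrained GGM construction concrete, here is a minimal Python sketch; the SHA-256-based PRG and all function names are illustrative assumptions rather than anything taken from the cited papers.

import hashlib

def prg(seed: bytes):
    """Length-doubling PRG stand-in: maps a seed to a (left, right) pair of blocks."""
    return hashlib.sha256(b"L" + seed).digest(), hashlib.sha256(b"R" + seed).digest()

def ggm_eval(key: bytes, x: str) -> bytes:
    """GGM PRF: descend the binary tree, taking the left/right PRG half per input bit."""
    node = key
    for bit in x:
        left, right = prg(node)
        node = left if bit == "0" else right
    return node

def constrain(key: bytes, prefix: str) -> bytes:
    """The constrained key for all inputs starting with `prefix` is simply the
    intermediate GGM node reached after processing the prefix bits."""
    return ggm_eval(key, prefix)

def eval_constrained(k_prefix: bytes, prefix: str, x: str) -> bytes:
    """Evaluate the PRF on x using only a constrained key; x must extend the prefix."""
    assert x.startswith(prefix)
    return ggm_eval(k_prefix, x[len(prefix):])

For any input x extending the prefix p, eval_constrained(constrain(k, p), p, x) equals ggm_eval(k, x), while the values on inputs outside the prefix set remain pseudorandom to the holder of the constrained key.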

Unfortunately, the existing security proofs only show selective security (by a reduction to the security of the underlying PRG). To get full security, one has to use complexity leveraging, which loses an exponential factor $2^N$ in security, where $N$ is the input length.
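For orientation, complexity leveraging simply guesses the adversary's adaptive challenge input in advance, which is correct with probability $2^{-N}$; in its standard form (stated here as a generic bound, not a quote from the paper) the resulting reduction gives

$$\mathrm{Adv}^{\mathrm{adaptive}}(\mathcal{A}) \;\le\; 2^{N} \cdot \mathrm{Adv}^{\mathrm{selective}}(\mathcal{B}),$$

whereas the reduction described next replaces this exponential factor with a quasipolynomial one.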

The first contribution of this paper is a new reduction that only loses a quasipolynomial factor $q^{\log N}$, where $q$ is the number of adversarial queries. For this we develop a novel proof technique which constructs a distinguisher by interleaving simple guessing steps and hybrid arguments a small number of times. This approach might also be of interest in other contexts where complexity leveraging is currently the only technique for achieving full security.

Our second contribution concerns another constrained PRF, due to Boneh and Waters, which allows constrained keys for the more general class of bit-fixing functions. Their security proof also suffers from a $2^N$ loss. We construct a meta-reduction which shows that any "simple" reduction proving full security of this construction from a non-interactive hardness assumption must incur an exponential security loss.

21:17 [Pub][ePrint] Using Random Error Correcting Codes in Near-Collision Attacks on Generic Hash-Functions, by Inna Polak, Adi Shamir

  In this paper we consider the problem of finding a near-collision with Hamming distance bounded by $r$ in a generic cryptographic hash function $h$ whose outputs can be modeled as random $n$-bit strings. In 2011, Lamberger suggested a modified version of Pollard's rho method which computes a chain of values by alternately applying the hash function $h$ and an error-correcting code $e$ to a random starting value $x_{0}$ until it cycles. This turns some (but not all) of the near-collisions in $h$ into full collisions in $f=e\circ h$, which are easy to find. In 2012, Leurent improved Lamberger's memoryless algorithm by using any available amount of memory to store the endpoints of multiple chains of $f$ values, and using Van Oorschot and Wiener's algorithm to find many full collisions in $f$, hoping that one of them will be an $r$-near-collision in $h$. This is currently the best known time/memory tradeoff algorithm for the problem.
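As a rough sketch of the chain-walking idea (with a truncated SHA-256 standing in for the generic hash $h$, and an abstract decode routine playing the role of the error-correcting code $e$; all names and parameters here are illustrative assumptions):

import hashlib

def h(x: bytes, n_bytes: int = 20) -> bytes:
    """Stand-in for a generic n-bit hash (here n = 160, i.e. 20 bytes)."""
    return hashlib.sha256(x).digest()[:n_bytes]

def walk_chain(x0: bytes, decode, max_steps: int = 1 << 24):
    """Iterate f = decode(h(.)) from x0 and detect a cycle with Floyd's algorithm.
    A collision in f corresponds to two h-outputs that decode to the same codeword,
    i.e. to an r-near-collision in h whenever both lie within the decoding radius."""
    f = lambda x: decode(h(x))
    tortoise, hare = f(x0), f(f(x0))
    for _ in range(max_steps):
        if tortoise == hare:
            return tortoise              # a cycle, and hence an f-collision, has been reached
        tortoise = f(tortoise)
        hare = f(f(hare))
    return None

Recovering the actual colliding pair from the cycle, or running many parallel chains with stored endpoints as in Leurent's van Oorschot-Wiener variant, requires the usual second phase of the rho machinery and is omitted here.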

The efficiency of both Lamberger's and Leurent's algorithms depends on the quality of their error-correcting code. Since they have to apply error correction to \emph{any} bit string, they want to use perfect codes, but all the known constructions of such codes can correct only $1$ or $3$ errors. To deal with a larger number of errors, they recommend using a concatenation of many Hamming codes, each capable of correcting a single error in a particular subset of the bits, along with some projections. As we show in this paper, this is a suboptimal choice, which can be considerably improved by using randomly chosen linear codes instead of Hamming codes and storing a precomputed lookup table to make the error-correction process efficient. We show both theoretically and experimentally that this is a better way to utilize the available memory than devoting all of it to the storage of chain endpoints. Compared to Leurent's algorithm, we demonstrate an improvement ratio which grows with the size of the problem. In particular, we experimentally verified an improvement ratio of about $3$ in a small example with $n=160$ and $r=33$, which we implemented on a single PC, and mathematically predicted an improvement ratio of about $730$ in a large example with $n=1024$ and $r=100$, using $2^{40}$ memory.
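A minimal sketch of the kind of random linear code with a precomputed syndrome-decoding table that the abstract refers to (the systematic code shape, the numpy bit-vector representation and all names are illustrative assumptions, not the paper's concrete parameters):

import itertools
import numpy as np

def random_linear_code(n, k, seed=0):
    """Systematic random binary [n, k] code: generator G = [I_k | A], parity check H = [A^T | I_{n-k}]."""
    rng = np.random.default_rng(seed)
    A = rng.integers(0, 2, size=(k, n - k))
    G = np.concatenate([np.eye(k, dtype=int), A], axis=1)
    H = np.concatenate([A.T, np.eye(n - k, dtype=int)], axis=1)
    return G, H

def syndrome_table(H, max_weight):
    """Map every reachable syndrome to a lowest-weight error pattern (coset leader),
    enumerating error patterns in order of increasing Hamming weight."""
    n = H.shape[1]
    table = {}
    for w in range(max_weight + 1):
        for positions in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(positions)] = 1
            table.setdefault(tuple((H @ e) % 2), e)
    return table

def decode(y, H, table):
    """Round an arbitrary word y (a 0/1 vector) to its nearest codeword, or return
    None if y is further than max_weight from every codeword."""
    e = table.get(tuple((H @ y) % 2))
    return None if e is None else (y ^ e)

Decoding then costs one matrix-vector product and one table lookup, which is how the precomputed table keeps the error-correction step inside each chain iteration cheap; a routine like this decode is what would be plugged into the chain-walking sketch above (after converting between byte strings and bit vectors).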

21:17 [Pub][ePrint] A Simple Recursive Tree Oblivious RAM, by Benny Pinkas and Tzachy Reinman

  Oblivious RAM (ORAM) has received increasing attention in the past few years. The goal of oblivious RAM is to enable a client that can locally store only a small (preferably constant) amount of data to store N data items remotely and to access them while hiding the identities of the items being accessed. Most of the earlier ORAM constructions were based on the hierarchical data structure of Goldreich and Ostrovsky. Shi et al. introduced a binary-tree ORAM, which is simpler and more efficient than the classical hierarchical ORAM, and Gentry et al. have since improved it. In this work, we improve these two constructions. Our scheme asymptotically outperforms all previous tree-based ORAM schemes that have constant client memory, with an overhead of O(log^{2+eps}(N) * log^2(log(N))) per operation for a server with O(N) storage. Although the best known asymptotic result for ORAM is due to the hierarchical structure of Kushilevitz et al. (O(log^2(N)/log(log(N)))), tree-based ORAM constructions are much simpler.
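For readers new to the tree-ORAM paradigm, here is a toy Path-ORAM-style sketch in Python (a simplification of the Shi et al. / Stefanov et al. line of binary-tree ORAMs, not the recursive scheme of this paper; in particular the position map is kept in client memory rather than stored recursively):

import random

class ToyTreeORAM:
    """Toy binary-tree ORAM: every block is mapped to a random leaf, and each access
    reads one root-to-leaf path, remaps the block, and writes the path back."""

    def __init__(self, num_blocks: int, bucket_size: int = 4):
        self.L = max(1, (num_blocks - 1).bit_length())            # tree height, 2^L leaves
        self.Z = bucket_size
        self.tree = {(lvl, idx): []                               # bucket -> list of (id, leaf, data)
                     for lvl in range(self.L + 1) for idx in range(1 << lvl)}
        self.position = {a: random.randrange(1 << self.L) for a in range(num_blocks)}
        self.stash = {}                                           # id -> (leaf, data)

    def _path(self, leaf):
        """Nodes from the root (level 0) down to the given leaf (level L)."""
        return [(lvl, leaf >> (self.L - lvl)) for lvl in range(self.L + 1)]

    def access(self, op, block_id, new_data=None):
        leaf = self.position[block_id]
        self.position[block_id] = random.randrange(1 << self.L)   # remap to a fresh random leaf
        for node in self._path(leaf):                             # 1. read the whole path into the stash
            for (a, l, d) in self.tree[node]:
                self.stash[a] = (l, d)
            self.tree[node] = []
        old = self.stash.get(block_id, (None, None))[1]           # 2. perform the logical operation
        if op == "write":
            self.stash[block_id] = (self.position[block_id], new_data)
        elif block_id in self.stash:
            self.stash[block_id] = (self.position[block_id], old)
        for lvl, idx in reversed(self._path(leaf)):               # 3. write the path back, deepest first
            fits = [a for a, (l, _) in self.stash.items()
                    if (l >> (self.L - lvl)) == idx][:self.Z]
            self.tree[(lvl, idx)] = [(a, *self.stash.pop(a)) for a in fits]
        return old

After oram = ToyTreeORAM(64) and oram.access("write", 3, b"secret"), a later oram.access("read", 3) returns b"secret", and every access touches exactly one root-to-leaf path regardless of which block is requested; the recursive constructions discussed in the abstract replace the client-side position map with smaller ORAMs in order to reach constant client memory.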

21:17 [Pub][ePrint] FFS Factory: Adapting Coppersmith's "Factorization Factory" to the Function Field Sieve, by Jérémie Detrey

  In 1993, Coppersmith introduced the "factorization factory" approach as a means to speed up the Number Field Sieve algorithm (NFS) when factoring batches of integers of similar size: at the expense of a large precomputation, whose cost is amortized over sufficiently many integers to factor, the complexity of each individual factorization can then be lowered.

We suggest extending this idea to the computation of discrete logarithms in finite fields of small characteristic using the Function Field Sieve (FFS), and thus refer to this approach as the "FFS factory". In this paper, the benefits of the proposed technique are established by both a theoretical complexity analysis and a practical experiment in which we solved the discrete logarithm problem in fifty different binary fields of sizes ranging from 601 to 699 bits.

21:17 [Pub][ePrint] Bounded Fully Homomorphic Signature Schemes, by Xiang Xie and Rui Xue

  Homomorphic signatures enable anyone to publicly perform computations on signed data and produce a compact tag to authenticate the results. In this paper, we construct two bounded fully homomorphic signature schemes, as follows.

- For any two polynomials $d=d(\lambda)$ and $s=s(\lambda)$, where $\lambda$ is the security parameter, our first scheme is able to evaluate any circuit on the signatures, as long as the depth and size of the circuit are bounded by $d$ and $s$, respectively. The construction relies on indistinguishability obfuscation and injective (or polynomially bounded pre-image size) one-way functions.

- The second scheme, which removes the restriction on the size of the circuits, is an extension of the first one with succinct verification and evaluation keys. More specifically, for an a priori fixed polynomial $d=d(\lambda)$, the scheme allows the evaluation of any circuit on the signatures, as long as the depth of the circuit is bounded by $d$. This scheme is based on differing-inputs obfuscation and collision-resistant hash functions, and relies on a technique called recording hash of circuits.

Both schemes enjoy the composition property: outputs of previously derived signatures can be re-used as inputs for new computations. The length of derived signatures in both schemes is independent of the size of the data set. Moreover, both constructions satisfy a strong privacy notion we call {\em semi-strong context hiding}, which requires that the derived signatures obtained by evaluating any circuit on the signatures of two data sets are {\em identical} as long as the evaluations of the circuit on these two data sets are the same.

15:17 [Pub][ePrint] Bootstrapping BGV Ciphertexts With A Wider Choice of p and q., by Emmanuela Orsini and Joop van de Pol and Nigel P. Smart

  We describe a method to bootstrap a packed BGV ciphertext which does not depend (as much) on any special properties of the plaintext and ciphertext moduli. Prior "efficient" methods, such as that of Gentry et al. (PKC 2012), required a ciphertext modulus $q$ which is close to a power of the plaintext modulus $p$; our method can therefore be applied in a larger number of situations. Also, unlike previous methods, the depth of our procedure grows only as $\log \log q$, as opposed to $\log q$. The basic bootstrapping technique makes use of a representation of the group $\mathbb{Z}_q^+$ over the finite field $\mathbb{F}_p$ (either based on polynomials or on elliptic curves). This technique is then extended to the full BGV packed ciphertext space, using a method whose depth depends only logarithmically on the number of packed elements. This method may be of interest as an alternative to the method of Alperin-Sheriff and Peikert (CRYPTO 2013). To aid efficiency we utilize the ring/field switching technique of Gentry et al. (SCN 2012, JCS 2013).

15:17 [Pub][ePrint] Moments-Correlating DPA, by Amir Moradi and François-Xavier Standaert

  We generalize correlation-enhanced power analysis collision attacks into moments-correlating DPA. The resulting distinguisher is applicable to both the profiled and the non-profiled (collision) settings and is able to exploit information lying in any statistical moment. It also benefits from a simple rule of thumb to estimate its data complexity. Experimental results show that such a tool allows answering with confidence some important questions regarding the design of side-channel countermeasures (e.g., what is the most informative statistical moment in the leakages of a threshold implementation?). We further argue that moments-correlating DPA is a natural candidate for leakage detection tests, enjoying the simplicity of correlation power analysis together with advanced features for the evaluation of higher-order attacks with an easy-to-compute confidence level.
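For illustration, here is a simplified numpy sketch of a moments-correlating profiled distinguisher (the array shapes, the placeholder S-box and the preprocessing choices are assumptions made for the sake of the example, not code from the paper):

import numpy as np

# Hypothetical setting: `traces` is a (num_traces x num_samples) array, `values` holds
# the known target intermediate (e.g. an S-box output byte) of each profiling trace,
# and `plaintexts` the plaintext byte of each attack trace.
SBOX = np.arange(256, dtype=np.uint8)      # placeholder permutation; substitute the real S-box

def profile_moments(traces, values, order=1):
    """Profiling: estimate the order-th (central) moment of the leakage at each time
    sample, separately for each of the 256 classes of the target intermediate."""
    moments = np.zeros((256, traces.shape[1]))
    for v in range(256):
        cls = traces[values == v]
        mean = cls.mean(axis=0)
        moments[v] = mean if order == 1 else ((cls - mean) ** order).mean(axis=0)
    return moments

def mcdpa(attack_traces, plaintexts, moments, order=1):
    """Attack: correlate the profiled moments, indexed by each key hypothesis, with
    the attack traces preprocessed to the same statistical order."""
    if order > 1:
        attack_traces = (attack_traces - attack_traces.mean(axis=0)) ** order
    scores = np.zeros(256)
    for k in range(256):
        model = moments[SBOX[plaintexts ^ k]]          # predicted moment per trace and sample
        mc = model - model.mean(axis=0)
        tc = attack_traces - attack_traces.mean(axis=0)
        rho = (mc * tc).sum(axis=0) / np.sqrt((mc ** 2).sum(axis=0) * (tc ** 2).sum(axis=0) + 1e-12)
        scores[k] = np.abs(rho).max()                  # best time sample for this hypothesis
    return int(scores.argmax())

With order=1 this collapses to a profiled correlation attack; higher orders target the moments in which masked or threshold implementations leak.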

15:17 [Pub][ePrint] Soft Analytical Side-Channel Attacks, by Nicolas Veyrat-Charvillon and Benoît Gérard and François-Xavier Standaert

  In this paper, we introduce a new approach to side-channel key recovery that combines the low time/memory complexity and noise tolerance of standard (divide-and-conquer) differential power analysis with the optimal data complexity of algebraic side-channel attacks. Our fundamental contribution for this purpose is to change the way the problem is expressed, from the system of equations used in algebraic attacks to a code, essentially inspired by low-density parity-check codes. We then show that such codes can be efficiently decoded, taking advantage of the sparsity of the information corresponding to intermediate variables in actual leakage traces. The resulting soft analytical side-channel attacks work under the same profiling assumptions as template attacks, and directly exploit the vectors of probabilities produced by those attacks. As a result, we bridge the gap between popular side-channel distinguishers based on simple statistical tests and previous approaches to analytical side-channel attacks, which so far could only exploit hard information.
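As a small illustration of the soft-information flavour of the approach, the snippet below shows the generic message that an XOR constraint would pass between two byte variables in a factor graph (a building block in the spirit of the paper, not its actual code):

import numpy as np

def xor_propagate(p_x, p_y):
    """Given probability vectors (soft information) over the 256 values of two byte
    variables x and y, return the induced distribution of z = x XOR y."""
    p_z = np.zeros(256)
    ys = np.arange(256)
    for x in range(256):
        p_z[x ^ ys] += p_x[x] * p_y        # z = x ^ y is a bijection in y, so no index clashes
    return p_z

Chaining such updates through the variables of a cipher's key schedule and S-box layer, and iterating the message passing until convergence, is how a decoder of this kind can combine the probability vectors produced by template profiling.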

15:17 [Pub][ePrint] Combining Leakage-Resilient PRFs and Shuffling (Towards Bounded Security for Small Embedded Devices), by Vincent Grosso and Romain Poussier and François-Xavier Standaert and Lubos Gaspar

  Combining countermeasures is usually assumed to be the best way to protect embedded devices against side-channel attacks. Such combinations are at least expected to increase the number of measurements needed for a successful attack to some reasonable extent, and at best to guarantee a bounded time complexity independent of the number of measurements. This latter guarantee, only possible in the context of leakage-resilient constructions, has so far been reached only for stateful (pseudo-random generator) constructions or for large parallel implementations. In this paper, we describe a first proposal of a stateless (pseudo-random function) construction for which we have strong hints that security-bounded implementations are reachable under the constraints of small embedded devices. Our proposal essentially combines the well-known shuffling countermeasure with a tweaked pseudo-random function introduced at CHES 2012. We first detail its performance. Then we analyze it against standard differential power analysis and discuss the different parameters influencing its security bounds. Finally, we put forward that its implementation in 8-bit microcontrollers can provide a better security vs. performance tradeoff than state-of-the-art (combinations of) countermeasures.
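A minimal sketch of the shuffling half of the proposal (generic byte-wise shuffling of an S-box layer over a hypothetical 16-byte state; this illustrates the countermeasure in isolation, not the CHES 2012 PRF tweak itself):

import random

def shuffled_sbox_layer(state, sbox):
    """Apply the S-box to the state bytes in a fresh random order, so an attacker can
    no longer tell which byte is being processed at which time sample."""
    order = list(range(len(state)))
    random.shuffle(order)                  # fresh permutation drawn per execution
    for i in order:
        state[i] = sbox[state[i]]
    return state

Shuffling alone only spreads the leakage over the permuted positions; the point of the proposal is to combine it with a leakage-resilient PRF so that the attack's time complexity stays bounded independently of the number of measurements.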

15:17 [Pub][ePrint] Efficient Selection of Time Samples for Higher-Order DPA with Projection Pursuits, by François Durvaux and François-Xavier Standaert and Nicolas Veyrat-Charvillon and Jean-Baptiste Mairy and Yves De

  The selection of points of interest in leakage traces is a frequently neglected problem in the side-channel literature. However, it can become the bottleneck for practical adversaries/evaluators as the size of the measurement traces increases, especially in the challenging context of masked implementations, where only a combination of multiple shares reveals information in higher-order statistical moments. In this paper, we describe new (black box) tools for efficiently dealing with this problem. The proposed techniques exploit projection pursuits and optimized local search algorithms, and work with minimal memory requirements and practical time complexity. We validate them with two case studies of an unprotected and a first-order masked implementation in an 8-bit device, the latter being hard to analyze with previously known methods.

15:17 [Pub][ePrint] On the Cost of Lazy Engineering for Masked Software Implementations, by Josep Balasch and Benedikt Gierlichs and Vincent Grosso and Oscar Reparaz and François-Xavier Standaert

  Masking is one of the most popular countermeasures to mitigate side-channel analysis. Yet, its deployment in actual cryptographic devices is well known to be challenging, since designers have to ensure that the leakage corresponding to different shares is independent. Several works have shown that such an independent-leakage assumption may be contradicted in practice, because of physical effects such as "glitches" or "transition-based" leakages. As a result, implementing masking securely can be a time-consuming engineering problem. This is in strong contrast with recent and promising approaches for the automatic, compiler-assisted insertion of countermeasures, which aim to limit the development time of side-channel resistant software. Motivated by this contrast, we question what can be hoped for from these approaches, or more generally from masked software implementations based on careless assembly generation. For this purpose, our first contribution is a simple reduction from security proofs obtained in a (usual but not always realistic) model where leakages depend on the intermediate variables manipulated by the target device, to security proofs in a (more realistic) model where the transitions between these intermediate variables are leaked. We show that the cost of moving from one context to the other implies a division of the security order by two for masking schemes. Next, our second and main contribution is to provide an exhaustive empirical validation of this reduction, based on two microcontrollers and several (handwritten and compiler-based) ways of generating assembly code, with and without "recycling" the randomness used for sharing. These experiments confirm the relevance of our analysis, and therefore quantify the cost of lazy engineering for masking.
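The halving of the security order is easy to see on a toy first-order Boolean masking example (a sketch with hypothetical values; HW denotes Hamming weight):

import random

def hw(v: int) -> int:
    """Hamming weight of a byte."""
    return bin(v).count("1")

x = 0xA7                           # secret byte
m = random.randrange(256)          # fresh random mask
s1, s2 = m, x ^ m                  # first-order Boolean shares of x

# The value leakage of each share in isolation is independent of x ...
print(hw(s1), hw(s2))
# ... but a register transition from s1 to s2 leaks the Hamming distance,
# which equals HW(s1 ^ s2) = HW(x): the unmasked secret leaks at first order.
print(hw(s1 ^ s2), hw(x))

This is exactly the kind of transition-based leakage that careless register allocation or compilation can introduce, and it is why the reduction in the paper divides the provable security order by two.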