International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources for tracking changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2013-10-28
21:17 [Pub][ePrint] Higher Order Masking of Look-up Tables, by Jean-Sebastien Coron

  We describe a new algorithm for masking look-up tables of block ciphers at any order, as a countermeasure against side-channel attacks. Our technique is a generalization of the classical randomized table countermeasure against first-order attacks. We prove the security of our new algorithm against t-th order attacks in the usual Ishai-Sahai-Wagner model from Crypto 2003; we also improve the bound on the number of shares from n >= 4t+1 to n >= 2t+1 for an adversary who can adaptively move its probes between successive executions.

Our algorithm has the same time complexity O(n^2) as the Rivain-Prouff algorithm for AES and its extension by Carlet et al. to any look-up table. In practice, for AES our algorithm is less efficient than Rivain-Prouff, which can take advantage of the special algebraic structure of the AES Sbox; however, for DES our algorithm performs slightly better.
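
For intuition, the classical first-order randomized table countermeasure that the paper generalizes can be sketched in a few lines of Python: the table is recomputed so that it maps masked inputs to masked outputs, and only masked values are ever handled. This is a minimal sketch with a made-up 4-bit table, not the paper's higher-order algorithm.

    import secrets

    def mask_table(T, r_in, r_out):
        # Recompute T so that Tm[x ^ r_in] = T[x] ^ r_out for every x:
        # the new table maps masked inputs to masked outputs.
        Tm = [0] * len(T)
        for x in range(len(T)):
            Tm[x ^ r_in] = T[x] ^ r_out
        return Tm

    # Made-up 4-bit look-up table (illustrative only, not a real Sbox).
    T = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
         0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

    r_in, r_out = secrets.randbelow(16), secrets.randbelow(16)
    Tm = mask_table(T, r_in, r_out)

    x = 0x9                    # a secret intermediate value
    x_masked = x ^ r_in        # in a real implementation this arrives already masked
    y_masked = Tm[x_masked]    # look-up touches only masked values
    assert y_masked ^ r_out == T[x]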



21:17 [Pub][ePrint] More on the Impossibility of Virtual-Black-Box Obfuscation with Auxiliary Input, by Nir Bitansky and Ran Canetti and Omer Paneth and Alon Rosen

  We show that if there exist indistinguishability obfuscators for a certain class C of circuits, then there do not exist independent-auxiliary-input virtual-black-box (VBB) obfuscators for any family of circuits that compute a pseudo-entropic function. A function f_k is pseudo-entropic if, given oracle access to f_k but without explicitly querying it on a value x, it is hard to distinguish f_k(x) from a random variable with some real entropy.

This strengthens the bound of Goldwasser and Kalai [FOCS '05, ePrint '13] that rules out dependent-auxiliary-input VBB obfuscation for the same set of circuit families, assuming indistinguishability obfuscators for another class, C', of circuits. That is, while they only rule out the case where the adversary and the simulator obtain auxiliary information that depends on the actual (secret) obfuscated function, we rule out even the case where the auxiliary input depends only on the (public) family of programs.



21:17 [Pub][ePrint] Efficient Non-Malleable Codes and Key-Derivation for Poly-Size Tampering Circuits, by Sebastian Faust and Pratyay Mukherjee and Daniele Venturi and Daniel Wichs

  Non-malleable codes, defined by Dziembowski, Pietrzak and Wichs (ICS '10), provide roughly the following guarantee: if a codeword $c$ encoding some message $x$ is tampered to $c' = f(c)$ such that $c' \neq c$, then the tampered message $x'$ contained in $c'$ reveals no information about $x$. Non-malleable codes have applications to immunizing cryptosystems against tampering attacks and related-key attacks.
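
To make the syntax of this guarantee concrete, the tampering experiment can be sketched as follows. The toy encode/decode pair below (message XORed with fresh randomness) is deliberately simple and is not itself non-malleable; it only fixes the interface of the experiment and is not the paper's construction.

    import secrets

    def encode(x: bytes) -> bytes:
        # Toy code: codeword is (r, x XOR r). NOT non-malleable;
        # it only illustrates the encode/decode interface.
        r = secrets.token_bytes(len(x))
        return r + bytes(a ^ b for a, b in zip(x, r))

    def decode(c: bytes) -> bytes:
        r, masked = c[:len(c) // 2], c[len(c) // 2:]
        return bytes(a ^ b for a, b in zip(masked, r))

    def tamper_experiment(x: bytes, f):
        # Encode x, tamper with f, decode. Non-malleability asks that,
        # whenever f(c) != c, the decoded result reveals nothing about x.
        c = encode(x)
        c2 = f(c)
        return "unchanged" if c2 == c else decode(c2)

    # A tampering function flipping one bit of the codeword; for this toy
    # code the outcome is visibly related to x, exhibiting malleability.
    flip_first_bit = lambda c: bytes([c[0] ^ 0x01]) + c[1:]
    print(tamper_experiment(b"secret message", flip_first_bit))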

One cannot have an efficient non-malleable code that protects against all efficient tampering functions $f$. However, in this work we show ``the next best thing'': for any polynomial bound $s$ given a-priori, there is an efficient non-malleable code that protects against all tampering functions $f$ computable by a circuit of size $s$. More generally, for any family of tampering functions $\mathcal{F}$ of size $|\mathcal{F}| \leq 2^{s}$, there is an efficient non-malleable code that protects against all $f \in \mathcal{F}$. The rate of our codes, defined as the ratio of message to codeword size, approaches $1$. Our results are information-theoretic and our main proof technique relies on a careful probabilistic method argument using limited independence. As a result, we get an efficiently samplable family of efficient codes, such that a random member of the family is non-malleable with overwhelming probability. Alternatively, we can view the result as providing an efficient non-malleable code in the ``common reference string'' (CRS) model.

We also introduce a new notion of non-malleable key derivation, which uses randomness $x$ to derive a secret key $y = h(x)$ in such a way that, even if $x$ is tampered to a different value $x' = f(x)$, the derived key $y' = h(x')$ does not reveal any information about $y$. Our results for non-malleable key derivation are analogous to those for non-malleable codes.

As a useful tool in our analysis, we rely on the notion of ``leakage-resilient storage'' of Davi, Dziembowski and Venturi (SCN '10) and, as a result of independent interest, we also significantly improve on the parameters of such schemes.





2013-10-24
18:17 [Pub][ePrint] Fully Bideniable Public-Key Encryption, by Marcel Sebek

  Bideniable encryption allows both sender and receiver in a public-key setting to simultaneously claim that a different message of their choice was transmitted, and to support this claim with plausible-looking encryption and key-generation randomness, respectively. A weaker version with two variants of the algorithms is called flexible or multi-distributional deniability; a stronger single-algorithm version is called full deniability. Bendlin et al. (ASIACRYPT 2011) showed that certain kinds of fully-deniable schemes are constructible from corresponding flexible schemes. In this paper, we show that their construction in the bideniable case has a flaw, and we provide a fixed scheme. Our construction relies on a deterministic subset matching algorithm that assigns to each nonempty subset of a finite set a proper subset containing exactly one element fewer. Although this algorithm is crucial in our construction, it may also be of independent interest.
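
The abstract does not spell out the matching algorithm, but its input/output contract is easy to state. A minimal sketch assuming nothing beyond that contract (the naive rule below just drops the minimum element and, unlike a proper matching, is not injective):

    def reduce_subset(s: frozenset) -> frozenset:
        # Deterministically map a nonempty subset to a proper subset with
        # exactly one element fewer. Dropping the minimum is the simplest
        # such rule; the paper's matching algorithm is more structured
        # (this naive rule is not injective: e.g. {2} and {5} both map
        # to the empty set).
        assert s, "only nonempty subsets can be reduced"
        return s - {min(s)}

    print(reduce_subset(frozenset({2, 5, 7})))   # frozenset({5, 7})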



18:17 [Pub][ePrint] Solving shortest and closest vector problems: The decomposition approach, by Anja Becker, Nicolas Gama and Antoine Joux

  In this paper, we present a heuristic algorithm for solving exact, as well as approximate, SVP and CVP for lattices. This algorithm is based on a new approach which is very different from and complementary to the sieving technique. This new approach frees us from the kissing number bound and allows us to solve SVP and CVP in lattices of dimension $n$ in time $2^{0.377n}$ using memory $2^{0.292n}$. The key idea is to no longer work with a single lattice but to move the problems around in a tower of related lattices. We initiate the algorithm by sampling very short vectors in a dense overlattice of the original lattice that admits a quasi-orthonormal basis and hence an efficient enumeration of vectors of bounded norm. Taking sums of vectors in the sample, we construct short vectors in the next lattice of our tower. Repeating this, we climb all the way to the top of the tower and finally obtain solution vector(s) in the initial lattice as a sum of vectors of the overlattice just below it. The complexity analysis relies on the Gaussian heuristic. This heuristic is backed by experiments in low and high dimensions that closely reflect these estimates when solving hard lattice problems in the average case.
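
The core step, summing short vectors sampled in a dense overlattice to obtain short vectors of a sparser lattice, can be illustrated on a toy pair of lattices. Below, L = {x in Z^n : sum(x) even} is an index-2 sublattice of the overlattice Z^n; the dimension and parameters are of course nothing like the paper's.

    import itertools, random

    n, samples = 8, 200
    random.seed(1)
    pool = [tuple(random.choice((-1, 0, 1)) for _ in range(n))
            for _ in range(samples)]

    def coset(v):
        # Which of the two cosets of L in Z^n the vector lies in.
        return sum(v) % 2

    in_sublattice = []
    for u, v in itertools.combinations(pool, 2):
        if coset(u) == coset(v):                    # the sum lands in L
            w = tuple(a + b for a, b in zip(u, v))  # still a short vector
            if any(w):                              # skip the zero vector
                in_sublattice.append(w)

    print(len(in_sublattice), "short vectors of L found")
    print("example:", in_sublattice[0], "coordinate-sum parity:",
          sum(in_sublattice[0]) % 2)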



18:17 [Pub][ePrint] New abstractions in applied pi-calculus and automated verification of protected executions, by Shiwei Xu and Sergiu Bursuc and Julian P. Murphy

  Protocols for the protected execution of programs, like those based on a hardware root of trust, will become of fundamental importance for computer security. Alongside such protocols, there is therefore a need to develop models and tools that allow formal specification and automated verification of the desired security properties. Still, current protocols lack realistic models and automated proofs of security. This is due to several challenges that we address in this paper.

We consider the classical setting of applied pi-calculus and ProVerif, which we enrich with several generic models that allow verification of protocols designed for a given computing platform. Our contributions include models for specifying platform states and for dynamically loading and executing protected programs. We also propose a new method to make ProVerif terminate on a challenging search space: the one obtained by allowing an unbounded number of extensions and resets of the platform configuration registers (PCRs) of the TPM.
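
To see why this search space is unbounded, recall the PCR interface: a register can only be reset to a fixed value or extended with a new measurement, so its value folds up an arbitrarily long history. A rough Python sketch of TPM 1.2-style SHA-1 extension (real TPMs have considerably more machinery):

    import hashlib

    PCR_SIZE = 20                      # SHA-1 digest length in bytes

    def pcr_reset() -> bytes:
        return bytes(PCR_SIZE)         # all-zero initial value

    def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
        # new_pcr = SHA1(old_pcr || SHA1(measurement)): the register can
        # only accumulate, never be set to an arbitrary value directly.
        return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

    pcr = pcr_reset()
    for event in (b"bootloader", b"kernel", b"protected-program"):
        pcr = pcr_extend(pcr, event)
    print(pcr.hex())   # the final value attests to the whole history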

We illustrate our methods with the case study of a protocol for a dynamic root of trust (based on a TPM), which includes dynamic loading, measurement and protected execution of programs. We prove automatically with ProVerif that code integrity and secrecy of sealed data hold for the considered protocol.



18:17 [Pub][ePrint] How to Compress (Reusable) Garbled Circuits, by Craig Gentry and Sergey Gorbunov and Shai Halevi and Vinod Vaikuntanathan and Dhinakaran Vinayagamurthy

  A fundamental question about (reusable) circuit garbling schemes is: how small can the garbled circuit be? Our main result is a reusable garbling scheme which produces garbled circuits that are the same size as the original circuit, plus poly(secp) additional bits, where secp is the security parameter. Save for the additive poly(secp) term, this is the best one could hope for. In contrast, all previous constructions of even single-use garbled circuits incurred a multiplicative poly(secp) blowup.

Our techniques result in constructions of attribute-based and (single key secure) functional encryption schemes where the secret key of a circuit C consists of C itself, plus poly(secp) additional bits. All of these constructions are based on the subexponential hardness of the learning with errors problem.

We also study the dual question of how short the garbled inputs can be, relative to the original input. We demonstrate a (different) reusable circuit garbling scheme, based on multilinear maps, where the size of the garbled input is the same as that of the original input, plus poly(secp) additional bits. This improves on the result of Applebaum, Ishai, Kushilevitz and Waters (CRYPTO 2013), who showed such a result for single-use garbling. Similar to the above, this also results in attribute-based and (single key secure) functional encryption schemes where the size of the ciphertext encrypting an input x is the same as that of x, plus poly(secp) additional bits.



18:17 [Pub][ePrint] Unbalancing Pairing-Based Key Exchange Protocols, by Michael Scott

  In many pairing-based protocols more than one party is involved, and some or all of them may be required to calculate pairings. Commonly it is the pairing calculation itself which takes most time. However, some parties may be better equipped than others in terms of computational power. By exploiting the bilinearity property, there are established ways to off-load the pairing calculation to an untrusted third party. Here we observe that this third party may in fact be one of the other participants in the protocol. In this way a protocol may be ``unbalanced'' by shifting the computational load from one participant to another, which may be an advantage in some circumstances.

In this paper we focus on some simple key exchange protocols. Surprisingly, we find that unbalancing a key exchange protocol can endow it with the property of full forward secrecy, even if it did not originally possess it. Finally, we show that a new condition on the choice of pairing-friendly curve can help to minimize the overall computation.
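
The off-loading trick mentioned above rests only on bilinearity: e(rA, B) = e(A, B)^r, so a blinded input can be handed to another party and the result unblinded. The toy map below (exponents in Z_q, e(a, b) = g^(ab) mod p) is insecure but genuinely bilinear, which suffices to demonstrate the idea; a real protocol would use an actual pairing over elliptic-curve groups.

    import secrets

    p, q, g = 23, 11, 2      # tiny toy parameters: g has order q modulo p

    def pairing(a: int, b: int) -> int:
        # Toy bilinear map standing in for a real pairing:
        # e(a, b) = g^(a*b) mod p, so e(r*a, b) = e(a, b)^r.
        return pow(g, (a * b) % q, p)

    A, B = 7, 9                          # A belongs to the weak party
    r = 1 + secrets.randbelow(q - 1)     # random blinding factor
    blinded = (r * A) % q                # handed to the stronger party

    t = pairing(blinded, B)              # heavy step done by the other party

    r_inv = pow(r, -1, q)                # unblind: t^(1/r) = e(A, B)
    assert pow(t, r_inv, p) == pairing(A, B)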



18:17 [Pub][ePrint] Differing-Inputs Obfuscation and Applications, by Prabhanjan Ananth and Dan Boneh and Sanjam Garg and Amit Sahai and Mark Zhandry

  In this paper we study the notion of differing-inputs obfuscation, introduced by Barak et al. (CRYPTO 2001, JACM 2012). For any two circuits C0 and C1, a differing-inputs obfuscator diO guarantees that the non-existence of an adversary that can find an input on which C0 and C1 differ implies that diO(C0) and diO(C1) are computationally indistinguishable. We show many applications of this notion:

- We define the notion of a differing-inputs obfuscator for Turing machines and give a construction for the same (without converting the machine to a circuit), with input-specific running times. More specifically, for each input our obfuscated Turing machine takes time proportional to the running time of the Turing machine on that specific input, rather than the machine's worst-case running time.

- We give a functional encryption scheme that is fully secure even when the adversary can obtain an unbounded number of secret keys. Furthermore, our scheme allows secret keys to be associated with Turing machines, thereby achieving input-specific running times, and can be equipped with delegation properties. We stress that no previous scheme in the literature had any of these properties.

- We construct the first broadcast encryption system where the ciphertext and secret-key size is constant (i.e. independent of the number of users), and the public key is logarithmic in the number of users. It is the first such scheme where all three parameters are this short. Both our constructions make inherent use of the power provided by differing-input obfuscation. It is not currently known how to construct systems with these properties from the weaker notion of indistinguishability obfuscation.



18:17 [Pub][ePrint] Obfuscation ==> (IND-CPA Security =/=> Circular Security), by Antonio Marcedone and Claudio Orlandi

  Circular security is an important notion for public-key encryption schemes and is needed by several cryptographic protocols. In circular security the adversary is given an extra ``hint'' consisting of a cycle of encryptions of secret keys, i.e., $(E_{pk_1}(sk_2), \ldots, E_{pk_n}(sk_1))$. A natural question is whether every IND-CPA encryption scheme is also circular secure. It is trivial to see that this is not the case when n=1. In 2010, a separation for n=2 was shown by [ABBC10, GH10] under standard assumptions in bilinear groups.
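
For concreteness, the ``hint'' has the following shape. The sketch below uses textbook RSA with tiny primes (deterministic, hence itself not IND-CPA) purely to make the cycle's structure explicit; it has nothing to do with the paper's obfuscation-based counterexample.

    def keygen(p: int, q: int, e: int = 65537):
        n = p * q
        d = pow(e, -1, (p - 1) * (q - 1))
        return (n, e), (p, q, d)         # (public key, secret key)

    def enc(pk, m: int) -> int:
        n, e = pk
        assert 0 <= m < n
        return pow(m, e, n)

    keys = [keygen(p, q) for p, q in [(1009, 1013), (1019, 1021), (1031, 1033)]]
    n_keys = len(keys)

    # The circular hint: under pk_i, encrypt sk_{(i+1) mod n}, here
    # represented by its prime factors (small enough to fit any modulus).
    cycle = [(enc(keys[i][0], keys[(i + 1) % n_keys][1][0]),
              enc(keys[i][0], keys[(i + 1) % n_keys][1][1]))
             for i in range(n_keys)]
    print(cycle)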

In this paper we finally settle the question, showing that for every $n$ there exists an IND-CPA-secure scheme which is not $n$-circular secure. Our result relies on the recent progress in program obfuscation.



12:17 [Pub][ePrint] Bounded Tamper Resilience: How to go beyond the Algebraic Barrier, by Ivan Damgaard and Sebastian Faust and Pratyay Mukherjee and Daniele Venturi

  Related-key attacks (RKAs) are powerful cryptanalytic attacks where an adversary can change the secret key and observe the effect of such changes at the output. The state of the art in RKA security protects against an a-priori unbounded number of certain algebraically induced key relations, e.g., affine functions or polynomials of bounded degree. In this work, we show that it is possible to go beyond the algebraic barrier and achieve security against arbitrary key relations, by restricting the number of tampering queries the adversary is allowed to ask. The latter restriction is necessary in the case of arbitrary key relations, as otherwise a generic attack of Gennaro et al. (TCC 2004) shows how to recover the key of almost any cryptographic primitive. We describe our contributions in more detail below.

1) We show that standard ID and signature schemes constructed from a large class of $\Sigma$-protocols (including the Okamoto scheme, for instance) are secure even if the adversary can arbitrarily tamper with the prover's state a bounded number of times and obtain some bounded amount of leakage. Interestingly, for the Okamoto scheme we can also allow independent tampering with the public parameters.

2) We show a bounded tamper and leakage resilient CCA-secure public-key cryptosystem based on the DDH assumption. We first define a weaker CPA-like security notion that we can instantiate based on DDH, and then we give a general compiler that yields CCA security with tamper and leakage resilience. This requires a public tamper-proof common reference string.

3) Finally, we explain how to boost bounded tampering and leakage resilience (as in items 1 and 2 above) to continuous tampering and leakage resilience, in the so-called floppy model, where each user has a personal hardware token (containing leak- and tamper-free information) which can be used to refresh the secret key.

We believe that bounded tampering is a meaningful and interesting alternative to avoid known impossibility results and can provide important insights into the security of existing standard cryptographic schemes.