International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources for following changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2013-07-23
15:17 [Pub][ePrint] How to Use Indistinguishability Obfuscation: Deniable Encryption, and More, by Amit Sahai and Brent Waters

  We introduce a new technique, that we call punctured programs, to apply indistinguishability obfuscation towards cryptographic problems. We use this technique to carry out a systematic study of the applicability of indistinguishability obfuscation to a variety of cryptographic goals. Along the way, we resolve the 16-year-old open question of deniable encryption, posed by Canetti, Dwork, Naor, and Ostrovsky in 1997: in deniable encryption, a sender who is forced to reveal to an adversary both her message and the randomness she used for encrypting it should be able to convincingly provide "fake" randomness that can explain any alternative message that she would like to pretend that she sent. We resolve this question by giving the first construction of deniable encryption that does not require any pre-planning by the party that must later issue a denial.

In addition, we show the generality of our punctured programs technique by also constructing a variety of core cryptographic objects from indistinguishability obfuscation and one-way functions (or close variants). In particular we obtain: public-key encryption, short "hash-and-sign" selectively secure signatures, chosen-ciphertext secure public-key encryption, non-interactive zero-knowledge proofs (NIZKs), injective trapdoor functions, and oblivious transfer. These results suggest the possibility of indistinguishability obfuscation becoming a "central hub" for cryptography.
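The deniability property can be illustrated with a toy symmetric analogue. The sketch below uses a one-time pad, where faking is trivial; it is not the Sahai-Waters construction, whose achievement is obtaining this property in the public-key setting without any pre-planning.

```python
# Toy illustration of "fake randomness" via a one-time pad. This is NOT
# the paper's public-key scheme; it only shows what a convincing denial
# looks like: randomness that explains the ciphertext as any message.

import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def enc(msg: bytes, rand: bytes) -> bytes:
    # "Encryption" is msg XOR rand; rand is the sender's randomness.
    return xor(msg, rand)

def fake(ct: bytes, claimed: bytes) -> bytes:
    # Randomness under which ct opens to `claimed`.
    return xor(ct, claimed)

real = b"attack at dawn"
r = os.urandom(len(real))
ct = enc(real, r)

claimed = b"attack at noon"
r_fake = fake(ct, claimed)
assert enc(claimed, r_fake) == ct  # the denial is consistent with ct
```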



15:17 [Pub][ePrint] Another Nail in the Coffin of White-Box AES Implementations, by Tancrède Lepoint and Matthieu Rivain

  The goal of white-box cryptography is to design implementations of common cryptographic algorithms (e.g., AES) that remain secure against an attacker with full control of the implementation and execution environment. This concept was put forward a decade ago by Chow et al. (SAC 2002), who proposed the first white-box implementation of AES. Since then, several works have been dedicated to the design of new implementations and/or the breaking of existing ones.

In this paper, we describe a new attack against the original implementation of Chow et al. (SAC 2002), which efficiently recovers the AES secret key as well as the private external encodings in complexity $2^{22}$. Compared to the previous attack due to Billet et al. (SAC 2004) of complexity $2^{30}$, our attack is not only more efficient but also simpler to implement. Then, we show that the last candidate white-box AES implementation, due to Karroumi (ICISC 2010), can be broken by a direct application of either the attack of Billet et al. or ours. Specifically, we show that for any given secret key, the overall implementation has the exact same distribution as the implementation of Chow et al., making both vulnerable to the same attacks.

By improving the state of the art of white-box cryptanalysis and putting forward new attack techniques, we believe our work brings new insights into the failure of existing white-box implementations, which could be useful for the design of future solutions.





2013-07-22
15:17 [Forum] [2010 Reports] Re: 2010/251 PUF exaggeration by GeorgeBest

  I must admit that I do not find D.J. Bernstein's assessment appropriate, for a number of reasons. First of all, the optical example he gives does not really apply to the work of Rührmair et al., who only discuss modeling attacks on electrical PUFs with digital, one-bit outputs. Such outputs can easily be imitated if an efficient algorithm that predicts the electrical PUF is known.

The simulation algorithms which Rührmair et al. derive are, in fact, extremely simple. They often merely require the addition of small integers, followed by a comparison operation. If implemented, their form factor would be quite close to a "real PUF". In particular, they could easily be implemented in smart cards and the like, which are one of the standard application examples for PUFs. To notice the difference, honest users would have to physically open the PUF/smart card and inspect it with utmost thoroughness, which is an impractical task for the average user. This makes the simulation algorithms of Rührmair et al. an extremely effective way to cheat.

Furthermore, even if the outputs of a PUF are analog and complex (as in the case of optical PUFs), a simulation algorithm can be used to break certain protocols. For example, an exchanged key can already be derived from mere numeric simulations of the PUF; an exact analog imitation of the complex optical signal is not necessary to this end.

I also cannot share D.J. Bernstein's claim that the authors exaggerate their results. They give intensive and, in my opinion, well-balanced discussions of the reach, but at the same time the limitations, of their techniques in the introduction section (and partly also in the summary section). For example, they make clear that their attacks apply to so-called "Weak PUFs" only under very rare circumstances, and that they do not apply at all to Coating PUFs, SRAM PUFs, or Butterfly PUFs. Any reader who has had even a short glimpse at the introduction cannot miss this discussion, and could hardly make claims of "PUF exaggeration" like the ones we have seen.

Finally, it may be interesting to add that following its publication on the ePrint archive, the paper was accepted to CCS 2010 and has been cited multiple times since (please see Google Scholar). Such acceptance and citation numbers cannot always prove the quality of scientific work, but in this case they show at least that the paper has been received rather well in the PUF community, which is somewhat at odds with D.J. Bernstein's (perhaps too harsh) criticism of the paper.

GeorgeBest
From: 2013-07-22 15:15:59 (UTC)
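For readers unfamiliar with the modeling attacks under discussion: once an additive delay model of an arbiter-type PUF has been learned, predicting a response really is just summing small integers and comparing to zero. A hypothetical sketch follows; the weights are made-up stand-ins for values a real modeling attack would learn from challenge-response pairs, and the model is a simplified variant of the standard additive delay model.

```python
# Hypothetical simulation of an arbiter-type electrical PUF with a learned
# additive-delay model: the one-bit response is a signed sum of small
# integer delay terms followed by a comparison. Weights are illustrative.

def predict_response(weights, challenge):
    """challenge: 0/1 bits, one per stage; weights: learned delay terms."""
    total, sign = 0, 1
    # Each set challenge bit flips the sign of all delay terms at and
    # before its stage (simplified additive delay model).
    for c, w in zip(reversed(challenge), reversed(weights)):
        if c:
            sign = -sign
        total += sign * w
    return 1 if total > 0 else 0

weights = [3, -1, 4, 2, -5, 1, -2, 3]    # hypothetical learned weights
challenge = [1, 0, 1, 1, 0, 0, 1, 0]
print(predict_response(weights, challenge))
```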

15:17 [Pub][ePrint] Sequential message authentication code without random oracles, by Bin Wang and Xiaojing Hong

  Katz et al. provided a generic transform to construct aggregate message authentication codes and proved a lower bound on the length of an aggregate MAC tag. The lower bound shows that the required tag length is at least linear in the number of messages when fast verification, such as constant or logarithmic computation overhead, is required. Aggregate message authentication codes are useful in settings such as mobile ad-hoc networks, where devices are resource-constrained and energy cost is at a premium. In this paper, we introduce the notion of a sequential aggregate message authentication code (SAMAC). We present a security model for this notion, unforgeability against chosen-message and verification-query attacks, and construct an efficient SAMAC scheme by extending a number-theoretic MAC construction due to Dodis et al. We prove the security of our SAMAC scheme under the CDH assumption in the standard model. Our SAMAC scheme circumvents the lower bound with the help of the underlying algebraic structure. Performance analysis shows that our SAMAC scheme yields constant computation for the verifier as well as a fixed length for the aggregate tag.
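As a data-flow illustration only: sequential aggregation means each signer folds its message into a single running tag. The HMAC chaining below is a hypothetical stand-in, not the paper's number-theoretic SAMAC; note in particular that this naive chain forces the verifier to do work linear in the number of messages, which is exactly what the paper's constant-verification scheme avoids.

```python
# Hypothetical sequential-aggregation data flow using HMAC chaining.
# NOT the paper's CDH-based construction; verification here is linear
# in the number of messages, unlike the paper's constant-time verifier.

import hashlib
import hmac

def agg_tag(key: bytes, msg: bytes, prev_tag: bytes) -> bytes:
    # Each signer folds the previous aggregate into its own MAC.
    return hmac.new(key, prev_tag + msg, hashlib.sha256).digest()

def verify(keys, msgs, tag: bytes) -> bool:
    # The verifier recomputes the whole chain in order.
    t = b"\x00" * 32
    for k, m in zip(keys, msgs):
        t = agg_tag(k, m, t)
    return hmac.compare_digest(t, tag)

keys = [b"key-1", b"key-2", b"key-3"]
msgs = [b"msg-1", b"msg-2", b"msg-3"]
t = b"\x00" * 32
for k, m in zip(keys, msgs):
    t = agg_tag(k, m, t)
assert verify(keys, msgs, t)
```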



15:17 [Pub][ePrint] Implementing Lightweight Block Ciphers on x86 Architectures, by Ryad Benadjila and Jian Guo and Victor Lomné and Thomas Peyrin

  Lightweight block ciphers are designed to fit into very constrained environments, but usually not with software performance in mind. For classical lightweight applications, where many constrained devices communicate with a server, it is also crucial that the cipher has good software performance on the server side. Recent work has shown that bitslice implementations of Piccolo and PRESENT achieve very good software speeds, thus making lightweight ciphers interesting for cloud applications. However, we remark that bitslice implementations may not be suitable in situations where the amount of data to be enciphered at a time is small, and very little work has been done on non-bitslice implementations.

In this article, we explore general software implementations of lightweight ciphers on x86 architectures, with a special focus on LED, Piccolo and PRESENT. First, we analyze table-based implementations, and we provide a theoretical model to predict the behavior of the various possible trade-offs depending on the processor's cache latency profile. We obtain the fastest table-based implementations of our lightweight ciphers, which is of interest for legacy processors. Second, we apply to our portfolio of primitives the vperm implementation trick for 4-bit Sboxes, which gives good performance and extra side-channel protection and is well suited to many lightweight primitives. Finally, we investigate bitslice implementations, analyzing various costs that are usually neglected (packing and unpacking of the bitsliced form, key schedule, etc.) but must be taken into account for many lightweight applications. We conclude by discussing which type of implementation seems best suited depending on the application profile.
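To make the table-based trade-off concrete: a precomputed, larger table lets one lookup process more state at once. Below is a minimal sketch using the public PRESENT S-box; it is an illustration only, as the paper's implementations fuse more layers and are tuned to x86 cache latencies.

```python
# Table-based trade-off sketch: expand the 4-bit PRESENT S-box into a
# 256-entry byte table so one lookup substitutes two nibbles at once.

PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

# S applied to the high and low nibble of every possible byte value.
SBOX_BYTE = bytes((PRESENT_SBOX[b >> 4] << 4) | PRESENT_SBOX[b & 0xF]
                  for b in range(256))

def sbox_layer(state: bytes) -> bytes:
    # 8 byte lookups for a 64-bit state instead of 16 nibble lookups.
    return bytes(SBOX_BYTE[b] for b in state)

print(sbox_layer(bytes(range(8))).hex())
```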



15:17 [Pub][ePrint] Weakness of $\mathbb{F}_{3^{6 \cdot 509}}$ for Discrete Logarithm Cryptography, by Gora Adj and Alfred Menezes and Thomaz Oliveira and Francisco Rodríguez-Henríquez

  In 2013, Joux, and then Barbulescu, Gaudry, Joux and Thomé, presented new algorithms for computing discrete logarithms in finite fields of small and medium characteristic. We show that these new algorithms render the finite field $\mathbb{F}_{3^{3054}}$ weak for discrete logarithm cryptography in the sense that discrete logarithms in this field can be computed significantly faster than with the previous fastest algorithms. Our concrete analysis shows that the supersingular elliptic curve over $\mathbb{F}_{3^{509}}$ with embedding degree 6 that had been considered for implementing pairing-based cryptosystems at the 128-bit security level in fact provides only a significantly lower level of security.
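For orientation (the arithmetic below is mine, not the authors'): the pairing transfers the curve's discrete logarithm problem into the degree-6 extension field, so the scheme's security is capped by the DLP in a field of roughly $2^{4840}$ elements, a size that was calibrated for 128-bit security under pre-2013 function field sieve estimates.

```latex
% MOV/pairing transfer: the DLP on the curve embeds into the extension field
%   E(\mathbb{F}_{3^{509}}) \longrightarrow
%   \mathbb{F}_{3^{6\cdot 509}}^{\times} = \mathbb{F}_{3^{3054}}^{\times}.
\[
  \#\mathbb{F}_{3^{3054}} = 3^{3054} = 2^{3054\log_2 3} \approx 2^{4840}.
\]
% The new small/medium-characteristic algorithms compute discrete logs in
% this field much faster than the earlier estimates assumed, which is why
% the 128-bit security claim no longer holds.
```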



15:17 [Pub][ePrint] Dynamic Runtime Methods to Enhance Private Key Blinding, by Karine Gandolfi-Villegas and Nabil Hamzi

  In this paper we propose new methods to blind exponents used in RSA and in elliptic-curve-based algorithms. Since the introduction of differential power analysis (DPA and CPA), many countermeasures to protect exponents have been proposed, starting in 1999 with Kocher [20] and Coron [13]. However, these blinding methods have drawbacks in execution time and memory cost, and they also have weaknesses: they can be targeted by attacks such as the carry leakage on the randomized exponent proposed by Fouque et al. [23], and some are ineffective against other analyses such as simple power analysis. In this article, we explain how the most widely used method can be exploited when an attacker has access to test samples. We then propose new dynamic blinding methods that prevent any learning phase and improve resistance against the latest published side-channel analyses.
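For context, the classical countermeasure these methods build on is additive exponent blinding (Coron-style): a fresh random multiple of the group order is added to the secret exponent before each exponentiation, so the exponent bits seen by a side channel differ on every run. A textbook sketch with toy RSA parameters (not the paper's new dynamic methods):

```python
# Classical additive exponent blinding: d' = d + r*phi(N) with fresh r,
# so m^d' = m^d (mod N) while the exponent bits differ on every run.
# Toy parameters for illustration only -- far too small to be secure.

import secrets

p, q = 1009, 1013
N = p * q
phi = (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)  # private exponent (Python 3.8+)

def blinded_pow(m: int) -> int:
    r = secrets.randbelow(2**32)   # fresh blinding factor per call
    d_blind = d + r * phi          # randomized exponent
    return pow(m, d_blind, N)      # equals m^d mod N when gcd(m, N) = 1

m = 4242
assert blinded_pow(m) == pow(m, d, N)
```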



15:17 [Pub][ePrint] Flush+Reload: a High Resolution, Low Noise, L3 Cache Side-Channel Attack, by Yuval Yarom and Katrina Falkner

  Flush+Reload is a cache side-channel attack that monitors access to data in shared pages. In this paper we demonstrate how to use the attack to extract private encryption keys from GnuPG. The high resolution and low noise of the Flush+Reload attack enable a spy program to recover over 98% of the bits of the private key in a single decryption or signing round. Unlike previous attacks, this attack targets the last-level (L3) cache. Consequently, the spy program and the victim do not need to share the execution core of the CPU. The attack is not limited to a traditional OS and can be used in a virtualised environment, where it can attack programs executing in a different VM.



15:17 [Pub][ePrint] A Note On the Storage Requirement for AKS Primality Testing Algorithm, by Zhengjun Cao

  We remark that the AKS primality testing algorithm needs about 1,000,000,000 GB (gigabytes) of storage for a number of 1024 bits. Such a storage requirement is hard to meet in practice. To the best of our knowledge, it is impossible for current operating systems to read and write data across such a huge storage space. Thus, the running time of the AKS algorithm should not be estimated, as usual, simply in terms of the number of arithmetic operations.
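A rough sanity check of the order of magnitude, under the textbook AKS parameter choice $r = O(\log^5 n)$; the constant and the bookkeeping below are my assumptions, and the note's own accounting may differ:

```python
# Back-of-envelope storage estimate for AKS on a 1024-bit input.
# AKS manipulates polynomials in Z_n[X]/(X^r - 1) with r = O(log^5 n);
# constants here are assumed to be 1 for simplicity.

bits = 1024                     # size of the number being tested
r = bits ** 5                   # degree bound ~ (log2 n)^5 ~ 1.1e15
coeff_bytes = bits // 8         # one coefficient = a residue mod n
poly_bytes = r * coeff_bytes    # a single polynomial operand

print(f"r ~ {r:.3e} coefficients")
print(f"one polynomial ~ {poly_bytes / 2**30:.3e} GB")
# ~1.3e8 GB per polynomial operand; keeping several operands in memory
# puts the total in the regime of the 10^9 GB figure quoted above.
```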



15:17 [Pub][ePrint] Revisiting the BGE Attack on a White-Box AES Implementation, by Yoni De Mulder and Peter Roelse and Bart Preneel

  White-box cryptography aims to protect the secret key of a cipher in an environment in which an adversary has full access to the implementation of the cipher and its execution environment. In 2002, Chow, Eisen, Johnson and van Oorschot proposed a white-box implementation of AES. In 2004, Billet, Gilbert and Ech-Chatbi presented an efficient attack (referred to as the BGE attack) on this implementation, extracting its embedded AES key with a work factor of $2^{30}$. In 2012, Tolhuizen presented an improvement of the most time-consuming phase of the BGE attack. This paper presents several improvements to the other phases of the BGE attack. The paper shows that the overall work factor of the BGE attack is reduced to $2^{22}$ when all improvements are implemented. In 2010, Karroumi presented a white-box AES implementation that is designed to withstand the BGE attack. This paper shows that the implementations of Karroumi and Chow et al. are the same. As a result, Karroumi's white-box AES implementation is vulnerable to the attack it was designed to resist.



15:17 [Pub][ePrint] Candidate Indistinguishability Obfuscation and Functional Encryption for all circuits, by Sanjam Garg and Craig Gentry and Shai Halevi and Mariana Raykova and Amit Sahai and Brent Waters

  In this work, we study indistinguishability obfuscation and functional encryption for general circuits:

Indistinguishability obfuscation requires that given any two equivalent circuits C_0 and C_1 of similar size, the obfuscations of C_0 and C_1 should be computationally indistinguishable.

In functional encryption, ciphertexts encrypt inputs x and keys are issued for circuits C. Using the key SK_C to decrypt a ciphertext CT_x = Enc(x) yields the value C(x), but does not reveal anything else about x. Furthermore, no collusion of secret key holders should be able to learn anything more than the union of what they can each learn individually.
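To pin down this syntax before the construction steps, here is a toy API sketch of the functional-encryption correctness contract. The instantiation below leaves x unprotected and has no security whatsoever; every name in it is illustrative, not from the paper.

```python
# Functional encryption syntax: Dec(SK_C, Enc(x)) = C(x), nothing more.
# Trivial, INSECURE placeholder instantiation -- it only fixes the API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SecretKey:
    circuit: Callable  # a key is bound to one circuit C

def setup():
    return "mpk", "msk"          # placeholder master keys

def keygen(msk, C: Callable) -> SecretKey:
    return SecretKey(circuit=C)

def enc(mpk, x):
    return ("ct", x)             # a real scheme would hide x here

def dec(sk: SecretKey, ct):
    _, x = ct
    return sk.circuit(x)         # reveals C(x) only (by contract)

mpk, msk = setup()
sk_parity = keygen(msk, lambda x: x % 2)
assert dec(sk_parity, enc(mpk, 41)) == 1   # learns just the parity of 41
```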

We give constructions for indistinguishability obfuscation and functional encryption that support all polynomial-size circuits. We accomplish this goal in three steps:

- We describe a candidate construction for indistinguishability obfuscation for NC1 circuits. The security of this construction is based on a new algebraic hardness assumption. The candidate and assumption use a simplified variant of multilinear maps, which we call Multilinear Jigsaw Puzzles.

- We show how to use indistinguishability obfuscation for NC1 together with Fully Homomorphic Encryption (with decryption in NC1) to achieve indistinguishability obfuscation for all circuits.

- Finally, we show how to use indistinguishability obfuscation for circuits, public-key encryption, and non-interactive zero knowledge to achieve functional encryption for all circuits. The functional encryption scheme we construct also enjoys succinct ciphertexts, which enables several other applications.