International Association for Cryptologic Research

# IACR News Central

Get updates on changes to the IACR webpage here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for staying informed about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2013-07-27
03:17 [Pub][ePrint]

We study exponentiations in pairing groups for the most common security levels and show that, although the Weierstrass model is preferable for pairing computation, it can be worthwhile to map to alternative curve representations for the non-pairing group operations in protocols.
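The "exponentiation" in a pairing group is elliptic-curve scalar multiplication. As a minimal illustration of that operation on the Weierstrass model (a toy curve with made-up parameters, not any curve from the paper), a double-and-add sketch:

```python
# Toy double-and-add scalar multiplication on a short Weierstrass curve
# y^2 = x^3 + ax + b over F_p. Illustrative only: the paper's observation is
# that this operation can be cheaper on alternative curve models, while the
# pairing itself is computed on the Weierstrass form.
p, a, b = 97, 2, 3              # hypothetical toy parameters, not a secure curve

def add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None             # P + (-P) = infinity
    if P == Q:                  # doubling slope
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                       # chord slope
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mul(k, P):
    """Left-to-right double-and-add: one doubling per bit, one add per 1-bit."""
    R = None
    for bit in bin(k)[2:]:
        R = add(R, R)
        if bit == '1':
            R = add(R, P)
    return R
```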

2013-07-23
17:09 [Job][New]

We are looking for an excellent PhD candidate to work in the area of information and communication security with a focus on authentication problems in constrained settings. This is particularly important for applications involving mobile phones, wireless communication and RFID systems, which suffer from restrictions in terms of power resources, network connectivity, computational capabilities, as well as potential privacy issues. The overall aim of the project will be to develop nearly optimal algorithms for achieving security and privacy while minimising resource use.

More concretely, part of the research will involve the analysis and development of authentication protocols in specific settings. This will include investigating the resistance of both existing and novel protocols against different types of attacks, theoretically and experimentally. In addition to investigating established settings, such as RFID authentication, the research will also explore more general authentication problems, such as those that arise in the context of trust in social networks, smartphone applications and collaborative data processing. This will be done by grounding the work in a generalised decision-making framework. The project should result in the development of theory and authentication mechanisms for noisy, constrained settings that strike an optimal balance between reliable authentication, privacy preservation and resource consumption. Some previous research related to this project can be found here: http://lasecwww.epfl.ch/~katerina/Publications.html

Applicants for the position shall have a Master's degree, or equivalent, in Computer Science, Informatics, Telecommunications or a related discipline. A master's degree in information security or cryptography is a bonus.

15:17 [Pub][ePrint]

We introduce a new technique, that we call punctured programs, to apply indistinguishability obfuscation towards cryptographic problems. We use this technique to carry out a systematic study of the applicability of indistinguishability obfuscation to a variety of cryptographic goals. Along the way, we resolve the 16-year-old open question of deniable encryption, posed by Canetti, Dwork, Naor, and Ostrovsky in 1997: in deniable encryption, a sender who is forced to reveal to an adversary both her message and the randomness she used for encrypting it should be able to convincingly provide "fake" randomness that can explain any alternative message that she would like to pretend that she sent. We resolve this question by giving the first construction of deniable encryption that does not require any pre-planning by the party that must later issue a denial.

In addition, we show the generality of our punctured programs technique by also constructing a variety of core cryptographic objects from indistinguishability obfuscation and one-way functions (or close variants). In particular we obtain: public-key encryption, short "hash-and-sign" selectively secure signatures, chosen-ciphertext secure public-key encryption, non-interactive zero-knowledge proofs (NIZKs), injective trapdoor functions, and oblivious transfer. These results suggest the possibility of indistinguishability obfuscation becoming a "central hub" for cryptography.

15:17 [Pub][ePrint]

The goal of white-box cryptography is to design implementations of common cryptographic algorithms (e.g. AES) that remain secure against an attacker with full control of the implementation and execution environment. This concept was put forward a decade ago by Chow et al. (SAC 2002), who proposed the first white-box implementation of AES. Since then, several works have been dedicated to the design of new implementations and/or the breaking of existing ones.

In this paper, we describe a new attack against the original implementation of Chow et al. (SAC 2002), which efficiently recovers the AES secret key as well as the private external encodings in complexity $2^{22}$. Compared to the previous attack due to Billet et al. (SAC 2004), of complexity $2^{30}$, our attack is not only more efficient but also simpler to implement. Then, we show that the *last* candidate white-box AES implementation, due to Karroumi (ICISC 2010), can be broken by a direct application of either Billet et al.'s attack or ours. Specifically, we show that for any given secret key, the overall implementation has the *exact same* distribution as the implementation of Chow et al., making both vulnerable to the same attacks.

By improving the state of the art of white-box cryptanalysis and putting forward new attack techniques, we believe our work brings new insights on the failure of existing white-box implementations, which could be useful for the design of future solutions.

2013-07-22
15:17 [Forum]

I must admit that I do not find D.J. Bernstein's assessment appropriate, for a number of reasons. First of all, the optical example he gives does not really apply to the work of Rührmair et al., who only discuss modeling attacks on electrical PUFs with digital, one-bit outputs. Such outputs can easily be imitated once an efficient algorithm that predicts the electrical PUF is known. The simulation algorithms which Rührmair et al. derive are, in fact, extremely simple: they often merely require the addition of small integers, followed by a comparison operation. If implemented, their form factor would be quite close to a real PUF. In particular, they could easily be implemented in smart cards and the like, which are one of the standard application examples for PUFs. To notice the difference, honest users would have to physically open the PUF/smart card and inspect it with utmost thoroughness, which is an impractical task for the average user. This makes the simulation algorithms of Rührmair et al. an extremely effective way to cheat.

Furthermore, even if the outputs of a PUF are analog and complex (as in the case of optical PUFs), a simulation algorithm can be used to break certain protocols. For example, an exchanged key can already be derived from mere numeric simulations of the PUF; an exact analog imitation of the complex optical signal is not necessary to this end.

I also cannot share D.J. Bernstein's claim that the authors exaggerate their results. They give extensive and, in my opinion, well-balanced discussions of the reach, but at the same time of the limitations, of their techniques in the introduction section (and partly also in the summary section). For example, they make clear that their attacks apply to so-called Weak PUFs only under very rare circumstances, and that they do not apply at all to Coating PUFs, SRAM PUFs, or Butterfly PUFs. Any reader who has had even a short glimpse at the introduction cannot miss this discussion, and could hardly make claims of PUF exaggeration such as the ones we have seen.

Finally, it may be interesting to add that, following its publication on the ePrint archive, the paper was accepted to CCS in 2010, and it has been cited multiple times in the meantime (please see Google Scholar). Such acceptance and citation numbers cannot always prove the quality of scientific work, but in this case they show at least that the paper has been received rather well in the PUF community, which is somewhat at odds with D.J. Bernstein's (perhaps too harsh) criticism of the paper.

GeorgeBest From: 2013-07-22 15:15:59 (UTC)
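The remark that the simulation algorithms "merely require the addition of small integers, followed by a comparison operation" refers to the standard linear additive delay model used in modelling attacks on arbiter PUFs. A minimal sketch, with hypothetical sizes and weights (the learned weights here are random placeholders, not the output of an actual attack):

```python
# Linear additive delay model of an arbiter PUF: once the stage delay
# weights w have been learned, predicting a 1-bit response is just a
# weighted sum (additions) followed by a sign comparison.
import random

random.seed(1)
n = 16                                               # arbiter stages (hypothetical)
w = [random.randint(-5, 5) for _ in range(n + 1)]    # "learned" delay weights (placeholder)

def features(challenge):
    """Parity feature vector Phi(c) of an n-bit challenge in {0,1}^n."""
    phi = []
    for i in range(n):
        prod = 1
        for bit in challenge[i:]:
            prod *= 1 - 2 * bit      # map {0,1} -> {+1,-1} and take the parity
        phi.append(prod)
    phi.append(1)                    # bias term
    return phi

def response(challenge):
    """Predicted PUF response: a dot product and one comparison."""
    total = sum(wi * xi for wi, xi in zip(w, features(challenge)))
    return 1 if total > 0 else 0
```

A smart card running this loop is computationally indistinguishable from the real PUF at the protocol interface, which is exactly the cheating scenario described above.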

15:17 [Pub][ePrint]

Katz et al. provided a generic transform to construct aggregate message authentication codes and proved a lower bound on the length of an aggregate MAC tag. The lower bound shows that the required tag length is at least linear in the number of messages when fast verification, such as constant or logarithmic computation overhead, is required. Aggregate message authentication codes are useful in settings such as mobile ad-hoc networks, where devices are resource-constrained and energy cost is at a premium. In this paper, we introduce the notion of a sequential aggregate message authentication code (SAMAC). We present a security model for this notion, capturing unforgeability against chosen-message and verification-query attacks, and construct an efficient SAMAC scheme by extending a number-theoretic MAC construction due to Dodis et al. We prove the security of our SAMAC scheme under the CDH assumption in the standard model. Our SAMAC scheme circumvents the lower bound with the help of the underlying algebraic structure. Performance analysis shows that it yields constant computation for the verifier as well as a fixed length for one aggregate.
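To make the *sequential* aggregation interface concrete, here is an illustrative toy using HMAC chaining. This is emphatically not the paper's number-theoretic construction: the names and the chaining design are assumptions chosen only to show the data flow, and note that this naive chain forces the verifier to recompute every link (linear work), which is precisely the cost the algebraic construction above avoids.

```python
# Toy sequential aggregate MAC interface: each signer folds its message into
# a single fixed-length running tag, so the aggregate stays constant-size
# regardless of how many messages have been authenticated.
import hashlib
import hmac

def samac_update(agg_tag, key, message):
    """Signer absorbs its message into the running aggregate tag."""
    return hmac.new(key, agg_tag + message, hashlib.sha256).digest()

def samac_verify(agg_tag, keys_and_msgs):
    """Verifier recomputes the whole chain in order (linear cost here)."""
    t = b"\x00" * 32                     # initial empty aggregate
    for key, msg in keys_and_msgs:
        t = samac_update(t, key, msg)
    return hmac.compare_digest(t, agg_tag)
```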

15:17 [Pub][ePrint]

Lightweight block ciphers are designed to fit into very constrained environments, but usually not with software performance in mind. For classical lightweight applications, where many constrained devices communicate with a server, it is also crucial that the cipher have good software performance on the server side. Recent work has shown that bitslice implementations of Piccolo and PRESENT achieve very good software speeds, making lightweight ciphers interesting for cloud applications. However, we remark that bitslice implementations may not be suitable in situations where the amount of data to be enciphered at a time is small, and very little work has been done on non-bitslice implementations.

In this article, we explore general software implementations of lightweight ciphers on x86 architectures, with a special focus on LED, Piccolo and PRESENT. First, we analyze table-based implementations, and provide a theoretical model to predict the behavior of the various possible trade-offs depending on the processor's cache latency profile. We obtain the fastest table-based implementations of our lightweight ciphers, which is of interest for legacy processors. Second, we apply the vperm implementation trick for 4-bit S-boxes to our portfolio of primitives; it gives good performance, extra side-channel protection, and fits many lightweight primitives well. Finally, we investigate bitslice implementations, analyzing various costs that are usually neglected (packing and unpacking the bitsliced form, key schedule, etc.) but must be taken into account for many lightweight applications. We conclude by discussing which type of implementation seems best suited to each application profile.
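The table-based trade-off can be illustrated with the public PRESENT 4-bit S-box: instead of performing two 16-entry lookups per byte, one can precompute a 256-entry byte-level table that substitutes both nibbles at once (a larger table but fewer lookups, which is the kind of table-size/cache-latency trade-off the paper's model covers). A sketch:

```python
# The PRESENT 4-bit S-box (public specification).
SBOX4 = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
         0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

# Precomputed 256-entry table: apply the 4-bit S-box to both nibbles of a byte.
SBOX8 = [(SBOX4[b >> 4] << 4) | SBOX4[b & 0xF] for b in range(256)]

def sub_bytes(state):
    """S-box layer over a bytes-like state: one table lookup per byte."""
    return bytes(SBOX8[b] for b in state)
```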

15:17 [Pub][ePrint]

In 2013, Joux, and then Barbulescu, Gaudry, Joux and Thomé, presented new algorithms for computing discrete logarithms in finite fields of small and medium characteristic. We show that these new algorithms render the finite field $\mathbb{F}_{3^{3054}}$ weak for discrete logarithm cryptography, in the sense that discrete logarithms in this field can be computed significantly faster than with the previous fastest algorithms. Our concrete analysis shows that the supersingular elliptic curve over $\mathbb{F}_{3^{509}}$ with embedding degree 6 that had been considered for implementing pairing-based cryptosystems at the 128-bit security level in fact provides only a significantly lower level of security.

15:17 [Pub][ePrint]

In this paper we propose new methods to blind the exponents used in RSA and in elliptic-curve based algorithms. To counter classical differential power analysis (DPA and CPA), many countermeasures protecting exponents have been proposed since 1999, by Kocher [20] and by Coron [13]. However, these blinding methods have drawbacks in execution time and memory cost, and also some weaknesses: they can be targeted by attacks such as the carry leakage on the randomized exponent proposed by P.A. Fouque et al. in [23], or be ineffective against other analyses such as simple power analysis. In this article, we explain how the most widely used method can be exploited when an attacker has access to test samples. We then propose new dynamic blinding methods that prevent any learning phase and improve resistance against the latest published side-channel analyses.
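The classical countermeasure being discussed is additive exponent blinding: replace the RSA private exponent $d$ by $d' = d + r\cdot\varphi(N)$ for a fresh random $r$ before each exponentiation, so that the exponent's bit pattern differs between power traces while the result is unchanged. A minimal sketch with hypothetical toy parameters (far too small to be secure):

```python
# Classical additive exponent blinding for RSA: m^(d + r*phi(N)) = m^d (mod N)
# whenever gcd(m, N) = 1, by Euler's theorem. Toy parameters, illustration only.
import secrets

p, q = 1009, 1013                      # hypothetical toy primes
N, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)                    # private exponent

def blinded_pow(m, d, r_bits=16):
    """Modular exponentiation with a freshly blinded exponent each call."""
    r = secrets.randbits(r_bits) | 1   # fresh random blinding factor
    d_blind = d + r * phi              # same result mod N, different bit pattern
    return pow(m, d_blind, N)
```

The drawbacks mentioned in the abstract are visible even here: the blinded exponent is longer than $d$ (extra multiplications), and the addition `d + r * phi` is exactly where carry-leakage attacks look.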

15:17 [Pub][ePrint]

Flush+Reload is a cache side-channel attack that monitors access to data in shared pages. In this paper we demonstrate how to use the attack to extract private encryption keys from GnuPG. The high resolution and low noise of the Flush+Reload attack enable a spy program to recover over 98% of the bits of the private key in a single decryption or signing round. Unlike previous attacks, this attack targets the last-level (L3) cache; consequently, the spy program and the victim do not need to share an execution core of the CPU. The attack is not limited to a traditional OS and can be used in a virtualised environment, where it can attack programs executing in a different VM.

15:17 [Pub][ePrint]

We remark that the AKS primality testing algorithm needs about 1,000,000,000 GB of storage for a number of 1024 bits. Such a storage requirement is hard to meet in practice. To the best of our knowledge, it is impossible for current operating systems to read and write data across such a huge storage space. Thus, the running time of the AKS algorithm should not be estimated, as is usual, simply in terms of the number of arithmetic operations.
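The order of magnitude of this claim can be sanity-checked under one loudly stated assumption: that the degree bound is taken as $r \approx (\log_2 n)^5$, as in the original AKS analysis (later variants use a much smaller $r$, which would shrink the estimate dramatically). A rough back-of-envelope:

```python
# Back-of-envelope storage estimate for the original AKS algorithm, assuming
# it must hold polynomials of degree about r = (log2 n)^5 whose coefficients
# are integers reduced mod n. This is a ballpark sanity check, not a bound
# from the note itself.
bits = 1024                    # size in bits of the number n being tested
r = bits ** 5                  # degree bound r ~ (log2 n)^5 (original analysis)
coeff_bytes = bits // 8        # each coefficient is reduced mod n: ~128 bytes
storage_gb = r * coeff_bytes / 10**9
# One polynomial alone lands on the order of 10^8 GB, the same ballpark
# as the note's 10^9 GB figure once working copies are accounted for.
```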