International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web pages here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2013-07-27
03:17 [Pub][ePrint] On the Security of Group-based Proxy Re-encryption Scheme, by Purushothama B R and B B Amberker

  Proxy re-encryption (PRE) allows a semi-trusted proxy to convert a ciphertext intended for Alice into a ciphertext for Bob without learning anything about the underlying plaintext. Chunbo Ma et al. have proposed a group-based proxy re-encryption scheme that converts a ciphertext from one group to another, in which any group member can independently decrypt ciphertexts encrypted to its group. The authors gave a security proof claiming that the scheme is secure against adaptive chosen-ciphertext attack. However, we highlight flaws in their scheme and show that it is not secure against adaptive chosen-ciphertext attack: we construct an adversary that issues only one decryption oracle query and breaks the security of their scheme with non-negligible advantage.
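
The attacked scheme is not reproduced in the abstract, but for reference, below is a minimal Python sketch of the standard PRE syntax (key generation, encryption, re-key generation, re-encryption, decryption), instantiated with the classic Blaze-Bleumer-Strauss construction (EUROCRYPT 1998) over a toy prime-order group. It is illustrative only: like the scheme attacked above, it is malleable and therefore not secure against chosen-ciphertext attacks.

    # Toy BBS-style proxy re-encryption over the order-q subgroup of Z_p^*.
    # Illustrates the PRE syntax only; NOT the group-based scheme of Ma et
    # al., and (being malleable) not CCA-secure.  Requires Python 3.8+.
    import secrets

    p, q, g = 1019, 509, 4                    # p = 2q + 1; g generates the order-q subgroup

    def keygen():
        a = secrets.randbelow(q - 1) + 1      # secret key a in [1, q-1]
        return a, pow(g, a, p)                # (sk, pk = g^a)

    def encrypt(pk, m):
        k = secrets.randbelow(q - 1) + 1
        return (m * pow(g, k, p)) % p, pow(pk, k, p)   # (m*g^k, g^{a*k})

    def rekey(sk_a, sk_b):
        return (sk_b * pow(sk_a, -1, q)) % q  # rk = b * a^{-1} mod q

    def reencrypt(rk, ct):
        c1, c2 = ct
        return c1, pow(c2, rk, p)             # (g^{a*k})^{b/a} = g^{b*k}

    def decrypt(sk, ct):
        c1, c2 = ct
        s = pow(c2, pow(sk, -1, q), p)        # recover g^k
        return (c1 * pow(s, -1, p)) % p

    a, pk_a = keygen()
    b, pk_b = keygen()
    ct = encrypt(pk_a, 42)
    assert decrypt(a, ct) == 42
    assert decrypt(b, reencrypt(rekey(a, b), ct)) == 42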



03:17 [Pub][ePrint] Deduction Soundness: Prove One, Get Five for Free, by Florian Böhl and Véronique Cortier and Bogdan Warinschi

  Most computational soundness theorems deal with a limited number of primitives, thereby limiting their applicability. The notion of deduction soundness of Cortier and Warinschi (CCS'11) aims to facilitate soundness theorems for richer frameworks via composition results: deduction soundness extends, generically, with asymmetric encryption and public data structures. Unfortunately, that paper also hints at rather serious limitations regarding further composition results: composability with digital signatures seems to be precluded.

In this paper we provide techniques for bypassing the perceived limitations of deduction soundness and demonstrate that it enjoys vastly improved composition properties. More precisely, we show that a deduction sound implementation can be modularly extended with all of the basic cryptographic primitives (symmetric/asymmetric encryption, message authentication codes, digital signatures, and hash functions). We thus obtain the first soundness framework that allows for the joint use of multiple instances of all of the basic primitives.

In addition, we show how to overcome an important restriction of the bare deduction soundness framework which forbids sending encrypted secret keys. In turn, this prevents its use for the analysis of a large class of interesting protocols (e.g. key exchange protocols). We allow for more liberal uses of keys as long as they are hidden in a sense that we also define. All primitives typically used to send secret data (symmetric/asymmetric encryption) satisfy our requirement which we also show to be preserved under composition.



03:17 [Pub][ePrint] Exponentiating in Pairing Groups, by Joppe W. Bos and Craig Costello and Michael Naehrig

  We study exponentiations in pairing groups for the most common security levels and show that, although the Weierstrass model is preferable for pairing computation, it can be worthwhile to map to alternative curve representations for the non-pairing group operations in protocols.
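
For context, the operation being optimized is plain group exponentiation (scalar multiplication, on a curve). Below is a sketch of the textbook left-to-right square-and-multiply baseline, written multiplicatively for a group mod p; squarings and multiplications correspond to point doublings and additions on a curve. This is the generic baseline, not the paper's optimized routines.

    # Textbook left-to-right square-and-multiply exponentiation.
    def group_exp(x, e, p):
        r = 1
        for bit in bin(e)[2:]:
            r = (r * r) % p          # square (doubling, on a curve)
            if bit == '1':
                r = (r * x) % p      # multiply (addition, on a curve)
        return r

    assert group_exp(5, 117, 1019) == pow(5, 117, 1019)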





2013-07-23
17:09 [Job][New] 1 PhD student in Information Security, Chalmers University of Technology, Gothenburg, Sweden

  We are looking for an excellent PhD candidate to work in the area of information and communication security with a focus on authentication problems in constrained settings. This is particularly important for applications involving mobile phones, wireless communication and RFID systems, which suffer from restrictions in terms of power resources, network connectivity, computational capabilities, as well as potential privacy issues. The overall aim of the project will be to develop nearly optimal algorithms for achieving security and privacy while minimising resource use.

More concretely, part of the research will involve the analysis and development of authentication protocols in specific settings. This will include investigating resistance of both existing and novel protocols against different types of attacks, theoretically and experimentally. In addition to investigating established settings, such as RFID authentication, the research will also explore more general authentication problems, such as those that arise in the context of trust in social networks, smartphone applications and collaborative data processing. This will be done by grounding the work in a generalised decision-making framework. The project should result in the development of theory and authentication mechanisms for noisy, constrained settings that strike an optimal balance between reliable authentication, privacy-preservation and resource consumption. Some previous research related to this research project can be found here: http://lasecwww.epfl.ch/~katerina/Publications.html

Applicants for the position shall have a Master's degree, or equivalent, in Computer Science, Informatics, Telecommunications or a related discipline. A Master's degree in information security or cryptography is a bonus.

15:17 [Pub][ePrint] How to Use Indistinguishability Obfuscation: Deniable Encryption, and More, by Amit Sahai and Brent Waters

  We introduce a new technique, that we call punctured programs, to apply indistinguishability obfuscation towards cryptographic problems. We use this technique to carry out a systematic study of the applicability of indistinguishability obfuscation to a variety of cryptographic goals. Along the way, we resolve the 16-year-old open question of Deniable Encryption, posed by Canetti, Dwork, Naor, and Ostrovsky in 1997: in deniable encryption, a sender who is forced to reveal to an adversary both her message and the randomness she used for encrypting it should be able to convincingly provide ``fake'' randomness that can explain any alternative message that she would like to pretend that she sent. We resolve this question by giving the first construction of deniable encryption that does not require any pre-planning by the party that must later issue a denial.

In addition, we show the generality of our punctured programs technique by also constructing a variety of core cryptographic objects from indistinguishability obfuscation and one-way functions (or close variants). In particular we obtain: public-key encryption, short ``hash-and-sign'' selectively secure signatures, chosen-ciphertext secure public-key encryption, non-interactive zero-knowledge proofs (NIZKs), injective trapdoor functions, and oblivious transfer. These results suggest the possibility of indistinguishability obfuscation becoming a ``central hub'' for cryptography.
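
As a toy illustration of the deniability property itself (emphatically not of the Sahai-Waters construction), consider the one-time pad: the sender can ``explain'' a ciphertext as any equal-length message by presenting fake randomness. The difficulty the paper resolves is achieving this for public-key encryption, with no pre-shared pad and no pre-planning.

    # Deniability in miniature with a one-time pad: c = m XOR r can be
    # "explained" as any m' of the same length via r' = c XOR m'.
    # Symmetric and information-theoretic; shown for intuition only.
    import secrets

    def encrypt(m: bytes):
        r = secrets.token_bytes(len(m))               # encryption randomness
        return bytes(x ^ y for x, y in zip(m, r)), r  # (ciphertext, randomness)

    def fake_randomness(c: bytes, m_fake: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(c, m_fake))  # r' with c = m' XOR r'

    c, r = encrypt(b"attack at dawn")
    r_fake = fake_randomness(c, b"buy more milk!")      # alternative message
    assert bytes(x ^ y for x, y in zip(c, r_fake)) == b"buy more milk!"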



15:17 [Pub][ePrint] Another Nail in the Coffin of White-Box AES Implementations, by Tancrède Lepoint and Matthieu Rivain

  The goal of white-box cryptography is to design implementations of common cryptographic algorithms (e.g. AES) that remain secure against an attacker with full control of the implementation and execution environment. This concept was put forward a decade ago by Chow et al. (SAC 2002), who proposed the first white-box implementation of AES. Since then, several works have been dedicated to the design of new implementations and/or the breaking of existing ones.

In this paper, we describe a new attack against the original implementation of Chow et al. (SAC 2002) which efficiently recovers the AES secret key as well as the private external encodings in complexity $2^{22}$. Compared to the previous attack due to Billet et al. (SAC 2004), of complexity $2^{30}$, our attack is not only more efficient but also simpler to implement. We then show that the \emph{last} candidate white-box AES implementation, due to Karroumi (ICISC 2010), can be broken by a direct application of either Billet et al.'s attack or ours. Specifically, we show that for any given secret key, the overall implementation has the \emph{exact same} distribution as the implementation of Chow et al., making both vulnerable to the same attacks.

By improving the state of the art of white-box cryptanalysis and putting forward new attack techniques, our work brings new insights into the failure of existing white-box implementations, which we believe could be useful for the design of future solutions.
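
For readers unfamiliar with the design being attacked, here is the Chow et al. table idea in 4-bit miniature: a key-dependent step T is published only as a lookup table composed with secret input/output encodings, T' = g o T o f^{-1}, so inspecting a single table reveals neither the key nor T; attacks like those above work by peeling such encodings off composed tables. Everything below (the key-dependent step, the encodings, the key) is a toy placeholder, not the real AES tables.

    # Encoded-table idea of white-box crypto, 4-bit toy version.
    import random

    rng = random.Random(1)
    f = list(range(16)); rng.shuffle(f)        # secret input encoding
    g = list(range(16)); rng.shuffle(g)        # secret output encoding
    f_inv = [f.index(i) for i in range(16)]

    k = 0xA                                    # toy round-key nibble
    T = [x ^ k for x in range(16)]             # key-dependent step (toy: key XOR)

    T_enc = [g[T[f_inv[x]]] for x in range(16)]  # the published white-box table

    x = 7
    assert T_enc[f[x]] == g[T[x]]              # encoded evaluation is consistent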





2013-07-22
15:17 [Forum] [2010 Reports] Re: 2010/251 PUF exaggeration by GeorgeBest

  I must admit that I do not find D.J. Bernstein's assessment appropriate, for a number of reasons. First of all, the optical example he gives does not really apply to the work of Rührmair et al., who only discuss modeling attacks on electrical PUFs with digital, one-bit outputs. Such outputs can easily be imitated once an efficient algorithm that predicts the electrical PUF is known. The simulation algorithms which Rührmair et al. derive are, in fact, extremely simple: they often merely require the addition of small integers, followed by a comparison operation. If implemented, their form factor would be quite close to a "real PUF". In particular, they could easily be implemented in smart cards and the like, which are one of the standard application examples for PUFs. To notice the difference, honest users would have to physically open the PUF/smart card and inspect it with utmost thoroughness, which is an impractical task for the average user. This makes the simulation algorithms of Rührmair et al. an extremely effective way to cheat.

Furthermore, even if the outputs of a PUF are analog and complex (as in the case of optical PUFs), a simulation algorithm can be used to break certain protocols. For example, an exchanged key can be derived from mere numeric simulations of the PUF; exact analog imitation of the complex optical signal is not necessary to this end.

I also cannot share D.J. Bernstein's claim that the authors exaggerate their results. They give thorough and, in my opinion, well-balanced discussions of both the reach and the limitations of their techniques in the introduction section (and partly also in the summary section). For example, they make clear that their attacks apply to so-called "Weak PUFs" only under very rare circumstances, and that they do not apply at all to Coating PUFs, SRAM PUFs, or Butterfly PUFs. Any reader who has even a short glimpse at the introduction cannot miss this discussion, and could hardly make claims of "PUF exaggeration", as the ones we have seen.

Finally, it may be interesting to add that, following its publication on the ePrint archive, the paper was accepted to CCS 2010 and has been cited multiple times since (see Google Scholar). Such acceptance and citation numbers cannot always prove the quality of scientific work, but in this case they show at least that the paper has been received rather well in the PUF community, which is somewhat at odds with D.J. Bernstein's (perhaps too harsh) criticism of the paper.

GeorgeBest
From: 2013-07-22 15:15:59 (UTC)
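
To make the "addition of small integers, followed by a comparison" point concrete, here is the standard additive delay model of an arbiter PUF, which is what such simulation algorithms compute. The weights below are random placeholders; in an actual modeling attack they would be learned from challenge-response pairs.

    # Additive linear delay model of an n-stage arbiter PUF (toy weights).
    import random

    n = 64
    w = [random.gauss(0, 1) for _ in range(n + 1)]    # placeholder delay model

    def simulate(challenge):                          # challenge: n bits in {0,1}
        phi = [1] * (n + 1)                           # parity feature vector
        for i in range(n - 1, -1, -1):
            phi[i] = phi[i + 1] * (1 - 2 * challenge[i])
        delta = sum(wi * pi for wi, pi in zip(w, phi))
        return 1 if delta > 0 else 0                  # one comparison at the end

    print(simulate([random.randint(0, 1) for _ in range(n)]))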

15:17 [Pub][ePrint] Sequential message authentication code without random oracles, by Bin Wang and Xiaojing Hong

  Katz et al. provided a generic transform for constructing aggregate message authentication codes and proved a lower bound on the length of an aggregate MAC tag: the required tag length grows at least linearly with the number of messages whenever fast verification (constant or logarithmic computation overhead) is required. Aggregate message authentication codes are useful in settings such as mobile ad-hoc networks, where devices are resource-constrained and energy cost is at a premium. In this paper, we introduce the notion of a sequential aggregate message authentication code (SAMAC). We present a security model for this notion, capturing unforgeability against chosen-message and verification-query attacks, and construct an efficient SAMAC scheme by extending a number-theoretic MAC construction due to Dodis et al. We prove the security of our SAMAC scheme under the CDH assumption in the standard model. Our SAMAC scheme circumvents the lower bound with the help of the underlying algebraic structure. Performance analysis shows that our SAMAC scheme yields constant computation for the verifier as well as a fixed length for one aggregate tag.
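
The paper's number-theoretic construction is not reproduced in the abstract. For the syntax only, here is a generic chained-MAC sketch of sequential aggregation, where each signer folds its message into the running tag. Note that this naive chain forces the verifier to recompute the whole chain (linear work), whereas the scheme described above claims constant verifier computation.

    # Generic sequential aggregation by chaining HMACs (syntax sketch only).
    import hashlib
    import hmac

    TAG0 = b"\x00" * 32                               # initial tag

    def aggregate(key: bytes, msg: bytes, prev_tag: bytes) -> bytes:
        return hmac.new(key, prev_tag + msg, hashlib.sha256).digest()

    def verify(keys, msgs, tag) -> bool:              # linear-time recomputation
        t = TAG0
        for k, m in zip(keys, msgs):
            t = aggregate(k, m, t)
        return hmac.compare_digest(t, tag)

    keys = [b"key-1", b"key-2", b"key-3"]
    msgs = [b"msg-1", b"msg-2", b"msg-3"]
    t = TAG0
    for k, m in zip(keys, msgs):
        t = aggregate(k, m, t)
    assert verify(keys, msgs, t)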



15:17 [Pub][ePrint] Implementing Lightweight Block Ciphers on x86 Architectures, by Ryad Benadjila and Jian Guo and Victor Lomné and Thomas Peyrin

  Lightweight block ciphers are designed to fit into very constrained environments, usually without much regard for software performance. Yet for classical lightweight applications, where many constrained devices communicate with a server, it is also crucial that the cipher has good software performance on the server side. Recent work has shown that bitslice implementations of Piccolo and PRESENT achieve very good software speeds, thus making lightweight ciphers interesting for cloud applications. However, bitslice implementations may be unattractive in situations where the amount of data to be enciphered at a time is small, and very little work has been done on non-bitslice implementations.

In this article, we explore general software implementations of lightweight ciphers on x86 architectures, with a special focus on LED, Piccolo and PRESENT. First, we analyze table-based implementations and provide a theoretical model to predict the behavior of various possible trade-offs depending on the processor's cache latency profile. We obtain the fastest table-based implementations for our lightweight ciphers, which is of interest for legacy processors. Secondly, we apply to our portfolio of primitives the vperm implementation trick for 4-bit Sboxes, which gives good performance, extra side-channel protection, and suits many lightweight primitives well. Finally, we investigate bitslice implementations, analyzing various costs that are usually neglected ((un)packing into bitsliced form, key schedule, etc.) but must be taken into account for many lightweight applications. We conclude by discussing which type of implementation seems best suited to each application profile.
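
As a tiny illustration of the table-based trade-offs analyzed in the paper, one classic trick is to merge two 4-bit S-box lookups into a single 256-entry byte table, trading table memory (and cache pressure, which the paper's latency model captures) for fewer lookups per state byte. The S-box is PRESENT's; the table layout is illustrative, not the paper's exact implementation.

    # Merge two 4-bit S-box lookups into one 8-bit table (PRESENT S-box).
    SBOX4 = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
             0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

    # 256-entry table applying SBOX4 to both nibbles of a byte at once.
    SBOX8 = [(SBOX4[b >> 4] << 4) | SBOX4[b & 0xF] for b in range(256)]

    def sub_layer(state: bytes) -> bytes:
        return bytes(SBOX8[b] for b in state)   # one lookup per byte, not two

    assert sub_layer(bytes([0x01, 0xFF])) == bytes([0xC5, 0x22])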



15:17 [Pub][ePrint] Weakness of $\mathbb{F}_{3^{6\cdot509}}$ for Discrete Logarithm Cryptography, by Gora Adj and Alfred Menezes and Thomaz Oliveira and Francisco Rodríguez-Henríquez

  In 2013, Joux, and then Barbulescu, Gaudry, Joux and Thomé, presented new algorithms for computing discrete logarithms in finite fields of small and medium characteristic. We show that these new algorithms render the finite field $\mathbb{F}_{3^{6\cdot509}} = \mathbb{F}_{3^{3054}}$ weak for discrete logarithm cryptography, in the sense that discrete logarithms in this field can be computed significantly faster than with the previous fastest algorithms. Our concrete analysis shows that the supersingular elliptic curve over $\mathbb{F}_{3^{509}}$ with embedding degree 6, which had been considered for implementing pairing-based cryptosystems at the 128-bit security level, in fact provides a significantly lower level of security.



15:17 [Pub][ePrint] Dynamic Runtime Methods to Enhance Private Key Blinding, by Karine Gandolfi-Villegas and Nabil Hamzi

  In this paper we propose new methods to blind the exponents used in RSA and elliptic-curve-based algorithms. To counter classical differential power analysis (DPA and CPA), many countermeasures for protecting exponents have been proposed since 1999, by Kocher [20] and by Coron [13]. However, these blinding methods have drawbacks in terms of execution time and memory cost, and they also have weaknesses: they can be targeted by attacks such as the carry leakage on the randomized exponent proposed by P.A. Fouque et al. in [23], or be ineffective against other analyses such as simple power analysis. In this article, we explain how the most widely used method can be exploited when an attacker has access to test samples. We then propose new dynamic blinding methods that prevent any learning phase and improve resistance against the latest published side-channel analyses.
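
As a point of reference for the widely used blinding discussed above, here is a minimal sketch of classical additive exponent blinding for RSA, in the spirit of the Kocher [20] and Coron [13] countermeasures cited in the abstract: exponentiate with d' = d + r*phi(N), which randomizes the exponent on every call while leaving the result unchanged whenever gcd(c, N) = 1. Toy parameters only; the paper's contribution is choosing the blinding dynamically at runtime.

    # Classical additive exponent blinding for RSA (toy parameters).
    # Requires Python 3.8+ for pow(e, -1, phi).
    import secrets

    p, q = 61, 53                             # toy primes
    N, phi = p * q, (p - 1) * (q - 1)         # N = 3233, phi = 3120
    e = 17
    d = pow(e, -1, phi)                       # private exponent

    def blinded_pow(c: int, d: int) -> int:
        r = secrets.randbelow(2**16) + 1      # fresh blinding factor each call
        d_blind = d + r * phi                 # d' = d + r*phi(N)
        return pow(c, d_blind, N)             # == c^d mod N when gcd(c, N) = 1

    m = 65
    c = pow(m, e, N)
    assert blinded_pow(c, d) == m             # decryption result unchanged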