International Association for Cryptologic Research

IACR News Central

Get an update on changes of the IACR web-page here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2015-05-01
12:17 [Pub][ePrint] Expiration and Revocation of Keys for Attribute-based Signatures, by Stephen R. Tate and Roopa Vishwanathan

  Attribute-based signatures, introduced by Maji et al., are signatures that prove that an authority has issued the signer "attributes" that satisfy some specified predicate. In existing attribute-based signature schemes, keys are valid indefinitely once issued. In this paper, we initiate the study of incorporating time into attribute-based signatures, where a time instance is embedded in every signature, and attributes are restricted to producing signatures with times that fall in designated validity intervals. We provide three implementations that vary in the granularity of assigning validity intervals to attributes: a scheme in which each attribute has its own independent validity interval, a scheme in which all attributes share a common validity interval, and a scheme in which sets of attributes share validity intervals. All of our schemes provide anonymity to a signer, hide the attributes used to create the signature, and provide collusion-resistance between users.



12:17 [Pub][ePrint] Simple Chosen-Ciphertext Security from Low-Noise LPN, by Eike Kiltz and Daniel Masny and Krzysztof Pietrzak

  Recently, Döttling et al. (ASIACRYPT 2012) proposed the first chosen-ciphertext (IND-CCA) secure public-key encryption scheme from the learning parity with noise (LPN) assumption.

In this work we give an alternative scheme which is conceptually simpler and more efficient.

At the core of our construction is a trapdoor technique originally proposed for lattices by Micciancio and Peikert (EUROCRYPT 2012), which we adapt to the LPN setting. The main technical tool is a new double-trapdoor mechanism, together with a trapdoor switching lemma based on a computational variant of the leftover hash lemma.



12:17 [Pub][ePrint] Success through confidence: Evaluating the effectiveness of a side-channel attack, by Adrian Thillard and Emmanuel Prouff and Thomas Roche

  Side-channel attacks usually apply a divide-and-conquer strategy, separately recovering different parts of the secret. Their efficiency in practice relies on the adversary's ability to precisely assess the success or failure of each of these recoveries. This makes the study of the attack success rate a central problem in side-channel analysis. In this paper we tackle this issue in two different settings for the most popular attack, namely Correlation Power Analysis (CPA). In the first setting, we assume that the targeted subkey is known and we compare the state-of-the-art formulae expressing the success rate as a function of the leakage noise and the algebraic properties of the cryptographic primitive. We also make the link between these formulae and the recent work of Fei et al. at CHES 2012. In the second setting, the subkey is no longer assumed to be known and we introduce the notion of a confidence level in an attack result, allowing for the study of different heuristics. Through experiments, we show that the rank evolution of a subkey hypothesis can be exploited to compute a better confidence level than considering only the final result.
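The divide-and-conquer CPA ranking that the abstract's confidence heuristics operate on can be illustrated with a toy simulation. This is a minimal sketch, not taken from the paper: the 4-bit S-box, Hamming-weight leakage model, noise level, and trace count are all illustrative choices.

```python
import random

# Toy CPA: recover one secret nibble from simulated power traces by
# ranking key guesses by correlation with a leakage hypothesis.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT S-box

def hw(x):
    return bin(x).count("1")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

random.seed(1)
SECRET = 0xA
plaintexts = [random.randrange(16) for _ in range(200)]
# Simulated traces: Hamming-weight leakage of the S-box output plus Gaussian noise.
traces = [hw(SBOX[p ^ SECRET]) + random.gauss(0, 0.5) for p in plaintexts]

# CPA: correlate the predicted leakage under each key guess with the
# traces, then rank guesses by absolute correlation.
scores = {k: abs(pearson([hw(SBOX[p ^ k]) for p in plaintexts], traces))
          for k in range(16)}
ranking = sorted(scores, key=scores.get, reverse=True)
print(hex(ranking[0]))  # the top-ranked guess should be the secret nibble
```

The rank of the correct subkey within `ranking` over successive trace counts is exactly the "rank evolution" the paper exploits to compute a confidence level.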





2015-04-29
15:17 [Pub][ePrint] Method to Protect Passwords in Databases for Web Applications, by Scott Contini

  Trying to make it more difficult to hack passwords has a long history. However, the research community has not addressed the change of context from traditional Unix mainframe systems to web applications, which face new threats (DoS) and have fewer constraints (client-side computation is allowed). In the absence of updated guidance, a variety of solutions are scattered all over the web, from amateur to somewhat professional. However, even the best references have issues such as incomplete details, misuse of terminology, assertion of requirements that are not adequately justified, and too many options presented to the developer, opening the door to potential mistakes. The purpose of this research note is to present a solution with complete details and a concise summary of the requirements, one that developers can readily implement with confidence, assuming that the solution is endorsed by the research community. The proposed solution involves client-side processing of a heavy computation in combination with a server-side hash computation. It follows a similar approach to a few other proposals on the web, but is more complete and justified than any that we found.
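The client-heavy/server-light split described above can be sketched in a few lines. This is a hedged illustration of the general pattern, not the note's actual construction; the primitives and parameters (PBKDF2-HMAC-SHA256, 200,000 iterations, SHA-256 server hash) are assumptions for the example.

```python
import hashlib
import os

def client_stretch(password: str, salt: bytes) -> bytes:
    # Heavy computation runs on the client, so the server cannot be
    # DoS'ed through the password check itself.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def server_record(stretched: bytes) -> bytes:
    # Cheap server-side hash: the database stores H(stretched), so a
    # stolen database still forces an attacker through the slow function.
    return hashlib.sha256(stretched).digest()

# Registration: client stretches, server hashes and stores.
salt = os.urandom(16)
record = server_record(client_stretch("correct horse", salt))

# Login: client re-derives the stretched value, server re-hashes and compares.
attempt = server_record(client_stretch("correct horse", salt))
print(attempt == record)  # True for the right password
```

The design point is that the expensive key stretching and the stored hash live on opposite sides of the network, which is the combination the abstract argues for.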



15:17 [Pub][ePrint] Fully Succinct Garbled RAM, by Ran Canetti and Justin Holmgren

  We construct the first fully succinct garbling scheme for RAM programs, assuming the existence of indistinguishability obfuscation for circuits and one-way functions. That is, the size, space requirements, and runtime of the garbled program are the same as those of the input program, up to poly-logarithmic factors and a polynomial in the security parameter. The scheme can be used to construct indistinguishability obfuscators for RAM programs with comparable efficiency, at the price of requiring sub-exponential security of the underlying primitives.

The scheme builds on the recent schemes of Koppula-Lewko-Waters and Canetti-Holmgren-Jain-Vaikuntanathan [STOC 15]. A key technical challenge here is how to combine the fixed-prefix technique of KLW, which was developed for deterministic programs, with randomized Oblivious RAM techniques. To overcome that, we develop a method for arguing about the indistinguishability of two obfuscated randomized programs that use correlated randomness. Along the way, we also define and construct garbling schemes that offer only partial protection. These may be of independent interest.



15:17 [Pub][ePrint] Keccak, by Guido Bertoni and Joan Daemen and Michael Peeters and Gilles Van Assche

  In October 2012, the American National Institute of Standards and Technology (NIST) announced the selection of Keccak as the winner of the SHA-3 Cryptographic Hash Algorithm Competition [10,11]. This concluded an open competition that was remarkable both for its magnitude and the involvement of the cryptographic community. Public review is of paramount importance to increase the confidence in the new standard and to favor its quick adoption. The SHA-3 competition explicitly took this into account by giving open access to the candidate algorithms and everyone in the cryptographic community could try to break them, compare their performance, or simply give comments.



15:17 [Pub][ePrint] Dual System Encryption Framework in Prime-Order Groups, by Nuttapong Attrapadung

  We propose a new generic framework for achieving fully secure attribute based encryption (ABE) in prime-order bilinear groups. It is generic in the sense that it can be applied to ABE for arbitrary predicates. All previously available frameworks that are generic in this sense are given only in composite-order bilinear groups. These consist of the frameworks proposed by Wee (TCC'14) and Attrapadung (Eurocrypt'14). Both frameworks provide abstractions of the dual-system encryption techniques introduced by Waters (Crypto'09). Our framework can be considered as a prime-order version of Attrapadung's framework and works in a similar manner: it relies on a main component called pair encodings, and it generically compiles any secure pair encoding scheme for the predicate under consideration into a fully secure ABE scheme for that predicate. One feature of our new compiler is that although the resulting ABE schemes will be newly defined in prime-order groups, we require essentially the same security notions of pair encodings as before. Besides the security of pair encodings, our framework assumes only the Matrix Diffie-Hellman assumption, introduced by Escala et al. (Crypto'13), which is a weak assumption that includes the Decisional Linear assumption as a special case.

As for its applications, we can plug in available pair encoding schemes and automatically obtain the first fully secure ABE realizations in prime-order groups for predicates of which only fully secure schemes in composite-order groups were known. These include ABE for regular languages, ABE for monotone span programs (and hence Boolean formulae) with short ciphertexts or keys, and completely unbounded ABE for monotone span programs.

As a side result, we establish the first generic implication from ABE for monotone span programs to ABE for branching programs. Consequently, we obtain fully-secure ABE for branching programs in some new variants, namely, unbounded, short-ciphertext, and short-key variants. Previous ABE schemes for branching programs are bounded and require linear-size ciphertexts and keys.



15:17 [Pub][ePrint] On the Communication Complexity of Secure Computation, by Deepesh Data and Manoj M. Prabhakaran and Vinod M. Prabhakaran

  Information theoretically secure multi-party computation (MPC) is a central primitive of modern cryptography. However, relatively little is known about the communication complexity of this primitive. In this work, we develop powerful information theoretic tools to prove lower bounds on the communication complexity of MPC. We restrict ourselves to a concrete setting involving 3 parties, in order to bring out the power of these tools without introducing too many complications. Our techniques include the use of a data processing inequality for residual information (i.e., the gap between mutual information and Gács-Körner common information), a new information inequality for 3-party protocols, and the idea of distribution switching, by which lower bounds computed under certain worst-case scenarios can be shown to apply to the general case.

Using these techniques we obtain tight bounds on the communication complexity of MPC protocols for various interesting functions. In particular, we show concrete functions that have "communication-ideal" protocols, which achieve the minimum communication simultaneously on all links in the network. Also, we obtain the first explicit example of a function that incurs a higher communication cost than the input length in the secure computation model of Feige, Kilian and Naor [FeigeKiNa94], who had shown that such functions exist. We also show that our communication bounds imply tight lower bounds on the amount of randomness required by MPC protocols for many interesting functions.
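The residual information mentioned above has a compact standard definition; the following formulation is textbook background, not quoted from the paper. For jointly distributed $X, Y$:

$RI(X;Y) = I(X;Y) - C_{GK}(X;Y)$, where $C_{GK}(X;Y) = \max \{ H(Q) : Q = f(X) = g(Y) \text{ for some deterministic } f, g \}$

is the Gács-Körner common information, i.e., the entropy of the largest "common part" both parties can extract from their own observations without communicating. Residual information is thus the portion of the statistical dependence between $X$ and $Y$ that cannot be captured by any such common random variable.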



15:17 [Pub][ePrint] Forgery Attacks on round-reduced ICEPOLE-128, by Christoph Dobraunig and Maria Eichlseder and Florian Mendel

  ICEPOLE is a family of authenticated encryption schemes submitted to the ongoing CAESAR competition and additionally presented at CHES 2014. To justify the use of ICEPOLE, or to point out potential weaknesses, third-party cryptanalysis is needed. In this work, we evaluate the resistance of ICEPOLE-128 against forgery attacks. By using differential cryptanalysis, we are able to create forgeries from a known ciphertext-tag pair with a probability of $2^{-60.3}$ for a round-reduced version of ICEPOLE-128, where the last permutation is reduced to 4 (out of 6) rounds. This is a noticeable advantage compared to simply guessing the right tag, which succeeds with a probability of $2^{-128}$. As far as we know, this is the first published attack in a nonce-respecting setting on round-reduced versions of ICEPOLE-128.
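The size of the advantage quoted above is easy to make concrete; this is simple arithmetic on the stated probabilities, not a computation from the paper.

```python
# Advantage factor of the differential forgery (success probability
# 2^-60.3) over blind guessing of a 128-bit tag (2^-128).
forgery = 2.0 ** -60.3
guess = 2.0 ** -128
advantage = forgery / guess
print(advantage)  # = 2^67.7, roughly 2.4e20 times better than guessing
```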



15:17 [Pub][ePrint] Biclique cryptanalysis of MIBS-80 and PRESENT-80, by Mohammad Hossein Faghihi Sereshgi, Mohammad Dakhilalian, and Mohsen Shakiba

  In this paper we present the first biclique cryptanalysis of the MIBS block cipher and a new biclique cryptanalysis of the PRESENT block cipher. These attacks are performed on full-round MIBS-80 and full-round PRESENT-80. The attack on MIBS-80 uses the matching-without-matrix method and has a data complexity upper bounded by $2^{52}$ chosen plaintexts, and it reduces the security of this cipher by about 1 bit. The attack on PRESENT-80 has a data complexity of at most $2^{22}$ chosen plaintexts and a computational complexity of $2^{79.37}$ encryptions; both complexities are lower than those of other cryptanalyses of PRESENT-80 so far.



00:17 [Pub][ePrint] PAC Learning of Arbiter PUFs, by Fatemeh Ganji and Shahin Tajik and Jean-Pierre Seifert

  The general concept of Physically Unclonable Functions (PUFs) has nowadays been widely accepted and adopted to meet the requirements of secure identification and key generation/storage for cryptographic ciphers. However, shattered by different kinds of attacks, it has been proved that the promised security features of arbiter PUFs, including unclonability and unpredictability, do not hold unconditionally. It has been stated in the literature that arbiter PUFs can be broken by launching modeling attacks, i.e., applying machine learning methods. In this case, a large set of Challenge-Response Pairs (CRPs) is collected by an attacker, aiming at mathematically modeling the challenge-response behavior of a given arbiter PUF. However, the success of all existing modeling attacks rests so far on pure trial-and-error estimates. This means that neither the probability of obtaining a useful model (confidence), nor the maximum number of required CRPs, nor the correct prediction of an unknown challenge (accuracy) is guaranteed at all. To address these issues, this work presents a Probably Approximately Correct (PAC) learning algorithm. This proves that learning of any arbiter PUF for prescribed levels of accuracy and confidence can be done in polynomial time. Based on a crucial discretization process, we are able to define a Deterministic Finite Automaton (of polynomial size) that exactly accepts the regular language corresponding to the challenges mapped by the given PUF to the response 1. We also prove that the maximum required number of CRPs is polynomial in the number of arbiter stages. A further analysis reveals that this maximum number of CRPs is also polynomial in the maximum deviation of the arbiter delays as well as the pre-defined levels of accuracy and confidence. To the best of our knowledge this is the first algorithm that can provably learn an arbitrary arbiter PUF.

Moreover, our proof of the PAC learnability of arbiter PUFs gives many new insights into the correct construction of secure arbiter PUFs in general.
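The modeling attacks that motivate this work rely on the standard additive delay model of an arbiter PUF, under which the response is a linear threshold function of a parity feature vector. The following sketch simulates such a PUF and fits it with a plain perceptron; it illustrates the CRP-based modeling setting only, not the paper's DFA-based PAC learner, and the stage count, CRP budget, and delay distribution are illustrative assumptions.

```python
import random

random.seed(0)
N = 16  # number of arbiter stages

def phi(challenge):
    # Standard parity transform: phi_i = product of (1 - 2*c_j) for j >= i,
    # plus a constant last coordinate for the bias term.
    feat = [1.0] * (N + 1)
    for i in range(N - 1, -1, -1):
        feat[i] = feat[i + 1] * (1 - 2 * challenge[i])
    return feat

# Additive delay model: response = sign(w . phi(c)) for stage delay
# differences w (drawn Gaussian here, a common modeling assumption).
true_w = [random.gauss(0, 1) for _ in range(N + 1)]

def puf_response(c):
    return 1 if sum(w * f for w, f in zip(true_w, phi(c))) > 0 else 0

# Attacker collects CRPs from the device.
crps = []
for _ in range(2000):
    c = [random.randrange(2) for _ in range(N)]
    crps.append((c, puf_response(c)))

# Fit a perceptron model of the PUF from the collected CRPs.
model = [0.0] * (N + 1)
for _ in range(50):  # epochs
    for c, r in crps:
        feats = phi(c)
        pred = 1 if sum(w * f for w, f in zip(model, feats)) > 0 else 0
        if pred != r:  # perceptron update only on mistakes
            upd = 1.0 if r == 1 else -1.0
            for i, f in enumerate(feats):
                model[i] += upd * f

def predict(c):
    return 1 if sum(w * f for w, f in zip(model, phi(c))) > 0 else 0

# Evaluate the clone on fresh random challenges.
test = [[random.randrange(2) for _ in range(N)] for _ in range(500)]
acc = sum(puf_response(c) == predict(c) for c in test) / len(test)
print(acc)
```

A run like this typically predicts unseen challenges with high accuracy, but as the abstract points out, such trial-and-error fits come with no a-priori guarantee on confidence, accuracy, or the number of CRPs needed; that gap is exactly what the PAC analysis closes.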