International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2015-05-01
12:17 [Pub][ePrint] Relaxing Full-Codebook Security: A Refined Analysis of Key-Length Extension Schemes, by Peter Gazi and Jooyoung Lee and Yannick Seurin and John Steinberger and Stefano Tessaro

  We revisit the security (as a pseudorandom permutation) of cascading-based constructions for block-cipher key-length extension. Previous works typically considered the extreme case where the adversary is given the entire codebook of the construction, the only complexity measure being the number $q_e$ of queries to the underlying ideal block cipher, representing the adversary's secret-key-independent computation. Here, we initiate a systematic study of the more natural case of an adversary restricted to adaptively learning a number $q_c$ of plaintext/ciphertext pairs that is less than the entire codebook. For any such $q_c$, we aim to determine the highest number of block-cipher queries $q_e$ the adversary can issue without being able to successfully distinguish the construction (under a secret key) from a random permutation.

More concretely, we show the following results for key-length extension schemes using a block cipher with $n$-bit blocks and $\kappa$-bit keys:

- Plain cascades of length $\ell = 2r+1$ are secure whenever $q_c q_e^r \ll 2^{r(\kappa+n)}$, $q_c \ll 2^\kappa$ and $q_e \ll 2^{2\kappa}$. The bound for $r = 1$ also applies to two-key triple encryption (as used within Triple DES).

- The $r$-round XOR-cascade is secure as long as $q_c q_e^r \ll 2^{r(\kappa+n)}$, matching an attack by Gazi (CRYPTO 2013).

- We fully characterize the security of Gazi and Tessaro's two-call 2XOR construction (EUROCRYPT 2012) for all values of $q_c$, and note that the addition of a third whitening step strictly increases security for $2^{n/4} \le q_c \le 2^{3n/4}$. We also propose a variant of this construction without re-keying that achieves comparable security levels.
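To make the shape of these constructions concrete, here is a toy sketch of an $r$-round XOR-cascade, alternating key whitening with block-cipher calls. The 8-bit stand-in "cipher" is purely illustrative (not a real block cipher, and not the paper's construction details):

```python
def toy_cipher(key, block):
    # Stand-in keyed permutation on 8-bit blocks (NOT a real cipher):
    # XOR, multiply by an odd constant, add -- each step is a bijection mod 256.
    return ((block ^ key) * 5 + key) % 256

def xor_cascade(cipher_keys, whitening_keys, block):
    """r-round XOR-cascade: whiten, encrypt, whiten, ..., encrypt, whiten.

    Takes r cipher keys and r+1 whitening keys, as in the construction
    analyzed above."""
    assert len(whitening_keys) == len(cipher_keys) + 1
    x = block ^ whitening_keys[0]
    for k, w in zip(cipher_keys, whitening_keys[1:]):
        x = toy_cipher(k, x) ^ w
    return x

# A 2-round instance, evaluated on all 256 blocks, remains a permutation.
ct = [xor_cascade([0x17, 0x2A], [0x01, 0x02, 0x03], b) for b in range(256)]
```

Since each whitening XOR and each keyed call is a bijection on 8-bit blocks, the whole cascade is one too, which is what the `ct` table demonstrates.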



12:17 [Pub][ePrint] Factoring RSA moduli with weak prime factors, by Abderrahmane Nitaj and Tajjeeddine Rachidi

  In this paper, we study the problem of factoring an RSA modulus $N=pq$ in polynomial time when $p$ is a weak prime, that is, $p$ can be expressed as $ap=u_0+M_1u_1+\ldots+M_ku_k$ for some $k$ integers $M_1,\ldots,M_k$ and $k+2$ suitably small parameters $a$, $u_0,\ldots,u_k$. We further compute a lower bound on the number of weak moduli, that is, moduli with at least one weak prime factor, in the interval $[2^{2n},2^{2(n+1)}]$, and show that this number is much larger than the number of RSA prime factors satisfying Coppersmith's conditions, effectively extending the class of RSA moduli that can be factored. We also extend our findings to moduli composed of two weak primes.
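As a toy numerical illustration of the definition (small hypothetical sizes, not the paper's parameters): a prime $p$ is weak when $ap$ has a sparse representation $u_0+M_1u_1+\ldots+M_ku_k$ with small $a$ and $u_i$. The search below finds such a prime for $k=1$, $a=1$, $M_1=2^{16}$:

```python
def is_prime(n):
    # Trial division; fine for the toy sizes used here.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Search for a "weak" prime p = u0 + M1*u1 with a = 1, k = 1 and small u0, u1,
# i.e. a prime whose base-M1 digits are tiny.
M1 = 2 ** 16
found = None
for u1 in range(2, 10):
    for u0 in range(1, 200):
        if is_prime(u0 + M1 * u1):
            found = (u0, u1, u0 + M1 * u1)
            break
    if found:
        break
u0, u1, p = found
```

Such a $p$ is indistinguishable from a random prime by size alone; only its sparse representation with respect to $M_1$ makes it weak in the paper's sense.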



12:17 [Pub][ePrint] New attacks on RSA with Moduli $N=p^rq$, by Abderrahmane Nitaj and Tajjeeddine Rachidi

  We present three attacks on the Prime Power RSA with modulus $N=p^rq$. In the first attack, we consider a public exponent $e$ satisfying an equation $ex-\phi(N)y=z$ where $\phi(N)=p^{r-1}(p-1)(q-1)$. We show that one can factor $N$ if the parameters $|x|$ and $|z|$ satisfy $|xz|

12:17 [Pub][ePrint] Expiration and Revocation of Keys for Attribute-based Signatures, by Stephen R. Tate and Roopa Vishwanathan

  Attribute-based signatures, introduced by Maji \emph{et al.}, are signatures that prove that an authority has issued the signer ``attributes'' that satisfy some specified predicate. In existing attribute-based signature schemes, keys are valid indefinitely once issued. In this paper, we initiate the study of incorporating time into attribute-based signatures, where a time instance is embedded in every signature, and attributes are restricted to producing signatures with times that fall in designated validity intervals. We provide three implementations that vary in the granularity of assigning validity intervals to attributes: a scheme in which each attribute has its own independent validity interval, a scheme in which all attributes share a common validity interval, and a scheme in which sets of attributes share validity intervals. All of our schemes provide anonymity to a signer, hide the attributes used to create the signature, and provide collusion-resistance between users.



12:17 [Pub][ePrint] Simple Chosen-Ciphertext Security from Low-Noise LPN, by Eike Kiltz and Daniel Masny and Krzysztof Pietrzak

  Recently, Döttling et al. (ASIACRYPT 2012) proposed the first chosen-ciphertext (IND-CCA) secure public-key encryption scheme from the learning parity with noise (LPN) assumption.

In this work we give an alternative scheme which is conceptually simpler and more efficient.

At the core of our construction is a trapdoor technique originally proposed for lattices by Micciancio and Peikert (EUROCRYPT 2012), which we adapt to the LPN setting. The main technical tool is a new double-trapdoor mechanism, together with a trapdoor switching lemma based on a computational variant of the leftover hash lemma.



12:17 [Pub][ePrint] Success through confidence: Evaluating the effectiveness of a side-channel attack, by Adrian Thillard and Emmanuel Prouff and Thomas Roche

  Side-channel attacks usually apply a divide-and-conquer strategy, separately recovering different parts of the secret. Their efficiency in practice relies on the adversary's ability to precisely assess the success or failure of each of these recoveries. This makes the study of the attack success rate a central problem in side-channel analysis. In this paper we tackle this issue in two different settings for the most popular attack, namely Correlation Power Analysis (CPA). In the first setting, we assume that the targeted subkey is known and we compare the state-of-the-art formulae expressing the success rate as a function of the leakage noise and the algebraic properties of the cryptographic primitive. We also make the link between these formulae and the recent work of Fei et al. at CHES 2012. In the second setting, the subkey is no longer assumed to be known and we introduce the notion of confidence level in an attack result, allowing for the study of different heuristics. Through experiments, we show that the rank evolution of a subkey hypothesis can be exploited to compute a better confidence level than considering only the final result.





2015-04-29
15:17 [Pub][ePrint] Method to Protect Passwords in Databases for Web Applications, by Scott Contini

  Trying to make it more difficult to hack passwords has a long history. However, the research community has not addressed the change of context from traditional Unix mainframe systems to web applications, which face new threats (DoS) and have fewer constraints (client-side computation is allowed). In the absence of updated guidance, a variety of solutions are scattered all over the web, ranging from amateur to somewhat professional. However, even the best references have issues such as incomplete details, misuse of terminology, assertion of requirements that are not adequately justified, and too many options presented to the developer, opening the door to potential mistakes. The purpose of this research note is to present a solution with complete details and a concise summary of the requirements, one that developers can readily implement with confidence, assuming that the solution is endorsed by the research community. The proposed solution involves client-side processing of a heavy computation in combination with a server-side hash computation. It follows a similar approach to a few other proposals on the web, but is more complete and better justified than any that we found.
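A minimal sketch of the client-heavy/server-light split the note describes, with PBKDF2 standing in for the client-side heavy computation (the iteration count, hash choices, and function names here are illustrative assumptions, not the note's recommendations):

```python
import hashlib
import hmac
import os

def client_stretch(password: str, salt: bytes) -> bytes:
    # Expensive key stretching runs on the client, so an attacker cannot
    # mount a computational DoS against the server via the login endpoint.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def server_record(stretched: bytes) -> bytes:
    # One cheap server-side hash: a stolen database entry is not the value
    # the client sends, so it cannot be replayed directly, and verification
    # stays inexpensive for the server.
    return hashlib.sha256(stretched).digest()

# Registration
salt = os.urandom(16)
stored = server_record(client_stretch("correct horse", salt))

# Login attempt: recompute and compare in constant time.
def verify(password: str) -> bool:
    return hmac.compare_digest(stored, server_record(client_stretch(password, salt)))
```

The design point is that the slow step happens once per login on the client, while the server's per-request cost is a single hash plus a constant-time comparison.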



15:17 [Pub][ePrint] Fully Succinct Garbled RAM, by Ran Canetti and Justin Holmgren

  We construct the first fully succinct garbling scheme for RAM programs, assuming the existence of indistinguishability obfuscation for circuits and one-way functions. That is, the size, space requirements, and runtime of the garbled program are the same as those of the input program, up to poly-logarithmic factors and a polynomial in the security parameter. The scheme can be used to construct indistinguishability obfuscators for RAM programs with comparable efficiency, at the price of requiring sub-exponential security of the underlying primitives.

The scheme builds on the recent schemes of Koppula-Lewko-Waters and Canetti-Holmgren-Jain-Vaikuntanathan [STOC 15]. A key technical challenge here is how to combine the fixed-prefix technique of KLW, which was developed for deterministic programs, with randomized Oblivious RAM techniques. To overcome that, we develop a method for arguing about the indistinguishability of two obfuscated randomized programs that use correlated randomness. Along the way, we also define and construct garbling schemes that offer only partial protection. These may be of independent interest.



15:17 [Pub][ePrint] Keccak, by Guido Bertoni and Joan Daemen and Michael Peeters and Gilles Van Assche

  In October 2012, the American National Institute of Standards and Technology (NIST) announced the selection of Keccak as the winner of the SHA-3 Cryptographic Hash Algorithm Competition [10,11]. This concluded an open competition that was remarkable both for its magnitude and the involvement of the cryptographic community. Public review is of paramount importance to increase the confidence in the new standard and to favor its quick adoption. The SHA-3 competition explicitly took this into account by giving open access to the candidate algorithms and everyone in the cryptographic community could try to break them, compare their performance, or simply give comments.



15:17 [Pub][ePrint] Dual System Encryption Framework in Prime-Order Groups, by Nuttapong Attrapadung

  We propose a new generic framework for achieving fully secure attribute based encryption (ABE) in prime-order bilinear groups. It is generic in the sense that it can be applied to ABE for arbitrary predicates. All previously available frameworks that are generic in this sense are given only in composite-order bilinear groups. These consist of the frameworks proposed by Wee (TCC'14) and Attrapadung (Eurocrypt'14). Both frameworks provide abstractions of dual-system encryption techniques introduced by Waters (Crypto'09). Our framework can be considered as a prime-order version of Attrapadung's framework and works in a similar manner: it relies on a main component called pair encodings, and it generically compiles any secure pair encoding scheme for a predicate in consideration to a fully secure ABE scheme for that predicate. One feature of our new compiler is that although the resulting ABE schemes will be newly defined in prime-order groups, we require essentially the same security notions of pair encodings as before. Besides the security of pair encodings, our framework assumes only the Matrix Diffie-Hellman assumption, introduced by Escala et al. (Crypto'13), which is a weak assumption that includes the Decisional Linear assumption as a special case.

As for its applications, we can plug in available pair encoding schemes and automatically obtain the first fully secure ABE realizations in prime-order groups for predicates for which only fully secure schemes in composite-order groups were known. These include ABE for regular languages, ABE for monotone span programs (and hence Boolean formulae) with short ciphertexts or keys, and completely unbounded ABE for monotone span programs.

As a side result, we establish the first generic implication from ABE for monotone span programs to ABE for branching programs. Consequently, we obtain fully-secure ABE for branching programs in some new variants, namely, unbounded, short-ciphertext, and short-key variants. Previous ABE schemes for branching programs are bounded and require linear-size ciphertexts and keys.



15:17 [Pub][ePrint] On the Communication Complexity of Secure Computation, by Deepesh Data and Manoj M. Prabhakaran and Vinod M. Prabhakaran

  Information-theoretically secure multi-party computation (MPC) is a central primitive of modern cryptography. However, relatively little is known about the communication complexity of this primitive. In this work, we develop powerful information-theoretic tools to prove lower bounds on the communication complexity of MPC. We restrict ourselves to a concrete setting involving three parties, in order to bring out the power of these tools without introducing too many complications. Our techniques include the use of a data processing inequality for \emph{residual information} (i.e., the gap between mutual information and Gács-Körner common information), a new \emph{information inequality} for three-party protocols, and the idea of \emph{distribution switching}, by which lower bounds computed under certain worst-case scenarios can be shown to apply in the general case.

Using these techniques, we obtain tight bounds on the communication complexity of MPC protocols for various interesting functions. In particular, we show concrete functions that have ``communication-ideal'' protocols, which achieve the minimum communication simultaneously on all links in the network. Also, we obtain the first \emph{explicit} example of a function that incurs a higher communication cost than its input length in the secure computation model of Feige, Kilian and Naor (STOC 1994), who had shown that such functions exist. We also show that our communication bounds imply tight lower bounds on the amount of randomness required by MPC protocols for many interesting functions.