International Association for Cryptologic Research

IACR News Central

Get an update on changes of the IACR web page here. For questions, contact newsletter (at).

To receive your credentials via mail again, please click here.

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

01:17 [Pub][ePrint] Infective Computation and Dummy Rounds: Fault Protection for Block Ciphers without Check-before-Output, by Benedikt Gierlichs and Jörn-Marc Schmidt and Michael Tunstall

  Implementation attacks pose a serious threat to the security of cryptographic devices, and there is a multitude of countermeasures used to prevent them. Two countermeasures used in implementations of block ciphers to increase the complexity of such attacks are dummy rounds and redundant computation with consistency checks to prevent fault attacks. In this paper we present several countermeasures based on the idea of infective computation. Our countermeasures ensure that a fault injected into a cipher, dummy, or redundant round will infect the ciphertext such that an attacker cannot derive any information on the secret key being used. This has one clear advantage: the propagation of faults prevents an attacker from conducting any fault analysis on corrupted ciphertexts. As a consequence, there is no need for a check at the end of the implementation to determine whether a fault has been injected, and a ciphertext can always be returned.
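The infective idea can be sketched in a few lines: compute the cipher twice and blend the (normally zero) difference between the two results into the output through an infection function, so a faulty branch randomizes the ciphertext instead of triggering an explicit check-before-output. The toy round function, key schedule, and infection map below are all hypothetical, chosen only to make the control flow concrete; they are not the paper's construction:

```python
import random

MASK = 0xFFFF
_rng = random.Random(1)  # fixed seed only so the demo is reproducible

def toy_round(s, k):
    """Hypothetical 16-bit round function (not a real cipher)."""
    s = (s + k) & MASK
    s = ((s << 5) | (s >> 11)) & MASK
    return s ^ 0x5A3C

def toy_encrypt(pt, keys, fault_round=None):
    """Run all rounds; optionally flip one state bit after `fault_round`."""
    s = pt
    for i, k in enumerate(keys):
        s = toy_round(s, k)
        if i == fault_round:
            s ^= 1  # simulated fault injection
    return s

def infect(delta):
    """Infection map: zero difference stays zero; any nonzero difference
    becomes an unpredictable nonzero mask."""
    return 0 if delta == 0 else _rng.randrange(1, MASK + 1)

def protected_encrypt(pt, keys, fault_branch=None, fault_round=None):
    """Redundant computation with infective output and no explicit check."""
    c1 = toy_encrypt(pt, keys, fault_round if fault_branch == 0 else None)
    c2 = toy_encrypt(pt, keys, fault_round if fault_branch == 1 else None)
    return c1 ^ infect(c1 ^ c2)  # equals c1 iff both branches agree

keys = [3, 1, 4, 1, 5]
good = toy_encrypt(7, keys)
assert protected_encrypt(7, keys) == good        # fault-free: correct output
assert protected_encrypt(7, keys, 1, 2) != good  # faulted: infected output
```

With no fault the branches agree, the infection mask is zero, and the correct ciphertext is returned; a fault in either branch yields an output unrelated to the fault's structure, so the faulty ciphertext itself is never released. A real implementation would use fresh randomness rather than a seeded PRNG.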

16:17 [Pub][ePrint] Lecture Notes in Secret Sharing, by Carles Padro

  These are basically the lecture notes for the short course "Applications of Combinatorics to Information-Theoretic Cryptography", Central European University, Budapest, May-June 2012. With the objective of eventually covering a full course on secret sharing, additional content will be added in subsequent versions of these lecture notes.

16:17 [Pub][ePrint] Minkowski sum based lattice construction for solving simultaneous modular equations and applications to RSA, by Yoshinori Aono

We investigate a lattice construction method for the Coppersmith technique for finding small solutions of a modular equation. We consider its variant for simultaneous equations and propose a method to construct a lattice by combining the lattices for solving the single equations. As applications, we consider (i) a new RSA cryptanalysis for multiple short secret exponents, (ii) its partial key exposure variant, and (iii) the hardness of recovering a certain amount of LSBs of the RSA secret exponent.

More precisely, our algorithm can factor an RSA modulus given $\\ell \\ge 2$ RSA public exponents with a common modulus whose corresponding secret exponents are smaller than $N^{(9\\ell -5)/(12\\ell + 4)}$, which improves on the previously best known bound $N^{(3\\ell-1)/(4\\ell+4)}$ by Sarkar and Maitra \\cite{SM10a,SM10b}.

In the partial key exposure situation, we can also factor the modulus if $\\beta - \\delta/2 + 1/4 < (3\\ell-1)/(3\\ell + 1)$, where $\\beta$ and $\\delta$ are the bit-lengths, divided by $n$, of the secret exponent and of its exposed LSBs, respectively. In particular, letting $\\beta=1$, which means that the secret exponent is full-sized, the necessary amount of exposed bits is $[5/2 - 2(3\\ell -1)/(3\\ell +1)]n$, which is less than $n$ for $\\ell \\ge 3$. Suppose we have an algorithm that recovers this amount of $d$ from $e$ and $N$ satisfying $e\\approx N$. We show that $N$ can then be factored in time polynomial in $\\log N$, under the heuristic assumption that the Coppersmith technique works. As $\\ell$ grows, the necessary amount approaches $0.5 n$ bits. Hence, we conclude that recovering the lower half of the LSBs of $d$ is polynomial-time equivalent to factoring under this heuristic assumption.

From the last result, we propose {\\it a half-amount conjecture}: roughly, factoring an RSA modulus is polynomial-time equivalent to recovering half of any contiguous run of bits of secret information such as $p$, $q$, $d$, $p+q$ and $p-q$ (or $d_p$ and $d_q$ for RSA-CRT). This is supported by several results; e.g., Coppersmith \\cite{Co96b} shows that recovering the upper half of $p$ is equivalent to factoring.
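The bounds quoted in this abstract are easy to sanity-check numerically. The short script below (a sketch; the function names are mine) evaluates the new exponent $(9\\ell-5)/(12\\ell+4)$, the Sarkar-Maitra exponent $(3\\ell-1)/(4\\ell+4)$, and the required fraction $5/2 - 2(3\\ell-1)/(3\\ell+1)$ of exposed bits for small $\\ell$:

```python
from fractions import Fraction

def new_exponent(l):
    """Exponent in the bound N^{(9l-5)/(12l+4)} from this paper."""
    return Fraction(9 * l - 5, 12 * l + 4)

def sm_exponent(l):
    """Previously best known exponent N^{(3l-1)/(4l+4)} (Sarkar-Maitra)."""
    return Fraction(3 * l - 1, 4 * l + 4)

def exposed_fraction(l):
    """Fraction of the n bits of d that must be exposed when beta = 1."""
    return Fraction(5, 2) - 2 * Fraction(3 * l - 1, 3 * l + 1)

for l in range(2, 7):
    print(l, float(new_exponent(l)), float(sm_exponent(l)),
          float(exposed_fraction(l)))

assert all(new_exponent(l) > sm_exponent(l) for l in range(2, 100))
assert exposed_fraction(2) > 1 > exposed_fraction(3)  # below n only for l >= 3
```

Algebraically, exposed_fraction(l) equals $1/2 + 4/(3\\ell+1)$, so it tends to $1/2$ as $\\ell$ grows, matching the claim that roughly the lower half of $d$ suffices.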

16:17 [Pub][ePrint] Mixed-integer Linear Programming in the Analysis of Trivium and Ktantan, by Julia Borghoff

  In this paper we present a new approach that applies mixed-integer optimization to the cryptanalysis of cryptographic primitives. We focus on the stream cipher Trivium, which has been recommended by the eSTREAM stream cipher project, and on the lightweight block cipher KTANTAN. Using these examples we explain how the problem of solving a non-linear multivariate Boolean equation system can be formulated as a mixed-integer linear programming problem. Our main focus is the formulation of the mixed-integer programming (MIP) model, which includes, amongst others, the choice of a method to convert the Boolean equations into equations over the reals, different guessing strategies, and the selection of binary variables. We apply the commercial solver CPLEX to our problems. The results and further possible features of the approach are discussed.
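To make the conversion step concrete, here is the textbook linearization of single Boolean gates over 0/1 integer variables (a generic illustration, not necessarily the conversion method chosen in the paper): $z = x \wedge y$ becomes $z \le x$, $z \le y$, $z \ge x+y-1$, and $z = x \oplus y$ becomes four inequalities. The brute-force check below confirms that the 0/1 points satisfying the inequalities are exactly the truth tables:

```python
from itertools import product

def and_ok(x, y, z):
    """Standard MIP linearization of z = x AND y over binary variables."""
    return z <= x and z <= y and z >= x + y - 1

def xor_ok(x, y, z):
    """Standard MIP linearization of z = x XOR y over binary variables."""
    return z <= x + y and z >= x - y and z >= y - x and z <= 2 - x - y

# the feasible 0/1 points of each constraint set are exactly the truth table
for x, y, z in product((0, 1), repeat=3):
    assert and_ok(x, y, z) == (z == (x & y))
    assert xor_ok(x, y, z) == (z == (x ^ y))
```

Chaining such constraints over the cipher's state variables turns the whole Boolean equation system into one MIP instance; a solver such as CPLEX then searches the binary variables for a satisfying (key) assignment.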

16:17 [Pub][ePrint] What is the Effective Key Length for a Block Cipher: an Attack on Every Block Cipher, by Jialin Huang and Xuejia Lai

  Recently, several important block ciphers have been considered broken by brute-force-like cryptanalysis, with a time complexity faster than exhaustive key search: the attacks go over the entire key space but perform less than a full encryption for each possible key. Motivated by this observation, we describe a meet-in-the-middle attack that can always be successfully mounted against any practical block cipher with success probability one. The data complexity of this attack is the smallest possible according to the unicity distance. The time complexity can be written as $2^k(1-\\epsilon)$, where $\\epsilon > 0$ for all block ciphers. Previously, the commonly accepted security bound was the length $k$ of the given master key. Our result points out that this $k$-bit security is always overestimated and can never be reached, due to an inevitable loss of key bits. No amount of clever design can prevent it, but increasing the number of rounds can reduce this key loss as much as possible. We give more insight into the problem of the upper bound on the effective key bits of block ciphers, and show a more accurate bound. We also give a suggestion about the relation between the key size and the block size: when the number of rounds is fixed, it is better to take a key size equal to the block size. Moreover, the effective key bits of many well-known block ciphers are calculated and analyzed, which also confirms that their security margin is lower than previously thought.
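The paper's attack is a dedicated meet-in-the-middle construction. As a much simpler illustration of how the average cost per key can fall below one full encryption even though every key is tried, the toy search below (the 4-round, 16-bit-key toy cipher and all names are mine, not the paper's) enumerates keys so that candidates sharing the same first round keys reuse the shared intermediate state:

```python
def toy_round(s, k):
    """Toy 8-bit round keyed by a 4-bit round key (illustrative only)."""
    return ((s ^ k) * 5 + 1) & 0xFF

def toy_encrypt(pt, key_nibbles):
    s = pt
    for k in key_nibbles:
        s = toy_round(s, k)
    return s

def search(pt, ct):
    """Try all 16**4 keys, reusing the state after round i for every key
    that shares the same first i round keys."""
    calls = 0
    found = []
    stack = [(pt, ())]
    while stack:
        s, ks = stack.pop()
        if len(ks) == 4:
            if s == ct:
                found.append(ks)
            continue
        for k in range(16):
            calls += 1
            stack.append((toy_round(s, k), ks + (k,)))
    return found, calls

key = (1, 2, 3, 4)
ct = toy_encrypt(0xAB, key)
found, calls = search(0xAB, ct)
assert key in found
assert calls == 16 + 16**2 + 16**3 + 16**4  # 69904 << 4 * 16**4 = 262144
```

The search spends 69904 round-function calls on 65536 candidate keys, about 1.07 of the 4 rounds per key, so the amortized cost of "trying a key" is well under one full encryption.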

10:17 [Pub][ePrint] Robust Encryption, Revisited, by Pooya Farshim and Benoît Libert and Kenneth G. Paterson and Elizabeth A. Quaglia

  We revisit the notions of robustness introduced by Abdalla, Bellare, and Neven (TCC 2010). One of the main motivations for the introduction of strong robustness for public-key encryption (PKE) by Abdalla et al. was to prevent certain types of attack on Sako's auction protocol. We show, perhaps surprisingly, that Sako's protocol is still vulnerable to attacks exploiting robustness problems in the underlying PKE scheme, even when it is instantiated with a \\emph{strongly} robust scheme. This demonstrates that current notions of robustness are insufficient even for one of their most natural applications. To address this and other limitations of existing notions, we introduce a series of new robustness notions for PKE and explore their relationships. In particular, we introduce \\emph{complete} robustness, our strongest new notion, and give a number of constructions of completely robust PKE schemes.

08:12 [Event][New] TAEECE2013: Intl. Conf. on Technological Advances in Electrical, Electronics & Computer Eng.

  Submission: 20 March 2013
Notification: 20 April 2013
From May 9 to May 10
Location: Konya, Turkey
More Information:

19:17 [Pub][ePrint] Virtual isomorphisms of ciphers: is AES secure against differential / linear attack?, by Alexander Rostovtsev

  In [], a method of virtual isomorphisms of ciphers was proposed for cryptanalysis: a cipher is vulnerable to an attack iff an isomorphic cipher is vulnerable to that attack. The method is based on conjugation, and it is not practical because all round operations except one become nonlinear. A new isomorphism of AES is proposed; its image IAES has only one nonlinear operation, IXOR, the isomorphic image of the XOR of 5 bytes. The maximal probabilities of byte differentials increase by a factor of about 10-11, and the maximal biases of linear sums by a factor of about 3.6, compared to the original AES. IAES possesses a computable family of differentials of IXOR with two active input bytes, zero output difference, and probability 1. The zero output difference decreases the rate of multiplication of active nonlinearities in a differential characteristic of IAES.
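The conjugation underlying such isomorphisms is easy to demonstrate on a single S-box: for a bijection $\varphi$, the image $S' = \varphi \circ S \circ \varphi^{-1}$ computes "the same" map in transformed coordinates, yet its difference distribution table can look quite different. The sketch below uses PRESENT's 4-bit S-box purely as a small concrete example (AES's 8-bit S-box works the same way, just with 256 values) and compares differential uniformity before and after conjugation by a random bijection:

```python
import random

# PRESENT's 4-bit S-box, used only as a small concrete example
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt_max(sbox):
    """Largest differential count over nonzero input differences."""
    best = 0
    for a in range(1, 16):
        counts = [0] * 16
        for x in range(16):
            counts[sbox[x] ^ sbox[x ^ a]] += 1
        best = max(best, max(counts))
    return best

def conjugate(sbox, phi):
    """Image of the S-box under phi: x -> phi(S(phi^{-1}(x)))."""
    inv = [0] * 16
    for x, y in enumerate(phi):
        inv[y] = x
    return [phi[sbox[inv[x]]] for x in range(16)]

assert conjugate(SBOX, list(range(16))) == SBOX  # identity changes nothing
assert ddt_max(SBOX) == 4                        # PRESENT S-box is 4-uniform

phi = list(range(16))
random.Random(0).shuffle(phi)
print(ddt_max(conjugate(SBOX, phi)))  # uniformity in the new coordinates
```

Conjugation by an affine $\varphi$ preserves the differential profile, but a nonlinear $\varphi$ reshuffles it; the paper exploits this in the other direction, seeking an image of AES whose profile is more amenable to attack.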

19:17 [Pub][ePrint] PRE: Stronger Security Notion and Efficient Construction with New Property, by Jiang Zhang and Zhenfeng Zhang and Yu Chen

  In a proxy re-encryption (PRE) scheme, a proxy is given a re-encryption key and has the ability to translate a ciphertext under one key into a ciphertext of the same message under a different key, without learning anything about the message encrypted under either key. PREs have been widely used in many exciting applications, such as email forwarding and law enforcement. Observing the applications of PREs, we find that a PRE receiver needs the ability, analogous to that provided by public-key encryption with non-interactive opening, to non-interactively and efficiently convince third parties of what he obtains from a particular (transformed) ciphertext, while still preserving the security of his secret key and his other ciphertexts.

To meet this practical requirement, we first introduce proxy re-encryption with non-interactive opening (PRENO), and formally define the notions of security against \\textit{chosen ciphertext attacks} (CCA) and of \\textit{proof soundness}. Our security model is natural and strong, since we allow the CCA adversary to adaptively choose public keys for malicious users (i.e., a chosen-key model); a scheme secure in previous models (i.e., knowledge-of-secret-key models) is not necessarily secure in ours. We then present an efficient PRENO scheme that satisfies our security notions under the decisional bilinear Diffie-Hellman (DBDH) assumption in the standard model. Compared with two previous PRE schemes, our scheme is competitive in several respects. First, its CCA security is proved in a strong security model under a well-studied assumption in the standard model. Second, it has good overall performance in terms of ciphertext length and computational cost. Third, it is the first to provide non-interactive opening for PRE schemes.

19:17 [Pub][ePrint] Uniform Compression Functions Can Fail to Preserve "Full" Entropy, by Daniel R. L. Brown

  In a draft NIST standard, a distribution is defined to have "full" entropy if its min-entropy is very close, proportionally, to the min-entropy of a uniform distribution. A function is uniform if all its preimages have the same size. This report proves that the output of any uniform compression function can fail to have full entropy, even when the input has full entropy.
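A small numeric sketch (parameters and function names are mine) illustrates the proportionality issue behind this claim: take the uniform compression function $f(x) = x \bmod 2^m$ and an input distribution that slightly over-weights the preimages of one output. The absolute min-entropy deficit is then identical at input and output, so the proportional deficit is $n/m$ times larger at the output:

```python
from math import log2

def min_entropy(probs):
    """Min-entropy of a discrete distribution: -log2 of its largest mass."""
    return -log2(max(probs))

n, m, c = 16, 4, 2.0  # input/output bit lengths; boost factor on one fiber
N, M = 2 ** n, 2 ** m
pre = N // M          # f(x) = x mod 2^m: every output has N/M preimages

# weight the preimages of output 0 by c, everything else by 1, normalize
Z = c * pre + (N - pre)
p_in = [c / Z] * pre + [1 / Z] * (N - pre)
p_out = [c * pre / Z] + [pre / Z] * (M - 1)

in_deficit = n - min_entropy(p_in)    # absolute min-entropy loss at the input
out_deficit = m - min_entropy(p_out)  # identical absolute loss at the output
print(in_deficit / n, out_deficit / m)

assert abs(in_deficit - out_deficit) < 1e-9
assert out_deficit / m > in_deficit / n  # proportionally worse at the output
```

With these parameters the input is within about 6% of full min-entropy while the output is off by more than 20%, even though $f$ is perfectly uniform.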

19:17 [Pub][ePrint] Estimating Lower/Upper Bounds of $\\phi(n)$ in the RSA Cryptosystem, by Rock C. Liu and Zhiwi Yeh

  The RSA-768 (270 decimal digits) was factored by Kleinjung et al. on December 12 2009, and the RSA-704 (212 decimal digits) was factored by Bai et al. on July 2, 2012. And the RSA-200 (663 bits) was factored by Bahr et al. on May 9, 2005. Until right now, there is no body successful to break the RSA-210 (696 bits) currently. In this paper, we would discuss an estimation method to approach lower/upper bound of $\\phi(n)$ in the RSA parameters. Our contribution may help researchers lock the $\\phi(n)$ and the challenge RSA shortly.