International Association for Cryptologic Research

# IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources for staying informed about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2012-11-30
16:17 [Pub][ePrint]

We investigate a lattice construction method for the Coppersmith technique for finding small solutions of a modular equation. We consider its variant for simultaneous equations and propose a method to construct a lattice by combining lattices for solving the single equations. As applications, we consider (i) a new RSA cryptanalysis for multiple short secret exponents, (ii) its partial key exposure situation, and (iii) the hardness of finding a certain amount of LSBs of the RSA secret exponent. More precisely, our algorithm can factor an RSA modulus from $\\ell \\ge 2$ pairs of RSA public exponents with a common modulus, corresponding to secret exponents smaller than $N^{(9\\ell -5)/(12\\ell + 4)}$, which improves on the previously best known bound $N^{(3\\ell-1)/(4\\ell+4)}$ by Sarkar and Maitra \\cite{SM10a,SM10b}. In the partial key exposure situation, we can also factor the modulus if $\\beta - \\delta/2 + 1/4 < (3\\ell-1)/(3\\ell + 1)$, where $\\beta$ and $\\delta$ are the bit-lengths, divided by $n$, of the secret exponent and of its exposed LSBs, respectively. In particular, letting $\\beta=1$, which means that the secret exponent is full-sized, the necessary amount of exposed bits is $[5/2 - 2(3\\ell -1)/(3\\ell +1)]n$, which is less than $n$ for $\\ell \\ge 3$. Suppose we have an algorithm that recovers this amount of $d$ from $e$ and $N$ satisfying $e\\approx N$. We show that $N$ can then be factored in time polynomial in $\\log N$, under the heuristic assumption that the Coppersmith technique works. As $\\ell$ becomes large, the necessary amount approaches $0.5 n$ bits. Hence, we conclude that recovering the lower half of the LSBs of $d$ is polynomial-time equivalent to factoring under this heuristic assumption.

From the last result, we propose {\\it a half-amount conjecture}: roughly, factoring an RSA modulus is polynomial-time equivalent to recovering any contiguous half of the bits of secret information such as $p$, $q$, $d$, $p+q$, and $p-q$ (or $d_p$ and $d_q$ for RSA-CRT). It is supported by several results; e.g., Coppersmith \\cite{Co96b} shows that recovering the upper half of $p$ is equivalent to factoring.
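The bound improvement and the exposed-bit requirement above are easy to tabulate. The following sketch (exact rational arithmetic; the function names are ours, chosen for illustration) compares the paper's exponent bound $(9\\ell-5)/(12\\ell+4)$ with the Sarkar-Maitra bound $(3\\ell-1)/(4\\ell+4)$, and evaluates the exposed-LSB fraction $5/2 - 2(3\\ell-1)/(3\\ell+1)$ for several $\\ell$:

```python
from fractions import Fraction

def new_bound(l):
    # This paper's exponent bound: d_i < N^((9l - 5)/(12l + 4))
    return Fraction(9 * l - 5, 12 * l + 4)

def sarkar_maitra_bound(l):
    # Previous best bound: d_i < N^((3l - 1)/(4l + 4))
    return Fraction(3 * l - 1, 4 * l + 4)

def exposed_fraction(l):
    # Required fraction of exposed LSBs when beta = 1: 5/2 - 2(3l - 1)/(3l + 1)
    return Fraction(5, 2) - 2 * Fraction(3 * l - 1, 3 * l + 1)

for l in (2, 3, 5, 10, 100):
    print(l, float(new_bound(l)), float(sarkar_maitra_bound(l)),
          float(exposed_fraction(l)))
```

Cross-multiplying shows the new exponent exceeds the old one by a factor controlled by $16\\ell - 16 > 0$ for every $\\ell \\ge 2$, both tending to $3/4$, while the exposed fraction drops below $1$ at $\\ell = 3$ and tends to $1/2$, matching the abstract's claims.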

16:17 [Pub][ePrint]

In this paper we present a rather new approach to applying mixed-integer optimization to the cryptanalysis of cryptographic primitives. We focus on the stream cipher Trivium, which has been recommended by the eSTREAM stream cipher project, and on the lightweight block cipher KTANTAN. Using these examples we explain how the problem of solving a non-linear multivariate Boolean equation system can be formulated as a mixed-integer linear programming problem. Our main focus is the formulation of the mixed-integer programming model (MIP model), which includes, among other things, the choice of a conversion method to convert the Boolean equations into equations over the reals, different guessing strategies, and the selection of binary variables. We apply the commercial solver CPLEX to our problems. The results and further possible features of the approach are discussed.
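To make the conversion step concrete, here is a minimal sketch (ours, not taken from the paper) of the standard way Boolean AND and XOR constraints are linearized over 0/1 variables for a MIP model; a brute-force check confirms that the inequalities admit exactly the correct output bit:

```python
from itertools import product

def and_ok(x, y, z):
    # MIP linearization of z = x AND y over 0/1 variables:
    #   z <= x,  z <= y,  z >= x + y - 1
    return z <= x and z <= y and z >= x + y - 1

def xor_ok(x, y, z):
    # MIP linearization of z = x XOR y:
    #   z <= x + y,  z >= x - y,  z >= y - x,  z <= 2 - x - y
    return z <= x + y and z >= x - y and z >= y - x and z <= 2 - x - y

# Brute-force check over all 0/1 assignments.
for x, y in product((0, 1), repeat=2):
    assert [z for z in (0, 1) if and_ok(x, y, z)] == [x & y]
    assert [z for z in (0, 1) if xor_ok(x, y, z)] == [x ^ y]
print("AND/XOR linearizations verified on all 0/1 inputs")
```

A solver such as CPLEX would receive these inequalities (plus integrality constraints) directly; chaining them turns any Boolean equation system into a MIP instance.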

16:17 [Pub][ePrint]

Recently, several important block ciphers have been considered broken by brute-force-like cryptanalysis, with a time complexity faster than exhaustive key search: the attack goes over the entire key space but performs less than a full encryption for each possible key. Motivated by this observation, we describe a meet-in-the-middle attack that can always be successfully mounted against any practical block cipher, with success probability one. The data complexity of this attack is the smallest possible according to the unicity distance. The time complexity can be written as $2^k(1-\\epsilon)$, where $\\epsilon > 0$ for all block ciphers. Previously, the commonly accepted security bound was the length $k$ of the given master key. From our result we point out that this $k$-bit security is always overestimated and can never be reached, due to an inevitable loss of key bits. No amount of clever design can prevent it, but increasing the number of rounds can reduce this key loss as much as possible. We give more insight into the problem of the upper bound on the effective key bits of block ciphers, and show a more accurate bound. We also suggest a relation between the key size and the block size: when the number of rounds is fixed, it is better to take a key size equal to the block size. Moreover, the effective key bits of many well-known block ciphers are calculated and analyzed, which also confirms that their security margin is lower than previously thought.
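As a toy illustration of the meet-in-the-middle trade-off (ours, not the paper's attack), the sketch below splits a double encryption with two independent 8-bit keys and recovers the key pair with about 2·2^8 half-encryptions plus a few checks, instead of 2^16 full trial encryptions; the tiny cipher and all names are invented for the example:

```python
# Toy double encryption E_k2(E_k1(x)) on 8-bit blocks with two independent
# 8-bit keys; everything here is invented for the illustration.
def enc1(k, x):
    return ((x ^ k) * 167 + 13) % 256      # odd multiplier -> bijection mod 256

def dec1(k, y):
    return (((y - 13) * 23) % 256) ^ k     # 23 = 167^{-1} mod 256

def double_enc(k1, k2, x):
    return enc1(k2, enc1(k1, x))

secret = (0x3a, 0xc5)
pairs = [(x, double_enc(*secret, x)) for x in (0, 1, 2)]  # known plaintexts

# Meet in the middle: tabulate the forward half, then match the backward half.
p0, c0 = pairs[0]
forward = {}
for g1 in range(256):
    forward.setdefault(enc1(g1, p0), []).append(g1)

candidates = []
for g2 in range(256):
    for g1 in forward.get(dec1(g2, c0), []):
        # The extra pairs filter false positives (the unicity-distance idea).
        if all(double_enc(g1, g2, p) == c for p, c in pairs[1:]):
            candidates.append((g1, g2))

print(candidates)
```

The abstract's point is different in scope (a generic loss below $k$-bit security for any cipher), but the table-then-match structure shown here is the core of every meet-in-the-middle attack.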

2012-11-29
10:17 [Pub][ePrint]

We revisit the notions of robustness introduced by Abdalla, Bellare, and Neven (TCC 2010). One of the main motivations for the introduction of strong robustness for public-key encryption (PKE) by Abdalla et al. was to prevent certain types of attack on Sako\'s auction protocol. We show, perhaps surprisingly, that Sako\'s protocol is still vulnerable to attacks exploiting robustness problems in the underlying PKE scheme, even when it is instantiated with a \\emph{strongly} robust scheme. This demonstrates that current notions of robustness are insufficient even for one of their most natural applications. To address this and other limitations in existing notions, we introduce a series of new robustness notions for PKE and explore their relationships. In particular, we introduce \\emph{complete} robustness, our strongest new notion, and give a number of constructions of completely robust PKE schemes.
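A classic example of the underlying problem (our illustration, not the paper's attack) is textbook ElGamal: a ciphertext intended for one receiver also "decrypts" without any failure signal under a different secret key, which is exactly the ambiguity robustness notions are meant to rule out. Toy deterministic parameters, far too small to be secure:

```python
# Textbook ElGamal over a tiny group; values fixed so the run is deterministic.
p, g = 467, 2
sk_a, sk_b = 3, 5                 # two receivers' secret keys
pk_a = pow(g, sk_a, p)

def encrypt(pk, m, r):
    # Ciphertext (g^r, m * pk^r) mod p.
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def decrypt(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - sk, p)) % p   # c2 * c1^{-sk} via Fermat

ct = encrypt(pk_a, 42, r=7)
m_a = decrypt(sk_a, ct)   # the intended receiver recovers 42
m_b = decrypt(sk_b, ct)   # the wrong key also "succeeds", yielding some message
print(m_a, m_b)
```

Nothing in the decryption procedure distinguishes the intended receiver, so a third party cannot tell which opening is genuine; robust PKE adds exactly that guarantee.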

08:12 [Event][New]

Submission: 20 March 2013
Notification: 20 April 2013
From May 9 to May 10
Location: Konya, Turkey
More Information: http://sdiwc.net/conferences/2013/taeece2013/index.php

2012-11-28
19:17 [Pub][ePrint]

In [eprint.iacr.org/2009/117], the method of virtual isomorphisms of ciphers was proposed for cryptanalysis. A cipher is vulnerable to an attack iff an isomorphic cipher is vulnerable to this attack. That method is based on conjugation, and it is not practical because all round operations except one become nonlinear. A new isomorphism of AES is proposed; its image IAES has only one nonlinear operation, IXOR, the isomorphic image of the XOR of 5 bytes. The maximal probabilities of byte differentials are increased about 10-11 times, and the maximal biases of linear sums are increased about 3.6 times, compared to the original AES. IAES possesses a computable family of differentials of IXOR with two active input bytes, zero output difference, and probability 1. The zero output difference decreases the rate of multiplication of active nonlinearities in a differential characteristic of IAES.
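The quantity being compared here, the maximal probability of a byte differential, is read off a difference distribution table (DDT). As a self-contained illustration (using the 4-bit PRESENT S-box rather than AES's 8-bit S-box, to keep the table small), it can be computed by brute force:

```python
# Difference distribution table (DDT) of the 4-bit PRESENT S-box.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            # Input difference dx maps x, x^dx to an output difference.
            table[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return table

t = ddt(SBOX)
# Maximal probability of a nontrivial differential (dx != 0):
max_prob = max(t[dx][dy] for dx in range(1, 16) for dy in range(16)) / 16
print(max_prob)
```

For the AES S-box the same computation gives $4/256$; the abstract's claim is that the isomorphic image IAES admits differentials with roughly 10-11 times larger probability.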

19:17 [Pub][ePrint]

In a proxy re-encryption (PRE) scheme, a proxy is given a re-encryption key and can translate a ciphertext under one key into a ciphertext of the same message under a different key, without learning anything about the message encrypted under either key. PREs have been widely used in many applications, such as email forwarding and law enforcement. Looking closely at these applications, we find that a PRE receiver needs the ability, analogous to that provided by public-key encryption with non-interactive opening, to non-interactively and efficiently convince third parties of what he obtains from a particular (transformed) ciphertext, while preserving the security of his secret key and of other ciphertexts.

To meet this practical requirement, we first introduce proxy re-encryption with non-interactive opening (PRENO), and formally define the notions of security against \\textit{chosen ciphertext attacks} (CCA) and \\textit{proof soundness}. Our security model is natural and strong, since we allow the CCA adversary to adaptively choose public keys for malicious users (i.e., a chosen-key model); a scheme secure in previous models (i.e., knowledge-of-secret-key models) is not necessarily secure in our model. Then, we present an efficient PRENO scheme that satisfies our security notions under the decisional bilinear Diffie-Hellman (DBDH) assumption in the standard model. Compared with two previous PRE schemes, ours is competitive in several respects. First, its CCA security is proved in a strong security model under a well-studied assumption in the standard model. Second, it has good overall performance in terms of ciphertext length and computational cost. Third, it is the first to provide non-interactive opening for PRE schemes.
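For readers new to PRE, the translation step can be sketched with the classic BBS98-style ElGamal construction of Blaze, Bleumer, and Strauss, which is not the paper's DBDH-based scheme: the proxy raises one ciphertext component to the re-encryption key b/a and never sees the plaintext. Toy parameters, far too small to be secure:

```python
# BBS98-style ElGamal proxy re-encryption sketch over a toy group;
# this is NOT the DBDH-based PRENO scheme of the paper.
p, q = 467, 233            # safe prime p = 2q + 1
g = 4                      # 4 = 2^2 is a quadratic residue mod p, so ord(g) = q

a, b = 11, 29              # Alice's and Bob's secret keys (exponents mod q)
rk = (b * pow(a, -1, q)) % q   # re-encryption key b/a mod q, given to the proxy

m, r = 123, 57
# Ciphertext for Alice: (m * g^r, g^(a*r))
ct_a = ((m * pow(g, r, p)) % p, pow(g, a * r, p))

# The proxy re-encrypts for Bob without ever seeing m:
ct_b = (ct_a[0], pow(ct_a[1], rk, p))

# Bob recovers g^r from the second component and strips it off.
gr = pow(ct_b[1], pow(b, -1, q), p)
m_b = (ct_b[0] * pow(gr, -1, p)) % p
print(m_b)
```

The opening problem the paper addresses is visible even here: after decryption, Bob has no efficient way to convince a third party that this ciphertext really opened to m without revealing his secret key, which is what PRENO's non-interactive opening supplies.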

19:17 [Pub][ePrint]

To have \"full\" entropy has been defined in a draft NIST standard to be to have min-entropy very close, proportionally, to the min-entropy of a uniform distribution. A function is uniform if all its preimages have the same size. This report proves that the output of any uniform compression function can fail to have full entropy, even when the input has full entropy.

19:17 [Pub][ePrint]

RSA-768 (232 decimal digits) was factored by Kleinjung et al. on December 12, 2009, and RSA-704 (212 decimal digits) was factored by Bai et al. on July 2, 2012. RSA-200 (663 bits) was factored by Bahr et al. on May 9, 2005. To date, nobody has succeeded in breaking RSA-210 (696 bits). In this paper, we discuss an estimation method to approach the lower and upper bounds of $\\phi(n)$ for the RSA parameters. Our contribution may help researchers pin down $\\phi(n)$ and tackle the RSA challenge shortly.
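The paper's estimation method is not reproduced here, but the textbook bounds such a method refines are easy to state: for a modulus n = pq with balanced primes p < q < 2p, AM-GM gives p + q >= 2*sqrt(n), and q < sqrt(2n) gives p + q < 3*sqrt(n/2), so n + 1 - 3*sqrt(n/2) < phi(n) <= n + 1 - 2*sqrt(n). A quick check on a toy modulus:

```python
from math import isqrt, sqrt

p_, q_ = 1009, 1013              # toy balanced primes, p < q < 2p
n = p_ * q_
phi = (p_ - 1) * (q_ - 1)        # phi(n) = n + 1 - (p + q)

upper = n + 1 - 2 * isqrt(n)     # from p + q >= 2*sqrt(n)   (AM-GM)
lower = n + 1 - 3 * sqrt(n / 2)  # from p + q <  3*sqrt(n/2) when q < 2p
print(lower, phi, upper)
```

The interval shrinks relative to n as the primes get more balanced, which is why narrowing it further is the interesting part.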

19:17 [Pub][ePrint]

Forensic watermarking is the application of digital watermarks for the purpose of tracing unauthorized redistribution of content. The most powerful type of attack on watermarks is the collusion attack, in which multiple users compare their differently watermarked versions of the same content. Collusion-resistant codes have been developed against these attacks. One of the most famous such codes is the Tardos code. It has the asymptotically optimal property that it can resist c attackers with a code of length proportional to c^2.

Determining error rates for the Tardos code and its various extensions and generalizations turns out to be a nontrivial problem. In recent work we developed an approach called the Convolution and Series Expansion (CSE) method to accurately compute false positive accusation probabilities. In this paper we extend the CSE method in order to make it possible to compute false negative accusation probabilities as well.
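For context, a simplified Tardos code (our sketch: biases drawn from an untruncated arcsine distribution, the symmetric accusation score, and a majority-vote collusion) shows the score distributions whose tails the CSE method computes; with a length of roughly 100·c²·ln(1/ε) positions, colluders' scores separate clearly from innocents':

```python
import random
from math import sin, log, sqrt, pi

random.seed(7)
c, n_users = 3, 20
m = int(100 * c * c * log(100))        # length ~ 100 c^2 ln(1/eps), eps = 0.01

# Per-position biases from a (simplified, untruncated) arcsine distribution.
bias = [sin(random.uniform(0.01, pi / 2 - 0.01)) ** 2 for _ in range(m)]
X = [[1 if random.random() < bias[i] else 0 for i in range(m)]
     for _ in range(n_users)]

colluders = [0, 1, 2]
# Majority-vote collusion attack on the colluders' copies.
y = [1 if 2 * sum(X[j][i] for j in colluders) > c else 0 for i in range(m)]

def score(j):
    # Tardos accusation score, summed over positions where the pirate bit is 1.
    s = 0.0
    for i in range(m):
        if y[i]:
            p = bias[i]
            s += sqrt((1 - p) / p) if X[j][i] else -sqrt(p / (1 - p))
    return s

scores = [score(j) for j in range(n_users)]
print(sorted(range(n_users), key=lambda j: -scores[j])[:c])
```

A false positive is an innocent score crossing the accusation threshold and a false negative is a colluder score staying below it; the CSE method computes both tail probabilities analytically rather than by such simulation.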

19:17 [Pub][ePrint]

In this paper, we study differential attacks against ARX schemes. We build upon the generalized characteristics of De Cannière and Rechberger and the multi-bit constraints of Leurent. We describe a more efficient way to propagate multi-bit constraints that allows us to use the complete set of 2^32 2.5-bit constraints, instead of the reduced sets used by Leurent.

As a result, we are able to build complex non-linear differential characteristics for reduced versions of the hash function Skein. We present several characteristics for use in various attack scenarios; this results in attacks with a relatively low complexity in relatively strong settings. In particular, we show practical free-start and semi-free-start collision attacks for 20 rounds and 12 rounds of Skein-256, respectively.

To the best of our knowledge, these are the first examples of complex differential trails built for pure ARX designs. We believe this is important work for assessing the security of ARX designs against differential cryptanalysis. Our improved tools will be made publicly available with the final version of this paper.
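To ground the notion of differential behavior in ARX, here is a brute-force computation (our illustration, unrelated to the 2.5-bit constraint machinery) of the XOR differential probability of 8-bit modular addition, the only nonlinear operation in an ARX design:

```python
N = 256   # 8-bit words

def xdp_add(da, db, dc):
    # Fraction of pairs (a, b) for which flipping the inputs by (da, db)
    # flips the modular sum by dc (all differences taken as XOR).
    hits = sum(1 for a in range(N) for b in range(N)
               if ((a + b) % N) ^ (((a ^ da) + (b ^ db)) % N) == dc)
    return hits / (N * N)

print(xdp_add(0, 0, 0))        # trivial differential
print(xdp_add(0x80, 0x80, 0))  # MSB differences cancel modulo 2^8
print(xdp_add(1, 0, 1))        # LSB difference avoids a carry half the time
```

Constraint-based trail search, such as the multi-bit constraints extended in this paper, replaces this exponential enumeration with local conditions on neighboring bits, which is what makes searching complex Skein characteristics feasible.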