International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2013-03-05
13:17 [Pub][ePrint]

In 2011, Lindell proposed an efficient commitment scheme, with a non-interactive opening algorithm, in the Universal Composability (UC) framework. He recently acknowledged a bug in its security analysis for the adaptive case. We analyze the proof of the original paper and propose a simple patch to the scheme.

More interestingly, we then modify it and present a more efficient commitment scheme secure in the UC framework, with the same level of security as Lindell's protocol: adaptive corruptions, with erasures. The security is proven in the standard model (with a Common Reference String) under the classical Decisional Diffie-Hellman assumption. Our proposal is the most efficient UC-secure commitment proposed to date (in terms of computational workload and communication complexity).
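
For readers new to the primitive, a minimal sketch of a classical Pedersen commitment over a toy Schnorr group may help fix ideas: it illustrates a commitment with a non-interactive opening, but it is not the paper's UC-secure construction, and the tiny parameters are assumptions for demonstration only.

```python
# Minimal sketch of a classical Pedersen commitment over a toy Schnorr
# group, illustrating a commitment with a non-interactive opening; this
# is NOT the paper's UC-secure construction, and the tiny parameters
# are assumptions for demonstration only.
import secrets

p, q = 23, 11      # toy safe-prime group: p = 2q + 1
g, h = 4, 9        # independent generators of the order-q subgroup

def commit(m: int) -> tuple[int, int]:
    """Commit to m; returns (commitment, opening randomness r)."""
    r = secrets.randbelow(q)
    return pow(g, m % q, p) * pow(h, r, p) % p, r

def verify(c: int, m: int, r: int) -> bool:
    """Non-interactive opening check: c == g^m * h^r (mod p)."""
    return c == pow(g, m % q, p) * pow(h, r, p) % p

c, r = commit(7)
assert verify(c, 7, r)        # correct opening is accepted
assert not verify(c, 8, r)    # binding: a different message is rejected
```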

13:17 [Pub][ePrint]

We initiate a general study of schemes resilient to both tampering and leakage attacks. Tampering attacks are powerful cryptanalytic attacks where an adversary can change the secret state and observe the effect of such changes at the output. Our contributions are outlined below:

(1) We propose a general construction showing that any cryptographic primitive where the secret key can be chosen as a uniformly random string can be made secure against bounded tampering and leakage. This holds in a restricted model where the tampering functions must be chosen from a set of bounded size after the public parameters have been sampled. Our result covers pseudorandom functions and many encryption and signature schemes.

(2) We show that standard ID and signature schemes constructed from a large class of Sigma-protocols (including the Okamoto scheme, for instance) are secure even if the adversary can arbitrarily tamper with the prover's state a bounded number of times and/or obtain some bounded amount of leakage. Interestingly, for the Okamoto scheme we can also allow independent tampering with the public parameters.

(3) We show a bounded tamper and leakage resilient CCA secure public key cryptosystem based on the DDH assumption. We first define a weaker CPA-like security notion that we can instantiate based on DDH, and then we give a general compiler that yields CCA-security with tamper and leakage resilience. This requires a public tamper-proof common reference string.

(4) Finally, we explain how to boost bounded tampering and leakage resilience (as in (2) and (3) above) to continuous tampering and leakage resilience, in the so-called floppy model, where each user has a personal floppy (containing leak- and tamper-free information) which can be used to refresh the secret key; note that if the key is not updated, continuous tamper resilience is known to be impossible. A toy sketch of this refresh idea follows the abstract. For the case of ID schemes, we also show that if the underlying protocol is secure in the bounded retrieval model, then our compiler remains secure even if the adversary can tamper with the computation performed by the device.

In some earlier work, the implementation of the tamper-resilient primitive was assumed to be aware of the possibility of tampering, in that it would switch to a special mode and, e.g., self-destruct if tampering was detected. None of our results require this assumption.
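
The floppy-model refresh idea mentioned in contribution (4) can be caricatured as follows. This is only a toy sketch of the concept: the HMAC-based derivation and all names are our illustrative assumptions, not the paper's actual update mechanism.

```python
# Toy sketch of the "floppy model" refresh idea only: a leak- and
# tamper-free master secret (the "floppy") derives a fresh per-epoch
# secret key, so tampering with the device's current key cannot
# persist across refreshes.  HMAC-based derivation is our illustrative
# assumption, not the paper's construction.
import hashlib
import hmac

FLOPPY = b"master secret kept on the leak- and tamper-free floppy"

def refresh(epoch: int) -> bytes:
    """Derive the secret key for the given epoch from the floppy."""
    return hmac.new(FLOPPY, epoch.to_bytes(8, "big"), hashlib.sha256).digest()

sk = refresh(0)                 # key for epoch 0
sk = refresh(1)                 # after refresh, tampering with the old key is moot
assert refresh(0) != refresh(1)
```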

13:17 [Pub][ePrint]

Bellare, Boldyreva, and O'Neill (CRYPTO '07) initiated the study of deterministic public-key encryption as an alternative in scenarios where randomized encryption has inherent drawbacks. The resulting line of research has so far guaranteed security only for adversarially-chosen plaintext distributions that are *independent* of the public key used by the scheme. In most scenarios, however, it is typically not realistic to assume that adversaries do not take the public key into account when attacking a scheme.

We show that it is possible to guarantee meaningful security even for plaintext distributions that depend on the public key. We extend the previously proposed notions of security, allowing adversaries to *adaptively* choose plaintext distributions *after* seeing the public key, in an *interactive* manner. The only restrictions we make are that: (1) plaintext distributions are unpredictable (as is essential in deterministic public-key encryption), and (2) the number of plaintext distributions from which each adversary is allowed to adaptively choose is upper bounded by $2^{p}$, where $p$ can be any predetermined polynomial in the security parameter. For example, with $p = 0$ we capture plaintext distributions that are independent of the public key, and with $p = O(s \log s)$ we capture, in particular, all plaintext distributions that are samplable by circuits of size $s$.
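
As a back-of-the-envelope check of the last claim (our arithmetic, not taken from the paper): $p = O(s \log s)$ bits suffice to index every size-$s$ sampler because of a standard circuit-counting estimate.

```latex
% Standard circuit-counting estimate (our sketch): each of the s gates
% is described by its type (c = O(1) choices over a fixed basis) and
% its two input wires (at most s + n choices each), so for n <= s,
\[
  \#\{\text{circuits of size } s\}
    \;\le\; \bigl(c\,(s+n)^{2}\bigr)^{s}
    \;=\; 2^{O(s \log s)},
\]
% hence 2^{p} with p = O(s log s) covers every size-s samplable
% plaintext distribution.
```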

Within our framework we present both constructions in the random-oracle model based on any public-key encryption scheme, and constructions in the standard model based on lossy trapdoor functions (thus, based on a variety of number-theoretic assumptions). Previously known constructions heavily relied on the independence between the plaintext distributions and the public key for the purposes of randomness extraction. In our setting, however, randomness extraction becomes significantly more challenging once the plaintext distributions and the public key are no longer independent. Our approach is inspired by research on randomness extraction from seed-dependent distributions. Underlying our approach is a new generalization of a method for such randomness extraction, originally introduced by Trevisan and Vadhan (FOCS '00) and Dodis (PhD Thesis, MIT, '00).

13:17 [Pub][ePrint]

Information-theoretically secure (ITS) authentication is needed in Quantum Key Distribution (QKD). In this paper, we study the security of an ITS authentication scheme proposed by Wegman and Carter, in the case of a partially known authentication key. This scheme uses a new authentication key in each authentication attempt, to select a hash function from an Almost Strongly Universal$_2$ hash function family. The partial knowledge of the attacker is measured as the trace distance between the authentication key distribution and the uniform distribution; this is the usual measure in QKD.

We provide direct proofs of security of the scheme, when using a partially known key, first in the information-theoretic setting and then in terms of witness indistinguishability as used in the Universal Composability (UC) framework. We find that if the authentication procedure has a failure probability $\epsilon$ and the authentication key has an $\epsilon'$ trace distance to the uniform, then under ITS, the adversary's success probability conditioned on an authentic message-tag pair is bounded by $\epsilon + |\mathcal{T}|\epsilon'$, where $|\mathcal{T}|$ is the size of the set of tags. Furthermore, the trace distance between the authentication key distribution and the uniform increases to $|\mathcal{T}|\epsilon'$ after having seen an authentic message-tag pair. Despite this, we are able to prove directly that the authenticated channel is indistinguishable from an (ideal) authentic channel (the desired functionality), except with probability less than $\epsilon + \epsilon'$. This proves that the scheme is ($\epsilon + \epsilon'$)-UC-secure, without using the composability theorem.
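
To make the setting concrete, here is a minimal sketch of Wegman-Carter authentication instantiated with the strongly universal$_2$ family $h_{a,b}(m) = am + b \bmod P$ for single-block messages; the family choice and toy parameters are our illustrative assumptions, not the exact instantiation analysed in the paper.

```python
# Minimal sketch of Wegman-Carter authentication with the strongly
# universal_2 family h_{a,b}(m) = a*m + b (mod P) for single-block
# messages; family and parameters are illustrative assumptions.
import secrets

P = (1 << 61) - 1    # Mersenne prime; the tag set T is Z_P, so |T| = P

def keygen() -> tuple[int, int]:
    """A fresh (ideally uniform) key (a, b) per authentication attempt."""
    return secrets.randbelow(P), secrets.randbelow(P)

def tag(key: tuple[int, int], m: int) -> int:
    a, b = key
    return (a * (m % P) + b) % P

key = keygen()
t = tag(key, 42)
assert tag(key, 42) == t      # the receiver recomputes the tag and compares
# With a perfectly uniform key, any forgery (m', t') with m' != 42 is
# accepted with probability exactly 1/|T|; the paper bounds how this
# degrades when the key is only epsilon'-close to uniform.
```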

13:17 [Pub][ePrint]

An often neglected problem for the practical adoption of Password-based Authenticated Key Exchange (PAKE) protocols on the Internet is the handling of failed password trials. Unlike the currently used approach, where a server-authenticated TLS channel (involving a constant number of public key-based operations on both sides) is set up once and can then be used by the client to try a limited number of passwords essentially for free, any new password trial using PAKE would result in the repetition of the entire protocol. With existing PAKE protocols, the minimum number of public key-based operations on both sides is thus lower-bounded by $O(n)$, where $n$ is the number of trials. This bound is optimal for the client (which tries $n$ passwords in the worst case) but is clearly not optimal for the server, which checks against the client's same stored reference password in each trial. This paper presents a secure and practical approach for achieving the lower bound of $O(1)$ public key operations on the server side.

To this end, we introduce Oblivious PAKE (O-PAKE), a general compiler for a large class of PAKE protocols that allows a client who shares one password with a server to use a set of passwords within one PAKE session, which succeeds if and only if one of those input passwords matches the one stored on the server side. The term "oblivious" is used to emphasize that no information about the non-matching passwords input by the client is made available to the server, in contrast, for instance, to the aforementioned TLS-based approach, where any tried password is disclosed to the server. The $O(1)$ bound on the server side is obtained in our O-PAKE compiler using special processing techniques for the messages of the input PAKE protocol. We prove security of the O-PAKE compiler under standard assumptions using the latest variant of the popular game-based PAKE model by Bellare, Pointcheval, and Rogaway (Eurocrypt 2000). We identify the requirements that PAKE protocols must satisfy in order to suit the compiler and give two concrete O-PAKE protocols based on existing PAKE schemes. Both protocols are implemented, and the analysis of their performance attests to the practicality of the compiler.

The use of O-PAKE further eliminates another practical problem with password-based authentication on the Web: users no longer need to remember which of their frequently used passwords is associated with which server, since they can try several of them in one execution without revealing the entire set to the server.

2013-03-01
18:01 [PhD][Update]

Name: Marc Stevens
Topic: Attacks on Hash Functions and Applications
Category: secret-key cryptography

Description: Cryptographic hash functions compute a small fixed-size hash value for any given message. A main application is in digital signatures, which require that it must be hard to find collisions, i.e., two different messages that map to the same hash value. In this thesis we provide an analysis of the security of the cryptographic hash function standards MD5 and SHA-1, which have been broken since 2004 due to so-called identical-prefix collision attacks. In particular, we present more efficient identical-prefix collision attacks on both MD5 and SHA-1 that improve upon earlier attacks in the literature. Furthermore, we introduce a new, more flexible attack on MD5 and SHA-1 called the chosen-prefix collision attack that allows significantly more control over the two colliding messages. Moreover, we have proven that our new attack on MD5 poses a realistic threat to the security of everyday applications with our construction of a rogue Certification Authority (CA). Our rogue CA could have enabled the total subversion of secure communications with any website -- if we had not purposely crippled it. Finally, we have introduced an efficient algorithm to detect whether a given message was generated using an identical-prefix or chosen-prefix collision attack on MD5 or SHA-1.[...]


18:00 [Job][New]

The Centre for Advanced Computing - Algorithms and Cryptography, in the Department of Computing, Faculty of Science, Macquarie University (Sydney, Australia), invites applications for a research fellow in Number Theory and Cryptography.

2013-02-27
19:17 [Pub][ePrint]

Well-designed initialisation and keystream generation processes for stream ciphers should ensure that each key-IV pair generates a distinct keystream. In this paper, we analyse some ciphers where this does not happen, due to state convergence occurring during initialisation, keystream generation, or both. We show how state convergence occurs in each case and identify two mechanisms which can cause state convergence.
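
As an illustration of the phenomenon (using an invented toy cipher, not one analysed in the paper), the following sketch shows how a non-injective initialisation maps two distinct key-IV pairs to the same internal state, after which their keystreams coincide.

```python
# Toy illustration of state convergence: the OR-based loading is
# non-injective, so two distinct key-IV pairs reach the same internal
# state and produce identical keystream from that point on.  All names
# and parameters are invented for demonstration.

def init(key: int, iv: int) -> int:
    return (key | iv) & 0xFFFF        # lossy mixing: information is lost

def next_state(s: int) -> int:
    return (s * 0x9E37 + 1) & 0xFFFF  # invertible update; the damage is done

def keystream(key: int, iv: int, nbits: int = 8) -> list[int]:
    s = init(key, iv)
    bits = []
    for _ in range(nbits):
        s = next_state(s)
        bits.append(s & 1)
    return bits

# Two distinct key-IV pairs converge during initialisation:
assert keystream(0b1100, 0b1010) == keystream(0b1110, 0b1000)
```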

19:17 [Pub][ePrint]

In this paper we present a biclique attack on the newly proposed block cipher KLEIN-64. We first introduce some weaknesses of the diffusion layer and key schedule of this algorithm. Then we exploit them to present a full-round attack on KLEIN-64 using an asymmetric biclique. The (worst-case) computational and data complexities of this attack are $2^{62.84}$ and $2^{39}$, respectively. A modified version of this attack is also presented, which is slightly faster at the expense of a higher data complexity.

19:17 [Pub][ePrint]

The learning with rounding (LWR) problem, introduced by Banerjee, Peikert and Rosen [BPR12] at EUROCRYPT '12, is a variant of learning with errors (LWE), where one replaces random errors with deterministic rounding. The LWR problem was shown to be as hard as LWE for a setting of parameters where the modulus and modulus-to-error ratio are super-polynomial. In this work we resolve the main open problem of [BPR12] and give a new reduction that works for a larger range of parameters, allowing for a polynomial modulus and modulus-to-error ratio. In particular, a smaller modulus gives us greater efficiency, and a smaller modulus-to-error ratio gives us greater security, which now follows from the worst-case hardness of GapSVP with polynomial (rather than super-polynomial) approximation factors.
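
A minimal sketch contrasting an LWE sample with an LWR sample may clarify the deterministic-rounding idea; the toy parameters $(n, q, p)$ and the error range are our assumptions for illustration and carry no concrete-security claim.

```python
# Minimal sketch contrasting an LWE sample with an LWR sample;
# toy parameters for illustration only.
import secrets

n, q, p = 8, 257, 16     # dimension, modulus, rounding modulus (p < q)

def rand_vec(mod: int) -> list[int]:
    return [secrets.randbelow(mod) for _ in range(n)]

def inner(a: list[int], s: list[int]) -> int:
    return sum(x * y for x, y in zip(a, s)) % q

def lwe_sample(s: list[int]) -> tuple[list[int], int]:
    """LWE: b = <a, s> + e (mod q) for a small *random* error e."""
    a = rand_vec(q)
    e = secrets.randbelow(5) - 2          # toy error in {-2, ..., 2}
    return a, (inner(a, s) + e) % q

def lwr_sample(s: list[int]) -> tuple[list[int], int]:
    """LWR: b = round(<a, s> * p / q) mod p -- *deterministic* rounding."""
    a = rand_vec(q)
    return a, (inner(a, s) * p + q // 2) // q % p

s = rand_vec(q)
print(lwe_sample(s))    # fresh randomness in every sample
print(lwr_sample(s))    # no per-sample randomness at all
```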

As a tool in the reduction, we show that there is a "lossy mode" for the LWR problem, in which LWR samples only reveal partial information about the secret. This property gives us several interesting new applications, including a proof that LWR remains secure with weakly random secrets of sufficient min-entropy, and very simple new constructions of deterministic encryption, lossy trapdoor functions and reusable extractors.

Our approach is inspired by a technique of Goldwasser et al. [GKPV10] from ICS '10, which implicitly showed the existence of a "lossy mode" for LWE. By refining this technique, we also improve on the parameters of that work, requiring only a polynomial (instead of super-polynomial) modulus and modulus-to-error ratio.