International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for staying informed about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2013-03-05
13:17 [Pub][ePrint] Direct Proof of Security of Wegman-Carter Authentication with Partially Known Key, by Aysajan Abidin and Jan-Åke Larsson

  Information-theoretically secure (ITS) authentication is needed in Quantum Key Distribution (QKD). In this paper, we study the security of an ITS authentication scheme proposed by Wegman and Carter in the case of a partially known authentication key. This scheme uses a new authentication key in each authentication attempt to select a hash function from an Almost Strongly Universal$_2$ hash function family. The partial knowledge of the attacker is measured as the trace distance between the authentication key distribution and the uniform distribution; this is the usual measure in QKD. We provide direct proofs of security of the scheme when using a partially known key, first in the information-theoretic setting and then in terms of witness indistinguishability as used in the Universal Composability (UC) framework. We find that if the authentication procedure has a failure probability $\epsilon$ and the authentication key has an $\epsilon'$ trace distance to the uniform, then under ITS, the adversary's success probability conditioned on an authentic message-tag pair is only bounded by $\epsilon + |\mathcal{T}|\epsilon'$, where $|\mathcal{T}|$ is the size of the set of tags. Furthermore, the trace distance between the authentication key distribution and the uniform increases to $|\mathcal{T}|\epsilon'$ after having seen an authentic message-tag pair. Despite this, we are able to prove directly that the authenticated channel is indistinguishable from an (ideal) authentic channel (the desired functionality), except with probability less than $\epsilon + \epsilon'$. This proves that the scheme is $(\epsilon + \epsilon')$-UC-secure, without using the composability theorem.
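
To make the mechanism concrete, here is a minimal Python sketch of one-time Wegman-Carter authentication using the strongly universal family $h_{a,b}(m) = am + b \bmod p$. The prime, names, and parameters are illustrative choices for this example, not the specific family or parameters analysed in the paper.

    import secrets

    # Toy Wegman-Carter authenticator over Z_p (p prime). The family
    # h_{a,b}(m) = (a*m + b) mod p is strongly universal_2, so a fresh
    # uniform key (a, b) per message yields an information-theoretically
    # secure tag: any forgery succeeds with probability exactly 1/p.
    P = 2**61 - 1  # Mersenne prime; the tag set T is Z_p, so |T| = P

    def keygen():
        """Fresh one-time authentication key (must not be reused)."""
        return secrets.randbelow(P), secrets.randbelow(P)

    def tag(key, message: int) -> int:
        a, b = key
        return (a * message + b) % P  # message must be reduced mod P

    def verify(key, message: int, t: int) -> bool:
        return tag(key, message) == t

    # Authenticate a single message, then discard the key.
    key = keygen()
    m = 42
    t = tag(key, m)
    assert verify(key, m, t)
    # With a perfectly uniform key, epsilon = 1/P; with a key at trace
    # distance eps' from uniform, the abstract bounds the adversary's
    # success after one authentic message-tag pair by 1/P + |T|*eps'.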



13:17 [Pub][ePrint] Oblivious PAKE and Efficient Handling of Password Trials, by Franziskus Kiefer and Mark Manulis

  An often neglected problem for the practical adoption of Password-based Authenticated Key Exchange (PAKE) protocols on the Internet is the handling of failed password trials. Unlike the currently used approach, where a server-authenticated TLS channel (involving a constant number of public-key operations on both sides) is set up once and can then be used by the client to try a limited number of passwords essentially for free, any new password trial using PAKE would result in the repetition of the entire protocol. With existing PAKE protocols, the minimum number of public-key operations on both sides is thus lower-bounded by $O(n)$, where $n$ is the number of trials. This bound is optimal for the client (who tries $n$ passwords in the worst case) but is clearly not optimal for the server, which uses the same reference password of the client in each trial. This paper presents a secure and practical approach for achieving the lower bound of $O(1)$ public-key operations on the server side.

To this end, we introduce Oblivious PAKE (O-PAKE), a general compiler for a large class of PAKE protocols that allows a client who shares one password with a server to use a set of passwords within one PAKE session, which succeeds if and only if one of those input passwords matches the one stored on the server side. The term ``oblivious'' emphasises that no information about the non-matching passwords input by the client is made available to the server, in contrast, for instance, to the aforementioned TLS-based approach, where any tried password is disclosed to the server. The $O(1)$ bound on the server side is obtained in our O-PAKE compiler using special processing techniques for the messages of the input PAKE protocol. We prove security of the O-PAKE compiler under standard assumptions using the latest variant of the popular game-based PAKE model by Bellare, Pointcheval, and Rogaway (Eurocrypt 2000). We identify the requirements that PAKE protocols must satisfy in order to suit the compiler and give two concrete O-PAKE protocols based on existing PAKE schemes. Both protocols are implemented, and the analysis of their performance attests to the practicality of the compiler.

The use of O-PAKE also alleviates another practical problem with password-based authentication on the Web: users no longer need to remember which of their frequently used passwords belongs to which server, and can try several of them in one execution without revealing the entire set to the server.
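
As a rough illustration of the kind of message processing that lets the server do constant work while learning nothing about non-matching passwords, the hypothetical Python toy below encodes (password hash, message) pairs as a single polynomial over a prime field via Lagrange interpolation; the server evaluates it once at the hash of its stored password and recovers only the message keyed to that password. This is merely in the spirit of index-hiding message encoding, not the paper's construction; all names and parameters are invented for the example.

    import hashlib

    P = 2**127 - 1  # illustrative prime field (toy size)

    def h(pw: str) -> int:
        return int.from_bytes(hashlib.sha256(pw.encode()).digest(), "big") % P

    def poly_mul_linear(poly, r):
        """Multiply a polynomial (coefficients, lowest degree first) by (x - r)."""
        out = [0] * (len(poly) + 1)
        for k, c in enumerate(poly):
            out[k] = (out[k] - r * c) % P
            out[k + 1] = (out[k + 1] + c) % P
        return out

    def encode(pairs):
        """Coefficients of the unique f with f(x_i) = y_i (distinct x_i assumed)."""
        coeffs = [0] * len(pairs)
        for i, (xi, yi) in enumerate(pairs):
            basis, denom = [1], 1
            for j, (xj, _) in enumerate(pairs):
                if j != i:
                    basis = poly_mul_linear(basis, xj)
                    denom = denom * (xi - xj) % P
            scale = yi * pow(denom, -1, P) % P
            for k, c in enumerate(basis):
                coeffs[k] = (coeffs[k] + scale * c) % P
        return coeffs

    def evaluate(coeffs, x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % P
        return acc

    # Client tries three candidate passwords, each tied to a payload.
    slots = {"letmein": 111, "hunter2": 222, "qwerty": 333}
    f = encode([(h(pw), m) for pw, m in slots.items()])
    assert evaluate(f, h("hunter2")) == 222   # the matching slot
    assert evaluate(f, h("password")) != 222  # others look unrelated
                                              # (with overwhelming probability)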





2013-03-01
18:01 [PhD][Update] Marc Stevens: Attacks on Hash Functions and Applications

  Name: Marc Stevens
Topic: Attacks on Hash Functions and Applications
Category: secret-key cryptography

Description: Cryptographic hash functions compute a small fixed-size hash value for any given message. A main application is in digital signatures, which require that it must be hard to find collisions, i.e., two different messages that map to the same hash value. In this thesis we provide an analysis of the security of the cryptographic hash function standards MD5 and SHA-1, which have been broken since 2004 due to so-called identical-prefix collision attacks. In particular, we present more efficient identical-prefix collision attacks on both MD5 and SHA-1 that improve upon the literature. Furthermore, we introduce a new, more flexible attack on MD5 and SHA-1 called the chosen-prefix collision attack that allows significantly more control over the two colliding messages. Moreover, we have proven that our new attack on MD5 poses a realistic threat to the security of everyday applications with our construction of a rogue Certification Authority (CA). Our rogue CA could have enabled the total subversion of secure communications with any website -- if we had not purposely crippled it. Finally, we have introduced an efficient algorithm to detect whether a given message was generated using an identical-prefix or chosen-prefix collision attack on MD5 or SHA-1.[...]


18:00 [Job][Update] Postdoc, Macquarie University, Sydney, Australia, British Commonwealth

  The Centre for Advanced Computing - Algorithms and Cryptography, in the Department of Computing, Faculty of Science, Macquarie University (Sydney, Australia), invites applications for a research fellow in Number Theory and Cryptography.

18:00 [Job][New] Postdoc, Macquarie University, Sydney, Australia, British Commonwealth

  The Centre for Advanced Computing - Algorithms and Cryptography, in the Department of Computing, Faculty of Science, Macquarie University (Sydney, Australia), invites applications for a research fellow in Number Theory and Cryptography.



2013-02-27
19:17 [Pub][ePrint] State convergence in bit-based stream ciphers, by Sui-Guan Teo and Harry Bartlett and Ali Alhamdan and Leonie Simpson and Kenneth Koon-Ho Wong and Ed Dawson

  Well-designed initialisation and keystream generation processes for stream ciphers should ensure that each key-IV pair generates a distinct keystream. In this paper, we analyse some ciphers where this does not happen due to state convergence occurring during initialisation, during keystream generation, or both. We show how state convergence occurs in each case and identify two mechanisms which can cause state convergence.
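
The following toy Python example with an invented 8-bit "cipher" illustrates one such mechanism: a non-injective state-update function (here, OR-ing two feedback taps) shrinks the set of reachable states round by round, so distinct key-IV loadings can merge into the same state and thus the same keystream.

    # Non-injective update: the OR of two taps cannot be undone, so two
    # states differing only in the dropped bit can map to the same image.
    def update(s: int) -> int:
        fb = ((s >> 7) & 1) | ((s >> 5) & 1)
        return ((s << 1) | fb) & 0xFF

    states = set(range(256))
    for rnd in range(1, 9):
        states = {update(s) for s in states}
        print(f"after round {rnd}: {len(states)} reachable states")

    # Replacing the OR with XOR makes the update a bijection on 8-bit
    # states (the dropped bit can be recomputed from the feedback bit),
    # and no convergence occurs.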



19:17 [Pub][ePrint] Biclique Cryptanalysis of the Full-Round KLEIN Block Cipher, by Zahra Ahmadian and Mahmoud Salmasizadeh and Mohammad Reza Aref

  In this paper we present a biclique attack on the newly proposed block cipher KLEIN-64. We first introduce some weaknesses of the diffusion layer and key schedule of this algorithm. Then we exploit them to present a full-round attack on KLEIN-64 using an asymmetric biclique. The (worst-case) computational and data complexities of this attack are $2^{62.84}$ and $2^{39}$, respectively. A modified version of this attack is also presented, which is slightly faster at the expense of requiring more data.



19:17 [Pub][ePrint] Learning with Rounding, Revisited: New Reduction, Properties and Applications, by Joel Alwen and Stephan Krenn and Krzysztof Pietrzak and Daniel Wichs

  The learning with rounding (LWR) problem, introduced by Banerjee, Peikert and Rosen [BPR12] at EUROCRYPT '12, is a variant of learning with errors (LWE), where one replaces random errors with deterministic rounding. The LWR problem was shown to be as hard as LWE for a setting of parameters where the modulus and modulus-to-error ratio are super-polynomial. In this work we resolve the main open problem of [BPR12] and give a new reduction that works for a larger range of parameters, allowing for a polynomial modulus and modulus-to-error ratio. In particular, a smaller modulus gives us greater efficiency, and a smaller modulus-to-error ratio gives us greater security, which now follows from the worst-case hardness of GapSVP with polynomial (rather than super-polynomial) approximation factors.
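
For concreteness, here is a minimal Python sketch contrasting LWE and LWR samples under the definition above: LWE adds a fresh random error to $\langle a, s \rangle$ modulo $q$, while LWR deterministically rounds it from $\mathbb{Z}_q$ down to $\mathbb{Z}_p$. The parameters are toy values chosen for readability and are far too small for security.

    import secrets

    n, q, p = 8, 257, 16  # dimension, modulus, rounding modulus (toy sizes)

    def rand_vec():
        return [secrets.randbelow(q) for _ in range(n)]

    def inner(a, s):
        return sum(x * y for x, y in zip(a, s)) % q

    def small_error():
        return secrets.randbelow(5) - 2  # error in {-2, ..., 2}

    def round_to_p(x: int) -> int:
        # nearest-integer rounding of (p/q)*x, the usual LWR rounding
        return (x * p + q // 2) // q % p

    s = rand_vec()  # the secret

    def lwe_sample():
        a = rand_vec()
        return a, (inner(a, s) + small_error()) % q   # fresh randomness

    def lwr_sample():
        a = rand_vec()
        return a, round_to_p(inner(a, s))             # deterministic

    print(lwe_sample())
    print(lwr_sample())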

As a tool in the reduction, we show that there is a ``lossy mode'' for the LWR problem, in which LWR samples only reveal partial information about the secret. This property gives us several interesting new applications, including a proof that LWR remains secure with weakly random secrets of sufficient min-entropy, and very simple new constructions of deterministic encryption, lossy trapdoor functions and reusable extractors.

Our approach is inspired by a technique of Goldwasser et al. [GKPV10] from ICS '10, which implicitly showed the existence of a ``lossy mode'' for LWE. By refining this technique, we also improve on the parameters of that work, requiring only a polynomial (instead of super-polynomial) modulus and modulus-to-error ratio.



19:17 [Pub][ePrint] Secure Two-Party Computation via Leaky Generalized Oblivious Transfer, by Samuel Ranellucci and Alain Tapp

  We construct a very efficient protocol for constant-round Two-Party Secure Function Evaluation based on general assumptions. We define and instantiate a leaky variant of Generalized Oblivious Transfer based on Oblivious Transfer and Commitment Schemes. The concepts of Garbling Schemes, Leaky Generalized Oblivious Transfer and Privacy Amplification are combined using the Cut-and-Choose paradigm to obtain the final protocol. Our solution is proven secure in the Universal Composability paradigm.
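
As a reminder of the main building block, the sketch below shows a simple Diffie-Hellman-based 1-out-of-2 oblivious transfer over a toy group, in which the receiver's blinded key B lets the sender derive two keys of which the receiver can compute exactly one. It illustrates plain OT for honest-but-curious parties only; it is not the leaky generalized OT of the paper, and the group and message sizes are illustrative.

    import hashlib, secrets

    P = 2**127 - 1  # toy Mersenne prime; real deployments use a standard group
    G = 3

    def H(x: int) -> bytes:
        return hashlib.sha256(x.to_bytes(16, "big")).digest()

    def xor(k: bytes, m: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(k, m))  # truncates to message length

    # Sender
    a = secrets.randbelow(P - 2) + 1
    A = pow(G, a, P)

    # Receiver with choice bit c blinds its key with A if c == 1
    c = 1
    b = secrets.randbelow(P - 2) + 1
    B = pow(G, b, P) if c == 0 else A * pow(G, b, P) % P

    # Sender derives two keys; the receiver can compute only one of them
    k0 = H(pow(B, a, P))                         # equals g^(ab) if c == 0
    k1 = H(pow(B * pow(A, P - 2, P) % P, a, P))  # (B/A)^a, matches if c == 1
    m0, m1 = b"top-secret-left!", b"top-secret-right"  # equal-length messages
    e0, e1 = xor(k0, m0), xor(k1, m1)

    # Receiver derives its key and decrypts exactly one ciphertext
    kc = H(pow(A, b, P))
    assert xor(kc, (e0, e1)[c]) == (m0, m1)[c]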



19:17 [Pub][ePrint] Attacks and Comments on Several Recently Proposed Key Management Schemes, by Niu Liu and Shaohua Tang and Lingling Xu

  In this paper, we review three problematic key management (KM) schemes proposed recently: Kayam's scheme for groups with a hierarchy [9], Piao's group KM scheme [13], and Purushothama's group KM schemes [15]. We point out the problems in each scheme. Kayam's scheme is not secure against collusion attacks. Piao's group KM scheme is insecure and relies on a flawed primitive: the hard problem it is based on is not actually hard. Purushothama's scheme contains a redundant design that consumes substantial resources without improving its security level or the efficiency of its dynamic operations. We also briefly analyse the underlying reasons why these problems emerge.



19:17 [Pub][ePrint] Notions of Black-Box Reductions, Revisited, by Paul Baecher and Christina Brzuska and Marc Fischlin

  Reductions are the common technique to prove security of cryptographic constructions based on a primitive. They take an allegedly successful adversary against the construction and turn it into a successful adversary against the underlying primitive. To a large extent, these reductions are black-box in the sense that they consider the primitive and/or the adversary against the construction only via the input-output behavior, but do not depend on internals like the code of the primitive or of the adversary. Reingold, Trevisan, and Vadhan (TCC, 2004) provided a widely adopted framework, henceforth called the RTV framework, to classify and relate different notions of black-box reductions.

Having precise notions for such reductions is very important when it comes to black-box separations, where one shows that black-box reductions cannot exist. An impossibility result, which clearly specifies the type of reduction it rules out, enables us to identify the potential leverages to bypass the separation. We acknowledge this by extending the RTV framework in several respects using a more fine-grained approach. First, we capture a type of reduction---frequently ruled out by so-called meta-reductions---which escapes the RTV framework so far. Second, we consider notions that are ``almost black-box'', i.e., where the reduction receives additional information about the adversary, such as its success probability. Third, we distinguish explicitly between efficient and inefficient primitives and adversaries, allowing us to determine how relativizing reductions in the sense of Impagliazzo and Rudich (STOC, 1989) fit into the picture.