International Association for Cryptologic Research

IACR News Central

Get an update on changes of the IACR web page here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2012-07-06
21:17 [Pub][ePrint] Hash Combiners for Second Pre-Image Resistance, Target Collision Resistance and Pre-Image Resistance have Long Output, by Arno Mittelbach

  A $(k,l)$ hash-function combiner for property $P$ is a construction that, given access to $l$ hash functions, yields a single cryptographic hash function which has property $P$ as long as at least $k$ out of the $l$ hash functions have that property. Hash-function combiners are used to hedge against the failure of one or more of the individual components. One example of the application of hash-function combiners is found in previous versions of the TLS and SSL protocols \cite{RFC:6101,RFC:5246}.

The concatenation combiner, which simply concatenates the outputs of all hash functions, is an example of a robust combiner for collision resistance. However, its output length is, naturally, significantly longer than each individual hash-function output, while the security bounds are not necessarily stronger than those of the strongest input hash function. In 2006 Boneh and Boyen asked whether a robust black-box combiner for collision resistance can exist whose output length is significantly shorter than that of the concatenation combiner \cite{C:BonBoy06}. Regrettably, this question has since been answered in the negative for fully black-box constructions (where hash functions and the adversary are treated as black boxes): combiners for collision resistance in this setting roughly need an output at least as long as that of the concatenation combiner to be robust \cite{C:BonBoy06,C:CRSTVW07,EC:Pietrzak07,C:Pietrzak08}.
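
As a concrete illustration of the concatenation combiner discussed above, here is a minimal sketch in Python; the choice of SHA-256 and SHA-512 as the two components is purely illustrative and not taken from the paper.

    import hashlib

    def concatenation_combiner(message: bytes) -> bytes:
        # Concatenate the outputs of all component hash functions.
        # The result stays collision resistant as long as at least one
        # component is, but the output is the sum of the output lengths.
        h1 = hashlib.sha256(message).digest()   # 32-byte output
        h2 = hashlib.sha512(message).digest()   # 64-byte output
        return h1 + h2                          # 96-byte combined output

A collision for the combined function requires a simultaneous collision in every component, which is what makes the construction robust at the price of a long output.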

In this paper we examine weaker notions of collision resistance, namely \emph{second pre-image resistance}, \emph{target collision resistance} \cite{FSE:RogShr04} and \emph{pre-image resistance}. A generic brute-force attack against any of these properties takes roughly $2^n$ queries to an $n$-bit hash function, in contrast to the $2^{n/2}$ queries it takes to break collision resistance (due to the birthday bound). This might suggest that combiners for these weaker notions of collision resistance can exist with a significantly shorter output than the concatenation combiner (which is, naturally, also robust for these properties). Regrettably, this is not the case.



21:17 [Pub][ePrint] Never trust a bunny, by Daniel J. Bernstein and Tanja Lange

  ``Lapin'' is a new RFID authentication protocol proposed at FSE 2012.

``Ring-LPN'' (Ring-Learning-Parity-with-Noise) is a new computational problem proposed in the same paper; there is a proof relating the security of Lapin to the difficulty of Ring-LPN. This paper presents an attack against Ring-LPN-512 and Lapin-512. The attack is not practical but nevertheless violates specific security claims in the FSE 2012 paper.



21:17 [Pub][ePrint] Fully Anonymous Attribute Tokens from Lattices, by Jan Camenisch and Gregory Neven and Markus Rückert

  Anonymous authentication schemes such as group signatures and anonymous credentials are important privacy-protecting tools in electronic communications. The only currently known scheme based on assumptions that resist quantum attacks is the group signature scheme by Gordon et al. (ASIACRYPT 2010). We present a generalization of group signatures called *anonymous attribute tokens* where users are issued attribute-containing credentials that they can use to anonymously sign messages and generate tokens revealing only a subset of their attributes. We present two lattice-based constructions of this new primitive, one with and one without opening capabilities for the group manager. The latter construction directly yields as a special case the first lattice-based group signature scheme offering full anonymity (in the random-oracle model), as opposed to the practically less relevant notion of chosen-plaintext anonymity offered by the scheme of Gordon et al. We also extend our scheme to protect users from framing attacks by the group manager, where the latter creates tokens or signatures in the name of honest users. Our constructions involve new lattice-based tools for aggregating signatures and verifiable CCA2-secure encryption.



21:17 [Pub][ePrint] Publicly Verifiable Ciphertexts, by Juan Manuel González Nieto and Mark Manulis and Bertram Poettering and Jothi Rangasamy and Douglas Stebila

  In many applications, where encrypted traffic flows from an open (public) domain to a protected (private) domain, there exists a gateway that bridges the two domains and faithfully forwards the incoming traffic to the receiver. We observe that indistinguishability against (adaptive) chosen-ciphertext attacks (IND-CCA), which is a mandatory goal in the face of active attacks in a public domain, can essentially be relaxed to indistinguishability against chosen-plaintext attacks (IND-CPA) for ciphertexts once they pass the gateway. The gateway acts as an IND-CCA/CPA filter: it first checks the validity of an incoming IND-CCA ciphertext, then transforms it (if valid) into an IND-CPA ciphertext, and forwards the latter to the recipient in the private domain. ``Non-trivial filtering'' can result in reduced decryption costs on the receivers' side.

We identify a class of encryption schemes with \emph{publicly verifiable ciphertexts} that admit generic constructions of (non-trivial) IND-CCA/CPA filters. These schemes are characterized by the existence of public algorithms that can distinguish between valid and invalid ciphertexts. To this end, we formally define (non-trivial) public verifiability of ciphertexts for general encryption schemes, key encapsulation mechanisms, and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavours. We further analyze the security impact of public verifiability and discuss generic transformations and concrete constructions that enjoy this property.
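
The control flow of such a filter can be sketched with a toy model in Python. Here the public validity check is replaced by a plain hash tag over the inner ciphertext; this stand-in is not a secure instantiation and none of the names below come from the paper, the point is only to show how the gateway verifies and then strips incoming ciphertexts.

    import hashlib
    from typing import Optional, Tuple

    # Toy publicly verifiable ciphertext: (cpa_part, tag), where the tag is a
    # publicly checkable consistency value over the inner IND-CPA ciphertext.
    def toy_public_check(cpa_part: bytes, tag: bytes) -> bool:
        return hashlib.sha256(cpa_part).digest() == tag

    def cca_cpa_filter(ciphertext: Tuple[bytes, bytes]) -> Optional[bytes]:
        cpa_part, tag = ciphertext
        if not toy_public_check(cpa_part, tag):   # public validity check at the gateway
            return None                           # invalid traffic is dropped
        return cpa_part                           # only the cheaper-to-decrypt part is forwarded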



21:17 [Pub][ePrint] PICARO - A Block Cipher Allowing Efficient Higher-Order Side-Channel Resistance -- Extended Version --, by Gilles Piret and Thomas Roche and Claude Carlet

  Many papers deal with the problem of constructing an efficient masking scheme for existing block ciphers. We take the reverse approach: given a proven masking scheme (Rivain and Prouff, CHES 2010), we design a block cipher that fits the masking constraints well. The difficulty of implementing efficient masking for a block cipher comes mainly from the S-boxes. Therefore the choice of an adequate S-box is the first and most critical step of our work. The S-box we selected is non-bijective; we discuss the resulting design and security problems. A complete design of the cipher is given, as well as some implementation results.
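
To illustrate why the S-boxes dominate the masking cost, here is a minimal first-order Boolean masking sketch in Python (a generic two-share illustration, not the higher-order Rivain-Prouff scheme the paper builds on): a linear operation such as XOR acts on each share independently, while a single nonlinear AND already requires fresh randomness and cross terms between the shares.

    import secrets

    def share(x):
        # Split a bit x into two random shares with x == x0 ^ x1.
        x0 = secrets.randbits(1)
        return x0, x0 ^ x

    def masked_xor(a, b):
        # Linear: each share is processed independently, no randomness needed.
        return a[0] ^ b[0], a[1] ^ b[1]

    def masked_and(a, b):
        # Nonlinear (two-share, ISW-style gadget): needs a fresh random bit r
        # and cross terms between the shares of a and b.
        r = secrets.randbits(1)
        c0 = (a[0] & b[0]) ^ r
        c1 = (a[1] & b[1]) ^ r ^ (a[0] & b[1]) ^ (a[1] & b[0])
        return c0, c1   # c0 ^ c1 == (a0 ^ a1) & (b0 ^ b1)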



21:17 [Pub][ePrint] Another look at non-uniformity, by Neal Koblitz and Alfred Menezes

  We argue that it is unnatural and undesirable to use the non-uniform model of complexity for practice-oriented security reductions in cryptography.



21:17 [Pub][ePrint] Multiple Differential Cryptanalysis using LLR and $\chi^2$ Statistics, by Céline Blondeau and Benoît Gérard and Kaisa Nyberg

  Recent block ciphers have been designed to be resistant against differential cryptanalysis. Nevertheless, it has been shown that such resistance claims may not be as tight as one would wish due to recent advances in this field.

One of the main improvements to differential cryptanalysis is the use of many differentials to reduce the data complexity. In this paper we propose a general model for understanding multiple differential cryptanalysis and propose new attacks based on tools used in multidimensional linear cryptanalysis (namely the LLR and $\chi^2$ statistical tests). Practical cases are considered on a reduced version of the cipher PRESENT to evaluate different approaches for selecting and combining the differentials. We also consider the tightness of the theoretical estimates corresponding to these attacks.
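
For readers unfamiliar with the two statistics named above, the following sketch gives their textbook forms, computed from the observed counts of each differential; the exact normalizations and the way they are used in the attacks are as defined in the paper, which this sketch does not attempt to reproduce.

    import math

    def llr_statistic(counts, p, theta):
        # Log-likelihood ratio of the observed counts under the distribution p
        # predicted for the cipher versus the distribution theta expected for
        # an ideal permutation (one count/probability pair per differential).
        return sum(n * math.log(p_i / t_i) for n, p_i, t_i in zip(counts, p, theta))

    def chi2_statistic(counts, theta):
        # Pearson chi-squared statistic of the observed counts against the
        # counts expected under theta for the same total number of samples.
        total = sum(counts)
        return sum((n - total * t) ** 2 / (total * t) for n, t in zip(counts, theta))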



21:17 [Pub][ePrint] Quantum Key Distribution in the Classical Authenticated Key Exchange Framework, by Michele Mosca and Douglas Stebila and Berkant Ustaoglu

  Key establishment is a crucial primitive for building secure channels: in a multi-party setting, it allows two parties using only public authenticated communication to establish a secret session key which can be used to encrypt messages. But if the session key is compromised, the confidentiality of encrypted messages is typically compromised as well. Without quantum mechanics, key establishment can only be done under the assumption that some computational problem is hard. Since digital communication can easily be eavesdropped on and recorded, it is important to consider the secrecy of information in anticipation of future algorithmic and computational discoveries that could break the secrecy of past keys, violating the confidentiality of the channel.

Quantum key distribution (QKD) can be used to generate secret keys that are secure against any future algorithmic or computational improvements. QKD protocols still require authentication of classical communication, however, which is most easily achieved using computationally secure digital signature schemes. It is generally considered folklore that QKD, when used with computationally secure authentication, is still secure against an unbounded adversary, provided the adversary did not break the authentication during the run of the protocol.

We describe a security model for quantum key distribution based on traditional classical authenticated key exchange (AKE) security models. Using our model, we characterize the long-term security of the BB84 QKD protocol with computationally secure authentication against an eventually unbounded adversary. By basing our model on traditional AKE models, we can more readily compare the relative merits of various forms of QKD and existing classical AKE protocols. This comparison illustrates in which types of adversarial environments different quantum and classical key agreement protocols can be secure.



21:17 [Pub][ePrint] Achieving Constant Round Leakage-Resilient Zero-Knowledge, by Omkant Pandey

  Recently there has been a huge emphasis on constructing cryptographic protocols that maintain their security guarantees even in the presence of side channel attacks. Such attacks exploit the physical characteristics of a cryptographic device to learn useful information about the internal state of the device. Designing protocols that deliver meaningful security even in the presence of such leakage attacks is a challenging task.

The recent work of Garg, Jain, and Sahai formulates a meaningful notion of zero-knowledge in the presence of leakage and provides a construction which satisfies a weaker variant of this notion called $(1+\epsilon)$-leakage-resilient zero-knowledge, for every constant $\epsilon>0$. In this weaker variant, roughly speaking, if the verifier learns $L$ bits of leakage during the interaction, then the simulator is allowed to access $(1+\epsilon)\cdot L$ bits of leakage. The round complexity of their protocol is $n/\epsilon$.

In this work, we present the first construction of leakage-resilient zero-knowledge satisfying the ideal requirement of $\epsilon=0$. While our focus is on a feasibility result for $\epsilon=0$, our construction also enjoys a constant number of rounds. At the heart of our construction is a new ``public-coin preamble'' which allows the simulator to recover arbitrary information from a (cheating) verifier in a ``straight line.'' We use non-black-box simulation techniques to accomplish this goal.



21:17 [Pub][ePrint] A Unified Indifferentiability Proof for Permutation- or Block Cipher-Based Hash Functions, by Anne Canteaut and Thomas Fuhr and María Naya-Plasencia and Pascal Paillier and Jean-René Reinhard

  In recent years, several hash constructions have been introduced that aim at achieving enhanced security margins by strengthening the Merkle-Damgård mode. However, their security analyses have been conducted independently and using a variety of proof methodologies. This paper unifies these results by proposing a unique indifferentiability proof that considers a broadened form of the general compression function introduced by Stam at FSE 2009. This general definition enables us to capture in a realistic model most of the features of the mode of operation (e.g., message encoding, blank rounds, message insertion, ...) within the pre-processing and post-processing functions. Furthermore, it relies on an inner primitive which can be instantiated either by an ideal block cipher or by an ideal permutation. Then, most existing hash functions can be seen as the Chop-MD construction applied to some compression function which fits the broadened Stam model. Our result then gives the tightest known indifferentiability bounds for several general modes of operation, including Chop-MD, HAIFA and sponges. Moreover, we show that it applies in a quite automatic way, by providing the security bounds for 7 out of the 14 second-round SHA-3 candidates, which are in some cases improved over previously known ones.
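
As a reminder of the Chop-MD mode mentioned above, here is a minimal sketch in Python: a plain Merkle-Damgård iteration whose final chaining value is truncated before being output. The SHA-256-based compression function and the padding rule are stand-ins chosen only to keep the example runnable; they are not the compression functions analyzed in the paper.

    import hashlib

    BLOCK = 64   # bytes per message block (illustrative choice)

    def toy_compress(chaining: bytes, block: bytes) -> bytes:
        # Stand-in compression function producing a 32-byte chaining value.
        return hashlib.sha256(chaining + block).digest()

    def chop_md(message: bytes, chop_bytes: int = 16, iv: bytes = b"\x00" * 32) -> bytes:
        # Merkle-Damgard style padding: append 0x80, zero-fill, then the length.
        padded = message + b"\x80"
        padded += b"\x00" * ((-len(padded) - 8) % BLOCK)
        padded += len(message).to_bytes(8, "big")
        state = iv
        for i in range(0, len(padded), BLOCK):
            state = toy_compress(state, padded[i:i + BLOCK])
        return state[:-chop_bytes]   # Chop-MD: part of the final state is discarded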



21:17 [Pub][ePrint] Zero-Knowledge Proofs with Low Amortized Communication from Lattice Assumptions, by Ivan Damgard and Adriana Lopez-Alt

  We construct zero-knowledge proofs of plaintext knowledge (PoPK) and correct multiplication (PoPC) for the Regev encryption scheme with low amortized communication complexity. Previous constructions of both PoPK and PoPC had communication cost linear in the size of the public key (roughly quadratic in the lattice dimension, ignoring logarithmic factors). Furthermore, previous constructions of PoPK suffered from one of the following weaknesses: either the message and randomness space were restricted, or there was a super-polynomial gap between the size of the message and randomness that an honest prover chose and the size of which an accepting verifier would be convinced. The latter weakness is also present in the existing PoPC protocols.

In contrast, $O(n)$ proofs (for lattice dimension $n$) in our PoPK and PoPC protocols have communication cost linear in the public key. Thus, we improve the amortized communication cost of each proof by a factor linear in the security parameter. Furthermore, we allow the message space to be $\mathbb{Z}_p$ and the randomness distribution to be the discrete Gaussian, both of which are natural choices for the Regev encryption scheme. Finally, in our schemes there is no gap between the size of the message and randomness that an honest prover chooses and the size of which an accepting verifier is convinced.

Our constructions use the ``MPC-in-the-head'' technique of Ishai et al. (STOC 2007). At the heart of our constructions is a protocol for proving that a value is bounded by some publicly known bound. This uses Lagrange's four-square theorem, which states that any positive integer can be expressed as the sum of four squares (an idea previously used by Boudot (EUROCRYPT 2000)), as well as techniques from Cramer and Damgård (CRYPTO 2009).
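
As a small illustration of the four-square decomposition invoked above, here is a brute-force search in Python; it is only for exposition, since the protocols in the paper do not, of course, compute the decomposition this way.

    import math

    def four_squares(n: int):
        # Find a, b, c, d with n = a^2 + b^2 + c^2 + d^2; such a decomposition
        # exists for every non-negative integer by Lagrange's theorem.
        limit = math.isqrt(n)
        for a in range(limit + 1):
            for b in range(a, limit + 1):
                for c in range(b, limit + 1):
                    d2 = n - a * a - b * b - c * c
                    if d2 < 0:
                        break
                    d = math.isqrt(d2)
                    if d * d == d2:
                        return a, b, c, d
        return None   # never reached for n >= 0

    # Proving that a committed value v is at most a public bound B reduces to
    # showing that B - v is non-negative, i.e. a sum of four squares.
    a, b, c, d = four_squares(2012)
    assert a * a + b * b + c * c + d * d == 2012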