International Association for Cryptologic Research

IACR News Central

Get an update on changes of the IACR web-page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2012-07-06
21:17 [Pub][ePrint] A Note for the Ideal Order-Preserving Encryption Object and Generalized Order-Preserving Encryption, by Liangliang Xiao and I-Ling Yen

  Order-preserving encryption (OPE) preserves the order of data in their ciphertexts and, hence, allows range search on the encrypted data without needing to decrypt them. Security analysis of OPE schemes is very important because OPE is not a perfect encryption algorithm (the ciphertexts leak the ordering information of the plaintexts). Most existing security analyses of OPE schemes are informal: they are based either on author-defined attacks or on experiments. The authors of \cite{Bol09} initiate the cryptographic study of OPE. They define the security notion POPF-CCA to qualify the security of OPE. In POPF-CCA, an "ideal" OPE object is defined whose encryption function is selected uniformly at random from all order-preserving functions (in general the "ideal" OPE object is not computationally feasible), and a constructed "real" OPE scheme is secure under POPF-CCA if it is computationally indistinguishable from the ideal object. In other words, although the "ideal" OPE object is not computationally feasible, it serves as the security goal, and a constructed "real" OPE scheme is secure if it is as secure as the "ideal" OPE object. This approach implicitly assumes, without clearly stating or proving it, that the "ideal" OPE object is the most secure OPE. The correctness of this assumption, however, is an easily overlooked problem.

In this paper, we investigate the security of OPE in more depth. We first give an example showing that the "ideal" OPE object may not always be the most secure OPE. This indicates that the "ideal" encryption object must be used more cautiously in the security analysis of OPE. Additionally, we extend the concept of OPE to generalized OPE (GOPE). Unlike OPE, the ciphertexts of GOPE need not be numbers, but GOPE still enables comparisons on the encrypted data without needing to decrypt them. We present two GOPEs, in polynomial-sized and superpolynomial-sized domains respectively, that satisfy stronger notions of security than the ideal OPE object.
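To make the "ideal" OPE object concrete: the sketch below, a toy illustration and not a scheme from the paper, samples a random strictly increasing map from a small plaintext domain into a larger ciphertext range, which is exactly an encryption function drawn uniformly from all order-preserving functions; range queries on ciphertexts then give the same answers as on plaintexts. The domain and range sizes are arbitrary choices for the example.

import random

def sample_ideal_ope(domain_size, range_size, seed=None):
    """Sample an order-preserving 'encryption' table uniformly at random:
    a strictly increasing injection from {0..domain_size-1} into {0..range_size-1}.
    This is the 'ideal' OPE object (feasible only for tiny domains)."""
    rng = random.Random(seed)
    # A uniformly random strictly increasing function is just a uniformly
    # random subset of the range of size |domain|, listed in sorted order.
    table = sorted(rng.sample(range(range_size), domain_size))
    return table  # table[m] is the ciphertext of plaintext m

table = sample_ideal_ope(domain_size=16, range_size=1000, seed=1)
enc = lambda m: table[m]

# Order (and hence range queries) is preserved on ciphertexts...
assert all(enc(a) < enc(b) for a in range(16) for b in range(16) if a < b)
# ...which is precisely the information an OPE ciphertext is allowed to leak.
print([enc(m) for m in range(16)])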



21:17 [Pub][ePrint] SipHash: a fast short-input PRF, by Jean-Philippe Aumasson and Daniel J. Bernstein

  SipHash is a family of pseudorandom functions optimized for short inputs. Target applications include network traffic authentication and hash-table lookups protected against hash-flooding denial-of-service attacks. SipHash is simpler than MACs based on universal hashing, and faster on short inputs. Compared to dedicated designs for hash-table lookup, SipHash has well-defined security goals and competitive performance. For example, SipHash processes a 16-byte input with a fresh key in 140 cycles on an AMD FX-8150 processor, which is much faster than state-of-the-art MACs. We propose that hash tables switch to SipHash as a hash function.
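For reference, here is a compact, unoptimized Python sketch of SipHash-2-4 (two compression rounds per message block, four finalization rounds) following the published specification; it is meant to show the structure of the PRF (key-dependent initialization, SipRounds, and the length-encoding padding), not the cycle counts discussed above.

MASK64 = 0xFFFFFFFFFFFFFFFF

def _rotl(x, b):
    return ((x << b) | (x >> (64 - b))) & MASK64

def _sipround(v0, v1, v2, v3):
    v0 = (v0 + v1) & MASK64; v1 = _rotl(v1, 13); v1 ^= v0; v0 = _rotl(v0, 32)
    v2 = (v2 + v3) & MASK64; v3 = _rotl(v3, 16); v3 ^= v2
    v0 = (v0 + v3) & MASK64; v3 = _rotl(v3, 21); v3 ^= v0
    v2 = (v2 + v1) & MASK64; v1 = _rotl(v1, 17); v1 ^= v2; v2 = _rotl(v2, 32)
    return v0, v1, v2, v3

def siphash24(key, data):
    """SipHash-2-4: 128-bit key (16 bytes), returns the 64-bit tag as an int."""
    assert len(key) == 16
    k0 = int.from_bytes(key[:8], "little")
    k1 = int.from_bytes(key[8:], "little")
    v0 = k0 ^ 0x736F6D6570736575
    v1 = k1 ^ 0x646F72616E646F6D
    v2 = k0 ^ 0x6C7967656E657261
    v3 = k1 ^ 0x7465646279746573
    # Pad with zeros to a multiple of 8 bytes; the last byte holds len(data) mod 256.
    padded = data + b"\x00" * (7 - (len(data) % 8)) + bytes([len(data) % 256])
    for i in range(0, len(padded), 8):
        m = int.from_bytes(padded[i:i + 8], "little")
        v3 ^= m
        for _ in range(2):                      # c = 2 compression rounds
            v0, v1, v2, v3 = _sipround(v0, v1, v2, v3)
        v0 ^= m
    v2 ^= 0xFF
    for _ in range(4):                          # d = 4 finalization rounds
        v0, v1, v2, v3 = _sipround(v0, v1, v2, v3)
    return v0 ^ v1 ^ v2 ^ v3

# Key 00..0f, message 00..0e: the test vector in the SipHash paper's appendix
# reports 0xa129ca6149be45e5 for this input.
print(hex(siphash24(bytes(range(16)), bytes(range(15)))))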



21:17 [Pub][ePrint] On Hashing Graphs, by Ashish Kundu, Elisa Bertino

  Collision-resistant one-way hashing schemes are basic building blocks of almost all crypto-systems. The use of graph-structured data models is on the rise -- in graph databases, in the representation of biological and healthcare data, and in modeling system topologies. Therefore, the problem of hashing graphs with respect to crypto-systems needs to be studied and addressed. The traditional Merkle hash technique cannot be applied as-is, because graphs are more complex data structures than trees. In this paper, we make the following contributions: (1) we define the formal security model of hashing schemes for graphs, (2) we define the formal security model of leakage-free hashing schemes for graphs, (3) we describe a hashing scheme for directed and undirected graphs that uses the Merkle hash technique, (4) we describe a hashing scheme that uses structural information instead of the Merkle hash technique, and (5) we define leakage-free hashing schemes for graphs. Our constructions use graph traversal techniques and are highly efficient with respect to updates to graphs: they require as few as two (O(1)) hash values to be updated to refresh the hash of the graph, whereas the Merkle hash technique for trees and the Search DAG scheme for DAGs require as many as O(|V|) and O(|V|+|E|) updates, respectively.
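As a point of reference for the Merkle-style approach mentioned in contribution (3), the sketch below, a toy illustration rather than the paper's construction, hashes a directed acyclic graph by combining each node's label with the sorted hashes of its successors in a memoized post-order traversal; the graph representation and label encoding are assumptions made for this example.

import hashlib

def merkle_hash_dag(nodes, edges):
    """Toy Merkle-style hash of a DAG.

    nodes: dict mapping node id -> label (str)
    edges: dict mapping node id -> list of successor node ids
    Returns per-node hashes; the hash of a node covers its label and,
    recursively, everything reachable from it.
    """
    memo = {}

    def node_hash(v):
        if v in memo:
            return memo[v]
        h = hashlib.sha256()
        h.update(nodes[v].encode())
        # Sort child hashes so the result does not depend on edge order.
        for child_digest in sorted(node_hash(w) for w in edges.get(v, [])):
            h.update(child_digest)
        memo[v] = h.digest()
        return memo[v]

    for v in nodes:
        node_hash(v)
    return memo

# A small DAG: a -> b, a -> c, b -> c
nodes = {"a": "root", "b": "mid", "c": "leaf"}
edges = {"a": ["b", "c"], "b": ["c"]}
hashes = merkle_hash_dag(nodes, edges)
print({v: h.hex()[:16] for v, h in hashes.items()})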



21:17 [Pub][ePrint] On Reconfigurable Fabrics and Generic Side-Channel Countermeasures, by Robert Beat and Philipp Grabher and Dan Page and Stefan Tillich and Marcin Wójcik

  The use of field programmable devices in security-critical applications is growing in popularity; in part, this can be attributed to their potential for balancing metrics such as efficiency and algorithm agility. However, in common with non-programmable alternatives, physical attack techniques such as fault and power analysis are a threat. We investigate a family of next-generation field programmable devices, specifically those based on the concept of time sharing, within this context: our results support the premise that the extra, inherent flexibility of such devices can offer a range of possibilities for low-overhead, generic countermeasures against physical attack.



21:17 [Pub][ePrint] Hash Combiners for Second Pre-Image Resistance, Target Collision Resistance and Pre-Image Resistance have Long Output, by Arno Mittelbach

  A $(k,l)$ hash-function combiner for property $P$ is a construction that, given access to $l$ hash functions, yields a single cryptographic hash function which has property $P$ as long as at least $k$ out of the $l$ hash functions have that property. Hash-function combiners are used to hedge against the failure of one or more of the individual components. One example of the application of hash-function combiners is provided by previous versions of the TLS and SSL protocols \cite{RFC:6101,RFC:5246}.

The concatenation combiner, which simply concatenates the outputs of all hash functions, is an example of a robust combiner for collision resistance. However, its output length is, naturally, significantly longer than each individual hash-function output, while the security bounds are not necessarily stronger than those of the strongest input hash function. In 2006, Boneh and Boyen asked whether there can exist a robust black-box combiner for collision resistance whose output length is significantly shorter than that of the concatenation combiner \cite{C:BonBoy06}. Regrettably, this question has since been answered in the negative for fully black-box constructions (where hash-function and adversary access is treated as black-box); that is, combiners for collision resistance in this setting roughly need an output at least as long as that of the concatenation combiner in order to be robust \cite{C:BonBoy06,C:CRSTVW07,EC:Pietrzak07,C:Pietrzak08}.
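For concreteness, the following minimal sketch, an illustration rather than a construction from the paper, instantiates the concatenation combiner with SHA-256 and SHA3-256 as the two component hash functions (an arbitrary choice for the example): a collision for the combined function is simultaneously a collision for both components, but the output length is the sum of the component output lengths.

import hashlib

def concat_combiner(msg: bytes) -> bytes:
    """Concatenation combiner C(m) = H1(m) || H2(m).

    C is collision resistant as long as at least one of H1, H2 is:
    a collision for C collides under both H1 and H2 simultaneously.
    The price is the output length: 32 + 32 = 64 bytes here, instead of 32.
    """
    h1 = hashlib.sha256(msg).digest()
    h2 = hashlib.sha3_256(msg).digest()
    return h1 + h2

digest = concat_combiner(b"hello combiners")
print(len(digest), digest.hex())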

In this paper we examine weaker notions of collision resistance, namely \emph{second pre-image resistance}, \emph{target collision resistance} \cite{FSE:RogShr04}, and \emph{pre-image resistance}. A generic brute-force attack against any of these takes roughly $2^n$ queries to an $n$-bit hash function, in contrast to the roughly $2^{n/2}$ queries needed to break collision resistance (due to the birthday bound). This might suggest that combiners for these weaker notions of collision resistance can exist with a significantly shorter output than the concatenation combiner (which is, naturally, also robust for these properties). Regrettably, this is not the case.



21:17 [Pub][ePrint] Never trust a bunny, by Daniel J. Bernstein and Tanja Lange

  "Lapin" is a new RFID authentication protocol proposed at FSE 2012.

"Ring-LPN" (Ring-Learning-Parity-with-Noise) is a new computational problem proposed in the same paper; there is a proof relating the security of Lapin to the difficulty of Ring-LPN. This paper presents an attack against Ring-LPN-512 and Lapin-512. The attack is not practical but nevertheless violates specific security claims in the FSE 2012 paper.
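As background on the underlying problem family, the sketch below generates plain LPN samples $(a, \langle a,s\rangle \oplus e)$ over GF(2) with Bernoulli noise; Ring-LPN as used in Lapin replaces the random vectors by elements of a polynomial ring, which is not reproduced here. The dimension and noise rate are illustrative values, not the Lapin-512 parameters.

import random

def lpn_samples(s, num, tau, rng=random):
    """Generate plain LPN samples (a, <a,s> XOR e) over GF(2).

    s:   secret bit-vector (list of 0/1)
    num: number of samples
    tau: Bernoulli noise rate (probability that a sample is noisy)
    """
    n = len(s)
    samples = []
    for _ in range(num):
        a = [rng.randrange(2) for _ in range(n)]
        noiseless = sum(ai * si for ai, si in zip(a, s)) % 2  # inner product mod 2
        e = 1 if rng.random() < tau else 0                    # Bernoulli(tau) noise
        samples.append((a, noiseless ^ e))
    return samples

# Toy parameters for illustration only (nothing like Lapin-512's ring setting).
secret = [random.randrange(2) for _ in range(32)]
for a, b in lpn_samples(secret, num=3, tau=0.125):
    print(b, a)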



21:17 [Pub][ePrint] Fully Anonymous Attribute Tokens from Lattices, by Jan Camenisch and Gregory Neven and Markus Rückert

  Anonymous authentication schemes such as group signatures and anonymous credentials are important privacy-protecting tools in electronic communications. The only currently known scheme based on assumptions that resist quantum attacks is the group signature scheme by Gordon et al. (ASIACRYPT 2010). We present a generalization of group signatures called *anonymous attribute tokens*, where users are issued attribute-containing credentials that they can use to anonymously sign messages and generate tokens revealing only a subset of their attributes. We present two lattice-based constructions of this new primitive, one with and one without opening capabilities for the group manager. The latter construction directly yields, as a special case, the first lattice-based group signature scheme offering full anonymity (in the random-oracle model), as opposed to the practically less relevant notion of chosen-plaintext anonymity offered by the scheme of Gordon et al. We also extend our scheme to protect users from framing attacks by the group manager, where the latter creates tokens or signatures in the name of honest users. Our constructions involve new lattice-based tools for aggregating signatures and verifiable CCA2-secure encryption.



21:17 [Pub][ePrint] Publicly Verifiable Ciphertexts, by Juan Manuel González Nieto and Mark Manulis and Bertram Poettering and Jothi Rangasamy and Douglas Stebila

  In many applications where encrypted traffic flows from an open (public) domain to a protected (private) domain, there exists a gateway that bridges the two domains and faithfully forwards the incoming traffic to the receiver. We observe that indistinguishability against (adaptive) chosen-ciphertext attacks (IND-CCA), which is a mandatory goal in the face of active attacks in a public domain, can essentially be relaxed to indistinguishability against chosen-plaintext attacks (IND-CPA) once ciphertexts pass the gateway. The gateway acts as an IND-CCA/CPA filter: it first checks the validity of an incoming IND-CCA ciphertext, then transforms it (if valid) into an IND-CPA ciphertext, and forwards the latter to the recipient in the private domain. "Non-trivial filtering" can result in reduced decryption costs on the receivers' side.

We identify a class of encryption schemes with \emph{publicly verifiable ciphertexts} that admit generic constructions of (non-trivial) IND-CCA/CPA filters. These schemes are characterized by the existence of public algorithms that can distinguish between valid and invalid ciphertexts. To this end, we formally define (non-trivial) public verifiability of ciphertexts for general encryption schemes, key encapsulation mechanisms, and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavours. We further analyze the security impact of public verifiability and discuss generic transformations and concrete constructions that enjoy this property.
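To illustrate the filter pattern (not a construction from the paper), the toy sketch below uses ElGamal in a small prime-order subgroup together with a Schnorr proof of knowledge of the encryption randomness: anyone, including the gateway, can check the proof without any secret key, and a valid ciphertext is forwarded with the proof stripped off, leaving a plain (IND-CPA) ElGamal ciphertext for the receiver. The group parameters are tiny toy values chosen only so the example runs; nothing here is secure.

import hashlib, random

# Tiny toy group: p = 2q + 1 with q prime; g generates the order-q subgroup.
p, q = 1019, 509
g = pow(2, 2, p)          # a square, hence an element of the order-q subgroup
assert pow(g, q, p) == 1 and g != 1

def H(*ints):
    """Fiat-Shamir challenge: hash group elements to an exponent mod q."""
    data = b"|".join(str(i).encode() for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)                     # secret key x, public key y = g^x

def encrypt_with_proof(y, m):
    """ElGamal ciphertext plus a Schnorr proof of knowledge of the randomness r."""
    r = random.randrange(1, q)
    c1, c2 = pow(g, r, p), (m * pow(y, r, p)) % p
    w = random.randrange(1, q)                 # proof commitment
    a = pow(g, w, p)
    e = H(g, y, c1, c2, a)                     # challenge
    z = (w + e * r) % q                        # response
    return (c1, c2, a, z)

def gateway_filter(y, ct):
    """Publicly verify the proof; if valid, strip it and forward plain ElGamal."""
    c1, c2, a, z = ct
    e = H(g, y, c1, c2, a)
    if pow(g, z, p) != (a * pow(c1, e, p)) % p:
        raise ValueError("invalid ciphertext rejected at the gateway")
    return (c1, c2)                            # IND-CPA ciphertext for the receiver

def decrypt_cpa(x, ct):
    c1, c2 = ct
    return (c2 * pow(pow(c1, x, p), p - 2, p)) % p   # m = c2 / c1^x

x, y = keygen()
m = pow(g, 7, p)                               # encode messages as subgroup elements
forwarded = gateway_filter(y, encrypt_with_proof(y, m))
assert decrypt_cpa(x, forwarded) == m
print("gateway accepted the ciphertext and the receiver recovered the message")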



21:17 [Pub][ePrint] PICARO - A Block Cipher Allowing Efficient Higher-Order Side-Channel Resistance -- Extended Version --, by Gilles Piret and Thomas Roche and Claude Carlet

  Many papers deal with the problem of constructing an efficient masking scheme for existing block ciphers. We take the reverse approach: given a proven masking scheme (Rivain and Prouff, CHES 2010), we design a block cipher that fits the masking constraints well. The difficulty of implementing efficient masking for a block cipher comes mainly from the S-boxes; therefore the choice of an adequate S-box is the first and most critical step of our work. The S-box we selected is non-bijective; we discuss the resulting design and security problems. A complete design of the cipher is given, as well as some implementation results.
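For readers unfamiliar with masking, the sketch below shows plain first-order Boolean masking of a table-based S-box via table recomputation; this is a generic first-order countermeasure, not the higher-order Rivain-Prouff scheme the paper builds on, and the 4-bit S-box used is an arbitrary example.

import random

# An arbitrary 4-bit S-box used purely for illustration.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def masked_sbox_lookup(x_masked, m_in):
    """First-order masked S-box lookup by table recomputation.

    Input:  x_masked = x XOR m_in  (the device never handles x directly)
    Output: (y_masked, m_out) with y_masked = SBOX[x] XOR m_out
    """
    m_out = random.randrange(16)                      # fresh output mask
    # Recompute a masked table T[u] = SBOX[u XOR m_in] XOR m_out.
    T = [SBOX[u ^ m_in] ^ m_out for u in range(16)]
    return T[x_masked], m_out

# Demonstration: mask, look up, unmask, and compare with the plain S-box.
x = 0x7
m_in = random.randrange(16)
y_masked, m_out = masked_sbox_lookup(x ^ m_in, m_in)
assert y_masked ^ m_out == SBOX[x]
print("masked lookup matches SBOX[x] =", hex(SBOX[x]))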



21:17 [Pub][ePrint] Another look at non-uniformity, by Neal Koblitz and Alfred Menezes

  We argue that it is unnatural and undesirable to use the non-uniform model of complexity for practice-oriented security reductions in cryptography.



21:17 [Pub][ePrint] Multiple Differential Cryptanalysis using LLR and $\chi^2$ Statistics, by Céline Blondeau and Benoît Gérard and Kaisa Nyberg

  Recent block ciphers have been designed to be resistant against differential cryptanalysis. Nevertheless, it has been shown that such resistance claims may not be as tight as wished due to recent advances in this field. One of the main improvements to differential cryptanalysis is the use of many differentials to reduce the data complexity. In this paper we propose a general model for understanding multiple differential cryptanalysis and propose new attacks based on tools used in multidimensional linear cryptanalysis (namely the LLR and $\chi^2$ statistical tests). Practical cases are considered on a reduced version of the cipher PRESENT to evaluate different approaches for selecting and combining the differentials considered. We also consider the tightness of the theoretical estimates corresponding to these attacks.
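To make the two test statistics concrete, the sketch below computes, from observed counters, the log-likelihood ratio between a hypothesized (non-uniform) distribution and the uniform one, and the standard $\chi^2$ statistic against the uniform distribution; this is textbook statistics for illustration, not the distinguishers calibrated in the paper, and all numbers are made up for the example.

import math

def llr_statistic(counts, p, q):
    """Log-likelihood ratio  sum_i N_i * ln(p_i / q_i)  for observed counts N_i.

    Positive values favour distribution p over q; here p would play the role of
    the distribution predicted by the differentials and q the uniform one.
    """
    return sum(n * math.log(pi / qi) for n, pi, qi in zip(counts, p, q) if n)

def chi2_statistic(counts, q):
    """Pearson chi-squared statistic  sum_i (N_i - N*q_i)^2 / (N*q_i)."""
    total = sum(counts)
    return sum((n - total * qi) ** 2 / (total * qi) for n, qi in zip(counts, q))

# Toy example with 4 counter cells and illustrative distributions.
counts = [270, 250, 240, 240]                 # observed counters (N = 1000)
p = [0.28, 0.25, 0.24, 0.23]                  # hypothesized "cipher" distribution
q = [0.25, 0.25, 0.25, 0.25]                  # uniform ("wrong key") distribution
print("LLR  =", round(llr_statistic(counts, p, q), 4))
print("chi2 =", round(chi2_statistic(counts, q), 4))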