International Association for Cryptologic Research

# IACR News Central

Get updates on changes to the IACR webpage here. For questions, contact newsletter (at) iacr.org. You can also receive updates via:

You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).

2013-08-15
09:17 [Pub][ePrint]

The Trusted Platform Module (TPM) is designed to enable trustworthy computation and communication over open networks. The TPM provides a way to store cryptographic keys and other sensitive values in its shielded memory and to act as the *Root of Trust for Storage* (RTS). The TPM interacts with applications via a predefined set of commands (an API). In this paper, we give an abstraction model for the TPM 2.0 specification, concentrating on the Protected Storage part. After identifying and formalizing the secrecy properties of these stored values, we devise a type system with asymmetric cryptographic primitives to statically enforce and prove their security.

09:17 [Pub][ePrint]

After a series of works on RC4 cryptanalysis in the last few years (published in flagship cryptology conferences and journals), the most significant (and also very recent) attack on the cipher has been the discovery of vulnerabilities in the SSL/TLS protocol by AlFardan, Bernstein, Paterson, Poettering and Schuldt. They ran extensive computations to identify significant short-term single-byte keystream biases of RC4, and utilized that knowledge in the attack. The biases identified by AlFardan et al. consist of earlier known biases of RC4, as well as some newly discovered ones.

In this paper, we attempt to prove the new, unproved or partially proved biases amongst the above-mentioned ones. The theoretical proofs of these biases not only provide scientific justification, but also uncover intricate patterns and operations of the cipher associated with these biases. For example, while attempting the proof of a bias of the first output byte towards 129, we observe that this bias occurs prominently only for certain lengths of the secret key of RC4. In addition, our findings reveal that this bias may be related to the old and unsolved problem of "anomalies" in the distribution of the state array after the Key Scheduling Algorithm. In this connection, we prove the anomaly in $S_0[128] = 127$, a problem open for more than a decade.

Other than proving the new biases, we also complete the proof for the extended keylength dependent biases in RC4, a problem attempted and partially solved by Isobe, Ohigashi, Watanabe and Morii in FSE 2013. Our new proofs and observations in this paper, along with the connection to the older results, provide a comprehensive view on the state-of-the-art literature in RC4 cryptanalysis.
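The single-byte keystream biases discussed above can be measured empirically. The sketch below is our own illustration, not the paper's analysis: it implements the textbook RC4 KSA and PRGA and estimates the classical Mantin-Shamir bias $\Pr[Z_2 = 0] \approx 2/256$, one of the earlier known biases referred to above. All function names and trial counts are our own choices.

```python
import random

def ksa(key):
    """RC4 Key Scheduling Algorithm: derive the initial permutation S from the key."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def prga(S, n):
    """RC4 Pseudo-Random Generation Algorithm: emit n keystream bytes."""
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

def measure_bias(trials=20000, keylen=16):
    """Estimate Pr[Z_2 = 0] over random keys.

    For an ideal cipher this would be 1/256 ~ 0.0039; for RC4 the
    Mantin-Shamir result predicts roughly 2/256 ~ 0.0078."""
    hits = 0
    for _ in range(trials):
        key = [random.randrange(256) for _ in range(keylen)]
        if prga(ksa(key), 2)[1] == 0:
            hits += 1
    return hits / trials
```

Running `measure_bias()` over a few tens of thousands of random keys reliably shows a frequency close to twice the uniform value, which is exactly the kind of short-term single-byte bias the attacks above exploit.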

2013-08-14
15:17 [Pub][ePrint]

We present an efficient algorithm that can distinguish the keystream of WPA from that of a generic instance of RC4 with a packet complexity of $O(N^2)$, where $N$ denotes the size of the internal permutation of RC4. In practice, our distinguisher requires approximately $2^{19}$ packets, making it the best known distinguisher of WPA to date. This significantly improves on the previous WPA distinguisher, identified by Sepehrdad, Vaudenay and Vuagnoux at Eurocrypt 2011, which requires more than $2^{40}$ packets in practice. The motivation for our distinguisher arises from the recent observations on WPA by AlFardan, Bernstein, Paterson, Poettering and Schuldt, and this work puts forward an example of how an experimental bias may lead to an efficient theoretical distinguisher.

15:17 [Pub][ePrint]

Broadcast encryption is conventionally formalized as broadcast encapsulation in which, instead of a ciphertext, a session key is produced, which is required to be indistinguishable from random. Such a scheme can provide public encryption functionality in combination with a symmetric encryption scheme through the hybrid encryption paradigm. The Boneh-Gentry-Waters scheme of 2005 proposed a broadcast scheme with constant-size ciphertext. It is one of the most efficient broadcast encryption schemes regarding overhead size. In this work we consider the improved scheme of Phan-Pointcheval-Shahandashti-Strefler [PPSS12], which provides an adaptively CCA-secure broadcast encryption scheme. These two schemes may be tweaked to use bilinear pairings [DGS].

This document details our choices for the implementation of the PPSS scheme. We provide a complete golden sequence of the protocol with efficient pairings (Tate, Ate and Optimal Ate). We target a 128-bit security level, hence we use a BN curve [BN06]. The aim of this work is to contribute to the use and the standardization of the PPSS scheme and of pairings in concrete systems.
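The hybrid encryption paradigm mentioned above can be sketched as a KEM/DEM composition: the encapsulation produces a session key, and a symmetric scheme encrypts the payload under it. The toy code below is our own illustration, not the BGW or PPSS construction; `toy_encapsulate`, the hash-counter keystream, and all names are hypothetical stand-ins.

```python
import os
import hashlib

def toy_encapsulate():
    """Stand-in for a broadcast KEM such as BGW/PPSS.

    In the real scheme the header is computed from the public key and the
    receiver set, and only authorised receivers can recover the session key.
    Here the 'header' is just the key itself -- NOT secure, purely structural."""
    session_key = os.urandom(16)
    header = session_key
    return header, session_key

def keystream(key, n):
    """Derive n pseudorandom bytes by hashing the key with a counter."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def dem_encrypt(session_key, msg):
    """DEM: XOR the message with a keystream derived from the session key."""
    return bytes(a ^ b for a, b in zip(msg, keystream(session_key, len(msg))))

header, k = toy_encapsulate()
ct = dem_encrypt(k, b"broadcast payload")
pt = dem_encrypt(k, ct)  # XOR stream cipher: decryption = encryption
```

The design point is that the expensive public-key operation is performed once per broadcast to transport a short session key, while the bulk payload is handled symmetrically.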

15:17 [Pub][ePrint]

Mechanisms for secure mobile communication can be enablers for novel applications in the area of cooperative work. In this context, this article investigates, as an example, an emergency management setting. Efficient support of emergency communication is of high practical importance, but comes with specific challenges: unpredictable local crisis situations complicate the establishment of communication structures, legal requirements dictate the use of end-to-end secure and documentable approaches, and end users demand user-friendliness and privacy protection.

Dealing with these challenges, the contribution of this article is two-fold. Firstly, together with emergency practitioners, we follow a participatory design approach: we define security requirements and patterns for efficient communication, and derive a design proposal. Secondly, we devise a novel approach to multilaterally end-to-end secure, user-friendly attribute-based messaging for one-to-many communication. It builds on a hybrid encryption technique which efficiently combines ciphertext-policy attribute-based encryption, location-based encryption and symmetric encryption. The hybrid encryption approach supports dynamic location attributes as user-friendly selectors for targeted messaging with dynamic groups of mobile and anonymous receivers. We discuss the security achieved by the approach as well as concrete application scenarios.
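The layered combination of attribute-based, location-based and symmetric encryption described above can be sketched structurally: the message key is wrapped so that unwrapping requires both an attribute-derived key and a location-derived key. The code below is a hypothetical toy of our own, not the article's CP-ABE construction; all names, labels and key derivations are illustrative assumptions.

```python
import os
import hashlib

def kdf(label, secret):
    """Toy key derivation: hash a label together with a secret."""
    return hashlib.sha256(label + secret).digest()

# Stand-ins for the two access-control layers:
attr_secret = os.urandom(16)      # held only by users satisfying the policy
loc_secret = b"52.52N,13.40E"     # derivable only at the target location

k_attr = kdf(b"attr", attr_secret)
k_loc = kdf(b"loc", loc_secret)

# Wrap a fresh symmetric message key under BOTH layers at once.
msg_key = os.urandom(32)
wrapped = bytes(m ^ a ^ l for m, a, l in zip(msg_key, k_attr, k_loc))

# A receiver holding both the attribute key and the location key unwraps:
recovered = bytes(w ^ a ^ l for w, a, l in zip(wrapped, k_attr, k_loc))
```

The sketch only shows the key-wrapping shape; in the actual approach the attribute layer is ciphertext-policy ABE, so the "attribute key" exists only implicitly for users whose attributes satisfy the sender's policy.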

15:17 [Pub][ePrint]

Quantum Readout PUFs (QR-PUFs) have been proposed as a technique for remote authentication of objects. The security is based on basic quantum information theoretic principles and the assumption that the adversary cannot efficiently implement arbitrary unitary transformations.

We analyze the security of QR-PUF schemes in the case where each challenge consists of precisely $n$ quanta and the dimension $K$ of the Hilbert space is larger than $n^2$. We consider a class of attacks where the adversary first tries to learn as much as possible about the challenge and then bases his response on his estimate of the challenge. For this class of attacks we derive an upper bound on the adversary's success probability as a function of $K$ and $n$.

15:17 [Pub][ePrint]

We put forward a new approach for the design of efficient multiparty protocols:

1. Design a protocol for a small number of parties (say, 3 or 4) which achieves security against a single corrupted party. Such protocols are typically easy to construct, as they may employ techniques that do not scale well with the number of corrupted parties.

2. Recursively compose the protocol with itself to obtain an efficient n-party protocol which achieves security against a constant fraction of corrupted parties.

The second step of our approach combines the player emulation technique of Hirt and Maurer (J. Cryptology, 2000) with constructions of logarithmic-depth formulae which compute threshold functions using only constant fan-in threshold gates.

Using this approach, we simplify and improve on previous results in cryptography and distributed computing. In particular:

- We provide conceptually simple constructions of efficient protocols for Secure Multiparty Computation (MPC) in the presence of an honest majority, as well as broadcast protocols from point-to-point channels and a 2-cast primitive.

- We obtain new results on MPC over blackbox groups and other algebraic structures.

The above results rely on the following complexity-theoretic contributions, which may be of independent interest:

- We show that for all integers j, k such that m = (k-1)/(j-1) is an integer, there is an explicit (poly(n)-time) construction of a logarithmic-depth formula which computes a good approximation of an (n/m)-out-of-n threshold function using only j-out-of-k threshold gates and no constants.

- For the special case of n-bit majority from 3-bit majority gates, a non-explicit construction follows from the work of Valiant (J. Algorithms, 1984). For this special case, we provide an explicit construction with a better approximation than for the general threshold case, and also an exact explicit construction based on standard complexity-theoretic or cryptographic assumptions.
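The amplification effect behind Valiant-style majority-from-MAJ3 constructions can be illustrated numerically. The sketch below is our own illustration, not the paper's explicit construction: in a balanced formula of independent 3-input majority gates whose leaves are i.i.d. bits that equal 1 with probability p, each gate layer maps p to g(p) = 3p² − 2p³, and iterating g drives any p away from 1/2 toward 0 or 1, which is why logarithmic depth suffices for a good approximation of majority.

```python
def maj3_amplify(p, depth):
    """Probability that a balanced depth-`depth` formula of independent
    3-input majority gates outputs 1, when every leaf is an independent
    bit equal to 1 with probability p.

    MAJ3 of three independent p-biased bits is 1 with probability
    p^3 + 3 p^2 (1-p) = 3p^2 - 2p^3, so one layer applies that map."""
    for _ in range(depth):
        p = 3 * p * p - 2 * p ** 3
    return p

# A leaf bias of 0.6 is driven toward 1 and 0.4 toward 0, while
# the fixed point 0.5 stays put:
print(maj3_amplify(0.6, 10), maj3_amplify(0.4, 10), maj3_amplify(0.5, 10))
```

The map has fixed points at 0, 1/2 and 1, and the convergence away from 1/2 is doubly exponential in the depth, so O(log n) layers already separate inputs whose leaf bias is bounded away from 1/2.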

15:17 [Pub][ePrint]

This short note describes a key-recovery attack against a multivariate quadratic cryptosystem proposed by Huang, Liu, and Yang (PKC 2012). Our attack runs lattice-basis reduction algorithms on a lattice constructed from the keys in the cryptosystem. The attack takes less than 20 minutes for the proposed parameter sets, which are expected to provide 80-bit and 128-bit security.

15:17 [Pub][ePrint]

Proving bounds on the expected differential probability (EDP) of a characteristic over all keys has been a popular technique of arguing security for both block ciphers and hash functions. In fact, to a large extent, it was the clear formulation and elegant deployment of this very principle that helped Rijndael win the AES competition. Moreover, most SHA-3 finalists have come with explicit upper bounds on the EDP of a characteristic as a major part of their design rationale. However, despite the pervasiveness of this design approach, there is no understanding of what such bounds actually mean for the security of a primitive once a key is fixed --- an essential security question in practice.

In this paper, we aim to bridge this fundamental gap. Our main result is a quantitative connection between a bound on the EDP of differential characteristics and the highest number of input pairs that actually satisfy a characteristic for a fixed key. This is particularly important for the design of permutation-based hash functions such as sponge functions, where, in the absence of rekeying, the EDP value itself is not informative. We apply our theoretical result to revisit the security arguments of some prominent recent block ciphers and hash functions. For most of those, we have good news: a characteristic is followed by a small number of pairs only. For Keccak, though, considerably more rounds would currently be needed for our technique to guarantee any reasonable maximum number of pairs.

Thus, our work sheds light, for the first time, on the fixed-key differential behaviour of block ciphers in general and substitution-permutation networks in particular, which has been a long-standing fundamental problem in symmetric-key cryptography.

15:17 [Pub][ePrint]

Coppersmith described at Eurocrypt '96 a polynomial-time algorithm for finding small roots of univariate modular equations, based on lattice reduction. In this paper we describe the first improvement of the asymptotic complexity of Coppersmith's algorithm. Our method takes advantage of Coppersmith's matrix structure in order to apply the LLL algorithm to a matrix whose elements are smaller than those of Coppersmith's original matrix. Using the $L^2$ algorithm, the asymptotic complexity of our method is $O(\log^{6+\epsilon} N)$ for any $\epsilon > 0$, instead of $O(\log^{8+\epsilon} N)$ previously. Furthermore, we devise a method that allows one to speed up the exhaustive search which is usually performed to reach Coppersmith's theoretical bound. Our approach reuses the LLL computation performed to test one guess in order to reduce the complexity of the LLL computation for the next guess. Experimental results confirm that this leads to a considerable performance improvement.
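The matrix structure referred to above can be made concrete. The sketch below builds the standard Coppersmith/Howgrave-Graham lattice basis (not the paper's improved variant; all function names are our own): rows are the coefficient vectors of $x^j f(x)^i N^{h-i}$, with the coefficient of $x^k$ scaled by $X^k$ where $X$ bounds the root size. Each such polynomial vanishes modulo $N^h$ at any root of $f$ modulo $N$, and for monic $f$ the basis is lower-triangular, which is the structure LLL-based analyses exploit.

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (low degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def coppersmith_matrix(f, N, h, X):
    """Build the lattice basis for monic f of degree d, multiplicity h, bound X.

    Row (i, j) holds the coefficients of x^j * f(x)^i * N^(h-i),
    with column k scaled by X^k.  Dimension is d*h and, ordered by
    degree, the matrix is lower-triangular."""
    d = len(f) - 1
    dim = d * h
    rows = []
    for i in range(h):
        fi = [1]
        for _ in range(i):
            fi = poly_mul(fi, f)
        for j in range(d):
            g = poly_mul(fi, [0] * j + [1])       # x^j * f(x)^i
            g = [c * N ** (h - i) for c in g]     # * N^(h-i)
            rows.append([g[k] * X ** k if k < len(g) else 0
                         for k in range(dim)])
    return rows

# Example: f(x) = x^2 + 3x + 5 mod N = 101, multiplicity h = 3, root bound X = 4.
M = coppersmith_matrix([5, 3, 1], 101, 3, 4)
```

Because the matrix is triangular, its determinant is the product of the diagonal entries $N^{h-i} X^{id+j}$, which is what makes the LLL output-size analysis (and hence the complexity accounting discussed above) tractable.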

15:17 [Pub][ePrint]

In past years a body of data integrity checking techniques has been proposed for securing cloud data services. Most of these works assume that only the data owner can modify cloud-stored data. Recently, a few attempts have started to consider more realistic scenarios by allowing multiple cloud users to modify data with integrity assurance. However, these attempts are still far from practical due to the tremendous computational cost imposed on cloud users. Moreover, collusion between misbehaving cloud servers and revoked users is not considered. This paper proposes a novel data integrity checking scheme characterized by multi-user modification, collusion resistance and a constant computational cost of integrity checking for cloud users, thanks to our novel design of polynomial-based authentication tags and proxy tag update techniques. Our scheme also supports public checking and efficient user revocation, and is provably secure. Numerical analysis and extensive experimental results show the efficiency and scalability of our proposed scheme.
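Polynomial-based authentication tags, in their simplest form, evaluate the data blocks as the coefficients of a polynomial at a secret point. The toy sketch below is our own illustration, not the paper's scheme (which additionally supports proxy updates, public checking and revocation); it only shows the linearity property that makes updating a tag after a single-block modification cheap.

```python
P = 2 ** 61 - 1  # Mersenne prime used as the field modulus (our choice)

def poly_tag(blocks, secret):
    """Toy polynomial tag: evaluate sum(blocks[i] * secret^i) mod P
    via Horner's rule."""
    tag = 0
    for b in reversed(blocks):
        tag = (tag * secret + b) % P
    return tag

def update_tag(tag, index, old, new, secret):
    """Update the tag after block `index` changes from `old` to `new`,
    without touching the other blocks: the tag is linear in each
    coefficient, so only (new - old) * secret^index is added."""
    return (tag + (new - old) * pow(secret, index, P)) % P
```

This constant-cost update for a block change is the kind of algebraic property that lets a proxy maintain tags on behalf of users without recomputing over the whole file.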