International Association for Cryptologic Research

# IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2012-07-06
21:17 [Pub][ePrint]

Let $\G$ be a group of prime order $q$, and let $g_1,\ldots,g_n$ be random elements of $\G$. We say that a vector $\vecx = (x_1,\ldots,x_n)\in \Z_q^n$ is a {\em discrete log representation} of some element $y\in\G$ (with respect to $g_1,\ldots,g_n$) if $g_1^{x_1}\cdots g_n^{x_n} = y$. Any element $y$ has many discrete log representations, forming an affine subspace of $\Z_q^n$. We show that these representations have a nice {\em continuous leakage-resilience} property as follows. Assume some attacker $\A(g_1,\ldots,g_n,y)$ can repeatedly learn $L$ bits of information on arbitrarily many random representations of $y$. That is, $\A$ adaptively chooses polynomially many leakage functions $f_i:\Z_q^n\rightarrow \{0,1\}^L$, and learns the value $f_i(\vecx_i)$, where $\vecx_i$ is a {\em fresh and random} discrete log representation of $y$. $\A$ wins the game if it eventually outputs a valid discrete log representation $\vecx^*$ of $y$. We show that if the discrete log assumption holds in $\G$, then no polynomially bounded $\A$ can win this game with non-negligible probability, as long as the leakage on each representation is bounded by $L\approx (n-2)\log q = (1-\frac{2}{n})\cdot |\vecx|$.

As direct extensions of this property, we design very simple continuous leakage-resilient (CLR) one-way function (OWF) and public-key encryption (PKE) schemes in the so-called ``invisible key update'' model introduced by Alwen et al. at CRYPTO'09. Our CLR-OWF is based on the standard Discrete Log assumption and our CLR-PKE is based on the standard Decisional Diffie-Hellman assumption. Prior to our work, such schemes could only be constructed in groups with a bilinear pairing.

As another surprising application, we show how to design the first leakage-resilient {\em traitor tracing} scheme, where no attacker, getting the secret keys of a small subset of decoders (called ``traitors'') {\em and} bounded leakage on the secret keys of all other decoders, can create a valid decryption key which will not be traced back to at least one of the traitors.
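A tiny numerical sketch may help make the object concrete. The parameters below are toy values chosen purely for illustration (nothing here is secure), and the sampling trick assumes the exponents $a_i$ of the generators are known, which is how the affine solution space can be parameterized:

```python
import random

# Toy parameters (illustration only; far too small for security)
p = 467    # safe prime, p = 2*q + 1
q = 233    # prime order of the subgroup of quadratic residues mod p
g = 4      # generator of that order-q subgroup
n = 4

# Sample generators g_i = g^{a_i}; knowing the exponents a_i is what lets
# us draw fresh random representations from the affine solution space.
a = [random.randrange(1, q) for _ in range(n)]
gs = [pow(g, ai, p) for ai in a]

s = random.randrange(1, q)
y = pow(g, s, p)

def fresh_representation():
    """Uniformly random x with g_1^{x_1} * ... * g_n^{x_n} = y (mod p)."""
    x = [random.randrange(q) for _ in range(n - 1)]
    partial = sum(ai * xi for ai, xi in zip(a, x)) % q
    x.append((s - partial) * pow(a[-1], -1, q) % q)  # solve last coordinate
    return x

def is_representation(x):
    acc = 1
    for gi, xi in zip(gs, x):
        acc = acc * pow(gi, xi, p) % p
    return acc == y

assert is_representation(fresh_representation())
assert is_representation(fresh_representation())
```

Each call to `fresh_representation` fixes $n-1$ coordinates uniformly and solves the single linear constraint for the last one, which is exactly the affine-subspace structure the abstract describes.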

21:17 [Pub][ePrint]

In this paper we present a comprehensive comparison of all Round 3 SHA-3 candidates and the current standard SHA-2 from the point of view of hardware performance in modern FPGAs. Each algorithm is implemented using multiple architectures based on the concepts of iteration, folding, unrolling, pipelining, and circuit replication. Trade-offs between speed and area are investigated, and the best architecture from the point of view of the throughput to area ratio is identified. Finally, all algorithms are ranked based on their overall performance in FPGAs. The characteristic features of each algorithm important from the point of view of its implementation in hardware are identified.

21:17 [Pub][ePrint]

We give details of the factorization of RSA-704 with CADO-NFS. This is a record computation with publicly available software tools. The aim of this experiment was to stress CADO-NFS - which was originally designed for 512-bit factorizations - with larger inputs, and to identify possible avenues for improvement.

21:17 [Pub][ePrint]

The Boneh-Gentry-Waters (BGW) scheme is one of the most efficient broadcast encryption schemes with respect to overhead size. This performance relies on the use of a pairing, so the protocol can benefit from public-key improvements. The ciphertext is of constant size, whatever the proportion of revoked users. The main remaining constraint is the computation time at the receiver end, as it depends on the number of revoked users. In this paper we describe two modifications that improve the bandwidth and time complexity of BGW. First, we rewrite the protocol and its security proof with an asymmetric pairing over Barreto-Naehrig (BN) curves instead of a symmetric one over supersingular curves. This modification yields a practical gain of 60% in speed and 84% in bandwidth. The second tweak reduces the computation time from $O(n-r)$ to $\min(O(r), O(n-r))$ in the worst case (and better in the average case). We give performance measures of our implementation of the modified protocol at a 128-bit security level on a smartphone.

21:17 [Pub][ePrint]

We describe a method for efficiently hashing multiple messages of different lengths. Such computations occur in various scenarios, one of them being when an operating system checks the integrity of its components during boot time. These tasks can gain performance by parallelizing the computations and using SIMD architectures. For such scenarios, we compare the performance of a new 4-buffer SHA-256 S-HASH implementation to that of standard serial hashing. Our results are measured on the 2nd Generation Intel® Core™ Processor, and demonstrate SHA-256 processing at effectively ~5.2 cycles per byte when hashing from any of the three cache levels, or from system memory. This represents a speedup of 3.42x over OpenSSL (1.0.1), and of 2.25x over the recent and faster n-SMS method. For hashing from a disk, we show an effective rate of ~6.73 cycles/byte, which is almost 3 times faster than OpenSSL (1.0.1) under the same conditions. These results indicate that for some usage models, SHA-256 is significantly faster than commonly perceived.
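The scheduling idea behind multi-buffer hashing can be sketched with standard library tools. Note this is only an illustration of the interleaved schedule, not the paper's SIMD implementation: `hashlib` still processes the four states serially, whereas a 4-buffer implementation would run the four compression functions in parallel SIMD lanes.

```python
import hashlib

BLOCK = 64  # SHA-256 block size in bytes
messages = [bytes([i]) * 4096 for i in range(4)]  # four independent buffers
states = [hashlib.sha256() for _ in messages]

# Feed one block of each message per "round", as a 4-lane SIMD
# implementation would; here the lanes simply take turns.
for off in range(0, 4096, BLOCK):
    for st, msg in zip(states, messages):
        st.update(msg[off:off + BLOCK])

digests = [st.hexdigest() for st in states]
# Interleaving the schedule does not change each independent digest:
assert digests == [hashlib.sha256(m).hexdigest() for m in messages]
```

The assertion makes the key point: multi-buffer hashing is a throughput optimization only, and each message's digest is identical to its serial SHA-256 hash.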

21:17 [Pub][ePrint]

We study the slow diffusion of the AES key schedule for 256-bit keys and find a weakness that can be used in a preimage attack on the Davies-Meyer mode. Our preimage attack works for 8 rounds of AES-256 with computational complexity $2^{124.9}$, while the best previous attack works for 7 rounds of AES-256. It also extends to preimage attacks on some well-known double-block-length hash modes, assuming the underlying block cipher is 8-round AES-256, with computational complexity $2^{252.9}$.

21:17 [Pub][ePrint]

We consider distributed private data analysis, where $n$ parties each holding some sensitive data wish to compute some aggregate statistics over all parties' data. We prove a tight lower bound for the private distributed summation problem. Our lower bound is strictly stronger than the prior lower-bound result by Beimel, Nissim, and Omri published in CRYPTO 2008. In particular, we show that any $n$-party protocol computing the sum over a sparse communication graph must incur an additive error of $\Omega(\sqrt{n})$ with constant probability, in order to defend against potential coalitions of compromised users. Furthermore, we show that in the client-server communication model, where all users communicate solely with an untrusted server, the additive error must be $\Omega(\sqrt{n})$, regardless of the number of messages or rounds. Both of our lower bounds, for the general setting and the client-to-server communication model, are strictly stronger than those of Beimel, Nissim and Omri, since we remove the assumption on the number of rounds (and also the number of messages in the client-to-server communication model). Our lower bounds generalize to the $(\epsilon, \delta)$ differential privacy notion, for reasonably small values of $\delta$.
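A quick simulation illustrates why $\sqrt{n}$ is the natural scale of the additive error. This is not the paper's protocol or proof; it is the standard upper-bound side of the story, where each client independently adds Laplace noise for local privacy and the server's sum accumulates noise of standard deviation $\Theta(\sqrt{n})$:

```python
import math
import random

def laplace(scale):
    # Laplace(scale) sampled as the difference of two exponentials
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

eps, n, trials = 1.0, 10_000, 200
errors = []
for _ in range(trials):
    # each client perturbs its own input; the server sees only the noisy sum
    total_noise = sum(laplace(1 / eps) for _ in range(n))
    errors.append(abs(total_noise))

avg_err = sum(errors) / trials
# the error concentrates around sqrt(2*n)/eps, i.e. Theta(sqrt(n))
print(f"avg additive error ~ {avg_err:.0f}, sqrt(n) = {math.sqrt(n):.0f}")
```

The lower bound above says this $\sqrt{n}$ loss is not an artifact of the naive scheme: in the client-server model no protocol can do asymptotically better.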

21:17 [Pub][ePrint]

With the launch of Mac OS X 10.7 (Lion), Apple introduced a volume encryption mechanism known as FileVault 2. Apple disclosed only marketing aspects of the closed-source software, e.g. its use of AES-XTS tweakable encryption, but a publicly available security evaluation and detailed description were unavailable until now.

We have performed an extensive analysis of FileVault 2 and we have been able to find all the algorithms and parameters needed to successfully read an encrypted volume. This allows us to perform forensic investigations on encrypted volumes using our own tools.

In this paper we present the architecture of FileVault 2, giving details of the key derivation, encryption process and metadata structures needed to perform the volume decryption. Besides the analysis of the system, we have also built a library that can mount a volume encrypted with FileVault 2. As a contribution to the research and forensic communities we have made this library open source.

Additionally, we present an informal security evaluation of the system and comment on some of the design and implementation features. Among others we analyze the random number generator used to create the recovery password. We have also analyzed the entropy of each 512-byte block in the encrypted volume and discovered that part of the user data was left unencrypted.
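The per-block entropy analysis mentioned above can be sketched in a few lines. The 512-byte block size is taken from the text; the sample data is synthetic and only stands in for encrypted versus unencrypted sectors:

```python
import math
import os
from collections import Counter

def block_entropy(block: bytes) -> float:
    """Shannon entropy in bits per byte of a data block."""
    n = len(block)
    counts = Counter(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A well-encrypted block is indistinguishable from random (~8 bits/byte);
# zeroed or plaintext regions score much lower, which is how left-over
# unencrypted user data stands out in a scan of the volume.
random_block = os.urandom(512)   # stand-in for a properly encrypted sector
zero_block = bytes(512)          # stand-in for an unencrypted blank region

assert block_entropy(random_block) > 7.0
assert block_entropy(zero_block) == 0.0
```

Scanning every 512-byte block of a volume with such a function and flagging low-entropy regions is a simple way to surface the kind of unencrypted data the analysis found.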

21:17 [Pub][ePrint]

This paper introduces a special type of symmetric cryptosystem called multi-encryption scheme. It allows users to encrypt multiple plaintexts into a single ciphertext. Each plaintext is protected with its own secret key, meaning that they can be decrypted individually by applying the decryption function with the corresponding key to the ciphertext. Compared to encrypting the ciphertexts one-by-one using a standard symmetric cryptosystem, the main advantage of using a multi-encryption scheme is the no-search property, which guarantees that knowing the key is sufficient for decrypting a single plaintext. We show how to construct a multi-encryption scheme based on polynomials over finite fields. A possible application area is coercion-resistant electronic voting. To ensure a strong form of privacy, voters are equipped with multiple fake credentials, which are indistinguishable from the proper one. While theoretically sound, this requires a voter to perfectly recall multiple lengthy random numbers, and to know which of them is the proper one. To ensure 100% recall, users need to manage these numbers and keep them secret. A multi-encryption scheme is an elegant solution for this problem.
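One natural way to realize the polynomial idea is Lagrange interpolation: place each plaintext as the value of a polynomial over a finite field at the point given by its key, and publish the coefficient vector as the single ciphertext. This is a toy sketch of that idea, not necessarily the paper's exact construction, and the field modulus and key values are hypothetical:

```python
P = 2**127 - 1   # field modulus (a Mersenne prime), chosen for illustration

def poly_mul(a, b):
    """Multiply two polynomials (coefficient lists, lowest degree first) mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def encrypt(pairs):
    """Ciphertext = coefficients of the polynomial with f(key_i) = msg_i."""
    coeffs = [0] * len(pairs)
    for i, (ki, mi) in enumerate(pairs):
        basis = [1]      # Lagrange basis l_i: l_i(ki) = 1, l_i(kj) = 0
        denom = 1
        for j, (kj, _) in enumerate(pairs):
            if j != i:
                basis = poly_mul(basis, [-kj % P, 1])
                denom = denom * (ki - kj) % P
        scale = mi * pow(denom, -1, P) % P
        for d, c in enumerate(basis):
            coeffs[d] = (coeffs[d] + c * scale) % P
    return coeffs

def decrypt(coeffs, key):
    """Evaluate the ciphertext polynomial at the key (Horner's rule)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * key + c) % P
    return acc

keys = [123456789, 987654321, 555555555]
msgs = [42, 1001, 31337]
ct = encrypt(list(zip(keys, msgs)))
assert all(decrypt(ct, k) == m for k, m in zip(keys, msgs))
```

Decryption is a single polynomial evaluation at one's own key, which is the no-search property: no trial decryption or searching among sub-ciphertexts is needed.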

21:17 [Pub][ePrint]

We present the first formal mathematical presentation of the generalized Russian cards problem, and provide rigorous security definitions that capture both basic and extended versions of weak and perfect security notions. In the generalized Russian cards problem, three players, Alice, Bob, and Cathy, are dealt a deck of $n$ cards, each given $a$, $b$, and $c$ cards, respectively. The goal is for Alice and Bob to learn each other's hands via public communication, without Cathy learning the fate of any particular card. The basic idea is that Alice announces a set of possible hands she might hold, and Bob, using knowledge of his own hand, should be able to learn Alice's cards from this announcement, but Cathy should not. Using a combinatorial approach, we are able to give a nice characterization of informative strategies (i.e., strategies allowing Bob to learn Alice's hand) having optimal communication complexity, namely the set of possible hands Alice announces must be equivalent to a large set of $t$-$(n, a, 1)$-designs, where $t=a-c$. We also provide some interesting necessary conditions for certain types of deals to be simultaneously informative and secure. That is, for deals satisfying $c = a-d$ for some $d \geq 2$, where $b \geq d-1$ and the strategy is assumed to satisfy a strong version of security (namely perfect $(d-1)$-security), we show that $a = d+1$ and hence $c=1$. We also give a precise characterization of informative and perfectly $(d-1)$-secure deals of the form $(d+1, b, 1)$ satisfying $b \geq d-1$ involving $d$-$(n, d+1, 1)$-designs.

21:17 [Pub][ePrint]

Distributed key generation (DKG) has been studied extensively in the cryptographic literature. However, it has never been examined outside of the synchronous setting, and the known DKG protocols cannot guarantee safety or liveness over the Internet.

In this work, we present the first realistic DKG protocol for use over the Internet. We propose a practical system model for the Internet and define an efficient verifiable secret sharing (VSS) scheme in it. We observe the necessity of Byzantine agreement for asynchronous DKG and analyze the difficulty of using a randomized protocol for it. Using our VSS scheme and a leader-based agreement protocol, we then design a provably secure DKG protocol. We also consider and achieve cryptographic properties such as uniform randomness of the shared secret and compare static versus adaptive adversary models. Finally, we implement our DKG protocol, and establish its efficiency and reliability by extensively testing it on the PlanetLab platform. Counter to a general non-scalability perception about asynchronous systems, our experiments demonstrate that our asynchronous DKG protocol scales well with the system size and it is suitable for realizing multiparty computation and threshold cryptography over the Internet.
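The VSS building block that a DKG composes can be sketched with a standard Feldman-style scheme. This is only the synchronous, single-dealer primitive with toy parameters (the paper defines its own asynchronous VSS and handles networking, agreement, and Byzantine behavior, none of which appears here):

```python
import random

# Toy parameters (illustration only; real DKG uses cryptographic sizes)
p = 467    # safe prime, p = 2*q + 1
q = 233    # prime order of the subgroup generated by g
g = 4      # generator of that order-q subgroup

t, n = 2, 5    # degree-t sharing polynomial, n parties
coeffs = [random.randrange(q) for _ in range(t + 1)]
secret = coeffs[0]

def f(x):
    """Evaluate the sharing polynomial over GF(q)."""
    return sum(c * pow(x, k, q) for k, c in enumerate(coeffs)) % q

shares = {i: f(i) for i in range(1, n + 1)}

# Public commitments to the coefficients make each share verifiable.
commitments = [pow(g, c, p) for c in coeffs]

def verify(i, share):
    """Check g^share == prod_k C_k^(i^k) mod p, so a bad dealer is caught."""
    lhs = pow(g, share, p)
    rhs = 1
    for k, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(i, k, q), p) % p
    return lhs == rhs

def reconstruct(points):
    """Lagrange interpolation at 0 over GF(q) from t+1 shares."""
    total = 0
    for i, yi in points:
        num = den = 1
        for j, _ in points:
            if j != i:
                num = num * (-j) % q
                den = den * (i - j) % q
        total = (total + yi * num * pow(den, -1, q)) % q
    return total

assert all(verify(i, s) for i, s in shares.items())
assert reconstruct(list(shares.items())[:t + 1]) == secret
```

In a DKG, each party acts as a dealer of such a sharing and the final key is the sum of the qualified dealers' secrets, so no single party ever knows the full key; the hard part, which the paper addresses, is agreeing on the qualified set asynchronously.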