International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources for staying up to date are CryptoDB, the ePrint RSS feed, the ePrint web interface, and the event calendar (iCal).

2012-07-06
21:17 [Pub][ePrint]

In a cloud storage service, clients upload their data together with authentication information to a cloud storage server. To ensure the availability and integrity of clients' stored data, the cloud server (CS) must prove to a verifier that it is actually storing all of the client's data unchanged. Moreover, enabling public auditability for cloud storage is of critical importance to users with constrained computing resources, who can resort to a third-party auditor (TPA) to check the integrity of outsourced data. However, most existing proof-of-retrievability and proof-of-data-possession schemes do not consider the data privacy problem. Zero-knowledge privacy requires that neither the TPA nor an adversary can deduce any information about the file data from the auditing system. In this paper, we point out that the second scheme in [16] cannot provide zero-knowledge privacy as claimed. After giving a new construction of a recently proposed cryptographic primitive named aggregatable signature based broadcast ($ASBB$) encryption, we present an efficient public auditing scheme with zero-knowledge privacy. The new scheme is as efficient as the scheme presented by Shacham and Waters, which does not consider privacy, and is secure in the random oracle model.

21:17 [Pub][ePrint]

We present a compiler that converts any circuit into one that remains secure even if a constant fraction of its wires are tampered with. Following the seminal work of Ishai et al. (Eurocrypt 2006), we consider adversaries who may choose an arbitrary set of wires to corrupt, and may set each such wire to 0 or 1, or may toggle it. We prove that such adversaries, who continuously tamper with the circuit, can learn at most logarithmically many bits of secret information (in addition to black-box access to the circuit). Our results are information-theoretic.

21:17 [Pub][ePrint]

Let $\G$ be a group of prime order $q$, and let $g_1,\ldots,g_n$ be random elements of $\G$. We say that a vector $\vecx = (x_1,\ldots,x_n)\in \Z_q^n$ is a {\em discrete log representation} of some element $y\in\G$ (with respect to $g_1,\ldots,g_n$) if $g_1^{x_1}\cdots g_n^{x_n} = y$. Any element $y$ has many discrete log representations, forming an affine subspace of $\Z_q^n$. We show that these representations have a nice {\em continuous leakage-resilience} property, as follows. Assume some attacker $\A(g_1,\ldots,g_n,y)$ can repeatedly learn $L$ bits of information on arbitrarily many random representations of $y$. That is, $\A$ adaptively chooses polynomially many leakage functions $f_i:\Z_q^n\rightarrow \{0,1\}^L$, and learns the value $f_i(\vecx_i)$, where $\vecx_i$ is a {\em fresh and random} discrete log representation of $y$. $\A$ wins the game if it eventually outputs a valid discrete log representation $\vecx^*$ of $y$. We show that if the discrete log assumption holds in $\G$, then no polynomially bounded $\A$ can win this game with non-negligible probability, as long as the leakage on each representation is bounded by $L\approx (n-2)\log q = (1-\frac{2}{n})\cdot |\vecx|$.
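To make the notion concrete, here is a toy numeric sketch of a discrete log representation and of sampling fresh representations of the same $y$. All parameters are hypothetical and far too small for any real security; brute-forcing the last exponent is only feasible at these toy sizes.

```python
# Toy illustration of discrete log representations in the order-q
# subgroup of Z_p^*. Parameters are illustrative, not cryptographic.
import random

random.seed(1)
q = 101          # prime order of the subgroup
p = 607          # prime with q | p - 1  (606 = 6 * 101)
n = 4            # number of generators g_1..g_n

def subgroup_elem(seed):
    """Map a seed into the order-q subgroup of Z_p^* (retry if we hit 1)."""
    e = pow(seed, (p - 1) // q, p)
    return e if e != 1 else subgroup_elem(seed + 1)

gs = [subgroup_elem(2 + i) for i in range(n)]
x = [random.randrange(q) for _ in range(n)]      # one representation of y
y = 1
for g, xi in zip(gs, x):
    y = y * pow(g, xi, p) % p

def is_representation(vec):
    """Check g_1^{v_1} * ... * g_n^{v_n} == y."""
    acc = 1
    for g, v in zip(gs, vec):
        acc = acc * pow(g, v, p) % p
    return acc == y

def fresh_representation():
    """Sample a fresh random representation of y: pick the first n-1
    exponents at random, then brute-force the last one (toy sizes only)."""
    head = [random.randrange(q) for _ in range(n - 1)]
    target = y
    for g, v in zip(gs, head):
        target = target * pow(g, q - v, p) % p   # divide out g^{v}
    for e in range(q):
        if pow(gs[-1], e, p) == target:
            return head + [e]

assert is_representation(x)
assert is_representation(fresh_representation())
```

Any two such representations differ by a vector in the kernel of the map $\vecx \mapsto \prod g_i^{x_i}$, which is why they form an affine subspace.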

As direct extensions of this property, we design very simple continuous leakage-resilient (CLR) one-way function (OWF) and public-key encryption (PKE) schemes in the so-called "invisible key update" model introduced by Alwen et al. at CRYPTO'09. Our CLR-OWF is based on the standard Discrete Log assumption and our CLR-PKE is based on the standard Decisional Diffie-Hellman assumption. Prior to our work, such schemes could only be constructed in groups with a bilinear pairing.

As another surprising application, we show how to design the first leakage-resilient {\em traitor tracing} scheme, where no attacker, getting the secret keys of a small subset of decoders (called "traitors") {\em and} bounded leakage on the secret keys of all other decoders, can create a valid decryption key that will not be traced back to at least one of the traitors.

21:17 [Pub][ePrint]

In this paper we present a comprehensive comparison of all Round 3 SHA-3 candidates and the current standard SHA-2 from the point of view of hardware performance in modern FPGAs. Each algorithm is implemented using multiple architectures based on the concepts of iteration, folding, unrolling, pipelining, and circuit replication. Trade-offs between speed and area are investigated, and the architecture with the best throughput-to-area ratio is identified. Finally, all algorithms are ranked based on their overall performance in FPGAs, and the characteristic features of each algorithm that matter for hardware implementation are identified.

21:17 [Pub][ePrint]

We give details of the factorization of RSA-704 with CADO-NFS. This is a record computation with publicly available software tools. The aim of this experiment was to stress CADO-NFS, which was originally designed for 512-bit factorizations, on larger inputs, and to identify possible avenues for improvement.

21:17 [Pub][ePrint]

The Boneh-Gentry-Waters (BGW) scheme is one of the most efficient broadcast encryption schemes in terms of overhead size. This performance relies on the use of a pairing, so the protocol can benefit from improvements in pairing computation. The ciphertext is of constant size, whatever the proportion of revoked users. The main remaining constraint is the computation time at the receiver end, which depends on the number of revoked users. In this paper we describe two modifications that improve the bandwidth and time complexity of BGW. First, we rewrite the protocol and its security proof with an asymmetric pairing over Barreto-Naehrig (BN) curves instead of a symmetric one over supersingular curves. This modification leads to a practical gain of 60% in speed and 84% in bandwidth. The second tweak reduces the computation time from $O(n-r)$ to $\min(O(r),O(n-r))$ in the worst case (and does better in the average case). We give performance measurements of our implementation of the modified protocol on a smartphone at a 128-bit security level.

21:17 [Pub][ePrint]

We describe a method for efficiently hashing multiple messages of different lengths. Such computations occur in various scenarios, for example when an operating system checks the integrity of its components during boot. These tasks can gain performance by parallelizing the computations and using SIMD architectures. For such scenarios, we compare the performance of a new 4-buffer SHA-256 S-HASH implementation to that of standard serial hashing. Our results are measured on the 2nd Generation Intel® Core(TM) Processor, and demonstrate SHA-256 processing at effectively ~5.2 cycles per byte when hashing from any of the three cache levels or from system memory. This is a speedup of 3.42x over OpenSSL (1.0.1), and of 2.25x over the recent and faster n-SMS method. For hashing from disk, we show an effective rate of ~6.73 cycles/byte, almost 3 times faster than OpenSSL (1.0.1) under the same conditions. These results indicate that for some usage models, SHA-256 is significantly faster than commonly perceived.
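As a point of reference for the workload being parallelized, the sketch below hashes several independent buffers serially with Python's hashlib. The speedup described above comes from interleaving four such SHA-256 computations across SIMD lanes, which plain hashlib does not do; the buffer contents here are made up for illustration.

```python
# Serial baseline for the multi-buffer workload: hash four independent
# messages with SHA-256. A 4-buffer SIMD implementation batches these
# computations across vector lanes; here they simply run one after another.
import hashlib

messages = [b"boot-component-%d" % i * 4096 for i in range(4)]

def sha256_each(msgs):
    """Digest each buffer independently; this is what a SIMD kernel batches."""
    return [hashlib.sha256(m).hexdigest() for m in msgs]

digests = sha256_each(messages)
assert len(digests) == 4 and len(set(digests)) == 4
```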

21:17 [Pub][ePrint]

We study the slow diffusion of the AES key schedule for 256-bit keys and find a weakness that can be used in a preimage attack on Davies-Meyer mode. Our preimage attack works for 8 rounds of AES-256 with a computational complexity of $2^{124.9}$, while the best previous attack works for only 7 rounds of AES-256. It also extends to preimage attacks on some well-known double-block-length hash modes, assuming the underlying block cipher is 8-round AES-256, with a computational complexity of $2^{252.9}$.
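For readers unfamiliar with the mode, Davies-Meyer turns a block cipher $E$ into a compression function via $h_{i+1} = E_{m_i}(h_i) \oplus h_i$, with the message block acting as the cipher key. The sketch below shows only that feed-forward structure, using a deliberately insecure toy stand-in for AES-256:

```python
# Davies-Meyer compression: h_{i+1} = E_{m_i}(h_i) XOR h_i.
# toy_encrypt is a stand-in permutation (byte rotation + key-derived pad),
# NOT AES-256; it only serves to show the feed-forward structure.
import hashlib

BLOCK = 16  # bytes, mirroring the AES block size

def toy_encrypt(key: bytes, block: bytes) -> bytes:
    pad = hashlib.sha256(key).digest()[:BLOCK]
    rotated = block[1:] + block[:1]            # byte rotation: a permutation
    return bytes(a ^ b for a, b in zip(rotated, pad))

def davies_meyer(iv: bytes, message_blocks) -> bytes:
    h = iv
    for m in message_blocks:
        e = toy_encrypt(m, h)                  # message block is the cipher key
        h = bytes(a ^ b for a, b in zip(e, h)) # feed-forward XOR
    return h

digest = davies_meyer(b"\x00" * BLOCK, [b"A" * BLOCK, b"B" * BLOCK])
assert len(digest) == BLOCK
```

The feed-forward XOR is what makes the compression function hard to invert even though the block cipher itself is invertible given the key.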

21:17 [Pub][ePrint]

We consider distributed private data analysis, where $n$ parties, each holding some sensitive data, wish to compute some aggregate statistics over all parties' data. We prove a tight lower bound for the private distributed summation problem. Our lower bound is strictly stronger than the prior lower-bound result by Beimel, Nissim, and Omri published in CRYPTO 2008. In particular, we show that any $n$-party protocol computing the sum with a sparse communication graph must incur an additive error of $\Omega(\sqrt{n})$ with constant probability, in order to defend against potential coalitions of compromised users. Furthermore, we show that in the client-server communication model, where all users communicate solely with an untrusted server, the additive error must be $\Omega(\sqrt{n})$, regardless of the number of messages or rounds.

Both of our lower bounds, for the general setting and the client-to-server communication model, are strictly stronger than those of Beimel, Nissim, and Omri, since we remove the assumption on the number of rounds (and also the number of messages in the client-to-server communication model). Our lower bounds generalize to the $(\epsilon, \delta)$ differential privacy notion, for reasonably small values of $\delta$.
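To illustrate the scale of the bound (not the proof), here is a toy simulation of the client-server setting: each client perturbs its bit with independent Laplace noise before sending it to the untrusted server, and the error of the aggregated sum then concentrates around $\sqrt{n}$. All parameters below are illustrative.

```python
# Toy simulation of the client-server model: n clients each hold a bit,
# add independent Laplace noise locally, and send only the noisy value to
# an untrusted server. With independent per-client noise the aggregate
# error grows on the order of sqrt(n), the same scale as the lower bound.
import math
import random

random.seed(0)
n = 10_000
epsilon = 1.0
bits = [random.randint(0, 1) for _ in range(n)]

def laplace(scale):
    """Sample Laplace(0, scale) by inverse-CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

noisy = [b + laplace(1 / epsilon) for b in bits]       # what the server sees
error = abs(sum(noisy) - sum(bits))
print(f"additive error: {error:.1f}  (sqrt(n) = {math.sqrt(n):.1f})")
```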

21:17 [Pub][ePrint]

With the launch of Mac OS X 10.7 (Lion), Apple introduced a volume encryption mechanism known as FileVault 2. Apple only disclosed marketing aspects of the closed-source software, e.g. its use of AES-XTS tweakable encryption, but a publicly available security evaluation and detailed description was unavailable until now.

We have performed an extensive analysis of FileVault 2 and we have been able to find all the algorithms and parameters needed to successfully read an encrypted volume. This allows us to perform forensic investigations on encrypted volumes using our own tools.

In this paper we present the architecture of FileVault 2, giving details of the key derivation, encryption process and metadata structures needed to perform the volume decryption. Besides the analysis of the system, we have also built a library that can mount a volume encrypted with FileVault 2. As a contribution to the research and forensic communities we have made this library open source.
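As a flavor of what such a key-derivation step looks like in code, here is a PBKDF2-style password-to-key derivation using Python's standard library. The salt, iteration count, and key length are illustrative placeholders, not FileVault 2's actual parameters.

```python
# PBKDF2-style password-to-key derivation, sketching the kind of step that
# turns a user password into a volume key-encryption key. Salt, iteration
# count and key length here are illustrative, not FileVault 2's real values.
import hashlib
import os

password = b"correct horse battery staple"   # hypothetical user password
salt = os.urandom(16)                        # per-volume random salt
kek = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=16)
assert len(kek) == 16                        # 128-bit key-encryption key
```

The iteration count is what makes offline password guessing expensive; real systems tune it to the hardware.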

Additionally, we present an informal security evaluation of the system and comment on some of its design and implementation features. Among other things, we analyze the random number generator used to create the recovery password. We have also analyzed the entropy of each 512-byte block in the encrypted volume and discovered that part of the user data was left unencrypted.

21:17 [Pub][ePrint]

This paper introduces a special type of symmetric cryptosystem called a multi-encryption scheme. It allows users to encrypt multiple plaintexts into a single ciphertext. Each plaintext is protected with its own secret key, meaning that the plaintexts can be decrypted individually by applying the decryption function with the corresponding key to the ciphertext. Compared to encrypting the plaintexts one-by-one using a standard symmetric cryptosystem, the main advantage of using a multi-encryption scheme is the no-search property, which guarantees that knowing the key is sufficient for decrypting a single plaintext. We show how to construct a multi-encryption scheme based on polynomials over finite fields. A possible application area is coercion-resistant electronic voting. To ensure a strong form of privacy, voters are equipped with multiple fake credentials, which are indistinguishable from the proper one. While theoretically sound, this requires a voter to perfectly recall multiple lengthy random numbers and to know which of them is the proper one. To ensure 100% recall, users need to manage these numbers and keep them secret. A multi-encryption scheme is an elegant solution to this problem.
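One way to realize the polynomial idea is Lagrange interpolation over a prime field: the ciphertext is the unique polynomial through the points $(k_i, m_i)$, and decryption with key $k$ is evaluation at $k$. The sketch below is only illustrative; it omits the randomization a real scheme needs and may not match the paper's exact construction.

```python
# Illustrative multi-encryption via Lagrange interpolation over GF(p):
# the "ciphertext" is the coefficient list of the unique polynomial f with
# f(key_i) = plaintext_i; decrypting with key k means evaluating f(k).
# This toy omits the randomization a real scheme would need.
P = 2**61 - 1  # a Mersenne prime, our field modulus

def interpolate(points):
    """Coefficients (ascending) of the polynomial through (x, y) points mod P."""
    n = len(points)
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(points):
        basis = [1]                          # i-th Lagrange basis polynomial
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            nxt = [0] * (len(basis) + 1)
            for k, c in enumerate(basis):    # multiply basis by (x - xj)
                nxt[k] = (nxt[k] - xj * c) % P
                nxt[k + 1] = (nxt[k + 1] + c) % P
            basis = nxt
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, -1, P) % P   # yi / prod(xi - xj)
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs

def evaluate(coeffs, x):
    """Horner evaluation of the polynomial at x mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

keys = [1234567, 7654321, 4242424]           # one secret key per plaintext
plaintexts = [111, 222, 333]
ciphertext = interpolate(list(zip(keys, plaintexts)))   # a single ciphertext
assert [evaluate(ciphertext, k) for k in keys] == plaintexts
```

Evaluating at anything other than one of the keys yields a field element unrelated to any plaintext, which is the intuition behind the no-search property.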