International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2012-07-06
21:17 [Pub][ePrint] Zero-Knowledge Proofs with Low Amortized Communication from Lattice Assumptions, by Ivan Damgard and Adriana Lopez-Alt

  We construct zero-knowledge proofs of plaintext knowledge (PoPK) and correct multiplication (PoPC) for the Regev encryption scheme with low amortized communication complexity. Previous constructions of both PoPK and PoPC had communication cost linear in the size of the public key (roughly quadratic in the lattice dimension, ignoring logarithmic factors). Furthermore, previous constructions of PoPK suffered from one of the following weaknesses: either the message and randomness spaces were restricted, or there was a super-polynomial gap between the size of the message and randomness that an honest prover chooses and the size of which an accepting verifier is convinced. The latter weakness is also present in existing PoPC protocols.

In contrast, O(n) proofs (for lattice dimension n) in our PoPK and PoPC protocols have communication cost linear in the size of the public key. Thus, we improve the amortized communication cost of each proof by a factor linear in the security parameter. Furthermore, we allow the message space to be $\mathbb{Z}_p$ and the randomness distribution to be the discrete Gaussian, both of which are natural choices for the Regev encryption scheme. Finally, in our schemes there is no gap between the size of the message and randomness that an honest prover chooses and the size of which an accepting verifier is convinced.

Our constructions use the "MPC-in-the-head" technique of Ishai et al. (STOC 2007). At the heart of our constructions is a protocol for proving that a value is bounded by some publicly known bound. This uses Lagrange's four-square theorem, which states that any positive integer can be expressed as the sum of four squares (an idea previously used by Boudot (EUROCRYPT 2000)), as well as techniques from Cramer and Damgård (CRYPTO 2009).
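To illustrate just the arithmetic fact behind the bound proof (this is not part of the paper's protocol), the sketch below brute-forces a four-square decomposition in Python. Roughly speaking, a prover can convince a verifier that a value $v$ satisfies $0 \le v \le B$ by exhibiting, in committed form, four squares summing to $v$ and four squares summing to $B - v$; the function name `four_squares` and the brute-force search are my own illustrative choices.

```python
# Minimal sketch (not the paper's protocol): Lagrange's four-square theorem
# guarantees that every non-negative integer n equals a^2 + b^2 + c^2 + d^2.
# This brute-force search returns one such decomposition with a <= b <= c <= d.
import math

def four_squares(n: int) -> tuple:
    r = math.isqrt(n)
    for a in range(r + 1):
        for b in range(a, r + 1):
            for c in range(b, r + 1):
                rem = n - a * a - b * b - c * c
                if rem < 0:
                    break                       # c already too large for this (a, b)
                d = math.isqrt(rem)
                if d * d == rem and d >= c:     # keep the decomposition sorted
                    return (a, b, c, d)
    raise AssertionError("unreachable: every n >= 0 has a decomposition")

print(four_squares(30))   # (0, 1, 2, 5), since 0 + 1 + 4 + 25 = 30
```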



21:17 [Pub][ePrint] Public Auditing for Ensuring Cloud Data Storage Security With Zero Knowledge Privacy, by Wang Shao-hui, Chen Dan-wei, Wang Zhi-wei, Chang Su-qin

  In cloud storage services, clients upload their data together with authentication information to a cloud storage server. To ensure the availability and integrity of clients' stored data, the cloud server (CS) must prove to a verifier that it is actually storing all of the client's data unchanged. Moreover, enabling public auditability for cloud storage is of critical importance to users with constrained computing resources, who can resort to a third-party auditor (TPA) to check the integrity of outsourced data. However, most existing proof-of-retrievability or proof-of-data-possession schemes do not consider the data privacy problem. Zero-knowledge privacy requires that neither the TPA nor the adversary can deduce any information about the file data from the auditing system. In this paper, we point out that the second scheme in [16] cannot provide zero-knowledge privacy as claimed. After giving a new construction of a recently proposed cryptographic primitive named aggregatable signature-based broadcast ($ASBB$) encryption, we present an efficient public auditing scheme with zero-knowledge privacy. The new scheme is as efficient as the scheme presented by Shacham and Waters without considering privacy, and is secure in the random oracle model.



21:17 [Pub][ePrint] Securing Circuits Against Constant-Rate Tampering, by Dana Dachman-Soled and Yael Tauman Kalai

  We present a compiler that converts any circuit into one that remains secure even if a constant fraction of its wires are tampered with. Following the seminal work of Ishai et al. (Eurocrypt 2006), we consider adversaries who may choose an arbitrary set of wires to corrupt, and may set each such wire to 0 or to 1, or may toggle the wire. We prove that such adversaries, who continuously tamper with the circuit, can learn at most logarithmically many bits of secret information (in addition to black-box access to the circuit). Our results are information-theoretic.



21:17 [Pub][ePrint] On Continual Leakage of Discrete Log Representations, by Shweta Agrawal and Yevgeniy Dodis and Vinod Vaikuntanathan and Daniel Wichs

  Let $\mathbb{G}$ be a group of prime order $q$, and let $g_1,\ldots,g_n$ be random elements of $\mathbb{G}$. We say that a vector $\mathbf{x} = (x_1,\ldots,x_n) \in \mathbb{Z}_q^n$ is a "discrete log representation" of an element $y \in \mathbb{G}$ (with respect to $g_1,\ldots,g_n$) if $g_1^{x_1} \cdots g_n^{x_n} = y$. Any element $y$ has many discrete log representations, forming an affine subspace of $\mathbb{Z}_q^n$. We show that these representations have a nice "continuous leakage-resilience" property, as follows. Assume some attacker $\mathcal{A}(g_1,\ldots,g_n,y)$ can repeatedly learn $L$ bits of information on arbitrarily many random representations of $y$. That is, $\mathcal{A}$ adaptively chooses polynomially many leakage functions $f_i : \mathbb{Z}_q^n \rightarrow \{0,1\}^L$ and learns the value $f_i(\mathbf{x}_i)$, where $\mathbf{x}_i$ is a fresh and random discrete log representation of $y$. $\mathcal{A}$ wins the game if it eventually outputs a valid discrete log representation $\mathbf{x}^*$ of $y$. We show that if the discrete log assumption holds in $\mathbb{G}$, then no polynomially bounded $\mathcal{A}$ can win this game with non-negligible probability, as long as the leakage on each representation is bounded by $L \approx (n-2)\log q = (1-\frac{2}{n})\cdot|\mathbf{x}|$.

As direct extensions of this property, we design very simple continuous leakage-resilient (CLR) one-way function (OWF) and public-key encryption (PKE) schemes in the so-called "invisible key update" model introduced by Alwen et al. at CRYPTO 2009. Our CLR-OWF is based on the standard Discrete Log assumption and our CLR-PKE is based on the standard Decisional Diffie-Hellman assumption. Prior to our work, such schemes could only be constructed in groups with a bilinear pairing.

As another surprising application, we show how to design the first leakage-resilient "traitor tracing" scheme, where no attacker, getting the secret keys of a small subset of decoders (called "traitors") and bounded leakage on the secret keys of all other decoders, can create a valid decryption key which will not be traced back to at least one of the traitors.
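To make the central definition concrete, here is a minimal numerical sketch (not taken from the paper) in a toy group: the quadratic residues modulo 23, a subgroup of prime order $q = 11$. The group, the generators $g_1 = 4$, $g_2 = 9$, and the vector $(3, 7)$ are arbitrary illustrative choices. The script verifies that $(3, 7)$ is a discrete log representation of $y$ and that $y$ has exactly $q^{n-1} = 11$ representations in $\mathbb{Z}_q^2$, i.e. they form an affine line.

```python
# Toy example of discrete log representations; parameters chosen only for illustration.
p, q = 23, 11                 # 23 is a safe prime; its quadratic residues form a group of order 11
g = [4, 9]                    # two elements of the order-11 subgroup
x = [3, 7]                    # one discrete log representation
y = (pow(g[0], x[0], p) * pow(g[1], x[1], p)) % p

# Enumerate all representations of y in Z_q^2; they form an affine line,
# so exactly q = 11 of the q^2 = 121 candidate vectors should match.
reps = [(a, b) for a in range(q) for b in range(q)
        if (pow(g[0], a, p) * pow(g[1], b, p)) % p == y]
assert (x[0], x[1]) in reps and len(reps) == q
print(y, reps)
```

Note that finding two distinct representations of the same $y$ immediately yields a non-trivial discrete log relation among the $g_i$, which is why the game above is hard under the discrete log assumption in a cryptographically sized group.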



21:17 [Pub][ePrint] Comprehensive Evaluation of High-Speed and Medium-Speed Implementations of Five SHA-3 Finalists Using Xilinx and Altera FPGAs, by Kris Gaj and Ekawat Homsirikamol and Marcin Rogawski and Rabia Shahid

  In this paper we present a comprehensive comparison of all Round 3 SHA-3 candidates and the current standard SHA-2 from the point of view of hardware performance in modern FPGAs. Each algorithm is implemented using multiple architectures based on the concepts of iteration, folding, unrolling, pipelining, and circuit replication. Trade-offs between speed and area are investigated, and the best architecture in terms of the throughput-to-area ratio is identified. Finally, all algorithms are ranked based on their overall performance in FPGAs. The characteristic features of each algorithm that are important for its implementation in hardware are identified.



21:17 [Pub][ePrint] Factorisation of RSA-704 with CADO-NFS, by Shi Bai and Emmanuel Thomé and Paul Zimmermann

  We give details of the factorization of RSA-704 with CADO-NFS. This is a record computation with publicly available software tools. The aim of this experiment was to stress CADO-NFS - which was originally designed for 512-bit factorizations - with larger inputs, and to identify possible areas for improvement.



21:17 [Pub][ePrint] Improved Broadcast Encryption Scheme with Constant-Size Ciphertext, by Renaud Dubois and Aurore Guillevic and Marine Sengelin Le Breton

  The Boneh-Gentry-Waters (BGW) scheme is one of the most efficient broadcast encryption schemes with respect to overhead size. This performance relies on the use of a pairing; hence the protocol can benefit from public-key improvements. The ciphertext is of constant size, whatever the proportion of revoked users. The main remaining constraint is the computation time at the receiver end, as it depends on the number of revoked users. In this paper we describe two modifications that improve the bandwidth and time complexity of BGW. First, we rewrite the protocol and its security proof with an asymmetric pairing over Barreto-Naehrig (BN) curves instead of a symmetric one over supersingular curves. This modification leads to a practical gain of 60% in speed and 84% in bandwidth. The second tweak reduces the computation time from $O(n-r)$ to $\min(O(r), O(n-r))$ in the worst case (and better in the average case). We give performance measurements of our implementation of the modified protocol at a 128-bit security level on a smartphone.



21:17 [Pub][ePrint] Simultaneous hashing of multiple messages, by Shay Gueron and Vlad Krasnov

  We describe a method for efficiently hashing multiple messages of different lengths. Such computations occur in various scenarios, one of them being when an operating system checks the integrity of its components during boot time. These tasks can gain performance by parallelizing the computations and using SIMD architectures. For such scenarios, we compare the performance of a new 4-buffer SHA-256 S-HASH implementation to that of standard serial hashing. Our results are measured on the 2nd Generation Intel® Core™ processor, and demonstrate SHA-256 processing at effectively ~5.2 cycles per byte when hashing from any of the three cache levels or from system memory. This represents a speedup by a factor of 3.42x compared to OpenSSL (1.0.1), and 2.25x compared to the recent and faster n-SMS method. For hashing from a disk, we show an effective rate of ~6.73 cycles/byte, which is almost 3 times faster than OpenSSL (1.0.1) under the same conditions. These results indicate that for some usage models, SHA-256 is significantly faster than commonly perceived.



21:17 [Pub][ePrint] New Preimage Attacks on Hash Modes of AES-256, by Deukjo Hong and Dong-Chan Kim and Daesung Kwon

  We study the slow diffusion of the AES key schedule for 256-bit keys and find a weakness which can be used in a preimage attack on the Davies-Meyer mode. Our preimage attack works for 8 rounds of AES-256 with a computational complexity of $2^{124.9}$, while the best previous attack works for 7 rounds of AES-256. It is also extended to a preimage attack on some well-known double-block-length hash modes, assuming the underlying block cipher is 8-round AES-256, with computational complexity $2^{252.9}$.
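For context, the Davies-Meyer mode referred to above turns a block cipher $E$ into a compression function via $H_{i+1} = E_{M_i}(H_i) \oplus H_i$, with the message block used as the cipher key. The sketch below shows one such call with AES-256; it only illustrates the mode, not the attack, and it assumes the third-party Python `cryptography` package.

```python
# One Davies-Meyer compression call with AES-256 as the block cipher E:
# the 32-byte message block is the AES key, and the output is E_m(h) XOR h.
# (ECB on a single 16-byte block is just a raw AES call.)
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def davies_meyer(h: bytes, m: bytes) -> bytes:
    """h: 16-byte chaining value, m: 32-byte message block; returns the next chaining value."""
    enc = Cipher(algorithms.AES(m), modes.ECB()).encryptor()
    c = enc.update(h) + enc.finalize()
    return bytes(a ^ b for a, b in zip(c, h))   # feed-forward XOR

h0 = bytes(16)           # illustrative all-zero IV
m0 = bytes(range(32))    # one 256-bit message block
print(davies_meyer(h0, m0).hex())
```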



21:17 [Pub][ePrint] Optimal Lower Bound for Differentially Private Multi-Party Aggregation, by T-H. Hubert Chan and Elaine Shi and Dawn Song

  We consider distributed private data analysis, where $n$ parties, each holding some sensitive data, wish to compute some aggregate statistics over all parties' data. We prove a tight lower bound for the private distributed summation problem. Our lower bound is strictly stronger than the prior lower-bound result by Beimel, Nissim, and Omri published at CRYPTO 2008. In particular, we show that any $n$-party protocol computing the sum with a sparse communication graph must incur an additive error of $\Omega(\sqrt{n})$ with constant probability, in order to defend against potential coalitions of compromised users. Furthermore, we show that in the client-server communication model, where all users communicate solely with an untrusted server, the additive error must be $\Omega(\sqrt{n})$, regardless of the number of messages or rounds. Both of our lower bounds, for the general setting and for the client-to-server communication model, are strictly stronger than those of Beimel, Nissim, and Omri, since we remove the assumption on the number of rounds (and also on the number of messages in the client-to-server communication model). Our lower bounds generalize to the $(\epsilon, \delta)$ differential privacy notion, for reasonably small values of $\delta$.
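For intuition about why $\Omega(\sqrt{n})$ is the natural target (a generic illustration, not the paper's protocols or proof): even in the simple baseline where each of the $n$ users perturbs their own value with constant-scale Laplace noise, the noise in the reported sum already has standard deviation $\Theta(\sqrt{n})$. The numpy sketch below, with an assumed privacy parameter $\varepsilon = 1$, checks this scaling empirically.

```python
# Assumed baseline: each user adds Laplace(scale = 1/eps) noise to their own value,
# so the summed noise has standard deviation sqrt(2 * n) / eps, i.e. Theta(sqrt(n)).
import numpy as np

rng = np.random.default_rng(0)
eps = 1.0
trials = 500
for n in (100, 400, 1600, 6400):
    noise_in_sum = rng.laplace(scale=1.0 / eps, size=(trials, n)).sum(axis=1)
    print(n, round(noise_in_sum.std() / np.sqrt(n), 2))   # hovers around sqrt(2)/eps ~ 1.41
```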



21:17 [Pub][ePrint] Infiltrate the Vault: Security Analysis and Decryption of Lion Full Disk Encryption, by Omar Choudary and Felix Grobert and Joachim Metz

  With the launch of Mac OS X 10.7 (Lion), Apple has introduced a volume encryption mechanism known as FileVault 2. Apple only disclosed marketing aspects of the closed-source software, e.g. its use of the AES-XTS tweakable encryption, but a publicly available security evaluation and detailed description was unavailable until now.

We have performed an extensive analysis of FileVault 2 and we have been able to find all the algorithms and parameters needed to successfully read an encrypted volume. This allows us to perform forensic investigations on encrypted volumes using our own tools.

In this paper we present the architecture of FileVault 2, giving details of the key derivation, encryption process and metadata structures needed to perform the volume decryption. Besides the analysis of the system, we have also built a library that can mount a volume encrypted with FileVault 2. As a contribution to the research and forensic communities we have made this library open source.

Additionally, we present an informal security evaluation of the system and comment on some of its design and implementation features. Among other things, we analyze the random number generator used to create the recovery password. We have also analyzed the entropy of each 512-byte block in the encrypted volume and discovered that part of the user data was left unencrypted.
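As a small companion to the description above, here is a sketch of AES-XTS decryption of a single 512-byte sector using the third-party Python `cryptography` package. The key, the sector-number tweak convention, and the function name `decrypt_sector` are placeholders chosen for illustration; how FileVault 2 actually derives the keys and tweaks from the volume metadata is precisely what the paper documents.

```python
# Illustrative AES-XTS sector decryption (placeholder key and tweak convention,
# not FileVault 2's actual key derivation or metadata handling).
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def decrypt_sector(xts_key: bytes, sector_no: int, ciphertext: bytes) -> bytes:
    """xts_key: double-length XTS key (32 bytes for AES-128-XTS); tweak: sector number, little-endian."""
    tweak = sector_no.to_bytes(16, "little")
    dec = Cipher(algorithms.AES(xts_key), modes.XTS(tweak)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

# Round-trip self-test with a throwaway key and one zero-filled 512-byte sector.
key = bytes(range(32))                  # the two 16-byte key halves must differ
enc = Cipher(algorithms.AES(key), modes.XTS((0).to_bytes(16, "little"))).encryptor()
ct = enc.update(bytes(512)) + enc.finalize()
assert decrypt_sector(key, 0, ct) == bytes(512)
```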