International Association for Cryptologic Research

IACR News Central

Get an update on changes of the IACR web page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2012-07-06
21:17 [Pub][ePrint] Quantum Key Distribution in the Classical Authenticated Key Exchange Framework, by Michele Mosca and Douglas Stebila and Berkant Ustaoglu

  Key establishment is a crucial primitive for building secure channels: in a multi-party setting, it allows two parties using only public authenticated communication to establish a secret session key which can be used to encrypt messages. But if the session key is compromised, the confidentiality of encrypted messages is typically compromised as well. Without quantum mechanics, key establishment can only be done under the assumption that some computational problem is hard. Since digital communication can be easily eavesdropped and recorded, it is important to consider the long-term secrecy of information: future algorithmic and computational discoveries could break the secrecy of past keys and, with them, the confidentiality of the channel.

Quantum key distribution (QKD) can be used to generate secret keys that are secure against any future algorithmic or computational improvements. QKD protocols still require authentication of classical communication, however, which is most easily achieved using computationally secure digital signature schemes. It is generally considered folklore that QKD, when used with computationally secure authentication, is still secure against an unbounded adversary, provided the adversary did not break the authentication during the run of the protocol.

We describe a security model for quantum key distribution based on traditional classical authenticated key exchange (AKE) security models. Using our model, we characterize the long-term security of the BB84 QKD protocol with computationally secure authentication against an eventually unbounded adversary. By basing our model on traditional AKE models, we can more readily compare the relative merits of various forms of QKD and existing classical AKE protocols. This comparison illustrates in which types of adversarial environments different quantum and classical key agreement protocols can be secure.
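As background for readers less familiar with the protocol analyzed here, a toy simulation of BB84 basis sifting (noiseless channel, no eavesdropper; all names are ours) shows where the authenticated classical channel enters: bases are compared publicly after the quantum phase, and only matching positions are kept.

    import random

    def bb84_sift(n=64):
        # Toy BB84: noiseless channel, no eavesdropper (illustrative only).
        alice_bits  = [random.randint(0, 1) for _ in range(n)]
        alice_bases = [random.choice("+x") for _ in range(n)]
        bob_bases   = [random.choice("+x") for _ in range(n)]
        # Measuring in the matching basis recovers Alice's bit;
        # a mismatched basis yields a uniformly random bit.
        bob_bits = [bit if ab == bb else random.randint(0, 1)
                    for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
        # Sifting: bases are compared over the authenticated classical
        # channel, and only positions with matching bases are kept.
        return [(a, b) for a, b, ab, bb
                in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]

    key = bb84_sift()
    assert all(a == b for a, b in key)  # no noise, no Eve: both ends agree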



21:17 [Pub][ePrint] Achieving Constant Round Leakage-Resilient Zero-Knowledge, by Omkant Pandey

  Recently there has been a huge emphasis on constructing cryptographic protocols that maintain their security guarantees even in the presence of side channel attacks. Such attacks exploit the physical characteristics of a cryptographic device to learn useful information about the internal state of the device. Designing protocols that deliver meaningful security even in the presence of such leakage attacks is a challenging task.

The recent work of Garg, Jain, and Sahai formulates a meaningful notion of zero-knowledge in the presence of leakage and provides a construction satisfying a weaker variant of this notion called (1+e)-leakage-resilient zero-knowledge, for every constant e>0. In this weaker variant, roughly speaking, if the verifier learns L bits of leakage during the interaction, then the simulator is allowed to access (1+e)·L bits of leakage. The round complexity of their protocol is n/e.
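In symbols, the weaker guarantee can be rendered roughly as follows (notation ours, not the paper's): for every cheating verifier $V^*$ that obtains $\ell$ bits of leakage on the prover's state in the real interaction, there is a simulator $\mathcal{S}$ that produces a computationally indistinguishable view while querying a leakage oracle $\mathcal{L}_w$ on the witness $w$ for at most $(1+\epsilon)\cdot\ell$ bits in total:

    \[
      \mathrm{View}\big\langle P(x,w),\, V^{*}(x) \big\rangle
      \;\approx_{c}\;
      \mathcal{S}^{\mathcal{L}_{w}}(x),
      \qquad
      \text{total } \mathcal{L}_{w}\text{-leakage} \;\le\; (1+\epsilon)\cdot\ell .
    \]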

In this work, we present the first construction of leakage-resilient zero-knowledge satisfying the ideal requirement of e=0. While our focus is on a feasibility result for e=0, our construction also enjoys a constant number of rounds. At the heart of our construction is a new "public-coin preamble" which allows the simulator to recover arbitrary information from a (cheating) verifier in a "straight line." We use non-black-box simulation techniques to accomplish this goal.



21:17 [Pub][ePrint] A Unified Indifferentiability Proof for Permutation- or Block Cipher-Based Hash Functions, by Anne Canteaut and Thomas Fuhr and María Naya-Plasencia and Pascal Paillier and Jean-René Reinhard

  In recent years, several hash constructions have been introduced that aim at achieving enhanced security margins by strengthening the Merkle-Damgård mode. However, their security analyses have been conducted independently and using a variety of proof methodologies. This paper unifies these results by proposing a unique indifferentiability proof that considers a broadened form of the general compression function introduced by Stam at FSE 2009. This general definition enables us to capture in a realistic model most of the features of the mode of operation (e.g., message encoding, blank rounds, message insertion, ...) within the pre-processing and post-processing functions. Furthermore, it relies on an inner primitive which can be instantiated either by an ideal block cipher or by an ideal permutation. Most existing hash functions can then be seen as the Chop-MD construction applied to some compression function which fits the broadened Stam model. Our result thus gives the tightest known indifferentiability bounds for several general modes of operation, including Chop-MD, HAIFA, and sponges. Moreover, we show that it applies in a quite automatic way, by providing security bounds for 7 out of the 14 second-round SHA-3 candidates, which in some cases improve on previously known ones.
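Concretely, a Stam-style compression function of the kind described above can be sketched as follows (our rendering; the paper's broadened definition may differ in detail):

    \[
      F(h, m) \;=\; C^{\mathrm{post}}\big(h,\, m,\, P\big(C^{\mathrm{pre}}(h, m)\big)\big),
    \]

where $C^{\mathrm{pre}}$ and $C^{\mathrm{post}}$ are the pre- and post-processing functions (capturing message encoding, blank rounds, message insertion, and so on) and $P$ is the inner primitive, instantiated either as an ideal permutation or as an ideal block cipher.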



21:17 [Pub][ePrint] Zero-Knowledge Proofs with Low Amortized Communication from Lattice Assumptions, by Ivan Damgård and Adriana Lopez-Alt

  We construct zero-knowledge proofs of plaintext knowledge (PoPK) and correct multiplication (PoPC) for the Regev encryption scheme with low amortized communication complexity. Previous constructions of both PoPK and PoPC had communication cost linear in the size of the public key (roughly quadratic in the lattice dimension, ignoring logarithmic factors). Furthermore, previous constructions of PoPK suffered from one of the following weaknesses: either the message and randomness spaces were restricted, or there was a super-polynomial gap between the size of the message and randomness that an honest prover chooses and the size of which an accepting verifier is convinced. The latter weakness is also present in existing PoPC protocols.

In contrast, in our PoPK and PoPC protocols, a batch of O(n) proofs (for lattice dimension n) has communication cost linear in the size of the public key. Thus, we improve the amortized communication cost of each proof by a factor linear in the security parameter. Furthermore, we allow the message space to be $\mathbb{Z}_p$ and the randomness distribution to be the discrete Gaussian, both of which are natural choices for the Regev encryption scheme. Finally, in our schemes there is no gap between the size of the message and randomness that an honest prover chooses and the size of which an accepting verifier is convinced.
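To fix ideas about what a proof of plaintext knowledge must certify, here is a toy sketch of Regev encryption (parameters, noise distribution, and names are illustrative only and far from the secure choices the paper works with): a PoPK convinces a verifier that the prover knows a bit and well-sized randomness explaining a given ciphertext, without revealing them.

    import numpy as np

    # Toy Regev encryption; parameters are illustrative and insecure.
    n, m, q = 16, 200, 4093
    rng = np.random.default_rng(1)

    s = rng.integers(0, q, n)                 # secret key
    A = rng.integers(0, q, (m, n))
    e = rng.integers(-3, 4, m)                # stand-in for Gaussian noise
    b = (A @ s + e) % q                       # public key: (A, b)

    def encrypt(bit):
        r = rng.integers(0, 2, m)             # encryption randomness
        return (A.T @ r) % q, (b @ r + bit * (q // 2)) % q

    def decrypt(c1, c2):
        d = (c2 - s @ c1) % q
        return int(min(d, q - d) > q // 4)    # closer to q/2 means bit = 1

    assert all(decrypt(*encrypt(bit)) == bit for bit in (0, 1, 1, 0))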

Our constructions use the "MPC-in-the-head" technique of Ishai et al. (STOC 2007). At the heart of our constructions is a protocol for proving that a value is bounded by some publicly known bound. This uses Lagrange's four-square theorem, which states that any positive integer can be expressed as the sum of four squares (an idea previously used by Boudot (EUROCRYPT 2000)), as well as techniques from Cramer and Damgård (CRYPTO 2009).
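The role of the four-square theorem is easy to see concretely: to show that a secret value v does not exceed a public bound B, it suffices to exhibit (in committed form) a decomposition of B - v as a sum of four squares, since the mere existence of such a decomposition certifies B - v >= 0. A minimal sketch of the decomposition step (names ours):

    from math import isqrt

    def four_squares(n):
        # Brute-force a decomposition n = a^2 + b^2 + c^2 + d^2, which
        # exists for every n >= 0 by Lagrange's four-square theorem.
        for a in range(isqrt(n), -1, -1):
            for b in range(isqrt(n - a * a), -1, -1):
                for c in range(isqrt(n - a * a - b * b), -1, -1):
                    d2 = n - a * a - b * b - c * c
                    d = isqrt(d2)
                    if d * d == d2:
                        return a, b, c, d

    B, v = 1000, 137
    a, b, c, d = four_squares(B - v)
    assert a * a + b * b + c * c + d * d == B - v   # certifies v <= B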



21:17 [Pub][ePrint] Public Auditing for Ensuring Cloud Data Storage Security With Zero Knowledge Privacy, by Wang Shao-hui, Chen Dan-wei, Wang Zhi-wei, Chang Su-qin

  In a cloud storage service, clients upload their data together with authentication information to a cloud storage server. To ensure the availability and integrity of clients' stored data, the cloud server (CS) must prove to a verifier that it is actually storing all of the client's data unchanged. Moreover, enabling public auditability for cloud storage is of critical importance to users with constrained computing resources, who can resort to a third-party auditor (TPA) to check the integrity of outsourced data. However, most existing proof-of-retrievability (PoR) and proof-of-data-possession (PDP) schemes do not consider the data privacy problem. Zero-knowledge privacy requires that the TPA or the adversary cannot deduce any information about the file data from the auditing system. In this paper, we point out that the second scheme in [16] cannot provide zero-knowledge privacy as claimed. After giving a new construction of a recently proposed cryptographic primitive named aggregatable signature-based broadcast (ASBB) encryption, we present an efficient public auditing scheme with zero-knowledge privacy. The new scheme is as efficient as the scheme of Shacham and Waters, which does not consider privacy, and is secure in the random oracle model.



21:17 [Pub][ePrint] Securing Circuits Against Constant-Rate Tampering, by Dana Dachman-Soled and Yael Tauman Kalai

  We present a compiler that converts any circuit into one that remains secure even if a constant fraction of its wires are tampered with. Following the seminal work of Ishai et al. (Eurocrypt 2006), we consider adversaries who may choose an arbitrary set of wires to corrupt, and may set each such wire to 0 or to 1, or may toggle the wire. We prove that such adversaries, who continuously tamper with the circuit, can learn at most logarithmically many bits of secret information (in addition to black-box access to the circuit). Our results are information-theoretic.
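A toy model makes the tampering interface concrete (this sketch is ours and only illustrates the wire-level attack surface, not the paper's compiler or security game):

    # Each tampering choice maps a wire value to a new value.
    IDENTITY = lambda v: v
    SET0     = lambda v: 0
    SET1     = lambda v: 1
    TOGGLE   = lambda v: 1 - v

    def evaluate(gates, inputs, tamper):
        # gates: list of (op, i, j); each gate output becomes a new wire,
        # possibly tampered with before later gates read it.
        wires = list(inputs)
        for op, i, j in gates:
            out = op(wires[i], wires[j])
            wires.append(tamper.get(len(wires), IDENTITY)(out))
        return wires[-1]

    AND = lambda a, b: a & b
    XOR = lambda a, b: a ^ b
    circuit = [(AND, 0, 1), (XOR, 4, 3)]      # computes (w0 & w1) ^ w3

    honest   = evaluate(circuit, [1, 1, 0, 1], tamper={})
    tampered = evaluate(circuit, [1, 1, 0, 1], tamper={4: TOGGLE})
    assert honest == 0 and tampered == 1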



21:17 [Pub][ePrint] On Continual Leakage of Discrete Log Representations, by Shweta Agrawal and Yevgeniy Dodis and Vinod Vaikuntanathan and Daniel Wichs

  Let $\mathbb{G}$ be a group of prime order $q$, and let $g_1,\ldots,g_n$ be random elements of $\mathbb{G}$. We say that a vector $\vec{x} = (x_1,\ldots,x_n) \in \mathbb{Z}_q^n$ is a discrete log representation of some element $y \in \mathbb{G}$ (with respect to $g_1,\ldots,g_n$) if $g_1^{x_1}\cdots g_n^{x_n} = y$. Any element $y$ has many discrete log representations, forming an affine subspace of $\mathbb{Z}_q^n$. We show that these representations have a nice continuous leakage-resilience property, as follows. Assume some attacker $\mathcal{A}(g_1,\ldots,g_n,y)$ can repeatedly learn $L$ bits of information on arbitrarily many random representations of $y$. That is, $\mathcal{A}$ adaptively chooses polynomially many leakage functions $f_i : \mathbb{Z}_q^n \rightarrow \{0,1\}^L$ and learns the value $f_i(\vec{x}_i)$, where $\vec{x}_i$ is a fresh and random discrete log representation of $y$. $\mathcal{A}$ wins the game if it eventually outputs a valid discrete log representation $\vec{x}^*$ of $y$. We show that if the discrete log assumption holds in $\mathbb{G}$, then no polynomially bounded $\mathcal{A}$ can win this game with non-negligible probability, as long as the leakage on each representation is bounded by $L \approx (n-2)\log q = (1-\frac{2}{n})\cdot|\vec{x}|$.
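A toy computation illustrates why the representations of a fixed $y$ form an affine subspace, and how fresh ones can be sampled by a party who knows suitable trapdoor exponents (tiny parameters; all names ours):

    import random

    # Tiny subgroup of order q in Z_p^*, with p = 2q + 1 (illustrative only).
    p, q, n = 1019, 509, 3
    h = 4                                    # generator of the order-q subgroup
    a = [random.randrange(1, q) for _ in range(n)]
    g = [pow(h, ai, p) for ai in a]          # g_i = h^{a_i}

    x = [random.randrange(q) for _ in range(n)]   # one representation of y
    y = 1
    for gi, xi in zip(g, x):
        y = y * pow(gi, xi, p) % p

    # Shifting along the kernel of (x_1,...,x_n) -> prod g_i^{x_i} yields a
    # second, distinct representation of the same y.
    k = random.randrange(1, q)
    x2 = list(x)
    x2[0] = (x2[0] + a[1] * k) % q
    x2[1] = (x2[1] - a[0] * k) % q

    acc = 1
    for gi, xi in zip(g, x2):
        acc = acc * pow(gi, xi, p) % p
    assert acc == y and x2 != x

In the leakage game above, it is exactly such fresh random representations on which the adversary receives its L bits of leakage at every step.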

As direct extensions of this property, we design very simple continuous leakage-resilient (CLR) one-way function (OWF) and public-key encryption (PKE) schemes in the so-called "invisible key update" model introduced by Alwen et al. at CRYPTO 2009. Our CLR-OWF is based on the standard Discrete Log assumption and our CLR-PKE is based on the standard Decisional Diffie-Hellman assumption. Prior to our work, such schemes could only be constructed in groups with a bilinear pairing.

As another surprising application, we show how to design the first leakage-resilient traitor tracing scheme, where no attacker, getting the secret keys of a small subset of decoders (called "traitors") and bounded leakage on the secret keys of all other decoders, can create a valid decryption key which will not be traced back to at least one of the traitors.



21:17 [Pub][ePrint] Comprehensive Evaluation of High-Speed and Medium-Speed Implementations of Five SHA-3 Finalists Using Xilinx and Altera FPGAs, by Kris Gaj and Ekawat Homsirikamol and Marcin Rogawski and Rabia Shahid

  In this paper we present a comprehensive comparison of all Round 3 SHA-3 candidates and the current standard SHA-2 from the point of view of hardware performance in modern FPGAs. Each algorithm is implemented using multiple architectures based on the concepts of iteration, folding, unrolling, pipelining, and circuit replication. Trade-offs between speed and area are investigated, and the best architecture in terms of the throughput-to-area ratio is identified. Finally, all algorithms are ranked based on their overall performance in FPGAs. The characteristic features of each algorithm that matter for its implementation in hardware are identified.



21:17 [Pub][ePrint] Factorisation of RSA-704 with CADO-NFS, by Shi Bai and Emmanuel Thomé and Paul Zimmermann

  We give details of the factorization of RSA-704 with CADO-NFS. This is a record computation with publicly available software tools. The aim of this experiment was to stress CADO-NFS, which was originally designed for 512-bit factorizations, with larger inputs, and to identify possible room for improvement.



21:17 [Pub][ePrint] Improved Broadcast Encryption Scheme with Constant-Size Ciphertext, by Renaud Dubois and Aurore Guillevic and Marine Sengelin Le Breton

  The Boneh-Gentry-Waters (BGW) scheme is one of the most efficient broadcast encryption schemes with respect to overhead size. This performance relies on the use of a pairing; hence the protocol can benefit from improvements to the underlying public-key primitives. The ciphertext is of constant size, whatever the proportion of revoked users. The main remaining constraint is the computation time at the receiver's end, as it depends on the number of revoked users. In this paper we describe two modifications that improve the bandwidth and time complexity of BGW. First, we rewrite the protocol and its security proof with an asymmetric pairing over Barreto-Naehrig (BN) curves instead of a symmetric one over supersingular curves. This modification leads to a practical gain of 60% in speed and 84% in bandwidth. The second tweak reduces the computation time from O(n-r) to min(O(r), O(n-r)) in the worst case (and does better in the average case). We give performance measurements of our implementation of the modified protocol on a smartphone at a 128-bit security level.
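The idea behind a min(O(r), O(n-r)) receiver computation can be illustrated with a stand-in group (integers mod a prime instead of a pairing group; the structure and names are ours, not the BGW formulas): precompute the product over all users once, then either divide out the r revoked terms or multiply the n - r remaining ones, whichever is cheaper. Group inversion is also cheap in the elliptic-curve groups used with pairings, which is what makes the division branch practical.

    import random

    p, n = 1_000_003, 1000                   # toy prime-order stand-in group
    elems = [random.randrange(1, p) for _ in range(n)]   # per-user terms

    full = 1
    for t in elems:                          # precomputed once, offline
        full = full * t % p

    def subscriber_product(revoked):
        r = len(revoked)
        if r <= n - r:                       # few revocations: divide them out
            acc = full
            for i in revoked:
                acc = acc * pow(elems[i], p - 2, p) % p   # modular inverse
            return acc
        acc = 1                              # many revocations: multiply rest
        rev = set(revoked)
        for i in range(n):
            if i not in rev:
                acc = acc * elems[i] % p
        return acc

    rev = [1, 5]
    expect = 1
    for i in range(n):
        if i not in rev:
            expect = expect * elems[i] % p
    assert subscriber_product(rev) == expect   # O(r) path matches O(n - r)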



21:17 [Pub][ePrint] Simultaneous hashing of multiple messages, by Shay Gueron and Vlad Krasnov

  We describe a method for efficiently hashing multiple messages of different lengths. Such computations occur in various scenarios, one of which is an operating system checking the integrity of its components during boot. These tasks can gain performance by parallelizing the computations and using SIMD architectures. For such scenarios, we compare the performance of a new 4-buffer SHA-256 S-HASH implementation to that of standard serial hashing. Our results are measured on the 2nd Generation Intel® Core™ processor, and demonstrate SHA-256 processing at effectively ~5.2 cycles per byte when hashing from any of the three cache levels or from system memory. This represents a speedup factor of 3.42x compared to OpenSSL (1.0.1), and of 2.25x compared to the recent and faster n-SMS method. For hashing from a disk, we show an effective rate of ~6.73 cycles/byte, which is almost 3 times faster than OpenSSL (1.0.1) under the same conditions. These results indicate that for some usage models, SHA-256 is significantly faster than commonly perceived.
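The scheduling behind multi-buffer hashing is easy to sketch (Python's hashlib is serial, so this only shows the lock-step scheduling that a SIMD implementation vectorizes across lanes; the function name is ours):

    import hashlib

    def hash4(messages):
        # Advance four independent SHA-256 states in lock-step, one
        # 64-byte block per message per step; a SIMD implementation runs
        # the four compression functions in parallel lanes.
        assert len(messages) == 4
        states = [hashlib.sha256() for _ in messages]
        offset, block = 0, 64
        while any(offset < len(msg) for msg in messages):
            for st, msg in zip(states, messages):
                st.update(msg[offset:offset + block])  # empty once exhausted
            offset += block
        return [st.hexdigest() for st in states]

    msgs = [bytes([i]) * (100 + 37 * i) for i in range(4)]
    assert hash4(msgs) == [hashlib.sha256(msg).hexdigest() for msg in msgs]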