International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2015-06-21
18:17 [Pub][ePrint] Differential Fault Intensity Analysis, by Nahid Farhady Ghalaty and Bilgiday Yuce and Mostafa Taha and Patrick Schaumont

  Recent research has demonstrated that there is no sharp distinction between passive attacks based on side-channel leakage and active attacks based on fault injection. Fault behavior can be processed as side-channel information, offering all the benefits of Differential Power Analysis, including noise averaging and hypothesis testing by correlation. This paper introduces Differential Fault Intensity Analysis, which combines the principles of Differential Power Analysis and fault injection. We observe that most faults are biased - such as single-bit, two-bit, or three-bit errors in a byte - and that this property can reveal the secret key through a hypothesis test. Unlike Differential Fault Analysis, we do not require precise analysis of the fault propagation. Unlike Fault Sensitivity Analysis, we do not require a fault sensitivity profile for the device under attack. We demonstrate our method on an FPGA implementation of AES with a fault injection model. We find that with an average of 7 fault injections, we can reconstruct a full 128-bit AES key.
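
To make the hypothesis test concrete, here is a minimal toy sketch (not the authors' code): biased single-bit faults are injected before the final S-box of a one-nibble toy cipher, and each key guess is scored by the cumulative Hamming weight of the fault it would imply; the correct guess explains the injections with low-weight faults and therefore gets the lowest score. The 4-bit PRESENT S-box and the single-round model are assumptions for illustration only, standing in for the paper's AES/FPGA setup.

```python
# Toy sketch of the DFIA key-ranking idea: inject biased (single-bit) faults
# before the final S-box of a toy one-nibble cipher, then rank key guesses by
# how biased the recovered fault looks. Illustrative only, not the paper's setup.
import random

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]       # PRESENT S-box (toy choice)
INV_SBOX = [SBOX.index(x) for x in range(16)]

def last_round(state, key, fault=0):
    """Final round of the toy cipher; the fault is XORed in before the S-box."""
    return SBOX[state ^ fault] ^ key

def dfia_recover_key(true_key, n_faults=20):
    scores = [0] * 16                                   # cumulative Hamming weight per guess
    for _ in range(n_faults):
        state = random.randrange(16)
        c_ok = last_round(state, true_key)              # fault-free ciphertext nibble
        c_faulty = last_round(state, true_key,
                              fault=1 << random.randrange(4))  # biased: single-bit fault
        for k in range(16):                             # hypothesis test over all key guesses
            diff = INV_SBOX[c_ok ^ k] ^ INV_SBOX[c_faulty ^ k]
            scores[k] += bin(diff).count("1")           # correct key -> low-weight (biased) diffs
    return min(range(16), key=lambda k: scores[k])

key = random.randrange(16)
print("true key:", key, " recovered:", dfia_recover_key(key))
```

For the correct guess the recovered difference equals the injected single-bit fault, while wrong guesses yield roughly uniform differences of average weight two, so a handful of injections already separates the key.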



18:17 [Pub][ePrint] Zeroizing Without Low-Level Zeroes: New MMAP Attacks and Their Limitations, by Jean-Sebastien Coron and Craig Gentry and Shai Halevi and Tancrede Lepoint and Hemanta K. Maji and Eric Miles and Mariana

  We extend the recent zeroizing attacks of Cheon, Han, Lee, Ryu and Stehle (Eurocrypt'15) on multilinear maps to settings where no encodings of zero below the maximal level are available. Some of the new attacks apply to the CLT13 scheme (resulting in a total break) while others apply to (a variant of) the GGH13 scheme (resulting in a weak-DL attack). We also note the limits of these zeroizing attacks.



18:17 [Pub][ePrint] Assessment of Hiding the Higher-Order Leakages in Hardware - what are the achievements versus overheads?, by Amir Moradi and Alexander Wild

  Higher-order side-channel attacks are becoming a major interest of academia as well as the industry sector. This is motivated by the development of countermeasures which can prevent leakages up to a certain order. As a concrete example, threshold implementation (TI), an efficient way to realize Boolean masking in hardware, is able to avoid first-order leakages. Trivially, attacks conducted at second (and higher) orders can exploit the corresponding leakages and hence devastate the provided security. The extension of TI to higher orders was therefore expected, and it was presented at ASIACRYPT 2014. In its underlying univariate setting it can provide security at higher orders, and its area and time overheads naturally increase with the desired security order.

In this work we look at the feasibility of higher-order attacks on first-order TI from another perspective. Instead of increasing the order of resistance by employing higher-order TIs, we realize first-order TI designs following the principles of a power-equalization technique dedicated to FPGA platforms, which naturally hardens them against higher-order attacks. We show that although the first-order TI designs that are additionally equipped with the power-equalization methodology have significant area overhead, they maintain the same throughput and, more importantly, prevent the higher-order leakages from being practically exploitable with up to 1 billion traces.
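
As background for the masking scheme being protected, the sketch below is a software model of the classic 3-share threshold implementation of a single AND gate: each output share omits one input share index (non-completeness), so no single share depends on an unmasked value. The gate choice and the Python model are illustrative assumptions and are unrelated to the paper's FPGA designs or the power-equalization technique it evaluates.

```python
# Software model of the 3-share threshold-implementation (TI) idea for one AND
# gate: output share i never touches input shares with index i, so a first-order
# probe on any single share reveals nothing about the unmasked inputs.
import random

def share(x):
    """Split bit x into three random shares with x = x1 ^ x2 ^ x3."""
    x1, x2 = random.randint(0, 1), random.randint(0, 1)
    return x1, x2, x1 ^ x2 ^ x

def ti_and(a, b):
    """First-order TI of c = a AND b on 3-share inputs (non-complete output shares)."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    c1 = (a2 & b2) ^ (a2 & b3) ^ (a3 & b2)      # independent of share index 1
    c2 = (a3 & b3) ^ (a3 & b1) ^ (a1 & b3)      # independent of share index 2
    c3 = (a1 & b1) ^ (a1 & b2) ^ (a2 & b1)      # independent of share index 3
    return c1, c2, c3

# Sanity check: the recombined output equals the plain AND for all inputs.
for x in (0, 1):
    for y in (0, 1):
        c = ti_and(share(x), share(y))
        assert c[0] ^ c[1] ^ c[2] == x & y
print("3-share TI AND recombines correctly")
```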



18:17 [Pub][ePrint] Combining Differential Privacy and Secure Multiparty Computation, by Martin Pettai and Peeter Laud

  We consider how to perform privacy-preserving analyses on private data from different data providers and containing personal information of many different individuals. We combine differential privacy and secret sharing in the same system to protect the privacy of both the data providers and the individuals. We have implemented a prototype of this combination and the overhead of adding differential privacy to secret sharing is small enough to be usable in practice.
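
A minimal sketch of the general combination, not the authors' prototype (which is built on a full secret-sharing MPC framework): providers additively secret-share their inputs, the computing parties aggregate shares locally, and Laplace noise calibrated to the query sensitivity is added before the result is released. The modulus, the three-party setting and the point at which the noise is sampled are simplifying assumptions.

```python
# Toy combination of additive secret sharing and differential privacy:
# share each input among three parties, sum the shares, add Laplace noise
# calibrated to sensitivity/epsilon before the aggregate is published.
import math
import random

P = 2**61 - 1                                  # toy modulus for additive sharing

def share(value, n_parties=3):
    """Additively secret-share an integer; shares sum to value mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def laplace(scale):
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    e1 = -math.log(1.0 - random.random())
    e2 = -math.log(1.0 - random.random())
    return scale * (e1 - e2)

def dp_sum(values, epsilon=1.0, sensitivity=1):
    """Shared aggregation followed by Laplace noise before release."""
    party_totals = [0, 0, 0]
    for v in values:                           # each provider submits one shared input
        for i, s in enumerate(share(v)):
            party_totals[i] = (party_totals[i] + s) % P
    exact = sum(party_totals) % P              # opening the shared sum
    # In a real protocol the noise itself would be produced inside MPC,
    # so that no single party ever learns the exact aggregate.
    return exact + laplace(sensitivity / epsilon)

votes = [random.randint(0, 1) for _ in range(1000)]
print("exact count:", sum(votes), " DP estimate:", round(dp_sum(votes)))
```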



18:17 [Pub][ePrint] The Chain Rule for HILL Pseudoentropy, Revisited, by Krzysztof Pietrzak and Maciej Skorski

  Computational notions of entropy (a.k.a. pseudoentropy) have found many applications, including leakage-resilient cryptography, deterministic encryption or memory delegation. The most important tools to argue about pseudoentropy are chain rules, which quantify by how much (in terms of quantity and quality) the pseudoentropy of a given random variable X decreases when conditioned on some other variable Z (think for example of X as a secret key and Z as information leaked by a side-channel). In this paper we give a very simple and modular proof of the chain rule for HILL pseudoentropy, improving the best known parameters. Our version allows for increasing the acceptable length of leakage in applications by up to a constant factor compared to the best previous bounds. As a contribution of independent interest, we provide a comprehensive study of all known versions of the chain rule, comparing their worst-case strength and limitations.
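
For orientation, chain rules of this kind are usually stated in the schematic form below; it is shown only to indicate the shape of the statement. The concrete degradation from (epsilon, s) to (epsilon', s') is exactly what the paper improves and compares, and the display is not the paper's precise bound.

```latex
% Schematic form of a chain rule for HILL pseudoentropy (illustrative only):
% conditioning on \lambda bits of leakage Z costs about \lambda bits of
% pseudoentropy, while the quality parameters degrade from (\epsilon, s) to
% some worse (\epsilon', s'); the exact loss differs between known versions.
\[
  H^{\mathrm{HILL}}_{\epsilon,\, s}(X) \ge k
  \quad\text{and}\quad
  Z \in \{0,1\}^{\lambda}
  \quad\Longrightarrow\quad
  H^{\mathrm{HILL}}_{\epsilon',\, s'}(X \mid Z) \ge k - \lambda ,
  \qquad \epsilon' \ge \epsilon,\; s' \le s .
\]
```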



18:17 [Pub][ePrint] Predictive Models for Min-Entropy Estimation, by John Kelsey and Kerry A. McKay and Meltem Sonmez Turan

  Random numbers are essential for cryptography. In most real-world systems, these values come from a cryptographic pseudorandom number generator (PRNG), which in turn is seeded by an entropy source. The security of the entire cryptographic system then relies on the accuracy of the claimed amount of entropy provided by the source. If the entropy source provides less unpredictability than is expected, the security of the cryptographic mechanisms is undermined. For this reason, correctly estimating the amount of entropy available from a source is critical.

In this paper, we develop a set of tools for estimating entropy, based on mechanisms that attempt to predict the next sample in a sequence based on all previous samples. These mechanisms are called predictors. We develop a framework for using predictors to estimate entropy, and test them experimentally against both simulated and real noise sources. For comparison, we subject the entropy estimates defined in the August 2012 draft of NIST Special Publication 800-90B to the same tests, and compare their performance.
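
As a rough illustration of the predictor idea (not the SP 800-90B estimators or the paper's tool set), the sketch below trains a first-order Markov predictor online and converts its overall success rate p into the per-sample estimate -log2(p). The two sample sources and all parameters are made up for the example.

```python
# Toy predictor-based min-entropy estimate: guess each sample from the previous
# one, measure the global success rate p, report -log2(p) bits per sample.
import math
import random
from collections import Counter, defaultdict

def markov_predictor_estimate(samples):
    """Min-entropy estimate (bits/sample) from a first-order Markov predictor."""
    counts = defaultdict(Counter)                       # counts[prev][next] seen so far
    correct, predictions = 0, 0
    for prev, cur in zip(samples, samples[1:]):
        if counts[prev]:
            guess = counts[prev].most_common(1)[0][0]   # predict the most frequent follower
            predictions += 1
            correct += (guess == cur)
        counts[prev][cur] += 1                          # update the model after predicting
    p = max(correct, 1) / predictions                   # avoid log(0)
    return -math.log2(p)

iid = [random.randrange(16) for _ in range(20000)]      # near-ideal 4-bit source
sticky = [0]
for _ in range(19999):                                  # source that repeats 90% of the time
    sticky.append(sticky[-1] if random.random() < 0.9 else random.randrange(16))

print("i.i.d. 4-bit source:", round(markov_predictor_estimate(iid), 2), "bits/sample")
print("sticky source      :", round(markov_predictor_estimate(sticky), 2), "bits/sample")
```

The predictable "sticky" source gets an estimate far below 4 bits per sample even though a naive per-symbol frequency count would look nearly uniform, which is the point of predictor-based estimation.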



09:17 [Pub][ePrint] Composable & Modular Anonymous Credentials: Definitions and Practical Constructions, by Jan Camenisch and Maria Dubovitskaya and Kristiyan Haralambiev and Markulf Kohlweiss

  It takes time for theoretical advances to get used in practical schemes. Anonymous credential schemes are no exception. For instance, existing schemes suited for real-world use lack formal, composable definitions, partly because they do not support straight-line extraction and rely on random oracles for their security arguments.

To address this gap, we propose unlinkable redactable signatures (URS), a new building block for privacy-enhancing protocols, which we use to construct the first efficient UC-secure anonymous credential system that supports multiple issuers, selective disclosure of attributes, and pseudonyms. Our scheme is one of the first such systems for which both the size of a credential and its presentation proof are independent of the number of attributes issued in a credential. Moreover, our new credential scheme does not rely on random oracles.

As an important intermediary step, we address the problem of building a functionality for a complex credential system that can cover many different features. Namely, we design a core building block for a single issuer that supports credential issuance and presentation with respect to pseudonyms and then show how to construct a full-fledged credential system with multiple issuers in a modular way. We expect this flexible definitional approach to be of independent interest.



09:17 [Pub][ePrint] Universal Computational Extractors and the Superfluous Padding Assumption for Indistinguishability Obfuscation, by Christina Brzuska and Arno Mittelbach

  Universal Computational Extractors (UCEs), introduced by Bellare, Hoang and Keelveedhi (CRYPTO 2013), are a framework of assumptions on hash functions that allow random oracles to be instantiated in a large variety of settings. Brzuska, Farshim and Mittelbach (CRYPTO 2014) showed that a large class of UCE assumptions with computationally unpredictable sources cannot be achieved if indistinguishability obfuscation exists. In the process of circumventing obfuscation-based attacks, new UCE notions emerged, most notably UCEs with respect to statistically unpredictable sources, which suffice for a large class of applications. However, the only standard-model constructions of UCEs are for a small subclass considering only q-query sources which are strongly statistically unpredictable (Brzuska, Mittelbach; Asiacrypt 2014).

The contributions of this paper are threefold:

1) We show a surprising equivalence between the notions of strong unpredictability and (plain) unpredictability, thereby lifting the construction of Brzuska and Mittelbach to achieve q-query UCEs for statistically unpredictable sources. This yields standard-model instantiations for various (q-query) primitives including deterministic public-key encryption, message-locked encryption, multi-bit point obfuscation, CCA-secure encryption, and more. For some of these, our construction yields the first standard-model candidate.

2) We study the blow-up that occurs in indistinguishability obfuscation proof techniques due to puncturing, and state the Superfluous Padding Assumption for indistinguishability obfuscation, which allows us to lift the q-query restriction of our construction. We validate the assumption by showing that it holds for virtual black-box obfuscation.

3) Brzuska and Mittelbach require a strong form of point obfuscation secure in the presence of auxiliary input for their construction of UCEs. We show that this assumption is indeed necessary for the construction of injective UCEs.



09:17 [Pub][ePrint] How Secure and Quick is QUIC? Provable Security and Performance Analyses, by Robert Lychev and Samuel Jero and Alexandra Boldyreva and Cristina Nita-Rotaru

  QUIC is a secure transport protocol developed by Google and implemented in Chrome in 2013, currently representing one of the most promising solutions to decreasing latency while intending to provide security properties similar to TLS. In this work we shed some light on QUIC's strengths and weaknesses in terms of its provable security and performance guarantees in the presence of attackers. We first introduce a security model for analyzing performance-driven protocols like QUIC and prove that QUIC satisfies our definition under reasonable assumptions on the protocol's building blocks. However, we find that QUIC does not satisfy the traditional notion of forward secrecy that is provided by some modes of TLS, e.g., TLS-DHE. Our analyses also reveal that with simple bit-flipping and replay attacks on some public parameters exchanged during the handshake, an adversary could easily prevent QUIC from achieving minimal latency advantages either by having it fall back to TCP or by causing the client and server to have an inconsistent view of their handshake, leading to a failure to complete the connection. We have implemented these attacks and demonstrated that they are practical. Our results suggest that QUIC's security weaknesses are introduced by the very mechanisms used to reduce latency, which highlights the seemingly inherent trade-off between minimizing latency and providing 'good' security guarantees.
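
The downgrade issue can be illustrated with a generic toy model; this is not QUIC's wire format, and the "versions" field, the fallback rule and the attacker are simplified assumptions. Because the parameter is not yet integrity-protected, an on-path attacker can corrupt it and push the client onto the slow fallback path.

```python
# Generic toy model of a downgrade via an unauthenticated handshake parameter.
# The message layout and fallback rule are invented for illustration only.

def server_hello():
    # Parameters sent in the clear, not yet covered by any MAC or signature.
    return {"versions": [2, 1], "token": b"tok-1234"}

def on_path_attacker(msg):
    # Flip bits in the advertised versions so the client sees nothing it supports.
    msg = dict(msg)
    msg["versions"] = [v ^ 0xFF for v in msg["versions"]]
    return msg

def client_negotiate(msg, supported=(1, 2)):
    common = [v for v in msg["versions"] if v in supported]
    if not common:
        return "fall back to TCP/TLS (latency benefit lost)"
    return f"continue low-latency handshake with version {max(common)}"

print("without attacker:", client_negotiate(server_hello()))
print("with attacker   :", client_negotiate(on_path_attacker(server_hello())))
```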



09:17 [Pub][ePrint] Secure Key Generation from Biased PUFs, by Roel Maes and Vincent van der Leest and Erik van der Sluis and Frans Willems

  PUF-based key generators have been widely considered as a root of trust in digital systems. They typically require an error-correcting mechanism (e.g. based on the code-offset method) for dealing with bit errors between the enrollment and reconstruction of keys. When the PUF that is used does not have full entropy, entropy leakage between the helper data and the device-unique key material can occur. If the entropy level of the PUF becomes too low, the PUF-derived key can be attacked through the publicly available helper data. In this work we provide several solutions for preventing this entropy leakage for PUFs suffering from bias. The methods proposed in this work pose no limit on the amount of bias that can be tolerated, which solves an important open problem for PUF-based key generation. Additionally, the solutions are all evaluated based on reliability, efficiency, leakage and reusability, showing that, depending on the requirements for the key generator, different solutions are preferable.
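
For context, here is a minimal sketch of the code-offset helper-data method mentioned above, using a toy 5-bit repetition code per key bit. Real generators use stronger codes and a key-derivation step, and the paper's debiasing solutions for biased PUFs are deliberately not shown.

```python
# Toy code-offset key generator: helper data is the XOR of the enrollment PUF
# response with a codeword encoding the key; a later noisy response plus the
# helper data is decoded back to the key. Repetition code chosen for brevity.
import random

REP = 5                                                    # toy repetition-code length

def enroll(puf_bits, key_bits):
    """Helper data = enrollment PUF response XOR codeword encoding the key."""
    codeword = [b for b in key_bits for _ in range(REP)]   # repetition encoding
    return [p ^ c for p, c in zip(puf_bits, codeword)]

def reconstruct(noisy_puf_bits, helper):
    """XOR helper data with a fresh (noisy) response, then majority-decode."""
    noisy_codeword = [p ^ h for p, h in zip(noisy_puf_bits, helper)]
    return [int(sum(noisy_codeword[i:i + REP]) > REP // 2)
            for i in range(0, len(noisy_codeword), REP)]

key = [random.randint(0, 1) for _ in range(8)]             # toy 8-bit "key"
puf = [random.randint(0, 1) for _ in range(8 * REP)]       # enrollment measurement
helper = enroll(puf, key)                                  # public helper data
noisy = [b ^ (random.random() < 0.05) for b in puf]        # ~5% bit-error re-measurement
print("key recovered:", reconstruct(noisy, helper) == key) # True with high probability
```

The entropy-leakage problem the paper addresses shows up here directly: if the PUF bits are biased, the public helper data reveals partial information about the codeword and hence about the key.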





2015-06-20
01:50 [News] FSE 2013 videos

  Videos from FSE 2013 are now online.