International Association for Cryptologic Research

IACR News Central

Get an update on changes of the IACR web-page here. For questions, contact newsletter (at).


You can also access the full news archive.

Further sources for finding out about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

15:17 [Pub][ePrint] On the Power of Random Oracles, by Iftach Haitner and Eran Omri and Hila Zarosim

  In the random oracle model, the parties are given oracle access to a random member of a (typically huge) function family, and are assumed to have unbounded computational power (though they can only make a bounded number of oracle queries). This model provides powerful properties that allow proving the security of many protocols, even ones that cannot be proved secure in the standard model (under any hardness assumption). The random oracle model is also used to show that a given cryptographic primitive cannot be used in a black-box way to construct another primitive; in their seminal work, Impagliazzo and Rudich [STOC '89] showed that in the random function model - when the function family is the set of all functions - it is impossible to construct (secure) key-agreement protocols, yielding that key-agreement cannot be black-box reduced to one-way functions. Their work has a long line of follow-up works (Simon [EC '98], Gertner et al. [STOC '00] and Gennaro et al. [SICOMP '05], to name a few), showing that oracle access to a certain type of function family (e.g., the family that "implements" public-key encryption) is not sufficient for building a given cryptographic primitive (e.g., oblivious transfer). Yet, in the more general sense, the following fundamental question remained open:

What is the exact power of the random oracle model, and more specifically, of the random function model?

We make progress towards answering the above question, showing that any (no private input) semi-honest two-party functionality that can be securely implemented in the random function model can be securely implemented information theoretically (where parties are assumed to be all powerful, and no oracle is given). We further generalize the above result to function families that satisfy a natural combinatorial property.

To exhibit the power of our result, we use the recent information-theoretic impossibility result of McGregor et al. [FOCS '10] to show the existence of functionalities (e.g., inner product) that cannot be computed both accurately and in a differentially private manner in the random function model; yielding that protocols for computing these functionalities cannot be black-box reduced to the existence of one-way functions.
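A random oracle is commonly modeled by "lazy sampling": rather than fixing the whole (exponentially large) function up front, each fresh query is answered with an independent uniform value, and repeated queries are answered consistently from a cache. The following is a minimal sketch of that standard modeling idea (the class name and output length are illustrative, not from the paper):

```python
import os

class LazyRandomOracle:
    """Models a uniformly random function {0,1}* -> {0,1}^256 by lazy
    sampling: fresh queries get independent uniform answers, and repeated
    queries are answered consistently from a cache."""

    def __init__(self, out_bytes=32):
        self.out_bytes = out_bytes
        self.table = {}

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(self.out_bytes)
        return self.table[x]

ro = LazyRandomOracle()
assert ro.query(b"hello") == ro.query(b"hello")  # consistent on repeats
```

Lazy sampling is what makes the model tractable in proofs: a simulator or a bounded-query adversary only ever "touches" polynomially many entries of the random function.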

15:17 [Pub][ePrint] Quantum algorithm for the discrete logarithm problem for matrices over finite group rings, by A. D. Myasnikov and A. Ushakov

  We propose a polynomial time quantum algorithm for solving the discrete logarithm problem in matrices over finite group rings. The hardness of this problem was recently employed in the design of a key-exchange protocol proposed by D. Kahrobaei, C. Koupparis, and V. Shpilrain. Our result implies that the Kahrobaei et al. protocol does not belong to the realm of post-quantum cryptography.
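The paper's algorithm is quantum, but the underlying problem is easy to state classically: given matrices $A$ and $B = A^k$, recover $k$. As a toy classical illustration (not the paper's method), here is baby-step giant-step for the discrete logarithm in $2\times 2$ matrices over a small prime field; the modulus, matrix size, and search bound are all illustrative parameters:

```python
from math import isqrt

P = 101  # toy prime modulus

def matmul(X, Y):
    """2x2 matrix product mod P; tuples keep matrices hashable."""
    return tuple(
        tuple(sum(X[i][t] * Y[t][j] for t in range(2)) % P for j in range(2))
        for i in range(2)
    )

def matpow(X, e):
    R = ((1, 0), (0, 1))
    while e:
        if e & 1:
            R = matmul(R, X)
        X = matmul(X, X)
        e >>= 1
    return R

def matinv(X):
    """Inverse of an invertible 2x2 matrix mod P via the adjugate."""
    (a, b), (c, d) = X
    det_inv = pow((a * d - b * c) % P, -1, P)
    return tuple(tuple((v * det_inv) % P for v in row)
                 for row in ((d, -b), (-c, a)))

def bsgs_matrix_dlog(A, B, bound):
    """Find some k < bound with A^k = B, in O(sqrt(bound)) matrix ops."""
    m = isqrt(bound) + 1
    table = {}
    cur = ((1, 0), (0, 1))
    for j in range(m):       # baby steps: store A^j
        table.setdefault(cur, j)
        cur = matmul(cur, A)
    step = matinv(matpow(A, m))  # giant step: multiply by A^{-m}
    gamma = B
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = matmul(gamma, step)
    return None
```

Baby-step giant-step is still exponential in the bit length of the group order; the point of the paper is that a quantum (Shor-style) algorithm brings this down to polynomial time, breaking the protocol's post-quantum claim.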

21:17 [Pub][ePrint] Constant-Round Concurrent Zero Knowledge From Falsifiable Assumptions, by Kai-Min Chung and Huijia Lin and Rafael Pass

  We present a constant-round concurrent zero-knowledge protocol for NP. Our protocol is sound against uniform polynomial-time attackers, and relies on the existence of families of collision-resistant hash functions, and a new (but in our eyes, natural) falsifiable intractability assumption: Roughly speaking, that Micali's non-interactive CS-proofs are sound for languages in P.

21:17 [Pub][ePrint] Adaptively Secure Garbling with Applications to One-Time Programs and Secure Outsourcing, by Mihir Bellare and Viet Tung Hoang and Phillip Rogaway

  Standard constructions of garbled circuits provide only static security, meaning the input x is not allowed to depend on the garbled circuit F. But some applications--notably one-time programs (Goldwasser, Kalai, and Rothblum 2008) and secure outsourcing (Gennaro, Gentry, Parno 2010)--need adaptive security, where x may depend on F. We identify gaps in proofs from these papers with regard to adaptive security and suggest the need for a better abstraction boundary. To this end we investigate the adaptive security of garbling schemes, an abstraction of Yao's garbled-circuit technique that we recently introduced (Bellare, Hoang, Rogaway 2012).

Building on that framework, we give definitions encompassing privacy, authenticity, and obliviousness, with either coarse-grained or fine-grained adaptivity. We show how adaptively secure garbling schemes support simple solutions for one-time programs and secure outsourcing, with privacy being the goal in the first case and obliviousness and authenticity the goal in the second. We give transforms that promote static-secure garbling schemes to adaptive-secure ones. Our work advances the thesis that conceptualizing garbling schemes as a first-class cryptographic primitive can simplify, unify, or improve treatments for higher-level protocols.
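To make the garbling abstraction concrete, here is a toy, statically secure garbling of a single AND gate in the spirit of Yao's technique: two random labels per wire, four ciphertexts, and trial decryption using a zero-padding tag. This is an illustrative sketch only, not the Bellare-Hoang-Rogaway scheme; the label length and tag mechanism are assumptions of the example:

```python
import hashlib
import os
import random

KLEN = 16  # label length in bytes (toy parameter)

def H(a: bytes, b: bytes) -> bytes:
    """Hash two wire labels to a 32-byte pad."""
    return hashlib.sha256(a + b).digest()

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(x, y))

def garble_and():
    """Garble one AND gate: labels A[b], B[b], C[b] encode bit b."""
    A = [os.urandom(KLEN) for _ in range(2)]
    B = [os.urandom(KLEN) for _ in range(2)]
    C = [os.urandom(KLEN) for _ in range(2)]
    table = []
    for a in (0, 1):
        for b in (0, 1):
            # pad the output label with a zero tag so the evaluator can
            # recognize the one row that decrypts correctly
            plaintext = C[a & b] + bytes(KLEN)
            table.append(xor(H(A[a], B[b]), plaintext))
    random.shuffle(table)  # hide which row is which
    return A, B, C, table

def evaluate(la: bytes, lb: bytes, table):
    """Given one label per input wire, recover one output label."""
    for ct in table:
        pt = xor(H(la, lb), ct)
        if pt[KLEN:] == bytes(KLEN):  # zero tag found: correct row
            return pt[:KLEN]
    raise ValueError("no row decrypted")
```

The static/adaptive distinction the abstract discusses shows up exactly here: this construction is analyzed assuming the inputs (hence the chosen labels) are fixed before the table is published; if the evaluator may pick inputs after seeing the table, a different, adaptive analysis is required.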

21:17 [Pub][ePrint] Packed Ciphertexts in LWE-based Homomorphic Encryption, by Zvika Brakerski and Craig Gentry and Shai Halevi

  In this short note we observe that the Peikert-Vaikuntanathan-Waters (PVW) method of packing many plaintext elements in a single Regev-type ciphertext can be used for performing SIMD homomorphic operations on packed ciphertexts. This provides an alternative to the Smart-Vercauteren (SV) ciphertext-packing technique that relies on polynomial-CRT. While the SV technique is only applicable to schemes that rely on ring-LWE (or other hardness assumptions in ideal lattices), the PVW method can be used also for cryptosystems whose security is based on standard LWE (or more broadly on the hardness of "General-LWE").

Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with "general-LWE" schemes, suggesting yet another tradeoff that can be optimized for different settings.
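The PVW packing idea can be sketched with a toy Regev-style scheme: the secret is an $n \times \ell$ matrix (one column per plaintext slot), so one ciphertext $(a, b)$ with $b \in \mathbb{Z}_q^\ell$ carries $\ell$ plaintext bits, and adding ciphertexts componentwise adds all slots at once (SIMD). All parameters below are tiny and wildly insecure; the noise range and encoding are assumptions of the sketch, not the paper's parameters:

```python
import random

N, ELL, Q = 64, 8, 1 << 15  # dimension, slots, modulus (toy, insecure)
random.seed(1)

# secret: an N x ELL matrix over Z_q -- one column per plaintext slot
S = [[random.randrange(Q) for _ in range(ELL)] for _ in range(N)]

def encrypt(m):
    """Pack a vector m of ELL bits into one ciphertext (a, b)."""
    a = [random.randrange(Q) for _ in range(N)]
    e = [random.randrange(-4, 5) for _ in range(ELL)]  # small noise
    b = [(sum(a[i] * S[i][j] for i in range(N)) + e[j] + (Q // 2) * m[j]) % Q
         for j in range(ELL)]
    return a, b

def decrypt(ct):
    a, b = ct
    out = []
    for j in range(ELL):
        d = (b[j] - sum(a[i] * S[i][j] for i in range(N))) % Q
        out.append(1 if Q // 4 < d < 3 * Q // 4 else 0)  # round to bit
    return out

def add(ct1, ct2):
    """SIMD homomorphic addition: every slot is XORed in one operation."""
    a1, b1 = ct1
    a2, b2 = ct2
    return ([(x + y) % Q for x, y in zip(a1, a2)],
            [(x + y) % Q for x, y in zip(b1, b2)])
```

The SIMD effect is that `add` costs one vector addition regardless of `ELL`, which is the amortization the note exploits (with multiplication and key switching handled in the actual scheme).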

21:17 [Pub][ePrint] Information Leakage of Continuous-Source Zero Secrecy Leakage Helper Data Schemes, by Joep de Groot and Boris Skoric and Niels de Vreede and Jean-Paul Linnartz

  A Helper Data Scheme is a cryptographic primitive that extracts a high-entropy noise-free string from noisy data. Helper Data Schemes are used for privacy-preserving databases and for Physical Unclonable Functions.

We refine the theory of Helper Data Schemes with Zero Secrecy Leakage (ZSL), i.e. the mutual information between the helper data and the extracted secret is zero. We prove that ZSL necessitates particular properties of the helper data generating function, which also allows us to show the existence of 'Sibling Points'. In the special case that our generated secret is uniformly distributed (Fuzzy Extractors), our results coincide with the continuum limit of a recent construction by Verbitskiy et al. Yet our results cover secure sketches as well. Moreover we present an optimal reconstruction algorithm for this scheme that not only provides the lowest possible reconstruction error rate but also yields an attractive, simple implementation of the verification.

Further, we introduce Diagnostic Category Leakage (DCL), which quantifies what an attacker can infer from helper data about a particular medical indication of the enrolled user, or conversely what probabilistic knowledge of a diagnosis can leak about the secret. If the attacker has a priori knowledge about the enrolled user (medical indications, race, gender), then the ZSL property does not guarantee that there is no secrecy leakage from the helper data. However, this effect is typically very small.
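For readers unfamiliar with helper data, the classic discrete example is the code-offset secure sketch: helper data $s = w \oplus \mathrm{enc}(k)$ is published at enrolment, and a noisy re-reading $w'$ recovers $k$ by decoding $w' \oplus s$. Below is a minimal sketch using a repetition code; this is the standard discrete construction, not the continuous-source scheme of the paper, and the repetition factor and key length are illustrative:

```python
import random

R = 5          # repetition factor: corrects up to 2 bit flips per block
KEYBITS = 16   # secret length in bits (toy parameter)

def encode(bits):
    """Repetition code: repeat each secret bit R times."""
    return [b for b in bits for _ in range(R)]

def decode(bits):
    """Majority vote over each block of R noisy bits."""
    return [1 if sum(bits[i * R:(i + 1) * R]) > R // 2 else 0
            for i in range(len(bits) // R)]

def gen(w):
    """Enrolment: draw a random secret k, publish helper data s = w XOR enc(k)."""
    k = [random.randrange(2) for _ in range(KEYBITS)]
    s = [wi ^ ci for wi, ci in zip(w, encode(k))]
    return k, s

def rep(w_noisy, s):
    """Reproduction: recover k from a noisy reading w' and helper data s."""
    return decode([wi ^ si for wi, si in zip(w_noisy, s)])
```

When the source $w$ is uniform, the helper data is statistically independent of $k$, which is the discrete analogue of the Zero Secrecy Leakage property the paper studies for continuous sources.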

21:17 [Pub][ePrint] Leakage Squeezing of Order Two, by Claude Carlet and Jean-Luc Danger and Sylvain Guilley and Houssem Maghrebi

  In masking schemes, \emph{leakage squeezing} is the study of the optimal shares' representation that maximizes the resistance order against high-order side-channel attacks. Squeezing the leakage of first-order Boolean masking has been problematized and solved previously in~\cite{DBLP:conf/africacrypt/MaghrebiCGD12}. The solution consists in finding a bijection $F$ that modifies the mask, in such a way that its graph, seen as a code, is of greatest dual distance.

This paper studies second-order leakage squeezing, \emph{i.e.} leakage squeezing with two independent random masks. It is proved that, compared to first-order leakage squeezing, second-order leakage squeezing increases by at least one the resistance order against high-order attacks, such as high-order correlation power analyses (HO-CPA). Moreover, better improvements over first-order leakage squeezing are possible through relevant constructions of the pair of squeezing bijections: we provide linear bijections that improve the resistance order by strictly more than one.

When the masking is applied on bytes (which suits AES), resistance against $1$st-order (resp. $2$nd-order) attacks is possible with one (resp. two) masks. Optimal leakage squeezing with one mask resists HO-CPA of orders up to $5$. In this paper, with two masks, we provide resistance against HO-CPA not only of order $5+1=6$, but also of order $7$.
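The basic mechanism can be sketched for the first-order case: a sensitive byte $x$ is split into shares $(x \oplus m, F(m))$, where the mask is stored through a bijection $F$; the art of leakage squeezing is choosing $F$ so that combined leakage of the shares reveals as little as possible. The bijection below is purely illustrative (the paper seeks bijections whose graph, viewed as a code, has maximal dual distance):

```python
import secrets

def mask(x, F):
    """Split byte x into shares (x ^ m, F(m)); the mask share is stored
    through the squeezing bijection F rather than in the clear."""
    m = secrets.randbelow(256)
    return x ^ m, F[m]

def unmask(shares, F_inv):
    """Recombine the shares using the inverse bijection."""
    masked, fm = shares
    return masked ^ F_inv[fm]

# a toy affine bijection on bytes (5 is invertible mod 256, so this is a
# permutation); it stands in for the carefully chosen F of the paper
F = [(5 * i + 7) % 256 for i in range(256)]
F_inv = [0] * 256
for i, v in enumerate(F):
    F_inv[v] = i
```

Second-order leakage squeezing, the subject of the paper, applies the same idea with two independent masks and a pair of bijections, one per mask share.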

21:17 [Pub][ePrint] On Transaction Pseudonyms with Implicit Attributes, by Stefan G. Weber

  Transaction pseudonyms with implicit attributes are a novel approach to multilevel linkable transaction pseudonyms. We extend earlier work of Juels and Pappu on reencryption-based transaction pseudonyms, by developing new mechanisms for controlled pseudonym linkability. This includes mechanisms for cooperative, stepwise re-identification as well as individual authentication of pseudonyms. Our proposal makes use of efficient techniques from the area of secure multiparty computation and cryptographically secure PRNGs.

21:17 [Pub][ePrint] Improved Zero-knowledge Proofs of Knowledge for the ISIS Problem, and Applications, by San Ling and Khoa Nguyen and Damien Stehle and Huaxiong Wang

  In all existing efficient proofs of knowledge of a solution to the Inhomogeneous Small Integer Solution (ISIS) problem, the knowledge extractor can only output a vector that is about $\sqrt{n}$ times longer than the witness possessed by the prover. As a consequence, in many cryptographic schemes that use these proof systems as building blocks, there exists a gap between the hardness of solving the underlying ISIS problem and the hardness used in the security reductions. In this paper, we generalize Stern's protocol to obtain two statistical zero-knowledge proofs of knowledge for the ISIS problem (in the $l_\infty$ norm) that remove this gap. Our result yields the potential of relying on weaker security assumptions for various lattice-based cryptographic constructions. As applications of our proof system, we introduce a concurrently secure identity-based identification scheme based on the worst-case hardness of the $\mathrm{SIVP}_{\widetilde{O}(n^{1.5})}$ problem (in the $l_2$ norm) in general lattices in the random oracle model, and an efficient statistical zero-knowledge proof of plaintext knowledge with small constant gap factor for Regev's encryption scheme.

20:25 [Event][New] HOST: IEEE International Symposium on HARDWARE-ORIENTED SECURITY and TRUST

  Submission: 10 December 2012
Notification: 22 February 2013
From June 2 to June 3
Location: Austin, United States

17:11 [News] SHA-3 Winner: Keccak

  NIST has announced Keccak as the SHA-3 winner. The full statement from NIST, with more information about Keccak, is found at the link below.