International Association for Cryptologic Research

# IACR News Central

You can also access the full news archive.

Further sources for staying up to date are CryptoDB, the ePrint RSS feed, the ePrint web pages, and the event calendar (iCal).

2012-10-07
21:17 [Pub][ePrint]

We present a constant-round concurrent zero-knowledge protocol for NP. Our protocol is sound against uniform polynomial-time attackers, and relies on the existence of families of collision-resistant hash functions, and a new (but in our eyes, natural) falsifiable intractability assumption: roughly speaking, that Micali's non-interactive CS-proofs are sound for languages in P.

21:17 [Pub][ePrint]

Standard constructions of garbled circuits provide only static security, meaning the input x is not allowed to depend on the garbled circuit F. But some applications--notably one-time programs (Goldwasser, Kalai, and Rothblum 2008) and secure outsourcing (Gennaro, Gentry, Parno 2010)--need adaptive security, where x may depend on F. We identify gaps in proofs from these papers with regard to adaptive security and suggest the need for a better abstraction boundary. To this end we investigate the adaptive security of garbling schemes, an abstraction of Yao's garbled-circuit technique that we recently introduced (Bellare, Hoang, Rogaway 2012). Building on that framework, we give definitions encompassing privacy, authenticity, and obliviousness, with either coarse-grained or fine-grained adaptivity. We show how adaptively secure garbling schemes support simple solutions for one-time programs and secure outsourcing, with privacy being the goal in the first case and obliviousness and authenticity the goal in the second. We give transforms that promote static-secure garbling schemes to adaptive-secure ones. Our work advances the thesis that conceptualizing garbling schemes as a first-class cryptographic primitive can simplify, unify, or improve treatments for higher-level protocols.

21:17 [Pub][ePrint]

In this short note we observe that the Peikert-Vaikuntanathan-Waters (PVW) method of packing many plaintext elements in a single Regev-type ciphertext can be used for performing SIMD homomorphic operations on packed ciphertexts. This provides an alternative to the Smart-Vercauteren (SV) ciphertext-packing technique that relies on polynomial-CRT. While the SV technique is only applicable to schemes that rely on ring-LWE (or other hardness assumptions in ideal lattices), the PVW method can also be used for cryptosystems whose security is based on standard LWE (or, more broadly, on the hardness of "General-LWE").

Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with "general-LWE" schemes, suggesting yet another tradeoff that can be optimized for different settings.
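The SIMD effect the note refers to can be illustrated with a deliberately simplified sketch: several plaintext slots travel in one "ciphertext" vector, and a single homomorphic operation acts on all slots at once. This is only a toy additive-blinding illustration of slot-wise processing, not the PVW or Regev scheme; the modulus and mask handling are assumptions made purely for the demo.

```python
# Toy illustration of SIMD-style ciphertext packing (NOT the actual PVW
# scheme): several plaintext slots ride in one vector, and one homomorphic
# addition processes every slot simultaneously.
q = 257  # toy modulus (assumption; real schemes use far larger parameters)

def encrypt(slots, mask):
    # "ciphertext" = slots blinded slot-wise by a secret mask vector, mod q
    return [(s + m) % q for s, m in zip(slots, mask)]

def decrypt(ct, mask):
    return [(c - m) % q for c, m in zip(ct, mask)]

def homomorphic_add(ct1, ct2):
    # a single vector addition adds all packed slots at once (the SIMD effect)
    return [(a + b) % q for a, b in zip(ct1, ct2)]

mask = [17, 42, 99, 7]
ct_a = encrypt([1, 2, 3, 4], mask)
ct_b = encrypt([10, 20, 30, 40], mask)
ct_sum = homomorphic_add(ct_a, ct_b)
# the sum ciphertext decrypts under the sum of the masks
two_masks = [(2 * m) % q for m in mask]
print(decrypt(ct_sum, two_masks))  # [11, 22, 33, 44]
```

The point of the packing is amortization: the cost of one homomorphic operation is shared across all slots, which is what makes batched evaluation attractive for both the SV and PVW approaches.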

21:17 [Pub][ePrint]

A Helper Data Scheme is a cryptographic primitive that extracts a high-entropy noise-free string from noisy data. Helper Data Schemes are used for privacy-preserving databases and for Physical Unclonable Functions.

We refine the theory of Helper Data Schemes with Zero Secrecy Leakage (ZSL), i.e. the mutual information between the helper data and the extracted secret is zero. We prove that ZSL necessitates particular properties of the helper-data-generating function, which also allows us to show the existence of 'Sibling Points'. In the special case that the generated secret is uniformly distributed (Fuzzy Extractors), our results coincide with the continuum limit of a recent construction by Verbitskiy et al.; yet our results cover secure sketches as well. Moreover, we present an optimal reconstruction algorithm for this scheme, which not only achieves the lowest possible reconstruction error rate but also yields an attractive, simple implementation of the verification.

Further, we introduce Diagnostic Category Leakage (DCL), which quantifies what an attacker can infer from helper data about a particular medical indication of the enrolled user, or conversely, what probabilistic knowledge of a diagnosis can leak about the secret. If the attacker has a priori knowledge about the enrolled user (medical indications, race, gender), then the ZSL property does not guarantee that there is no secrecy leakage from the helper data. However, this effect is typically very small.
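The basic helper-data idea can be sketched with the classic code-offset construction (Juels-Wattenberg style), which is not this paper's scheme but shows the mechanics: the helper data is the XOR of the noisy reading with a random codeword, and the error-correcting code absorbs the noise at reproduction time. The repetition code and block sizes below are arbitrary assumptions for the demo.

```python
# Toy code-offset helper data scheme with a 5-fold repetition code
# (illustration only; real schemes use stronger codes such as BCH).
import hashlib
import secrets

R = 5  # repetition factor: each key bit is encoded as 5 identical bits

def repeat_encode(key_bits):
    return [b for b in key_bits for _ in range(R)]

def repeat_decode(bits):
    # majority vote within each block of R bits corrects up to 2 flips
    n = len(bits) // R
    return [1 if sum(bits[i * R:(i + 1) * R]) > R // 2 else 0 for i in range(n)]

def enroll(x):
    # x: enrollment reading as a bit list; pick a fresh random key
    key = [secrets.randbelow(2) for _ in range(len(x) // R)]
    codeword = repeat_encode(key)
    helper = [a ^ b for a, b in zip(x, codeword)]      # w = x XOR c
    secret = hashlib.sha256(bytes(key)).hexdigest()    # hashed extracted key
    return helper, secret

def reproduce(x_noisy, helper):
    noisy_codeword = [a ^ b for a, b in zip(x_noisy, helper)]
    return hashlib.sha256(bytes(repeat_decode(noisy_codeword))).hexdigest()

x = [1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1]
helper, secret = enroll(x)
x_noisy = x[:]
x_noisy[2] ^= 1          # one bit of measurement noise
assert reproduce(x_noisy, helper) == secret
```

With this construction, up to two flipped bits per block of five still reproduce the same secret; the ZSL question studied in the abstract concerns how much such helper data w reveals about the extracted secret when x is not uniform.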

21:17 [Pub][ePrint]

In masking schemes, *leakage squeezing* is the study of the optimal representation of the shares, the one that maximizes the resistance order against high-order side-channel attacks. Squeezing the leakage of first-order Boolean masking has been formalized and solved previously (Maghrebi, Carlet, Guilley, Danger, AFRICACRYPT 2012). The solution consists in finding a bijection $F$ that modifies the mask, in such a way that its graph, seen as a code, has the greatest dual distance.

This paper studies second-order leakage squeezing, i.e. leakage squeezing with two independent random masks. It is proved that, compared to first-order leakage squeezing, second-order leakage squeezing increases the resistance order against high-order attacks, such as high-order correlation power analyses (HO-CPA), by at least one. Moreover, larger improvements over first-order leakage squeezing are possible through suitable constructions of the pair of squeezing bijections. We provide linear bijections that improve the resistance order by strictly more than one. Specifically, when the masking is applied on bytes (which suits AES), resistance against $1$st-order (resp. $2$nd-order) attacks is possible with one (resp. two) masks. Optimal leakage squeezing with one mask resists HO-CPA of orders up to $5$. In this paper, with two masks, we provide resistance against HO-CPA not only of order $5+1=6$, but of order $7$.
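The representation change at the heart of leakage squeezing can be sketched for the first-order Boolean case: instead of storing the mask m alongside the masked value x XOR m, the device stores F(m) for a bijection F, and recombination applies the inverse. The random byte permutation below is an assumption for illustration only; the actual constructions require F whose graph, viewed as a code, has large dual distance, which a random shuffle does not guarantee.

```python
# Toy first-order Boolean masking with a "squeezed" mask representation.
# F is a bijection on bytes; the device stores (x XOR m, F(m)) rather than
# (x XOR m, m). Illustration only: a random F lacks the dual-distance
# property the real constructions demand.
import secrets

F = list(range(256))
secrets.SystemRandom().shuffle(F)   # random byte bijection (toy choice)
F_inv = [0] * 256
for i, y in enumerate(F):
    F_inv[y] = i                    # tabulate the inverse bijection

def mask_share(x):
    m = secrets.randbelow(256)      # fresh random mask per masking
    return (x ^ m, F[m])            # masked value and squeezed mask F(m)

def unmask(share, squeezed_mask):
    return share ^ F_inv[squeezed_mask]

x = 0xA7
share, fm = mask_share(x)
assert unmask(share, fm) == x       # recombination inverts the squeezing
```

The security gain comes entirely from the joint statistical properties of the pair (x XOR m, F(m)) under the device's leakage model; second-order squeezing, as studied in the abstract, plays the same game with two masks and a pair of bijections.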

21:17 [Pub][ePrint]

Transaction pseudonyms with implicit attributes are a novel approach to multilevel linkable transaction pseudonyms. We extend earlier work of Juels and Pappu on reencryption-based transaction pseudonyms by developing new mechanisms for controlled pseudonym linkability. These include mechanisms for cooperative, stepwise re-identification as well as individual authentication of pseudonyms. Our proposal makes use of efficient techniques from the area of secure multiparty computation and cryptographically secure PRNGs.

21:17 [Pub][ePrint]

In all existing efficient proofs of knowledge of a solution to the Inhomogeneous Small Integer Solution (ISIS) problem, the knowledge extractor can only output a vector that is about $\sqrt{n}$ times longer than the witness possessed by the prover. As a consequence, in many cryptographic schemes that use these proof systems as building blocks, there exists a gap between the hardness of solving the underlying ISIS problem and the hardness used in the security reductions. In this paper, we generalize Stern's protocol to obtain two statistical zero-knowledge proofs of knowledge for the ISIS problem (in the $l_\infty$ norm) that remove this gap. Our result yields the potential of relying on weaker security assumptions for various lattice-based cryptographic constructions. As applications of our proof system, we introduce a concurrently secure identity-based identification scheme based on the worst-case hardness of the $\mathrm{SIVP}_{\widetilde{O}(n^{1.5})}$ problem (in the $l_2$ norm) in general lattices in the random oracle model, and an efficient statistical zero-knowledge proof of plaintext knowledge with small constant gap factor for Regev's encryption scheme.

2012-10-04
20:25 [Event][New]

Submission: 10 December 2012
From June 2 to June 3
Location: Austin, United States

2012-10-03
17:11 [News]

NIST has announced Keccak as the SHA-3 winner. You can find more information about Keccak at http://keccak.noekeon.org/. The full statement from NIST can be found at the link below.

14:53 [Event][New]

Submission: 25 January 2013
From May 6 to May 7
Location: Gebze, Turkey

2012-10-02
15:17 [Pub][ePrint]

In this paper we present an attack on a fully homomorphic encryption scheme from PKC 2010. We construct a modified secret key, a modified decryption algorithm, and a subset of the ciphertext space. When a ciphertext is from this subset, we can correctly decrypt it with our modified secret key and modified decryption algorithm. We also discuss when our modified decryption algorithm is efficient, and when the subset is non-negligible.