International Association for Cryptologic Research

# IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.

You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2013-02-27
19:17 [Pub][ePrint]

The learning with rounding (LWR) problem, introduced by Banerjee, Peikert and Rosen [BPR12] at EUROCRYPT '12, is a variant of learning with errors (LWE), where one replaces random errors with deterministic rounding. The LWR problem was shown to be as hard as LWE for a setting of parameters where the modulus and modulus-to-error ratio are super-polynomial. In this work we resolve the main open problem of [BPR12] and give a new reduction that works for a larger range of parameters, allowing for a polynomial modulus and modulus-to-error ratio. In particular, a smaller modulus gives us greater efficiency, and a smaller modulus-to-error ratio gives us greater security, which now follows from the worst-case hardness of GapSVP with polynomial (rather than super-polynomial) approximation factors.

As a tool in the reduction, we show that there is a "lossy mode" for the LWR problem, in which LWR samples only reveal partial information about the secret. This property gives us several interesting new applications, including a proof that LWR remains secure with weakly random secrets of sufficient min-entropy, and very simple new constructions of deterministic encryption, lossy trapdoor functions and reusable extractors.

Our approach is inspired by a technique of Goldwasser et al. [GKPV10] from ICS '10, which implicitly showed the existence of a "lossy mode" for LWE. By refining this technique, we also improve on the parameters of that work, requiring only a polynomial (instead of super-polynomial) modulus and modulus-to-error ratio.
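To make the difference between LWE and LWR concrete, here is a minimal sketch of how one sample of each is generated. The dimension and moduli below are tiny illustrative values, far smaller than anything secure, and the secret distribution is a simplification:

```python
import random

def lwe_sample(s, q, noise_bound):
    """One LWE sample: (a, <a,s> + e mod q), with a small random error e."""
    a = [random.randrange(q) for _ in s]
    e = random.randint(-noise_bound, noise_bound)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

def lwr_sample(s, q, p):
    """One LWR sample: (a, round((p/q) * <a,s>) mod p).

    Deterministic rounding to a smaller modulus p replaces the random
    error of LWE: the low-order information of <a,s> is simply dropped."""
    a = [random.randrange(q) for _ in s]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    b = round(p * inner / q) % p
    return a, b

random.seed(0)
s = [random.randrange(257) for _ in range(8)]   # toy secret, q = 257, p = 16
a, b = lwr_sample(s, q=257, p=16)
```

The reduction discussed above says that, for a suitable range of parameters, these rounded samples are as hard to distinguish from random as the noisy LWE samples.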

19:17 [Pub][ePrint]

We construct a very efficient protocol for constant round Two-Party Secure Function Evaluation based on general assumptions. We define and instantiate a leaky variant of Generalized Oblivious Transfer based on Oblivious Transfer and Commitment Schemes. The concepts of Garbling Schemes, Leaky Generalized Oblivious Transfer and Privacy Amplification are combined using the Cut-and-Choose paradigm to obtain the final protocol. Our solution is proven secure in the Universal Composability Paradigm.
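For readers unfamiliar with the commitment primitive named as a building block above, here is a toy hash-based commitment scheme. It is a generic illustration of the hiding and binding properties only, not the specific scheme instantiated in the paper:

```python
import hashlib
import os

def commit(message: bytes):
    """Commit to a message; returns (commitment, opening).

    Hiding comes from the random 32-byte nonce, binding from the
    collision resistance of SHA-256. (A toy illustration of the
    commitment primitive, not the paper's construction.)"""
    nonce = os.urandom(32)
    c = hashlib.sha256(nonce + message).digest()
    return c, (nonce, message)

def verify(commitment: bytes, opening) -> bool:
    """Check that an opening (nonce, message) matches the commitment."""
    nonce, message = opening
    return hashlib.sha256(nonce + message).digest() == commitment

c, opening = commit(b"input wire label")
assert verify(c, opening)                          # honest opening accepted
assert not verify(c, (opening[0], b"other value")) # cannot open to another message
```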

19:17 [Pub][ePrint]

In this paper, we review three recently proposed key management (KM) schemes that are problematic: Kayam's scheme for groups with hierarchy [9], Piao's group KM scheme [13], and Purushothama's group KM schemes [15]. We point out the problems in each scheme. Kayam's scheme is not secure against collusion attacks. Piao's group KM scheme is insecure and relies on a flawed primitive: the hard problem it is based on is not actually hard. Purushothama's scheme has a redundant design that consumes substantial resources without improving its security level or dynamic efficiency. We also briefly analyze the underlying reasons why these problems emerge.

19:17 [Pub][ePrint]

Reductions are the common technique to prove security of cryptographic constructions based on a primitive. They take an allegedly successful adversary against the construction and turn it into a successful adversary against the underlying primitive. To a large extent, these reductions are black-box in the sense that they consider the primitive and/or the adversary against the construction only via the input-output behavior, but do not depend on internals like the code of the primitive or of the adversary. Reingold, Trevisan, and Vadhan (TCC 2004) provided a widely adopted framework, called the RTV framework from here on, to classify and relate different notions of black-box reductions.

Having precise notions for such reductions is very important when it comes to black-box separations, where one shows that black-box reductions cannot exist. An impossibility result, which clearly specifies the type of reduction it rules out, enables us to identify the potential leverages to bypass the separation. We acknowledge this by extending the RTV framework in several respects using a more fine-grained approach. First, we capture a type of reduction---frequently ruled out by so-called meta-reductions---which escapes the RTV framework so far. Second, we consider notions that are "almost black-box", i.e., where the reduction receives additional information about the adversary, such as its success probability. Third, we distinguish explicitly between efficient and inefficient primitives and adversaries, allowing us to determine how relativizing reductions in the sense of Impagliazzo and Rudich (STOC, 1989) fit into the picture.

19:17 [Pub][ePrint]

Side-channel information leaked during the execution of cryptographic modules usually contains various kinds of noise. Normally, this noise degrades the performance of side-channel attacks that exploit noisy leakages, so reducing noise in leakages is usually an effective way to enhance attack performance. However, most existing noise reduction methods treat all noise as a whole, instead of identifying and dealing with each kind individually. Motivated by this, this paper investigates the feasibility and implications of identifying trend noise among the other noise in side-channel acquisitions and then dealing with it accordingly. Specifically, we discuss the effectiveness of applying the least squares method (LSM) to remove inherent trend noise in side-channel leakages, and also clarify the limited capability of existing noise reduction methods in dealing with trend noise. For this purpose, we perform a series of correlation power analysis attacks, as a case study, against a set of real power traces from an unprotected FPGA implementation of AES encryption, published in the second stage of the international DPA contest, which provides a public set of original power traces without any preprocessing. The experimental results confirm the soundness and validity of our analysis and observations.
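As an illustration of the idea, a linear trend can be fitted by ordinary least squares and subtracted from each trace. This is a minimal sketch of LSM-based detrending on a made-up trace, not the paper's exact procedure:

```python
def remove_linear_trend(trace):
    """Fit a line a*t + b to the trace by ordinary least squares and
    subtract it, returning the detrended residual."""
    n = len(trace)
    mean_t = sum(range(n)) / n
    mean_y = sum(trace) / n
    # Closed-form OLS slope and intercept for a single regressor t.
    cov = sum((t - mean_t) * (y - mean_y) for t, y in enumerate(trace))
    var = sum((t - mean_t) ** 2 for t in range(n))
    a = cov / var
    b = mean_y - a * mean_t
    return [y - (a * t + b) for t, y in enumerate(trace)]

# A toy "power trace": an alternating signal plus an additive drift.
signal = [1, -1, 1, -1, 1, -1, 1, -1]
drift = [0.5 * t for t in range(8)]
detrended = remove_linear_trend([s + d for s, d in zip(signal, drift)])
# The residual recovers the shape of the signal, with the drift removed.
```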

19:17 [Pub][ePrint]

Byzantine broadcast is a distributed primitive that allows a specific party (called the "sender") to consistently distribute a value $v$ among $n$ parties in the presence of potential misbehavior of up to $t$ of the parties. Broadcast requires that correct parties always agree on the same value and that, if the sender is correct, the agreed value is $v$. Broadcast without a setup (i.e., from scratch) is achievable from point-to-point channels if and only if $t < n/3$. In case $t \ge n/3$, a trusted setup is required. The setup may be assumed to be given initially or generated by the parties in a setup phase.

It is known that generating setup for protocols with cryptographic security is relatively simple and only consists of setting up a public-key infrastructure. However, generating setup for information-theoretically secure protocols is much more involved. In this paper we study the complexity of setup generation for information-theoretic protocols using point-to-point channels and temporarily available broadcast channels. We optimize the number of rounds in which the temporary broadcast channels are used while minimizing the number of bits broadcast with them. We give the first information-theoretically secure broadcast protocol tolerating $t < n/2$ that uses the temporary broadcast channels during only one round in the setup phase. Furthermore, only $O(n^3)$ bits need to be broadcast with the temporary broadcast channels during that round, independently of the security parameter employed. The broadcast protocol presented in this paper allows constructing the first information-theoretically secure MPC protocol that uses a broadcast channel during only one round. Additionally, the presented broadcast protocol supports refreshing, which allows broadcasting an a priori unknown number of times given a fixed-size setup.

19:17 [Pub][ePrint]

White-box cryptography concerns the design and analysis of implementations of cryptographic algorithms engineered to execute on untrusted platforms. Such implementations are said to operate in a *white-box attack context*. This is an attack model where all details of the implementation are completely visible to an attacker: not only do they see input and output, they see every intermediate computation that happens along the way. The goal of a white-box attacker when targeting an implementation of a cipher is typically to extract the cryptographic key; thus, white-box implementations have been designed to thwart this goal (i.e., to make key extraction difficult/infeasible). The academic study of white-box cryptography was initiated in 2002 in the seminal work of Chow, Eisen, Johnson and van Oorschot (SAC 2002). Here, we review the first white-box AES implementation proposed by Chow et al. and give detailed information on how to construct it. We provide a number of diagrams that summarize the flow of data through the various look-up tables in the implementation, which helps clarify the overall design. We then briefly review the impressive 2004 cryptanalysis by Billet, Gilbert and Ech-Chatbi (SAC 2004). The BGE attack can be used to extract an AES key from Chow et al.'s original white-box AES implementation with a work factor of about $2^{30}$, and this fact has motivated subsequent work on improved AES implementations.
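The core trick in constructions of this style is to bake the key into look-up tables and wrap the tables in random bijective encodings, so that no intermediate value appears in the clear. The following toy sketch illustrates this on a single byte-to-byte table; the S-box, key byte, and encodings here are arbitrary illustrative choices, not the actual AES structure:

```python
import random

random.seed(42)
S = list(range(256)); random.shuffle(S)   # a public byte "S-box" (illustrative)
k = 0x2A                                   # the secret key byte to hide

def rand_perm():
    p = list(range(256)); random.shuffle(p)
    return p

def inverse(p):
    inv = [0] * 256
    for x, y in enumerate(p):
        inv[y] = x
    return inv

f, g = rand_perm(), rand_perm()           # random input/output encodings
f_inv, g_inv = inverse(f), inverse(g)

# The white-box table: the key is baked in, and the encodings hide the
# real intermediate values. In a full implementation, the output
# encoding g would be cancelled by the input encoding of the next table.
T = [g[S[f_inv[x] ^ k]] for x in range(256)]

def whitebox_eval(x):
    """Evaluate via the table; after decoding, this equals S[x ^ k]."""
    return g_inv[T[f[x]]]

assert all(whitebox_eval(x) == S[x ^ k] for x in range(256))
```

The table `T` computes the keyed function correctly, yet its entries, taken on their own, reveal neither `k` nor the true intermediate values; the BGE attack mentioned above shows that the specific encodings used in the original construction can nevertheless be peeled off efficiently.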

19:17 [Pub][ePrint]

Motivated by the goal of controlling the amount of work required to access a shared resource or to solve a cryptographic puzzle, we introduce and study the related notions of *lossy chains* and *fractional secret sharing*. Fractional secret sharing generalizes traditional secret sharing by allowing fine-grained control over the amount of uncertainty about the secret. More concretely, a fractional secret sharing scheme realizes a fractional access structure $f:2^{[n]}\to [m]$ by guaranteeing that from the point of view of each set $T\subseteq [n]$ of parties, the secret is *uniformly* distributed over a set of $f(T)$ potential secrets. We show that every (monotone) fractional access structure can be realized. For *symmetric* structures, in which $f(T)$ depends only on the size of $T$, we give an efficient construction with share size $poly(n,\log m)$.

Our construction of fractional secret sharing schemes is based on the new notion of *lossy chains*, which may be of independent interest. A lossy chain is a Markov chain $(X_0,\ldots,X_n)$ which starts with a random secret $X_0$ and gradually loses information about it at a rate specified by a *loss function* $g$. Concretely, in every step $t$, the distribution of $X_0$ conditioned on the value of $X_t$ should be uniform over a set of size $g(t)$. We show how to construct such lossy chains efficiently for any possible loss function $g$, and prove that our construction achieves an optimal asymptotic information rate.
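A simple special case conveys the idea: when the loss function values divide one another, dropping low-order information from the secret yields a chain with exactly the required conditional uniformity. The sketch below is this illustrative special case under divisibility assumptions, not the paper's general construction:

```python
import random

def lossy_chain(m, g):
    """Sample a toy lossy chain (X_0, ..., X_n) over {0, ..., m-1}.

    Assumes g[0] == 1 and that each g[t] divides both g[t+1] and m, so
    that X_t = X_0 // g[t] leaves X_0 uniformly distributed over exactly
    g[t] candidates given X_t."""
    x0 = random.randrange(m)
    return [x0 // gt for gt in g]

g = [1, 2, 4, 8, 16]        # loss function: uncertainty doubles each step
chain = lossy_chain(16, g)
# Knowing X_t narrows X_0 to the block [X_t*g[t], (X_t+1)*g[t]) of size
# g[t]; the chain is Markov because X_{t+1} = X_t // (g[t+1] // g[t]).
```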

19:17 [Pub][ePrint]

Designing efficient lattice-based cryptosystems secure against adaptive chosen-ciphertext attacks (IND-CCA2) is a challenging problem. To date, full CCA2 security of all proposed lattice-based PKE schemes has been achieved through generic transformations, using either strongly unforgeable one-time signature schemes (SU-OT-SS), or a message authentication code (MAC) together with a weak form of commitment. The drawback of these schemes is that encryption requires "separate encryption". Therefore, the resulting encryption scheme is not efficient enough to be used in practice and is inappropriate for many applications, such as small ubiquitous computing devices with limited resources: smart cards, active RFID tags, wireless sensor networks and other embedded devices.

In this work, for the first time, we introduce an efficient universal random data padding (URDP) scheme, and show how it can be used to construct a "direct" CCA2-secure encryption scheme from "any" worst-case hardness problem in (ideal) lattices in the standard model, resolving a problem that has remained open to date. This novel approach is a "black-box" construction and eliminates separate encryption, as it avoids using a general transformation from a CPA-secure scheme to a CCA2-secure one. The IND-CCA2 security of this scheme can be tightly reduced in the standard model to the assumption that the underlying primitive is a one-way trapdoor function.

19:17 [Pub][ePrint]

The Strassen algorithm for multiplying $2 \times 2$ matrices requires seven multiplications and 18 additions. The recursive use of this algorithm for matrices of dimension $n$ yields a total arithmetic complexity of $(7n^{2.81}-6n^2)$ for $n=2^k$. Winograd showed that seven multiplications are optimal for this task, so any algorithm for multiplying $2 \times 2$ matrices with seven multiplications is called a Strassen-like algorithm. Winograd also discovered an additively optimal Strassen-like algorithm with 15 additions. This algorithm, called Winograd's variant, has arithmetic complexity $(6n^{2.81}-5n^2)$ for $n=2^k$ and $(3.73n^{2.81}-5n^2)$ for $n=8\cdot 2^k$, which is the best-known bound for Strassen-like multiplications. This paper proposes a method that reduces the complexity of Winograd's variant to $(5n^{2.81}+0.5n^{2.59}+2n^{2.32}-6.5n^2)$ for $n=2^k$. It is also shown that the total arithmetic complexity can be improved to $(3.55n^{2.81}+0.148n^{2.59}+1.02n^{2.32}-6.5n^2)$ for $n=8\cdot 2^k$, which, to the best of our knowledge, improves the best-known bound for Strassen-like matrix multiplication algorithms.
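For reference, the base case with seven multiplications looks as follows; these are Strassen's original products and recombinations (with 18 additions/subtractions), not the variant proposed in the paper:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with Strassen's 7 multiplications
    (instead of the classical 8), at the cost of 18 additions."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

assert strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```

Applied recursively on blocks, the seven recursive products give the $n^{\log_2 7} \approx n^{2.81}$ exponent; the additive improvements discussed above come from reorganizing the additions around these products.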

19:17 [Pub][ePrint]

We present a constant-round unconditional black-box compiler that transforms any ideal straight-line extractable commitment scheme into an extractable and equivocal commitment scheme, thereby yielding UC security [Can01]. We exemplify the usefulness of our compiler by providing two (constant-round) instantiations of ideal straight-line extractable commitment using (malicious) PUFs [OSVW13] and stateless tamper-proof hardware tokens [Kat07]. This allows us to achieve the first unconditionally UC-secure commitment with malicious PUFs and the first unconditionally UC-secure commitment with stateless tokens. Our constructions are secure against adversaries creating arbitrarily malicious stateful PUFs/tokens.

Previous results with malicious PUFs used either computational assumptions to achieve UC-secure commitments or were unconditionally secure but only in the indistinguishability sense [OSVW13]. Similarly, with stateless tokens, UC-secure commitments are known only under computational assumptions [CGS08, GIS+10, CKS+11], while the (not UC) unconditional commitment scheme of [GIMS10] is secure only in a weaker model in which the adversary is not allowed to create stateful tokens.

Besides allowing us to prove feasibility of unconditional UC-security with (malicious) PUFs and stateless tokens, our compiler can be instantiated with any ideal straight-line extractable commitment scheme, thus allowing the use of various setup assumptions which may better fit the application or the technology available.