International Association for Cryptologic Research

IACR News Central


2013-02-27
19:17 [Pub][ePrint] Attacks and Comments on Several Recently Proposed Key Management Schemes, by Niu Liu and Shaohua Tang and Lingling Xu

  In this paper, we review three recently proposed but problematic key management (KM) schemes: Kayam's scheme for groups with hierarchy [9], Piao's group KM scheme [13], and Purushothama's group KM schemes [15]. We point out the problems in each scheme. Kayam's scheme is not secure against collusion attacks. Piao's group KM scheme is insecure and relies on a flawed primitive: the hard problem it is based on is not actually hard. Purushothama's scheme has a redundant design that costs considerable resources without improving its security level or dynamic efficiency. We also briefly analyze the underlying reasons why these problems emerge.



19:17 [Pub][ePrint] Notions of Black-Box Reductions, Revisited, by Paul Baecher and Christina Brzuska and Marc Fischlin

  Reductions are the common technique to prove security of cryptographic constructions based on a primitive. They take an allegedly successful adversary against the construction and turn it into a successful adversary against the underlying primitive. To a large extent, these reductions are black-box in the sense that they consider the primitive and/or the adversary against the construction only via their input-output behavior, but do not depend on internals like the code of the primitive or of the adversary. Reingold, Trevisan, and Vadhan (TCC 2004) provided a widely adopted framework, called the RTV framework from hereon, to classify and relate different notions of black-box reductions.

Having precise notions for such reductions is very important when it comes to black-box separations, where one shows that black-box reductions cannot exist. An impossibility result which clearly specifies the type of reduction it rules out enables us to identify the potential leverage to bypass the separation. We acknowledge this by extending the RTV framework in several respects using a more fine-grained approach. First, we capture a type of reduction, frequently ruled out by so-called meta-reductions, which has escaped the RTV framework so far. Second, we consider notions that are "almost black-box", i.e., where the reduction receives additional information about the adversary, such as its success probability. Third, we distinguish explicitly between efficient and inefficient primitives and adversaries, allowing us to determine how relativizing reductions in the sense of Impagliazzo and Rudich (STOC 1989) fit into the picture.



19:17 [Pub][ePrint] On the Negative Effects of Trend Noise, by Yuchen Cao, Yongbin Zhou and Zhenmei Yu

  Side-channel information leaked during the execution of cryptographic modules usually contains various noises. Normally, these noises degrade the performance of side-channel attacks that exploit noisy leakages, so reducing noise in leakages is an effective way to enhance attack performance. However, most existing noise reduction methods treat all noises as a whole, instead of identifying and dealing with each of them individually. Motivated by this, this paper investigates the feasibility and implications of separating trend noise from the other noises in side-channel acquisitions and then dealing with it accordingly. Specifically, we discuss the effectiveness of applying the least squares method (LSM for short) to remove inherent trend noise in side-channel leakages, and also clarify the limited capability of existing noise reduction methods in dealing with trend noise. For this purpose, we perform a series of correlation power analysis attacks, as a case study, against a set of real power traces from an unprotected FPGA implementation of AES encryption, published in the second stage of the international DPA contest, which provides a public set of original power traces without any preprocessing. The experimental results firmly confirm the soundness and validity of our analysis and observations.
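As an illustration of the least-squares detrending idea described above (the announcement contains no code, so the function name and the choice of a degree-1 trend model below are our assumptions), one can fit a low-degree polynomial to each trace and subtract it:

```python
import numpy as np

def detrend_trace(trace, degree=1):
    """Least-squares detrending: fit a polynomial of the given degree
    to the trace and subtract it, leaving the trend-free residual."""
    t = np.arange(len(trace))
    coeffs = np.polyfit(t, trace, degree)  # least-squares fit
    return trace - np.polyval(coeffs, t)

# Toy demonstration on synthetic data: Gaussian "leakage" plus a slow
# linear drift standing in for trend noise.
rng = np.random.default_rng(0)
leakage = rng.normal(0.0, 1.0, 1000)
drift = 0.01 * np.arange(1000)
clean = detrend_trace(leakage + drift)
```

In an actual attack this would be applied per acquired trace before computing, e.g., Pearson correlations in CPA; the residual is orthogonal to the fitted polynomial basis, so refitting the detrended trace yields a near-zero slope.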



19:17 [Pub][ePrint] On the Complexity of Broadcast Setup, by Martin Hirt and Pavel Raykov

  Byzantine broadcast is a distributed primitive that allows a specific party (called the "sender") to consistently distribute a value $v$ among $n$ parties in the presence of potential misbehavior of up to $t$ of the parties. Broadcast requires that correct parties always agree on the same value, and that if the sender is correct, the agreed value is $v$. Broadcast without a setup (i.e., from scratch) is achievable from point-to-point channels if and only if $t < n/3$. In case $t \ge n/3$, a trusted setup is required. The setup may be assumed to be given initially or generated by the parties in a setup phase.

It is known that generating setup for protocols with cryptographic security is relatively simple and only consists of setting up a public-key infrastructure. However, generating setup for information-theoretically secure protocols is much more involved. In this paper we study the complexity of setup generation for information-theoretic protocols using point-to-point channels and temporarily available broadcast channels. We optimize the number of rounds in which the temporary broadcast channels are used while minimizing the number of bits broadcast with them. We give the first information-theoretically secure broadcast protocol tolerating $t < n/2$ that uses the temporary broadcast channels during only one round in the setup phase. Furthermore, only $\mathcal{O}(n^3)$ bits need to be broadcast with the temporary broadcast channels during that round, independently of the security parameter employed. The broadcast protocol presented in this paper allows us to construct the first information-theoretically secure MPC protocol which uses a broadcast channel during only one round. Additionally, the presented broadcast protocol supports refreshing, which allows broadcasting an a priori unknown number of times given a fixed-size setup.



19:17 [Pub][ePrint] A Tutorial on White-box AES, by James A. Muir

  White-box cryptography concerns the design and analysis of implementations of cryptographic algorithms engineered to execute on untrusted platforms. Such implementations are said to operate in a white-box attack context. This is an attack model where all details of the implementation are completely visible to an attacker: not only do they see input and output, they see every intermediate computation that happens along the way. The goal of a white-box attacker when targeting an implementation of a cipher is typically to extract the cryptographic key; thus, white-box implementations have been designed to thwart this goal (i.e., to make key extraction difficult or infeasible). The academic study of white-box cryptography was initiated in 2002 in the seminal work of Chow, Eisen, Johnson and van Oorschot (SAC 2002). Here, we review the first white-box AES implementation proposed by Chow et al. and give detailed information on how to construct it. We provide a number of diagrams that summarize the flow of data through the various look-up tables in the implementation, which helps clarify the overall design. We then briefly review the impressive cryptanalysis by Billet, Gilbert and Ech-Chatbi (SAC 2004). The BGE attack can be used to extract an AES key from Chow et al.'s original white-box AES implementation with a work factor of about $2^{30}$, and this fact has motivated subsequent work on improved AES implementations.
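A minimal way to illustrate the look-up-table idea in Chow et al.'s construction is the "T-box": a 256-entry table that folds a round-key byte into the AES S-box, so the key never appears as explicit data at run time. The sketch below derives the S-box from its GF(2^8) definition and builds such a table; it deliberately omits the input/output encodings and mixing bijections that the real construction wraps around every table, which is exactly why the key is trivially readable here.

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return p

def gf_inv(a):
    """Inverse in GF(2^8) as a^254 (since a^255 = 1 for a != 0)."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def sbox_byte(x):
    """AES S-box: GF(2^8) inversion followed by the affine map."""
    b = gf_inv(x) if x else 0
    s = b
    for i in range(1, 5):  # XOR of four left-rotations of b
        s ^= ((b << i) | (b >> (8 - i))) & 0xFF
    return s ^ 0x63

SBOX = [sbox_byte(x) for x in range(256)]

def make_tbox(round_key_byte):
    """T-box: AddRoundKey folded into SubBytes, T(x) = S(x XOR k)."""
    return [SBOX[x ^ round_key_byte] for x in range(256)]

tbox = make_tbox(0x2B)  # 0x2B is just an example key byte
```

Without encodings, an attacker recovers the key byte immediately by locating S(0) = 0x63 in the table, which motivates the layers of obfuscation (and, ultimately, the BGE attack against them).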



19:17 [Pub][ePrint] Lossy Chains and Fractional Secret Sharing, by Yuval Ishai and Eyal Kushilevitz and Omer Strulovich

  Motivated by the goal of controlling the amount of work required to access a shared resource or to solve a cryptographic puzzle, we introduce and study the related notions of lossy chains and fractional secret sharing. Fractional secret sharing generalizes traditional secret sharing by allowing fine-grained control over the amount of uncertainty about the secret. More concretely, a fractional secret sharing scheme realizes a fractional access structure $f:2^{[n]}\to [m]$ by guaranteeing that from the point of view of each set $T\subseteq [n]$ of parties, the secret is uniformly distributed over a set of $f(T)$ potential secrets. We show that every (monotone) fractional access structure can be realized. For symmetric structures, in which $f(T)$ depends only on the size of $T$, we give an efficient construction with share size $poly(n,\log m)$.

Our construction of fractional secret sharing schemes is based on the new notion of lossy chains, which may be of independent interest. A lossy chain is a Markov chain $(X_0,\ldots,X_n)$ which starts with a random secret $X_0$ and gradually loses information about it at a rate specified by a loss function $g$. Concretely, in every step $t$, the distribution of $X_0$ conditioned on the value of $X_t$ should be uniform over a set of size $g(t)$. We show how to construct such lossy chains efficiently for any possible loss function $g$, and prove that our construction achieves an optimal asymptotic information rate.
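To make the definition concrete, here is a toy instance under the simplifying assumption that the loss function's values form a divisor chain $g(0)=1 \mid g(1) \mid \ldots \mid g(n)=m$ (the paper handles arbitrary loss functions; this sketch does not). With $X_t = \lfloor X_0 / g(t) \rfloor$ the process is a Markov chain, and given $X_t$ the secret $X_0$ is uniform over exactly $g(t)$ candidates.

```python
import random

def lossy_chain(m, g, x0=None):
    """Toy lossy chain for a divisor-chain loss function.
    g is a list with g[0] = 1, g[-1] = m and g[t] dividing g[t+1];
    the chain emits X_t = X_0 // g[t]."""
    if x0 is None:
        x0 = random.randrange(m)  # the random secret X_0
    return [x0 // gt for gt in g]

# X_{t+1} = X_t // (g[t+1] // g[t]) is a function of X_t alone, so the
# process is Markov; each step coarsens the partition of the secret space.
chain = lossy_chain(8, [1, 2, 4, 8])
```

Enumerating all secrets confirms the defining property: for every step t, the set of secrets consistent with the observed X_t has size exactly g[t].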



19:17 [Pub][ePrint] URDP: General Framework for Direct CCA2 Security from any Lattice-Based PKE Scheme, by Roohallah Rastaghi

  Designing an efficient lattice-based cryptosystem secure against adaptive chosen-ciphertext attack (IND-CCA2) is a challenging problem. To date, full CCA2 security of all proposed lattice-based PKE schemes has been achieved by using generic transformations based on either strongly unforgeable one-time signature schemes (SU-OT-SS), or a message authentication code (MAC) and a weak form of commitment. The drawback of these schemes is that encryption requires "separate encryption". Therefore, the resulting encryption schemes are not efficient enough to be used in practice and are inappropriate for many applications, such as small ubiquitous computing devices with limited resources: smart cards, active RFID tags, wireless sensor networks and other embedded devices.

In this work, for the first time, we introduce an efficient universal random data padding (URDP) scheme, and show how it can be used to construct a "direct" CCA2-secure encryption scheme from "any" worst-case hardness problem in (ideal) lattices in the standard model, resolving a problem that has remained open to date. This novel approach is a "black-box" construction and eliminates separate encryption, as it avoids using a general transformation from a CPA-secure scheme to a CCA2-secure one. IND-CCA2 security of this scheme can be tightly reduced in the standard model to the assumption that the underlying primitive is a one-way trapdoor function.



19:17 [Pub][ePrint] On the Arithmetic Complexity of Strassen-Like Matrix Multiplications, by Murat Cenk and M. Anwar Hasan

  The Strassen algorithm for multiplying $2 \times 2$ matrices requires seven multiplications and 18 additions. The recursive use of this algorithm for matrices of dimension $n$ yields a total arithmetic complexity of $(7n^{2.81}-6n^2)$ for $n=2^k$. Winograd showed that seven multiplications are optimal for this kind of multiplication, so any algorithm for multiplying $2 \times 2$ matrices with seven multiplications is called a Strassen-like algorithm. Winograd also discovered an additively optimal Strassen-like algorithm with 15 additions. This algorithm, called Winograd's variant, has arithmetic complexity $(6n^{2.81}-5n^2)$ for $n=2^k$ and $(3.73n^{2.81}-5n^2)$ for $n=8\cdot 2^k$, which is the best-known bound for Strassen-like multiplications. This paper proposes a method that reduces the complexity of Winograd's variant to $(5n^{2.81}+0.5n^{2.59}+2n^{2.32}-6.5n^2)$ for $n=2^k$. It is also shown that the total arithmetic complexity can be improved to $(3.55n^{2.81}+0.148n^{2.59}+1.02n^{2.32}-6.5n^2)$ for $n=8\cdot 2^k$, which, to the best of our knowledge, improves the best-known bound for Strassen-like matrix multiplication algorithms.
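To make the recursion concrete, here is a sketch of the classical Strassen algorithm (seven products and 18 additions per level; Winograd's variant achieves the same seven multiplications with only 15 additions via different intermediate sums). The helper names are ours, and the sketch assumes $n$ is a power of two:

```python
def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Multiply two n x n matrices (n a power of two) with 7 recursive
    multiplications per level instead of the naive 8."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # The seven Strassen products.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Recombine the quadrants of the result.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

Counting the adds/subs above per level (10 on the inputs, 8 in the recombination) gives exactly the 18 additions mentioned in the abstract.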



19:17 [Pub][ePrint] Unconditionally Secure and Universally Composable Commitments from Physical Assumptions, by Ivan Damgard and Alessandra Scafuro

  We present a constant-round unconditional black-box compiler that transforms any ideal straight-line extractable commitment scheme into an extractable and equivocal commitment scheme, thereby yielding UC security [Can01]. We exemplify the usefulness of our compiler by providing two (constant-round) instantiations of ideal straight-line extractable commitments using (malicious) PUFs [OSVW13] and stateless tamper-proof hardware tokens [Kat07]. This allows us to achieve the first unconditionally UC-secure commitment with malicious PUFs and the first unconditionally UC-secure commitment with stateless tokens. Our constructions are secure against adversaries creating arbitrarily malicious stateful PUFs/tokens.

Previous results with malicious PUFs used either computational assumptions to achieve UC-secure commitments or were unconditionally secure but only in the indistinguishability sense [OSVW13]. Similarly, with stateless tokens, UC-secure commitments are known only under computational assumptions [CGS08, GIS+10, CKS+11], while the (not UC) unconditional commitment scheme of [GIMS10] is secure only in a weaker model in which the adversary is not allowed to create stateful tokens.

Besides allowing us to prove feasibility of unconditional UC-security with (malicious) PUFs and stateless tokens, our compiler can be instantiated with any ideal straight-line extractable commitment scheme, thus allowing the use of various setup assumptions which may better fit the application or the technology available.



19:17 [Pub][ePrint] Shorter Quasi-Adaptive NIZK Proofs for Linear Subspaces, by Charanjit S. Jutla and Arnab Roy

  We define a novel notion of quasi-adaptive non-interactive zero knowledge (NIZK) proofs for probability distributions on parametrized languages. It is quasi-adaptive in the sense that the common reference string (CRS) generator can generate the CRS depending on the language parameters. However, the simulation is required to be uniform, i.e., a single efficient simulator should work for the whole class of parametrized languages. For distributions on languages that are linear subspaces of vector spaces over bilinear groups, we give quasi-adaptive NIZKs that are shorter and more efficient than Groth-Sahai NIZKs. For many cryptographic applications quasi-adaptive NIZKs suffice, and our constructions can lead to significant improvements in the standard model. Our construction can be based on any $k$-linear assumption, and in particular under the Symmetric eXternal Diffie-Hellman (SXDH) assumption our proofs are even competitive with random-oracle-based $\Sigma$-protocol NIZK proofs.

We also show that our system can be extended to include integer tags in the defining equations, where the tags are provided adaptively by the adversary. This leads to applicability of our system to many applications that use tags, e.g. applications using Cramer-Shoup projective hash proofs. Our techniques also lead to the shortest known (ciphertext) fully secure identity based encryption (IBE) scheme under standard static assumptions (SXDH).



19:17 [Pub][ePrint] Full Characterization of Functions that Imply Fair Coin Tossing and Ramifications to Fairness, by Gilad Asharov and Yehuda Lindell and Tal Rabin

  It is well known that it is impossible for two parties to toss a coin fairly (Cleve, STOC 1986). This result implies that it is impossible to securely compute with fairness any function that can be used to toss a coin fairly. In this paper, we focus on the class of deterministic Boolean functions with finite domain, and we ask for which functions in this class is it possible to information-theoretically toss an unbiased coin, given a protocol for securely computing the function with fairness. We provide a complete characterization of the functions in this class that imply and do not imply fair coin tossing. This characterization extends our knowledge of which functions cannot be securely computed with fairness. In addition, it provides a focus as to which functions may potentially be securely computed with fairness, since a function that cannot be used to fairly toss a coin is not ruled out by the impossibility result of Cleve (which is the only known impossibility result for fairness). In addition to the above, we draw corollaries to the feasibility of achieving fairness in two possible fail-stop models.