International Association for Cryptologic Research

IACR News Central

Get updates on changes to the IACR web page here. For questions, contact newsletter (at) iacr.org.


You can also access the full news archive.

Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, and the Event calendar (iCal).

2012-08-13
15:17 [Pub][ePrint] A Probabilistic Quantum Key Transfer Protocol, by Abhishek Parakh

  We propose a protocol to transfer a one-time pad (in a probabilistic manner) from Alice to Bob over a public channel. The proposed protocol is unique because Bob merely acts as the receiver of the pad (secret key), i.e., Bob does not need to send any message back to Alice unless he detects eavesdropping. Such a secure transfer of a one-time pad over a public channel is not possible in classical cryptography, and in quantum cryptography all previous protocols require Bob to send almost as many messages back to Alice as she sends to Bob in order to establish a key.



15:17 [Pub][ePrint] Must you know the code of f to securely compute f?, by Mike Rosulek

  When Alice and Bob want to securely evaluate a function of their shared inputs, they typically first express the function as a (boolean or arithmetic) circuit and then securely evaluate that circuit, gate by gate. In other words, a secure protocol for evaluating $f$ is typically obtained in a non-black-box way from $f$ itself. Consequently, secure computation protocols have high overhead (in communication and computation) that is directly linked to the circuit-description complexity of $f$.

In other settings throughout cryptography, black-box constructions invariably lead to better practical efficiency than comparable non-black-box constructions. Could secure computation protocols similarly be made more practical by eliminating their dependence on a circuit representation of the target function? Or, in other words, must one know the code of $f$ to securely evaluate $f$?

In this work we initiate the theoretical study of this question. We show the following:

1. A complete characterization of the 2-party tasks which admit such security against semi-honest adversaries. The characterization is inspired by notions of autoreducibility from computational complexity theory. From this characterization, we show a class of pseudorandom functions that cannot be securely evaluated (when one party holds the seed and the other holds the input) without "knowing" the code of the function in question. On the positive side, we show a class of functions (related to blind signatures) that can indeed be securely computed without "knowing" the code of the function.

2. Sufficient conditions for such security against malicious adversaries, also based on autoreducibility. We show that it is not possible to prove membership in the image of a one-way function in zero-knowledge without "knowing" the code of the one-way function.



15:17 [Pub][ePrint] Crowd-Blending Privacy, by Johannes Gehrke and Michael Hay and Edward Lui and Rafael Pass

  We introduce a new definition of privacy called crowd-blending privacy that strictly relaxes the notion of differential privacy. Roughly speaking, $k$-crowd-blending private sanitization of a database requires that each individual $i$ in the database "blends" with $k$ other individuals $j$ in the database, in the sense that the output of the sanitizer is "indistinguishable" if $i$'s data is replaced by $j$'s.

We demonstrate crowd-blending private mechanisms for histograms and for releasing synthetic data points, achieving strictly better utility than what is possible using differentially private mechanisms. Additionally, we demonstrate that if a crowd-blending private mechanism is combined with a "pre-sampling" step, in which the individuals in the database are randomly drawn from some underlying population (as is often the case during data collection), then the combined mechanism satisfies not only differential privacy but also the stronger notion of zero-knowledge privacy. This holds even if the pre-sampling is slightly biased and an adversary knows whether certain individuals were sampled or not. Taken together, our results yield a practical approach for collecting and privately releasing data while ensuring higher utility than previous approaches.
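The blending intuition for histograms can be pictured with a toy sketch. This is my own illustration, not the authors' mechanism: a sanitizer that suppresses every bin containing fewer than k individuals ensures that anyone whose value is reported shares it with at least k-1 others.

```python
from collections import Counter

def k_blended_histogram(records, k):
    """Toy illustration of the crowd-blending intuition (not the paper's
    construction): release a histogram but suppress any bin with fewer
    than k individuals, so every reported individual "blends" with at
    least k-1 others holding the same value."""
    counts = Counter(records)
    return {value: c for value, c in counts.items() if c >= k}

# The lone individual aged 57 cannot blend with 2 others, so that bin
# is suppressed in the released histogram.
ages = [34, 34, 34, 41, 41, 41, 57]
print(k_blended_histogram(ages, 3))  # -> {34: 3, 41: 3}
```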



15:17 [Pub][ePrint] Hush Functions Extended to Any Size Input versus Any Size Output, by Gideon Samid

  Traditional hush functions map a large number to a small number such that the reverse hush has an infinity of solutions, and nonetheless a collision is hard to come by. This primitive is so abundantly useful that one is tempted to extend it such that any number, large or small, may be mapped to any number larger or smaller while maintaining the above conditions. This extension would increase the flexibility of the commodity hush primitive, expand its current applications, and likely suggest new ones. Additional generality may be achieved by allowing the input to determine the computational burden, involving Turing's Entscheidungsproblem. We propose an algorithm in which a natural number, X, is mapped to another natural number Y, referring to the mapping as a "Crypto Square" and to the reverse as a "Crypto Square Root": Y = X**2|c and X = √Y|c. While the crypto-square mapping is a proper function, the square-root equation has infinite solutions. There exists a deterministic solution algorithm to find any desired number of solutions to a square-root equation. This asymmetry proves itself useful, since the mapping is Z+→Z+, and hence the chance of collision for any finite-size set is negligible. Unlike standard one-way functions, crypto-square shields the identity of the input (X) not by the intractability of the reverse function, but by Vernam-like equivocation per the infinity of X candidates. This prospect suggests further examination of this "square" algorithm for possible useful roles in various crypto protocols, especially protocols concerned with privacy, authentication and deniability.



15:17 [Pub][ePrint] Computing small discrete logarithms faster, by Daniel J. Bernstein and Tanja Lange

  Computations of small discrete logarithms are feasible even in \"secure\" groups, and are used as subroutines in several cryptographic protocols in the literature. For example, the Boneh--Goh--Nissim degree-2-homomorphic public-key encryption system uses generic square-root discrete-logarithm methods for decryption. This paper shows how to use a small group-specific table to accelerate these subroutines. The cost of setting up the table grows with the table size, but the acceleration also grows with the table size. This paper shows experimentally that computing a discrete logarithm in an interval of order l takes only 1.93*l^{1/3} multiplications on average using a table of size l^{1/3} precomputed with 1.21*l^{2/3} multiplications, and computing a discrete logarithm in a group of order l takes only 1.77*l^{1/3} multiplications on average using a table of size l^{1/3} precomputed with 1.24*l^{2/3} multiplications.
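For context, the generic square-root methods being accelerated here work along the lines of textbook baby-step giant-step, which already trades a precomputed table of size about sqrt(n) for sqrt(n) lookup steps. The sketch below is plain BSGS, not the paper's table-based interval algorithm:

```python
# Baby-step giant-step: solve g^x = h (mod p) for x in [0, n).
# This is the generic square-root discrete-log method; the paper's
# contribution (larger group-specific precomputed tables that push the
# online cost down to ~l^{1/3}) is not reproduced here.
from math import isqrt

def bsgs(g, h, p, n):
    m = isqrt(n) + 1
    # Baby steps: table of g^j for 0 <= j < m, the precomputed part.
    table = {}
    e = 1
    for j in range(m):
        table.setdefault(e, j)
        e = e * g % p
    # Giant steps: walk h, h*g^{-m}, h*g^{-2m}, ... until a table hit.
    gm_inv = pow(g, -m, p)  # modular inverse via 3-arg pow (Python 3.8+)
    y = h
    for i in range(m):
        if y in table:
            return i * m + table[y]
        y = y * gm_inv % p
    return None  # no solution in [0, n)

# Example: recover the exponent 1234 in the group of units mod 1000003.
p = 1000003
x = bsgs(2, pow(2, 1234, p), p, p - 1)
```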



15:17 [Pub][ePrint] Barriers in Cryptography with Weak, Correlated and Leaky Sources, by Daniel Wichs

  There has been much recent progress in constructing cryptosystems that maintain their security without requiring uniform randomness and perfect secrecy. These schemes are motivated by a diverse set of problems such as providing resilience to side-channel leakage, using weak physical sources of randomness as secret keys, and allowing deterministic encryption for high-entropy messages. The study of these problems has significantly deepened our understanding of how randomness is used in cryptographic constructions and proofs.

Nevertheless, despite this progress, some basic and seemingly achievable security properties have eluded our reach. For example, we are unable to prove the security of basic tools for manipulating weak/leaky random sources, such as pseudo-entropy generators and seed-dependent condensers. We also do not know how to prove leakage-resilient security of any cryptosystem that has a unique secret key for each public key. In the context of deterministic encryption, we do not have a standard-model construction achieving the strongest notion of security originally proposed by Bellare, Boldyreva and O'Neill (CRYPTO '07), which allows for the encryption of arbitrarily correlated messages of sufficiently large individual entropy.

In this work, we provide broad black-box separation results, showing that the security of such primitives cannot be proven under virtually any standard cryptographic hardness assumption via a reduction that treats the adversary as a black box. We do so by formalizing the intuition that "the only way that a reduction can simulate the correctly distributed view for an attacker is to know all the secrets, in which case it does not learn anything useful from the attack". This intuition is often misleading, and subtle ways of getting around it allow us to achieve a wealth of positive results for many cryptographic primitives with imperfect randomness. However, in this work we show that this intuition can be formalized and that it indeed presents a real barrier in many special cases involving the above-mentioned examples.



15:17 [Pub][ePrint] Information-Theoretic Timed-Release Security: Key-Agreement, Encryption, and Authentication Codes, by Yohei Watanabe and Takenobu Seito and Junji Shikata

  In this paper, we study timed-release cryptography with information-theoretic security. As fundamental cryptographic primitives with information-theoretic security, we can consider key agreement, encryption, and authentication codes; therefore, in this paper we deal with information-theoretic timed-release security for all of these primitives. Specifically, we propose models and formalizations of security for information-theoretic timed-release key agreement, encryption, and authentication codes, and we present constructions of each. In particular, information-theoretic timed-release encryption and authentication codes can be constructed from information-theoretic timed-release key agreement in a generic and simple way. We also derive tight lower bounds on the sizes of secret keys and show optimal constructions for information-theoretic timed-release key agreement and encryption. Furthermore, we investigate the relationship between information-theoretic timed-release key agreement and information-theoretic key-insulated key agreement. It turns out that there exists a simple algorithm which converts the former into the latter, and vice versa; in this sense, we conclude that these two mechanisms are essentially close.



12:17 [Pub][ePrint] EPiC: Efficient Privacy-Preserving Counting for MapReduce, by Erik-Oliver Blass and Guevara Noubir and Triet Vo Huu

  In the face of an untrusted cloud infrastructure, outsourced data needs to be protected. Fully homomorphic encryption is one solution that also allows performing operations on outsourced data. However, the high overhead of today's fully homomorphic encryption techniques outweighs the cost-saving advantages of the cloud, rendering it impractical. We present EPiC, a practical, efficient protocol for the privacy-preserving evaluation of a fundamental operation on data sets: frequency counting. In an IND-CPA encrypted outsourced data set, a cloud user can specify a pattern, and the cloud will count the number of occurrences of this pattern in a completely oblivious manner. EPiC's main idea is, first, to reduce the problem of counting to polynomial evaluation. Second, to efficiently evaluate polynomials in a privacy-preserving manner, we extend previous work on the Hidden Modular Group Order assumption and design a new somewhat homomorphic encryption scheme. This scheme is highly efficient in our particular counting scenario with a relatively small number of possible patterns. Besides a formal analysis in which we prove EPiC's privacy, we also present implementation and evaluation results. We specifically target Google's prominent MapReduce paradigm as offered by major cloud providers. Our evaluation, performed both locally and in Amazon's public cloud with data sets of up to 1 TByte, shows only modest overhead compared to non-private counting, attesting to EPiC's efficiency.
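The counting-to-polynomial reduction can be sketched in the clear, i.e., without the homomorphic encryption layer that EPiC actually evaluates it under (function names and the toy field below are my own): an indicator polynomial over the pattern domain evaluates to 1 at the queried pattern and 0 elsewhere, so summing its evaluations over the data set yields the count.

```python
# Toy illustration of counting reduced to polynomial evaluation.
# In EPiC this polynomial is evaluated under somewhat homomorphic
# encryption; here everything is in the clear over a small prime field.
P = 101  # toy prime field; all domain elements must lie below P

def indicator_poly_eval(pattern, x, domain):
    # Lagrange form of the indicator polynomial of `pattern`:
    # product over d != pattern of (x - d) / (pattern - d) mod P,
    # which is 1 at x == pattern and 0 at every other domain element.
    num, den = 1, 1
    for d in domain:
        if d != pattern:
            num = num * (x - d) % P
            den = den * (pattern - d) % P
    return num * pow(den, -1, P) % P

def count(pattern, data, domain):
    # Summing the indicator polynomial over the data set counts the
    # occurrences of `pattern` (assuming the count stays below P).
    return sum(indicator_poly_eval(pattern, x, domain) for x in data) % P

domain = [0, 1, 2, 3]          # the possible (encoded) patterns
data = [1, 3, 1, 0, 1]         # encoded data set
print(count(1, data, domain))  # -> 3
```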



12:17 [Pub][ePrint] New Leakage Resilient CCA-Secure Public Key Encryption, by Kaoru Kurosawa and Ryo Nojima and Le Trieu Phong

  This paper shows a generic method of constructing CCA-secure public key encryption schemes with leakage resilience on the secret key. It is based on a new kind of universal$_2$ hash proof system which accepts an auxiliary parameter. Specifically, two schemes are presented, based on the DCR and DLIN assumptions, respectively.



07:37 [Event][New] TCC2013: The Tenth Theory of Cryptography Conference

  Submission: 1 September 2012
Notification: 1 December 2012
From March 3 to March 6
Location: Tokyo, Japan
More Information: http://tcc2013.com/




2012-08-11
01:26 [Event][New] Qshine-2013: Heterogeneous Networking for Quality, Reliability, Security and Robustness

  Submission: 30 September 2012
Notification: 15 November 2012
From January 11 to January 12
Location: Greater Noida, India
More Information: http://qshine.org/2013/show/home