International Association for Cryptologic Research

IACR News Central

This page collects updates on changes to the IACR web pages. For questions, contact newsletter (at) iacr.org.


Further sources for updates are CryptoDB, ePrint RSS, ePrint Web, and the event calendar (iCal).

2015-04-29
00:17 [Pub][ePrint] Condensed Unpredictability, by Maciej Skorski and Alexander Golovnev and Krzysztof Pietrzak

  We consider the task of deriving a key with high HILL entropy (i.e., being computationally indistinguishable from a key with high min-entropy) from an unpredictable source. Prior to this work, the only known way to transform unpredictability into a key that is $\epsilon$-indistinguishable from having min-entropy was via pseudorandomness, for example by Goldreich-Levin (GL) hardcore bits. This approach has the inherent limitation that from a source with $k$ bits of unpredictability entropy one can derive a key of length (and thus HILL entropy) at most $k-2\log(1/\epsilon)$ bits. In many settings, e.g. when dealing with biometric data, such a $2\log(1/\epsilon)$ bit entropy loss is not an option.
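For orientation, a paraphrase of the standard notions involved (our wording, not the paper's exact definitions): $K$ has $k$ bits of unpredictability entropy given side information $Z$ if $\Pr[\mathcal{A}(Z)=K]\le 2^{-k}$ for every efficient adversary $\mathcal{A}$, and $K$ has $k$ bits of HILL entropy given $Z$ if there exists a $Y$ with min-entropy $H_\infty(Y|Z)\ge k$ such that $(Y,Z)$ is computationally indistinguishable from $(K,Z)$.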

Our main technical contribution is a theorem stating that in the high-entropy regime, unpredictability implies HILL entropy. Concretely, any variable $K$ with $|K|-d$ bits of unpredictability entropy has the same amount of so-called metric entropy (against real-valued, deterministic distinguishers), which is known to imply the same amount of HILL entropy. The loss in circuit size in this argument is exponential in the entropy gap $d$, so the result only applies for small $d$ (i.e., where the size of the distinguishers considered is exponential in $d$).

To overcome this restriction, we investigate whether it is possible to first ``condense'' unpredictability entropy and make the entropy gap small. We show that any source with $k$ bits of unpredictability can be condensed into a source of length $k$ with $k-3$ bits of unpredictability entropy. Our condenser simply ``abuses'' the GL construction and derives a $k$-bit key from a source with $k$ bits of unpredictability. The original GL theorem implies nothing when extracting that many bits, but we show that in this regime GL still behaves like a ``condenser'' for unpredictability. This result comes with two caveats: (1) the loss in circuit size is exponential in $k$, and (2) we require that the source we start with has \emph{no} HILL entropy (equivalently, one can efficiently check whether a guess is correct). We leave it as an intriguing open problem to overcome these restrictions or to prove that they are inherent.
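As a rough illustration of the GL-style derivation that the condenser repurposes (a minimal sketch under our own assumptions, not code from the paper: the bit-vector representation, parameter names, and use of Python's secrets module are all illustrative), each key bit is an inner product of the source with a public random vector, and the condenser regime simply takes $k$ close to the source length:

    import secrets

    def gl_derive_key(source_bits, k, public_vectors=None):
        """Derive a k-bit key from a bit-vector source as GL inner products.

        Each key bit is <r_i, x> mod 2 for a public random vector r_i.
        Taking k close to len(source_bits) is exactly the regime where the
        GL theorem is silent and the paper's condenser view takes over.
        """
        n = len(source_bits)
        if public_vectors is None:
            # Public randomness: k random n-bit vectors.
            public_vectors = [[secrets.randbits(1) for _ in range(n)]
                              for _ in range(k)]
        key = [sum(r_j & x_j for r_j, x_j in zip(r, source_bits)) % 2
               for r in public_vectors]
        return key, public_vectors

    # Example: derive a 16-bit key from a 16-bit source (k = n).
    x = [secrets.randbits(1) for _ in range(16)]
    key, public_r = gl_derive_key(x, k=16)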



00:17 [Pub][ePrint] Feasibility and Completeness of Cryptographic Tasks in the Quantum World, by Serge Fehr and Jonathan Katz and Fang Song and Hong-Sheng Zhou and Vassilis Zikas

  It is known that cryptographic feasibility results can change by moving from the classical to the quantum world. With this in mind, we study the feasibility of realizing functionalities in the framework of universal composability, with respect to both computational and information-theoretic security. With respect to computational security, we show that existing feasibility results carry over unchanged from the classical to the quantum world; a functionality is "trivial" (i.e., can be realized without setup) in the quantum world if and only if it is trivial in the classical world. The same holds with regard to functionalities that are complete (i.e., can be used to realize arbitrary other functionalities).

In the information-theoretic setting, the quantum and classical worlds differ. In the quantum world, functionalities in the class we consider are either complete, trivial, or belong to a family of simultaneous-exchange functionalities (e.g., XOR). However, other results in the information-theoretic setting remain roughly unchanged.



00:17 [Pub][ePrint] Privately Evaluating Decision Trees and Random Forests, by David J. Wu and Tony Feng and Michael Naehrig and Kristin Lauter

  Decision trees and random forests are common classifiers with widespread use. In this paper, we develop two protocols for privately evaluating decision trees and random forests. We operate in the standard two-party setting where the server holds a model (either a tree or a forest), and the client holds an input (a feature vector). At the conclusion of the protocol, the client learns only the model's output on its input and a few generic parameters concerning the model; the server learns nothing. The first protocol we develop provides security against semi-honest adversaries. Next, we show an extension of the semi-honest protocol that obtains one-sided security against malicious adversaries. We implement both protocols and show that both variants are able to process trees with several hundred decision nodes in just a few seconds and a modest amount of bandwidth. Compared to previous semi-honest protocols for private decision tree evaluation, we demonstrate tenfold improvements in computation and bandwidth.
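To make the plaintext functionality concrete (this is only what the protocol computes, not the private protocol itself, and the node layout, field names, and tiny example tree below are hypothetical), decision-tree evaluation over a client feature vector is just a root-to-leaf walk:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        feature: Optional[int] = None      # index into the client's feature vector
        threshold: Optional[float] = None  # go left when feature value <= threshold
        left: Optional["Node"] = None
        right: Optional["Node"] = None
        label: Optional[int] = None        # set only at leaves

    def evaluate(node: Node, features: list) -> int:
        # The client should learn only this return value (plus generic
        # parameters such as the tree depth); the server learns nothing.
        while node.label is None:
            node = node.left if features[node.feature] <= node.threshold else node.right
        return node.label

    tree = Node(feature=0, threshold=0.5, left=Node(label=0), right=Node(label=1))
    print(evaluate(tree, [0.7]))  # -> 1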





2015-04-28
14:31 [Job][New] PhD student, University College London

  Applications are invited for a PhD position in the field of cryptography in the Information Security Group at the UCL Department of Computer Science, to be supervised by Dr. Sarah Meiklejohn. The position is funded partially by Microsoft Research (MSR) Cambridge and will be co-supervised by Dr. Markulf Kohlweiss, an MSR researcher.

The successful applicant will study the topic of controlled malleability in non-interactive zero-knowledge proofs, which has already been demonstrated to provide useful constructions such as compact verifiable shuffles, delegatable anonymous credentials, and more. The aim of this project is to improve on existing methods for constructing controlled-malleable primitives, both in terms of efficiency and in terms of the variety of cryptographic operations that can be performed, as well as to improve their usability and interoperability with existing cryptographic primitives.

We expect candidates to hold a strong degree in Computer Science, Mathematics, or a related subject, for example at MSc level. A good mathematics background and a willingness to become fluent with modern cryptographic constructions are necessary. The studentship is open to all, but fully covers only UK/EU fees.

05:20 [Event][New] SSR 2015: Security Standardisation Research 2015

  Submission: 26 June 2015
Notification: 10 September 2015
Dates: 15 December to 16 December 2015
Location: Tokyo, Japan
More Information: http://ssr2015.com


00:17 [Pub][ePrint] Cluster Computing in Zero Knowledge, by Alessandro Chiesa and Eran Tromer and Madars Virza

  Large computations, when amenable to distributed parallel execution, are often executed on computer clusters, for scalability and cost reasons. Such computations are used in many applications, including, to name but a few, machine learning, webgraph mining, and statistical machine translation. Oftentimes, though, the input data is private and only the result of the computation can be published. Zero-knowledge proofs would, in such settings, allow one to verify the correctness of the output without leaking (additional) information about the input.

In this work, we investigate theoretical and practical aspects of *zero-knowledge proofs for cluster computations*. We design, build, and evaluate zero-knowledge proof systems for which:

(i) a proof attests to the correct execution of a cluster computation; and

(ii) generating the proof is itself a cluster computation that is similar in structure and complexity to the original one.

Concretely, we focus on MapReduce, an elegant and popular form of cluster computing.

Previous zero-knowledge proof systems can in principle prove a MapReduce computation's correctness, via a monolithic NP statement that reasons about all mappers, all reducers, and shuffling. However, it is not clear how to generate the proof for such monolithic statements via parallel execution by a distributed system. Our work demonstrates, by theory and implementation, that proof generation can be similar in structure and complexity to the original cluster computation.

Our main technique is a bootstrapping theorem for succinct non-interactive arguments of knowledge (SNARKs) that shows how, via recursive proof composition and Proof-Carrying Data, it is possible to transform any SNARK into a *distributed SNARK for MapReduce* which proves, piecewise and in a distributed way, the correctness of every step in the original MapReduce computation as well as their global consistency.
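To fix ideas about the structure being proved (an illustrative word-count computation of our own, not an example from the paper), a MapReduce job decomposes into mappers, a shuffle, and reducers, and the distributed SNARK described above proves each piece along with their global consistency:

    from collections import defaultdict

    def map_phase(documents):
        # Each mapper emits (key, value) pairs; in the distributed SNARK,
        # each mapper invocation gets its own proof.
        return [(word, 1) for doc in documents for word in doc.split()]

    def shuffle_phase(pairs):
        # Grouping by key; the recursive proof composition is what ties the
        # per-mapper and per-reducer proofs into a globally consistent one.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        # Each reducer aggregates one key's values; again one proof each.
        return {key: sum(values) for key, values in groups.items()}

    docs = ["proof carrying data", "proofs in a distributed cluster"]
    print(reduce_phase(shuffle_phase(map_phase(docs))))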





2015-04-24
21:17 [Pub][ePrint] Cryptography from Post-Quantum Assumptions, by Raza Ali Kazmi

  In this thesis we present our contributions to the field of post-quantum cryptography. We introduce a new notion of \emph{weakly Random-Self-Reducible} public-key cryptosystem and show how it can be used to implement secure Oblivious Transfer. We also show that two recent (post-quantum) cryptosystems can be considered \emph{weakly Random-Self-Reducible}. We introduce a new problem called the Isometric Lattice Problem and reduce graph isomorphism and linear code equivalence to it. We also show that this problem has a perfect zero-knowledge interactive proof with respect to a malicious verifier; this is the only hard problem in lattices that is known to have this property.
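For intuition about the kind of proof claimed above (this is the classic perfect zero-knowledge protocol for graph isomorphism, one of the problems reduced to the Isometric Lattice Problem; it is not the thesis' protocol, and the graph representation and helper names are ours):

    import random

    def permute(graph, perm):
        # graph: a set of frozenset edges over vertices 0..n-1
        return {frozenset({perm[u], perm[v]}) for (u, v) in map(tuple, graph)}

    def zk_graph_iso_round(G0, G1, pi, n):
        """One round of the classic perfect ZK proof for graph isomorphism.

        The prover knows pi with permute(G0, pi) == G1 and convinces the
        verifier of this without revealing pi; soundness error 1/2 per round.
        """
        sigma = list(range(n))
        random.shuffle(sigma)
        H = permute(G0, sigma)                             # prover's commitment
        b = random.randrange(2)                            # verifier's challenge
        if b == 0:
            rho = sigma                                    # H = sigma(G0)
        else:
            rho = [sigma[pi.index(i)] for i in range(n)]   # H = (sigma o pi^-1)(G1)
        return permute([G0, G1][b], rho) == H              # verifier's check

    n = 4
    G0 = {frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})}
    pi = [2, 0, 3, 1]
    G1 = permute(G0, pi)
    assert all(zk_graph_iso_round(G0, G1, pi, n) for _ in range(20))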



18:17 [Pub][ePrint] Bounds on surmising remixed keys, by Daniel R. L. Brown

  A remixed key is derived from a secret source key by applying a public but unpredictable random function to the source key. A remixed key models a key derived from a shared secret and a public unpredictable salt, using a common, deterministic, pseudorandom function---which is somewhat like TLS record-layer keys.

This report tries to validate the intuition that remixed keys are not easy to surmise, in other words, that remixing does not introduce an exploitable spike in the probability distribution of the remixed key. The report provides pencil-and-paper proofs of numerical bounds on the probability that an adversary can surmise a remixed key, assuming a uniformly random source key and remix function. The proofs are derived from a proof of an asymptotic result on probability theory in a textbook by Shoup.
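As a rough instantiation of the model (our own sketch; HMAC-SHA-256 merely stands in for the idealized public, deterministic, pseudorandom remix function, and all names are illustrative):

    import hashlib
    import hmac
    import secrets

    def remix_key(source_key: bytes, public_salt: bytes) -> bytes:
        # The salt is public but unpredictable; the function is common and
        # deterministic - roughly the TLS-record-layer-like setting above.
        return hmac.new(public_salt, source_key, hashlib.sha256).digest()

    source_key = secrets.token_bytes(32)   # secret source key (shared secret)
    public_salt = secrets.token_bytes(32)  # public unpredictable salt
    remixed = remix_key(source_key, public_salt)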



18:00 [Job][New] Post-Doctoral Fellowships in Lattice-Based Cryptography, Ecole Normale Superieure de Lyon

  We are seeking candidates for 2 post-doctoral research positions, in the areas of lattice-based cryptography and lattice algorithms. The positions are for up to 3 years.

Interested applicants should provide a detailed resume and references, before June 15th, 2015.



03:17 [Pub][ePrint] Security Analysis of PRINCE, by Jeremy Jean and Ivica Nikolic and Thomas Peyrin and Lei Wang and Shuang Wu

  In this article, we provide the first third-party security analysis of the PRINCE lightweight block cipher and of the underlying PRINCE_core. First, while no claim was made by the authors regarding related-key attacks, we show that one can attack the full cipher with only a single pair of related keys, and then reuse the same idea to derive an attack in the single-key model on the full PRINCE_core for several instances of the $\alpha$ parameter (yet not the one randomly chosen by the designers). We also show how to exploit the structural linear relations that exist for PRINCE in order to obtain a key-recovery attack that slightly breaks the security claims for the full cipher. We analyze the application of integral attacks to get the best known key-recovery attack on a reduced version of the PRINCE cipher. Finally, we provide time-memory-data tradeoffs that require only known plaintext-ciphertext data and that can be applied to the full PRINCE.
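For context, the structural relation alluded to above stems from PRINCE's $\alpha$-reflection property, as stated in the original design (our notation, not this article's): decryption under the key $(k_0 \| k_0' \| k_1)$ equals encryption under the related key $(k_0' \| k_0 \| k_1 \oplus \alpha)$, i.e. $D_{(k_0\|k_0'\|k_1)}(\cdot) = E_{(k_0'\|k_0\|k_1\oplus\alpha)}(\cdot)$, where $k_0'$ is a fixed linear function of $k_0$ and $\alpha$ is a public constant.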



03:17 [Pub][ePrint] Publicly Verifiable Software Watermarking, by Aloni Cohen and Justin Holmgren and Vinod Vaikuntanathan

  Software Watermarking is the process of transforming a program into a functionally equivalent "marked" program in such a way that it is computationally hard to remove the mark without destroying functionality. Barak, Goldreich, Impagliazzo, Rudich, Sahai, Vadhan and Yang (CRYPTO 2001) defined software watermarking and showed that the existence of indistinguishability obfuscation implies that software watermarking is impossible. Given the recent candidate constructions of indistinguishability obfuscation, this result paints a bleak picture for the possibility of meaningful watermarking.

We show that slightly relaxing the functionality requirement gives us strong positive results for watermarking. Namely, instead of requiring the marked program to agree with the original unmarked program on all inputs, we require only that they agree on a large fraction of inputs. With this relaxation in mind, our contributions are as follows.

1. We define publicly verifiable watermarking where marking a program requires a secret key, but anyone can verify that a program is marked. The handful of existing watermarking schemes are secretly verifiable, and moreover, satisfy only a weak definition where the adversary is restricted in the type of unmarked programs it is allowed to produce (Naccache, Shamir and Stern, PKC 1999; Nishimaki, EUROCRYPT 2013). Moreover, our definition requires security against chosen program attacks, where an adversary has access to an oracle that marks programs of her choice.

2. We construct a publicly verifiable watermarking scheme for any family of puncturable pseudo-random functions (PPRF), assuming indistinguishability obfuscation and injective one-way functions.

We also give an indication of the limits of watermarking by showing that the existence of robust totally unobfuscatable families of functions rules out a general watermarking scheme for cryptographic functionalities such as signatures and MACs.
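Spelling out the relaxation in symbols (the quantification is the paper's idea; only the notation here is ours): instead of requiring the marked program $\widetilde{C}$ to satisfy $\widetilde{C}(x)=C(x)$ for all inputs $x$, one only requires $\Pr_x[\widetilde{C}(x)\neq C(x)]\le\rho$ for a small fraction $\rho$, which is what makes the positive results above possible.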