International Association for Cryptologic Research

IACR News Central

Here you can see all recent updates to the IACR webpage.

20 February 2017
We study the notion of {\em ad hoc secure computation}, recently introduced by Beimel et al. (ITCS 2016), in the context of the {\em Private Simultaneous Messages} (PSM) model of Feige et al.\ (STOC 1994). In ad hoc secure computation we have $n$ parties that may potentially participate in a protocol but, at the actual time of execution, only $k$ of them, whose identity is {\em not} known in advance, actually participate. This situation is particularly challenging in the PSM setting, where protocols are non-interactive (a single message from each participating party to a special output party) and where the parties rely on pre-distributed, correlated randomness (which, in the ad hoc setting, has to take into account all possible sets of participants).

We present several different constructions of ad hoc PSM protocols from standard PSM protocols. These constructions imply, in particular, that efficient information-theoretic ad hoc PSM protocols exist for NC1 and different classes of log-space computation, and that efficient computationally secure ad hoc PSM protocols for polynomial-time computable functions can be based on a one-way function. As an application, we obtain an information-theoretic implementation of {\em order-revealing encryption} whose security holds for two messages.

We also consider the case where the actual number of participating parties $t$ may be larger than the minimal $k$ for which the protocol is designed to work. In this case, it is unavoidable that the output party learns the output corresponding to each subset of $k$ out of the $t$ participants. Therefore, a ``best possible security'' notion, requiring that this will be the {\em only} information that the output party learns, is needed. We present connections between this notion and the previously studied notion of {\em $t$-robust PSM} (also known as ``non-interactive MPC''). We show that constructions in this setting for even simple functions (like AND or threshold) can be translated into non-trivial instances of program obfuscation (such as {\em point function obfuscation} and {\em fuzzy point function obfuscation}, respectively). We view these results as a negative indication that protocols with ``best possible security'' are impossible to realize efficiently in the information-theoretic setting or require strong assumptions in the computational setting.
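
To make the ``best possible security'' requirement concrete, here is a worked example (ours, not from the paper): for $k=2$ and $t=3$ participants holding inputs $x_1,x_2,x_3$, the output party unavoidably learns the multiset
\[ \{\, f(x_i,x_j) : 1 \le i < j \le 3 \,\} = \{\, f(x_1,x_2),\ f(x_1,x_3),\ f(x_2,x_3) \,\}, \]
and the notion demands that it learn nothing beyond these $\binom{3}{2}=3$ values.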

We address the problem of whether $t$-circular-secure encryption can be based on $(t-1)$-circular-secure encryption or, for $t = 1$, on semantic (CPA) security. While for $t = 1$ a folklore construction based on CPA-secure encryption can be used to build a 1-circular-secure encryption with the same secret-key and message space, no such construction is known for the bit-encryption case, which is of particular importance in fully-homomorphic encryption. Also, for $t \geq 2$, all constructions of $t$-circular-secure encryption (bitwise or otherwise) are based on specific assumptions.

We make progress on these problems by ruling out all fully black-box constructions of

-- 1-seed circular-secure public-key bit encryption from CPA-secure public-key encryption;

-- t-seed circular-secure public-key encryption from (t-1)-seed circular-secure public-key encryption, for any $t \geq 2$.

Informally, seed-circular security is a variant of the circular security notion in which the seed of the key-generation algorithm, instead of the secret key, is encrypted. We also show how to extend our first result to rule out a large and non-trivial class of constructions of 1-circular-secure bit encryption, which we dub key-isolating constructions.
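
As a minimal formalization of the 1-seed case (our notation, consistent with the informal description above): for a seed $s \leftarrow \{0,1\}^\lambda$ and $(pk, sk) \leftarrow \mathrm{KeyGen}(s)$, the adversary is given $pk$ and must distinguish
\[ \mathrm{Enc}_{pk}(s) \quad\text{from}\quad \mathrm{Enc}_{pk}(0^{|s|}); \]
ordinary 1-circular security is the same experiment with $sk$ encrypted in place of $s$.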

Our separation model follows that of Gertner, Malkin and Reingold (FOCS’01), which is a weaker separation model than that of Impagliazzo and Rudich.

It is widely known that double encryption does not substantially increase the security of a block cipher. Indeed, the classical meet-in-the-middle attack recovers the $2k$-bit secret key at the cost of roughly $2^k$ off-line enciphering operations, in addition to very few known plaintext-ciphertext pairs; thus, double encryption is broken essentially as efficiently as the underlying cipher with a $k$-bit key.
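
For concreteness, here is a minimal sketch of the meet-in-the-middle attack on a toy 8-bit cipher (toy_encrypt/toy_decrypt are placeholders for the real block cipher; against a $k$-bit key the table would hold $2^k$ entries):

    # Meet-in-the-middle on double encryption: recover candidate (k1, k2)
    # pairs from one plaintext/ciphertext pair with ~2^k work, not 2^(2k).
    def toy_encrypt(key, block):            # stand-in for the block cipher
        return (block + 7 * key + 1) % 256

    def toy_decrypt(key, block):
        return (block - 7 * key - 1) % 256

    def mitm(p, c, keyspace=range(256)):
        table = {}                          # intermediate value -> list of k1
        for k1 in keyspace:
            table.setdefault(toy_encrypt(k1, p), []).append(k1)
        return [(k1, k2) for k2 in keyspace
                for k1 in table.get(toy_decrypt(k2, c), [])]

    p, (k1, k2) = 42, (17, 200)
    c = toy_encrypt(k2, toy_encrypt(k1, p))
    assert (k1, k2) in mitm(p, c)           # narrow down with further pairs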

This paper revisits double encryption under the lens of multi-user security. We prove that its security degrades only very mildly with an increasing number of users, as opposed to single encryption, where security drops linearly. More concretely, we give a tight bound for the multi-user security of double encryption as a pseudorandom permutation in the ideal-cipher model, and describe matching attacks.

Our contribution is also conceptual: To prove our result, we enhance and generalize the generic technique recently proposed by Hoang and Tessaro for lifting single-user to multi-user security. We believe this technique to be broadly applicable.
ePrint Report Privacy-Preserving Search of Similar Patients in Genomic Data Gilad Asharov, Shai Halevi, Yehuda Lindell, Tal Rabin
The growing availability of genomic data holds great promise for advancing medicine and research, but unlocking its full potential requires adequate methods for protecting the privacy of individuals whose genome data we use. One example of this tension is running a Similar Patient Query on remote genomic data: in this setting a doctor who holds the genome of his/her patient may try to find other individuals with ``close'' genomic data, and use the data of these individuals to help diagnose and find effective treatment for that patient's conditions. This is clearly a desirable mode of operation; however, the privacy exposure implications are considerable, so we would like to carry out the above ``closeness'' computation in a privacy-preserving manner.

Secure-computation techniques offer a way out of this dilemma, but the high cost of computing edit distance privately poses a great challenge. Wang et al.\ recently proposed [ACM CCS '15] an efficient solution for situations where the genome sequences are so close that the edit distance between two genomes can be well approximated just by looking at the indexes at which they differ from the reference genome. However, this solution does not extend well to cases with high divergence among individual genomes, and different techniques are needed there.

In this work we put forward a new approach for highly efficient secure computation of an approximation of the edit distance that works well even in settings with much higher divergence. We present contributions on two fronts: first, the design of an approximation method that yields an efficient private computation; second, further optimizations of the two-party protocol. Our tests indicate that the approximation method works well even in regions of the genome where the distance between individuals is 5\% or more with many insertions and deletions (compared to 99.5\% similarity with mostly substitutions, as considered by Wang et al.). As for speed, our protocol implementation takes just a few seconds to run on databases with thousands of records, each of length thousands of alleles, and it scales almost linearly with both the database size and the length of the sequences in it. As an example, in the datasets of the recent iDASH competition, it takes less than two seconds to find the nearest five records to a query in a size-500 dataset of length-3500 sequences. This is 2-3 orders of magnitude faster than using state-of-the-art secure protocols for exact computation.
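
To illustrate the reference-based approximation attributed above to Wang et al. (a toy sketch with made-up sequences, not this paper's approximation method): when two sequences are both close to a reference, their edit distance is roughly the size of the symmetric difference of their variant sets.

    # Represent each genome by its set of (position, base) substitutions
    # relative to a reference; approximate the edit distance between two
    # genomes by the number of variants they do not share. This works only
    # in the low-divergence regime (and this toy ignores indels entirely).
    def variants(reference, genome):
        return {(i, b) for i, (a, b) in enumerate(zip(reference, genome)) if a != b}

    def approx_edit_distance(reference, g1, g2):
        return len(variants(reference, g1) ^ variants(reference, g2))

    ref = "ACGTACGTAC"
    g1  = "ACGAACGTAC"      # one substitution at position 3
    g2  = "ACGAACGTTC"      # shares it, plus another at position 8
    print(approx_edit_distance(ref, g1, g2))   # 1: only unshared variants count
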
Constraint-hiding constrained PRFs (CHCPRFs), initially studied by Boneh, Lewi and Wu [PKC 2017], are constrained PRFs where the constrained key hides the description of the constraint. Despite envisioned powerful applications such as searchable encryption, private-detectable watermarking and symmetric deniable encryption, the only known candidates for CHCPRFs are based on indistinguishability obfuscation or multilinear maps with strong security properties.

In this paper we construct CHCPRFs for all NC1 circuits from the Learning with Errors assumption. The construction draws heavily from the graph-induced multilinear maps by Gentry, Gorbunov and Halevi [TCC 2015], as well as the existing lattice-based PRFs. In fact, our construction can be viewed as an instance of the GGH15 approach where security can be reduced to LWE.

We also show how to build from CHCPRFs reusable garbled circuits (RGC), or equivalently private-key function-hiding functional encryption with 1-key security. This provides an approach to constructing RGC different from that of Goldwasser et al. [STOC 2013].

The Principal Ideal Problem (resp. Short Principal Ideal Problem), abbreviated PIP (resp. SPIP), consists in finding a generator (resp. short generator) of a principal ideal in the ring of integers of a number field. Several lattice-based cryptosystems rely on the presumed hardness of these two problems. In practice, most of them do not use an arbitrary number field but a power-of-two cyclotomic field. The Smart and Vercauteren fully homomorphic encryption scheme and the multilinear map of Garg, Gentry, and Halevi epitomize this common restriction. Recently, Cramer, Ducas, Peikert, and Regev showed that solving the SPIP in such cyclotomic rings boils down to solving the PIP. In this paper, we present a heuristic algorithm that solves the PIP in prime-power cyclotomic fields in subexponential time L(1/2) in the discriminant of the number field. This is achieved by descending to its totally real subfield. Our implementation of this algorithm recovers in practice the secret key of the Smart and Vercauteren scheme for the smallest proposed parameters (in dimension 256).
ePrint Report Partitioned Group Password-Based Authenticated Key Exchange Dario Fiore, Maria Isabel Gonzalez Vasco, Claudio Soriente
Group Password-Based Authenticated Key Exchange (GPAKE) allows a group of users to establish a secret key, as long as all of them share the same password. However, in existing GPAKE protocols, as soon as one user runs the protocol with a non-matching password, all the others abort and no key is established. In this paper we seek a more flexible, yet secure, GPAKE and put forward the notion of partitioned GPAKE. Partitioned GPAKE tolerates users that run the protocol on different passwords. Through a protocol run, any subgroup of users that indeed share a password establishes a session key, factoring out the ``noise'' of inputs by users holding different passwords. At the same time, any two keys, each established by a different subgroup of users, are pairwise independent if the corresponding subgroups hold different passwords. We also introduce the notion of password-privacy for partitioned GPAKE, which is a kind of affiliation-hiding property, ensuring that an adversary should not be able to tell whether any given set of users share a password. Finally, we propose an efficient instantiation of partitioned GPAKE building on an unforgeable symmetric encryption scheme and a PAKE by Bellare et al. Our proposal is proven secure in the random oracle/ideal cipher model, and requires only two communication rounds.

Lattice-based cryptography is a promising candidate for building cryptographic primitives that are secure against quantum algorithms. The Learning with Errors (LWE) problem is one of the most important hardness assumptions on which lattice-based constructions base their security. Recently, Albrecht et al. (Journal of Mathematical Cryptology, 2015) presented the Sage module LWE-Estimator to estimate the hardness of concrete LWE instances, making the choice of parameters for lattice-based primitives easier and better comparable. The effectiveness of algorithms for solving LWE often depends on the number of LWE instances, called LWE samples, that are given. To give lower bounds on the hardness of concrete instances, it is assumed that each algorithm is given enough samples to run with optimal runtime per instance; that is, the optimal number of samples is assumed to be available. However, in cryptographic applications the optimal number of samples is often not available, but only a small number of samples. This leads to a more conservative choice of parameters than necessary in applications.

This work aims to improve the parameter choice with respect to the described problem. Our contribution is twofold: First, we analyze the hardness of LWE instances given a restricted number of samples. For this, we briefly describe algorithms proposed in the literature to solve LWE and estimate their computational cost while taking a restricted number of samples into account. Second, we extend the Sage module LWE-Estimator based on our theoretical results. Furthermore, we evaluate the resulting implementation and show that restricting the number of samples has a significant impact on the hardness of LWE instances.
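
For background, a minimal sketch of what an attacker restricted to $m$ LWE samples actually receives (toy parameters and the standard definition, not the estimator's code):

    # An LWE instance with m samples is (A, b = A*s + e mod q) with A an
    # m x n matrix. Solving algorithms operate on exactly these m rows, so
    # fixing a small m (rather than assuming an attack-optimal m) changes
    # the estimated cost of recovering s.
    import random

    def lwe_samples(n=8, q=97, m=5, noise=(-1, 0, 1)):
        s = [random.randrange(q) for _ in range(n)]          # secret vector
        A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
        b = [(sum(a * x for a, x in zip(row, s)) + random.choice(noise)) % q
             for row in A]
        return A, b, s

    A, b, s = lwe_samples()
    print(len(A), "samples available to the attacker")
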
ePrint Report Revisiting AES Related-Key Differential Attacks with Constraint Programming David Gérault, Pascal Lafourcade, Marine Minier, Christine Solnon
The Advanced Encryption Standard (AES) is one of the most studied symmetric encryption schemes. In recent years, several attacks have been discovered in different adversary models. In this paper, we focus on related-key differential attacks, where the adversary may introduce differences in plaintext pairs and also in keys. We show that Constraint Programming (CP) can be used to model these attacks, and that it allows us to efficiently find all optimal related-key differential characteristics for AES-128, AES-192 and AES-256. In particular, we improve the best related-key differential for the whole AES-256 and give the best related-key differential on 10 rounds of AES-192, which is the differential trail with the longest path. These results allow us to improve existing related-key distinguishers, basic related-key attacks and $q$-multicollisions on AES-256.

The Test Vector Leakage Assessment (TVLA) methodology is a qualitative tool relying on Welch's T-test to assess the security of cryptographic implementations against side-channel attacks. Despite known limitations (e.g., risks of false negatives and false positives), it is sometimes treated as a pass-fail test to determine whether such implementations are "safe" or not (without a clear definition of what "safe" means). In this note, we clarify the limited quantitative meaning of this test when used as a standalone tool. For this purpose, we first show that the straightforward application of this approach to assess the security of a masked implementation is not sufficient. More precisely, we show that even in a simple (more precisely, univariate) case study that seems best suited for the TVLA methodology, detection (or lack thereof) with Welch's T-test can be totally disconnected from the actual security level of an implementation. To this end, we put forward the case of a realistic masking scheme that looks very safe from the TVLA point of view and is nevertheless easy to break. We then discuss this result in more general terms and argue that this limitation is shared by all "moment-based" security evaluations. We conclude the note positively by describing how to use moment-based analyses as a useful ingredient of side-channel security evaluations, to determine a "security order".
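
For reference, the statistic at the heart of TVLA is just Welch's two-sample T-test between a fixed-input and a random-input trace set, applied point by point; a minimal sketch with synthetic numbers ($|t| > 4.5$ being the customary detection threshold):

    # Welch's T-test as used in TVLA: compare leakage samples recorded with
    # a fixed input against samples recorded with random inputs; |t| > 4.5
    # at any trace point is conventionally flagged as first-order leakage.
    import statistics

    def welch_t(xs, ys):
        mx, my = statistics.mean(xs), statistics.mean(ys)
        vx, vy = statistics.variance(xs), statistics.variance(ys)
        return (mx - my) / (vx / len(xs) + vy / len(ys)) ** 0.5

    fixed = [2.01, 1.98, 2.03, 2.00, 1.99, 2.02]   # synthetic samples at one
    rand  = [1.52, 1.49, 1.55, 1.47, 1.53, 1.50]   # point of the trace
    print(abs(welch_t(fixed, rand)) > 4.5)         # True: leakage flagged

The note's point is precisely that passing or failing this test, by itself, says little about the actual attack cost.
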
ePrint Report Modifying an Enciphering Scheme after Deployment Paul Grubbs, Thomas Ristenpart, Yuval Yarom
Assume that a symmetric encryption scheme has been deployed and used with a secret key, and that we must later change the encryption scheme in a way that preserves the ability to decrypt (a subset of) previously encrypted plaintexts. Frequent real-world examples are migrating from a token-based encryption system for credit-card numbers to a format-preserving encryption (FPE) scheme, or extending the message space of an already deployed FPE. The ciphertexts may be stored in systems for which it is not easy or not efficient to retrieve them (to re-encrypt the plaintext under the new scheme). We introduce methods for functionality-preserving modifications to encryption, focusing particularly on deterministic, length-preserving ciphers such as those used to perform format-preserving encryption. We provide a new technique, which we refer to as the Zig-Zag construction, that allows one to combine two ciphers using different domains in a way that results in a secure cipher on one domain. We explore its use in the two settings above, replacing token-based systems and extending message spaces. We develop appropriate security goals and prove security relative to them, assuming the underlying ciphers are themselves secure as strong pseudorandom permutations.

This paper describes a radically different privacy, security and integrity solution. Dispersed Cryptography converts the cloud from a security threat into a security asset by combining a standard stream cipher and the Quotient Ring Transform (QRT). The result is an integrated error correction/encryption algorithm. This encoding disperses data, breaking it into many smaller pieces and scattering them to different sites. No single site is critical; any can be lost without losing data. No single site can access data, even if the cryptovariable (secret key) is compromised.

The resulting system is more flexible and seamlessly adds both data integrity and security. The underlying codes are linear, and therefore have homomorphic properties and may be used in coding based quantum resistant cryptography.
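
A minimal illustration of the dispersal idea only (simple XOR parity standing in for the QRT-based code; it shows the erasure-tolerance half of the claim, not the secrecy half):

    # Split data into n pieces plus an XOR parity piece and scatter the
    # n+1 pieces to different sites; any single piece can be lost and
    # recomputed as the XOR of the remaining ones.
    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def disperse(data: bytes, n=4):
        size = -(-len(data) // n)                        # ceiling division
        pieces = [data[i*size:(i+1)*size].ljust(size, b'\0') for i in range(n)]
        parity = pieces[0]
        for piece in pieces[1:]:
            parity = xor_bytes(parity, piece)
        return pieces + [parity]                         # n + 1 sites

    def recover(shares, lost):
        rest = [p for i, p in enumerate(shares) if i != lost]
        missing = rest[0]
        for piece in rest[1:]:
            missing = xor_bytes(missing, piece)
        return missing

    shares = disperse(b"scattered to different sites", n=4)
    assert recover(shares, lost=2) == shares[2]
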
19 February 2017
Job Posting Tenure track or tenured position in Cryptology Aalto University, Finland

The vacancy is open to talented individuals who are interested in an excellent opportunity to pursue a successful scientific career. The position is targeted primarily at candidates for the Assistant Professor level. However, candidates with an outstanding record for Associate or Full Professor levels may also be considered.

The professorship is a joint position between the Department of Computer Science (http://cs.aalto.fi/en/) and the Department of Mathematics and Systems analysis (http://math.aalto.fi/en/). With strong research groups in systems security, theoretical computer science, algebra and discrete mathematics, and stochastics, Aalto University is emerging as a leader in information security. The selected candidate is expected to establish independent research and teaching in cryptology. We solicit applications from candidates with expertise in any area of modern cryptology including, but not limited to, symmetric-key and public-key cryptography and cryptanalysis, information-theoretic and complexity-theoretic perspectives of cryptology, as well as research on implementation and application of cryptographic primitives.

Closing date for applications: 31 March 2017

Contact: Professor N. Asokan, tel. +358 50 4836465, or Professor Camilla Hollanti, tel. +358 50 5628987, or, for questions related to the recruitment process, HR Coordinator Laura Kuusisto-Noponen.

e-mails: firstname.lastname (at) aalto.fi or, for Prof. N. Asokan, firstinitial.lastname (at) aalto.fi

More information: http://www.aalto.fi/en/about/careers/jobs/view/1210/

17 February 2017
Job Posting Ph.D. student in the area of Implementations of Post-Quantum Cryptography Cryptographic Engineering Research Group at George Mason University, USA

Cryptographic Engineering Research Group (CERG) at George Mason University, USA, is seeking qualified candidates for a Graduate Research Assistant position in the area of efficient implementations of post-quantum cryptosystems and attacks against these cryptosystems. The desired qualifications include a strong mathematical background in algebra and number theory, experience in hardware design using hardware description languages, and knowledge of C and scripting languages, such as Python. Additional experience in Magma or SageMath, ASIC and FPGA design, software/hardware codesign, High-Level Synthesis, embedded software development, side-channel analysis, GPU programming, and the Linux operating system is a plus.

The position is open starting in Fall 2017. Qualified candidates should apply to the ECE PhD program at George Mason University by March 15, 2017, indicating Dr. Gaj and/or Dr. Kaps as their preferred academic advisors. In parallel, earlier e-mail contact with Dr. Gaj at kgaj (at) gmu.edu is highly recommended.

Closing date for applications: 15 March 2017

Contact: Kris Gaj, kgaj (at) gmu.edu, http://ece.gmu.edu/~kgaj

More information: https://cryptography.gmu.edu/team

16 February 2017
ePrint Report Hashing Garbled Circuits for Free Xiong Fan, Chaya Ganesh, Vladimir Kolesnikov
We introduce {\em Free Hash}, a new approach to generating a Garbled Circuit (GC) hash at no extra cost during GC generation. This is in contrast with state-of-the-art approaches, which hash GCs at a computational cost of up to $6\times$ that of GC generation. GC hashing is at the core of the cut-and-choose technique of GC-based secure function evaluation (SFE).

Our main idea is to intertwine hash generation/verification with GC generation and evaluation. While we {\em allow} an adversary to generate a garbled circuit $\widehat{\mathsf{GC}}$ whose hash collides with an honestly generated $\mathsf{GC}$, such a $\widehat{\mathsf{GC}}$ will, with high probability, fail evaluation and cheating will be discovered. Our GC hash is simply a (slightly modified) XOR of all the gate table rows of the GC. It is compatible with Free XOR and half-gates garbling, and can be made to work with many cut-and-choose SFE protocols.
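
A minimal sketch of the hash itself as just described (toy random gate tables; the paper's actual hash includes a slight modification to the plain XOR that is omitted here):

    # Free Hash core idea: the GC hash is (essentially) the XOR of all gate
    # table rows, so it can be accumulated during garbling at no extra cost.
    import os

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def gc_hash(gate_tables, row_len=16):
        acc = bytes(row_len)
        for table in gate_tables:        # one table per non-free gate
            for row in table:            # e.g., ciphertext rows of the gate
                acc = xor_bytes(acc, row)
        return acc

    tables = [[os.urandom(16) for _ in range(4)] for _ in range(3)]
    h = gc_hash(tables)                  # computed "for free" while garbling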

With today's network speeds not far behind hardware-assisted fixed-key garbling throughput, eliminating the GC hashing cost will significantly improve SFE performance. Our estimates show substantial cost reductions in typical settings, and up to a factor of $6$ in specialized applications relying on GC hashes.

We implemented the GC hashing algorithm and report on its performance.

Cryptographic APIs like PKCS#11 are interfaces to trusted hardware where keys are stored; the secret keys should never leave the trusted hardware in plaintext. In PKCS#11 it is possible to give keys conflicting roles, leading to a number of key-recovery attacks. To prevent these attacks, one can authenticate the attributes of keys when wrapping, but this is not standard in PKCS#11. Alternatively, one can configure PKCS#11 to place additional restrictions on the commands permitted by the API.
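
The classic example of such a conflicting-role attack, in a toy model of the API (not real PKCS#11 calls): a key holding both the wrap and decrypt roles lets anyone export another key and then decrypt the export.

    # Wrap/decrypt key-recovery attack on a PKCS#11-like token: if key 1
    # may both wrap other keys and decrypt ciphertexts, then wrapping key 2
    # under key 1 and decrypting the result reveals key 2 in plaintext.
    def toy_cipher(key, data):           # stand-in for symmetric encryption
        return bytes(a ^ b for a, b in zip(data, key * len(data)))

    class ToyToken:
        def __init__(self):
            self.keys = {}               # handle -> (key bytes, roles)

        def wrap(self, wrapping, target):
            key, roles = self.keys[wrapping]
            assert "wrap" in roles
            return toy_cipher(key, self.keys[target][0])

        def decrypt(self, handle, ciphertext):
            key, roles = self.keys[handle]
            assert "decrypt" in roles
            return toy_cipher(key, ciphertext)

    token = ToyToken()
    token.keys[1] = (b"\x13" * 16, {"wrap", "decrypt"})  # conflicting roles
    token.keys[2] = (b"\x37" * 16, {"decrypt"})          # victim key

    blob = token.wrap(1, 2)              # legal call: key 2 leaves, encrypted
    assert token.decrypt(1, blob) == b"\x37" * 16        # key 2 recovered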

Bortolozzo et al. proposed a configuration of PKCS#11, called the Secure Templates Patch (STP), supporting symmetric encryption and key wrapping. However, the security guarantees for STP given by Bortolozzo et al. are with respect to a weak attacker model. STP has been implemented as a set of filtering rules in Caml Crush, a software filter for PKCS#11 that rejects certain API calls. The filtering rules in Caml Crush extend STP by allowing users to compute and verify MACs and so the previous analysis of STP does not apply to this configuration.

We give a rigorous analysis of STP, including the extension used in Caml Crush. Our contribution is as follows:

(i) We show that the extension of STP used in Caml Crush is insecure.

(ii) We propose a strong, computational security model for configurations of PKCS#11 where the adversary can adaptively corrupt keys and prove that STP is secure in this model.

(iii) We prove the security of an extension of STP that adds support for public-key encryption and digital signatures.
ePrint Report Composable and Robust Outsourced Storage Christian Badertscher, Ueli Maurer
The security of data outsourcing mechanisms has become a crucial aspect of today's IT infrastructures. Security goals range from ensuring storage integrity, confidentiality, and access pattern hiding, to proofs of storage, proofs of ownership, and secure deduplication techniques. Despite sharing a common setting, previous security analyses of these tasks are often performed in different models and in a stand-alone fashion, which makes it hard to assess the overall security of a protocol or application involving several security schemes. In this work, we fill this gap and provide a composable model to capture the above security goals. We instantiate the basic client-server setting in this model, where the goal of the honest client is to retain security in the presence of a malicious server. Three specific contributions of the paper, which may be of independent interest, are:

1.) We present a novel and composable definition for secure and robust outsourcing schemes. Our definition is stronger than previous definitions for oblivious RAM or software protection, and assures strong security guarantees against active attacks. It not only assures that an attacker cannot learn the access pattern, but moreover assures resilience to errors and prevention of targeted attacks on specific locations. We provide a protocol based on the well-known Path ORAM scheme achieving this strong security goal. We justify the need for such a strong notion in practice and show that several existing schemes cannot achieve this level of security.

2.) We present a novel and composable definition for proofs of retrievability, capturing the guarantee that a successful audit implies that the current server state allows the client to retrieve his data. As part of our study, we develop an audit mechanism, based on secure and robust outsourcing schemes, that is similar to the construction by Cash et al. (Eurocrypt 2013), but is universally composable and fault-tolerant.

3.) We assess the security of the standard challenge-response audit mechanism, in which the server has to compute a hash $H(F||c)$ on the file $F$ concatenated with a uniformly random challenge $c$ chosen by the client. Being concerned with composable security, we prove that this audit mechanism is not secure, even in the random oracle model, without assuming additional restrictions on the server behavior. The security of this basic audit scheme was implicitly assumed in Ristenpart et al. (Eurocrypt 2011). To complete the picture, we state the additional assumptions for this audit mechanism to be provably secure and investigate the (in)applicability of hash-function constructions in this setting.
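
For reference, a minimal sketch of the basic challenge-response audit analyzed in 3.) above (hash choice and encodings are illustrative):

    # Standard audit: the client sends a fresh uniformly random challenge c
    # and checks the server's H(F || c) against the hash computed over its
    # own expected copy of the file F.
    import hashlib, os

    def server_respond(stored_file: bytes, challenge: bytes) -> bytes:
        return hashlib.sha256(stored_file + challenge).digest()

    def client_audit(expected_file: bytes, respond) -> bool:
        c = os.urandom(32)
        return respond(c) == hashlib.sha256(expected_file + c).digest()

    F = b"outsourced file contents"
    assert client_audit(F, lambda c: server_respond(F, c))   # honest server
    # The paper shows this alone is not composably secure without further
    # assumptions on the server's behavior.
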
In the recent IEEE communication letter ``Grouping-Proof-Distance-Bounding Protocols: Keep All Your Friends Close'' by Karlsson and Mitrokotsa, a protocol for grouping-proof distance-bounding (GPDB) is proposed. In this letter, we show that the proof generated by the proposed GPDB protocol does not actually prove anything. Furthermore, we provide a construction towards a distance-bounding grouping-proof; however, it remains unclear if one can ever truly combine (privacy-preserving) distance-bounding and a grouping-proof.
ePrint Report A Practical Multivariate Blind Signature Scheme Albrecht Petzoldt, Alan Szepieniec, Mohamed Saied Emam Mohamed
Multivariate Cryptography is one of the main candidates for creating post-quantum cryptosystems. Especially in the area of digital signatures, there exist many practical and secure multivariate schemes. However, there is a lack of multivariate signature schemes with special properties such as blind, ring and group signatures. In this paper, we propose a generic technique to transform multivariate signature schemes into blind signature schemes and show the practicality of the construction using Rainbow as an example. The resulting scheme satisfies the usual blindness criterion and a one-more-unforgeability criterion adapted to MQ signatures, produces short blind signatures and is very efficient.

A distributed computation in which nodes are connected by a partial communication graph is called \emph{topology-hiding} if it does not reveal information about the graph (beyond what is revealed by the output of the function). Previous results [Moran, Orlov, Richelson; TCC'15] have shown that topology-hiding computation protocols exist for graphs of logarithmic diameter (in the number of nodes), but the feasibility question for graphs of larger diameter was open even for very simple graphs such as chains, cycles and trees.

In this work, we take a step towards topology-hiding computation protocols for arbitrary graphs by constructing protocols that can be used in a large class of {\em large-diameter networks}, including cycles, trees and graphs with logarithmic \emph{circumference}. Our results use very different methods from [MOR15] and can be based on a standard assumption (such as DDH).
ePrint Report Sublinear Zero-Knowledge Arguments for RAM Programs Payman Mohassel, Mike Rosulek, Alessandra Scafuro
We describe a new succinct zero-knowledge argument protocol with the following properties. The prover commits to a large data-set $M$, and can thereafter prove many statements of the form $\exists w : \mathcal{R}_i(M,w)=1$, where $\mathcal{R}_i$ is a public function. The protocol is {\em succinct} in the sense that the cost for the verifier (in computation and communication) does not depend on $|M|$, not even in any initialization phase. In each proof, the computation/communication cost for {\em both} the prover and the verifier is proportional only to the running time of an oblivious RAM program implementing $\mathcal{R}_i$ (in particular, this can be sublinear in $|M|$). The only costs that scale with $|M|$ are the computational costs of the prover in a one-time initial commitment to $M$.

Known sublinear zero-knowledge proofs either require an initialization phase where the work of the verifier is proportional to $|M|$ and are therefore sublinear only in an amortized sense, or require that the computational cost for the prover is proportional to $|M|$ upon {\em each proof}.

Our protocol uses efficient cryptographic primitives in a black-box way and is UC-secure in the {\em global}, non-programmable random-oracle model; hence it does not rely on any trusted setup assumption.
ePrint Report New Collision Attacks on Round-Reduced Keccak Kexin Qiao, Ling Song, Meicheng Liu, Jian Guo
In this paper, we focus on collision attacks against the Keccak hash function family and some of its variants. Following the framework developed by Dinur et al. at FSE 2012, where 4-round collisions were found by combining 3-round differential trails and 1-round connectors, we extend the connectors one round further and hence achieve collision attacks for up to 5 rounds. The extension is possible thanks to the large degree of freedom of the wide internal state. By linearizing all S-boxes of the first round, the problem of finding solutions of 2-round connectors is converted to that of solving a system of linear equations. However, due to the quick reduction of freedom caused by the linearization, the system has a solution only when the 3-round differential trails satisfy some additional conditions. We develop a dedicated differential trail search strategy and find that such special trails indeed exist. As a result, we obtain the first practical collision attack against 5-round SHAKE128 and two 5-round instances of the Keccak collision challenges, with real examples. We also give the first results against 5-round Keccak-224 and 6-round Keccak collision challenges. It is remarked that the work here is still far from threatening the security of the full 24-round Keccak family.

Indistinguishability Obfuscation (iO) has enabled an incredible number of new and exciting applications. However, our understanding of how to actually build secure iO remains in its infancy. While many candidate constructions have been published, some have been broken, and it is unclear which of the remaining candidates are secure.

This work deals with the following basic question: \emph{Can we hedge our bets when it comes to iO candidates?} In other words, if we have a collection of iO candidates, and we only know that at least one of them is secure, can we still make use of these candidates?

This topic was recently studied by Ananth, Jain, Naor, Sahai, and Yogev [CRYPTO 2016], who showed how to construct a robust iO combiner: Specifically, they showed that given the situation above, we can construct a single iO scheme that is secure as long as (1) at least one candidate iO scheme is a subexponentially secure iO, and (2) either the subexponential DDH or LWE assumptions hold.

In this work, we make three contributions:

-- (\textbf{Better robust iO combiners.}) First, we improve the assumptions needed to obtain the same result as Ananth et al.: namely, we show how to replace the DDH/LWE assumption with the assumption that subexponentially secure one-way functions exist.

-- (\textbf{Transforming combiners from iO to FE and NIKE.}) Second, we consider a broader question: what if we start with several iO candidates where only one works, but we don't care about achieving iO itself; rather, we want to achieve concrete applications of iO? In this case, we are able to work with the {\em minimal} assumption of just polynomially secure one-way functions, where the working iO candidate only achieves polynomial security. We call such combiners {\em transforming combiners}. More generally, a transforming combiner from primitive A to primitive B is one that takes as input many candidates of primitive A, out of which we are guaranteed that at least one is secure, and outputs a secure candidate of primitive B. We can correspondingly define robust transforming combiners. We present transforming combiners from indistinguishability obfuscation to {\em functional encryption} and {\em non-interactive multiparty key exchange (NIKE)}.

-- (\textbf{Correctness amplification for iO from polynomial security and one-way functions.}) Finally, along the way, we obtain a result of independent interest: recently, Bitansky and Vaikuntanathan [TCC 2016] showed how to amplify the correctness of an iO scheme, but they needed subexponential security for the iO scheme and also required subexponentially secure DDH or LWE. We show how to achieve the same correctness amplification result, requiring only polynomial security from the iO scheme and assuming only polynomially secure one-way functions.

Recent work on searchable symmetric encryption (SSE) has focused on increasing its expressiveness. A notable example is the OXT construction (Cash et al., CRYPTO '13) which is the first SSE scheme to support conjunctive keyword queries with sub-linear search complexity. While OXT efficiently supports disjunctive and boolean queries that can be expressed in searchable normal form, it can only handle arbitrary disjunctive and boolean queries in linear time. This motivates the problem of designing expressive SSE schemes with worst-case sub-linear search; that is, schemes that remain highly efficient for any keyword query.

In this work, we address this problem and propose non-interactive highly efficient SSE schemes that handle arbitrary disjunctive and boolean queries with worst-case sub-linear search and optimal communication complexity. Our main construction, called IEX, makes black-box use of an underlying single keyword SSE scheme which we can instantiate in various ways. Our first instantiation, IEX-2Lev, makes use of the recent 2Lev construction (Cash et al., NDSS '14) and is optimized for search at the expense of storage overhead. Our second instantiation, IEX-ZMF, relies on a new single keyword SSE scheme we introduce called ZMF and is optimized for storage overhead at the expense of efficiency (while still achieving asymptotically sub-linear search). Our ZMF construction is the first adaptively-secure highly compact SSE scheme and may be of independent interest. At a very high level, it can be viewed as an encrypted version of a new Bloom filter variant we refer to as a Matryoshka filter. In addition, we show how to extend IEX to be dynamic and forward-secure.

To evaluate the practicality of our schemes, we designed and implemented a new encrypted search framework called Clusion. Our experimental results demonstrate the practicality of IEX and of its instantiations with respect to both search (for IEX-2Lev) and storage overhead (for IEX-ZMF).

In cut-and-choose protocols for two-party secure computation (2PC) the main overhead is the number of garbled circuits that must be sent. Recent work (Lindell, Riva; Huang et al., Crypto 2014) has shown that in a batched setting, when the parties plan to evaluate the same function $N$ times, the number of garbled circuits per execution can be reduced by a $O(\log N)$ factor compared to the single-execution setting. This improvement is significant in practice: an order of magnitude for $N$ as low as one thousand. Besides the number of garbled circuits, communication round trips are another significant performance bottleneck. Afshar et al. (Eurocrypt 2014) proposed an efficient cut-and-choose 2PC that is round-optimal (one message from each party), but in the single-execution setting.

In this work we present new malicious-secure 2PC protocols that are round-optimal and also take advantage of batching to reduce cost. Our contributions include:

-- A 2-message protocol for batch secure computation ($N$ instances of the same function). The number of garbled circuits is reduced by a $O(\log N)$ factor over the single-execution case. However, other aspects of the protocol that depend on the input/output size of the function do not benefit from the same $O(\log N)$-factor savings.

-- A 2-message protocol for batch secure computation, in the random oracle model. All aspects of this protocol benefit from the $O(\log N)$-factor improvement, except for small terms that do not depend on the function being evaluated.

-- A protocol in the offline/online setting. After an offline preprocessing phase that depends only on the function $f$ and $N$, the parties can securely evaluate $f$, $N$ times (not necessarily all at once). Our protocol's online phase is only 2 messages, and the total online communication is only $\ell + O(\kappa)$ bits, where $\ell$ is the input length of $f$ and $\kappa$ is a computational security parameter. This is only $O(\kappa)$ bits more than the information-theoretic lower bound for malicious 2PC.

The round complexity of secure computation has been a fundamental problem in cryptography. Katz and Ostrovsky proved that 5 rounds are both necessary and sufficient for secure computation in the stand-alone setting, thus resolving the exact round complexity of stand-alone secure computation.

In contrast, round complexity of secure computation in the concurrent setting, where several protocols may run simultaneously, is poorly understood. Since standard polynomial time simulation is impossible in the concurrent setting, alternative security notions have been proposed, e.g., super-polynomial simulation (SPS). While SPS security can be achieved in constant rounds, the actual constant (> 20) is far from optimal.

In this work, we take the first steps towards studying the exact round complexity of concurrent secure computation. We focus on the two-party case and present a new secure computation protocol that achieves SPS security under concurrent self-composition. Our protocol has 5 rounds assuming quasi-polynomially-hard injective one-way functions (or 7 rounds assuming standard polynomially-hard collision-resistant hash functions). We also require other standard assumptions, specifically trapdoor OWPs and lossy TDFs. This matches the round complexity of stand-alone secure computation.

More specifically, our security proof presents a polynomial time reduction from SPS security to 3-round public-coin non-malleable commitments with appropriate extractability properties. Such commitments are known based on quasi-polynomially-hard injective OWFs. (The reduction also works with a special 6-round non-malleable commitment to yield the 7-round result under CRHFs.)

A public key encryption scheme is said to be $n$-circular secure if no PPT adversary can distinguish between encryptions of a key cycle of length $n$ and $n$ encryptions of zero.
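
Spelled out (with $0$ denoting a fixed zero message of the appropriate length), the adversary must distinguish
\[ \big(\mathrm{Enc}_{pk_1}(sk_2),\ \mathrm{Enc}_{pk_2}(sk_3),\ \ldots,\ \mathrm{Enc}_{pk_n}(sk_1)\big) \quad\text{from}\quad \big(\mathrm{Enc}_{pk_1}(0),\ \ldots,\ \mathrm{Enc}_{pk_n}(0)\big). \]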

One interesting question is whether circular security comes for free from IND-CPA security. Recent works have addressed this question, showing that for every integer n there exists an IND-CPA-secure scheme that is not n-circular secure. However, this leaves open the possibility that for every IND-CPA cryptosystem there exists a cycle length l, dependent on the cryptosystem (and the security parameter), such that the scheme is l-circular secure. If this were true, it would directly lead to many applications; in particular, it would give us a fully homomorphic encryption scheme via Gentry's bootstrapping.

In this work, we show that this is not true. Assuming indistinguishability obfuscation and leveled homomorphic encryption, we construct an IND-CPA scheme such that, for all cycle lengths l, the scheme is not l-circular secure.

Verifiable encryption allows one to prove properties about encrypted data and is an important building block in the design of cryptographic protocols such as group signatures, key escrow, and fair exchange protocols. Existing lattice-based verifiable encryption schemes, and even just proofs of knowledge of the encrypted data, require parallel composition of proofs to reduce the soundness error, resulting in proof sizes that are only truly practical when amortized over a large number of ciphertexts.

In this paper, we present a new construction of a verifiable encryption scheme for short solutions to linear equations over polynomial rings, based on the hardness of the Ring-LWE problem in the random-oracle model. Our scheme is "one-shot", in the sense that a single instance of the proof already has negligible soundness error, yielding compact proofs even for individual ciphertexts. Whereas verifiable encryption usually guarantees that decryption can recover a witness for the original language, we relax this requirement to decrypting a witness of a related but extended language. This relaxation is sufficient for many applications, and we illustrate this with example usages of our scheme in key escrow and verifiably encrypted signatures.

One of the interesting aspects of our construction is that the decryption algorithm is probabilistic and uses the proof as input (rather than using only the ciphertext). The decryption time for honestly generated ciphertexts depends only on the security parameter, while the expected running time for decrypting an adversarially generated ciphertext is directly related to the number of random-oracle queries of the adversary who created it. This property suffices in most practical scenarios, especially in situations where the ciphertext proof is part of an interactive protocol, where the decryptor is substantially more powerful than the adversary, or where adversaries can otherwise be discouraged from submitting malformed ciphertexts.

We introduce the twisted $\mu_4$-normal form for elliptic curves, deriving in particular addition algorithms with complexity $9M + 2S$ and doubling algorithms with complexity $2M + 5S + 2m$ over a binary field. Every ordinary elliptic curve over a finite field of characteristic 2 is isomorphic to one in this family. This improvement to the addition algorithm is comparable to the $7M + 2S$ achieved for the $\mu_4$-normal form, and replaces the previously best known complexity of $13M + 3S$ on L\'opez-Dahab models applicable to these twisted curves. The derived doubling algorithm is essentially optimal, without any assumption of special cases. We show moreover that the Montgomery scalar multiplication with point recovery carries over to the twisted models, giving symmetric scalar multiplication adapted to protect against side-channel attacks, with a cost of $4M + 4S + 1m_t + 2m_c$ per bit. In characteristic different from 2, we establish a linear isomorphism with the twisted Edwards model. This work complements the introduction of the $\mu_4$-normal form by filling the lacuna in the body of work on efficient arithmetic on elliptic curves over binary fields, providing a common framework for elliptic curves in $\mu_4$-normal form in any characteristic. The improvements are analogous to those which the Edwards and twisted Edwards models achieved for elliptic curves over finite fields of odd characteristic, and they extend $\mu_4$-normal form to cover the binary NIST curves.

In this work we separate private-key semantic security from circular security using the Learning with Errors assumption. Prior works used the less standard assumptions of multilinear maps or indistinguishability obfuscation. To achieve our results we develop new techniques for obliviously evaluating branching programs.
