International Association for Cryptologic Research

IACR News Central

Here you can see all recent updates to the IACR webpage.

29 May 2017
ePrint Report Refined Probability of Differential Characteristics Including Dependency Between Multiple Rounds Anne Canteaut, Eran Lambooij, Samuel Neves, Shahram Rasoolzadeh, Yu Sasaki, Marc Stevens
The current paper studies the probability of differential characteristics for an unkeyed (or fixed-key) construction. Most notably, it focuses on the gap between two probabilities of differential characteristics: the probability under the independent S-box assumption, $p_{ind}$, and the exact probability, $p_{exact}$. It turns out that $p_{exact}$ is larger than $p_{ind}$ in a Feistel network with an S-box-based inner function. The mechanism of this gap is then theoretically analyzed. The gap is derived from the interaction of S-boxes across three rounds, and it depends on the size and choice of the S-box. In particular, the gap can never be zero when the S-box is bigger than six bits. To demonstrate the power of this improvement, a related-key differential characteristic is proposed against the lightweight block cipher RoadRunneR. For the 128-bit key version, $p_{ind}$ of $2^{-48}$ is improved to $p_{exact}$ of $2^{-43}$. For the 80-bit key version, $p_{ind}$ of $2^{-68}$ is improved to $p_{exact}$ of $2^{-62}$. The analysis is further extended to an SPN with an almost-MDS binary matrix in the core primitive of the authenticated encryption scheme Minalpher: $p_{ind}$ of $2^{-128}$ is improved to $p_{exact}$ of $2^{-96}$, which allows the attack to be extended by two rounds.
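To make the gap concrete, the following toy sketch (not from the paper) builds a 3-round unkeyed Feistel with 4-bit branches and the PRESENT S-box, enumerates the differential trails that actually occur for one arbitrarily chosen input difference, and prints the exact probability of each trail next to the product of per-round DDT probabilities; whether the two values differ for a given trail depends on the S-box and trail chosen.

```python
from collections import Counter

# Toy illustration (not from the paper): compare the independent-round
# probability p_ind of a differential trail with its exact probability
# p_exact on a tiny unkeyed 3-round Feistel with 4-bit branches.
# The S-box (PRESENT) and the input difference are arbitrary demo choices.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    table = [[0] * 16 for _ in range(16)]
    for x in range(16):
        for dx in range(16):
            table[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return table

def feistel_round(L, R):
    return R, L ^ SBOX[R]                 # (L, R) -> (R, L xor S(R))

def trail_of(L, R, dL, dR, rounds=3):
    # difference trail followed by the pair (L, R), (L ^ dL, R ^ dR)
    a, b, a2, b2 = L, R, L ^ dL, R ^ dR
    trail = [(dL, dR)]
    for _ in range(rounds):
        a, b = feistel_round(a, b)
        a2, b2 = feistel_round(a2, b2)
        trail.append((a ^ a2, b ^ b2))
    return tuple(trail)

def p_ind(trail, table):
    # product of per-round S-box transition probabilities (independence assumption)
    p = 1.0
    for (dL0, dR0), (dL1, dR1) in zip(trail, trail[1:]):
        p *= table[dR0][dR1 ^ dL0] / 16.0
    return p

TABLE = ddt(SBOX)
dL0, dR0 = 0x0, 0x9                       # arbitrary input difference
counts = Counter(trail_of(L, R, dL0, dR0) for L in range(16) for R in range(16))
for trail, c in counts.most_common(3):
    print(trail, " p_exact =", c / 256, " p_ind =", p_ind(trail, TABLE))
```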
A constrained pseudorandom function (PRF) is a secure PRF for which one can generate constrained keys that can only be used to evaluate the PRF on a subset of the domain. Constrained PRFs are used widely, most notably in applications of indistinguishability obfuscation (iO). In this paper we show how to constrain an invertible PRF (IPF), which is significantly harder. An IPF is a secure injective PRF accompanied by an inversion algorithm. A constrained key for an IPF can only be used to evaluate the IPF on a subset S of the domain, and to invert the IPF on the image of S. We first define the notion of a constrained IPF and then give two main constructions: one for puncturing an IPF and the other for (single-key) circuit constraints. Both constructions rely on recent work on private constrained PRFs. We also show that constrained pseudorandom permutations are impossible under our definition.
ePrint Report Forward-Security under Continual Leakage Mihir Bellare, Adam O'Neill, Igors Stepanovs
Current signature and encryption schemes secure against continual leakage fail completely if the key in any time period is fully exposed. We suggest forward security as a second line of defense, so that in the event of full exposure of the current secret key, at least the uses of keys from prior time periods remain secure, a big benefit in practice. (For example, if the signer is a certificate authority, full exposure of the current secret key would not invalidate certificates signed under prior keys.) We provide definitions for signatures and encryption that are forward-secure under continual leakage. Achieving these definitions turns out to be challenging, and we make initial progress with some constructions and transforms.
28 May 2017
The iterated Even--Mansour (EM) ciphers form the basis of many block cipher designs. Several results have established their security in the CPA/CCA models, under related-key attacks, and in the indifferentiability framework. In this work, we study the Even--Mansour ciphers under key-dependent message (KDM) attacks. KDM security is particularly relevant for block ciphers, since non-expanding mechanisms are convenient in settings such as full disk encryption (where various forms of key dependency might exist). We formalize the folklore result that the ideal cipher is KDM secure. We then show that EM ciphers meet varying levels of KDM security depending on the number of rounds and permutations used. One-round EM achieves some form of KDM security, but this excludes security against offsets of keys. With two rounds we obtain KDM security against offsets, and using different round permutations we achieve KDM security against all permutation-independent claw-free functions. As a contribution of independent interest, we present a modular framework that can facilitate the security treatment of symmetric constructions in models such as RKA or KDM that allow for correlated inputs.
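For reference, a minimal sketch of the one- and two-round Even--Mansour constructions follows, over toy 8-bit blocks; the shuffled tables stand in for the public permutations, and the single-key one-round variant shown is just one of several possible keying choices.

```python
import random

# Minimal sketch: one- and two-round Even--Mansour over toy 8-bit blocks.
# The shuffled tables below stand in for the public permutations.

rng = random.Random(0)
P1 = list(range(256)); rng.shuffle(P1)
P2 = list(range(256)); rng.shuffle(P2)

def em1(k, x):
    # one-round EM: E_k(x) = P1(x xor k) xor k
    return P1[x ^ k] ^ k

def em2(k0, k1, k2, x):
    # two-round EM: E(x) = P2(P1(x xor k0) xor k1) xor k2
    return P2[P1[x ^ k0] ^ k1] ^ k2

k = rng.randrange(256)
print(hex(em1(k, 0x3A)), hex(em2(k, k, k, 0x3A)))
```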
Two types of tweakable blockciphers based on classical blockciphers have been presented over the last years: non-tweak-rekeyable and tweak-rekeyable, depending on whether the tweak may influence the key input to the underlying blockcipher. In the former direction, the best possible security is conjectured to be $2^{\sigma n/(\sigma+1)}$, where $n$ is the size of the blockcipher and $\sigma$ is the number of blockcipher calls. In the latter direction, Mennink and Wang et al. presented optimally secure schemes, but only in the ideal cipher model. We investigate the possibility of constructing a tweak-rekeyable cipher that achieves optimal security in the standard cipher model. As a first step, we note that all standard-model security results in the literature implicitly rely on a generic standard-to-ideal transformation, which replaces all keyed blockcipher calls by random secret permutations, at the cost of the security of the blockcipher. Then, we prove that if this proof technique is adopted, tweak-rekeying will not help in achieving optimal security: if $2^{\sigma n/(\sigma+1)}$ is the best one can get without tweak-rekeying, optimal $2^n$ provable security with tweak-rekeying is impossible.
At CRYPTO 2016, Cogliati and Seurin introduced the Encrypted Davies-Meyer construction, $p_2(p_1(x) \oplus x)$ for two $n$-bit permutations $p_1,p_2$, and proved security up to $2^{2n/3}$. We present an improved security analysis up to $2^n/(67n)$. Additionally, we introduce the dual of the Encrypted Davies-Meyer construction, $p_2(p_1(x)) \oplus p_1(x)$, and prove even tighter security for this construction: $2^n/67$. We finally demonstrate that the analysis neatly generalizes to prove almost optimal security of the Encrypted Wegman-Carter with Davies-Meyer MAC construction and its newly introduced dual. Central to our analysis is a modernization of Patarin's mirror theorem and an exposition of how it relates to fundamental cryptographic problems.
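For orientation, the two constructions mentioned above can be sketched directly; the snippet below uses toy 8-bit permutation tables as stand-ins for the $n$-bit permutations $p_1, p_2$.

```python
import random

# Sketch of the two constructions above, with toy 8-bit permutation tables
# standing in for the n-bit permutations p1, p2.

rng = random.Random(42)
p1 = list(range(256)); rng.shuffle(p1)
p2 = list(range(256)); rng.shuffle(p2)

def edm(x):
    # Encrypted Davies-Meyer: p2(p1(x) xor x)
    return p2[p1[x] ^ x]

def edm_dual(x):
    # the dual: p2(p1(x)) xor p1(x)
    return p2[p1[x]] ^ p1[x]

print(hex(edm(0x17)), hex(edm_dual(0x17)))
```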
This paper presents a unified framework that supports different types of privacy-preserving search queries over encrypted cloud data. In the framework, users can perform any of the multi-keyword search, range search and k-nearest neighbor search operations in a privacy-preserving manner. All three types of queries are transformed into predicate-based searches leveraging bucketization, locality sensitive hashing and homomorphic encryption techniques. The proposed framework is implemented using Hadoop MapReduce, and its efficiency and accuracy are evaluated using publicly available real data sets. The implementation results show that the proposed framework can effectively be used on moderate-sized data sets and that it scales to much larger data sets provided that the number of computers in the Hadoop cluster is increased. To the best of our knowledge, the proposed framework is the first privacy-preserving solution in which three different types of search queries are effectively applied over encrypted data.
In this paper we show how to totally break the fully homomorphic encryption scheme of ePrint report 2017/458.
ePrint Report On the Relation Between SIM and IND-RoR Security Models for PAKEs José Becerra, Vincenzo Iovino, Dimiter Ostrev, Marjan Skrobot
Password-based Authenticated Key-Exchange (PAKE) protocols allow users, who need only to share a password, to compute a high-entropy shared session key despite passwords being taken from a dictionary. Security models for PAKE protocols aim to capture the desired security properties that such protocols must satisfy when executed in the presence of an active adversary. They are usually classified into i) indistinguishability-based (IND-based) or ii) simulation-based (SIM-based). The relation between these two security notions is unclear and mentioned as a gap in the literature. In this work, we prove that SIM-BMP security from Boyko et al. (EUROCRYPT 2000) implies IND-RoR security from Abdalla et al. (PKC 2005) and that IND-RoR security is equivalent to a slightly modified version of SIM-BMP security. We also investigate whether IND-RoR security implies (unmodified) SIM-BMP security.
We propose a technique for individually modifying an attribute-based encryption scheme (ABE) that is secure against chosen-plaintext attacks (CPA) into an ABE scheme that is secure against chosen-ciphertext attacks (CCA) in the standard model. We demonstrate the technique in the case of the Waters ciphertext-policy ABE (CP-ABE). Our technique is helpful when a Diffie-Hellman tuple to be verified is in the terminal group of a bilinear map. We utilize the Twin Diffie-Hellman Trapdoor Test of Cash, Kiltz and Shoup; it expands the secret key length and the decryption cost by a factor of four, whereas the public key length, ciphertext length and encryption cost remain almost the same. In the case that the size of the attribute set is small, these lengths and costs are smaller than those of the CP-ABE obtained via the generic transformation of Yamada et al. in PKC 2011.
ePrint Report Why Your Encrypted Database Is Not Secure Paul Grubbs, Thomas Ristenpart, Vitaly Shmatikov
Encrypted databases, a popular approach to protecting data from compromised database management systems (DBMSs), use abstract threat models that capture neither realistic databases nor realistic attack scenarios. In particular, the “snapshot attacker” model used to support the security claims for many encrypted databases does not reflect the information about past queries available in any snapshot attack on an actual DBMS. We demonstrate how this gap between theory and reality causes encrypted databases to fail to achieve their “provable security” guarantees.
Functional encryption enables fine-grained access to encrypted data. In many scenarios, however, it is important to control not only what users are allowed to read (as provided by traditional functional encryption), but also what users are allowed to send. Recently, Damgård et al. (TCC 2016) introduced a new cryptographic framework called access control encryption (ACE) for restricting information flow within a system in terms of both what users can read and what users can write. While a number of access control encryption schemes exist, they either rely on strong assumptions such as indistinguishability obfuscation or are restricted to simple families of access control policies.

In this work, we give the first ACE scheme for arbitrary policies from standard assumptions. Our construction is generic and can be built from the combination of a digital signature scheme, a predicate encryption scheme, and a (single-key) functional encryption scheme that supports randomized functionalities. All of these primitives can be instantiated from standard assumptions in the plain model, and so, we obtain the first ACE scheme capable of supporting general policies from standard assumptions. One possible instantiation of our construction relies upon standard number-theoretic assumptions (namely, the DDH and RSA assumptions) and standard lattice assumptions (namely, LWE). Finally, we conclude by introducing several extensions to the ACE framework to support dynamic and more fine-grained access control policies.
Modular design via a tweakable blockcipher (TBC) offers efficient authenticated encryption (AE) schemes (with associated data) that call a blockcipher once for each data block (of associated data or plaintext). However, the existing efficient blockcipher-based TBCs are secure only up to the birthday bound, assuming the underlying keyed blockcipher is a secure strong pseudorandom permutation. Existing blockcipher-based AE schemes with beyond-birthday-bound (BBB) security are not efficient, that is, a blockcipher is called twice or more for each data block.

In this paper, we present a TBC, XKX, that offers efficient blockcipher-based AE schemes with BBB security when combined with efficient TBC-based AE schemes such as $\Theta$CB and $\mathbb{OTR}$. XKX is a combination of two TBCs, Minematsu's TBC and Liskov et al.'s TBC. In the XKX-based AE schemes, a nonce and a counter are taken as the tweak; a nonce-dependent blockcipher key is generated using a pseudo-random function $F$ (from Minematsu); the counter is input to an almost-XOR-universal hash function, and the hash value is XORed with the input and output blocks of a blockcipher keyed with the nonce-dependent key (from Liskov et al.). For each query to the AE scheme, the nonce-dependent key is generated once and then reused, so that the blockcipher is called only once for each data block. We prove that the security bounds of the XKX-based AE schemes become roughly $\ell^2 q/2^n$, where $q$ is the number of queries to the AE scheme, $n$ is the blockcipher size, and $\ell$ is the number of blockcipher calls in one AE evaluation. Regarding the function $F$, we present two blockcipher-based instantiations, the concatenation of blockcipher calls, $F^{(1)}$, and the xor of blockcipher calls, $F^{(2)}$, where $F^{(i)}$ calls the blockcipher $i+1$ times. By the PRF/PRP switch, the security bounds of the XKX-based AE schemes with $F^{(1)}$ become roughly $\ell^2 q/2^n + q^2/2^n$, so if $\ell \ll 2^{n/2}$ and $q \ll 2^{n/2}$, these achieve BBB security. By the xor construction, the security bounds of the XKX-based AE schemes with $F^{(2)}$ become roughly $\ell^2 q/2^n + q/2^n$, so if $\ell \ll 2^{n/2}$, these achieve BBB security.
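The tweaking mechanism described above can be sketched schematically. In the toy code below (not the paper's parameters or instantiations), the block size is 8 bits, a key-seeded random permutation table stands in for the blockcipher, SHA-256 stands in for the PRF $F$, and multiplication by a secret constant in GF(2^8) serves as the XOR-universal hash of the counter.

```python
import hashlib, random

# Schematic toy sketch of the XKX tweaking structure described above
# (8-bit block, toy stand-in components; not the paper's instantiations).

def gf256_mul(a, b):
    # multiplication in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1;
    # ctr -> L*ctr is an XOR-universal hash for a secret nonzero L
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return r

def toy_blockcipher(key: bytes, x: int) -> int:
    # stand-in for the keyed blockcipher: a key-seeded random permutation
    perm = list(range(256))
    random.Random(key).shuffle(perm)
    return perm[x]

def F(master_key: bytes, nonce: bytes) -> bytes:
    # stand-in PRF deriving the nonce-dependent blockcipher key; the paper's
    # F^{(1)} / F^{(2)} are themselves built from the blockcipher
    return hashlib.sha256(master_key + nonce).digest()

def xkx_encrypt_block(master_key, hash_key_L, nonce, counter, m):
    k_nonce = F(master_key, nonce)           # derived once per nonce, then reused
    delta = gf256_mul(hash_key_L, counter)   # hash of the counter (the tweak)
    return toy_blockcipher(k_nonce, m ^ delta) ^ delta

print(hex(xkx_encrypt_block(b"master-key", 0x57, b"nonce-1", 3, 0xA5)))
```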
ePrint Report Lelantos: A Blockchain-based Anonymous Physical Delivery System Riham AlTawy, Muhammad ElSheikh, Amr M. Youssef, Guang Gong
Real-world physical shopping offers customers the privilege of maintaining their privacy by giving them the option of paying with cash, thus providing no personal information such as their names and home addresses. On the contrary, electronic shopping mandates the use of all sorts of personally identifiable information for both billing and shipping purposes. Cryptocurrencies such as Bitcoin have stimulated growth in private billing by enabling pseudonymous payments. However, the anonymous delivery of the purchased physical goods is still an open research problem.

In this work, we present a blockchain-based physical delivery system called Lelantos that, within a realistic threat model, offers customer anonymity, fair exchange and merchant-customer unlinkability. Our system is inspired by onion routing techniques, which are used to achieve anonymous message delivery. Additionally, Lelantos relies on the decentralization and pseudonymity of the blockchain to enable pseudonymity that is hard to compromise, and on the distributed consensus mechanisms provided by smart contracts to enforce fair, irrefutable transactions between distrustful contractual parties.
We study the problem of secure two-party computation in the presence of a trusted setup. If there is an unconditionally UC-secure protocol for $f$ that makes use of calls to an ideal $g$, then we say that $f$ reduces to $g$ (and write $f \sqsubseteq g$). Some $g$ are complete in the sense that all functions reduce to $g$. However, almost nothing is known about the power of an incomplete $g$ in this setting. We shed light on this gap by showing a characterization of $f \sqsubseteq g$ for incomplete $g$.

Very roughly speaking, we show that $f$ reduces to $g$ if and only if it does so by the simplest possible protocol: one that makes a single call to ideal $g$ and uses no further communication. Furthermore, such simple protocols can be characterized by a natural combinatorial condition on $f$ and $g$.

Looking more closely, our characterization applies only to a very wide class of $f$, and only for protocols that are deterministic or logarithmic-round. However, we give concrete examples showing that both of these limitations are inherent to the characterization itself. Functions not covered by our characterization exhibit qualitatively different properties. Likewise, randomized, superlogarithmic-round protocols are qualitatively more powerful than deterministic or logarithmic-round ones.
ePrint Report Proving Resistance against Invariant Attacks: How to Choose the Round Constants Christof Beierle, Anne Canteaut, Gregor Leander, Yann Rotella
Many lightweight block ciphers apply a very simple key schedule in which the round keys only differ by addition of a round-specific constant. Generally, there is not much theory on how to choose appropriate constants. In fact, several of those schemes were recently broken using invariant attacks, i.e., invariant subspace or nonlinear invariant attacks. This work analyzes the resistance of such ciphers against invariant attacks and reveals the precise mathematical properties that render those attacks applicable. As a first practical consequence, we prove that some ciphers including Prince, Skinny-64 and Mantis7 are not vulnerable to invariant attacks. Also, we show that the invariant factors of the linear layer have a major impact on the resistance against those attacks. Most notably, if the number of invariant factors of the linear layer is small (e.g., if its minimal polynomial has a high degree), we can easily find round constants which guarantee the resistance to all types of invariant attacks, independently of the choice of the S-box layer. We also explain how to construct optimal round constants for a given, but arbitrary, linear layer.
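Since the number of invariant factors of the linear layer is the quantity that matters here, a small sketch may help: the code below computes the degree of the minimal polynomial of a binary matrix over GF(2) (the degree of the largest invariant factor), comparing a block-repeated toy layer, which necessarily has many invariant factors, with an arbitrary dense one. The matrices are illustrative examples, not the linear layers of the ciphers analyzed in the paper.

```python
import random

# Toy sketch: degree of the minimal polynomial of a linear layer over GF(2).
# Rows are n-bit integers (bit j of row i = entry (i, j)).  Since the largest
# invariant factor is the minimal polynomial and the invariant-factor degrees
# sum to n, a small minimal-polynomial degree forces many invariant factors.
# The matrices below are arbitrary examples, not actual cipher linear layers.

def mat_mul(A, B, n):
    C = []
    for i in range(n):
        row = 0
        for k in range(n):
            if (A[i] >> k) & 1:
                row ^= B[k]
        C.append(row)
    return C

def flatten(M, n):
    v = 0
    for i, row in enumerate(M):
        v |= row << (i * n)
    return v

def gf2_rank(vectors, nbits):
    pivot, rank = [0] * nbits, 0
    for v in vectors:
        for i in reversed(range(nbits)):
            if not (v >> i) & 1:
                continue
            if pivot[i]:
                v ^= pivot[i]
            else:
                pivot[i] = v
                rank += 1
                break
    return rank

def min_poly_degree(M, n):
    flats, P, d = [], [1 << i for i in range(n)], 0    # P starts as the identity
    while True:
        f = flatten(P, n)
        if gf2_rank(flats + [f], n * n) == len(flats):
            return d              # M^d is a GF(2)-combination of lower powers
        flats.append(f)
        P = mat_mul(P, M, n)
        d += 1

n = 8
rng = random.Random(1)
A4 = [rng.randrange(1, 16) for _ in range(4)]
block_diag = A4 + [row << 4 for row in A4]            # same 4x4 block twice
dense = [rng.randrange(1, 256) for _ in range(n)]     # arbitrary dense layer
print("block-diagonal layer: min-poly degree", min_poly_degree(block_diag, n))
print("dense layer:          min-poly degree", min_poly_degree(dense, n))
```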
ePrint Report Leakage-Resilient Tweakable Encryption from One-Way Functions Suvradip Chakraborty, Chester Rebeiro, Debdeep Mukhopadhyay, C. Pandu Rangan
In this paper, we initiate the study of leakage-resilient tweakable encryption schemes in the relative key-leakage model, where the adversary can obtain (arbitrary) partial information about the secret key. We also focus on the minimal and generic assumptions needed to construct such a primitive. Interestingly, we show provably secure constructions of leakage-resilient (LR) tweakable encryption based on the sole assumption that one-way functions (OWF) exist, via some interesting intermediate generic connections. A central tool used in our construction of LR-tweakable encryption is the notion of a symmetric-key tweakable weak hash proof system, which we introduce. This can be seen as a generalization of the symmetric-key weak hash proof framework of Hazay et al. (Eurocrypt '13). Along the way, we also introduce a new primitive called tweakable weak pseudo-random functions (t-wPRF) and show how to generically construct it from weak PRFs. We then construct an LR version of t-wPRF and use it to construct LR-tweakable encryption.
26 May 2017
Understanding how hash functions can be used in a sound manner within cryptographic protocols, as well as how they can be constructed in a sound manner from compression functions, are two important problems in cryptography with a long history. Two approaches towards solving the first problem are the random oracle model (ROM) methodology and the UCE framework, and an approach to solving the second problem is the indifferentiability framework.

This paper revisits the two problems and the above approaches and makes three contributions. First, indifferentiability, which comes with a composition theorem, is generalized to context-restricted indifferentiability (CRI) to capture settings that compose only in a restricted context. Second, we introduce a new composable notion based on CRI, called RO-CRI, to capture the security of hash functions. We then prove that a non-interactive version of RO-CRI is equivalent to the UCE framework, and therefore RO-CRI leads to natural interactive generalizations of existing UCE families. Two generalizations of split UCE-security, called strong-split CRI-security and repeated-split CRI-security, are introduced. Third, new, more fine-grained soundness properties for hash function constructions are proposed which go beyond collision-resistance and indifferentiability guarantees. As a concrete result, a new soundness property of the Merkle-Damgård construction is shown: if the compression function is strong-split CRI-secure, then the overall hash function is split-secure. The proof makes use of a new lemma on min-entropy splitting which may be of independent interest.
ePrint Report Transitioning to a Quantum-Resistant Public Key Infrastructure Nina Bindel, Udyani Herath, Matthew McKague, Douglas Stebila
To ensure uninterrupted cryptographic security, it is important to begin planning the transition to post-quantum cryptography. In addition to creating post-quantum primitives, we must also plan how to adapt the cryptographic infrastructure for the transition, especially in scenarios such as public key infrastructures (PKIs) with many participants. The use of hybrids (multiple algorithms in parallel) will likely play a role during the transition for two reasons: to "hedge our bets" when the security of newer primitives is not yet certain but the security of older primitives is already in question; and to achieve security and functionality both for post-quantum-aware software and, in a backwards-compatible way, for not-yet-upgraded software.

In this paper, we investigate the use of hybrid digital signature schemes. We consider several methods for combining signature schemes, and give conditions on when the resulting hybrid signature scheme is unforgeable. Additionally, we address a new notion concerning the inability of an adversary to separate a hybrid signature into its components. For both unforgeability and non-separability, we give a novel security hierarchy based on how quantum the attack is. We then turn to three real-world standards involving digital signatures and PKI: certificates (X.509), secure channels (TLS), and email (S/MIME). We identify possible approaches to supporting hybrid signatures in these standards while retaining backwards compatibility, which we test in popular cryptographic libraries and implementations, noting especially the inability of some software to handle larger certificates.
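As a point of reference, the simplest combiner (concatenation: sign under both component schemes and accept only if both signatures verify) can be sketched as follows. Ed25519 and RSA-PSS from the Python `cryptography` package are used purely as stand-ins for the two components, since a real hybrid would pair a classical scheme with a post-quantum one; the class and parameter choices are illustrative assumptions, not the paper's constructions.

```python
# Minimal sketch of a concatenation combiner: sign under two component
# schemes and accept only if both signatures verify.  Ed25519 and RSA-PSS
# stand in for the classical and post-quantum components.
from cryptography.hazmat.primitives.asymmetric import ed25519, rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

class HybridSigner:
    def __init__(self):
        self.sk1 = ed25519.Ed25519PrivateKey.generate()
        self.sk2 = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    def sign(self, msg: bytes):
        sig1 = self.sk1.sign(msg)
        sig2 = self.sk2.sign(msg, PSS, hashes.SHA256())
        return sig1, sig2

    def verify(self, msg: bytes, sig) -> bool:
        sig1, sig2 = sig
        try:
            self.sk1.public_key().verify(sig1, msg)
            self.sk2.public_key().verify(sig2, msg, PSS, hashes.SHA256())
        except InvalidSignature:
            return False
        return True

signer = HybridSigner()
tbs = b"certificate to-be-signed bytes"
print(signer.verify(tbs, signer.sign(tbs)))
```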
ePrint Report Security Analysis of Arbiter PUF and Its Lightweight Compositions Under Predictability Test Phuong Ha Nguyen, Durga Prasad Sahoo, Rajat Subhra Chakraborty, Debdeep Mukhopadhyay
Unpredictability is an important security property of Physically Unclonable Functions (PUFs) in the context of statistical attacks, where the correlation between challenge-response pairs is explicitly exploited. In the existing literature on PUFs, the Hamming Distance test, denoted by $\mathrm{HDT}(t)$, was proposed to evaluate the unpredictability of PUFs; it is a simplified case of the Propagation Criterion test $\mathrm{PC}(t)$. The objective of these testing schemes is to estimate the output transition probability when $t$ or fewer bits are flipped, and ideally, this probability value should be 0.5. In this work, we show that the aforementioned two testing schemes are not enough to ensure the unpredictability of a PUF design. We propose a new test, denoted $\mathrm{HDT}(\mathbf{e},t)$. This testing scheme is a fine-tuned version of the previous schemes, as it considers the flipping bit pattern vector $\mathbf{e}$ along with the parameter $t$. As a contribution, we provide a comprehensive discussion and analytic interpretation of the $\mathrm{HDT}(t)$, $\mathrm{PC}(t)$ and $\mathrm{HDT}(\mathbf{e},t)$ test schemes for the Arbiter PUF (APUF), XOR PUF and Lightweight Secure PUF (LSPUF). Our analysis establishes that the $\mathrm{HDT}(\mathbf{e},t)$ test is more general than the $\mathrm{HDT}(t)$ and $\mathrm{PC}(t)$ tests. In addition, we demonstrate a few scenarios where the adversary can exploit the information obtained from the analysis of the $\mathrm{HDT}(\mathbf{e},t)$ properties of APUF, XOR PUF and LSPUF to develop statistical attacks on them, if the ideal value of $\mathrm{HDT}(\mathbf{e},t)=0.5$ is not achieved for a given PUF. We validate our theoretical observations using simulated and FPGA-implemented APUF, XOR PUF and LSPUF designs.
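To illustrate what the test measures, the sketch below estimates the output transition probability of a simulated Arbiter PUF under the standard additive linear delay model (Gaussian stage weights; not the FPGA designs evaluated in the paper) for two flip patterns of the same Hamming weight; $\mathrm{HDT}(t)$ would treat these patterns alike, while $\mathrm{HDT}(\mathbf{e},t)$ distinguishes them.

```python
import random

# Simulation sketch: output transition probability of an Arbiter PUF under
# the standard additive linear delay model (Gaussian stage weights), for two
# single-bit flip patterns.  An idealized model, not an FPGA implementation.

N = 64
rng = random.Random(2017)
weights = [rng.gauss(0, 1) for _ in range(N + 1)]

def phi(challenge):
    # parity feature vector: phi_i = prod_{j >= i} (1 - 2*c_j), plus a bias term
    feat, acc = [], 1
    for c in reversed(challenge):
        acc *= 1 - 2 * c
        feat.append(acc)
    feat.reverse()
    return feat + [1]

def response(challenge):
    return 1 if sum(w * f for w, f in zip(weights, phi(challenge))) > 0 else 0

def transition_prob(flip_positions, trials=20000):
    flips = 0
    for _ in range(trials):
        c = [rng.randrange(2) for _ in range(N)]
        c2 = list(c)
        for i in flip_positions:
            c2[i] ^= 1
        flips += response(c) != response(c2)
    return flips / trials

# Same Hamming weight t = 1, different flip patterns e:
print("flip challenge bit 0     :", transition_prob([0]))
print("flip challenge bit N - 1 :", transition_prob([N - 1]))
```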
25 May 2017
ePrint Report Fully Homomorphic Encryption Using Multivariate Polynomials Matthew Tamayo-Rios, Jean-Charles Faugère, Ludovic Perret, Peng Hui How, Robin Zhang
Efficient and secure third-party computation has many practical applications in cloud computing. We develop a new approach for building fully homomorphic encryption (FHE) schemes, by starting with the intuition of using algebraic descriptions of the encryption and decryption functions to construct a functionally complete set of homomorphic boolean operators. We use this approach to design a compact, efficient asymmetric cryptosystem that supports secure third-party evaluation of arbitrary boolean functions. In the process, we introduce a new hard problem, a more difficult variant of the classical Isomorphism of Polynomials (IP), that we call Obfuscated-IP.
For conventional secret sharing, if cheaters can submit possibly forged shares after observing the shares of the honest users in the reconstruction phase, they can disturb the protocol while reconstructing the true secret themselves. To overcome this problem, secret sharing schemes with cheater-identification properties have been proposed. Existing protocols for cheater-identifiable secret sharing assumed non-rushing cheaters or an honest majority. In this paper, we remove both conditions simultaneously, and give a universal construction from any secret sharing scheme. To this end, we propose the concepts of "individual identification" and "agreed identification".
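For context, the following toy sketch of plain $(k, n)$ Shamir secret sharing over a small prime field (parameters chosen arbitrarily here) shows the baseline setting: it offers no cheater identification, so a participant submitting a forged share in the reconstruction phase silently shifts the reconstructed value, which is the behaviour the schemes above are designed to detect.

```python
import random

# Toy sketch of plain (k, n) Shamir secret sharing over a small prime field.
# It offers no cheater identification: a forged share simply shifts the
# interpolated "secret", which is exactly the problem addressed above.

P = 2**61 - 1                     # a prime modulus (toy choice)
rng = random.Random(0)

def share(secret, k, n):
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):        # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, k=3, n=5)
print(reconstruct(shares[:3]))                    # correct: 123456789
forged = [shares[0], shares[1], (shares[2][0], (shares[2][1] + 1) % P)]
print(reconstruct(forged))                        # silently wrong value
```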
Proxy re-encryption (PRE) and proxy re-signature (PRS) were introduced by Blaze, Bleumer and Strauss [Eurocrypt '98]. PRE allows a semi-trusted proxy to transform a ciphertext encrypted under one key into an encryption of the same plaintext under another key, without revealing the underlying plaintext, while PRS allows a semi-trusted proxy to transform Alice's signature on a message into Bob's signature on the same message, without being able to produce new valid signatures on new messages for either Alice or Bob. Since their introduction, many interesting applications have been explored, and many constructions in various settings have been proposed.

Recently, for PRE-related progress, Canetti and Hohenberger [CCS '07] defined a stronger notion, CCA security, and constructed a bi-directional PRE scheme. Later on, several works considered CCA-secure PRE based on bilinear group assumptions. Very recently, Kirshanova [PKC '14] proposed the first single-hop CCA1-secure PRE scheme based on the learning with errors (LWE) assumption. For PRS-related progress, Ateniese and Hohenberger [CCS '05] formalized this primitive and provided efficient constructions in the random oracle model. At CCS 2008, Libert and Vergnaud presented the first multi-hop uni-directional proxy re-signature scheme in the standard model, using assumptions in bilinear groups.

In this work, we first point out a subtle but serious mistake in the security proof of the work by Kirshanova. This reopens the direction of lattice-based CCA1-secure constructions, even in the single-hop setting. Then we construct a single-hop PRE scheme that is proven secure in our new tag-based CCA-PRE model. Next, we give the first multi-hop PRE construction. Lastly, we construct the first PRS scheme from lattices, proved secure in our proposed unified security model.
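To make the primitive concrete, here is a toy sketch of the classic Blaze--Bleumer--Strauss ElGamal-based bidirectional PRE recalled above (not the lattice-based schemes of this work). The group parameters are tiny and insecure, chosen only so the example runs.

```python
import random

# Toy sketch of the classic Blaze--Bleumer--Strauss ElGamal-based
# bidirectional PRE recalled above, not the lattice-based schemes of this
# work.  Group parameters are tiny and insecure, for illustration only.

p, q, g = 23, 11, 4                    # g generates the order-q subgroup of Z_p^*
rng = random.Random(7)

def keygen():
    sk = rng.randrange(1, q)
    return sk, pow(g, sk, p)

def encrypt(pk, m):                    # m must lie in the subgroup generated by g
    k = rng.randrange(1, q)
    return m * pow(g, k, p) % p, pow(pk, k, p)        # (m*g^k, pk^k)

def rekey(sk_a, sk_b):                 # re-encryption key rk_{a->b} = b / a mod q
    return sk_b * pow(sk_a, -1, q) % q

def reencrypt(rk, ct):                 # (m*g^k, g^{a k}) -> (m*g^k, g^{b k})
    c1, c2 = ct
    return c1, pow(c2, rk, p)

def decrypt(sk, ct):
    c1, c2 = ct
    g_to_k = pow(c2, pow(sk, -1, q), p)               # recover g^k
    return c1 * pow(g_to_k, -1, p) % p

sk_a, pk_a = keygen()
sk_b, pk_b = keygen()
m = pow(g, 5, p)                       # encode the message as a subgroup element
ct_a = encrypt(pk_a, m)
ct_b = reencrypt(rekey(sk_a, sk_b), ct_a)
print(decrypt(sk_a, ct_a) == m, decrypt(sk_b, ct_b) == m)   # True True
```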
In this work, we design a new lattice encoding structure for vectors. Our encoding can be used to achieve a packed FHE scheme that allows some SIMD operations and can be used to improve all the prior IBE schemes and signatures in the series. In particular, with respect to the FHE setting, our method improves over the prior packed GSW structure of Hiromasa et al. (PKC '15), as we do not rely on a circular assumption as required in their work. Moreover, we can use the packing and unpacking method to extract each single element, so that the homomorphic operations support element-wise as well as cross-element computation. In the IBE scenario, we substantially improve over previous lattice-based constructions supporting $O(\lambda)$-bit identities, such as Yamada (Eurocrypt '16), Katsumata and Yamada (Asiacrypt '16) and Yamada (ePrint '17), by shrinking the master public key to three matrices under the standard Learning With Errors assumption. Additionally, our techniques from IBE can be adapted to construct a compact digital signature scheme, which achieves existential unforgeability under the standard Short Integer Solution (SIS) assumption with small polynomial parameters.
ePrint Report Algorand: Scaling Byzantine Agreements for Cryptocurrencies Yossi Gilad, Rotem Hemo, Silvio Micali, Georgios Vlachos, Nickolai Zeldovich
Algorand is a new cryptocurrency system that can confirm transactions with latency on the order of a minute while scaling to many users. Algorand ensures that users never have divergent views of confirmed transactions, even if some of the users are malicious and the network is partitioned. In contrast, existing cryptocurrencies allow for temporary forks and therefore require a long time, on the order of an hour, to confirm transactions with high confidence.

Algorand uses a new Byzantine Agreement (BA) protocol to reach consensus among users on the next set of transactions. To scale the consensus to many users, Algorand uses a novel mechanism based on Verifiable Random Functions that allows users to privately check whether they are selected to participate in the BA to agree on the next set of transactions, and to include a proof of their selection in their network messages. In Algorand's BA protocol, users do not keep any private state except for their private keys, which allows Algorand to replace participants immediately after they send a message. This mitigates targeted attacks on chosen participants after their identity is revealed.
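The private self-selection step can be sketched at a high level. In the toy code below, HMAC-SHA256 stands in for the VRF (unlike a real VRF, its output is only verifiable by the key holder), and a simple per-user threshold proportional to stake replaces Algorand's actual binomial sortition; all names and parameters are illustrative.

```python
import hashlib, hmac

# Schematic sketch of private self-selection.  HMAC-SHA256 stands in for the
# VRF, and a simple threshold proportional to stake replaces Algorand's
# actual binomial sortition; all names and parameters are illustrative.

def sortition(secret_key: bytes, seed: bytes, role: bytes,
              stake: int, total_stake: int, expected_committee: int) -> bool:
    out = hmac.new(secret_key, seed + role, hashlib.sha256).digest()
    x = int.from_bytes(out, "big") / float(1 << 256)      # pseudorandom in [0, 1)
    target = expected_committee * stake / total_stake     # expected selections
    return x < min(target, 1.0)

seed = hashlib.sha256(b"round-4711-seed").digest()
for name, key, stake in [("alice", b"ka", 50), ("bob", b"kb", 30), ("carol", b"kc", 5)]:
    picked = sortition(key, seed, b"committee", stake,
                       total_stake=1000, expected_committee=20)
    print(name, "selected" if picked else "not selected")
```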

We implement Algorand and evaluate its performance on 1,000 EC2 virtual machines, simulating up to 500,000 users. Experimental results show that Algorand confirms transactions in under a minute, achieves 30$\times$ Bitcoin's throughput, and incurs almost no penalty for scaling to more users.
We take a critical look at established security definitions for predicate encryption (PE) with public index under chosen-plaintext attack (CPA) and under chosen-ciphertext attack (CCA). In contrast to conventional public-key encryption (PKE), security definitions for PE have to deal with user collusion, which is modeled by an additional key generation oracle. We identify three different formalizations of key handling in the literature that are implicitly assumed to lead to the same security notion. Contrary to this assumption, we prove that the corresponding models result in two different security notions under CPA and three different security notions under CCA. Similarly to the recent results for PKE and conventional key-encapsulation mechanisms (KEMs) (Journal of Cryptology, 2015), we also analyze subtleties in security definitions for PE and predicate key-encapsulation mechanisms (P-KEMs) regarding the so-called "no-challenge-decryption" condition. While the results for PE and PKE are similar, the results for P-KEM significantly differ from the corresponding results for conventional KEM. Our analysis is based on appropriate definitions of semantic security and indistinguishability of encryptions for PE under different attack scenarios. These definitions complement related security definitions for identity-based encryption and functional encryption. As a result of our work, we suggest security definitions for PE and P-KEM under different attack scenarios.
23 May 2017
Machine learning models hosted in a cloud service are increasingly popular but raise privacy concerns: clients sending prediction requests to the service need to disclose potentially sensitive information. In this paper, we explore the problem of privacy-preserving predictions: after each prediction, the server learns nothing about clients' input and clients learn nothing about the model.

We present MiniONN, the first approach for transforming an existing neural network to an oblivious neural network supporting privacy-preserving predictions with reasonable efficiency. Unlike prior work, MiniONN requires no change to how models are trained. To this end, we design oblivious protocols for commonly used operations in neural network prediction models. We show that MiniONN outperforms existing work in terms of response latency and message sizes. We demonstrate the wide applicability of MiniONN by transforming several typical neural network models trained from standard datasets.
The goal of leakage-resilient cryptography is to construct cryptographic algorithms that are secure even if the adversary obtains side-channel information from the real-world implementation of these algorithms. Most of the prior works on leakage-resilient cryptography consider leakage models where the adversary has access to the leakage oracle before the challenge ciphertext is generated (before-the-fact leakage). In this model, there are generic compilers that transform any leakage-resilient CPA-secure public key encryption (PKE) scheme into its CCA-2 variant using Naor-Yung type transformations. In this work, we give an efficient generic compiler for transforming a leakage-resilient CPA-secure PKE into a leakage-resilient CCA-2 secure PKE in the after-the-fact split-state (bounded) memory leakage model, where the adversary has access to the leakage oracle even after the challenge phase. The salient feature of our transformation is that the leakage rate (defined as the ratio of the amount of leakage to the size of the secret key) of the transformed after-the-fact CCA-2 secure PKE is the same as the leakage rate of the underlying after-the-fact CPA-secure PKE, which is $1-o(1)$. We then present another generic compiler for transforming an after-the-fact leakage-resilient CCA-2 secure PKE into a leakage-resilient authenticated key exchange (AKE) protocol in the bounded after-the-fact leakage-resilient eCK (BAFL-eCK) model proposed by Alawatugoda et al. (ASIACCS '14). To the best of our knowledge, this gives the first compiler that transforms any leakage-resilient CCA-2 secure PKE into an AKE protocol in the leakage variant of the eCK model.
An emerging direction for authenticating people is the adoption of biometric authentication systems. Biometric credentials are becoming increasingly popular as a means of authenticating people due to the wide range of advantages that they provide with respect to classical authentication methods (e.g., password-based authentication). The most characteristic feature of this authentication method is the naturally strong bond between a user and her biometric credentials. This very same advantageous property, however, raises serious security and privacy concerns in case the biometric trait gets compromised. In this article, we present the most challenging issues that need to be taken into consideration when designing secure and privacy-preserving biometric authentication protocols. More precisely, we describe the main threats against privacy-preserving biometric authentication systems and give directions on possible countermeasures in order to design secure and privacy-preserving biometric authentication protocols.
Many block ciphers use permutations defined over the finite field $\mathbb{F}_{2^{2k}}$ with low differential uniformity, high nonlinearity, and high algebraic degree to provide confusion. Due to the lack of knowledge about the existence of almost perfect nonlinear (APN) permutations over $\mathbb{F}_{2^{2k}}$, which have the lowest possible differential uniformity, when $k>3$, constructions of differentially 4-uniform permutations are usually considered. However, it is also very difficult to construct such permutations together with high nonlinearity; there are very few known families of such functions which can have the best known nonlinearity and a high algebraic degree. At Crypto '16, Perrin et al. introduced a structure named the butterfly, which leads to permutations over $\mathbb{F}_{2^{2k}}$ with differential uniformity at most 4 and very high algebraic degree when $k$ is odd. It was posed as an open problem in Perrin et al.'s paper, and solved by Canteaut et al., that the nonlinearity is equal to $2^{2k-1}-2^k$. In this paper, we extend Perrin et al.'s work and study the functions constructed from butterflies with exponent $e=2^i+1$. It turns out that these functions over $\mathbb{F}_{2^{2k}}$ with odd $k$ have differential uniformity at most 4 and algebraic degree $k+1$. Moreover, we prove that for any integer $i$ and odd $k$ such that $\gcd(i,k)=1$, the nonlinearity equality holds, which also gives another solution to the open problem proposed by Perrin et al. This greatly expands the list of differentially 4-uniform permutations with good nonlinearity and hence provides more candidates for the design of block ciphers.
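As a small numerical check of the kind of property discussed above, the sketch below exhaustively computes the differential uniformity of the closed (bivariate) butterfly $V_\alpha(x,y) = (R(x,y), R(y,x))$ with $R(x,y) = (x \oplus \alpha y)^3 \oplus y^3$ over $\mathbb{F}_{2^3} \times \mathbb{F}_{2^3}$, i.e. exponent $e = 2^1+1$ with the coefficient of $y^e$ set to 1; the field size, exponent and range of $\alpha$ are illustrative choices for the demo, not the paper's general setting.

```python
# Toy check: differential uniformity of the closed butterfly over GF(2^3)^2
# with exponent e = 3.  GF(2^3) is built with the irreducible polynomial
# x^3 + x + 1; alpha ranges over the non-degenerate field elements.

def gf8_mul(a, b):
    r = 0
    for _ in range(3):
        if b & 1:
            r ^= a
        carry = a & 0x4
        a = (a << 1) & 0x7
        if carry:
            a ^= 0x3          # reduce x^3 -> x + 1
        b >>= 1
    return r

def cube(x):
    return gf8_mul(gf8_mul(x, x), x)

def diff_uniformity(F):
    best = 0
    for da in range(8):
        for db in range(8):
            if da == db == 0:
                continue
            counts = {}
            for x in range(8):
                for y in range(8):
                    u0, v0 = F(x, y)
                    u1, v1 = F(x ^ da, y ^ db)
                    d = (u0 ^ u1, v0 ^ v1)
                    counts[d] = counts.get(d, 0) + 1
            best = max(best, max(counts.values()))
    return best

for alpha in range(2, 8):          # alpha = 0, 1 are degenerate cases
    def V(x, y, a=alpha):
        R = lambda s, t: cube(s ^ gf8_mul(a, t)) ^ cube(t)
        return R(x, y), R(y, x)
    print("alpha =", alpha, " differential uniformity =", diff_uniformity(V))
```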
