International Association for Cryptologic Research

IACR News Central

Here you can see all recent updates to the IACR webpage.

18 June 2018
ePrint Report Privacy Preserving Verifiable Key Directories Melissa Chase, Apoorvaa Deshpande, Esha Ghosh
In recent years, some of the most popular online chat services such as iMessage and WhatsApp have deployed end-to-end encryption to mitigate some of the privacy risks to the transmitted messages. But facilitating end-to-end encryption requires a Public Key Infrastructure (PKI), so these services still require the service provider to maintain a centralized directory of public keys. A downside of this design is that it places a lot of trust in the service provider; a malicious or compromised service provider can still intercept and read users' communication simply by replacing a user's public key with one for which it knows the corresponding secret. A recent work by Melara et al. builds a system called CONIKS where the service provider is required to prove that it is returning a consistent key for each user. This allows each user to monitor his own key and reduces some of the risks of placing a lot of trust in the service provider. New systems [EthIKS, Catena] are already being built on CONIKS. While these systems are extremely relevant in practice, their security and privacy guarantees are still based on ad-hoc analysis rather than on a rigorous foundation. In addition, without a modular treatment, improving on the efficiency of these systems is challenging. In this work, we formalize the security and privacy requirements of a verifiable key service for end-to-end communication in terms of a primitive we call Verifiable Key Directories (VKD). Our abstraction captures the functionality of all three existing systems: CONIKS, EthIKS and Catena. We quantify the leakage from these systems, giving us a better understanding of their privacy in concrete terms. Finally, we give a VKD construction (with a concrete efficiency analysis) which improves significantly on the existing ones in terms of privacy and efficiency. Our design builds modularly from another primitive that we define, append-only zero-knowledge sets (aZKS), and from append-only strong accumulators. By providing modular constructions, we allow for the independent study of each of these building blocks: an improvement in any of them would directly result in an improved VKD construction. Our definition of aZKS generalizes the definition of zero-knowledge sets to support updates, which is a secondary contribution of this work and can be of independent interest.
ePrint Report Continuously Non-Malleable Codes with Split-State Refresh Antonio Faonio, Jesper Buus Nielsen, Mark Simkin, Daniele Venturi
Non-malleable codes for the split-state model allow one to encode a message into two parts, such that arbitrary independent tampering on each part, and subsequent decoding of the corresponding modified codeword, yields either the original message or a completely unrelated value. Continuously non-malleable codes further tolerate an unbounded (polynomial) number of tampering attempts, until a decoding error happens. The drawback is that, after an error happens, the system must self-destruct and stop working, otherwise generic attacks become possible.
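Schematically, the split-state tampering experiment described above can be written as follows (notation ours, not the paper's):

\[ (X_0, X_1) \leftarrow \mathsf{Enc}(m), \qquad \tilde{m} \leftarrow \mathsf{Dec}\bigl(f(X_0),\, g(X_1)\bigr), \]

where the adversary chooses the tampering functions $f$ and $g$ for the two halves independently, and non-malleability requires that $\tilde{m}$ is either $m$ itself or a value unrelated to $m$. In the continuous setting the adversary may repeat this experiment with fresh choices of $(f, g)$ until decoding fails.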

In this paper we propose a solution to this limitation by leveraging a split-state refreshing procedure. Namely, whenever a decoding error happens, the two parts of an encoding can be locally refreshed (i.e., without any interaction), which makes it possible to avoid the self-destruct mechanism in some applications. Additionally, the refreshing procedure can be exploited to obtain security against continual leakage attacks. We give an abstract framework for building refreshable continuously non-malleable codes in the common reference string model, and provide a concrete instantiation based on the external Diffie-Hellman assumption.

Finally, we explore applications in which our notion turns out to be essential. The first application is a signature scheme tolerating an arbitrary polynomial number of split-state tampering attempts, without requiring a self-destruct capability, in a model where the memory is refreshed only after an invalid output is produced. This circumvents an impossibility result from a recent work by Fujisaki and Xagawa (Asiacrypt 2016). The second application is a compiler for tamper-resilient read-only RAM programs. In comparison to other tamper-resilient RAM compilers, ours has several advantages, including the fact that, in some cases, it does not rely on the self-destruct feature.
In this paper, we propose a new type of non-recursive Mastrovito multiplier for $GF(2^m)$ using an $n$-term Karatsuba algorithm (KA), where $GF(2^m)$ is defined by an irreducible trinomial $x^m+x^k+1$ with $m=nk$. We show that this type of trinomial, combined with the $n$-term KA, can fully exploit the spatial correlation of entries in the related Mastrovito product matrices and leads to a low-complexity architecture. The optimal parameter $n$ is further studied. As the main contribution of this study, the lower bound of the space complexity of our proposal is about $O(\frac{m^2}{2}+m^{3/2})$. Meanwhile, the time complexity matches the best Karatsuba multiplier known to date. To the best of our knowledge, this is the first time a Karatsuba-based multiplier has reached such a space complexity bound while maintaining a relatively low time delay.
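To illustrate the Karatsuba splitting idea that the construction builds on, here is a minimal software sketch (ours; the paper's contribution is a hardware architecture exploiting the trinomial modulus, which this toy code does not model). Polynomials over GF(2) are packed into Python integers, bit i holding the coefficient of x^i; all names and parameters are ours.

# Schoolbook vs. 2-term Karatsuba multiplication of polynomials over GF(2).
# Illustration only; reduction modulo x^m + x^k + 1 would follow afterwards.

def gf2_mul_schoolbook(a: int, b: int) -> int:
    """Carry-less (GF(2)) polynomial multiplication."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def gf2_mul_karatsuba(a: int, b: int, bits: int = 64) -> int:
    """2-term Karatsuba: split each operand into high/low halves and
    use 3 half-size multiplications instead of 4."""
    if bits <= 8:
        return gf2_mul_schoolbook(a, b)
    half = bits // 2
    mask = (1 << half) - 1
    a_lo, a_hi = a & mask, a >> half
    b_lo, b_hi = b & mask, b >> half
    lo = gf2_mul_karatsuba(a_lo, b_lo, half)
    hi = gf2_mul_karatsuba(a_hi, b_hi, half)
    mid = gf2_mul_karatsuba(a_lo ^ a_hi, b_lo ^ b_hi, half) ^ lo ^ hi
    return (hi << (2 * half)) ^ (mid << half) ^ lo

assert gf2_mul_karatsuba(0b1011, 0b110) == gf2_mul_schoolbook(0b1011, 0b110)

An $n$-term variant splits each operand into $n$ words instead of two, which is where the $m=nk$ structure of the trinomial comes into play.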
ePrint Report Attack on Kayawood Protocol: Uncloaking Private Keys Matvei Kotov, Anton Menshov, Alexander Ushakov
We analyze security properties of a two-party key-agreement protocol recently proposed by I. Anshel, D. Atkins, D. Goldfeld, and P. Gunnels, called Kayawood protocol. At the core of the protocol is an action (called E-multiplication) of a braid group on some finite set. The protocol assigns a secret element of a braid group to each party (private key). To disguise those elements, the protocol uses a so-called cloaking method that multiplies private keys on the left and on the right by specially designed elements (stabilizers for E-multiplication).

We present a heuristic algorithm that allows a passive eavesdropper to recover Alice's private key by removing the cloaking elements. Our attack has a 100% success rate on randomly generated instances of the protocol, both for the originally proposed parameter values and for recent proposals that suggest inserting many cloaking elements at random positions of the private key. Our implementation of the attack is available on GitHub.
ePrint Report Actively Secure OT-Extension from q-ary Linear Codes Ignacio Cascudo, René Bødker Christensen, Jaron Skovsted Gundersen
We consider recent constructions of $1$-out-of-$N$ OT-extension from Kolesnikov and Kumaresan (CRYPTO 2013) and from Orrú et al. (CT-RSA 2017), based on binary error-correcting codes. We generalize their constructions so that $q$-ary codes can be used for any prime power $q$. This allows us to reduce the number of base $1$-out-of-$2$ OTs that are needed to instantiate the construction for any value of $N$, at the cost of increasing the complexity of the remaining part of the protocol. We analyze these trade-offs in some concrete cases.
ePrint Report On the Universally Composable Security of OpenStack Kyle Hogan, Hoda Maleki, Reza Rahaeimehr, Ran Canetti, Marten van Dijk, Jason Hennessey, Mayank Varia, Haibin Zhang
OpenStack is the prevalent open-source, non-proprietary package for managing cloud services and data centers. It is highly complex and consists of multiple inter-related components which are developed by separate, loosely coordinated groups. We initiate an effort to provide a rigorous and holistic security analysis of OpenStack. Our analysis has the following key features:

- It is user-centric: It stresses the security guarantees given to users of the system, in terms of privacy, correctness, and timeliness of the services.

- It provides defense in depth: It considers the security of OpenStack even when some of the components are compromised. This departs from the traditional design approach of OpenStack, which assumes that all services are fully trusted.

- It is modular: It formulates security properties for individual components and uses them to assert security properties of the overall system.

We base our modeling and security analysis on the universally composable (UC) security framework, which has so far been used mainly for analyzing the security of cryptographic protocols. Indeed, demonstrating how the UC framework can be used to argue about security-sensitive systems which are mostly non-cryptographic in nature is another main contribution of this work.

Our analysis covers only a number of core components of OpenStack. Still, it uncovers some basic and important security trade-offs in the design. It also naturally paves the way to a more comprehensive analysis of OpenStack.
ePrint Report Verifiable Delay Functions Dan Boneh, Joseph Bonneau, Benedikt Bünz, Ben Fisch
We study the problem of building a verifiable delay function (VDF). A VDF requires a specified number of sequential steps to evaluate, yet produces a unique output that can be efficiently and publicly verified. VDFs have many applications in decentralized systems, including public randomness beacons, leader election in consensus protocols, and proofs of replication. We formalize the requirements for VDFs and present new candidate constructions that are the first to achieve an exponential gap between evaluation and verification time.
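For orientation, the syntax this description suggests can be sketched as follows (notation ours; the paper's formal definitions are more precise):

\[ \mathsf{Setup}(1^{\lambda}, T) \to pp, \qquad \mathsf{Eval}(pp, x) \to (y, \pi), \qquad \mathsf{Verify}(pp, x, y, \pi) \to \{0, 1\}, \]

where evaluation requires $T$ sequential steps even for an attacker with massive parallelism, verification is much cheaper than evaluation, and for each input $x$ essentially only one output $y$ is accepted.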
In this paper we propose an ultra-lightweight cipher, GRANULE. It is based on a Feistel network and encrypts 64 bits of data with an 80- or 128-bit key. GRANULE needs very little memory compared to existing lightweight ciphers: 1288 GEs for the 80-bit and 1577 GEs for the 128-bit key size. It also shows good resistance against linear and differential cryptanalysis. GRANULE has a very small footprint area and provides a robust, secure design that thwarts attacks such as biclique, zero-correlation, meet-in-the-middle, key-schedule and key-collision attacks. GRANULE has a strong S-box, which is the key design aspect of any cipher. GRANULE is proposed with 32 rounds, which are enough to provide resistance against all possible types of attacks. GRANULE consumes very little power compared to other modern lightweight ciphers. We believe GRANULE is well suited for providing robust security in applications such as IoT.
ePrint Report CHQS: Publicly Verifiable Homomorphic Signatures Beyond the Linear Case Lucas Schabhüser, Denis Butin, Johannes Buchmann
Sensitive data is often outsourced to cloud servers, with the server performing computation on the data. Computational correctness must be efficiently verifiable by a third party while the input data remains confidential. This paper introduces CHQS, a homomorphic signature scheme from bilinear groups fulfilling these requirements. CHQS is the first such scheme to be both context hiding and publicly verifiable for arithmetic circuits of degree two. It also achieves amortized efficiency: after a precomputation, verification can be faster than the evaluation of the circuit itself.
ePrint Report Trends in design of ransomware viruses Vlad Constantin Craciun, Andrei Mogage, Emil Simion
The ransomware nightmare is taking over the internet, impacting common users, small businesses and large ones. The interest and investment pushed into this market each month tell us a few things about the evolution of both the technical and the social-engineering side, and about what to expect from them in the near future. In this paper we analyze how ransomware programs developed over the last few years and how they were released into certain market segments throughout the deep web via RaaS, exploits or SPAM, with their authors learning from their own mistakes to bring profit to the next level. We also highlight some mistakes that were made which allowed the encrypted data to be recovered, along with the ransomware authors' preference for specific encryption types, how distribution took place, the silent agreement between ransomware, coin-miners and botnets, and some edge cases of encryption which may prove to be exploitable in the near future.
ePrint Report Consolidating Security Notions in Hardware Masking Lauren De Meyer, Begül Bilgin, Oscar Reparaz
This paper revisits the security conditions of masked hardware implementations. We describe a new, succinct, information-theoretic condition to ensure security in the presence of glitches. This single condition includes, but is not limited to, previous security notions such as those used in threshold implementations. As a consequence, we can prove the security of masked functions that work with non-uniform input sharings. Our notion naturally generalizes to higher orders. Furthermore, we can apply our condition in a tool that efficiently tests and validates the resistance of masked hardware circuits against DPA. Finally, we also treat the notion of (strong) non-interference from an information-theoretic point of view in order to unify the different security concepts and pave the way to the verification of composability in the presence of glitches.
ePrint Report Continuous NMC Secure Against Permutations and Overwrites, with Applications to CCA Secure Commitments Ivan Damgård, Tomasz Kazana, Maciej Obremski, Varun Raj, Luisa Siniscalchi
Non-Malleable Codes (NMC) were introduced by Dziembowski, Pietrzak and Wichs at ICS 2010 as a relaxation of error-correcting codes and error-detecting codes. Faust, Mukherjee, Nielsen, and Venturi at TCC 2014 introduced an even stronger notion of non-malleable codes, called continuous non-malleable codes (CNMC), where security is achieved against continuous tampering of a single codeword without re-encoding. We construct an information-theoretically secure CNMC resilient to bit permutations and overwrites; this is the first continuous NMC constructed outside of the split-state model. In this work we also study relations between CNMCs and parallel CCA commitments. We show that a CNMC can be used to bootstrap a self-destruct parallel CCA bit commitment into a self-destruct parallel CCA string commitment, where self-destruct parallel CCA is a weak form of parallel CCA security. We then remove the self-destruct limitation, obtaining a parallel CCA commitment that requires only one-way functions.
ePrint Report Randomness analysis for multiple-recursive matrix generator Subhrajyoti Deb, Bubu Bhuyan, Sartaj Ul Hasan
Randomness testing of binary sequences generated by any keystream generator is of paramount importance to both the designer and the attacker. Here we consider a word-oriented keystream generator known as the multiple-recursive matrix generator, which was introduced by Niederreiter (1993). Using the NIST statistical test suite as well as the DieHarder statistical package, we analyze the randomness properties of binary sequences generated by the multiple-recursive matrix generator and show that these sequences are not adequate for cryptographic applications. We also propose a nonlinearly filtered multiple-recursive matrix generator based on a special nonlinear function and establish that sequences generated by such a nonlinearly filtered multiple-recursive matrix generator give good randomness results. Moreover, we compare our randomness test results with some recent lightweight stream ciphers like Lizard, Fruit, and Plantlet.
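As a concrete illustration of the kind of test involved, here is a minimal re-implementation (ours) of the frequency (monobit) test in the style of NIST SP 800-22; it is not the NIST suite itself and is included only to show what a single randomness test computes.

import math

def monobit_test(bits: str) -> float:
    """Frequency (monobit) test: maps 0/1 to -1/+1, sums, and returns
    a p-value; small p-values indicate a biased sequence."""
    n = len(bits)
    s = sum(1 if b == '1' else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# A sequence is typically considered to pass at significance level 0.01
# if the returned p-value exceeds 0.01.
print(monobit_test('11001001000011111101101010100010'))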
14 June 2018
ePrint Report On the Impossibility of Cryptography with Tamperable Randomness Per Austrin, Kai-Min Chung, Mohammad Mahmoody, Rafael Pass, Karn Seth
We initiate a study of the security of cryptographic primitives in the presence of efficient tampering attacks on the randomness of honest parties. More precisely, we consider $p$-tampering attackers that may efficiently tamper with each bit of the honest parties' random tape with probability $p$, but have to do so in an "online" fashion. Our main result is a strong negative result: We show that any secure encryption scheme, bit commitment scheme, or zero-knowledge protocol can be "broken" with probability $p$ by a $p$-tampering attacker. The core of this result is a new Fourier analytic technique for biasing the output of bounded-value functions, which may be of independent interest.

We also show that this result cannot be extended to primitives such as signature schemes and identification protocols: assuming the existence of one-way functions, such primitives can be made resilient to $1/\mathrm{poly}(n)$-tampering attacks, where $n$ is the security parameter.
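To make the online tampering model concrete, the following toy simulation (ours, not from the paper) generates a random tape under $p$-tampering: for each new bit, with probability $p$ the adversary substitutes a bit of its choice based only on the bits produced so far, and otherwise the bit stays uniform. All names below are ours.

import random

def p_tampered_tape(n: int, p: float, tamper_rule) -> list:
    """Generate an n-bit random tape under online p-tampering.
    tamper_rule(prefix) returns the adversary's preferred next bit,
    given only the bits generated so far (the 'online' restriction)."""
    tape = []
    for _ in range(n):
        if random.random() < p:
            tape.append(tamper_rule(tape))      # adversary controls this bit
        else:
            tape.append(random.getrandbits(1))  # bit remains uniform
    return tape

# Toy adversary that tries to bias the tape toward all-ones.
tape = p_tampered_tape(64, 0.1, lambda prefix: 1)
print(sum(tape), 'ones out of', len(tape))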
12 June 2018
ePrint Report Cryptanalysis of SFN Block Cipher Sadegh Sadeghi, Nasour Bagheri
SFN is a lightweight block cipher designed to be compact in hardware environments and also efficient on software platforms. Compared to conventional block ciphers that are either Feistel or Substitution-Permutation (SP) network based, SFN uses a different encryption method that combines an SP network structure and a Feistel network structure. SFN supports a key length of 96 bits and its block length is 64 bits. In this paper, we propose an attack on full SFN by using a related-key distinguisher. With this attack, we are able to recover the keys with a time complexity of $2^{60.58}$ encryptions. The data and memory complexities of the attack are negligible. In addition, in the single-key mode, we present a meet-in-the-middle attack against the full-round block cipher for which the time complexity is $2^{80}$ SFN calculations and the memory complexity is $2^{87}$ bytes. The data complexity of this attack is only a single known plaintext and its corresponding ciphertext.
ePrint Report Ramanujan graphs in cryptography Anamaria Costache, Brooke Feigon, Kristin Lauter, Maike Massierer, Anna Puskas
In this paper we study the security of a proposal for Post-Quantum Cryptography from both a number theoretic and cryptographic perspective. Charles-Goren-Lauter in 2006 proposed two hash functions based on the hardness of finding paths in Ramanujan graphs. One is based on Lubotzky-Phillips-Sarnak (LPS) graphs and the other one is based on Supersingular Isogeny Graphs. A 2008 paper by Petit-Lauter-Quisquater breaks the hash function based on LPS graphs. On the Supersingular Isogeny Graphs proposal, recent work has continued to build cryptographic applications on the hardness of finding isogenies between supersingular elliptic curves. A 2011 paper by De Feo-Jao-Plût proposed a cryptographic system based on Supersingular Isogeny Diffie-Hellman as well as a set of five hard problems. In this paper we show that the security of the SIDH proposal relies on the hardness of the SIG path-finding problem introduced in [CGL06]. In addition, similarities between the number theoretic ingredients in the LPS and Pizer constructions suggest that the hardness of the path-finding problem in the two graphs may be linked. By viewing both graphs from a number theoretic perspective, we identify the similarities and differences between the Pizer and LPS graphs.
XS-circuits describe block ciphers that utilize two operations: X) bitwise modulo-2 addition of binary words and S) substitution of words using key-dependent S-boxes with possibly complicated internal structure. We propose a model of XS-circuits which, despite its simplicity, covers a rather wide range of block ciphers. In our model, several instances of a simple round circuit, which contains only one S operation, are linked together and form a compound circuit called a cascade. The S operations of a cascade are interpreted as independent round oracles. We deal with the diffusion characteristics of cascades, which are related to the cryptographic strength of the corresponding block ciphers. We obtain results on the invertibility, transitivity and 2-transitivity of the mappings induced by round circuits and their cascades. We provide estimates on the first and second activation times, where the i-th activation time is the minimum number of rounds which guarantees that at least i round oracles get different queries while processing two different cascade inputs. The activation times are related to differential cryptanalysis. We introduce similarity and duality relations between round circuits. Cascades of related circuits have the same or dual diffusion characteristics. We find canonical representatives of classes of similar circuits and show that the duality between circuits is related to the duality between differential and linear attacks against the corresponding block ciphers. We discuss families of circuits with a growing number of inputs. Such families can be used to build wide-block ciphers.
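As a toy illustration (ours) of a round circuit built from a single key-dependent S operation and XORs of binary words, consider a classical Feistel round and a cascade of such rounds; the function names and the 8-bit example oracles below are ours and have no cryptographic meaning.

def feistel_xs_round(left: int, right: int, round_oracle) -> tuple:
    """One XS-style round: (L, R) -> (R, L xor S(R)), with S the round oracle."""
    return right, left ^ round_oracle(right)

def cascade(left: int, right: int, round_oracles) -> tuple:
    """Link several instances of the round circuit; each S operation is
    treated as an independent round oracle, as in the cascade model."""
    for s in round_oracles:
        left, right = feistel_xs_round(left, right, s)
    return left, right

# Example with toy 8-bit "oracles" (illustration only).
oracles = [lambda x, k=k: (x * 17 + k) & 0xFF for k in (3, 94, 200)]
print(cascade(0x5A, 0xC3, oracles))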
4-bit crypto S-boxes have played a significant role in the encryption and decryption of many cipher algorithms for the last 4 decades. The generation and cryptanalysis of 4-bit crypto S-boxes is one of the major concerns of modern cryptography. In this paper, 48 4-bit crypto S-boxes are generated by adding each possible additive constant to every element of the crypto S-box of the corresponding multiplicative inverses of all elemental polynomials (EPs) under the concerned irreducible polynomials (IPs) over the Galois field $GF(2^4)$. Cryptanalysis of the 48 generated 4-bit crypto S-boxes is carried out with all relevant cryptanalysis algorithms for 4-bit crypto S-boxes. The results show the better security of the generated 4-bit crypto S-boxes.
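The following minimal sketch (ours) shows how one such S-box can be generated: take multiplicative inverses in $GF(2^4)$ defined by an irreducible polynomial and XOR an additive constant into every entry. The polynomial $x^4+x+1$ and the constants used here are examples only.

IP = 0b10011  # x^4 + x + 1, one irreducible polynomial of degree 4 over GF(2)

def gf16_mul(a: int, b: int) -> int:
    """Multiply two elements of GF(2^4) modulo IP."""
    r = 0
    for i in range(4):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(7, 3, -1):          # reduce degrees 7..4
        if (r >> i) & 1:
            r ^= IP << (i - 4)
    return r

def gf16_inv(a: int) -> int:
    """Multiplicative inverse by brute force (0 is mapped to 0)."""
    if a == 0:
        return 0
    return next(x for x in range(1, 16) if gf16_mul(a, x) == 1)

def sbox_with_constant(c: int) -> list:
    """Inverse S-box with the additive constant c XOR-ed into each entry."""
    return [gf16_inv(x) ^ c for x in range(16)]

print(sbox_with_constant(0x0))   # plain inverse S-box
print(sbox_with_constant(0x9))   # one additive-constant variant

Ranging over the degree-4 irreducible polynomials and all 16 additive constants yields a family of 48 S-boxes, matching the count mentioned in the abstract.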
We propose a new computational problem over noncommutative groups, called the twin conjugacy search problem. This problem is related to the conjugacy search problem and can be used for almost all of the same cryptographic constructions that are based on the conjugacy search problem. However, our new problem is at least as hard as the conjugacy search problem. Moreover, the twin conjugacy search problem has many applications. As one of the most important applications, we propose a trapdoor test which can replace the function of a decision oracle. We also show other applications of the problem, including a non-interactive key exchange protocol, a key exchange protocol, and a new encryption scheme which is secure against chosen-ciphertext attack, with a very simple and tight security proof and short ciphertexts, under a weak assumption, in the random oracle model.
ePrint Report Implementation and Performance Evaluation of RNS Variants of the BFV Homomorphic Encryption Scheme Ahmad Al Badawi, Yuriy Polyakov, Khin Mi Mi Aung, Bharadwaj Veeravalli, Kurt Rohloff
Homomorphic encryption provides the ability to compute on encrypted data without ever decrypting them. Potential applications include aggregating sensitive encrypted data in a cloud environment and computing on the data in the cloud without compromising data privacy. There have been several recent advances resulting in new homomorphic encryption schemes and optimized variants of existing schemes. Two efficient Residue-Number-System variants of the Brakerski-Fan-Vercauteren homomorphic encryption scheme were recently proposed: the Bajard-Eynard-Hasan-Zucca (BEHZ) variant based on integer arithmetic with auxiliary moduli, and the Halevi-Polyakov-Shoup (HPS) variant based on a combination of integer and floating-point arithmetic techniques. We implement and evaluate the performance of both variants on CPU (in both single- and multi-threaded settings) and on GPU. The most interesting (and also unexpected) result of our performance evaluation is that the HPS variant in practice scales significantly better (typically by 15%-30%) with the multiplicative depth of the computation circuit than BEHZ does. This implies that the runtime performance and supported circuit depth for HPS will always be better for most practical applications. The comparison of the homomorphic multiplication runtimes for CPU and GPU demonstrates that our best GPU performance results are 3x-33x faster than our best multi-threaded results for a modern server CPU environment. For a multiplicative depth of 98, our fastest GPU implementation performs decryption in 0.5 ms and homomorphic multiplication in 51 ms for 128-bit security settings, which is already practical for cloud environments supporting GPU computations. Our best runtime results are at least two orders of magnitude faster than all previously reported results for any variant of the Brakerski-Fan-Vercauteren scheme.
ePrint Report BISEN: Efficient Boolean Searchable Symmetric Encryption with Verifiability and Minimal Leakage Guilherme Borges, Henrique Domingos, Bernardo Ferreira, João Leitão, Tiago Oliveira, Bernardo Portela
The prevalence and availability of cloud infrastructures has made them the de facto solution for storing and archiving data, both for organizations and for individual users. Nonetheless, the cloud's widespread adoption is still hindered by data privacy and security concerns, particularly in applications with large data collections where efficient search and retrieval services are also major requirements. This leads to increased tension between security, efficiency, and search expressiveness, which current state-of-the-art solutions try to balance through complex cryptographic protocols that sacrifice efficiency and expressiveness for near-optimal security.

In this paper we tackle this tension by proposing BISEN, a new provably secure boolean searchable symmetric encryption scheme that improves these three complementary dimensions by exploring the design space of isolation guarantees offered by novel commodity hardware such as Intel SGX, abstracted as Isolated Execution Environments (IEEs). BISEN is the first scheme to enable highly expressive and arbitrarily complex boolean queries, with minimal leakage of information regarding performed queries and accessed data. Furthermore, by exploiting trusted hardware and the IEE abstraction, BISEN reduces communication costs between the client and the cloud, boosting query execution performance. Experimental validation and comparison with the state of the art show that BISEN provides better performance with enriched search semantics and security.
Witness pseudorandom functions (witness PRFs), introduced by Zhandry [Zha16], are defined for an NP language L and generate a pseudorandom value for any instance x. The same pseudorandom value can be obtained efficiently using a valid witness w certifying that x belongs to L. Zhandry built a subset-sum encoding scheme from multilinear maps and then converted a relation circuit corresponding to an NP language L into a subset-sum instance to achieve a witness PRF for L. The main goal in developing witness PRFs in [Zha16] was to avoid obfuscation in various constructions of cryptographic primitives. Reliance on cryptographic tools built from multilinear maps may be perilous, as existing multilinear maps are still heavy tools to use and suffer from many non-trivial attacks.

In this work, we give constructions of the following cryptographic primitives without using multilinear maps, instantiating obfuscation via randomized encodings:

- We construct witness PRFs using a puncturable pseudorandom function and a sub-exponentially secure randomized encoding scheme in the common reference string (CRS) model. A sub-exponentially secure randomized encoding scheme in the CRS model can be achieved from a sub-exponentially secure public-key functional encryption scheme and the learning with errors assumption with sub-exponential hardness.

- We turn our witness PRF into a multi-relation witness PRF, where one can use the scheme with a class of relations related to an NP language.

- Furthermore, we construct an offline witness encryption scheme using our proposed witness PRF. The offline witness encryption scheme of Abusalah et al. [AFP16] was built from a plain public-key encryption scheme, a statistical simulation-sound non-interactive zero-knowledge (SSS-NIZK) proof system and obfuscation. In their scheme, an SSS-NIZK proof is needed for encryption, whose efficiency depends on the underlying public-key encryption. We replace the SSS-NIZK by our witness PRF and construct an offline witness encryption scheme. More precisely, our scheme is based on public-key encryption and a witness PRF, and employs a sub-exponentially secure randomized encoding scheme in the CRS model to instantiate obfuscation. Our offline witness encryption can be turned into an offline functional witness encryption scheme, where decryption releases a function of the message and witness as output.
ePrint Report Lower Bounds on Lattice Enumeration with Extreme Pruning Yoshinori Aono, Phong Q. Nguyen, Takanobu Seito, Junji Shikata
At Eurocrypt '10, Gama, Nguyen and Regev introduced lattice enumeration with extreme pruning: this algorithm is implemented in state-of-the-art lattice reduction software and used in challenge records. They showed that extreme pruning provided an exponential speed-up over full enumeration. However, no limit on its efficiency was known, which was problematic for long-term security estimates of lattice-based cryptosystems. We prove the first lower bounds on lattice enumeration with extreme pruning: if the success probability is lower bounded, we can lower bound the global running time taken by extreme pruning. Our results are based on geometric properties of cylinder intersections and some form of isoperimetry. We discuss their impact on lattice security estimates.
ePrint Report Polynomial Functional Encryption Scheme with Linear Ciphertext Size Jung Hee Cheon, Seungwan Hong, Changmin Lee, Yongha Son
In this paper, we suggest a new selectively secure functional encryption scheme for degree-$d$ polynomials. The number of ciphertexts for a message of length $\ell$ in our scheme is $O(\ell)$ regardless of $d$, while it is at least $\ell^{d/2}$ in previous works.

Our main idea is to generically combine two abstract encryption schemes that satisfy some special properties. We also give an instantiation of our scheme by combining the ElGamal scheme and a Ring-LWE based homomorphic encryption scheme, whose ciphertext length is exactly $2\ell+1$ for any degree $d$.
We present a new method that produces bounded FHE schemes (see Definition 3), starting from encryption schemes that support one algebraic operation. We use this technique to construct examples of encryption schemes that, theoretically, can handle any algebraic function on encrypted data.
ePrint Report Ring Homomorphic Encryption Schemes Mugurel Barcau, Vicentiu Pasol
We analyze the structure of commutative ring homomorphic encryption schemes and show that they are not quantum IND-CCA secure.
6 June 2018
ePrint Report Pisa: Arbitration Outsourcing for State Channels Patrick McCorry, Surya Bakshi, Iddo Bentov, Andrew Miller, Sarah Meiklejohn
State channels are a leading approach for improving the scalability of blockchains and cryptocurrencies. They allow a group of distrustful parties to optimistically execute an application-defined program amongst themselves, while the blockchain serves as a backstop in case of a dispute or abort. This effectively bypasses the congestion, fees and performance constraints of the underlying blockchain in the typical case. However, state channels introduce a new and undesirable assumption that a party must remain online and synchronised with the blockchain at all times to defend against execution fork attacks. An execution fork can revert a state channel's history, potentially causing financial damage to a party that is innocent except for having crashed. To provide security even to parties that may go offline for an extended period of time, we present Pisa, a protocol that enables such parties to delegate to a third party, called the custodian, the task of cancelling execution forks on their behalf. To evaluate Pisa, we provide a proof-of-concept implementation for a simplified Sprites and we demonstrate that it is cost-efficient to deploy on the Ethereum network.
ePrint Report Smart contracts for bribing miners Patrick McCorry, Alexander Hicks, Sarah Meiklejohn
We present three smart contracts that allow a briber to fairly exchange bribes with miners who pursue a mining strategy benefiting the briber. The first contract, CensorshipCon, highlights that Ethereum's uncle block reward policy can directly subsidise the cost of bribing miners. The second contract, HistoryRevisionCon, rewards miners via an in-band payment for reversing transactions or enforcing a new state of another contract. The third contract, GoldfingerCon, rewards miners in one cryptocurrency for reducing the utility of another cryptocurrency. This work is motivated by the need to understand the extent to which smart contracts can impact the incentive mechanisms involved in Nakamoto-style consensus protocols.
ePrint Report Secure MPC: Laziness Leads to GOD Saikrishna Badrinarayanan, Aayush Jain, Nathan Manohar, Amit Sahai
Secure multi-party computation (MPC) is a problem of fundamental interest in cryptography. Traditional MPC protocols treat a "lazy" party (one that behaves honestly but aborts in the middle of the protocol execution) as a corrupt party that is colluding with the other corrupt parties. However, this outlook is unrealistic as there are various cases where an honest party may abort and turn "lazy" during the execution of a protocol without colluding with other parties. To address this gap, we initiate the study of what we call secure lazy multi-party computation with a formal definition that achieves meaningful correctness and security guarantees. We then construct a three-round lazy MPC protocol from standard cryptographic assumptions. Our protocol is malicious secure in the presence of a CRS and semi-malicious secure in the plain model without a CRS.

Using the techniques from above, we also achieve independently interesting consequences in the traditional MPC model. We construct the first round-optimal (three-round) MPC protocol in the plain model (without a CRS) that achieves guaranteed output delivery (GOD) and is malicious-secure in the presence of an honest majority of parties. Previously, Gordon et al. [CRYPTO '15] constructed a three-round protocol that achieves GOD in the honest majority setting, but it required a CRS. They also showed that it is impossible to construct two-round protocols for the same task even in the presence of a CRS. Thus, our result is the first three-round GOD protocol that does not require any trusted setup, and it is optimal in terms of round complexity.

Furthermore, all our above protocols have communication complexity proportional only to the depth of the circuit being evaluated.

The key technical tool in our above protocols is a primitive called threshold multi-key fully homomorphic encryption which we define and construct assuming just learning with errors (LWE). We believe this primitive may be of independent interest.
ePrint Report PIR-PSI: Scaling Private Contact Discovery Daniel Demmler, Peter Rindal, Mike Rosulek, Ni Trieu
An important initialization step in many social-networking applications is contact discovery, which allows a user of the service to identify which of its existing social contacts also use the service. Naive approaches to contact discovery reveal a user's entire set of social/professional contacts to the service, presenting a significant tension between functionality and privacy.

In this work, we present a system for private contact discovery, in which the client learns only the intersection of its own contact list and a server's user database, and the server learns only the (approximate) size of the client's list. The protocol is specifically tailored to the case of a small client set and large user database. Our protocol has provable security guarantees and combines new ideas with state-of-the-art techniques from private information retrieval and private set intersection.

We report on a highly optimized prototype implementation of our system, which is practical on real-world set sizes. For example, contact discovery between a client with 1024 contacts and a server with 67 million user entries takes 1.36 sec (when using server multi-threading) and uses only 4.28 MiB of communication.
