IACR News
If you have a news item you wish to distribute, please send it to the communications secretary. See also the events database for conference announcements.
Here you can see all recent updates to the IACR webpage.
06 June 2018
Patrick McCorry, Surya Bakshi, Iddo Bentov, Andrew Miller, Sarah Meiklejohn
State channels are a leading approach for improving the scalability of blockchains and cryptocurrencies. They allow a group of distrustful parties to optimistically execute an application-defined program amongst themselves, while the blockchain serves as a backstop in case of a dispute or abort. This effectively bypasses the congestion, fees and performance constraints of the underlying blockchain in the typical case. However, state channels introduce a new and undesirable assumption: a party must remain online and synchronised with the blockchain at all times to defend against execution fork attacks. An execution fork can revert a state channel's history, potentially causing financial damage to a party that is innocent except for having crashed. To provide security even to parties that may go offline for an extended period of time, we present Pisa, a protocol that enables such parties to delegate to a third party, called the custodian, to cancel execution forks on their behalf. To evaluate Pisa, we provide a proof-of-concept implementation for a simplified variant of Sprites and we demonstrate that it is cost-efficient to deploy on the Ethereum network.
Patrick McCorry, Alexander Hicks, Sarah Meiklejohn
We present three smart contracts that allow a briber to fairly exchange bribes with miners who pursue a mining strategy benefiting the briber. The first contract, CensorshipCon, highlights that Ethereum's uncle block reward policy can directly subsidise the cost of bribing miners. The second contract, HistoryRevisionCon, rewards miners via an in-band payment for reversing transactions or enforcing a new state of another contract. The third contract, GoldfingerCon, rewards miners in one cryptocurrency for reducing the utility of another cryptocurrency. This work is motivated by the need to understand the extent to which smart contracts can impact the incentive mechanisms involved in Nakamoto-style consensus protocols.
Saikrishna Badrinarayanan, Aayush Jain, Nathan Manohar, Amit Sahai
Secure multi-party computation (MPC) is a problem of fundamental interest in cryptography. Traditional MPC protocols treat a "lazy" party (one that behaves honestly but aborts in the middle of the protocol execution) as a corrupt party that is colluding with the other corrupt parties. However, this outlook is unrealistic as there are various cases where an honest party may abort and turn "lazy" during the execution of a protocol without colluding with other parties. To address this gap, we initiate the study of what we call secure lazy multi-party computation with a formal definition that achieves meaningful correctness and security guarantees. We then construct a three-round lazy MPC protocol from standard cryptographic assumptions. Our protocol is malicious secure in the presence of a CRS and semi-malicious secure in the plain model without a CRS.
Using the techniques from above, we also achieve independently interesting consequences in the traditional MPC model. We construct the first round-optimal (three rounds) MPC protocol in the plain model (without a CRS) that achieves guaranteed output delivery (GOD) and is malicious-secure in the presence of an honest majority of parties. Previously, Gordon et al. [CRYPTO'15] constructed a three-round protocol that achieves GOD in the honest majority setting, but required a CRS. They also showed that it is impossible to construct two-round protocols for the same even in the presence of a CRS. Thus, our result is the first 3-round GOD protocol that does not require any trusted setup, and it is optimal in terms of round complexity.
Furthermore, all our above protocols have communication complexity proportional only to the depth of the circuit being evaluated.
The key technical tool in our above protocols is a primitive called threshold multi-key fully homomorphic encryption which we define and construct assuming just learning with errors (LWE). We believe this primitive may be of independent interest.
Daniel Demmler, Peter Rindal, Mike Rosulek, Ni Trieu
An important initialization step in many social-networking applications is contact discovery, which allows a user of the service to identify which of its existing social contacts also use the service. Naive approaches to contact discovery reveal a user's entire set of social/professional contacts to the service, presenting a significant tension between functionality and privacy.
In this work, we present a system for private contact discovery, in which the client learns only the intersection of its own contact list and a server's user database, and the server learns only the (approximate) size of the client's list. The protocol is specifically tailored to the case of a small client set and large user database. Our protocol has provable security guarantees and combines new ideas with state-of-the-art techniques from private information retrieval and private set intersection.
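For intuition, a minimal sketch of the ideal functionality that such a protocol emulates is given below (the function name, the example sets, and the padding policy are illustrative assumptions, not part of the paper's protocol):

    def ideal_contact_discovery(client_contacts: set, server_users: set,
                                padded_size: int):
        # Trusted-party abstraction: the client gets the intersection, while the
        # server only learns the padded (approximate) size of the client's list.
        assert len(client_contacts) <= padded_size
        client_output = client_contacts & server_users
        server_output = padded_size
        return client_output, server_output

    # Example: 4 real contacts padded to 8; the server never sees which ones.
    hits, size_seen = ideal_contact_discovery(
        {"alice", "bob", "carol", "dave"}, {"bob", "dave", "erin"}, 8)
    print(hits, size_seen)   # {'bob', 'dave'} 8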
We report on a highly optimized prototype implementation of our system, which is practical on real-world set sizes. For example, contact discovery between a client with 1024 contacts and a server with 67 million user entries takes 1.36 sec (when using server multi-threading) and uses only 4.28 MiB of communication.
Jonathan Katz, Samuel Ranellucci, Mike Rosulek, Xiao Wang
Wang et al. (CCS 2017) recently proposed a protocol for malicious secure two-party computation that represents the state-of-the-art with regard to concrete efficiency in both the single-execution and amortized settings, with or without preprocessing. We show here several optimizations of their protocol that result in a significant improvement in the overall communication and running time. Specifically:
- We show how to make the authenticated garbling at the heart of their protocol compatible with the half-gate optimization of Zahur et al. (Eurocrypt 2015). We also show how to avoid sending an information-theoretic MAC for each garbled row. These two optimizations give up to a 2.6x improvement in communication, and make the communication of the online phase essentially equivalent to that of state-of-the-art semi-honest secure computation.
- We show various optimizations to their protocol for generating AND triples that, overall, result in a 1.5x improvement in the communication and a 2x improvement in the computation for that step.
Tore Kasper Frederiksen, Yehuda Lindell, Valery Osheter, Benny Pinkas
We present two new, highly efficient protocols for securely generating a distributed RSA key pair in the two-party setting. One protocol is semi-honestly secure and the other maliciously secure. Both are constant-round, do not rely on any specific number-theoretic assumptions, and improve significantly over the state of the art by allowing slight leakage (which we show does not affect security).
For our maliciously secure protocol, our most significant improvement comes from executing most of the protocol in a ``strong'' semi-honest manner and then giving a single, lightweight zero-knowledge argument of correct execution. We introduce other significant improvements as well. One such improvement comes from showing that certain, limited leakage does not compromise security, which allows us to use lightweight subprotocols. Another improvement, which may be of independent interest, is our approach for multiplying two large integers using OT in the malicious setting without being susceptible to a selective-failure attack. Finally, we implement our malicious protocol and show that its performance is an order of magnitude better than the best previous protocol, which provided only semi-honest security.
Pooya Farshim, Georg Fuchsbauer, Alain Passelègue
We revisit constructions of asymmetric primitives from obfuscation and give simpler alternatives. We consider public-key encryption, (hierarchical) identity-based encryption ((H)IBE), and predicate encryption. Obfuscation has already been shown to imply PKE by Sahai and Waters (STOC'14) and full-fledged functional encryption by Garg et al. (FOCS'13). We simplify all these constructions and reduce the necessary assumptions on the class of circuits that the obfuscator needs to support. Our PKE scheme relies on just a PRG and does not need any puncturing. Our IBE and bounded HIBE schemes convert natural key-delegation mechanisms from (recursive) applications of puncturable PRFs to IBE and HIBE schemes. Our most technical contribution is an unbounded HIBE, which uses (public-coin) differing-inputs obfuscation for circuits and whose proof relies on a recent pebbling-based hybrid argument by Fuchsbauer et al. (ASIACRYPT'14). All our constructions are anonymous, support arbitrary inputs, and have compact keys and ciphertexts.
Itai Dinur
The generalized birthday problem (GBP) was introduced by Wagner in 2002 and has been shown to have many applications in cryptanalysis. In its typical variant, we are given access to a function $H:\{0,1\}^{\ell} \rightarrow \{0,1\}^n$ (whose specification depends on the underlying problem) and an integer $K>0$. The goal is to find $K$ distinct inputs to $H$ (denoted by $\{x_i\}_{i=1}^{K}$) such that $\sum_{i=1}^{K}H(x_i) = 0$. Wagner's K-tree algorithm solves the problem in time and memory complexities of about $N^{1/(\lfloor \log K \rfloor + 1)}$ (where $N= 2^n$). Two important open problems raised by Wagner were (1) devise efficient time-memory tradeoffs for GBP, and (2) reduce the complexity of the K-tree algorithm for $K$ which is not a power of 2.
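As a toy illustration of this problem statement, the sketch below runs Wagner's classical K-tree approach for $K=4$ with a 24-bit truncation of SHA-256 standing in for $H$; the parameters and list sizes are made up for the demo, and none of the algorithms introduced in this paper are shown:

    import hashlib
    from collections import defaultdict

    n = 24                      # hash output bits, so N = 2^n
    low = n // 3                # bits zeroed at the first merge level
    LIST_SIZE = 2 ** 9          # a bit above N^(1/3) = 2^8 for a reliable demo

    def H(x: bytes) -> int:
        # toy n-bit hash standing in for the problem-specific function H
        return int.from_bytes(hashlib.sha256(x).digest(), "big") >> (256 - n)

    def make_list(tag: int):
        # (inputs, hash) pairs; the tag byte keeps inputs distinct across lists
        out = []
        for i in range(LIST_SIZE):
            x = bytes([tag]) + i.to_bytes(4, "big")
            out.append(((x,), H(x)))
        return out

    def merge(A, B, mask):
        # join entries of A and B whose hash XOR vanishes on the masked bits
        index = defaultdict(list)
        for xs, h in A:
            index[h & mask].append((xs, h))
        return [(xs + ys, h ^ g) for ys, g in B for xs, h in index[g & mask]]

    L1, L2, L3, L4 = (make_list(t) for t in range(4))
    A = merge(L1, L2, (1 << low) - 1)      # level 1: zero the low n/3 bits
    B = merge(L3, L4, (1 << low) - 1)
    roots = merge(A, B, (1 << n) - 1)      # root: the full n-bit XOR must vanish
    if roots:
        xs, _ = roots[0]
        assert len(set(xs)) == 4 and H(xs[0]) ^ H(xs[1]) ^ H(xs[2]) ^ H(xs[3]) == 0
        print("4-XOR solution:", [x.hex() for x in xs])
    else:
        print("no solution this run; the expected count is small, so just re-run")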
In this paper, we make progress in both directions. First, we improve the best known GBP time-memory tradeoff curve (published independently by Nikoli\'{c} and Sasaki and by Biryukov and Khovratovich) for all $K \geq 8$ from $T^2M^{\lfloor \log K \rfloor -1} = N$ to $T^{\lceil (\log K)/2 \rceil + 1 }M^{\lfloor (\log K)/2 \rfloor} = N$, applicable for a large range of parameters. For example, for $K = 8$ we improve the best previous tradeoff from $T^2M^2 = N$ to $T^3M = N$ and for $K = 32$ the improvement is from $T^2M^4 = N$ to $T^4M^2 = N$.
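As a quick sanity check on the exponents quoted above (a sketch only, ignoring constants and low-order factors), note that the new curve passes through Wagner's point $T=M=N^{1/(\lfloor \log K \rfloor + 1)}$:

    from math import floor, ceil, log2

    for K in (8, 32):
        a = ceil(log2(K) / 2) + 1          # exponent of T in the new curve
        b = floor(log2(K) / 2)             # exponent of M in the new curve
        e = 1 / (floor(log2(K)) + 1)       # K-tree point: T = M = N^e
        print(f"K={K}: curve T^{a} M^{b} = N; K-tree point N^{e:.3f}; "
              f"curve check a*e + b*e = {a*e + b*e:.3f} (should be 1.0)")
    # K=8  -> T^3 M   = N  (Wagner: T = M = N^{1/4})
    # K=32 -> T^4 M^2 = N  (Wagner: T = M = N^{1/6})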
Next, we consider values of $K$ which are not powers of 2 and show that in many cases even more efficient time-memory tradeoff curves can be obtained. Most interestingly, for $K \in \{6,7,14,15\}$ we present algorithms with the same time complexities as the K-tree algorithm, but with significantly reduced memory complexities. In particular, for $K=6$ the K-tree algorithm achieves $T=M=N^{1/3}$, whereas we obtain $T=N^{1/3}$ and $M=N^{1/6}$. For $K=14$, Wagner's algorithm achieves $T=M=N^{1/4}$, while we obtain $T=N^{1/4}$ and $M=N^{1/8}$. This gives the first significant improvement over the K-tree algorithm for small $K$.
Finally, we optimize our techniques for several concrete GBP instances and show how to solve some of them with improved time and memory complexities compared to the state-of-the-art.
Our results are obtained using a framework that combines several algorithmic techniques such as variants of the Schroeppel-Shamir algorithm for solving knapsack problems (devised in works by Howgrave-Graham and Joux and by Becker, Coron and Joux) and dissection algorithms (published by Dinur, Dunkelman, Keller and Shamir). It then builds on these techniques to develop new GBP algorithms.
Yackolley Amoussou-Guenou, Antonella Del Pozzo, Maria Potop-Butucaru, Sara Tucci-Piergiovanni
Tendermint-core blockchains offer strong consistency (no forks) in an open system relying on two ingredients: (i) a set of validators that generate blocks via a variant of the Practical Byzantine Fault Tolerant (PBFT) consensus protocol and (ii) a rewarding mechanism that dynamically selects nodes to be validators for the next block via proof-of-stake, a non-energy-consuming alternative to proof-of-work. It is well known that in such open systems the main threat is the tragedy of the commons, which may cause the system to collapse if the rewarding mechanism is not adequate. At a minimum, the rewarding mechanism must be $fair$, i.e. distribute the rewards in proportion to the merit of participants.
The contribution of this paper is two-fold. First, we provide a formal description of the Tendermint-core protocol and we prove that in eventually synchronous systems it solves (i) a variant of one-shot consensus for the validation of a single block and (ii) a variant of the repeated consensus problem for multiple blocks.
Our second contribution relates to the fairness of the Tendermint rewarding mechanism. We prove that Tendermint rewarding is not fair. However, a small twist in the protocol makes it eventually fair.
Additionally, we prove that there exists an (eventual) fair rewarding mechanism in repeated consensus-based blockchains if and only if the system is (eventually) synchronous.
05 June 2018
Farnoud Farahmand, William Diehl, Abubakr Abdulgadir, Jens-Peter Kaps, Kris Gaj
Authenticated ciphers offer potential benefits to resource-constrained devices in the Internet of Things (IoT). The CAESAR competition seeks optimal authenticated ciphers based on several criteria, including performance in resource-constrained (i.e., low-area, low-power, and low-energy) hardware. Although the competition specified a lightweight use case for Round 3, most hardware submissions to Round 3 were not lightweight implementations, in that they employed architectures optimized for best throughput-to-area (TP/A) ratio, and used the Pre- and PostProcessor modules from the CAESAR Hardware (HW) Development Package designed for high-speed applications. In this research, we provide true lightweight implementations of selected ciphers (ACORN, NORX, CLOC-AES, SILC-AES, and SILC-LED). These implementations use an improved version of the CAESAR HW Development Package designed for lightweight applications, and are fully compliant with the CAESAR HW Application Programming Interface for Authenticated Ciphers. Our lightweight implementations achieve an average of 55% reduction in area and 40% reduction in power compared to their corresponding high-speed versions. Although the average energy per bit of lightweight ciphers increases by a factor of 3.6, the lightweight version of NORX actually uses 47% less energy per bit than its corresponding high-speed implementation.
Prabhanjan Ananth, Arka Rai Choudhuri, Aarushi Goel, Abhishek Jain
We study the exact round complexity of secure multiparty computation (MPC) in the honest majority setting. We construct the following round-optimal protocols in the plain model:
- Security with abort: Assuming public-key encryption (PKE), we construct two round MPC that achieves security with abort against any $t<\frac{n}{2}$ malicious corruptions. Previously, the best known two round protocols only achieved weaker security notions against smaller corruption thresholds.
- Guaranteed output delivery: Assuming PKE, we construct two round MPC over broadcast and private channels that achieves security with guaranteed output delivery against any $t<\frac{n}{2}$ (semi-honest) fail-stop corruptions. This result overcomes the lower bounds of Gennaro et al. [CRYPTO'02] and Gordon et al. [CRYPTO'15]. With the additional assumption of Zaps, we also construct three round MPC in the broadcast model that achieves security with guaranteed output delivery against any $t<\frac{n}{2}$ malicious corruptions. Previously, such a protocol was only known in the common reference string model, based on specific learning assumptions.
All of our results are obtained via general compilers that may be of independent interest.
Elette Boyle, Yuval Ishai, Antigoni Polychroniadou
Secure computations on big data call for protocols that have sublinear communication complexity in the input length. While fully homomorphic encryption (FHE) provides a general solution to the problem, employing it on a large scale is currently quite far from being practical. This is also the case for secure computation tasks that reduce to weaker forms of FHE such as ''somewhat homomorphic encryption'' or single-server private information retrieval (PIR).
Quite unexpectedly, Aggarwal, Mishra, and Pinkas (Eurocrypt 2004), Brickell and Shmatikov (Asiacrypt 2005), and shelat and Venkitasubramaniam (Asiacrypt 2015) have shown that in several natural instances of secure computation on big data, there are practical sublinear communication protocols that only require sublinear local computation and minimize the use of expensive public-key operations. This raises the question of whether similar protocols exist for other natural problems.
In this paper we put forward a framework for separating ''practical'' sublinear protocols from ''impractical'' ones, and establish a methodology for identifying ''provably hard'' big-data problems that do not admit practical protocols. This is akin to the use of NP-completeness to separate hard algorithmic problems from easy ones. We show that while the previous protocols of Aggarwal et al., Brickell and Shmatikov, and shelat and Venkitasubramaniam are indeed classified as being ''practical'' in this framework, slight variations of the problems they solve and other natural computational problems on big data are hard.
Our negative results are established by showing that the problem at hand is ''PIR-hard'' in the sense that any secure protocol for the problem implies PIR on a large database. This imposes a barrier on the local computational cost of secure protocols for the problem. We also identify a new natural relaxation of PIR that we call semi-PIR, which is useful for establishing ''intermediate hardness'' of several practically motivated secure computation tasks. We show that semi-PIR implies slightly sublinear PIR via an adaptive black-box reduction and that ruling out a stronger black-box reduction would imply a major breakthrough in complexity theory. We also establish information-theoretic separations between semi-PIR and PIR, showing that some problems that we prove to be semi-PIR-hard are not PIR-hard.
Koji Chida, Daniel Genkin, Koki Hamada, Dai Ikarashi, Ryo Kikuchi, Yehuda Lindell, Ariel Nof
Protocols for secure multiparty computation enable a set of parties to compute a function of their inputs without revealing anything but the output. The security properties of the protocol must be preserved in the presence of adversarial behavior. The two classic adversary models considered are semi-honest (where the adversary follows the protocol specification but tries to learn more than allowed by examining the protocol transcript) and malicious (where the adversary may follow any arbitrary attack strategy). Protocols for semi-honest adversaries are often far more efficient, but in many cases the security guarantees are not strong enough.
In this paper, we present new protocols for securely computing any functionality represented by an arithmetic circuit. We utilize a new method for verifying that the adversary does not cheat, which yields a cost of just twice that of semi-honest protocols in some settings. Our protocols are information-theoretically secure in the presence of malicious adversaries, assuming an honest majority. We present protocol variants for small and large fields, and show how to efficiently instantiate them based on replicated secret sharing and Shamir sharing. As with previous works in this area aiming to achieve high efficiency, our protocol is secure with abort and does not achieve fairness, meaning that the adversary may receive output while the honest parties do not.
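For background on one of the instantiations mentioned above, here is a hedged sketch of plain three-party replicated secret sharing over an illustrative prime field; it shows only the textbook sharing and local addition, not the paper's cheating-verification method:

    import secrets

    P = 2**61 - 1                        # illustrative prime field modulus

    def share(x):
        # additive parts x1 + x2 + x3 = x; party i holds the pair missing x_i
        x1, x2 = secrets.randbelow(P), secrets.randbelow(P)
        x3 = (x - x1 - x2) % P
        return [(x2, x3), (x3, x1), (x1, x2)]     # shares of parties 1, 2, 3

    def add(sh_x, sh_y):
        # addition is purely local: each party adds its pairs component-wise
        return [((a + c) % P, (b + d) % P) for (a, b), (c, d) in zip(sh_x, sh_y)]

    def reconstruct(shares):
        # any two parties together hold all three additive parts
        (x2, x3), (_, x1), _ = shares
        return (x1 + x2 + x3) % P

    sx, sy = share(123456789), share(987654321)
    assert reconstruct(add(sx, sy)) == (123456789 + 987654321) % P
    print("replicated-sharing addition verified")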
We implemented our protocol and ran experiments for different numbers of parties, different network configurations and different circuit depths. Our protocol significantly outperforms the previous best for this setting (Lindell and Nof, CCS 2017); for a large number of parties, our implementation runs almost an order of magnitude faster than theirs.
Andre Esser, Felix Heuer, Robert Kübler, Alexander May, and Christian Sohler
The slightly subexponential algorithm of Blum, Kalai and Wasserman (BKW) provides a basis for assessing LPN/LWE security. However, its huge memory consumption strongly limits its practical applicability, thereby preventing precise security estimates for cryptographic LPN/LWE instantiations.
We provide the first time-memory trade-offs for the BKW algorithm. For instance, we show how to solve LPN in dimension $k$ in time $2^{\frac 43\frac k{\log k}}$ and memory $2^{\frac 23\frac k{\log k}}$. Using the Dissection technique due to Dinur et al. (Crypto 12) and a novel, slight generalization thereof, we obtain fine-grained trade-offs for any available (subexponential) memory while the running time remains subexponential.
Reducing the memory consumption of BKW below its running time also allows us to propose a first quantum version QBKW for the BKW algorithm.
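For a rough feel for these exponents, the back-of-the-envelope sketch below compares the stated trade-off point with the classic BKW cost, taken here as roughly $2^{k/\log k}$ for both time and memory (constants and polynomial factors ignored; the dimension $k=512$ is an arbitrary illustrative choice):

    from math import log2

    def exponent(c, k):
        # the cost 2^{c * k / log2(k)}, returned as its exponent of 2
        return c * k / log2(k)

    k = 512                              # illustrative LPN dimension
    print(f"classic BKW   : time ~2^{exponent(1, k):.0f},  memory ~2^{exponent(1, k):.0f}")
    print(f"this trade-off: time ~2^{exponent(4/3, k):.0f}, memory ~2^{exponent(2/3, k):.0f}")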
Shixiong Wang, Longjiang Qu, Chao Li, Shaojing Fu
In this paper, we study the condition of finding small solutions $(x,y,z)=(x_0, y_0, z_0)$ of the equation $Bx-Ay=z$. The framework is derived from Wiener's small private exponent attack on RSA and May-Ritzenhofen's investigation about the implicit factorization problem, both of which can be generalized to solve the above equation. We show that these two methods, together with Coppersmith's method, are equivalent for solving $Bx-Ay=z$ in the general case. Then based on Coppersmith's method, we present two improvements for solving $Bx-Ay=z$ in some special cases. The first improvement pays attention to the case where either $\gcd(x_0,z_0,A)$ or $\gcd(y_0,z_0,B)$ is large enough. As the applications of this improvement, we propose some new cryptanalysis of RSA, such as new results about the generalized implicit factorization problem, attacks with known bits of the prime factor, and so on. The motivation of these applications comes from oracle based complexity of factorization problems. The second improvement assumes that the value of $C \equiv z_0\ (\mathrm{mod}\ x_0)$ is known. We present two attacks on RSA as its applications. One focuses on the case with known bits of the private exponent together with the prime factor, and the other considers the case with a small difference of the two prime factors. Our new attacks on RSA improve the previous corresponding results respectively, and the correctness of the approach is verified by experiments.
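As a toy of the Wiener-style (continued-fraction) view of $Bx-Ay=z$ referenced above, the sketch below plants a small solution and recovers it from the convergents of $B/A$; all numbers are made up for illustration, and none of the paper's Coppersmith-based improvements appear here:

    def convergents(num, den):
        # continued-fraction convergents p/q of num/den
        p0, q0, p1, q1 = 0, 1, 1, 0
        while den:
            a, (num, den) = num // den, (den, num % den)
            p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
            yield p1, q1

    def solve_small(B, A, z_bound):
        # search convergents y/x of B/A for |B*x - A*y| <= z_bound
        for y, x in convergents(B, A):
            if x and abs(B * x - A * y) <= z_bound:
                return x, y, B * x - A * y
        return None

    x0, y0 = 1234, 4321                  # planted small, coprime solution
    A = 10**30 + 7                       # an arbitrary large integer
    z0 = (-A * y0) % x0                  # small by construction (0 <= z0 < x0)
    B = (A * y0 + z0) // x0              # ensures B*x0 - A*y0 = z0 exactly
    assert solve_small(B, A, z_bound=x0) == (x0, y0, z0)
    print("recovered (x, y, z) =", (x0, y0, z0))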
Aggelos Kiayias, Annabell Kuldmaa, Helger Lipmaa, Janno Siim, Thomas Zacharias
In state-of-the-art e-voting systems, a bulletin board (BB) is a critical component for preserving election integrity and availability. Although it is common in the literature to assume that a BB is a centralized entity that is trusted, in the recent works of Culnane and Schneider [CSF 2014] and Chondros et al. [ICDCS 2016], the importance of removing BB as a single point of failure has been extensively discussed.
Motivated by these works, we introduce a framework for the formal security analysis of the BB functionality modeled as a distributed system that comprises
(i) a subsystem of item collection (IC) peers that receive and store the submitted items, and
(ii) a subsystem of audit board (AB) peers, where the IC subsystem publishes the stored items for verification.
Our framework treats a secure BB as a robust public transaction ledger, defined by Garay et al. [Eurocrypt 2015], that additionally supports the generation of receipts for successful posting. Namely, in our model, a secure BB system achieves Persistence and Liveness that are confirmable, in the sense that any malicious behavior of the BB system can be detected via a verification mechanism.
As a case study for our framework, we analyze the BB system of Culnane and Schneider and point out its weaknesses. We demonstrate an attack revealing that the said system does not achieve Confirmable Liveness, even in the case where the adversary is computationally bounded and covert, i.e., it may deviate from the protocol specification but does not want to be detected. In addition, we show that special care should be taken in the choice of the underlying cryptographic primitives, so that the claimed fault-tolerance threshold of N/3 out of N corrupted IC peers is preserved.
Next, based on our analysis, we introduce a new BB protocol that upgrades the [CSF 2014] protocol. We prove that it tolerates fewer than N/3 out of N corrupted IC peers, both for Persistence and Confirmable Liveness, against a computationally bounded general Byzantine adversary. Furthermore, Persistence can also be Confirmable if we distribute the AB (originally a centralized entity in [CSF 2014]) as a replicated service with honest majority.
04 June 2018
Prabhanjan Ananth, Yuval Ishai, Amit Sahai
We consider the problem of protecting general computations against constant-rate random leakage. That is, the computation is performed by a randomized boolean circuit that maps a randomly encoded input to a randomly encoded output, such that even if the value of every wire is independently leaked with some constant probability $p > 0$, the leakage reveals essentially nothing about the input.
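To illustrate only the random-leakage model (not the constructions of this work), the hedged sketch below XOR-encodes a bit into $t$ wires and leaks each wire independently with probability $p$; any proper subset of the shares is uniform, so the bit is hidden unless all $t$ shares leak:

    import random

    def encode(bit, t):
        # t XOR-shares whose parity equals the bit
        shares = [random.getrandbits(1) for _ in range(t - 1)]
        shares.append(bit ^ (sum(shares) & 1))
        return shares

    def leak(shares, p):
        # each wire value leaks independently with probability p
        return {i: s for i, s in enumerate(shares) if random.random() < p}

    p, t, trials = 0.25, 8, 100_000
    full = sum(len(leak(encode(random.getrandbits(1), t), p)) == t
               for _ in range(trials))
    print(f"runs leaking the whole encoding: {full / trials:.5f} (p^t = {p**t:.5f})")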
In this work we provide a conceptually simple, modular approach for solving the above problem, providing a simpler and self-contained alternative to previous constructions of Ajtai (STOC 2011) and Andrychowicz et al. (Eurocrypt 2016). We also obtain several extensions and generalizations of this result. In particular, we show that for every leakage probability $p<1$, there is a finite basis B such that leakage-resilient computation with leakage probability $p$ can be realized using circuits over the basis B.
We obtain similar positive results for the stronger notion of leakage tolerance, where the input is not encoded, but the leakage from the entire computation can be simulated given random $p'$-leakage of input values alone, for any $p<p'<1$. Finally, we complement this by a negative result, showing that for every basis B there is some leakage probability $p<1$ such that for any $p'<1$, leakage tolerance as above cannot be achieved in general.
We show that our modular approach is also useful for protecting computations against worst-case leakage. In this model, we require that leakage of any $t$ (adversarially chosen) wires reveals nothing about the input. By combining our construction with a previous derandomization technique of Ishai et al. (ICALP 2013), we show that security in this setting can be achieved with $O(t^{1+\varepsilon})$ random bits, for every constant $\varepsilon > 0$. This (near-optimal) bound significantly improves upon previous constructions that required more than $t^{3}$ random bits.
Jung Hee Cheon, Andrey Kim
Homomorphic Encryption for Arithmetic of Approximate Numbers (HEAAN) with its vector packing technique proved its potential in cryptographic applications. In this paper, we propose MHEAAN, a generalization of the HEAAN scheme to the multivariate case. Our design inherits the advantage of the HEAAN scheme that precision losses during evaluation are bounded by the depth of the circuit and exceed those of unencrypted approximate arithmetic, such as floating-point operations, by no more than one bit. In addition, with a multivariate structure of the plaintext space, we suggest a general method of packing multidimensional structures such as matrices and tensors in a single ciphertext. We provide a concrete two-dimensional construction and show the efficiency of our scheme on several matrix operations, such as matrix transposition, matrix multiplication, and matrix inverse.
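As a plaintext-only illustration of the packing idea (no encryption is performed; MHEAAN's contribution is applying such structured slot permutations homomorphically, which this toy does not show), a two-dimensional matrix is flattened into a slot vector and transposition becomes a fixed permutation of slots:

    def pack(matrix):
        # row-major flattening of a matrix into a slot vector
        rows, cols = len(matrix), len(matrix[0])
        return [matrix[i][j] for i in range(rows) for j in range(cols)], (rows, cols)

    def transpose_slots(slots, shape):
        # slot r*cols + c of the input moves to slot c*rows + r of the output
        rows, cols = shape
        out = [0] * len(slots)
        for r in range(rows):
            for c in range(cols):
                out[c * rows + r] = slots[r * cols + c]
        return out, (cols, rows)

    slots, shape = pack([[1, 2, 3], [4, 5, 6]])
    print(transpose_slots(slots, shape))   # ([1, 4, 2, 5, 3, 6], (3, 2))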
Yuyu Wang, Takahiro Matsuda, Goichiro Hanaoka, Keisuke Tanaka
In this work, we show negative results on the tamper-resilience of a wide class of cryptographic primitives with uniqueness properties, such as unique signatures, verifiable random functions, signatures with unique keys, injective one-way functions, and encryption schemes with a property we call unique-message property. Concretely, we prove that for these primitives, it is impossible to derive their (even extremely weak) tamper-resilience from any common assumption, via black-box reductions. Our proofs exploit the simulatable attack paradigm proposed by Wichs (ITCS 13), and the tampering model we treat is the plain model, where public parameters and public/secret key pairs are potentially tampered with.
Tim van de Kamp, Andreas Peter, Maarten H. Everts, Willem Jonker
We propose the first multi-client predicate-only encryption scheme capable of efficiently testing the equality of two encrypted vectors. Our construction can be used for the privacy-preserving monitoring of relations among multiple clients. Since both the clients' data and the predicates are encrypted, our system is suitable for situations in which this information is considered sensitive. We prove our construction plaintext and predicate private in the generic bilinear group model using random oracles, and secure under chosen-plaintext attack with unbounded corruptions under the symmetric external Diffie-Hellman assumption. Additionally, we provide a proof-of-concept implementation that is capable of evaluating one thousand predicates defined over the inputs of ten clients in less than a minute on commodity hardware.