International Association for Cryptologic Research

IACR News Central

6 June 2018
ePrint Report Optimizing Authenticated Garbling for Faster Secure Two-Party Computation Jonathan Katz, Samuel Ranellucci, Mike Rosulek, Xiao Wang
Wang et al. (CCS 2017) recently proposed a protocol for maliciously secure two-party computation that represents the state-of-the-art with regard to concrete efficiency in both the single-execution and amortized settings, with or without preprocessing. We show here several optimizations of their protocol that result in a significant improvement in the overall communication and running time. Specifically:

- We show how to make the “authenticated garbling” at the heart of their protocol compatible with the half-gate optimization of Zahur et al. (Eurocrypt 2015). We also show how to avoid sending an information-theoretic MAC for each garbled row. These two optimizations give up to a 2.6x improvement in communication, and make the communication of the online phase essentially equivalent to that of state-of-the-art semi-honest secure computation.

- We show various optimizations to their protocol for generating AND triples that, overall, result in a 1.5x improvement in the communication and a 2x improvement in the computation for that step.
ePrint Report Fast Distributed RSA Key Generation for Semi-Honest and Malicious Adversaries Tore Kasper Frederiksen, Yehuda Lindell, Valery Osheter, Benny Pinkas
We present two new, highly efficient, protocols for securely generating a distributed RSA key pair in the two-party setting. One protocol is semi-honestly secure and the other is maliciously secure. Both are constant-round, do not rely on any specific number-theoretic assumptions, and improve significantly over the state of the art by allowing a slight leakage (which we show does not affect security).

For our maliciously secure protocol our most significant improvement comes from executing most of the protocol in a ``strong'' semi-honest manner and then doing a single, light, zero-knowledge argument of correct execution. We introduce other significant improvements as well. One such improvement comes from showing that certain, limited leakage does not compromise security, which allows us to use lightweight subprotocols. Another improvement, which may be of independent interest, comes from our approach for multiplying two large integers using OT, in the malicious setting, without being susceptible to a selective-failure attack. Finally, we implement our malicious protocol and show that its performance is an order of magnitude better than the best previous protocol, which provided only semi-honest security.
ePrint Report Simpler Constructions of Asymmetric Primitives from Obfuscation Pooya Farshim, Georg Fuchsbauer, Alain Passelègue
We revisit constructions of asymmetric primitives from obfuscation and give simpler alternatives. We consider public-key encryption, (hierarchical) identity-based encryption ((H)IBE), and predicate encryption. Obfuscation has already been shown to imply PKE by Sahai and Waters (STOC'14) and full-fledged functional encryption by Garg et al. (FOCS'13). We simplify all these constructions and reduce the necessary assumptions on the class of circuits that the obfuscator needs to support. Our PKE scheme relies on just a PRG and does not need any puncturing. Our IBE and bounded HIBE schemes convert natural key-delegation mechanisms from (recursive) applications of puncturable PRFs to IBE and HIBE schemes. Our most technical contribution is an unbounded HIBE, which uses (public-coin) differing-inputs obfuscation for circuits and whose proof relies on a recent pebbling-based hybrid argument by Fuchsbauer et al. (ASIACRYPT'14). All our constructions are anonymous, support arbitrary inputs, and have compact keys and ciphertexts.
The generalized birthday problem (GBP) was introduced by Wagner in 2002 and has been shown to have many applications in cryptanalysis. In its typical variant, we are given access to a function $H:\{0,1\}^{\ell} \rightarrow \{0,1\}^n$ (whose specification depends on the underlying problem) and an integer $K>0$. The goal is to find $K$ distinct inputs to $H$ (denoted by $\{x_i\}_{i=1}^{K}$) such that $\sum_{i=1}^{K}H(x_i) = 0$. Wagner's K-tree algorithm solves the problem in time and memory complexities of about $N^{1/(\lfloor \log K \rfloor + 1)}$ (where $N= 2^n$). Two important open problems raised by Wagner were (1) devise efficient time-memory tradeoffs for GBP, and (2) reduce the complexity of the K-tree algorithm for $K$ which is not a power of 2.
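To make the K-tree algorithm concrete, the following is a minimal Python sketch of Wagner's algorithm for $K=4$, using XOR as the group operation and SHA-256 truncated to $n$ bits as $H$; the parameter choices are illustrative toys, not taken from the paper. With lists of size $2^{n/3}$, one solution is expected on average.

```python
import hashlib
from collections import defaultdict

n = 24                                  # toy output size
mask_low = (1 << (n // 3)) - 1

def H(x: int) -> int:
    digest = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") & ((1 << n) - 1)

def merge(L1, L2, mask):
    """Join pairs whose H-values XOR to zero on the masked bits."""
    table = defaultdict(list)
    for xs, h in L1:
        table[h & mask].append((xs, h))
    out = []
    for xs2, h2 in L2:
        for xs1, h1 in table[h2 & mask]:
            out.append((xs1 + xs2, h1 ^ h2))
    return out

size = 1 << (n // 3)                    # list size 2^(n/3)
lists = [[((x,), H(x)) for x in range(i * size, (i + 1) * size)]
         for i in range(4)]
L12 = merge(lists[0], lists[1], mask_low)   # zero the low n/3 bits
L34 = merge(lists[2], lists[3], mask_low)
solutions = [xs for xs, h in merge(L12, L34, (1 << n) - 1) if h == 0]
print(solutions[:1])    # 4-tuples with H(x1) ^ H(x2) ^ H(x3) ^ H(x4) = 0
```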

In this paper, we make progress in both directions. First, we improve the best known GBP time-memory tradeoff curve (published independently by Nikoli\'{c} and Sasaki and by Biryukov and Khovratovich) for all $K \geq 8$ from $T^2M^{\lfloor \log K \rfloor -1} = N$ to $T^{\lceil (\log K)/2 \rceil + 1 }M^{\lfloor (\log K)/2 \rfloor} = N$, applicable for a large range of parameters. For example, for $K = 8$ we improve the best previous tradeoff from $T^2M^2 = N$ to $T^3M = N$ and for $K = 32$ the improvement is from $T^2M^4 = N$ to $T^4M^2 = N$.

Next, we consider values of $K$ which are not powers of 2 and show that in many cases even more efficient time-memory tradeoff curves can be obtained. Most interestingly, for $K \in \{6,7,14,15\}$ we present algorithms with the same time complexities as the K-tree algorithm, but with significantly reduced memory complexities. In particular, for $K=6$ the K-tree algorithm achieves $T=M=N^{1/3}$, whereas we obtain $T=N^{1/3}$ and $M=N^{1/6}$. For $K=14$, Wagner's algorithm achieves $T=M=N^{1/4}$, while we obtain $T=N^{1/4}$ and $M=N^{1/8}$. This gives the first significant improvement over the K-tree algorithm for small $K$.

Finally, we optimize our techniques for several concrete GBP instances and show how to solve some of them with improved time and memory complexities compared to the state-of-the-art.

Our results are obtained using a framework that combines several algorithmic techniques such as variants of the Schroeppel-Shamir algorithm for solving knapsack problems (devised in works by Howgrave-Graham and Joux and by Becker, Coron and Joux) and dissection algorithms (published by Dinur, Dunkelman, Keller and Shamir). It then builds on these techniques to develop new GBP algorithms.
ePrint Report Correctness and Fairness of Tendermint-core Blockchains Yackolley Amoussou-Guenou, Antonella Del Pozzo, Maria Potop-Butucaru, Sara Tucci-Piergiovanni
Tendermint-core blockchains offer strong consistency (no forks) in an open system relying on two ingredients: (i) a set of validators that generate blocks via a variant of the Practical Byzantine Fault Tolerance (PBFT) consensus protocol, and (ii) a rewarding mechanism that dynamically selects nodes to be validators for the next block via proof-of-stake, a non-energy-consuming alternative to proof-of-work. It is well known that in such open systems the main threat is the tragedy of the commons, which may cause the system to collapse if the rewarding mechanism is not adequate. At a minimum, the rewarding mechanism must be fair, i.e., distribute the rewards in proportion to the merit of participants. The contribution of this paper is two-fold. First, we provide a formal description of the Tendermint-core protocol and we prove that in eventually synchronous systems it solves (i) a variant of one-shot consensus for the validation of a single block and (ii) a variant of the repeated consensus problem for multiple blocks. Our second contribution relates to the fairness of the Tendermint rewarding mechanism. We prove that Tendermint rewarding is not fair; however, a small twist in the protocol makes it eventually fair. Additionally, we prove that there exists an (eventually) fair rewarding mechanism in repeated-consensus-based blockchains if and only if the system is (eventually) synchronous.
5 June 2018
ePrint Report Improved Lightweight Implementations of CAESAR Authenticated Ciphers Farnoud Farahmand, William Diehl, Abubakr Abdulgadir, Jens-Peter Kaps, Kris Gaj
Authenticated ciphers offer potential benefits to resource-constrained devices in the Internet of Things (IoT). The CAESAR competition seeks optimal authenticated ciphers based on several criteria, including performance in resource-constrained (i.e., low-area, low-power, and low-energy) hardware. Although the competition specified a “lightweight” use case for Round 3, most hardware submissions to Round 3 were not lightweight implementations, in that they employed architectures optimized for best throughput-to-area (TP/A) ratio, and used the Pre- and PostProcessor modules from the CAESAR Hardware (HW) Development Package designed for high-speed applications. In this research, we provide true lightweight implementations of selected ciphers (ACORN, NORX, CLOC-AES, SILC-AES, and SILC-LED). These implementations use an improved version of the CAESAR HW Development Package designed for lightweight applications, and are fully compliant with the CAESAR HW Application Programming Interface for Authenticated Ciphers. Our lightweight implementations achieve an average of 55% reduction in area and 40% reduction in power compared to their corresponding high-speed versions. Although the average energy per bit of lightweight ciphers increases by a factor of 3.6, the lightweight version of NORX actually uses 47% less energy per bit than its corresponding high-speed implementation.
ePrint Report Round-Optimal Secure Multiparty Computation with Honest Majority Prabhanjan Ananth, Arka Rai Choudhuri, Aarushi Goel, Abhishek Jain
We study the exact round complexity of secure multiparty computation (MPC) in the honest majority setting. We construct the following round-optimal protocols in the plain model:

- Security with abort: Assuming public-key encryption (PKE), we construct two-round MPC that achieves security with abort against any $t<\frac{n}{2}$ malicious corruptions. Previously, the best known two-round protocols only achieved weaker security notions against smaller corruption thresholds.

- Guaranteed output delivery: Assuming PKE, we construct two-round MPC over broadcast and private channels that achieves security with guaranteed output delivery against any $t<\frac{n}{2}$ (semi-honest) fail-stop corruptions. This result overcomes the lower bounds of Gennaro et al. [CRYPTO'02] and Gordon et al. [CRYPTO'15]. With the additional assumption of Zaps, we also construct three-round MPC in the broadcast model that achieves security with guaranteed output delivery against any $t<\frac{n}{2}$ malicious corruptions. Previously, such a protocol was only known in the common reference string model, based on specific learning assumptions.

All of our results are obtained via general compilers that may be of independent interest.
ePrint Report Limits of Practical Sublinear Secure Computation Elette Boyle, Yuval Ishai, Antigoni Polychroniadou
Secure computations on big data call for protocols that have sublinear communication complexity in the input length. While fully homomorphic encryption (FHE) provides a general solution to the problem, employing it on a large scale is currently quite far from being practical. This is also the case for secure computation tasks that reduce to weaker forms of FHE such as ''somewhat homomorphic encryption'' or single-server private information retrieval (PIR).

Quite unexpectedly, Aggarwal, Mishra, and Pinkas (Eurocrypt 2004), Brickell and Shmatikov (Asiacrypt 2005), and shelat and Venkitasubramaniam (Asiacrypt 2015) have shown that in several natural instances of secure computation on big data, there are practical sublinear communication protocols that only require sublinear local computation and minimize the use of expensive public-key operations. This raises the question of whether similar protocols exist for other natural problems.

In this paper we put forward a framework for separating ''practical'' sublinear protocols from ''impractical'' ones, and establish a methodology for identifying ''provably hard'' big-data problems that do not admit practical protocols. This is akin to the use of NP-completeness to separate hard algorithmic problems from easy ones. We show that while the previous protocols of Aggarwal et al., Brickell and Shmatikov, and shelat and Venkitasubramaniam are indeed classified as being ''practical'' in this framework, slight variations of the problems they solve and other natural computational problems on big data are hard.

Our negative results are established by showing that the problem at hand is ''PIR-hard'' in the sense that any secure protocol for the problem implies PIR on a large database. This imposes a barrier on the local computational cost of secure protocols for the problem. We also identify a new natural relaxation of PIR that we call semi-PIR, which is useful for establishing ''intermediate hardness'' of several practically motivated secure computation tasks. We show that semi-PIR implies slightly sublinear PIR via an adaptive black-box reduction and that ruling out a stronger black-box reduction would imply a major breakthrough in complexity theory. We also establish information-theoretic separations between semi-PIR and PIR, showing that some problems that we prove to be semi-PIR-hard are not PIR-hard.
ePrint Report Fast Large-Scale Honest-Majority MPC for Malicious Adversaries Koji Chida, Daniel Genkin, Koki Hamada, Dai Ikarashi, Ryo Kikuchi, Yehuda Lindell, Ariel Nof
Protocols for secure multiparty computation enable a set of parties to compute a function of their inputs without revealing anything but the output. The security properties of the protocol must be preserved in the presence of adversarial behavior. The two classic adversary models considered are semi-honest (where the adversary follows the protocol specification but tries to learn more than allowed by examining the protocol transcript) and malicious (where the adversary may follow any arbitrary attack strategy). Protocols for semi-honest adversaries are often far more efficient, but in many cases the security guarantees are not strong enough.

In this paper, we present new protocols for securely computing any functionality represented by an arithmetic circuit. We utilize a new method for verifying that the adversary does not cheat, which yields a cost of just twice that of semi-honest protocols in some settings. Our protocols are information-theoretically secure in the presence of malicious adversaries, assuming an honest majority. We present protocol variants for small and large fields, and show how to efficiently instantiate them based on replicated secret sharing and Shamir sharing. As with previous works in this area aiming to achieve high efficiency, our protocol is secure with abort and does not achieve fairness, meaning that the adversary may receive output while the honest parties do not.

We implemented our protocol and ran experiments for different numbers of parties, different network configurations and different circuit depths. Our protocol significantly outperforms the previous best for this setting (Lindell and Nof, CCS 2017); for a large number of parties, our implementation runs almost an order of magnitude faster than theirs.
ePrint Report Dissection-BKW Andre Esser, Felix Heuer, Robert Kübler, Alexander May, Christian Sohler
The slightly subexponential algorithm of Blum, Kalai and Wasserman (BKW) provides a basis for assessing LPN/LWE security. However, its huge memory consumption strongly limits its practical applicability, thereby preventing precise security estimates for cryptographic LPN/LWE instantiations.

We provide the first time-memory trade-offs for the BKW algorithm. For instance, we show how to solve LPN in dimension $k$ in time $2^{\frac 43\frac k{\log k}}$ and memory $2^{\frac 23\frac k{\log k}}$. Using the Dissection technique due to Dinur et al. (Crypto ’12) and a novel, slight generalization thereof, we obtain fine-grained trade-offs for any available (subexponential) memory while the running time remains subexponential.
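As a quick, purely illustrative sanity check of these exponents, one can tabulate the stated trade-off against the commonly quoted $2^{k/\log k}$ cost of plain BKW for a few LPN dimensions:

```python
# Exponents from the trade-off stated above: T = 2^{(4/3) k/log k},
# M = 2^{(2/3) k/log k}; plain BKW uses roughly 2^{k/log k} for both.
import math

for k in (256, 512, 1024):              # LPN dimension (illustrative)
    e = k / math.log2(k)
    print(f"k={k:4d}: log2(T) = {4 * e / 3:6.1f}, "
          f"log2(M) = {2 * e / 3:6.1f}, plain BKW = {e:6.1f}")
```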

Reducing the memory consumption of BKW below its running time also allows us to propose QBKW, a first quantum version of the BKW algorithm.
In this paper, we study the problem of finding small solutions $(x,y,z)=(x_0, y_0, z_0)$ of the equation $Bx-Ay=z$. The framework is derived from Wiener's small private exponent attack on RSA and May-Ritzenhofen's investigation of the implicit factorization problem, both of which can be generalized to solve the above equation. We show that these two methods, together with Coppersmith's method, are equivalent for solving $Bx-Ay=z$ in the general case. Then, based on Coppersmith's method, we present two improvements for solving $Bx-Ay=z$ in some special cases. The first improvement addresses the case where either $\gcd(x_0,z_0,A)$ or $\gcd(y_0,z_0,B)$ is large enough. As applications of this improvement, we propose some new cryptanalysis of RSA, such as new results on the generalized implicit factorization problem, attacks with known bits of the prime factor, and so on. The motivation for these applications comes from the oracle-based complexity of factorization problems. The second improvement assumes that the value of $C \equiv z_0\ (\mathrm{mod}\ x_0)$ is known. We present two attacks on RSA as its applications: one focuses on the case with known bits of the private exponent together with the prime factor, and the other considers the case with a small difference between the two prime factors. Our new attacks on RSA improve the previous corresponding results, and the correctness of the approach is verified by experiments.
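Since the framework above starts from Wiener's small private exponent attack, a minimal sketch of that classical attack may help fix ideas. This is the textbook continued-fraction attack, not the paper's generalized method, and the toy key below is a hypothetical example.

```python
from math import isqrt

def convergents(num, den):
    """Yield the continued-fraction convergents (p, q) of num/den."""
    p0, q0, p1, q1 = 0, 1, 1, 0
    while den:
        a, (num, den) = num // den, (den, num % den)
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        yield p1, q1

def wiener(e, N):
    """Recover d from (N, e) when d is small (roughly d < N^(1/4)/3)."""
    for k, d in convergents(e, N):       # k/d approximates e/phi(N)
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k           # candidate phi(N)
        s = N - phi + 1                  # candidate p + q
        disc = s * s - 4 * N             # (p - q)^2 if phi is correct
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d
    return None

p, q = 10007, 10009                      # toy primes, illustration only
N, phi = p * q, (p - 1) * (q - 1)
d = 23                                   # deliberately small private exponent
e = pow(d, -1, phi)                      # modular inverse (Python 3.8+)
assert wiener(e, N) == d
```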
ePrint Report On the Security Properties of e-Voting Bulletin Boards Aggelos Kiayias, Annabell Kuldmaa, Helger Lipmaa, Janno Siim, Thomas Zacharias
In state-of-the-art e-voting systems, a bulletin board (BB) is a critical component for preserving election integrity and availability. Although it is common in the literature to assume that a BB is a centralized entity that is trusted, in the recent works of Culnane and Schneider [CSF 2014] and Chondros et al. [ICDCS 2016], the importance of removing BB as a single point of failure has been extensively discussed. Motivated by these works, we introduce a framework for the formal security analysis of the BB functionality modeled as a distributed system that comprises (i) a subsystem of item collection (IC) peers that receive and store the submitted items, and (ii) a subsystem of audit board (AB) peers, where the IC subsystem publishes the stored items for verification. Our framework treats a secure BB as a robust public transaction ledger, defined by Garay et al. [Eurocrypt 2015], that additionally supports the generation of receipts for successful posting. Namely, in our model, a secure BB system achieves Persistence and Liveness that are confirmable, in the sense that any malicious behavior of the BB system can be detected via a verification mechanism.

As a case study for our framework, we analyze the BB system of Culnane and Schneider and point out its weaknesses. We demonstrate an attack revealing that the said system does not achieve Confirmable Liveness, even in the case where the adversary is computationally bounded and covert, i.e., it may deviate from the protocol specification but does not want to be detected. In addition, we show that special care should be taken in the choice of the underlying cryptographic primitives, so that the claimed fault-tolerance threshold of N/3 out of N corrupted IC peers is preserved.

Next, based on our analysis, we introduce a new BB protocol that upgrades the [CSF 2014] protocol. We prove that it tolerates fewer than N/3 out of N corrupted IC peers, both for Persistence and Confirmable Liveness, against a computationally bounded general Byzantine adversary. Furthermore, Persistence can also be Confirmable, if we distribute the AB (originally a centralized entity in [CSF 2014]) as a replicated service with an honest majority.
4 June 2018
ePrint Report Private Circuits: A Modular Approach Prabhanjan Ananth, Yuval Ishai, Amit Sahai
We consider the problem of protecting general computations against constant-rate random leakage. That is, the computation is performed by a randomized boolean circuit that maps a randomly encoded input to a randomly encoded output, such that even if the value of every wire is independently leaked with some constant probability $p > 0$, the leakage reveals essentially nothing about the input.
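As a toy illustration of this leakage model (not of the paper's construction), the following sketch evaluates a small Boolean circuit and leaks every wire value independently with probability $p$; all names and the example circuit are illustrative.

```python
import random

def leaky_eval(gates, inputs, p=0.25, rng=random):
    """Evaluate a circuit; each wire leaks independently with probability p.

    gates is a list of (out_wire, op, in_wire1, in_wire2) tuples."""
    wires = dict(inputs)
    for name, op, a, b in gates:
        wires[name] = wires[a] & wires[b] if op == "AND" else wires[a] ^ wires[b]
    leaked = {w: v for w, v in wires.items() if rng.random() < p}
    return wires["out"], leaked          # leaked is what the adversary sees

circuit = [("t", "AND", "x", "y"), ("out", "XOR", "t", "z")]
print(leaky_eval(circuit, {"x": 1, "y": 0, "z": 1}))
```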

In this work we provide a conceptually simple, modular approach for solving the above problem, providing a simpler and self-contained alternative to previous constructions of Ajtai (STOC 2011) and Andrychowicz et al. (Eurocrypt 2016). We also obtain several extensions and generalizations of this result. In particular, we show that for every leakage probability $p<1$, there is a finite basis B such that leakage-resilient computation with leakage probability $p$ can be realized using circuits over the basis B.

We obtain similar positive results for the stronger notion of leakage tolerance, where the input is not encoded, but the leakage from the entire computation can be simulated given random $p'$-leakage of input values alone, for any $p<p'<1$. Finally, we complement this by a negative result, showing that for every basis B there is some leakage probability $p<1$ such that for any $p'<1$, leakage tolerance as above cannot be achieved in general.

We show that our modular approach is also useful for protecting computations against worst-case leakage. In this model, we require that leakage of any $t$ (adversarially chosen) wires reveals nothing about the input. By combining our construction with a previous derandomization technique of Ishai et al. (ICALP 2013), we show that security in this setting can be achieved with $O(t^{1+\varepsilon})$ random bits, for every constant $\varepsilon > 0$. This (near-optimal) bound significantly improves upon previous constructions that required more than $t^{3}$ random bits.
Homomorphic Encryption for Arithmetic of Approximate Numbers (HEAAN) with its vector packing technique has proved its potential in cryptographic applications. In this paper, we propose MHEAAN, a generalization of the HEAAN scheme to the multivariate case. Our design inherits the advantages of the HEAAN scheme: the precision losses during evaluation are limited by the depth of the circuit, and they exceed those of unencrypted approximate arithmetic, such as floating-point operations, by no more than one bit. In addition, with a multivariate structure of the plaintext space, we suggest a general method for packing multidimensional structures, such as matrices and tensors, into a single ciphertext. We provide a concrete two-dimensional construction and show the efficiency of our scheme on several matrix operations, such as matrix transposition, matrix multiplication, and inversion.
ePrint Report Impossibility on Tamper-Resilient Cryptography with Uniqueness Properties Yuyu Wang, Takahiro Matsuda, Goichiro Hanaoka, Keisuke Tanaka
In this work, we show negative results on the tamper-resilience of a wide class of cryptographic primitives with uniqueness properties, such as unique signatures, verifiable random functions, signatures with unique keys, injective one-way functions, and encryption schemes with a property we call unique-message property. Concretely, we prove that for these primitives, it is impossible to derive their (even extremely weak) tamper-resilience from any common assumption, via black-box reductions. Our proofs exploit the simulatable attack paradigm proposed by Wichs (ITCS ’13), and the tampering model we treat is the plain model, where public parameters and public/secret key pairs are potentially tampered with.
ePrint Report Multi-client Predicate-only Encryption for Conjunctive Equality Tests Tim van de Kamp, Andreas Peter, Maarten H. Everts, Willem Jonker
We propose the first multi-client predicate-only encryption scheme capable of efficiently testing the equality of two encrypted vectors. Our construction can be used for the privacy-preserving monitoring of relations among multiple clients. Since both the clients’ data and the predicates are encrypted, our system is suitable for situations in which this information is considered sensitive. We prove our construction plaintext and predicate private in the generic bilinear group model using random oracles, and secure under chosen-plaintext attack with unbounded corruptions under the symmetric external Diffie-Hellman assumption. Additionally, we provide a proof-of-concept implementation that is capable of evaluating one thousand predicates defined over the inputs of ten clients in less than a minute on commodity hardware.
ePrint Report maskVerif: a formal tool for analyzing software and hardware masked implementations Gilles Barthe, Sonia Belaïd, Pierre-Alain Fouque, Benjamin Grégoire
Masking is a popular countermeasure for protecting both hardware and software implementations against differential power analysis. A main strength of software masking is that its security guarantees can be captured formally through well-established models. The existence of such models, and their relation with probabilistic information flow studied in formal methods, has been instrumental to the emergence of fully automated methods for analyzing masked implementations. In particular, state-of-the-art tools such as maskVerif (Barthe et al., EUROCRYPT 2015) have been used successfully for analyzing masked implementations at high orders. In contrast, security models for hardware implementations have remained somewhat less developed, and no prior verification tool has accommodated hardware-specific sources of vulnerabilities such as glitches. Recently, Bloem et al. formalized the security of masked hardware implementations against glitches and gave a method based on SAT solvers for verifying security automatically. However, their method only works for small functionalities and low orders.

In this paper, we extend the maskVerif tool (Barthe et al., EUROCRYPT 2015) with a unified framework to efficiently and formally verify both software and hardware implementations. In this process, we introduce a simple but expressive intermediate language. Our representation requires each instruction to be instrumented with leakage expressions that may depend on the expressions arising in the instruction and on previous computation. Despite its simplicity, our intermediate representation covers a broad range of models from the literature; moreover, it is also easily amenable to efficient formal verification.

Our results significantly improve over prior work, both in terms of coverage and efficiency. In particular, we demonstrate that our tool is able to analyze examples from Bloem et al. (EUROCRYPT 2018) much faster and at high orders.
ePrint Report Blockchain Abstract Data Type Emmanuelle Anceaume, Antonella Del Pozzo, Romaric Ludinard, Maria Potop-Butucaru, Sara Tucci-Piergiovanni
The presented work continues the line of recent distributed computing community efforts dedicated to the theoretical aspects of blockchains. This paper is the first to specify blockchains as a composition of abstract data types, together with a hierarchy of consistency criteria that formally characterizes the histories admissible for distributed programs that use them. Our work is based on an original oracle-based construction that, along with new consistency definitions, captures the eventual convergence process in blockchain systems. The paper also presents some results on the implementability of the presented abstractions and a mapping of representative existing blockchains, from both academia and industry, into our framework.
ePrint Report Sub-Linear Lattice-Based Zero-Knowledge Arguments for Arithmetic Circuits Carsten Baum, Jonathan Bootle, Andrea Cerulli, Rafael del Pino, Jens Groth, Vadim Lyubashevsky
We propose the first zero-knowledge argument with sub-linear communication complexity for arithmetic circuit satisfiability over a prime $p$ whose security is based on the hardness of the short integer solution (SIS) problem. For a circuit with $N$ gates, the communication complexity of our protocol is $O\left(\sqrt{N\lambda\log^3{N}}\right)$, where $\lambda$ is the security parameter. A key component of our construction is a surprisingly simple zero-knowledge proof for pre-images of linear relations whose amortized communication complexity depends only logarithmically on the number of relations being proved. This latter protocol is a substantial improvement, both theoretically and in practice, over the previous results in this line of research of Damg{\aa}rd et al. (CRYPTO 2012), Baum et al. (CRYPTO 2016), Cramer et al. (EUROCRYPT 2017) and del Pino and Lyubashevsky (CRYPTO 2017), and we believe it to be of independent interest.
ePrint Report Proofs of Work from Worst-Case Assumptions Marshall Ball, Alon Rosen, Manuel Sabin, Prashant Nalini Vasudevan
We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of Ball et al. (STOC '17), which presents PoWs based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. These, however, were presented as a `proof of concept' of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs could not be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold, and further show that the PoWs' structure can be exploited in ways previous heuristic PoWs could not.

This creates full PoWs that are provably hard from worst-case assumptions (previously, PoWs were based either on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS '16)) while still retaining significant structure to enable extra properties. Namely, we show that the PoWs of Ball et al. (STOC '17) can be modified to have much faster verification time, can be proved in zero knowledge, and more.

Finally, as our PoWs are based on evaluating low-degree polynomials originating from average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. For our context, this implies the required non-amortizability of our PoWs.
Often the simplest way of specifying game-based cryptographic definitions is apparently barred because the adversary would have some trivial win. Disallowing or invalidating these wins can lead to complex or unconvincing definitions. We suggest a generic way around this difficulty. We call it indistinguishability up to correctness, or IND|C. Given games G and H and a correctness condition C we define an advantage measure $\mathsf{Adv}^{\mathrm{indc}}_{G,H,C}$ wherein G/H distinguishing attacks are effaced to the extent that they are inevitable due to C. We formalize this in the language of oracle silencing, an alternative to exclusion-style and penalty-style definitions. We apply our ideas to a domain where game-based definitions have been cumbersome: stateful authenticated-encryption (sAE). We rework existing sAE notions and encompass new ones, like replay-free AE permitting a specified degree of out-of-order message delivery.
ePrint Report Non-Interactive Zero-Knowledge Proofs for Composite Statements Shashank Agrawal, Chaya Ganesh, Payman Mohassel
The two most common ways to design non-interactive zero-knowledge (NIZK) proofs are based on Sigma protocols and QAP-based SNARKs. The former is highly efficient for proving algebraic statements while the latter is superior for arithmetic representations.

Motivated by applications such as privacy-preserving credentials and privacy-preserving audits in cryptocurrencies, we study the design of NIZKs for composite statements that compose algebraic and arithmetic statements in arbitrary ways. Specifically, we provide a framework for proving statements that consist of ANDs, ORs and function compositions of a mix of algebraic and arithmetic components. This allows us to explore the full spectrum of trade-offs between proof size, prover cost, and CRS size/generation cost. This leads to proofs for statements of the form: knowledge of $x$ such that $SHA(g^x)=y$ for some public $y$ where the prover's work is 500 times fewer exponentiations compared to a QAP-based SNARK at the cost of increasing the proof size to 2404 group and field elements. In application to anonymous credentials, our techniques result in 8 times fewer exponentiations for the prover at the cost of increasing the proof size to 298 elements.
Format-preserving encryption (FPE) produces ciphertexts which have the same format as the plaintexts. Building secure FPE is very challenging, and recent attacks (Bellare, Hoang, Tessaro, CCS '16; Durak and Vaudenay, CRYPTO '17) have highlighted security deficiencies in the recent NIST SP800-38G standard. This has left the question open of whether practical schemes with high security exist.

In this paper, we continue the investigation of attacks against FPE schemes. Our first contribution is a set of new known-plaintext message-recovery attacks against Feistel-based FPEs (such as FF1/FF3 from the NIST SP800-38G standard) which improve upon previous work in terms of amortized complexity in multi-target scenarios, where multiple ciphertexts are to be decrypted. Our attacks are also qualitatively better in that they make no assumptions on the correlation between the targets to be decrypted and the known plaintexts. We also surface a new vulnerability, specific to FF3, in how it handles odd-length domains, which leads to a substantial speedup in our attacks.

We also show the first attacks against non-Feistel based FPEs. Specifically, we show a strong message-recovery attack for FNR, a construction proposed by Cisco which replaces two rounds in the Feistel construction with a pairwise-independent permutation, following the paradigm by Naor and Reingold (JoC, '99). We also provide a strong ciphertext-only attack against a variant of the DTP construction by Brightwell and Smith, which is deployed by Protegrity within commercial applications.

All of our attacks show that existing constructions fall short of achieving desirable security levels. For Feistel and the FNR schemes, our attacks become feasible on small domains, e.g., 8 bits, for suggested round numbers. Our attack against the DTP construction is practical even for large domains. We provide proof-of-concept implementations of our attacks that verify our theoretical findings.
ePrint Report Limits on the Power of Garbling Techniques for Public-Key Encryption Sanjam Garg, Mohammad Hajiabadi, Mohammad Mahmoody, Ameer Mohammed
Understanding whether public-key encryption can be based on one-way functions is a fundamental open problem in cryptography. The seminal work of Impagliazzo and Rudich [STOC'89] shows that black-box constructions of public-key encryption from one-way functions are impossible. However, this impossibility result leaves open the possibility of using non-black-box techniques for achieving this goal.

One of the most powerful classes of non-black-box techniques, which can be based on one-way functions (OWFs) alone, is Yao's garbled circuit technique [FOCS'86]. As for the non-black-box power of this technique, the recent work of Döttling and Garg [CRYPTO'17] shows that the use of garbling allows us to circumvent known black-box barriers in the context of identity-based encryption.

We prove that garbling of circuits that have OWF (or even random oracle) gates in them is insufficient for obtaining public-key encryption. Additionally, we show that this model also captures (non-interactive) zero-knowledge proofs for relations with OWF gates. This indicates that currently known OWF-based non-black-box techniques are perhaps insufficient for realizing public-key encryption.
We introduce a new class of irreducible pentanomials over $\mathbb{F}_2$ of the form $f(x) = x^{2b+c} + x^{b+c} + x^b + x^c + 1$. Let $m=2b+c$ and use $f$ to define the finite field extension of degree $m$. We give the exact number of operations required for computing the reduction modulo $f$. We also provide a multiplier based on the Karatsuba algorithm in $\mathbb{F}_2[x]$ combined with our reduction process. We give the total cost of the multiplier and find that the bit-parallel multiplier defined by this new class of polynomials has improved XOR and AND complexity. Our multiplier has a comparable time delay when compared to other multipliers based on the Karatsuba algorithm.
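To fix notation, here is a toy Python model of field multiplication modulo a pentanomial from this class, instantiated with $b=2$, $c=1$ (so $m=5$ and $f(x)=x^5+x^3+x^2+x+1$, which is irreducible). It illustrates the reduction step generically and does not attempt to reproduce the paper's optimized operation counts.

```python
def clmul(a, b):
    """Carry-less multiplication in GF(2)[x]; bit i is the x^i coefficient."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def reduce_mod_f(v, b, c):
    """Reduce v modulo f(x) = x^(2b+c) + x^(b+c) + x^b + x^c + 1."""
    m = 2 * b + c
    for i in range(v.bit_length() - 1, m - 1, -1):
        if v >> i & 1:
            # x^i == x^(i-m) * (x^(b+c) + x^b + x^c + 1)  (mod f)
            v ^= ((1 << i) ^ (1 << i - m + b + c) ^ (1 << i - m + b)
                  ^ (1 << i - m + c) ^ (1 << i - m))
    return v

def gf_mul(x, y, b, c):
    return reduce_mod_f(clmul(x, y), b, c)

# Sanity check: with b=2, c=1 the field is GF(2^5), so every nonzero
# element a satisfies a^(2^5 - 1) = a^31 = 1.
b, c, a, t = 2, 1, 0b10011, 1
for _ in range(31):
    t = gf_mul(t, a, b, c)
assert t == 1
```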
We aim to understand the best possible security of a (bidirectional) cryptographic channel against an adversary that may arbitrarily and repeatedly learn the secret state of either communicating party. We give a formal security definition and a proven-secure construction. This construction provides better security against state compromise than the Signal Double Ratchet Algorithm or any other known channel construction. To facilitate this we define and construct new forms of public-key encryption and digital signatures that update their keys over time.
ePrint Report On the Complexity of Compressing Obfuscation Gilad Asharov, Naomi Ephraim, Ilan Komargodski, Rafael Pass
Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct.

In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple to describe primitives that is known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (PKC 2016) and Bitansky et al. (TCC 2016) by allowing for a broader regime of parameters.

We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation.

First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives.

Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions.

Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.
ePrint Report Structured Encryption and Leakage Suppression Seny Kamara, Tarik Moataz, Olga Ohrimenko
Structured encryption (STE) schemes encrypt data structures in such a way that they can be privately queried. One aspect of STE that is still poorly understood is its leakage. In this work, we describe a general framework to design STE schemes that do not leak the query/search pattern (i.e., if and when a query was previously made).

Our framework consists of two compilers. The first can be used to make any dynamic STE scheme rebuildable in the sense that the encrypted structures it produces can be rebuilt efficiently using only O(1) client storage. The second transforms any rebuildable scheme that leaks the query/search pattern into a new scheme that does not. Our second compiler is a generalization of Goldreich and Ostrovsky's square root oblivious RAM (ORAM) solution but does not make use of black-box ORAM simulation. We show that our framework produces STE schemes with query complexity that is asymptotically better than ORAM simulation in certain (natural) settings and comparable to special-purpose oblivious data structures.

We use our framework to design a new STE scheme that is ``almost'' zero-leakage in the sense that it reveals an, intuitively speaking, small amount of information. We also show how the scheme can be used to achieve zero-leakage queries when one can tolerate a probabilistic guarantee of correctness. This construction results from applying our compilers to a new STE scheme we design called the piggyback scheme. This scheme is a general-purpose STE construction (in the sense that it can encrypt any data structure) that leaks the search/query pattern but hides the response length on non-repeating queries.
Rank estimation is an important tool for side-channel evaluation laboratories. It allows estimating the remaining security after an attack has been performed, quantified as the time complexity and the memory consumption required to brute-force the key given the leakages as probability distributions over $d$ subkeys (usually key bytes). These estimations are particularly useful when the key is not reachable by exhaustive search. We propose a new method for rank estimation, called PRank, that is conceptually simple and more time- and memory-efficient than previous proposals. Our main idea is to bound each subkey distribution by a Pareto-like function: since these are analytical functions, we can then estimate the rank by a closed formula. We evaluated the performance of PRank through extensive simulations based on two real SCA data corpora, and compared it to the currently best histogram-based algorithm. We show that PRank gives a good rank estimation with much improved time and memory efficiency, especially for large ranks: for ranks between $2^{80}$ and $2^{100}$ the PRank estimation is at most 10 bits above the histogram rank, and for ranks beyond $2^{100}$ the PRank estimation is only 4 bits above the histogram rank---yet it runs faster and uses negligible memory. PRank thus gives a new and interesting method for solving the rank estimation problem, based on a reduction to analytical functions and the computation of one closed formula, hence using negligible time and space.
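For readers unfamiliar with the problem, below is a toy version of histogram-based rank estimation, the baseline PRank is compared against (PRank itself replaces the histograms with Pareto-like analytical bounds evaluated in closed form); all parameters are illustrative.

```python
import numpy as np

def histogram_rank(log_probs, correct_key, n_bins=4096):
    """Bound the rank of correct_key among all combined keys.

    log_probs[i][j] is the log-probability of candidate j for subkey i;
    correct_key[i] is the correct candidate index for subkey i."""
    d = len(log_probs)
    lo = sum(float(a.min()) for a in log_probs)
    hi = sum(float(a.max()) for a in log_probs)
    width = (hi - lo) / n_bins
    # Convolving the per-subkey histograms adds log-probabilities,
    # i.e. multiplies probabilities across the d independent subkeys.
    total = np.array([1.0])
    for a in log_probs:
        idx = np.floor((a - a.min()) / width).astype(int)
        total = np.convolve(total, np.bincount(idx))
    t = sum(float(log_probs[i][correct_key[i]]) for i in range(d))
    t_bin = int((t - lo) / width)
    # Each floor() loses less than one bin, so pad by d bins on each side.
    lower = int(total[min(t_bin + d + 1, len(total)):].sum())
    upper = int(total[max(t_bin - d, 0):].sum())
    return lower, upper

rng = np.random.default_rng(1)
lp = [np.log(rng.dirichlet(np.ones(256))) for _ in range(4)]
key = [int(a.argmax()) for a in lp]   # pretend the top guess is correct
print(histogram_rank(lp, key))        # small (lower, upper) rank bounds
```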
ePrint Report Adaptive Garbled RAM from Laconic Oblivious Transfer Sanjam Garg, Rafail Ostrovsky, Akshayaram Srinivasan
We give a construction of an adaptive garbled RAM scheme. In the adaptive setting, a client first garbles a ``large'' persistent database which is stored on a server. Next, the client can provide multiple adaptively and adversarially chosen RAM garbled programs that execute and modify the stored database arbitrarily. The garbled database and the garbled program should reveal nothing more than the running time and the output of the computation. Furthermore, the sizes of the garbled database and the garbled program grow only linearly in the size of the database and the running time of the executed program, respectively (up to polylogarithmic factors). The security of our construction is based on the assumption that laconic oblivious transfer (Cho et al., CRYPTO 2017) exists. Previously, such adaptive garbled RAM constructions were only known using indistinguishability obfuscation or in the random oracle model. As an additional application, we note that this work yields the first constant-round secure computation protocol for persistent RAM programs in the malicious setting from standard assumptions. Prior works did not support persistence in the malicious setting.
